How Can Companies Navigate the New Age of AI-Powered Cyber Threats?

Category

🤖
Technology
Transcript
00:09So, hello to all, delighted to be with you at VivaTech, for what is going to be, I must tell
00:18you, probably one of the most interesting panels that you'll hear at VivaTech, with two important buzzwords that
00:24are actually both critical: cyber and AI.
00:28A few words of introduction about myself, and then I'll have a chance to present the distinguished experts for this
00:34panel.
00:35So, my name is Guy-Philippe Goldstein. I'm a lecturer at the School of Economic Warfare here in Paris, and also
00:40an advisor to PwC and to a few of these events.
00:43And I myself, I went to cyber via a novel that I wrote 18 years ago that imagined a struggle
00:50between China and the United States for the control of Taiwan taking place in cyberspace, a novel that got a
00:56bit of traction at MIT.
00:58Here in France, in some military corners, and also a lot in Israel, judging by a few essays that
01:03were written about the acceleration of cyber in Israel.
01:06Yet, I must tell you that though that scenario was supposed to happen about now, never would I have imagined
01:14the importance, the strength, the development of AI as it has happened over the last ten years and with an
01:22interesting acceleration over the last two, three years.
01:25So, you may know all the figures, and if we look at all the studies from the last 12 months,
01:31about 40 to 50% of the Western population, in France, in the US, is using some sort of
01:38AI, generative AI.
01:40We know also that there may be, evidently, a lot of productivity gains.
01:45The National Bureau of Economic Research in the US pointed to about 25% of productivity gains from AI, so it's
01:52a bit debated.
01:53And then, we may have very interesting effects from this generative AI.
01:59Yet, yet, as with any tools, it can be used by the good guys, and it can be used by
02:06the bad guys.
02:06And this is what we are going to discuss here, and we know already that we've seen issues, vulnerabilities with
02:14the so-called large language models,
02:17that are sometimes vulnerable to attacks or to hallucination; that someone can very easily clone your voice or your face
02:27using AI;
02:28that maybe even some of the future smart hackers may be AI hackers. That's on the bad guys' side, but
02:36it can somehow also be used by the good guys.
02:40So, this is basically what is in the balance here, and to discuss that, I have a great chance to
02:47have a fantastic panel with a few of the best experts in each of these fields.
02:53So, I will present them, and then we'll start on the conversation.
02:57So, with me, Olivier, Olivier Notet. Olivier, you began your career back in 1988 in IT security management,
03:06and joined the BNP Paribas Group in 2004, and since 2015, you have been the Group Chief Information Security Officer,
03:17responsible for managing cyber risk and implementing high-end security programs.
03:21So, you're quite the expert here, and you're also involved in various European and French working groups and committees on
03:29cybersecurity.
03:30So, thanks a lot, Olivier, for being with us.
03:33Also with us, Emmanuelle Saliba.
03:37You are the Chief Investigative Officer at GetReal, where you lead efforts to detect and expose digital deception.
03:46We'll go back to that.
03:48You mainly focus on AI-powered attacks and threats.
03:52Right there.
03:53You are also, if I may, a pioneer in visual verification.
03:57You have spent nearly a decade as a broadcast journalist in the United States,
04:01most recently at ABC News and NBC News, reporting and leading news teams at the intersection of breaking news and
04:10investigation.
04:10Fantastic.
04:12Probably lots of things we'll be able to discuss.
04:14Also with us, we have Toby Lewis.
04:18You are the Global Head of Threat Analysis at Darktrace, one of the pioneers in cyber
04:27and AI.
04:28And thanks a lot for being with us.
04:29You've spent, actually, 15 years in the UK government, including at the UK National Cyber Security Centre as the Deputy Technical
04:38Director for Incident Management.
04:40So, a wealth of knowledge and skills on many different angles around cyber and AI.
04:47Thanks a lot for being with us.
04:48And, last but not least, we have with us Benjamin Netter.
04:53Benjamin Netter, you are the founder of Riot, which is one of the leading platforms for protecting employees
04:59against cyber attacks.
05:01You started the company in 2020 at Y Combinator, congratulations, in Mountain View.
05:07And now, Riot protects more than 1.5 million employees worldwide.
05:12So, you see, thanks a lot for being with us: a very complementary set of skills in this panel for
05:17this huge issue which is emerging.
05:21And to have a first go at what's the actual threat landscape, I'll start with you.
05:28Olivier, as the Group CISO of BNP Paribas, a key global financial institution.
05:36So, from your standpoint, how is the threat landscape changing because of AI?
05:44In particular, do you believe AI could bring an evolution or a revolution in that threat landscape?
05:58That's a good question. I think we are only at the beginning of the changing world.
06:02So, well, as a banking industry, we have to protect the client and to protect also our system.
06:08So, on both of them, we'll see that the world is changing; AI has changed it.
06:13Well, I am not able to speak Japanese, but it is now easy for me to send a phishing email
06:20in Japanese.
06:21It really changed the way things are moving all over the world. So, that's the point.
06:27And on the internal side, we will see more and more zero-day attacks against patches to be deployed on our
06:37systems.
06:37And I'm pretty sure that all the attackers are using AI and LLM models to be able to decipher all
06:45the information available from the provider
06:48about a patch to be deployed, in order to write the zero-day attack.
06:53So, it has changed also the way we have to protect and the way we have to accelerate things.
06:59But I'm pretty sure things will change dramatically in the coming months or years.
07:05And we'll see more and more things changing with AI. We are only at the beginning.
07:09We are only at the beginning. Things are going to change dramatically.
07:13And maybe for the audience, can you just clarify, because I think it's an important point,
07:17what is zero-day and why discovering zero-days is a critical element in this evolving threat landscape?
07:25I will take the example of the COVID. As soon as you've got the vaccine, you can get the vaccine
07:31and then you're protected.
07:33If you don't have the vaccine, then you're not protected. Zero-day is exactly like that.
07:36It's a failure in the software. If you do not know that there is an issue in the software, then
07:41the bad guys could use it
07:43and then be able to penetrate your systems.
07:46Okay, so really new-to-the-world type of vulnerabilities that somehow with AI we could discover.
07:54It has existed for a long time, but AI really, really accelerates the things that are available,
07:59and then it spreads all over the world in two days, where before it was, I don't know, two years maybe.
08:06So all of a sudden, new-to-the-world type of vulnerabilities could be discovered with AI.
08:10You said that we are on the cusp of major changes and in itself this is an important signal that people
08:16should be listening to.
08:18This is what you see from the standpoint of BNP Paribas from your group.
08:23If you look around at either your clients or your suppliers, do you sense that they also understand that or
08:32that the message is still not there?
08:35Well, for me the message is still not there. And it's the whole world that needs to
08:41change.
08:42We are not working alone. We need providers, we need clients, we need partners.
08:47So it's end-to-end that we need to be protected. As soon as you've got an issue somewhere, the
08:53whole system could be down.
08:54Even if you protected your data really well, if you provide it to
09:00your partners
09:01and your partners get hacked, then your data is available on the dark web. So at the end
09:07of the day, you lose.
09:08Right. So if your partners don't understand what you just said, it's an issue for you, right?
09:14Indeed.
09:14Okay, so great statement and great signal. And this is coming again from the head of a very important financial
09:22institution.
09:24So this is one type, let's say, of attack. You mentioned the ability to change the way we
09:31talk;
09:32there may also be the ability to change the way we look. I will turn now to Emmanuelle.
09:41So from your field of expertise, which is this research on deepfakes, a bit like Olivier, do you also see
09:50an increase in cyber threats
09:53with this emergence of new AI systems? And if so, how much bigger is it than, say, last
10:00year or two years ago?
10:02Yeah, things have dramatically evolved just even in the last few months. At GetReal,
10:08we are tracking all of these types of deception and deepfake attacks.
10:13I think what Olivier was talking about, which is really interesting is, you know, you have to protect,
10:18you have to think as an enterprise about protecting your consumer, not just when they're using your platform,
10:24but when they're on other platforms, you know, there could be what we're seeing a lot of scams of CEOs
10:31and CISOs
10:32being posted on Instagram or on Facebook trying to sell you the latest, you know, crypto scheme.
10:38And those are things that enterprises, you know, need to think of now because their consumers are falling for it.
10:43Then on the other end, I think in terms of enterprises, there's traditionally, they've protected IT infrastructure,
10:51but now their front doors are wide open. The HR process, the way that you're hiring employees,
10:57which is mostly remote, almost entirely remote. I had 12 interviews for ABC. All of them were remote.
11:03Wow.
11:03And we're seeing that type of attack, which is using someone's likeness through someone's voice to get inside a company.
11:13So the threats are changing and they're much more accessible than they used to be.
11:18This is a very important point. I want to stress what you said on HR.
11:22Do you have maybe a few examples on fake people, you know, knocking on the screen and not being who
11:32they are,
11:32and thus risking compromising the company?
11:36Yes, we actually have many examples.
11:39And, you know, previous to this, I was a journalist at ABC where I covered this type of threat for
11:42two years.
11:44And the threat was sort of silent, as in we knew it was happening inside companies,
11:48but no one wanted to talk about it because that's just bad business.
11:52Now we know it's happening. We're talking to CISOs all the time who have discovered North Koreans sitting on their
11:57network for two years.
11:58And there are massive North Korean attacks right now, where they are infiltrating companies posing as either existing employees
12:07or having stolen previous identities.
12:10And then there's also the imposter problem.
12:13There's imposter hiring that's happening, or entirely fake candidates, and someone will show up and they'll be totally different
12:19from the person they interviewed.
12:21They'll just like cross the room and say, wait a second, that's not at all the guy I hired.
12:25So there's an entire revision that needs to happen in terms of your HR.
12:29Every enterprise needs to look at their HR onboarding.
12:32And that's what we've been researching and working on at GetReal.
12:36And just on that point, which I think is extremely important, especially if there are people here who work in
12:42HR or who do recruit people.
12:44How do you assess from the exchange you may have, and by the way, this is a question also perhaps
12:49to Olivier and the rest of the crowd.
12:51How do you assess, you know, the strength of the HR function today with regards to that peculiar threat?
13:00It has to be rethought entirely, and it needs to be multi-pronged.
13:04One, they need tools, real detection tools, to, you know, check in-stream whether the person is an authentic
13:12person, an avatar, or whether they're wearing a deepfake.
13:15There was a famous example like three months ago from a security company in Poland.
13:21They posted the video online and this developer candidate made it all the way to the CTO.
13:26They had been flagged as sort of strange and so they were able to record the interaction.
13:32But there needs to be awareness and I think there isn't enough awareness inside of enterprises of what risks the
13:40HR teams run by onboarding certain candidates.
13:44Tools, training, and then in the way Olivier was talking about, it's much easier now for people to use AI
13:50to write a perfect email in English.
13:53Whereas before you could sort of like detect, right, if the English was a little weird, that probably it wasn't
13:59so genuine.
13:59That's not the case anymore.
14:01Interesting. And one last point, I think, you know, these attacks using fake faces, do they have also a psychological
14:12component?
14:13I mean, yeah, deep fakes, are there additional threats which is not only, you know, getting someone through the door,
14:20someone who shouldn't be there,
14:21but also maybe some ways to affect psychology with deep fakes and this type of attack vectors?
14:28Yes. I mean, that's not so much on the HR and onboarding side; that's much more real-time
14:37impersonation.
14:37Right.
14:38So that threat is, we see it at the financial scam level, like impersonating an executive, impersonating a CEO.
14:46One, there's a lot of material. Generally, if you're a CISO or a CEO or an executive, there's a lot
14:52of material online now to easily engineer a deep fake of you and make it believable.
14:57If you're the White House chief of staff, for example. And that's much more visceral.
15:02It's much harder to say no to a transfer, or to opening a bank account, for someone you think is
15:10your boss than, you know, if it was someone you didn't know.
15:14So there is something emotional there. There's a psychological component, which is why we need technology and training to make
15:23sure, you know, we're not moving like millions of dollars somewhere else.
15:26Yeah. And training as you stated. Okay. So we start to get a picture, big changes, HR in danger, again,
15:34psychological effects, very interesting.
15:37Now let's switch into actually networks. Network detection. Toby, your company, Darktrace, has been one of the pioneers in using
15:48AI for network detection.
15:51Do you also, from your standpoint, feel there is a change in terms of threat landscape because of AI?
15:58And by the way, feel free to chime in on some of the additional points which have been made by
16:02Olivier and Emmanuelle.
16:03Yeah, absolutely. I mean, I think one of the things that really chimes with something you
16:08said just there is when you take things beyond a purely text-based communication, there's an element of extra trust
16:15that comes with that.
16:16It becomes more believable. It becomes more realistic. And as a result, our guard drops slightly.
16:22There's been a number of attacks that have taken place in the UK, in the US, in Western Europe of
16:26sort of luxury retailers, you know, so in the UK you can't pick up a newspaper without seeing it at
16:31the moment.
16:31And one of the tactics that's being sort of spoken about is that this is a gang who are British,
16:39who are US.
16:40And that means they can fundamentally pick up the phone to an IT help desk and say, I forgot my
16:46password. Can you give me a password?
16:47And they will. Now suddenly when you add in the angle of deep fakes and now suddenly even somebody with
16:55the thickest, strongest regional accent in somewhere in Eastern Europe, for example, can now give themselves a trustworthy French accent
17:03or a trustworthy English accent.
17:05And now suddenly become much more believable to that IT help desk and so forth.
17:10I think in terms of the email attacks that we've seen, there's always a little bit of a challenge when
17:16it comes to, I suppose, almost AI detection.
17:19Was that created by AI? Was it not?
17:22And in many cases, it's we're in this causation correlation type argument.
17:27And one of the things that we saw is one of the metrics we use, for example, to identify suspicious
17:34emails is changes in linguistic complexity, which is basically the English got better.
17:40And so suddenly what we were seeing is that attacks that were coming in to customers of Darktrace that
17:46we were already starting to see as unusual, as malicious and certainly concerning.
17:52The quality of the English, that linguistic complexity jumped.
17:57Now, what happens about the same sort of time? ChatGPT came out.
18:01Now, OK, as I said, I go back to causation and correlation. Was it really the same thing? That's always
18:05hard to know.
18:07There's a German colleague of mine, and I'm going to steal a phrase of his; he's German,
18:11so you can understand why: you only ever see the sausage, never the sausage maker.
18:17So you can appreciate the context. But from my perspective, that means that when you receive an AI-developed email,
18:24do you ever really know if it was written by AI, or if it was just really well written from the
18:29start?
18:30And so we've almost reached this point as well, which is how much should we try and detect AI versus
18:37it's just generally malicious in the first place?
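The "linguistic complexity" signal Toby describes can be illustrated with a crude sketch (an illustration only, not Darktrace's actual method): score each email with a readability formula such as Flesch reading ease, then flag messages whose score jumps sharply away from a sender's historical baseline. The function names and the threshold here are hypothetical.

```python
import re

def syllables(word: str) -> int:
    """Crude syllable estimate: count vowel groups, discount a trailing silent 'e'."""
    count = len(re.findall(r"[aeiouy]+", word.lower()))
    if word.lower().endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_reading_ease(text: str) -> float:
    """Flesch reading ease: higher means simpler English, lower means more complex."""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(len(words), 1)
    n_syllables = sum(syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (n_syllables / n_words)

def complexity_jump(baseline_emails, new_email, threshold=30.0):
    """Flag an email whose complexity score departs sharply from the sender's baseline."""
    baseline = sum(map(flesch_reading_ease, baseline_emails)) / len(baseline_emails)
    return abs(flesch_reading_ease(new_email) - baseline) > threshold
```

A production system would combine a signal like this with many others (sender history, links, infrastructure), precisely because, as Toby notes, a well-written email and an AI-written email can look identical.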
18:39Right. OK. Well, that's very interesting. And I'm glad to hear that all of a sudden maybe one day I'll
18:44be able to speak English without my goddamn French accent.
18:47But a key point here: you mentioned the use of AI, for example, in the development of ransomware or complex
18:58attacks.
19:00Have you seen, have you detected that we may have a new breed of ransomware attacks, which is using AI,
19:09whether, you know, because it makes things easier or because it actually makes those attacks more sophisticated?
19:17Is it something that is emerging, or is it still, you know, on the radar and we have not seen it?
19:21I think, generally speaking, the use of AI that we see is very much as
19:26a productivity tool, much in the way that we'd all use some form of AI, whether it's Microsoft
19:32Copilot or Google Gemini; whatever it might happen to be, it is some way of speeding up our processes.
19:38So now put yourself in the mindset of an attacker, a ransomware author, and they're trying to speed up some
19:45of their processes.
19:45They're trying to improve some of the quality. They're trying to diversify some of the techniques they're using.
19:52Well, of course, they're going to use AI to start doing some of that and to start to improve some
19:56of the techniques they're using.
19:58Now, when we talk about AI and we talk about cyber attacks, one of the things we start to think
20:03about is the idea that you've got this living and breathing
20:05AI thing on your computer network. I think the reality is we're not quite there yet.
20:11The AI is still living in the attacker space. They're using it as a tool to develop the capability and
20:16then they're throwing the output of AI towards their victims.
20:20And that's probably the most likely case that we're seeing at the moment, rather than some full thinking sentient being
20:29that can live and breathe in your network by itself.
20:31And I don't think we're quite there yet.
20:32OK, so we do not yet have the equivalent of John Carpenter's The Thing as an AI moving into machines,
20:41but we do have some more sophistication, right, from attackers.
20:45OK, so this is, you know, going into the network, but usually before you go into the network, you need
20:51to send the very nice phishing email that helps you get into the network.
20:56And that moves to you, Benjamin, in terms of both phishing attacks, you know, when you get sent the email, the
21:05bad thing that you shouldn't click and you click it, all right,
21:09but also perhaps deepfakes. What do you see in terms of volume, in terms of quality, in terms of
21:17development of all those attacks, which is usually the first door that you need to knock to get into the
21:24networks if you're a hacker?
21:27Yes. So what we're seeing since the launch of ChatGPT is that the number of attacks has grown by more
21:34than a thousand percent.
21:35And as Olivier mentioned, if you're, let's say, a Russian hacker, it's now much easier to send an
21:45email in Japanese than it was before. Right.
21:47And what you're sending really sounds like you're Japanese or French or whoever you're targeting.
21:54So that's a big change. And as Emmanuelle mentioned, also since COVID, I mean, you know, your personal life and
22:03your professional life are now mixing into one.
22:07And it also gives a lot of opportunities to hackers, to hack you on your personal life, to target your
22:15company and opens a lot of new kind of attacks.
22:20What we're seeing is that, you know, if we had this discussion five years ago, we would talk about phishing
22:27emails, but it's way less about emails today and it's changing in format.
22:32And the new formats can be WhatsApp messages, it can be social networks, it can be phone calls as well.
22:43And so the format is changing, and the scale of the attacks is very different, and they are much more
22:51sophisticated than they were before.
22:53And can you speak a little bit about this element of sophistication?
22:57Could I have an attack that could go actually successively on WhatsApp, on social networks and getting phone calls?
23:03Would I be able, beyond just the language thing, to perhaps speak exactly as if I were Benjamin Netter?
23:12Well, yeah, you can do that very easily. I'm everywhere on YouTube, so you can steal my voice.
23:17But I'll give you a simple experiment that you can do at home. Go on ChatGPT, try typing who is
23:25plus your email address.
23:26And ChatGPT is going to, you know, go through the whole internet, going to find everything about you. ChatGPT will
23:34draw a profile of you.
23:36And historically, I mean, if you're targeting someone at BNP, you would go through a lot of manual work, what
23:45we call internally OSINT, open source intelligence.
23:49Trying to find who Olivier is working for, trying to understand the connections, trying to, everything would take you a
23:57lot more time.
23:57And now you can do that in just a few seconds with ChatGPT, right?
24:04A second evolution, more recent, is the MCPs, which get you to...
24:11MCPs? What are those?
24:13Model Context Protocol, which has been, you know, growing over the last few months, and gives you access to external applications directly
24:21from ChatGPT.
24:23So let's imagine now I know who Olivier is working for. I can, you know, the model can go on
24:29YouTube, find the right video, steal the voice, put it into ElevenLabs, recreate the voice, create a phone number
24:37on Twilio, create a WhatsApp account, and send the right message to Olivier.
24:45And this would be done completely automatically in just a few months. We're not exactly there yet, but we're getting
24:51very close to that.
24:52And so we're going to see these kind of very targeted attacks at scale.
24:58So you're saying if I play around with a couple of MCPs, I'm able to pick up all the work
25:08in terms of open source intelligence that used to take days, perhaps sometimes weeks, in just a few hours, maybe
25:16even a few minutes,
25:17enabling me to fashion precisely the one thing to get to our dear poor Olivier.
25:23Exactly. And another key element in an attack is what we call the pretext, which is the story you're going
25:30to tell Olivier, right? Sorry Olivier, by the way.
25:33It could be Emmanuelle too. Emmanuelle, you're next.
25:36But for now, I'll keep that example on Olivier. And the pretext, I mean, as a hacker, you had to
25:43be very creative up to this point.
25:45You know, you're going to try to understand what would resonate with Olivier, right?
25:51But now with LLMs, you don't even need to be creative anymore. They can, from your profile, decide what would
25:57be the ideal pretext to use to hack into your company.
26:02OK. All that is only for ethical defense, evidently, right. So we see a big change. We see many
26:13units in the corporation, especially HR, facing visceral changes.
26:18We see things which are changing also at scale, though we're not at the monster dimension yet. But who knows?
26:26We know that in terms of open source intelligence,
26:29which is required to get to the right attack, the spear-phishing attack, we may be at scale with the right
26:38level of automation quite soon, right?
26:40In the next, say, 12 months. That's about what you said. 12, 24 months.
26:45Exactly. Hackers, you know, it's an industrialized industry now. So it's not the same as it was before when it
26:52was just, you know, script kiddies,
26:55kids trying to hack into companies after school. It's not like this anymore.
26:59OK. So now you have a sense of a threat landscape, thanks to all of you. And we're going to
27:05switch back.
27:06We had this story into the, you know, the black hat world. All right.
27:11We're going to switch back into the good guys and see how we can defend against what does seem, you
27:18know, when I hear the four of you,
27:20like really something emerging that, you know, in some areas could be really transformative in terms of attacks.
27:26Actually, going back to you, Benjamin, now that you've scared the hell out of us.
27:32How would you defend and how do you defend, you know, against all this information, which is so easily found
27:40out?
27:41How do we defend against the use of MCPs?
27:45Well, for quite a long time, we thought that growing the culture of the company would solve the problem, right?
27:55But today attacks, they're evolving too fast, right?
27:59So it's evolving faster than we can actually train.
28:03And so we need to find new solutions.
28:05And my big take on that is that what actually matters is not so much the culture.
28:15The culture doesn't matter.
28:16What matters is your posture and what you share online and what will feed the LLMs.
28:22And so you need to look at what LLMs are finding on you online.
28:29And you need to take, you know, small actions that will make it harder to create that profile from LLMs
28:36easily.
28:37And you just need to get better than your next-door neighbor, right?
28:41Because hackers, they always go for the low-hanging fruits.
28:45Think also a bit about the next-door neighbor. We are a community here.
28:48Exactly. But inform your next-door neighbor as well that he needs to take care of his cyber posture, right?
28:56He needs to... I'll give you a few simple examples.
29:00Hide your family name on LinkedIn.
29:02It's going to make it much harder, you know, for hackers to create sophisticated attacks on your connections.
29:10Take a look at your privacy settings on LinkedIn. Simple example.
29:15And these kinds of, you know, small actions that you can take in just a few minutes will make a
29:20tremendous impact:
29:23you stop receiving those attacks, instead of trying, you know, to adapt to an ever-evolving threat landscape.
29:31Okay, so let's be very mindful of that. And by the way, Olivier, you have a whole, you know, three
29:37panels' worth of advice.
29:39You'll tell us at the end which are the ones that you take.
29:42If I move more, perhaps, into a...
29:45So we're here at the individual level, but each one of us, we are individuals, even if we work in
29:51corporations.
29:51So this is important advice that you gave Benjamin.
29:55At the more corporate level, perhaps at the network level, perhaps at the, you know, governance structure of a company.
30:01You know, and especially you, Toby, as you work at Darktrace, but you also have your past with the UK
30:09government.
30:10You know, when you see this evolving threat landscape, what could be the advice that you could give to the
30:16audience here and also to Olivier?
30:19So, as you sort of pointed out, you know, my background is in the much more traditional approach
30:25of threat intelligence,
30:26where you recruit teams of analysts that go out and hunt what nation state adversaries are up to, what cyber
30:33criminals are up to.
30:34And I think if you go back maybe 10 years ago, the number of groups that existed where you needed
30:41to dedicate this level of resource was probably quite small on the grand scheme of things.
30:45You know, and it was quite complex to generate this capability, so they evolved quite slowly.
30:51So it was a manageable approach to, again, generate this threat intelligence and then really go look for that bad
30:58stuff in your network when it happens.
31:01Over the last sort of few years, whether it's AI, whether it's other capability, we've seen the quality, the sophistication
31:07of all groups improve.
31:10We've seen the barrier to entry drop. So now more individuals are getting involved, more actors are getting involved.
31:15And because of my earlier points, not only are they easy for them to get involved, it's easier for them
31:20to be good at what they're doing.
31:22And so that traditional threat-led approach, ultimately, it's reliant on this sacrificial lamb model, the idea that somebody has
31:31to get hacked first.
31:32Somebody has to get hacked first, and then all the threat intel companies go work out what happened and then
31:38share the information with everyone else so we can learn from somebody else's mistake.
31:43We can't keep doing that, and that's not a pace that we can keep up with.
31:47So kind of the model that we very much took was, okay, we're talking about AI.
31:52There's very much a principle for us which is recognizing that AI isn't just about generative AI.
31:57And we were talking about this before, that AI has existed for a very, very long time.
32:02And to me, really, it's about data science. It's not about funny images on Instagram or social media and everything
32:08else.
32:09And so from our perspective, rather than trying to constantly chase what an attacker looks like,
32:16why don't we go away and learn what the defenders look like?
32:18Why don't we go away and learn what the enterprises are that we're defending?
32:21And rather than looking for something that looks bad, look for something that isn't good.
32:26And what we're effectively trying to say is that the old ways aren't bad,
32:30but we just need to come up with some new ways that keep pace and find other alternative ways of
32:35spotting malicious activity in our networks.
32:36Otherwise, we're never really going to keep up.
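The approach Toby outlines, learning what the defended enterprise normally looks like and flagging "something that isn't good", can be sketched in a few lines of baseline anomaly detection (a toy illustration, not Darktrace's algorithm; all names and the z-score threshold are hypothetical):

```python
from statistics import mean, stdev

def build_baseline(history):
    """Learn per-device 'normal' from past daily outbound byte counts.
    history: {device: [daily_bytes, ...]} observed during a learning period."""
    return {dev: (mean(v), stdev(v)) for dev, v in history.items() if len(v) >= 2}

def flag_anomalies(baseline, today, z_threshold=3.0):
    """Flag devices that depart from their own baseline, rather than
    matching today's traffic against known-bad signatures."""
    alerts = []
    for dev, value in today.items():
        mu, sigma = baseline.get(dev, (None, None))
        if mu is None:
            alerts.append((dev, "no baseline: new or previously silent device"))
        elif sigma > 0 and abs(value - mu) / sigma > z_threshold:
            alerts.append((dev, f"volume {value} vs usual ~{mu:.0f}"))
    return alerts
```

The design point is the one Toby makes: nothing here encodes what an attacker looks like, so the detector does not depend on someone else having been hacked first; the trade-off is that it surfaces "unusual", which still needs human triage.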
32:39Okay. You made a very important point here, and maybe also it's important for the audience.
32:44We talk a lot about AI, but the main bulk of the conversation the last few years has been generative
32:51AI.
32:52All right? Since ChatGPT and whatnot back in the end of 2022.
32:57Yet, as you mentioned, we have had AI, and I remember back at VivaTech already, like 10 years ago,
33:04people were already talking about AI, the first wave of AI.
33:07So maybe you can just specify a little more what can be the use of generative AI,
33:14and of what NIST, a U.S. institute, calls predictive AI, which is kind of a big bag
33:19where all the other sorts of AI and machine learning were put.
33:24You know, how can predictive AI play to protect networks?
33:29And how can generative AI also have a role to play, but perhaps a different one versus predictive AI?
33:35So certainly in terms of the experience that we've had so far,
33:38when we're talking about that creative generative AI component,
33:42one of the real risks that we hear from our customers is,
33:46well, actually, if we've got something in our network that's coming up with,
33:50hey, that's bad, that's bad over there, you have this concern over hallucinations,
33:54this idea that, well, actually, is it going to spot something that actually isn't really bad,
33:58and is that going to waste time for our defenders?
34:01And so we kind of have to move some of those risks away from that mode of thinking.
34:06So where we've looked at generative AI has been much more in the awareness piece.
34:10So what can we do to demonstrate what the threat landscape looks like?
34:13What can we do to summarise huge volumes of data?
34:18One of the things that certainly I've experienced in my career,
34:20in a really mature but also a really large corporate environment,
34:24there's no lack of information.
34:27What there's a lack of is the ability to process it and understand it and make use of it.
34:31So again, that's where things like AI become really useful
34:34in terms of being able to process those huge volumes of data to go,
34:38okay, here's a little bit of a pattern that's starting to emerge here.
34:41And that's where we then start to look at the much broader applications
34:44of artificial intelligence and, more specifically, the broader field of machine learning,
34:49which is recognising that really, at its core, it's data science.
34:52And it's really leaning on that as a narrative to go,
34:54well, okay, with all this data, how do we make sense of it?
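[Editor's illustration] The "pattern starting to emerge" step, reducing a flood of events to the few worth an analyst's attention, can be shown with a minimal aggregation. The event fields and the threshold here are hypothetical:

```python
from collections import Counter

# A stream of authentication events (source_ip, outcome); in practice this
# would be millions of log lines rather than a handful.
events = [
    ("10.0.0.5", "ok"), ("203.0.113.9", "fail"), ("10.0.0.7", "ok"),
    ("203.0.113.9", "fail"), ("203.0.113.9", "fail"), ("10.0.0.5", "ok"),
    ("203.0.113.9", "fail"), ("198.51.100.2", "fail"),
]

# Aggregate: count failed logins per source.
failures = Counter(ip for ip, outcome in events if outcome == "fail")

# Surface only the sources whose failure count stands out, i.e. the
# "little bit of a pattern starting to emerge" rather than every raw line.
THRESHOLD = 3
suspicious = {ip: n for ip, n in failures.items() if n >= THRESHOLD}
print(suspicious)  # {'203.0.113.9': 4}
```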
34:57Okay, so never forget data science.
35:02Be aware of your own data, and use the right algorithms to protect yourself.
35:09Emmanuel, what can we do about deepfakes?
35:13Can we use also that?
35:14Are there other elements, tools, cultural approach that we should be using
35:19to fend off the threat of deepfakes that you mentioned?
35:24So, when I talk about this, I always say it's actually not about deepfakes.
35:31The question is really about trust.
35:34And trust has become the most valuable but also the most vulnerable company asset.
35:40Because you're talking about whether or not your consumers
35:44and your clients can trust your company
35:46and whether your internal conversations that happen on this little screen
35:51that we're all talking on now 24 hours a day,
35:53whether I can trust that communication.
35:56Am I really talking to Tina, our head of research?
35:59Or is that an impersonation?
36:01Or is she wearing a deepfake?
36:03So, those are the kinds of questions that we need to really think of.
36:08And the way we look at it is in two ways.
36:11We look at on-network attacks, which are attacks in the stream, in live communications.
36:16So that's HR onboarding, which involves all of the types of social engineering we talked about,
36:21from entry by phishing email, you know, all these things.
36:24But fraud during HR onboarding is an on-network attack.
36:29Executive impersonation: on-network attack.
36:31And those are the ones you can prepare for.
36:33One, you need to prepare your employees
36:35and they need to be aware of this type of risk.
36:37You need to have technology in stream to actually validate and verify the identity of your employees.
36:45I would not trust conversations happening over Zoom now, right?
36:49Or any sort of platform, if it's not validated.
36:52Because it's not even only unauthorized AI.
36:55It's authorized AI now.
36:57People are sending in their avatars.
36:59Never do that with me, by the way. I will not take that call.
37:03And then the other way is off-network, which you can't prepare for.
37:07Think about how do you destroy someone's reputation
37:11or a brand's reputation in a matter of minutes
37:14by posting something online, by posting something on social media.
37:18And now that has become child's play, right?
37:22Replicating someone's voice, someone's likeness, and posting it online.
37:27You can scream from the rooftop that it's not you
37:29and it doesn't look like you or sound like you, but it sure does.
37:32And that perception has already happened.
37:34So, preparation and then response to that, knowing who to call, how to respond.
37:39And then if it's going to the court of law, how do you actually prove forensically,
37:44prove forensically that that's AI and that's not an authentic image?
37:48So, those are the types of conversations that every single enterprise and individual needs to be having now.
37:54And the good thing is that we can ask Olivier how to respond to that.
38:00Now, you have this full gamut of threats.
38:03What do you see yourself as, right now, the most important threats you need to prepare
38:09and you already prepare your organization with and how?
38:12And perhaps also if you move a bit, you know, at mid-term, you know, three years from now,
38:16how do you see things evolving?
38:18Well, I think we've been talking about our data.
38:22Our data is key. It's personal data.
38:25So, for me, we have been talking a lot about awareness, training of people.
38:30We've done a lot. We will never do enough.
38:33For me, the solution is at the technological level.
38:37We've seen that at the European level, we will have the European digital wallet.
38:43That, for me, is really key against impersonation.
38:46It makes sure that I am Olivier Noté, and that it's really me.
38:49And even on Zoom or whatever, it will still be me, and it will be validated.
38:54It will come in Europe.
38:56In the US, Donald Trump has canceled the initiative.
39:00So, for me, it's quite strange, but that's it.
39:04So, this is one of the more important things that will come to Europe
39:10in order to really secure the way we are trusting digital economy at a human perspective.
39:20After, at more at the industry level, it was highlighted that we've got more and more data,
39:30more and more things to manage.
39:32We are no longer working in isolation.
39:35We've got more and more SaaS things.
39:38We will have agentic AI agents that will do a lot of things in an autonomous way.
39:45So, what we will have to do is more and more automation.
39:49I mean, we won't be able to do it by ourselves.
39:52Humans will not be able to manage to work 24 hours per day managing terabytes of data.
39:59So, we have to make sure that all these things are automated, in all industries.
40:06So, you will need the data, you will need the process, you will need the systems.
40:12And, of course, you've got machine learning and maybe automated processing
40:16that will be at the end of the journey.
40:18Those are things that will come in the next years.
40:21And, for me, it's really important for the whole industry to be transformed,
40:26to have fewer people working on processes.
40:31People are there to define the processes, but not to execute them.
40:36That should be done by automated processes.
40:40Got it.
40:41A very important point here, especially when you mentioned the importance of automation
40:45as an undertaking.
40:47How do you, at a global level, how do you push further education for those changes?
40:53Or, actually, within the confines of BNP Paribas, which is already a big, large institution?
40:59Well, it's a culture thing.
41:01I mean, you can push on it, but everyone's recognized that they cannot do it by themselves.
41:07I mean, we've got so many, it's terabytes and terabytes of data.
41:11We've got more and more things to manage every day.
41:14AI will add more things.
41:16We've got more and more processing, more and more digital processing for the economy.
41:25So, it's coming naturally.
41:28So, of course, we have to push.
41:29We have to deploy systems.
41:31We have to be quite military in order to really define the same process all over the world.
41:36You can imagine that at BNP Paribas, it's not possible to have one process in the US, one process in
41:42France, one in Belgium.
41:43It's not sustainable.
41:45So, we are pushing for that.
41:46But then, at the end of the day, it's a natural transformation of the economy, at least in Europe, where
41:53everyone is going at the same pace.
41:55So, very important what you say here, and we're talking about pace.
42:00I would have another set of a few questions for each one of you.
42:03And stay actually on that element, which I think is extremely important, pace, speed.
42:10You are the global CISO of BNP Paribas.
42:15You report to the top guys at BNP Paribas.
42:20How do you convince them that, indeed, they need to move at the right speed and scale?
42:27How, at the management board level, how do you convince that things need to, perhaps, move up, as we hear
42:35that we are faced with a new, emerging, rapidly emerging threat?
42:40Well, the press, the TV is also helping us.
42:45Ten years ago, we were talking about cyber, maybe once a year, for really big, big things like Yahoo or
42:53whatever.
42:54Now, it's quite common.
42:56We've seen also, for France, the things about the kidnapping of the crypto people, because their data was publicly available.
43:06So, everyone is really having that in mind now, that we have to take care of personal data.
43:13Ransomware could arrive, and then you could lose your whole company in 10 seconds.
43:18So, the global awareness of company boards has really risen in the last five years.
43:27Maybe it began after COVID, and really with the rise of ransomware in France targeting hospitals or
43:35whatever.
43:35So, I won't say it was a good thing, but it really helped us in the way people think about the importance
43:43of these threats.
43:45And now, it's our job to really convince that sometimes we have to make some structural changes.
43:52Maybe we have to get rid of processes, get rid of one business, because it's not sustainable, and cyber will
43:59cost too much to manage that.
44:02So, it's a job, but now it's much easier to do compared to five years ago.
44:07Okay, easier, so they will be ready if indeed we're moving at a much faster pace, you know, as we
44:15heard.
44:16Then it's risk management also: for any company, you define the pace, you define the budget, you've got your risk
44:23appetite.
44:23So, it's up to each company to decide how far they want to go.
44:30Fair point.
44:32Toby, one personal nagging question I would have is about the evolution of those emerging capabilities.
44:42There was a report, published for the AI Action Summit in Paris, that stated that the risk
44:51of cyber attacks by a purely autonomous agent, not the copilot-style, human-assisted thing,
44:57could perhaps materialize in the future, but was not there yet.
45:02That was back in early February of this year.
45:06Since then, we had a couple of interesting data points.
45:10For example, the top hacker on the leaderboard of HackerOne, which is a big bug bounty platform.
45:18The top hacker is actually an AI hacker.
45:22We had a research institute that stated that in a Capture the Flag competition, which is a competition for hackers,
45:30the AI team got to be in the top ten, even the top five percent.
45:36And this is all recent, right? This is all the last six months.
45:40So, again, from your standpoint, and from the evolution of the underlying models, because we have more and more powers
45:47and capabilities in models,
45:50what do you see is a bit of a timeline, and how do you see this threat and the pace
45:56of that threat evolving?
45:58So, I think there's probably a few different points to draw from that.
46:00I think the first is, in terms of the scenarios you've presented, those Capture the Flag exercises are very focused
46:09in their effects.
46:09So, actually, you can start to lean on, and you'll hear this around the hall today, is this idea of
46:15agentic AI.
46:16So, very, very targeted, very focused, very single function in terms of how it works.
46:21And I think that leans on the idea that, actually, it's about using the right AI for the right part
46:26of the workload that you're trying to automate.
46:29And so, yeah, I probably agree.
46:30There are probably some elements of using much more tightly bound bits of AI to support that.
46:36The other bit that I come back to is around the idea that, just because the capability is there, it
46:42then becomes a case of, will it get used?
46:45And this is where we can kind of look to the history of cybersecurity and some of the cyber threats
46:49we've seen of,
46:50well, when there have been huge changes in technology, how have the attackers adopted it?
46:55And there's a few things that you can kind of lean on.
46:57One of those are things like WannaCry and NotPetya, which were ransomware or ransomware-esque attacks that targeted organizations,
47:08where the attribution seemed to be nation states, as far as all the data points align.
47:13But in both those cases, these were technologies, these were capabilities that were at risk of spiraling out of
47:21control.
47:21Way more people ended up getting hit by WannaCry and NotPetya than, I suppose, the original attackers
47:28ever intended.
47:28And so, what you saw there was this almost, from a nation state perspective, this apprehension about losing operational control.
47:36And would they still be able to really target the organizations they want to go after?
47:41I think the other data point that follows that is around colonial pipeline.
47:46So this was a large ransomware attack that took place on an oil pipeline on the US East Coast.
47:53And actually, one of the real concerns there was this attacker who got into that network, they became quite famous.
48:00Like, they got quite a bit of a spotlight turned on them.
48:03And that's, if you're a cybercriminal, that's really bad.
48:07Because suddenly, every law enforcement agency on the planet wants to try and arrest you.
48:11Now, let's say we have that first cybercriminal who uses AI for the first time.
48:17They're going to get every agency, every national intelligence body, every law enforcement laser-focused on catching them.
48:24So, I suppose just because a cybercriminal could use it, I think they're going to want to stay under the radar
48:30for just a little bit longer.
48:30So, very interesting. You're saying, A, so there may be capabilities, but the bad guys, as the good guys, are
48:38afraid to lose control of it.
48:40And B, if it's too powerful, well, and if you're in just for the money, evidently, then maybe you may
48:48not want to attract, you know, the spotlights and be that successful, right?
48:52So, very interesting points.
48:55Emmanuel, back to the deepfakes.
49:00You mentioned actually something very important, which is the attack on the reputation.
49:05It's one thing to check who's actually online when going to some videoconference brand we will
49:12not name, and to verify who is calling you.
49:18But then you mentioned what happens when there's a bad message, a fake message that get out, you know, in
49:25the wild.
49:26This is a big issue because companies could be victims of this type of new reputational attacks.
49:34How do you go about that? How can we control the internet? How we can block at speed the distribution
49:42of those bad messages that can impact the corporate value of companies?
49:48I do not think you can control the internet or else someone would have figured that out already.
49:54And stopping the velocity of a falsehood (and I've spent the last decade verifying the authenticity of content)
50:03is extremely difficult, if not nearly impossible.
50:07The best bet, one is you need to understand what you're looking at or what you're hearing.
50:12And a lot of the requests that are coming across our desk now are actually verification of authentic content.
50:19Because in a world where everything can look real, but it is faked, then everything is under question.
50:27So those are the types of videos, images and audio that we're getting to look at.
50:32And we do that both through our platform and our technology, but also forensically and with investigative skills.
50:40So we add a human layer when it's very complex or high stakes.
50:44And we work with governments.
50:46We work with enterprises.
50:49And so the stakes can be pretty high.
50:51I think once it hits online, you need to be able to tell the world that it's not real.
50:58If that's what you're dealing with, you need to be able to say that image of me or that video
51:04of me saying, you know, these racist things or whatever they'll make you say is not real.
51:09And here's how I can prove it.
51:11Because you will need proof.
51:13Your own word is not going to be enough, right?
51:16Especially because you're dealing with perception and you need to be able to respond very quickly.
51:22Because your stock prices are probably tanking.
51:24So you need to do that quickly.
51:28Yeah.
51:28Okay.
51:29So make the proof.
51:30Toby, you want to add something?
51:31Yeah.
51:31I mean, I think there's also an interesting aspect as well, which is we're in this area, as you say,
51:36where we're now cynical of what we see online.
51:38And we start to go, well, is that really a deep fake or AI or not?
51:42And maybe there's an interesting scenario.
51:44And we were considering this over the course of the last year when there were a lot of elections around
51:47the world, which is what if there really was a politician that came out and made a racist statement or
51:53did something they didn't want to?
51:55Well, they could just blame AI and go, it wasn't me.
51:58It was somebody deep faked me.
51:59When in reality, it was true.
52:01That's why we're doing a lot of authenticity checks, because of what's called the liar's dividend, which I know you know.
52:08But it's essentially that in this world where everything can be faked, then everything can be questioned, right?
52:13Nothing is real anymore.
52:14And that's where lies the real danger.
52:17Yeah.
52:18So, yeah, Olivier.
52:20At a company level, when you're a big company, you can structure a big communications team aligned with the CISO.
52:28For smaller companies, it will be much more complicated in the future to be able to react, because the communication
52:34team is not working 24 hours per day.
52:37So that will be a game changer also in the way you need to be structured.
52:41And that could impact, for example, your suppliers.
52:43Yes.
52:44Yeah.
52:44And what we're seeing is sometimes not even the release, the off-network attack.
52:49We're actually seeing now ransomware attacks where they've created an extremely believable fake; often it's really awful.
52:58It's a nude.
52:59And victims pay because you do not want that released, because people won't believe that it's not real.
53:05And so those are the types of new types of attacks that we're seeing right now.
53:09Sextortion.
53:10Yeah.
53:11Sextortion, but also videos or audio of you saying something that you didn't say.
53:15Yeah.
53:16Benjamin, you know, hey, first feel free to react to this landscape, which does need some solutions.
53:23And also, as you deal in part with a phishing simulation and whatnot, how can we make people, again, more
53:32aware of the danger?
53:34I say that because usually in phishing attacks, you know, we know that there is an incompressible part of a
53:39population, of a corporate population.
53:40It's still going to click.
53:42Even if you over-train, there will still be that one, two, three percent.
53:46It's like this.
53:47How can we deal with that?
53:48Perhaps can, how can AI perhaps deal with that?
53:52I'm not sure there's a solution, unfortunately.
53:56But of course, I mean, the state of, you know, phishing and cyber attacks today, we're seeing it at Riot
54:04because we're sending 10 million emails a year.
54:08And, you know, even with the most basic email, phishing emails that we send to employees, 20% of the
54:18employees, they still click it.
54:19So I'm a bit pessimistic about the capacity of employees to actually, you know, spot the phishing emails, spot the
54:28fake phone calls, the fake WhatsApp messages, the fake audio messages.
54:34I'm a bit pessimistic, which makes me very optimistic about my business.
54:39That's great.
54:40So do you have any, you know, I wouldn't say silver bullet, but any thoughts of how we could change
54:48that?
54:49Could AI, education with AI, could help that?
54:54I mean, AI will definitely help because you can use the same tools that hackers are using.
55:02And, you know, we've been talking about OSINT and what LLMs can find online about you.
55:08So obviously, AI can do the same.
55:11And I mean, we're on the defensive side.
55:13We'll be able to provide a solution that will do that for you.
55:16And, you know, spot where you have weaknesses and vulnerabilities and you're making life easier for hackers.
55:23And then it's more about how do we communicate that back to the employee.
55:27And I mean, if you have a small team, if you're 50 people, 100 people, I don't know.
55:32It's the communication is easy.
55:35When you have the company the size of BNP, it's a lot more work.
55:42And I think we have to reverse the problem.
55:46I mean, we know that people will continue to click even with a good awareness.
55:51Even 1% is too much.
55:54So we have to reverse things and think, okay, even if the guy is clicking, what do we have to do
56:01in order to limit the damage?
56:03So we are talking about sandboxing.
56:05We are talking about segregation of flows.
56:09So this is all the things at the company level.
56:12We have to think about how to contain it, because this problem will remain forever if we are not
56:19tackling it at the technological level.
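[Editor's illustration] The "assume the click happens" posture Olivier describes, sandboxing plus segregation of flows, can be sketched as a simple routing decision. The proxy address, domain list, and function names here are hypothetical:

```python
from urllib.parse import urlparse

# Hypothetical names: the sandbox endpoint and trusted domains are invented.
SANDBOX_PROXY = "http://sandbox.internal:8080"   # isolated detonation environment
TRUSTED_DOMAINS = {"intranet.example.com", "mail.example.com"}

def route_request(url: str) -> str:
    """Decide where a clicked link should be fetched from.

    Trusted internal destinations go direct; everything else is opened
    inside the sandbox, so a malicious page never reaches the user's host,
    even when the user does click.
    """
    host = urlparse(url).hostname or ""
    if host in TRUSTED_DOMAINS:
        return "direct"
    return SANDBOX_PROXY

print(route_request("https://intranet.example.com/hr"))   # direct
print(route_request("https://evil.example.net/payload"))  # routed to the sandbox
```

The design choice mirrors the panel's point: rather than training users never to click, the network assumes the click and limits the blast radius.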
56:23Okay, so now for the last 30 seconds, we'll do a final round.
56:27Will the good guys or the bad guys win?
56:30And what gives you hope?
56:32Well, Toby, let's start with you.
56:33Yeah, I was just about to pick up on the point you made about being pessimistic because I'm not.
56:37You know, I'm optimistic about the capability that we have through AI and machine learning to remove the human elements
56:43in terms of our detection.
56:44Actually, can we use technology to our advantage rather than hoping that somebody won't click it?
56:49Maybe we just assume they will and make it safe for when they do.
56:53Let's hope. Emmanuel?
56:55I'm always an optimist and I think replicating humans is actually quite complex.
57:00So I believe that we will win. Humans will win.
57:03Let's hope so. Benjamin?
57:04I don't think there's a winner. Everyone's losing.
57:07Okay.
57:08I'm still an optimist, but...
57:10I'm not sure about that.
57:12I think we will lose some battles and get stronger and stronger because we will learn from this losing battle.
57:19Let's learn quickly. Thanks a lot. Fantastic panel. Bravo.
57:26And let's keep hope.
57:27Thank you.