Critical Update: Fixing the Human Element in Cybersecurity

Transcript
00:00Hello everybody, thank you very much for coming. Looking forward to this panel.
00:04As a reporter in cyber, I often think about cyberattacks as a kind of three-step process.
00:10The first is when you discover what happened. The second is what's the impact, and that's normally what leads to
00:15a news article.
00:17And then the third one is how did it happen, and that is always the most interesting.
00:21That is always the bit that as a society we learn the most from as well.
00:25And it's the bit that takes months and months, sometimes years, to actually find out how these things happen.
00:30But quite often it is that human element. So we're here to talk about that and how you can reduce
00:34the risk in your organisation.
00:37And we've got a great panel for you. We have Guy-Philippe Goldstein, who's an advisor to PwC and to the VC
00:43fund X1 Capital on cybersecurity,
00:45a lecturer on cyber defence at the School of Economic Warfare, and an essayist and novelist.
00:50Maybe you give us a little wave so that we know who you are. Thank you very much.
00:53Catherine Jestin, who's executive VP digital at Airbus.
00:58Catherine's main focus is to foster digital innovation across Airbus' industrial ecosystem and its products and services portfolio,
01:05accelerating data analytics, artificial intelligence, and digital security for the company.
01:11Niklas Hellemann, who's co-founder, CEO, and managing director of SoSafe.
01:16Dr. Niklas is a psychologist and expert in social engineering, as well as co-founder and CEO at SoSafe,
01:23a cybersecurity awareness provider and one of the most successful scale-ups in Germany.
01:28And Zena Zucker, who's CTO of Cybersecurity at Everden.
01:32Zena is a cybersecurity executive with over two decades at the forefront of cybersecurity and technology innovation,
01:40and currently leads the development of solutions to protect data and secure digital transformation.
01:45And I'm Joe Tidy, as we heard earlier.
01:48So we've got 45 minutes to discuss this topic, and I think we'll have about 10 minutes at the end
01:53for questions.
01:54So if you have any questions you'd like to ask the panel, then go on to the app and use
01:58Slido,
01:58and I'll get them up on this screen here.
02:00And also, if you're brave enough, put your hand up and we'll send a microphone over to you as well.
02:04So let's start off, then, with the premise of this, which is that 95% of cybersecurity breaches are due to
02:10human error.
02:10But can I just ask, how do they happen?
02:14Why does that statistic hold true, Zena?
02:17Yeah, so, you know, in cybersecurity, we always say your security is as strong as your weakest link.
02:22And unfortunately, the human factor today is the weakest link for a simple reason that you can have the best
02:28technology in place.
02:29If it is not configured properly or used properly by a user, you will have a data breach, or you
02:35are a candidate, I would say, for a data breach.
02:37So the first type of cyber attack that exploits human error is the one exploiting misconfiguration.
02:43And we see it every day.
02:44You were talking about, you know, in the news, how we hear every day of an attack.
02:47When we come to the anatomy of the attack, you see it.
02:50More often than not, it is because of a misconfiguration.
02:53So Capital One, as an example, you know, when you expose, what, 100 million users' data, it was due to
03:01a misconfiguration of a WAF, as simple as that.
03:05So we have this type of, unfortunately, human error that can lead to a data leak.
03:10But also, we have what I like to call the skill-based human error: sometimes you have the
03:15right technology.
03:16It is properly configured.
03:17It is doing its job.
03:18You have an alert that has actually reached the security operation center, stating there is potential intrusion.
03:25Yet, unfortunately, the security analyst does not act upon it.
03:29They misqualify the alert.
03:32And then this could lead to an actual data breach.
03:34It has happened often as well, even in publicly, I would say, exposed data breaches.
03:40And this leads us to the problem of a process that is defined by a human and not properly defined,
03:46or, unfortunately, the lack of appropriate skills.
03:49And then we have, I would say, the classical attacks that actually exploit human weaknesses, or what we call the
03:56emotional triggers, and the exploitation of those emotional triggers.
03:59Phishing, voice phishing, spear phishing, whaling, and what have you.
04:04And all of those, if the users are not aware, even aware of what it is... you know, I would expect
04:09everyone knows about phishing now.
04:11It has been here for decades.
04:13But not everyone knows what is voice phishing.
04:16Not everyone knows that they can actually really emulate your voice with the accent, with everything, of your colleague, of
04:22the manager.
04:23So you can actually fall for this trap.
04:25So all of those, if you're not properly training users, it's a problem.
04:30And this leads us to what I also consider the shared responsibility of organizations.
04:35Because also we have the problem of inadequate tools for users.
04:39You know, users need to do their job.
04:40If you don't give them the right tools, they're going to go and find better tools outside.
04:46And those better tools, cloud-based native tools, will lead to a problem.
04:50So we had an issue just recently at a media company where you had a user that was using Trello.
04:55It's not the problem of Trello.
04:56Trello is a very good application.
04:58It's not a corporate tool.
05:01It's an unsanctioned tool.
05:03So not properly configured, not properly monitored.
05:05On a personal account, the data was exposed.
05:08Sensitive data was exposed.
05:09So those are just a few examples of, unfortunately, how a human error could lead to data breaches.
05:15And if I may add, I mean, this was obviously a very comprehensive overview.
05:20If I may add, because you asked why.
05:22Why is the human layer still the main go-to vector for cybercriminals?
05:27Because they're lazy.
05:29The cybercriminals or the workers?
05:31Yeah, no, no, no, no, no, no, no.
05:32Cybercriminals, they're efficient, right?
05:34Because the landscape of attackers has fundamentally changed over the last 30 years, I would say.
05:39You know, 40 years ago, we had, you know, individualistic, you know, nerds, hackers that wanted to improve systems.
05:45Now we are facing an industry.
05:48And this industry, we know that because also these cybercriminals groups are also getting hacked.
05:52And then internal chat leaks come out.
05:55For example, Conti was a prominent group that got leaked.
05:59We know they act like companies, like regular companies.
06:04And so apparently they're also focusing on very efficient operations.
06:07And if you look at the vast, you know, difference in IT landscapes across many organizations,
06:14there's only one factor that is always very, very similar.
06:18And the emotional buttons, they are very, very similar in all of us.
06:22So that's what cybercriminals understood very well probably 5 to 10 to 20 years ago
06:27and optimized on this psychological skill that they are now bringing to the table.
06:32And I think that they have also been very much better today in terms of social engineering.
06:39We were facing very recently at Airbus one case where basically they exploited the fact that
06:46a new executive would soon join the organization.
06:50And they were trying to contact many people across the organization, faking a WhatsApp message.
06:59And some of our people fell into the trap because, and that's what will probably come later in the discussion,
07:06because we are, when we work in digital, we know all these tricks.
07:12We are always exposed to it.
07:16We hear about it.
07:18But for people who are working in finance or who are working in HR or working in commercial,
07:24that's not something they talk about every day.
07:27I have cybersecurity conversations with my team every day,
07:31but I'm probably the only one on the executive committee to have this kind of discussion.
07:36So that's, and we underestimate the lack of awareness of the population.
07:45So we have 135,000 employees at Airbus.
07:53So if you want each and everybody to be aware,
07:56you need to put a lot of effort in teaching and in repeating because you also have new joiners.
08:03So last year we had something like 13,000 new joiners.
08:08So you need to retrain.
08:11So you think you have trained your teams and your employees,
08:16but in fact, because of the turnover, you need to do it again and again
08:21and to remind people about what is happening,
08:24to tell them, okay, what are the latest techniques that have been employed, let's say,
08:30and that have been used against the company.
08:33And we were, I think the previous panel was talking about transparency.
08:38And I think that transparency is playing a very important role.
08:43Typically at Airbus what we do is every time that we are targeted by a specific type of attack,
08:48then we send a message to all the employees saying,
08:52okay, this is what happened, not necessarily saying whether it hurt us or not,
08:57but just to make them aware, okay, this is what the bad guys outside are trying to do.
09:03So, yeah, and this is really one element, and indeed one crux of the matter: the fact that you
09:09have this whole organization
09:11that sometimes does not react as quickly as it should at each and every different level.
09:18For example, it was kind of mentioned, but the fact that you don't update the systems whenever you're prompted to
09:24update the systems:
09:26you could receive the order and it's not done, because you also have trade-off decisions in terms of a production system.
09:32You are not necessarily the CISO guy; you're running your production line, and you're starting to think, yes,
09:38but, you know, if I do that now, I may run into some issues, because it's another thing
09:43pushed by the IT guys.
09:44Maybe it's going to stop the line for a couple of hours.
09:47I don't want that now.
09:48I have a hard deadline and whatnot.
09:50And you postpone decisions that should be happening, okay,
09:53and you see that at the mid-level management level.
09:56And of course, in terms of human error, we could even talk about top management level.
10:00It was mentioned on the previous panel what happened with the company Target in December 2013.
10:04They had all systems flashing red, you know, showing that something bad was happening,
10:09but top management, again, was lenient, okay, and did not react quickly enough.
10:16And we're talking about a risk of crisis, which is a race between the good guys and the bad guys.
10:22If you're not quick enough, or if you think you can push the management decision to do the update
10:28later,
10:29then you're dead.
10:30Yep.
10:31On this one, it's something that we have identified,
10:34and basically what we have decided is that there are six people, in my organization
10:38and in Pascal Andrie's organization (he's the corporate security officer),
10:44who can basically shut down all the information systems without asking permission.
10:51So we have six people who can do that, especially to protect ourselves from ransomware,
10:55because we know that to protect against ransomware, the best thing to do is just to react quickly.
11:02So you don't want to wait for the CEO or even myself.
11:06I'm not one of the six.
11:07So there are six people who can really make that decision and decide that they can shut down the
11:15system.
11:16So there is the head of CERT, the head of SOC, my head of cyber security, the head of networks,
11:22the head of infrastructure, so six people in total.
11:25I was just going to say, and I'm sure that hasn't happened yet.
11:27Nope.
11:27But if it did, would that six, one of those six people that took that really, really drastic decision,
11:35would they be backed up?
11:37I will back them up.
11:38And the ExCom will back them up, because we made a memo that was distributed to all the ExCom members
11:43and approved by the ExCom, that these guys have this autonomy and responsibility
11:49and accountability to protect the company.
11:51And each one of them, or it's like a nuclear submarine where you need two guys to...
11:55I was thinking that with the key, yeah, exactly.
11:56No, just one.
11:57Just one.
11:58Wow.
11:59Goes to show, I suppose, how important this decision is.
12:02You can't just have one person who might not be able to reach their phone.
12:05I don't know what they could be doing in a swimming pool or something.
12:08And that time matters, doesn't it, when you're in an incident.
12:11I wonder if we could break down that 95% that we heard at the very beginning here.
12:1595% of cyber attacks are because of human error.
12:18You've kind of touched upon what I always imagine is the biggest one: phishing emails,
12:24phishing or spear phishing emails.
12:25As you said earlier about these cyber criminals, they will do their research.
12:28They will find out exactly who that person is they need to social engineer.
12:32Of the 95%, how much would you say, as a panel, is phishing emails?
12:37And what are the other things?
12:39So, for example, not patching your systems.
12:42Does that count as human error?
12:45I just saw a stat this morning, coming here.
12:48So, you know, you take it as you want.
12:50But it was about 70% of cyber attacks coming from business email compromise, which includes phishing, but is not
12:57only phishing.
12:58You know, it could be impersonation of, say, the CEO, or just impersonation of someone to get sensitive
13:07information.
13:07But, you know, it seems that the email channel is still, you know, a big one in terms
13:14of getting a cyber exploit in.
13:17So you're saying that 70% of attacks are through your email?
13:21What I've read this morning.
13:22Yeah.
13:22No, I mean, that kind of chimes with what, in my experience, it seems to be.
13:25It gives an order of magnitude.
13:27Yeah.
13:27It's still the number one.
13:28And the question is also, like, what's the basis, right?
13:30It's successful attacks.
13:32And usually we have, if we look at, for example, also the cost that is being created through that, ransomware
13:37and phishing are the most dominant phenomena.
13:40Obviously, we also have disgruntled employees, insider threats.
13:44But usually that's a very small thing.
13:47And it's very hard to protect against that, probably through a good organizational culture.
13:51You would also maybe tackle that.
13:55But phishing is certainly the largest channel.
13:58However, this is currently changing, because we also see, if we look at the Verizon breach report, for example, that
14:05pretexting,
14:06so the use of various channels for attacks, for example to legitimize a CEO fraud email via text message, is
14:12dramatically on the rise.
14:14So, again, these companies that are attacking us, I want to call them companies, they are also innovating.
14:20They are also using different channels.
14:22I mean, and we haven't even talked about AI, right?
14:24Because they are also using that to a great degree.
14:26It's funny that you say that, like, companies, because I've been on air doing live reporting and accidentally called them
14:31companies because that's the way they work.
14:33They're so organized.
14:35They have customer service departments.
14:36They have technical malware writers.
14:40They have the negotiators.
14:41Malware as a service.
14:43Yeah, it's incredible.
14:44So, I'm glad I'm not the only one that makes that mistake.
14:46And there is maybe another vector.
14:48So, what you describe is very true for the corporate world.
14:52But when you go to the industrial world, you still have less sophisticated stuff, like through USB keys, because we
15:02still have a lot of exchanges with the manufacturers of the machines that are done on USB keys or
15:10disks, just to update.
15:13And in some cases, you can also be infected by your suppliers when they come on site to
15:21perform maintenance.
15:23So you also need to put in place some measures to check whatever USB key or device is connected
15:35to your industrial network.
15:37That's one thing.
15:39And the other thing, it's now improving, but all the IoT, the Internet of Things devices, they have not
15:49been really famous for their level of security.
15:55So there is also some communication and some work that we need to do with these companies, usually a lot
16:06of them startups and so on,
16:08where they need to embed, also in their own culture, when they produce this type of device, the cybersecurity
16:16of their future customers.
16:18And today, that's not really, let's say, their best quality.
16:23Yeah, I agree with Catherine, because also, you know, when we talk about the third-party risk, it's the humans,
16:30not our humans, not our organization's humans; still, human error extends throughout your supply chain.
16:35And the other point that I think also we don't take seriously enough is this rapid pace of innovation.
16:40So, today you have, we talked about IoT.
16:42So, when IoT first started being implemented in an organization, it might have been smart, but not secure.
16:48And now we're talking about AI, Gen AI.
16:51So there are always new innovations that pop up.
16:53And sometimes they are quickly adopted by an organization that is not fast enough to understand the risk associated with them.
16:59And then you cannot blame the users, because no one has even explained to them how it works, or what
17:04cybersecurity best practices they can apply.
17:06And, again, not everyone is cybersecurity savvy.
17:09So, I think this is also a clear problem today that we need to be fast enough in our cycle
17:13of innovation, but also fast enough in the cycle of security awareness raising and updating and being agile on this
17:19front.
17:20We'll come to the, what we can do to help our staff next.
17:24But I wanted to talk about technical solutions because I often want to talk about this.
17:28We hear this, the term, the weakest link is your workers.
17:32And I sometimes feel that that's unfair because everyone is just trying to do their job and cybersecurity is really
17:38complicated and sometimes really boring.
17:41So, asking people to be 100% on it all the time is quite hard.
17:44So, what technical solutions can we put in place in our companies to make it easier for them?
17:50What is their software?
17:51Is their hardware?
17:51You mentioned USB key checking, for example.
17:54I didn't even know that was still a thing.
17:55I didn't know that USB attacks were still going on.
17:57That's amazing.
17:58That's how they did Stuxnet, isn't it?
18:00Like the first ever cyber attack.
18:01Not necessarily, but indeed the issue of connecting devices because we know that it was either USB keys or water
18:07pumps connected to Natanz for Stuxnet.
18:09Or even mobiles.
18:11So, let's go through it then.
18:13Let's give some takeaways for people.
18:14What sort of hardware or software would you put in place to help reduce that risk of the human factor?
18:22So, yeah, maybe I can start.
18:23I think in the end we need to understand our risk as an organization, you know, because depending on your
18:28industry, as Catherine was saying, also on your digital environment, you need to understand what are the risks and then
18:33put the appropriate cybersecurity controls, the prevention controls, in place in order to control access to the
18:40data, control usage of the data, encrypt the data, you know, everything around managing the life cycle of the sensitive
18:45data in your organization.
18:46And securing access to your environment because also when you're talking about ransomware attacks, we need to make sure that
18:52we're securing this digital fortress that is no longer a fortress because data is everywhere nowadays for an organization.
18:58So, I know it's easier said than done, specifically for a large organization, but by adopting this risk-based approach,
19:05it is a solution that can actually help an organization contain the risk and implement those best practices:
19:11putting in the right prevention, with, you know, the least-privilege access principles, for instance, being applied.
19:17But if I go back, because I just saw the question also popping up about the serial clicker, it's that,
19:21indeed, sometimes you do all the security awareness that you want, and you will still have irresponsible behavior that will
19:28put the organization at risk.
19:29And this was the question about how we manage serial clickers who, after multiple fake phishing campaigns, still
19:36click.
19:37But the thing is that I always use the example of, you know, the scorpion and the frog: they want
19:41to cross the river.
19:42And the scorpion asks the frog, help me cross the river.
19:45And the frog tells him, I'm not stupid, you're going to sting me.
19:48And then the scorpion says, we're going to die, both of us.
19:50I'm not going to do it.
19:51And yet in the middle of the river, the scorpion stings the frog and they both drown.
19:55So this irresponsible behavior exists.
19:57People will reuse passwords.
19:58People will try to bypass security controls because they see that they have more things to do.
20:02And if you don't have the right security controls, first, to prevent those type of things, second, to detect as
20:09fast as possible this type of behavior,
20:11then your organization is going to be at risk.
20:14And I want to add to that, obviously, strongly proposing a more optimistic view on people.
20:19Because, I mean, this question asking about serial clickers and, you know, the weakest link, it's actually one of the
20:24reasons why we founded SoSafe.
20:26Because six years ago, this was the predominant paradigm.
20:30Oh, my God, the weakest link.
20:31The problem is always in front of the PC.
20:33And we somehow, like, need to circumvent them.
20:37Which I understand as a psychologist, because this is a very, yeah, strong urge to just switch the problem off,
20:43right?
20:43But as we see in security, on all these technological layers, and also on the human layer,
20:49it's not something we solve overnight with one project or with one piece of software.
20:54It's a constant fight...
20:55no, it's not a fight.
20:56It's a constant learning journey.
20:59We constantly need to expose ourselves to the latest attacks.
21:03We need to understand that.
21:04We need to help people understand and to establish secure habits.
21:07And also this won't kill 100% of the human risk.
21:12So we need to establish other layers as well.
21:14And this is, I think, a paradigm that is now much more dominant in the security industry.
21:20That there is not a, you know, silver bullet which solves everything overnight.
21:25But we need to establish proper multi-layer resilience factors or layers.
21:31And then it will be a continuous process.
21:34I totally agree with what's been said in terms of thinking about it and all the approaches.
21:40Just to go back to your question in terms of widgets that could help.
21:43And they will definitely not solve the whole of the issue.
21:46But they can perhaps help a step at a time.
21:48I'll just suggest one very obvious one,
21:51and one I thought about watching a recent Microsoft presentation.
21:56So the obvious one is, you know, against the issue of the bad passwords
22:00that people put in because they're too simple,
22:03because they use the name of their grandsons and whatnot:
22:07just having a goddamn multi-factor authentication system, for God's sake,
22:13or using a password manager.
22:15Okay?
22:15And that, you know, to me that's like the seat belt that we should have today.
22:19All right.
22:20So this is the obvious one.
22:21The more bizarre one that I thought of, which is actually quite dangerous,
22:25is, so if you watch the Microsoft presentation of the new Copilot+ PC with Recall,
22:32which is not a sci-fi thing, but which is a new product.
22:36So it would actually be, that's the way I interpreted it,
22:40a kind of co-pilot guy who would watch over whatever you would be doing with your PCs
22:47and whatnot.
22:47So I don't know if there are any VCs or funders or whatnot here,
22:52but to me that's an interesting idea that we could have a security co-pilot just for behaviors,
22:59provided we have all the checks and whatnot.
23:01So the computer would say...
23:02Which is something our platform has.
23:04Fantastic.
23:05So go and take it here.
23:06All right.
23:07I didn't know that.
23:07I'm sorry.
23:08So the idea would be that the computer says,
23:10whoa, whoa, whoa.
23:10Don't send the email.
23:12They would be watching what you're doing.
23:14There's lots of behavioral things,
23:15and we have actually a real behavioral researcher here that would tell you how to do that,
23:19because if you have always one guy next to your shoulder,
23:21maybe at some point you're fed up.
23:23So there are ways to do around that, I would assume, but...
23:25Yeah.
23:25There's also data privacy issues, obviously.
23:28With that Recall feature, it doesn't do content moderation.
23:30So if you jump back to, I don't know, yesterday,
23:33you will see your password there and your credentials with that Microsoft solution
23:39as it's currently being rolled out.
23:41But there's very...
23:42And this is something that's not often talked about,
23:45because when we talk about AI and social engineering, I mean, apparently,
23:48and we need to acknowledge that,
23:50currently the attackers are leveraging AI more than the defenders.
23:55I mean, on the social engineering side of things.
23:57Yes.
23:57But there's many applications where we can provide a very, very helpful co-pilot,
24:03for example, which is what we do for specific cases, right?
24:07If I lose my USB stick, I can ask Sophie.
24:09If I have a weird email, I can ask Sophie to actually help me understand that email.
24:14And these are cases where AI can really, really help us make more informed decisions.
24:19Is Sophie the name of your...
24:20Right.
24:21Okay.
24:23Maybe two points with respect to what was said.
24:25Today, before going to the co-pilot innovation approach,
24:29and also because we have the concern of data privacy,
24:33today we have the identity threat detection and response tools
24:35that actually help put a risk score on user behavior in the organization.
24:41And this is already something where, if an organization has it, you're not driving blind.
24:45You know, you have visibility.
24:46And again, we can identify the users that maybe did not properly understand the security training that we have done,
24:51or users that, again, just don't know what a technology does.
24:55This human error, you can detect it with this type of solution.
24:58I think this is definitely something that will help organizations on this front.
25:02And I don't know whether we want to go afterwards on the security awareness tools,
25:05because indeed there's a lot of innovation with GenAI currently that is very, very helpful.
25:09And the category name has just been renamed by Forrester.
25:13I mean, obviously, software categories are also a human-made construct.
25:17And so I absolutely like what you said, because Forrester just renamed it from security awareness and training,
25:23which is, I mean, the focus is on training, to human risk management,
25:26to understand better the individual risk profiles and the exposure also of your employees.
25:31And basically that's what we have started to implement at Airbus.
25:36Because basically we have today a monthly phishing campaign that is sent to all the employees,
25:44or not all the employees, but I think it's 90,000 out of 135,000.
25:49So it is random.
25:51So we target ExCom members and blue-collar workers alike.
25:56And we are looking at who is clicking; and basically what we try to teach our employees is that if you
26:04detect something,
26:05report it.
26:07So we have a one-click button that allows you to, you just click and your email is immediately transmitted
26:14to the SOC.
26:16And then you get an answer within usually two hours to three hours saying,
26:21okay, yes, it's a suspicious email, or no, and it's a good one.
26:27And I was, so I clicked on that button recently because I was supposed to receive a DHL packet.
26:36And I was not expecting anything.
26:38And it was a little bit after Christmas.
26:41And in fact, it was a book that was sent by one of my suppliers.
26:44But I was so suspicious that I clicked on the button and I refused to go to the DHL site
26:50to have my stuff.
26:52And three hours later, they say, no, no, this one is a good one.
26:55Better safe than sorry.
26:57Yeah, you have to, and better safe than stupid.
27:02So yes, three hours, I probably lost three hours, but it was not a big deal.
27:08And these campaigns, honestly, we have seen a significant improvement.
27:14So first, a decrease in the click rate, and second, an increase in the reporting rate.
27:20So more and more people start to think when they receive something, yes, I click.
27:25And we have, so it's a game.
27:27So on top, you get points.
27:28So there is, there is, so we have done some gamification around.
27:32So people get points.
27:34And after, at the end of the quarter, you get prizes and so on and so forth.
27:40It's presented to the ExCom.
27:42So for each of us, we get the score of our own organization.
27:47At the ExCom.
27:47At ExCom level.
27:50And unfortunately, digital is not among the best performers, which can be surprising, but that's just a fact.
27:59So even people who are doing digital, they get tricked by the system.
28:04So never take it for granted that people will get it, and continue again and again and again:
28:13the training, the awareness, the messages.
28:17We use the month of October, Cybersecurity Awareness Month, where we do a lot of, yeah, events for our people,
28:27just to try to keep that awareness up.
28:30And just before the break, the summer break, we do another one in June, July.
28:36Because that's also one of the tricks that the hackers are using.
28:41They know that there are fewer people at work during the summer break.
28:47And so they take, they try to take advantage of that.
28:50So we just give a kind of vaccine, let's say, to all our employees,
28:55just right before, to say, okay, guys, don't forget that the summer break is a hacker break.
29:02And I suppose also the Olympic Games, because they will be used for phishing again, like we always see.
29:07The best, and just for fun, the best campaign in terms of phishing
29:11was the one when we were supposedly offering a free pass for the Paris Air Show,
29:20which in the aerospace industry is very famous.
29:22And with this one, everybody was clicking.
29:26And is that one that your company ran as a simulation?
29:28Yes, we ran it as a simulation.
29:32Yeah, there are, they always find a way to hook you in, don't they?
29:36And I remember during the pandemic, in the early days, when we were all terrified,
29:40there was a really nasty campaign, for example, that sent out emails saying,
29:44click here to find out the cure.
29:46And I just, I remember thinking that is so evil and cruel.
29:51They will, they'll find a way to get you.
29:53So, so we've talked about the technicals.
29:55We talked about USB checking, two-factor authentication, which seems to be the absolute big one, doesn't it?
30:03And then other tools that we can put in place as well.
30:06So the email monitoring, that kind of thing.
30:07How much does this sort of thing that we now do at the BBC work,
30:11where in your email it will say,
30:14this sender is outside of your organization?
30:17You were mentioning some of the things that you've been doing in your emails.
30:19Do we know if that kind of thing works?
30:21Because I just ignore that.
30:23I just scroll past it.
30:24Yeah, if you look at psychological research,
30:27you would probably see something like, I don't know, not quite security fatigue,
30:31but your brain just filling that place with white space, essentially.
30:37It needs to be dynamic.
30:38It needs to fit the specific email or the specific call you received.
30:42I think then it's quite powerful.
30:44But if you bombard people with the same alert all the time, we know,
30:48from the fundamentals of psychology research, that you will ignore it.
30:53And do the simulations work then?
30:55You say it works for your company.
30:57What about your experience?
30:58Does that work having your employees kind of put them through their paces?
30:58So I will not talk about my experience, but about results found in the field,
31:05from one study covering 12 million users and 30,000 organizations.
31:10So you have quite a big sample.
31:13And indeed, the phishing simulation campaigns seem to have worked,
31:19because the study found that the click rate went from above 30%, which is quite sizable,
31:26to 5%, but only after 12 months of active campaigning, with at least one or two
31:37such campaigns every month.
31:40So you need to be very active on that.
31:44And you do have a sizable reduction.
31:45That being said, a couple of things around that.
31:47So, yeah, it seems to be working that there's an interesting return on your security investment on that.
31:53A couple of caveats around that.
31:54A, you still have 5%.
31:57And we go back to that issue that there seems to be a residual risk in human factor
32:03that you cannot fully mitigate.
32:06B, there are some issues around the way you explain things to the people
32:12who were caught, who clicked though they shouldn't have clicked.
32:16You know, all the contextual training.
32:18It seems it can actually have a deleterious effect, in the sense that the people who read that afterwards feel
32:26that there is some sort of safety net from the company,
32:30and actually stop taking care, because, at the end of the day,
32:38it was not the real thing.
32:39'I see I'm being taken care of.
32:41So I'm not on my own.
32:43So it's not that important.'
32:45And voilà.
32:46So, just some caveats around that.
32:47If I may add one thing.
32:48And Catherine, you already mentioned it.
32:52There are various metrics we should focus on and use to explain the effect.
32:58And the reporting rate, in my point of view, is the much more powerful driver for resilience.
33:03I mean, bringing a click rate down and literally there won't be a company that can bring down the click
33:08rate to 0%.
33:08Because if you do that, then you probably train your people on two simple templates and they just spot them,
33:14which can even increase the risk.
33:16But if you bring it down to a substantial level, that's fine.
33:19Then you have obviously some risk reduction.
33:21But for the reporting rate, you know, you talk about like 70%; we even see reporting rates
33:27of 90% of all employees among our most successful customers.
33:30Then you can contain the blast radius because time is so essential.
33:34And there's one story of Reddit, for example, that did a tremendous job in creating this positive error-embracing culture.
33:41Where an engineer, also a technical staff member, clicked on a phishing email.
33:46Then thought twice.
33:47Realized it and immediately reported it through a very, very seamless channel.
33:52And they could contain the blast radius.
33:53And I don't think any data was even leaked or encrypted.
33:57And I think that's something we need also to talk about because this drives resilience.
34:02We don't know what kind of attacks or phishing attacks will come to us in a year.
34:07But if we train our people to be a little bit more suspicious, then we have a resilience driver.
34:12And again, I think the transparency thing is very important.
34:16So you need to also create a non-blaming structure, where people know that if they have made a mistake
34:24and they report it, they will not be blamed for it.
34:27And that we will help them.
34:29We will teach them.
34:31So typically, the serial clickers, because we have some, of course, the way we deal with them is we are
34:39not firing them.
34:40We just take time, and even if it is every month, we go to the serial clicker.
34:48We spend some time with them.
34:50We show them what they could have spotted on that specific email so they can maybe improve.
34:59Doesn't mean that next month they will not click.
35:01But we really want to make sure that people feel safe to report when they have done something wrong, that
35:09they report it.
35:10Because as you said, the faster people report, the better it is for the organization.
35:15Because you can block, you can investigate, you can safeguard the area that has been infected, and so on and
35:22so forth.
35:22So, yeah.
35:23Yeah, I think there is also something we need to understand: our security awareness training, human risk
35:29management, whatever we want to call it, in the end, it needs to be agile.
35:33It must not be a program that runs on the side and that we update
35:38once per year.
35:39It's something where we collaborate with the SOC, with the intelligence team, on the new types of
35:45techniques, tactics, and procedures used by cybercriminals, and inform the users.
35:50Because in the end, it's about empowering the human, you know.
35:52How do we help our users to understand what's the issue?
35:57Phishing is one of the biggest problems.
35:58Everything that is, I would say, email-based attack is one of the big problems, but it's not the only
36:02problem.
36:02And also, we need to build a scenario-based approach so that, depending on what your function is
36:07in the organization,
36:09what type of data you access, what type of activities you do, we help you understand why you
36:15need to be careful,
36:15and how you need to be careful when you face the actual problem.
36:18So, these programs today are no longer, you know, just the basic computer-based adaptive learning,
36:25where I click through and do the multiple-choice quiz at the end, and then I'm done.
36:28No, it's a continuous awareness-raising program, which actually requires also budget for organization.
36:34It's a big investment.
36:36So, it's a culture thing as well, then, I suppose, what we're building towards here.
36:39But I suppose there is a fine line, isn't there, between tricking your staff and being too soft-handed.
36:47I remember there was a story, very quickly, that a railway company in the UK did a phishing exercise,
36:54which was really criticized, because it said, click here for a pay rise.
36:58And it looked like it was from the employer.
36:59So, obviously, everyone clicked it.
37:01Then they got in trouble for clicking it, which I just felt was a bit much.
37:04But anyway, should we go to some questions?
37:06Because we've only got, is that okay, unless you wanted to say something?
37:08Just a quick one on what you said.
37:10And if you take a cue from what we've seen in the airline industry,
37:14there's been the development of a non-punitive culture, coming actually from Nordic airlines
37:20and now adopted by all Western airlines, which is very important.
37:23Because at the end of the day, this is what you said:
37:25it's very important that the info, the intel about what's wrong, gets as quickly as possible up to the CISOs.
37:32And for that, you shouldn't have a culture of people sweeping everything under the rug.
37:37So, this is critical.
37:37And I think it's not even at the size of the company.
37:41It's also something that we do across the industry and with other partners.
37:46So, when you are hit, or when you are targeted by a specific attack,
37:51typically in the aerospace industry, through our professional association,
37:57we share the indicators of compromise with our suppliers, with our customers, and even with our competitors.
38:05I think that in cyberspace, there are only two camps: the good guys and the bad guys.
38:11And we are the good guys.
38:12So, we need to share this information because otherwise, we are just putting the entire industry at risk.
38:20And if something that is happening in a company can help some others, we have to play that game.
38:28Something that we have also done: last year, one of our suppliers was hit by ransomware in northern Germany.
38:37And basically, I sent my team to help them recover from the ransomware, because they were a smaller organization.
38:45They were not prepared for it.
38:47They had absolutely no clue where to start.
38:49And I just sent five people for three weeks to help them recover from the situation.
38:55It was in their interest, but it was in my interest too.
38:58And by the way, it was good training also for my team, to put into practice everything that we have
39:05prepared in case we are hit one day.
39:09Did that affect the relationship that you have with that supplier?
39:12In the good way.
39:13Oh, right.
39:14In the good way.
39:15You didn't sort of like cut them out of the business?
39:17No, no.
39:17OK.
39:18OK.
39:19We've got five minutes.
39:20There's two questions here about AI.
39:22But there's one that's been upvoted quite a lot.
39:25While security awareness programs in larger companies are now fairly common, how have insider threat management programs evolved?
39:32The next one's about AI.
39:33That one isn't about AI.
39:34But let's go for that one.
39:35What do you guys think?
39:39Maybe I can take this one.
39:40So basically, we have created an insider protection unit, where we have people who are looking at the internal side.
39:53So not taking care of the external threats, but the insider threats.
39:58It can be just a stupid thing.
40:03We had one case where somebody who was really upset with his annual performance evaluation started to remove and delete
40:16all the information he owned in Google Workspace.
40:21Wow.
40:23And it was detected thanks to the insider threat unit.
40:27So basically, how it works:
40:29the team is working on algorithms that associate each and every employee with a profile, let's say.
40:41So they profile our behavior: how much time I spend on which websites, how long
40:51I do that every day, how many emails I normally send in a day, what is the
40:59volume of information that I download or upload, and that kind of stuff.
41:03How many documents I am copying, and so on.
41:07So everybody is profiled, and as soon as your behavior on a specific day deviates from the baseline, it
41:20raises an alert.
41:21So it can also be based on the working hours.
41:25So it's an interesting one because I was spotted recently because I was in the U.S.
41:30and basically I was working outside of the European hours.
41:35And so I triggered an alert for the insider threat team, saying, okay, oops, Catherine is doing something
41:44bizarre.
41:45Normally, at two o'clock in the morning, she's not supposed to be doing anything.
41:48But yeah.
41:50But I imagine that's the sort of thing you can only really do with a very large wealthy company.
41:55In one word, is it possible to buy that kind of thing and have that kind of security if you're
42:01a small or medium-sized company?
42:02Yes, of course, because you actually have off-the-shelf solutions. They will not be as, I would
42:07say, industry-specific as what Catherine has done, which is specific to the aeronautics industry.
42:13But you have off-the-shelf solutions, like UEBA or ITDR, that are really based on this.
42:19Yeah.
42:20Creating a baseline, then being able to add a risk score to your behavior, asking: is this behavior becoming
42:25too risky? And then it will create an alert.
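The baseline-and-deviation approach the panelists describe (learn each employee's normal activity, then alert when a day deviates too far) can be sketched in a few lines. This is a toy illustration only, not any vendor's actual UEBA logic; the metric names, sample data, and z-score threshold are all assumptions:

```python
from statistics import mean, stdev

def baseline(history):
    """Per-metric (mean, standard deviation) learned from past daily activity."""
    keys = history[0].keys()
    return {k: (mean(d[k] for d in history), stdev(d[k] for d in history))
            for k in keys}

def risk_score(today, base):
    """Sum of z-scores: how far today's behavior sits from the user's baseline."""
    score = 0.0
    for metric, value in today.items():
        mu, sigma = base[metric]
        if sigma > 0:
            score += abs(value - mu) / sigma
    return score

# 30 days of "normal" activity for one employee (emails sent, MB downloaded).
history = [{"emails": 40 + i % 5, "mb_down": 200 + 10 * (i % 3)}
           for i in range(30)]
base = baseline(history)

# A typical day scores low; a mass-download day stands out sharply.
assert risk_score({"emails": 41, "mb_down": 210}, base) < 3
assert risk_score({"emails": 42, "mb_down": 9000}, base) > 10
```

Real UEBA/ITDR products layer on peer-group comparison, working-hours features, and risk decay over time, but the core idea is the same: a per-user baseline plus a deviation score that triggers an alert, exactly the mechanism that flagged Catherine's late-night U.S. session.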
42:27So these are the two about AI.
42:29We'll try and do them both in less than two minutes.
42:32How concerned are you about AI in terms of people, you know, members of staff using it and accidentally giving
42:39away company data in ChatGPT, which is the first question?
42:42And also, do you think it's quite positive?
42:44Are we looking at AI helping us as well?
42:47So just on shadow AI, it's a huge problem.
42:49There are some recent stats about white-collar workers using AI.
42:57Seventy-eight percent of them, according to one study, would do it as shadow AI.
43:02That is, AI systems, generative AI systems most of the time, not sanctioned, not controlled by the company.
43:09So yes, it's a big issue.
43:10There start to be some tools, from new cybersecurity start-ups also coming of age around that,
43:17including for the issue of even local, internal AI systems, generative AI systems, where, unfortunately, you may get answers that, because
43:26of your role, your rank, your function,
43:27you shouldn't have access to, which is also a leakage issue.
43:30But even there, you start to have some solutions.
43:32But yes, it's a big problem.
43:34Maybe to answer the other side of this question.
43:37Today, for instance, the security awareness program are leveraging AI.
43:41So in terms of scenario-based attack simulation, there are a lot that are leveraging AI.
43:45And now, with Gen AI, you have like your real-time assistant.
43:48So usually, if you want to understand a security policy, you need to go to the information security management system,
43:54pull up the policy among all those policies, go to page XYZ, and then try to understand what you should
43:59do.
43:59Now, with these types of real-time assistants, you can ask: okay, this is a document with customer emails.
44:05What should the data classification be?
44:07Or, for instance: I need to send this financial report to the external auditor.
44:10Can I do it, and under which type of controls?
44:13And then the AI can answer, because it's analyzing the policy: you need to do the encryption,
44:18and those are the controls, the signature.
44:20So all of those things are brought by AI.
44:23But it takes us to what you've said as well: we need to secure it by design.
44:27Because otherwise, it can also be used to trick us.
44:30And if you have AI hallucination or AI poisoning, then it becomes a bigger problem.
44:35So every tool that you use anyway, you need to properly secure it and monitor it.
44:39Otherwise, instead of being an ally, it becomes actually your point of failure.
44:43So there's a ray of hope there, but also a lot of caution as well.
44:46Thank you very, very much to the panel.
44:48I hope you found that very interesting.
44:49I certainly did.
44:50Lots of takeaways for us there.
44:51Thank you to Guy-Philippe Goldstein, Catherine Jestin, Niklas Hellemann, and Zena Zucker.
44:57Thank you.
44:58Thank you.
44:58Thank you very much to you, Joe.
45:00Thanks to your panelists also.
45:01That was a fantastic session.
45:03Now, this is what's going to be happening this afternoon here.
45:06Our afternoon is packed, and we're focused on the future of AI and work, and human interaction experiences.
45:12At one o'clock, we have John Chambers, the founder and CEO of J2E Ventures.
45:18And he's the former chairman of Cisco with us.
45:21At 2pm, Romain Huet, who is the head of developer experience at OpenAI,
45:25is going to be doing a live demo right here on this stage of GPT-4o.
45:32You do not want to miss that.
45:34We also have speakers from Microsoft, EY, LVMH, Amazon, BNP, the very long list.
45:41And Carol Stubbings of PricewaterhouseCoopers will be here doing an Ask Me Anything.
45:45I'm going to be handing you the microphone to Ask Carol.
45:49So, as I said, our afternoon is packed.
45:51Go grab yourself some lunch now.
45:53Back here at one o'clock.
45:55See you then.
45:55If you're headed across the bridge, make sure you check out the good hack.
45:59Thanks very much.
46:00See you at one o'clock.
46:01Thank you.