- 1 day ago
Category
🤖
TechTranscript
00:00So Mary, I wanted to start with you. We had a conversation earlier about social engineering
00:04and the fact that, you know, here in the UK we've seen M&S, Co-op, we've seen Jaguar Land Rover,
00:09the list goes on and on of organizations that have been hacked very, very badly.
00:14And the disruptions are very, very great and very, very expensive. And at the core of these
00:18are essentially kids, young people, using the phone to hack these organizations and bypass
00:25all of their defenses and cause an extraordinary amount of damage. We had an interesting conversation
00:30about how BAE protects its systems from these kinds of attacks, and it's really hard. And
00:35I was hoping you could kind of walk us through some of the best practices for protecting your
00:41network against essentially phone calls that manipulate call centers.
00:45Yeah, I think there's a few different aspects to it. So one of the first is that, like many
00:51companies, we outsource our help desk. But we maintain a really close relationship with
00:58our suppliers on those kind of things. And we're working with them to, well, flow down
01:02a load of requirements to them. So we don't mess around with the kind of requirements we
01:05flow down to them in terms of clearances of the staff, nationalities of the staff, because
01:12we're a defense company. So that matters in terms of the information that they can see.
01:15And then there's also the helping to train their staff. So we have a team who does security
01:23education awareness. And they're very much looking at it from that human angle of what
01:29are the human vulnerabilities and how do we work with that, not in a kind of telling off and blame
01:33way, but in an explaining the different ways that attackers can come at you and how to manage that and
01:41and how to think about that and how just to take a moment, just pause if it doesn't feel right,
01:47those kind of things. And then there's kind of the process pieces to it. So if someone rings up and
01:53asks for something like a password reset, there's a whole load of process that the help desk has to
01:59go through with the member of staff or person that's pretending to be them around asking for lots of
02:08different information, which, you know, it can't just be that, because depending on how much you put
02:14out on social media, sometimes that kind of information can be out there and attackers have
02:18harvested it. So you have many different questions to make it harder for them to harvest it all.
02:23There's also rather like your bank when they ask you for a password and you just put in a couple of
02:27characters, they do that verbally. And if they don't have any of that set up, then it flicks to a trusted
02:35person. So you have to get your manager to ring up and your manager will be on the network and
02:41therefore you can verify they're trusted. So in other words, if one thing fails, there's another
02:45thing, there's another thing, there's another thing. So it's kind of multi-layered to try and
02:49try and stop it. And that's the detail that I love in this. And, you know, part of my job is we
02:54will report on these attacks. We just did a bunch of reporting on Jaguar Land Rover. We've been
02:58reporting on all of it. And what you'll find at the core of so many of these attacks, I mean,
03:02Tim, I'm going to ask you about this in a second, is you'll find that there was somebody who called
03:06the call center, impersonated a real employee, answered a few security questions, and got a
03:11password changed, logged onto that person's computer, escalated their privileges, stolen admin
03:16credential, and off they go. And then the company is, you know, put out of business for a month,
03:20you know, or longer. And the detail that I love is that if you fail the security checks,
03:25or even if you pass many of them, but you don't have all the required information, your manager
03:30has to call. It's like your manager has to call in and say, I can vouch for this person. Like,
03:36that's a very, very high level of validation. Like, not a lot of companies would do that. Obviously,
03:42BAE would and, you know, kind of other highly sensitive companies would. But in your view,
03:46is that kind of like the gold standard? I've also heard video verification. Is that something
03:51that is like considered gold standard for validating that somebody's a real employee?
03:57Like all of these things, the bar's moving all the time, because the ways that you can impersonate
04:01you're moving. So, yeah, you could be on the company chat system, and you've got to be pretty
04:07verified to get onto that. Then you can switch on your video, and your voice won't work,
04:11because they don't know you. But I think the key is that we're constantly looking at
04:18how we're validating it, and how it needs to change according to how attackers are shifting their
04:23methodologies. And I imagine that would actually be challenging. A video verification would be
04:27challenging if you don't know what the person looks like. You just have a picture, right?
04:31Right. Yeah, it depends on how prolific they are on LinkedIn and other social media sites.
04:35But now, Tim, going to you, a big part of your work at Palo Alto's Unit 42 is responding to these
04:43attacks, right? Like a company will be breached. They might have lots and lots of cyber defenses.
04:48They might have world-class cyber defense, but somebody from a call center let a hacker through.
04:53What does that look like when your team goes into some of these environments,
04:56and you've got customers saying, I've spent millions, maybe tens of millions on cybersecurity,
05:02but somebody abused my call center and they got in. What does that look like for your team when you
05:06show up to a place like that? Well, firstly, one of the biggest challenges is that it's hard to know.
05:14The advent of the use of AI for deepfakes, false face, masking voices, and as Mary said, using it to
05:24harvest essentially a highly comprehensive dossier on probably not only one victim, but many,
05:31many potential victims. And then we would be remiss if we thought that they only tried one,
05:38right? They're probably trying many, many times to get in. And it only takes that one
05:44successful social engineering of a help desk persona to get into the organization. And then it's very
05:50quick to proliferate from there. When we go in, we potentially don't know that that was the source
05:56of the breach, right? We're typically called in. Unfortunately, at the end of the attack life
06:00cycle where there's an objective has been achieved, the data has been stolen, the ransomware has been
06:05deployed, some impact has been felt by the organization. So our job at that point is to painstakingly
06:12try to find all of the breadcrumbs that take you back to that initial access vector.
06:18But I can tell you that over 70% of our incidents worked. It was either social engineering or it
06:26was an exploit of a software vulnerability that's exposed externally. And it's roughly 50. So it's
06:31about a third of our cases are socially engineering as the root cause.
06:36And the important part about that is those are fixable problems. These are not kind of like
06:41futuristic, unstoppable cyber attacks that like nation state, right? Something that nobody could
06:47ever imagine or predict. Like these are things that are fixable, as Mary said, with enough training or
06:53at least enough control, right? In theory. I mean, the downside, as Mary said, that the capability of the
07:01attackers is ever evolving, because they are able to adopt AI and take advantage of it in ways where
07:07they don't have to worry about morals and ethics and legal legalities and regulation. So they are in a
07:14much more accelerated race to seek advantage from AI than than the people on the on the good side.
07:22You mentioned like kind of dual approvals, like get my manager in this. We've actually worked cases of
07:29insiders that have been working jobs, using perpetual like face alteration, voice alteration.
07:37So these tools are there, and they're being used in the world. And they are being effective. They are actually
07:42allowing a North Korean operative to be embedded as an insider and actually work a job. You know, they're
07:48having multiple zoom calls, they're doing the actual job. To disguise their face.
07:53They're disguising their identity. Every day. Yeah, every day, every interaction. There are telltale
08:00signs still, again, because we're in the early stages of this evolution of TAC techniques. But yeah,
08:06and it sounds like something from a movie, but it's happening. It's happening in the UK. We did a
08:12takedown of an insider, Korean insider of a UK business a few months ago. And we had to be
08:18quite covert in how we did that. That's so interesting. And you had told me a story earlier
08:24as well, I'll be moving to, you know, obviously the AI stuff, is that, you know, one of the research
08:27projects that Palo Alto is doing, can you tell us about, it's a dueling chat bot, right? Where you've
08:34got the chat bot fueled by AI and all the information that a hacker would have about a target. So you've
08:41profiled a person, you're trying to impersonate them and hack into your organization. And a chat bot on the
08:46other side that is trained with all the best practices of a call center, like the ultimate
08:50call center engineer, like the perfectly responding call center engineer. And you have these chat bots
08:56going back and forth to try to figure out what are the weak points. Can you talk a little bit about that?
09:00Yeah, absolutely. I mean, we heard the tail end of the last session, right, where they're talking,
09:03everyone's in a race to capitalize on AI to build a chat bot to be customer services and help desk.
09:09So as we know, it's a weak point, we're obviously looking to see, well, is that still going, is that
09:15it might create cost efficiency and efficacy and effectiveness in doing the intended job,
09:21but can it be abused? So yeah, we've, we've got a chat bot that is effectively the adversary,
09:26adversarial AI, it's trained on all of the recent sort of scattered spider cases where they're
09:31really successful in social engineering, their way in to get a granted authentication,
09:37et cetera. And, and that, that is really effective for one of the main advantages that AI brings,
09:45which is that persistence. If you think about it, that, that chat bot can be dogged in its approach
09:51and it can just keep going. And if you've supplied it with a dossier, with enough pretexting and
09:55background information, we found that it's really effective. You know, we've done it against a manual,
10:02you know, we all play the role of a help desk and we'll do it, but then we've built a help
10:06desk bot, as you say, but you, you do all the pivots that Mary was talking about and you ask all
10:11the background questions and all, it can, it can, it can win. It can get through all those questions.
10:16And what do you learn from it? Do you learn that it's doggedness? Like,
10:20is that what you trend people on? Like, I think the persistence, because again,
10:23if it's a human help desk person, person like on the, on the end, I was talking to someone last week
10:31at an event about AI and, and they were saying they'd listened into one of these transcripts and
10:35it was over an hour of like, just persistent, relentless, I need to get in. I need you to give
10:40me this. And, and that's also where if, if you go, if you create that sense of urgency as well. So
10:47it's like, Hey, do you know who I am? I'm the CFO, you know? And if you can take the CFO's voice and
10:52is more recognizable or their persona in general, it's, it's adding, exerting that pressure. And actually,
10:59more often than not, because, Hey, I'm a call center person. I'd rather keep my job and not
11:05upset the CFO by not helping him get to his urgent meeting that he's locked out because he's
11:10changed his phone. Cause he, you know, all that sort of stuff. So I think more often than not,
11:15human fallibility will come in and the human will. So maybe that's a key is that chatbots could give
11:23an advantage on the defender side in a help desk scenario. If they are relentless in there,
11:28denial and actually you can't fire it afterwards. Right. If it gets it wrong.
11:33Right. I like that. And I really liked that idea because that's what you're facing. I listened to
11:37a call at one point, one of these scattered spider calls, it was an hour and a half. Like
11:41when we think of these hacks, I would tend to think of them as a shorter term, kind of like
11:45a shorter experience. But if you're dealing with an hour, an hour and a half and you can't hang up the
11:50phone, like you're going to break at some point. And you know, I think that's really interesting
11:55advice that Mary, you were telling me something too. I mean, you obviously work for a defense
11:59company. Generative AI is a tricky proposition for a defense company. You can't just put it in your
12:05products and let it go. But you did mention there are some interesting areas that your organization
12:11is using generative AI. Can you talk about, I mean, it's even basic stuff like giant repair manuals.
12:17Yeah. For airplanes. Can you talk about like...
12:20So the nice thing about the kind of policy space and the manual space, if you can call it that,
12:27is that it's structured data and it's trusted data, which is that's your foundation for AI,
12:33having the good data that you understand and that you trust. And those support manuals,
12:39maintenance manuals for aircraft are very trusted, right? They've been very carefully verified over
12:44a number of years, but they're big and they're hefty and they're not particularly nice to flick
12:49through to work out how to do something. Are we talking hundreds, thousands of pages?
12:52Oh yeah, huge, huge. And you can imagine the learning curve for someone new coming in and
12:57having to use those. So running those through an AI engine, an LLM that allows you to ask a question,
13:06it comes back with the answer and it references where it came from. So if you're still nervous,
13:10you can go and check the manual yourself is a really good way to speed up the work for those
13:17users and also speed up the training for a new person coming in. So it's really good efficiency.
13:23I like that. It's closed-ended research. It's trained on a trusted data set that is known to
13:28the company. There's accountability, who wrote it, how long it's been around.
13:32Correct. And there's still a human in the loop on it.
13:34And another interesting use case you gave was knowledge transfer. Can you talk about using
13:40generative AI for knowledge transfer? Yeah. So again, this is about kind of
13:45training people up quickly. So we've got the workforce that are reaching the end of their
13:50retirement. And some of our product areas are pretty niche. They're really skilled individuals
13:57who've gathered a lot of intangible knowledge over a very long time that won't necessarily all be
14:03codified. So just looking at how do you take that out of their heads and give it to the young
14:10apprentices that are coming in. And we've got amazing apprentices training centers. So, you know,
14:16getting some of those, reaching the end of the career personnel to talk, recording.
14:22This is literally interviewing them.
14:23Yep. And pulling that out and using that as training for the young people coming in.
14:27So you interview them, you record the conversation, you apply generative AI to the audio of the
14:33interview. And then what is the generative AI? What is the product that is then given to the
14:39younger? Well, that's what we're experimenting with. So it could be they type in a question as
14:44if they're talking to that older person, say, what do I do with this? And get an answer. Or it could be
14:49in training material that we produce. And has that been useful? Have you found that to be?
14:53Still at the early stages of it. But yeah, it's pretty exciting. It's interesting. And
14:58another thing I wanted to talk to you guys about is, we're always looking for ways,
15:01and Palo Alto has published research on this, about ways that hackers are using generative AI.
15:06We talk about chatbots and social engineering, but also emails. Mary, can you talk about where
15:11you're seeing generative AI being used? These are large language models. So you would imagine
15:17the use cases would also involve language. Can you talk about where you're seeing
15:20hackers use this technology? So really, it's to increase their efficiency in areas where they
15:28might have struggled to do it before. So it's either doing it faster, or in the case of some
15:33phishing emails that we've seen, curating phishing emails in countries where they don't have that
15:39language as their first language. So there's a really large increase in the number of phishing emails
15:45that are appearing in Japan now. So not many people outside of Japan speak Japanese. So crafting a
15:51phishing email from scratch, pretty difficult to make that convincing. With an AI model, really easy
15:57and really scalable. Character-based languages, we're hearing this from others as well, with
16:02character-based languages which are really hard to learn. LLMs turn out to be really, really great
16:09at crafting very finely, very refined email messages and at scale. And Tim, you had mentioned as well
16:15that you're seeing hackers use generative AI, the initial recon of a network, right? Like, can you
16:21talk about that? Yeah, I mean, yeah, like you mentioned dossiers before, right? We're seeing evidence that
16:29they're using AI to bring together huge data sets. And then you're obviously creating, if it's targeting
16:37phishing campaign, there's understanding of what transactions might be happening, business
16:43transactions, who might be emailing them. So there's a higher fidelity, a higher efficacy in
16:48because it's more tailored, because the cost of tailoring is no longer cost prohibitive, right? So you
16:54can do en masse email phishing campaigns and you can do them highly tailored and spearfish at scale.
17:02So you get a higher hit rate, higher success rate. We even saw one instance recently where
17:07the email was compromised and then a phishing campaign was sent from a compromised email.
17:13So like right away, like, yes, straight away. So it's like legitimately right, right, let's go and
17:17see where we can get to. The other thing that we talked about was the sort of marrying up of data
17:23sets. So it's like, hey, let me grab a whole load of leaked credentials, compromised credentials.
17:27Let me marry them up with where they may be applicable. So where are their exposed authentication
17:32portals or VPNs and etc. And also where I've got an exploit, where might there be a unpatched system.
17:39So it's marrying up these data sets at scale. It's big data essentially, but to try and have a higher
17:46efficacy of it. There was one other really cool thing which I really wanted to say, which was
17:49we ourselves do hacking. So we have a red team that goes in and is paid to hack our organizations
17:56in a safe way. They recently had a success where they used talking about using like bypassing millions
18:03of dollars of security controls, like they were looking for ways to move laterally. And they actually
18:09abused co-pilot in an environment to gain access to a document which contains details of API keys,
18:17which then ultimately led them to further their attack and achieve their objective.
18:21The internal AI.
18:22Yeah. So they used legitimate use, business use, internal AI, co-pilot. And they were there,
18:29bear in mind, they were there with a foothold and illegitimate, but masquerading as a legitimate
18:34user. But that user didn't have privileged access to these secrets in these documents. But the co-pilot
18:40did because the co-pilot's got God status because it's there to help.
18:43I love that. I love that. We've got only a short amount of time left. I will say that
18:47another thing Mary mentioned as well was that the amount of time from the disclosure of a
18:51vulnerability to that vulnerability being exploited is now down to like hours. So a company will come
18:57out with a patch. You will be seeing attacks, you know, these, these folks were saying within three,
19:02four hours. So like your time has to get faster. You've got to be faster at defense because the
19:07hackers are getting faster at offense.
Recommended
5:40
|
Up next
0:38
10:12
12:33
5:18
4:55
5:51
Be the first to comment