Poppy Gustafsson, Chief Executive Officer, Darktrace
Lama Nachman, Intel Fellow and Director, Intelligent Systems Research Lab, Intel Corp.
Baroness Joanna Shields, OBE, Founder and CEO, Precognition
Moderator: Ellie Austin, FORTUNE
Transcript
00:00 So, Poppy, Lama, and Joanna, welcome. Thank you so much for being here. Poppy, I'm gonna start with you. According to some research conducted by Darktrace and released last week, I think, so very topical, 74% of security practitioners believe AI-powered cyber attacks are already impacting their organizations, and only 60% believe that their organizations are adequately prepared to cope with this. So, within this context, what are some of the most challenging cybersecurity threats that your clients are currently reporting to you at Darktrace?
00:35 Oh, it's such an interesting space. Working in cybersecurity, we're in a really privileged position, because we're all thinking about how we adopt AI for good, and how we make sure we're using it safely and building trust between technology and people. But we're up against adversaries that pay no attention to that. They don't care about doing it safely, and if anything, they want to elicit trust, because they're trying to encourage people to do something that they otherwise wouldn't do, whether it's clicking on a link or downloading an attachment or otherwise enacting something.

01:08 So, they're using tools like AI to, firstly, really widen their ability to build trust, because they can communicate in much more natural language, and they can access these systems that make it feel much more human and engaging. But also, suddenly, you're able to proliferate your phishing attacks, or whatever it is, at a much bigger scale. Say you want to send a phishing attack against a company in France, and you don't speak French. No problem. You've got plenty of systems that can enable you to do that in a really convincing and compelling way.

01:42 So, businesses are realizing that that interface between technology and people is so much easier to breach, because people are able to be tricked by the technology using things such as AI, and the number of phishing attacks that we've seen has increased enormously. But really underlying that is the type of phishing attacks that happen.

02:05 It's no longer those spray-and-pray emails. Great news, you've won the Nigerian lottery, and you think, I don't remember playing the Nigerian lottery. It's actually much more specific. You know, when we met at the Fortune conference, it was lovely to see you, I thought you might be interested in this. Something that's much more human and convincing. So, linguistic complexity is a measure that we look at, and the linguistic complexity of phishing emails has increased very significantly as attackers try to really build trust.
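(To make that "linguistic complexity" measure concrete, here is a toy sketch in Python. The surface features and equal weights below are illustrative assumptions, not Darktrace's proprietary metric.)

    import re

    def linguistic_complexity(text: str) -> float:
        """Crude complexity score from surface features of an email body."""
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text.lower())
        if not sentences or not words:
            return 0.0
        avg_sentence_len = len(words) / len(sentences)    # longer sentences
        avg_word_len = sum(map(len, words)) / len(words)  # longer words
        type_token_ratio = len(set(words)) / len(words)   # richer vocabulary
        # Equal-weight combination; the weights are arbitrary assumptions.
        return round(avg_sentence_len / 10 + avg_word_len / 5 + type_token_ratio, 3)

    spray_and_pray = "You won lottery!!! Click link now claim prize."
    targeted = ("It was lovely to see you at the Fortune conference last week. "
                "I thought the attached analysis might interest you ahead of "
                "Thursday's board review.")

    print(linguistic_complexity(spray_and_pray))  # low score
    print(linguistic_complexity(targeted))        # noticeably higher score

A real system would track how a score like this shifts across a population of phishing emails over time, rather than judging any single message.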
02:33 And what technology are you having to leverage to combat that?
02:37 AI is obviously a huge piece. And really, for us, cybersecurity is such a natural home for AI, and we've been doing it for more than a decade, because you've got these huge swathes of data that exist within an enterprise, and you have to very quickly be able to analyze it, identify those patterns and trends, and say, okay, is this in keeping? Is this what I would expect for our enterprise and our organization?

03:06 And the challenge is that every single business is completely different. The culture of businesses is very different, and that culture is reflected in their digital and data activity. So, how do you be prescriptive and say, if you see this, then that is bad? Because what is good for one company might be bad for the next company. So, AI is a really important tool in saying, okay, how do we baseline you and your business and that normal pattern of life? Once you know that, it becomes a really powerful tool, because you can then say, this is very out of keeping for you and your organization.
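(A minimal sketch of that "pattern of life" idea: learn per-feature statistics from an organization's historical activity, then flag events that deviate sharply. This assumes simple numeric activity features and a fixed z-score threshold; it is an illustration, not Darktrace's actual approach.)

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical baseline: 30 days of activity features per employee-hour,
    # e.g. [logins, MB uploaded, distinct hosts contacted].
    baseline = rng.normal(loc=[5.0, 20.0, 3.0], scale=[1.0, 5.0, 1.0], size=(720, 3))

    mu = baseline.mean(axis=0)
    sigma = baseline.std(axis=0) + 1e-9  # avoid division by zero

    def anomaly_score(event: np.ndarray) -> float:
        """Largest per-feature z-score: how far outside 'normal' is this event?"""
        return float(np.max(np.abs((event - mu) / sigma)))

    normal_event = np.array([6.0, 22.0, 4.0])
    odd_event = np.array([5.0, 400.0, 3.0])  # massive upload, otherwise in keeping

    THRESHOLD = 4.0  # an assumption; real systems tune this per organization
    for e in (normal_event, odd_event):
        s = anomaly_score(e)
        print(round(s, 1), "ALERT" if s > THRESHOLD else "ok")

The point of the baseline is exactly what is said above: the same 400 MB upload might be perfectly normal at a media company and wildly anomalous at a law firm.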
03:39 Joanna, you have very many strings to your bow that we could talk about, but we're talking to you today as the founder and CEO of Precognition. Now, on your website, it says that the company empowers leaders in an AI-centric era to anticipate the future. What does that mean on a practical level?
03:55 So, I wanna pick up on where Poppy left off. The human component of this is very important. If ChatGPT was the starting shot at the beginning of the race, we're now 18 months in, and what we're seeing is the value of domain-specific expertise and human capital in developing these systems.

04:18 So, we're far beyond just using an AI chatbot as a copilot for what you do. It's about integrating human intelligence and massive amounts of data, decades of data, rich data sets in domain-specific areas, and applying that where you can really leverage this capability and build systems that are resilient, that are transparent, and that have the expert at their heart. So, through these systems, you have the ability to understand the response you're getting and know exactly what piece of data got you there.

04:54 This level of granularity, this level of engaging experts in the process to build domain-specific large language models on top of the richness of what you get today with the standard foundational models, is a completely empowering moment for enterprises. And we're starting to see not just efficiencies, but completely new ways to deploy human capital by leveraging these models.

05:21 So, I think it's such an exciting time. I'm gonna be the optimist today, because I really believe that this is the most empowering and enabling technology of my lifetime, and I've been at this for about 40 years.
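(One common way to get the traceability Joanna describes, knowing exactly what piece of data got you to a response, is retrieval-augmented generation: retrieve domain documents first, then have the model answer only from them, citing sources. The toy sketch below shows just the retrieval-and-citation half, with invented documents and ids; it is a sketch of the general technique, not Precognition's system.)

    from sklearn.feature_extraction.text import TfidfVectorizer

    # Hypothetical domain corpus; each entry carries a source id for citation.
    docs = {
        "trial-042": "Phase II trial 042 halted after hepatotoxicity signals at 40 mg dosing.",
        "assay-117": "Binding assay 117 shows strong affinity for the target at low concentrations.",
        "memo-009":  "Manufacturing memo: compound stability degrades above 25 degrees Celsius.",
    }

    ids = list(docs)
    vec = TfidfVectorizer().fit(docs.values())
    doc_matrix = vec.transform(docs.values())  # rows are l2-normalized by default

    def retrieve(query: str, k: int = 2):
        """Return the top-k (source_id, passage) pairs for a query."""
        sims = (doc_matrix @ vec.transform([query]).T).toarray().ravel()
        top = sims.argsort()[::-1][:k]
        return [(ids[i], docs[ids[i]]) for i in top if sims[i] > 0]

    # The retrieved passages and their ids would then be handed to a
    # domain-tuned LLM with an instruction to answer only from them.
    for source, passage in retrieve("What happened at 40 mg dosing?"):
        print(f"[{source}] {passage}")

Because every answer is grounded in retrieved passages with ids, an expert can check exactly which piece of data produced it.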
05:35 Okay, we love an optimistic note, and we're gonna come back to that. Lama, I want to bring you in. As I mentioned in the introduction, you are the director of the Intelligent Systems Research Lab at Intel. Can you give us briefly an example of a project that the lab is currently working on?
05:50 Yeah, sure. I can mention a couple. So, actually, to pick up on the notion of domain expertise and how we can support people: a lot of the work that happens in my lab is focused on human-AI collaboration. What we're really trying to do is help support people in performing all sorts of tasks. Think, for example, of us as a manufacturing company. Imagine technicians who are working within a fab, trying to accomplish things, do maintenance support, things like that. Having access to all of the information and the documentation is great, but that's not really sufficient to solve problems, catch errors, things like that.

06:34 So, one of the things that we've been working on is: can we actually bring in an AI system that is aware of the environment that people are working within, can watch over what they're doing, and answer questions in the context of what it is that they're doing? Essentially, go from the notion of an LLM to a world model that can understand the physical context of people's work, and then answer questions within that, but also catch errors. If somebody did something that's unexpected, it can flag it and say, did you actually mean to screw this onto that, or things like that.

07:07 So, what's interesting about that is, you could do this in a traditional way and try to train things that are very specific to that task, but really, the interesting question is the same one that we've had with LLMs: how do you generalize these things so you don't have to do it over and over again every time a task changes? That's where we really leverage human-AI collaboration. As you're actually performing tasks, the AI system is learning from you visually what it is that you're doing, and asking if it doesn't understand, and vice versa. So, we really leverage that complementarity between the human and the AI.
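(A stripped-down illustration of the "catch errors" piece: check a stream of observed actions against an expected procedure and flag anything unexpected. The step names and procedure here are invented for the example, and the hard research problem, recognizing actions visually, is assumed away.)

    # Expected maintenance procedure, in order. Step names are hypothetical.
    PROCEDURE = ["power_off", "remove_panel", "swap_filter", "replace_panel", "power_on"]

    def check_actions(observed):
        """Flag out-of-order or unknown steps in an observed action stream."""
        expected = iter(PROCEDURE)
        pending = next(expected, None)
        for action in observed:
            if action == pending:
                pending = next(expected, None)
            elif action in PROCEDURE:
                yield f"Out of order: '{action}' before '{pending}'. Did you mean to?"
            else:
                # Unknown action: a collaborative system asks rather than just alerts.
                yield f"Unrecognized step '{action}'. Can you tell me what you're doing?"
        if pending is not None:
            yield f"Procedure incomplete: expected '{pending}' next."

    for msg in check_actions(["power_off", "swap_filter", "remove_panel", "wipe_sensor"]):
        print(msg)

The two-way questions ("did you mean to?", "what are you doing?") are the complementarity Lama describes: the system flags what looks wrong, and learns from the person when it doesn't understand.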
07:45 Poppy, when we talk about responsible AI, one really important element is deciding what is and isn't an appropriate use of AI. Can you give us an example, within the cybersecurity space, of one area where you don't think it's appropriate to leverage AI?
08:01 There's a big question around data and privacy. When we set up our company as a cybersecurity business 10 years ago, I was thinking, how do we protect businesses from the adversaries on the outside? I never expected people within a business to willingly upload vast quantities of their core IP and coding data to third-party systems. That wasn't what I was expecting to happen, but lo and behold, here we are, and that is happening.

08:27 And I always think about the summer gone by, 18 months ago, with all the writers' strikes in Hollywood, where the actors were saying, it's not okay to train a system on my voice and my face and create generated content that looks like me when I'm not getting paid for it. And you think, you know what? I get that. I think that's probably fair enough. But yet enterprises are doing that all of the time. They are making their data available to these systems, whether it's on the web or just by uploading it, and these systems are learning off that data and then selling that intelligence back to them.

09:05 And I think the corporate world is gonna start thinking really long and hard about how it handles the privacy and security of the data that it has. Instead of passing that intelligence on to a third-party system that can then sell it back to us, how do we leverage it ourselves? So, instead of giving them all of our data, how do we pull AI into our business, where the data already is, and use AI as a tool to unlock the intelligence that we already have within our businesses, so that we benefit as an organization rather than a third party?

09:36 So, I think there's an interesting conversation to be had about how enterprises benefit from that intelligence while also maintaining the privacy of that data. And I do worry that privacy is being a little bit overlooked as people are thinking about onboarding AI.
09:50 That's really interesting, because one question I wanted to bring up was how we train human employees to work well in conjunction with AI, and also to recognize when maybe it's gone rogue. Joanna, maybe if we go to you: is there an example of a business that you've worked with that you think is doing particularly well in training its employees for this new era of work?
10:10 Yeah, actually, I think in the biotech space, first of all, there's a well-established protocol of using multi-modal data to try to understand and make decisions. In the scientific research process, at every stage you have to make the best decision available to you. So, by incorporating all this multi-modal data into large language models that are bio-domain-specific, you give scientists the opportunity to make decisions along the way that potentially save billions of dollars down the line, and to see failure in advance and anticipate it.

10:47 And I think what we are starting to see is the intelligence of human beings being able to rationalize and say, the system has thrown this issue up to me, I'm going to investigate it, but there are real human reasons why we're not gonna pursue that, whether it's safety in the clinic or something that that scientist may know that would inform that decision, something the system could not possibly know. So, over time, as scientists engage with those systems, train them more, and continue that feedback loop, we'll have a much stronger and more robust system for scientific discovery.
11:25 And, Lama, on that note, if we've got any HR or people officers in the audience, what would be your message to them, given your learnings in the responsible AI space over the past however many years, about how they need to be talking to their people about this technology?
11:39 Yeah, so, I mean, I think it's absolutely important, and it kind of goes both ways, right? It's absolutely important for people not to over-rely on these systems and to understand that they will make errors. But to be able to make sense of that, it's extremely important for these AI systems to be more explainable, right? To bring more context into why they made the decisions that they've made. Because this way, you can combine what I know to be true, as a human, with what the AI is specifically indexing on when it's trying to come up with that decision.

12:19 And it's something, for example, that I've seen a lot. Some of the work that I do is in helping people in manufacturing settings to diagnose problems. Imagine you're trying to fix yield issues in manufacturing. You could just say, okay, well, here's an AI system trained on tons of data; it can figure out which parts are good or bad and try to kill them earlier, because that saves a lot of money and so on, which makes perfect sense to do. The problem is that, because of everything that we've heard earlier about data drift and everything like that, it is important to actually have the AI system tell you: these are the specific things that made me reach that decision; those are the parameters that made me conclude that these parts are problematic or have yield issues.

13:09 And what I've seen is that when we make these things visible to the people who are actually practicing this, and having to be accountable for the decision-making, it's much more likely that they will adopt these systems. Because adoption is typically where things tend to fall down.
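(A minimal sketch of that kind of per-decision explanation, assuming a simple linear classifier over made-up process parameters. For a part flagged as bad, it lists which parameters pushed the model toward that conclusion. Real yield models and attribution methods, such as SHAP, are far richer; this only shows the shape of the idea.)

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)

    # Hypothetical process parameters per part (already standardized).
    FEATURES = ["etch_time_dev", "chamber_temp_dev", "overlay_error", "film_thickness_dev"]
    X = rng.normal(size=(500, 4))
    # Synthetic ground truth: bad parts are driven mostly by overlay error.
    y = (1.5 * X[:, 2] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=500)) > 1.0

    model = LogisticRegression().fit(X, y)

    def explain(part: np.ndarray):
        """Per-feature contribution to this part's 'bad' score (coef * value)."""
        contrib = model.coef_[0] * part
        order = np.argsort(-np.abs(contrib))
        return [(FEATURES[i], round(float(contrib[i]), 2)) for i in order]

    part = np.array([0.2, -0.1, 2.4, 0.3])  # large overlay error
    print("P(bad) =", round(float(model.predict_proba([part])[0, 1]), 2))
    for name, c in explain(part):
        print(f"{name:>20}: {c:+.2f}")

Surfacing "overlay_error" as the dominant contributor is exactly the kind of visibility that lets an accountable engineer sanity-check the model, and notice when drift has made its reasoning stale.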
13:28 I'm going to open it up in a second to questions, but Joanna, I want to come to you before I do. You served as Minister for Internet Safety and Security in two Conservative governments, under David Cameron and Theresa May. There's going to be a general election this year, and the Labour Party has said that if they win, they would make AI firms share their test data. I wonder what you make of that. Is that a move in the right direction?
13:53 You said tax data?

13:55 Test data, their test data, AI...

13:57 Test data. Test data, yes.
13:58 Well, I think that with the AI Institute right now, the AI companies are voluntarily contributing their foundational models for testing. So I think that's already happening. I'm not sure of the nuance that they're going after there...

14:11 Of making it a statutory requirement?

14:13 Could be, could be, because, you know, the EU came out ahead with the AI Act, the US has the executive order, and they kind of come at this very differently: the US, of course, from a market perspective, but also existential risk, and the EU more from human rights and privacy rights. So it's really quite a good nexus right now, which is why governments haven't had to rush to fill the gap, because there's a voluntary set of companies complying with these, not regulations, but recommendations that have been made. So the companies are uploading their data, and they're being tested robustly. I think we're doing very well on the models, but I think we need to be a little bit stronger on the use cases.

15:03 What does that look like?

15:04 So my advice would be: the number one use case that concerns me most, and it's the most advanced AI system that we have in the world right now, is ad tech. I was at Facebook when we launched the brilliant system that captures your attention and keeps you engaged for days and days. And that is all wonderful and good, but when ad tech systems merge with surveillance or personally identifiable information, we are seeing things like disinformation campaigns that are able to manipulate public opinion. And that's important for enterprises as well, because we're talking about enterprises right now. If your brand is somehow part of a campaign that then educates the public in a negative way, that's a really big risk for the enterprise. So ad tech systems that target people and bring that sort of personal surveillance into the equation are, I think, one of the riskiest areas. And up until now, governments haven't done anything to regulate that area.
16:09 So maybe one thing I would jump in with, going back to testing and the point you were making: I think one thing we really need to be thinking about is the context of the AI supply chain, right? One of the reasons why it's extremely important to have that transparency in the supply chain is that you're essentially separating where these models are built from the people who are trying to build systems with them. Having much more detail about what was tested, what data was used, et cetera, enables the people who are trying to use these foundation models in AI systems targeting specific use cases to do much better. And I think that is really critical, especially in high-risk AI scenarios.
16:52 Are there any questions from the room for our speakers? Oh, wow, loads. Okay, yes, this gentleman here. Could you please tell us your name and the company you're from?
16:59 Hi, I'm Ajay from Salesforce, and I just wanted to touch upon an important topic. What would be your advice to CIOs or CISOs who are trying to avoid bad actors as they adopt these large language models at scale?
17:14 Poppy, let's go to you on this.
17:16 Yeah, I mean, one, it's happening. We're seeing this happening already, and there's such an asymmetry, as we all know. A business has to secure itself 100% of the time; an adversary only needs to get in 0.1% of the time, and the damage is done. So it's a really complex challenge for businesses.

17:35 In cybersecurity in particular, we're seeing AI move from something that's serving up information to a person, i.e., yoo-hoo, there are terrible things happening in your business, to actually taking action and doing something about it, because cybersecurity is so fast, and your response has to be so quick. You can't wait for a human to say, oh, you're right, AI system, I am going to do something about that, because it's already too late. So already in cybersecurity, we're seeing AI move from something that's partnering with the person to something that operates without the person. It still has to come back and say, this is the action that was taken, and this is why we took the steps that we did. But you can't rely on having the human in the loop.

18:23 So, in terms of advice for cybersecurity professionals, you do need to think about security in terms of layers. You need something that's going to keep out the bulk of known attacks. There's a big, long back catalog of all the bad things that have ever happened in the past; businesses write a list of those and keep them out of the business. That's a good thing. But with AI happening so quickly and being adopted so fast, the set of novel attacks that aren't on that list of known bad is expanding really quickly.

18:54 So you need something at that next level down that says, right, okay, we've sieved out the majority of known bad actors, but how do we then defend ourselves against the things we don't yet know about, these new forms of attack? And for that, you've got to see the problem through the other end of the telescope, which isn't, what does bad look like, what do attackers look like, but, what do we look like? How does our business operate? What is normal for us? If you've got a really good understanding of that, you can always spot the thing that isn't in keeping with it.
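(As a sketch of that layered idea: layer one rejects anything matching a list of known-bad indicators; layer two scores whatever survives against a baseline of "normal for us", like the pattern-of-life sketch earlier. The indicator values and scoring function here are invented placeholders.)

    from typing import Callable

    # Layer 1: known-bad indicators (hashes, domains, ...). Values are made up.
    KNOWN_BAD = {"evil.example.com", "9f86d081884c7d65"}

    def layered_verdict(indicator: str,
                        features: list,
                        anomaly_score: Callable[[list], float],
                        threshold: float = 4.0) -> str:
        """Two-layer decision: signature match first, anomaly score second."""
        if indicator in KNOWN_BAD:
            return "block: known bad"               # layer 1: the back catalog
        if anomaly_score(features) > threshold:
            return "alert: out of keeping for us"   # layer 2: novel-attack net
        return "allow"

    def toy_score(f):
        # Stand-in scorer: distance of the first feature from an assumed norm.
        return abs(f[0] - 5.0)

    print(layered_verdict("evil.example.com", [5.0], toy_score))
    print(layered_verdict("partner.example.org", [60.0], toy_score))
    print(layered_verdict("partner.example.org", [5.2], toy_score))

Layer one is cheap and catches the bulk; layer two is the telescope turned the other way, defined by what is normal for this business rather than by what attacks look like.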
19:24 You know, what I'm finding really encouraging is that, again, we have a concentration of power among a few companies that are quite powerful in this space, building the foundational models, and hopefully, over time, that will change with open source and various other things. But the people who are leading are also leaning in, in terms of being receptive to creating the guardrails and recognizing potential risks. I've been really encouraged. This is very different from social networks, where it was more about optimizing for revenue and profitability. In this case, there is kind of an altruistic vein through it, where the leaders of these companies are thinking about safety first. They understand the power of these models, they understand their potential to be manipulated and for people to be harmed, and they're taking progressive action upfront, which I think is a change. And I think that's very encouraging.
20:20 Okay, I love ending on an optimistic note, so we are gonna leave it there. Poppy, Lama, and Joanna, thank you so much for joining us. I really appreciate it.

20:28 Thank you.

20:29 Thank you.