- 17 hours ago
Category
🗞
NewsTranscript
00:00So, eight years at Yahoo, four years at Meta Platforms, more than a decade at Microsoft
00:06though kind of like on different ends.
00:09Six months ago, you walked in at Adobe.
00:12Good, you got it all right.
00:14What's the challenge that you're presented with when you walk through those doors on
00:17the first day?
00:18First of all, I'm greeted by a very passionate team who really cares about security.
00:25And then the second thing as I dug deeper, even through my interview process, was Adobe
00:31is very much an AI first company.
00:34Adobe has its own models, Firefly models.
00:36We have the entire AI stack and we have to project it all.
00:43It seems like even other companies that you worked at in previous lives, Microsoft, Meta
00:49Platforms, these are now AI companies.
00:51But how is what you're doing at Adobe different than what you were doing at these other companies?
00:55Yeah.
00:56I feel like I'm pretty blessed with my career journey, so to speak.
01:00Because when I was at Yahoo, I was doing identity, identity for everyone.
01:06And at that time, Yahoo was the company.
01:09Then when I was at Skype, it was initially the consumer offering that I was supposed to
01:15be protecting.
01:16Later on, we added Enterprise, which we know as Teams today.
01:21Moving to Meta, very much like their social offerings.
01:26And then at Microsoft, protecting two major clouds, Azure and M365.
01:32And now here we have three clouds, our document cloud, creative cloud, and marketing clouds,
01:39both for our consumers as well as enterprises.
01:42So I feel like it's coming full circle where I have gotten all these different diverse experiences,
01:46and now I get to leverage them all collectively.
01:49So here you are at Adobe, and it's your responsibility not just to keep the company safe, but to keep
01:55Adobe's customers safe.
01:56If you put yourself in the position of an attacker, what is it that you want from Adobe?
02:03Well, attackers are always looking for the best ROI they can get.
02:10Unfortunately, we are living in a world where it's a business for them.
02:15And when they look at not just Adobe, any industry, I would say, they are looking for exactly the same thing.
02:20Like, what can I get out from this company that I can sell and make money?
02:25So what are our crown jewels?
02:27That's where we start.
02:28Like, what are the critical assets that we are there to protect our customers?
02:32Our customers are storing their brand assets, their creative assets.
02:37We are supposed to be protecting those.
02:39A lot of the marketing material.
02:41And how can we protect those assets?
02:44And also when customers are leveraging our AI stack, as I mentioned, we have to protect the entire AI stack.
02:52And with AI, there are lots of emerging threats.
02:55We have heard from a lot of panelists before, and Mandeep did an amazing job of covering some of the AI
03:02AI-related attacks.
03:03But, like, cross-prompt injection attack, XPIA as we call it, it's net new.
03:11It's like internet all over again.
03:13I don't know how many of you folks, I'm dating myself here probably, when we started with internet, it was like all XSS cross-site scripting.
03:21And we would wake up every morning, like, oh, net new XSS, let's mow the lawn.
03:27Then the next day, more XSS, let's mow the lawn.
03:30And now it's like with XPIA, the same thing.
03:33So what attackers are able to do is they can put certain prompts, either through the users or directly, and they can exfiltrate data.
03:42So that's net new attack because our LLMs do not distinguish between trusted prompts and untrusted prompts.
03:51So it's a fundamental issue across the board.
03:54So what could an attacker do with an untrusted prompt?
03:58So there was this one particular attack that a researcher showed a lot of attention on, where they were able to inject a prompt and exfiltrate a lot of sensitive data through that prompt from the back-end systems.
04:13Because the LLM did not distinguish between whether they should execute that prompt or not.
04:19It just went ahead, it took it as a command, and it executed.
04:24And it exfiltrated a whole bunch of data.
04:27So we have to come up with net new mitigations.
04:30Would you say that, you know, you've worked in a lot of different contexts in technology.
04:36Would you say that where we are right now?
04:38I think a lot of people would agree it's the most exciting place that we've been in a generation with AI, but would you also say that it's the most vulnerable place that we've been?
04:49Yes and no.
04:50And the reason I'm saying yes and no is if you decide to sit on a sideline and don't do anything about it, then probably you are in a more vulnerable situation.
05:00But if you are not sitting on sidelines, you are participating and you are very actively learning and adding mitigations and defenses throughout your tech stack, then probably not.
05:13And the playing field has always been this, I would say, unleveled for defenders.
05:23But now, if you are not going to use AI to your favor, it's going to get even worse.
05:30So I encourage people to participate.
05:32Well, you mentioned this idea of Adobe protecting the crown jewels of your customers.
05:36And just to remind everybody, 99% of Fortune 100 companies are using AI in an Adobe app right now.
05:42So whether or not you know you're seeing stuff that is created on Adobe's platform, Coca-Cola, Estee Lauder, IBM, Qualcomm, the list just goes on.
05:53YouTube creators are using it.
05:55When you talk about the risk of these crown jewels actually being accessed and shared at a moment where these clients don't want them shared, the risk, I think, to me, sounds like a lot of it lies with the customer and the end user and making sure that they are following safe and secure protocols.
06:13That's out of your control.
06:15Well, the basic hygiene for security, nobody can contest that.
06:20Everybody has to follow that, whether it's customers, whether you are a supplier.
06:25And the whole supply chain issue that we have been talking about, that all comes into play.
06:30Because at the end of the day, we as Adobe, we also have suppliers.
06:35And when there are any breaches, we depend on them to make sure that they notify us.
06:40So we can go and immediately patch our systems.
06:43You heard from the first panel that at the end of the day, the pace at which you can react and respond, it really matters.
06:52So that basic hygiene, even for our customers, is really important.
06:57Because if they are not going to patch their own systems.
07:00And one of the examples I'll give you, this is for the Gen AI agents.
07:05Sometimes customers are a little worried that, and this is from my time at Microsoft.
07:12Customers were coming to us, and they were asking questions like, hey, I feel like a lot of my data is being now accessed by these agents.
07:22And like, well, you need to have the right access controls.
07:25Your obscurity cannot be the security.
07:28Just because you did not have the right access controls, and now these agents are finding and replying with answers, that doesn't mean that data was protected ever.
07:39So what would you say is the weak link in the scenario that you described, or the weak link when it comes to your world?
07:46It is still the basic hygiene.
07:48So more so than the supply chain?
07:50You're talking about, you say basic hygiene, you mean our own habits as human beings?
07:54Our own habits.
07:55How many people in this room, with a show of hands, have migrated to fish-resistant MFA?
08:03Wait, let's just define fish-resistant MFA, multi-factor authentication.
08:15What's fish-resistant versus non-fish-resistant?
08:17So if you are just doing, say, SMS-based MFA, that is very fishable.
08:23I can ask you, hey, can you give me your OTP that you have fished you?
08:27Your one-time password.
08:28Yeah, one-time password.
08:29I can ask you for it, and I can punch it on your behalf and do account takeover.
08:34When you say it is fish-resistant, it has to be in a way that it is tied to your device, tied to your biometrics.
08:41So nobody should be able to fish it, and that's what companies should be moving to.
08:47We have done that at Bloomberg.
08:48I'm just going to say it.
08:49I'm raising my hand.
08:50Okay.
08:51But there are a lot of companies that I use in my own world where it's still text messages over and over again.
08:59Yep.
09:00That's out of our control.
09:02As customers, it's out of our control.
09:04As a large company.
09:05As a solution provider.
09:07Yeah.
09:08Can you go to a customer of yours and say you have to take this and make this fish resistant?
09:14Absolutely.
09:15For us to do business with you?
09:16Is that part of building a resilient culture?
09:18That, and what we can also do is we can provide them the monitoring and alerting capabilities where we can show it to them.
09:27That, look, these many of your customers or your employees got fished.
09:33And please, if you do not prioritize this, this is the risk you are carrying.
09:37So.
09:38So if we think about it then, if that's from the perspective of your customers and your clients, there's also the perspective of internally what you're doing at Adobe to build this culture.
09:50And when you and I spoke just last week to prepare for this, you talked a little bit about what the team does to try to essentially fool Adobe employees in order to get them to hand over their credentials as a learning opportunity.
10:05So talk a little bit about that because that has to do with building a culture of resiliency and a secure framework.
10:10Yeah, a few notable things I want to share about this because when I joined Adobe, coming from some of the other companies that have seen different programs run for employee training, I found Adobe's program to be very unique because we do not do a phishing or anti-phishing campaign once a year that like many companies do.
10:32What we do is we do it throughout the year and we take the flavor of the month or the week.
10:38Like if November was our benefits enrollment program, our phishing campaign was totally around that.
10:44And people were like, they could, they had to really squint to see when it was us phishing or it was the real benefits email that they had to sign up for.
10:53So in December, we are doing something about, hey, here's a few iTunes card if you do this.
10:58And so we are keeping.
10:59Just a heads up Adobe employees, it's not a real iTunes card.
11:02Yeah, don't fall for it.
11:05So what happens if they do fall for it?
11:07What's the, what's the training process then?
11:09Then we definitely reach out to them.
11:11We say, hey, you should not have clicked on this.
11:15And here is a little bit of a training program.
11:17Please go click on it, take this training for 15 minutes.
11:20And then next week they will again go through some of these phishing emails.
11:25So we are making sure that we are using AI to amp up our program too.
11:31Then second thing that we did as we were starting to learn about AI is the program we called AI Guild.
11:37So as part of that AI Guild program, we picked certain, a small group of employees and we said, go free form, learn about AI.
11:47What can we do with it?
11:48How can we protect ourselves?
11:50And as a result, they came back with solutions like one of the solutions that they built.
11:56There they feed in the threat intelligence from our partners, the one that we ourselves gather.
12:02And they feed all of that threat intelligence.
12:05And then this tool reads all the indicators of compromise, like 100 plus, and they scan our entire environment.
12:14And within five to 15 minutes, it will tell us where the weak spots are, where we need to add more detections and patch immediately.
12:22So this whole thing used to take us days and weeks, and now it is in less than 15 minutes.
12:28So that's how we are using AI to go so much more faster.
12:32In your world and in the outages that you've experienced, why have they happened in general?
12:39I mean, each one is different, but what is typically the cause of an outage?
12:43We can include Adobe, we can include the previous places you've worked, but the caveats, you've only been at Adobe for six months.
12:50Most of the time it is either a misconfiguration or a code change that wasn't tested for a certain edge case.
13:00And as a result, that edge case really happened in production and then it had a domino effect and it took down the entire system.
13:08So I think this is not the last time.
13:12This is not the first time it will continue to happen.
13:15And we have to protect ourselves.
13:18How often is it an external actor, somebody or an entity trying to get something?
13:26I haven't lately seen too much of this caused by a nation state sponsored or something.
13:31It is mostly self-inflicted, I would say.
13:34And that's where you argue that AI has a real opportunity to improve.
13:38Correct.
13:39Also, we heard a little bit about resiliency today and we need to really focus on that.
13:45Like when we are providing services, we are exploring how can we not be just on one particular cloud?
13:52How can we go have our services be on multiple clouds?
13:56So that even if one cloud goes down, we have like two to three other clouds where we can fail over right away and our customers do not see an impact.
14:06And you mean multiple cloud providers, so different companies like?
14:10Correct.
14:11Microsoft, Google, Amazon.
14:13And also those companies having locations in different areas, so they are resilient as well.
14:18Correct.
14:19Okay, so if you think about the culture of resiliency and security and safety that you are trying to build at Adobe, what is left for you to do?
14:28Like where are the big areas that you want to focus on?
14:31You are six months into this, what do you need to do?
14:34My key priority is AI and it has multiple elements to it.
14:41One is how do we continue to secure the AI features we are putting in our products and services.
14:47We want to make sure those are secure by default, secure by design.
14:51So our customers do not have to think twice.
14:54They can just safely and securely use those features.
14:57The second thing is how are we leveraging AI to defend ourselves?
15:01Because our threat actors are using AI left, right, front, center.
15:06And us not using it is not a good thing.
15:09So that's the second piece.
15:10And then the third thing is we want our employees to be leveraging AI, whether it is vibe coding, whether it is building tools and services or agents, all of that.
15:21And it's my team's responsibility to provide the security guardrails as they are doing all of that.
15:27So how long does it take you if your team or if a team comes to you and says we want to use this new tool, we need you to create the guardrails for it?
15:35What's the process of doing that?
15:36We call it paved roads.
15:38It's a unique term we have come up with because what we are saying is, oh, really good that you want to build something new and you have come to us.
15:47Now, go build whatever you are building and we'll provide you guidance.
15:51But in the meantime, in parallel, we will build this paved road that you can speed on because when you put brakes in the car, your car, you can use that car at a much higher speed.
16:06That's the same thing that we do.
16:09We provide those paved roads. So using those configurations, they can build features faster.
16:17But if it's not fast enough, they're going to use some people are going to use those tools outside of the preferred way that you want them to be used.
16:25I've seen this happen at different jobs.
16:27Yeah, which is which is totally OK. And actually, we encourage it because what it does is it builds that culture of security.
16:35Then they are not afraid to come to us, even if they what they want to do is totally not what we are providing off the shelf.
16:43We are like we encourage you to do it. Let's still have a conversation because we can secure whatever you are using.
16:50So we only have about a minute left, but I just want to end on you providing some ideas, some actions for the rest of the people here.
16:59Everybody watching right now, given the experience that you've had and what you're doing at Adobe now to build a culture of diligent security.
17:08What is the right way to do it? How do you build that framework?
17:11I would say if you are not using AI right now, please, please go explore it, use it more, whether it is for security,
17:21whether it is for speeding up whatever features you have, just pick one pilot project and invest a little bit more.
17:28The other challenge you might run into, which sometimes I also see with many other organizations, too,
17:34is there is no single place where you can go and say, I will send my employees here, get them trained on AI and security.
17:42You have to spend time, you have to explore, get your hands dirty. Do not shy away from doing that.
17:49And you will learn a lot, you will enjoy, and I promise you will be the next person writing those blogs and writing those white papers.
17:59So please do that. And we need to collectively make all of this AI space more secure because it's not going away.
18:08It's on to us to make it more secure for everyone else.
18:12And Chil Gupta, Chief Security Officer of Adobe. Thank you so much.
Be the first to comment