Skip to playerSkip to main content
  • 3 hours ago

Category

🗞
News
Transcript
00:00All right. Great to see everyone. So what I thought I'll do in the next 15 minutes is really give everyone kind of a state of the market in terms of where we are with cybersecurity and what is changing when it comes to, you know, the LLMs and the Gen AI world, how it's influencing cybersecurity.
00:25And then really end with some M&A data that we have aggregated when it comes to cybersecurity and also some regulations and cybersecurity insurance.
00:36So, I mean, we talked so far, we have heard, you know, about the complexity of attacks and how different types of attacks, whether it's social engineering or identity or, you know,
00:52some of the traditional forms of firewall attacks have resulted in sort of a fragmented infrastructure.
01:01But what's changed this year is really the entropic attack, which was alluded to in the last panel, wherein when you look at the level of sophistication,
01:10which has been increasing over the years, really grew many fold in terms of what these LLMs have inside them.
01:19I mean, think about, you know, they're trained on, you know, 15 trillion tokens, how much information they have in terms of even the default passwords and the configurations of the systems.
01:31So it's all about putting guardrails and, you know, systems in place.
01:36And that's where it gets very tricky, you know, in terms of what kind of kind of protection that you have at the chatbot level and what the LLMs come out of the box.
01:48So in terms of breaches and incidents, you can see, you know, we've been in an environment where the sophistication of breaches continues to increase.
02:00And even though the number of incidents have gone down, but you can see the success of breaches to incidents continues to go up.
02:11And that has to do with, you know, the kind of tools that hackers are using and whether it's state-sponsored attacks or some other types of attacks.
02:20It's just they are focused a lot more on, you know, lateral movement and doing it in a way where it has not been seen before that it can be detected easily.
02:30And so what has happened to this $200 billion industry over the years is you have all these different sort of fragmented products which really are aimed at protecting the latest vector.
02:46So right now, cloud workload protection is one of the highest growth segments.
02:50The reason that is the case is because, I mean, a lot of these AI products are deployed on the cloud and workload level protection is kind of table stakes now.
03:02Whereas the network and endpoint markets that have been, you know, existing for the last 10 years have seen slowing growth.
03:10But clearly, companies can't get rid of them because we are in a hybrid environment and they want to protect everything.
03:17So anytime a new vector emerges, there is new security category that gets created, which is why, you know, this market is so fragmented.
03:26And, you know, based on our work, the two fastest growing areas would be cloud workload protection and observability.
03:33And I'll get into why observability, but suffice to say that data security, tracking your data on a 360 basis has become all the more important now with LLMs and how companies are fine-tuning their models and things like that.
03:51So I do want to spend most of my time here because there is a profound shift that is taking place with Gen.AI.
04:00And really the leakage of information in the world of LLMs is something that it's very hard to track.
04:08And I almost feel like, you know, no matter how good you are, you just cannot be 100% sure in terms of what information is leveraged for, you know, tokenization,
04:22what information is leveraged for your own security, and the lines continue to blur.
04:27So, you know, the fact that the reasoning models use tools and tools of all kinds, whether it's browsers or writing code on the fly, I mean, it's so hard to track.
04:39And I'm sure we'll hear it in some of the, you know, upcoming panels around the challenges that the tool use poses when it comes to reasoning models.
04:48But clearly that is a big change.
04:50And finally, I mean, we are talking about security for AI systems, but there is that aspect of AI for security wherein, I mean, all of these security systems generates alerts.
05:05How do you leverage AI to sift through those alerts in real time and, you know, use that intelligence to prevent attacks in real time?
05:15That is the big promise that AI has, but clearly there is a lot of research being done on that front.
05:22So, in terms of gen AI, I mean, look, these are the most popular use cases, and the common thread you will find across all these use cases is tokenization.
05:36And tokenization is nothing but, you know, converting texts or images or videos into the smallest entities where, even though it's vectors,
05:47it's basically that data or that information that you are translating into a form that's driving your brute force pattern recognition or that, you know, transformer retention.
06:01And it's all data.
06:02At the end of the day, that's what's happening.
06:04So, clearly, there is a big range when it comes to the type of gen AI use case.
06:13So, if it's a basic Q&A use case, you can see, you know, it's up to 500 tokens.
06:18But if it's an agentic workflow, we're talking about even a million tokens where the LLM just works for hours and, you know, it generates a million-plus tokens.
06:31And that's what happened in the Anthropic attack.
06:33You know, it had smaller chunks of programs running for hours.
06:38It completed the tasks, and the LLM didn't realize the big picture, but essentially the agentic workflows can work for hours.
06:47And, you know, it can do type of tasks where it's so hard for you to track what is it part of.
06:53And I think that agentic aspect will magnify this problem when it comes to the deployment of LLMs.
06:59So, another sort of data point in terms of what's going on with reasoning, you can see the tokens process have just taken off across all the frontier LLMs, whether it's Google or OpenAI.
07:13The API tokens per minute, and this is something that OpenAI has shared recently, the 6 billion tokens per minute.
07:21These are the kind of the volume that we are talking about when it comes to API calls.
07:27And, again, you were protecting data before.
07:30Now you have to protect tokens.
07:32And what are for real use cases, what are for malicious use cases, it's not very easy to discern easily.
07:40So, clearly, the scale of the problem is growing many folds.
07:44And this kind of puts it in perspective in terms of what goes on in a reasoning model.
07:48I mean, look at the right-hand side, the tool use.
07:52We are talking about browsers, API calls, proprietary databases, evaluation functions, other LLM calls.
08:00So, if you are fine-tuning the model and, you know, deploying a reasoning model for some sort of a use case, this is the kind of tool use that we are talking about.
08:10And think of how many tokens it generates.
08:12So, clearly, securing an AI agent is not the easiest task.
08:16On top of that, the machine identities, I mean, that's the one thing, you know, previously companies had to care about human identities.
08:26Now, every human user has got at least, you know, the hope is they will have 30 to 40 agents running, doing some sort of task.
08:36So, that human identity gets magnified into agent identity, and that's where it gets even harder when it comes to the scale of the problem.
08:46So, the one other data point is coding agents, which is probably the most established use case at the enterprise level right now for LLMs.
08:56And think of what's happening, you know, with DevSecOps.
08:59I mean, all these agents, when they're doing reasoning tasks, are writing code on the fly.
09:06So, you just have to figure out, you know, what kind of code is being written for good intent versus bad intent.
09:13And that's where the use of coding agents clearly is a new type of use case where the DevOps part and the observability part becomes all the more important.
09:25So, yeah, this is another data point.
09:30This is a survey that we conducted at the start of the year in terms of importance of cybersecurity.
09:35And clearly, every CIO that we surveyed felt like the cybersecurity has to increase because of the increased risk that is posed by the LLMs.
09:49So, a quick couple of minutes on the M&A landscape.
09:56I mean, the Google Viz acquisition hasn't closed.
09:59Palo Alto recently announced the acquisition of CyberArk and Chronosphere.
10:04And the trend is very clear.
10:07I mean, the hyperscalers want to increase the security that is deployed on their cloud because clearly a lot of the frontier LLMs are consumed through the hyperscalers.
10:20The pure play companies like Palo Alto feel they need, you know, browser security because that is a very important vector to protect against.
10:31And then the chronosphere acquisition is really observability, which is I want to mine the log in every possible way, whether it's an API call or anything else, and make sure I'm on top of that.
10:47So, you can see the trend from, you know, the recent acquisitions that have taken place in terms of what these companies are prioritizing.
10:57Just a quick slide on the hyperscale versus pure play.
11:01You can see, you know, the hyperscalers.
11:04I mean, Microsoft clearly has a very sizable security business.
11:09But even other ones, Amazon and Google, have been adding native cybersecurity.
11:15And for a good reason.
11:18Again, we talked about how cloud workload security is one of the fastest growing segments.
11:23Well, they are accountable for any cyber attack that happens on their cloud.
11:30So, clearly, I think they will continue to beef up their own security, whether it's through acquisitions or some of the other organic additions that they have done.
11:42And then, really quick on the insurance, like, the price of insurance has gone up.
11:51It clearly went up a lot higher during the, you know, the COVID phase.
11:58But since then, it's been steadily going up since 2023.
12:02And I feel the recent sort of LLMs kind of increase in vulnerabilities created by tokenization will drive further increase in that cyber insurance because the risk is higher.
12:19And that's a trend that we are likely to see more of.
12:22In terms of regulation, I mean, clearly, there are, you know, a couple of cybersecurity acts that are currently in the works in the EU.
12:36But there hasn't been anything major in the U.S.
12:39And that's where, whether it's president executive order, which did come during the Biden era,
12:45but we haven't seen anything big in terms of, you know, making sure that there are proper controls
12:53and companies have to implement certain kind of guardrails when it comes to deploying these systems.
13:01So, to conclude, I mean, look, clearly, the LLMs have a much higher level of risk when it comes to, you know,
13:14the Gen.AI features that are getting deployed.
13:16And a lot of the companies have been piloting, you know, what sort of features they want to deploy,
13:23whether it's chatbot or something to do with image generation and stuff like that.
13:29But there is no doubt that, you know, when it comes to injecting a bad prompt, the risk is very high,
13:39and it's very hard to, you know, put guardrails to prevent these systems from malfunctioning,
13:46because we have seen even a frontier LLM company like Anthropic that's focused on, you know, guardrails from the very beginning.
13:55Even that got hacked.
13:57So, clearly, the risks are high.
14:01As I mentioned, you know, the hyperscalers adding their own security will be a trend that we'll see more of
14:10and we'll hear from some of the companies in the panel.
14:14And then when it comes to LLMs really driving security automation, which is expected to be a big trend,
14:25and we've heard about that promise, you know, for a while that security can be automated, we have a talent gap.
14:33I feel LLMs has a real promise.
14:37Whether we'll see, you know, a standalone LLM that's focused on cybersecurity, I don't think so.
14:44That's going to happen anytime soon.
14:46But these companies have a lot of data, and they talk about, you know,
14:51trillions of data points when it comes to cyber attacks and stuff like that.
14:55So, clearly, there is promise to develop, you know, an LLM or something along those lines
15:02when it comes to a foundational model type of approach with cybersecurity.
15:08And it will be interesting to see if we see something along those lines.
15:12So, that's what I wanted to share, and we'll be discussing other aspects of this
15:17during the panel that I'll be hosting later on.
Be the first to comment
Add your comment

Recommended