Transcript
00:00 Vlad, I have to say, you're the perfect guest to have on today after reading this fascinating but also slightly
00:06 terrifying story about an LLM that was in a testing laboratory but somehow escaped its confines, got online, bragged about
00:16 it to some human engineers and then tried to cover its tracks. What's the story with Mythos?
00:22 Yeah, look, first of all, the models that are coming out of Anthropic have been relatively powerful. The Opus 4.6
00:29 model that came out back in December basically changed software engineering moving forward.
00:34 I think the last time we were on your show, we mentioned that even we as a company were
00:38 at 60%, 70% of code written by AI, in this case Claude. We're now at 100%, and we're seeing
00:44 that with a tremendous number of companies in the smaller to mid-sized category.
00:49 Think of software companies that are sub-500, 600 people. Now, we haven't had our hands on
00:55 Mythos today, but if Mythos is as powerful as Anthropic is leading us to believe, I think there are going to be
01:01 tremendous breakthroughs on anything technology related, but also some holes that we're going to have
01:08 to go plug, right?
01:10 Because right now they're stating that it's able to find zero-day vulnerabilities inside of browsers and inside of operating systems.
01:19 And I think, one, it's great that we're able to find them, but then, two, we've got to get
01:23 a handle on those things and fix them.
01:24 Yeah, because I was looking at some of the reporting. It apparently found a 27-year-old vulnerability in an
01:31 operating system called OpenBSD. You probably know it. I don't know it because I'm a philistine on this stuff.
01:37 It's known for being one of the most security-hardened in the world, used to run firewalls and critical infrastructure.
01:43 And the bug allowed anyone to remotely crash a machine just by connecting to it, something that 27 years of
01:51 human review missed.
01:53 Vlad, I mean, like, I hate to kind of think about the doomsday scenario, but if a model like this
01:58 just got out, what would be the consequences?
02:04 Look, I think with all technology, it's always a game of leapfrog, right? Somebody comes out, you know, and uses
02:10 it for good. Someone else comes out and tries to use it for bad.
02:12 The flip side of no one seeing it for 27 years is that no
02:17 one has taken advantage of that vulnerability for 27 years, either.
02:20 And most of these zero-day vulnerabilities are typically taken advantage of by nation-states, right? So some of our
02:26 adversaries were not able to find it over that time period.
02:29 I would tell you that the other part of it, too, is that if you use it, you can only
02:32 use it once. Once you've exposed it, right, and you may be able to attack 100 or 1,000 or
02:37 10,000 computers at once, maybe, you know, 10 million,
02:40 at that point, the company behind it or the people behind the software will turn around and plug the hole.
02:45 And so I think what Anthropic is currently doing with the Glasswing project, bringing multiple companies
02:51 in together,
02:52 at least initially to give them a sneak peek and try to harden some of this, is the right decision.
02:56 But moving forward, all of us are going to have to move a lot faster.
03:00 And so I think there are some scary elements to it, but at the same time, the systems will get
03:05 hardened,
03:05 and we'll put ourselves in a better position moving forward.
03:07 I'm just learning about Q-Day, which I guess is something to do with quantum computing that crypto and AI
03:16 and sci-fi people have talked about,
03:19 at a point when AI can start to, what, crack like Bitcoin, right? Break the Internet.
03:26 What do you think about that reality?
03:29 I think we've got a longer way to go on that. I mean, the quantum
03:34 computing companies are out there.
03:35 You know, some of their leaders have stated themselves that they're 10 or 15 years out from being able
03:41 to have sort of commercially viable,
03:43 regularly useful products. Much of quantum computing is dedicated to one very specific task.
03:49 So it's kind of like a very strong laser. If you focus it in one area, it can do some damage.
03:53 But we haven't seen anything commercial on that front.
03:54 I think in general, on the AI side, there are going to be a lot more kinds of capabilities.
03:59 I doubt we'll see anything that's going to break the sort of encryption behind crypto and some of these other
04:04 parts anytime soon.
04:05 You know, it's too hard to predict; call it five to 10 years down the line.
04:08 But this was part of the reason that we started Fencer ourselves.
04:11 Our view a couple of years ago when we got going was that the amount of code written,
04:15 the amount of work that's going to get done by AI, is going to be exponential.
04:18 And therefore, you're going to have to turn around and protect your systems that way, too.
04:21 If you don't protect your systems, you get yourself in a lot of trouble.
04:24 There's a well-known statistic out there that, you know, pre-pandemic, it took almost two months for a hacker
04:30 to exploit vulnerabilities.
04:32 Today, that's down to a couple of minutes.
04:34 What can you do, Vlad?
04:35 I mean, let's say I ran some kind of big financial firm with really sensitive data,
04:43 and I brought you in to protect me from this kind of threat.
04:47 How would you go about it?
04:49 Yeah, so you've got to harden all systems, right?
04:51 This is your code security, your infrastructure security, your network security,
04:55 and education for employees on the phishing and malware that happens within the company.
05:00 You've got to be able to find some way to protect on all fronts, because they're not looking
05:03 at one place to go attack.
05:05 They'll use whatever they can to find a way into the system, including some social engineering methods.
05:10 And our belief, the methodology that we moved forward with, was that
05:15 we wanted to put as much under one roof as we could,
05:18 so that you could see from one point of view, one vantage, whether or not something is being attacked within
05:23 your systems.
05:23 And I think that's going to be the way moving forward: there are going to be rapid attacks.
05:28 By the time I'm finished with this segment, somebody could have released code
05:31 and found themselves in a position where something is open for somebody else to take advantage of.
05:36 And so you've got to move quick.
05:37 It's machine-speed-level response.
05:40 Vlad, one of the stories that I was most obsessed with so far this year was in February.
05:45 There was a story of someone who had an AI robot vacuum,
05:49 and they simply wanted to connect it to their PlayStation remote controller in order to be able to control
05:55 it that way.
05:56 And they used an AI coding agent to do it, and unbeknownst to them, they connected to the entire fleet
06:02 of those robot vacuums, 7,000 of them in total.
06:05 And it also allowed them to view live camera feeds, listen through microphones, and control the vacuums.
06:11 I feel like you're nodding.
06:12 You probably heard the story. It happened completely by mistake.
06:14 I just think, like, when you have these big corporations that are arming their employees with this type of technology,
06:22 and by accident you could hack into an entire fleet of, I mean, in this case, robot vacuums,
06:27 do you think this at some level slows down corporate adoption, just because of the mistakes and the power and
06:34 the consequences of mistakes?
06:35 Dude, especially if the robot vacuums are made in China, as they often are,
06:40 might Chinese companies accidentally be able to spy on all of us in our homes?
06:45 Yeah, look, I mean, if he had wanted to have my house cleaned remotely, it wouldn't have been a
06:49 bad idea.
06:49 But on a more serious note, what that actually exposes is that there was something wrong with the code
06:57 written by the company that allowed this to happen, right?
07:00 And ideally, in those situations, you want to be able to run multiple
07:06 security solutions within your environment.
07:10 Most of these companies do.
07:12 Some of them will now be running more of them.
07:14 And make sure that you can find these issues before they hit the public, because that should have been dealt
07:19 with within the company itself.
07:21 Somebody just left an open port somewhere.
07:23 Somebody left a way for the tooling to get out into the wild and get taken advantage of.
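[Editor's note: the "left an open port somewhere" failure mode described above is something you can check for on infrastructure you own. A minimal sketch using Python's standard `socket` module; this is illustrative only, not Fencer's product, and the host and port values are placeholders.]

```python
import socket


def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising an exception
        return s.connect_ex((host, port)) == 0


def open_ports(host: str, ports) -> list:
    """Return the subset of the given TCP ports that accept connections."""
    return [p for p in ports if is_port_open(host, p)]
```

A real deployment would use a purpose-built scanner and run continuously, but the principle is the same: enumerate what is reachable before someone else does, and only against hosts you are authorized to test.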