00:00The policy shift from Anthropic, we're going to talk with Shireen Khafari in just a few minutes.
00:04Is it tied to that meeting yesterday that you reported on between Dario Amodei,
00:09Anthropic CEO, and Defense Secretary Pete Hegseth?
00:13Well, I think there's a broad policy shift happening at the moment, but the specific
00:17safety rule change they announced yesterday was not tied to it officially. But what we're seeing
00:23across the weeks and months, really, is Anthropic having to grapple with working on classified
00:30cloud, the first of all the AI companies to be able to do that, and what that really means.
00:36We've seen someone resign recently from Anthropic who worked in safety research saying,
00:42it's really hard to put values into actions at the company. So there are a range of things
00:49happening right now where Anthropic is trying to compete, is the favored company at the Pentagon,
00:55but is also having to loosen its own high-minded ideals when it comes to being slow about releasing
01:02new tech because there's such a competition afoot. Katrina, what does this really mean? I mean,
01:08a lot of companies do business with the government, and I'm sure there are things agreed to and signed. Is
01:14this just an AI version of that and kind of the norm, or does it say that the government is
01:22stepping more broadly into information, or access to that information?
01:28Well, if you mean, what does this threat mean? We really are on a ticking clock right now. So
01:34Secretary Hegseth has said they have until 5:01 p.m. on Friday to decide whether they are prepared to
01:42be cut out of the U.S. government system and be declared a supply chain risk, or he will force them
01:50to provide their tech to the government under the Defense Production Act. Both are real serious
01:57escalations. It's not clear to what extent this is performative. Anthropic says good faith conversations
02:04are continuing. And so the irony of all this is that both Anthropic wants to work for the Pentagon
02:10and the Pentagon wants Anthropic to work for the Pentagon. So in that way, there isn't that much
02:14light between them. But it does really come down to usage and how they're able to present themselves
02:19as safety conscious and also make sure that AI isn't used in two key areas. One, autonomous lethal
02:26weapons, and the other, mass surveillance of the American population, which of course the Pentagon
02:31says it doesn't do and it sticks to the law anyway. So is there a way for these two parties,
02:36in your view, to actually move forward together without the United States invoking this Cold War
02:42era Defense Production Act, which, you know, we talked a lot about during the pandemic when you're
02:47talking about, you know, a company making masks or ventilators, but not necessarily about software.
02:53Yes. There is no war on right now, and the Pentagon's already using Anthropic's tools. It seems
02:59very unnecessary that they would have to invoke that to compel Anthropic to comply. But there are a few
03:04options. So Anthropic could loosen its language. They could engineer some sort of climb down where
03:11everyone is able to say, oh, we didn't really mean it that way. What we meant is that way, and we
03:14agreed all along. Or we could see someone like Grok take over. The Pentagon agreed a deal, we've been
03:22able to report, to put Grok, xAI's chatbot, onto classified cloud. That hasn't happened yet. And it's
03:29really not clear what level tech they have compared to Claude, which is Anthropic's very well
03:35appreciated chatbot that the Pentagon is using widely. But it's possible, of course, that anyone
03:40can be replaced. And it's meant to be a competitive market. And so far, it does look as if the Pentagon
03:45is a little too reliant on one company. And now we're seeing this weight get thrown
03:51around, of course, about very significant military uses of AI.
03:55Well, that's what I want to ask you, because it's about the company's insistence on guardrails
04:00for use of its Claude AI tool that the military sees as unnecessary. And it was what, a few weeks
04:06ago, the Pentagon published a new strategy on AI that called for making the military an AI first
04:11force by increasing experimentation with frontier models and reducing bureaucratic barriers to use.
04:19Guardrails meaning what? So if there are no guardrails, like, where can the Pentagon
04:26take Claude?
04:30So what Anthropic is saying is they don't want it used in these two cases, mass surveillance and
04:35lethal autonomous decisions that would happen without a human involved. What that AI strategy
04:40is doing is trying to say we want AI everywhere. And the Pentagon is experimenting with autonomous
04:46drone swarming. And so this is a key question now, and it either happens or it doesn't. You either
04:51have AI in an autonomous drone swarm experiment or you don't. And Anthropic is saying, we don't want ours
04:57to be there yet. We don't think the tech is safe. We don't think it can be made safe. The Pentagon,
05:03of course, doesn't just use these things without thinking about them. Despite the kind of approach they might
05:09be giving in public, there are numerous policy documents explaining how they set about testing
05:15for AI and autonomy, evaluating whether it works, where the fail-safes are, the examples where you
05:20really shouldn't use it, under what conditions it will fail. And in that way, there isn't really that
05:25much difference between the Pentagon's and Anthropic's specific applied positions. One has to remember,
05:30of course, that this is a long-term roiling debate between Anthropic and the Trump administration.
05:35They disagree on many areas in public. Anthropic's CEO was a loud supporter of Kamala Harris during the
05:42presidential campaign. Now, there's nothing explicit saying that any of that is feeding into this
05:47current row with the Pentagon, but there isn't actually that much daylight between them. And it's
05:52being put to me that, of course, weapons manufacturers, when they do provide their platforms to the
05:57Pentagon, they always explain the use cases where it will work and where it should not be relied on.
06:03If Anthropic can somehow massage the case to say, here are the places where we should not rely on AI
06:09and the Pentagon could agree, that could be a room for compromise. But it's not clear to me that
06:14compromise is what everyone is seeking.
06:16That's what I was trying to get to. You know, with old defense contractors, what terms or parameters can
06:22they kind of put when they are providing weapons systems? And so it sounds like there were some
06:27parameters. And does that just then carry over to something like an Anthropic?
06:30Well, if we're going to keep using that sort of analogy of like old school contractors,
06:34each one sort of has its own specialty. And yes, there's overlap. Yes, there is competition when
06:40these bids go out. But once they get a contract, you're kind of locked in there. And I wonder if
06:44software is any different. You mentioned Grok from xAI, OpenAI. You know, there are LLMs,
06:52you know, from many different companies at this point. What Anthropic does with the U.S.
06:58government, can it be done as well by any of these other companies?
07:03I think at the moment, it's easy enough to say no, because Claude is on classified cloud in ways
07:10that the others are not there yet. OpenAI has been re-approached to see if they want to go onto
07:17classified cloud. xAI is striking this deal. But the one with the most kind of reps and sets
07:23on these systems, on one in particular known as the Maven Smart System, is Claude. That is not to say
07:29that the Maven Smart System isn't using other chatbots. But this is really a case where Claude is out in
07:36front, and Anthropic is not committing, in public, to what the Pentagon requires it to do. And this is
07:42clearly a fault line that they haven't worked out how to bridge, and they are going about it,
07:47both sides, I think, in a very clear public way.