00:00This is Anthropic having to lead the charge here in many ways.
00:03They're the only one who's got this sort of relationship with the Pentagon thus far.
00:07What do you make of Dario's pushback?
00:10Yeah, and I think we should take a step back and acknowledge why they're at the tip,
00:14no pun intended, the tip of the spear here with the Pentagon,
00:17which is that a year or so ago, you know, they seemed to go in the direction of focusing on enterprise.
00:23And actually more than a year ago, because their first contract with Palantir was in 2024.
00:28And so that's the market they're going for.
00:31And so they were part of, you know, $200 million contracts with Palantir and the Pentagon
00:39and part of that enterprise kind of business that they're focusing on.
00:44And so as part of that, that enters them into, you know, this arrangement that I think has then led
00:52to
00:52where we are now, which is this standoff in terms of how they balance
00:57their kind of ethical constitution with the need to be doing enterprise work that will bring in the revenue.
01:06There is a lot at stake because it's not only the $200 million contract of work
01:13that Anthropic has,
01:14but there's almost a threat coming from the Pentagon that if you don't abide by our rules,
01:21we're going to say that you're a supply chain issue and lots of other military-related companies
01:26are not going to be able to use your models in the future.
01:29How much of a problem would that be for Anthropic?
01:31How much are you surprised by the Pentagon's focus there?
01:35Well, I think there are definitely two sides of this story.
01:40So what I think makes this generative AI so different from bombs and bullets and nuclear weapons
01:48is that a nuclear weapon in a missile silo only has one purpose, and it was built by defense contractors.
01:54The cutting-edge AI is coming out of the civilian world, and it's a classic dual-use technology problem,
02:01which is it's starting in a civilian space and now it's getting appropriated and used by the Pentagon.
02:07And so it really, it's a civilian technology that now has this critical national security value.
02:15And so that's where this tension is.
02:17Both sides are correct, but the Pentagon does have a lot of leverage because it's the federal government.
02:25It's the federal government, and we've been hearing from the federal government.
02:29I just want to hear a little bit more from Under-Secretary Michael.
02:32That's what he told Bloomberg earlier today.
02:34Just take a listen.
02:35We've been negotiating in good faith on the Department of War side for about three months,
02:41and we're working pretty diligently.
02:44And we sent over a proposal that we thought made a lot of concessions to the language that Anthropic wanted.
02:51And then, you know, without any notice, they published an article where we thought we were getting close,
02:58saying that they were breaking off talks well before the deadline,
03:01which is generally not good partner-oriented practice, if you will.
03:06And, look, Anthropic, in return, has said that while the Pentagon's latest proposal fell short,
03:11the company continues to negotiate with defense officials and remains committed to working with the military, Sarah.
03:17So when you think more broadly, and this is an echo of what happened with Google years ago,
03:23how do you think tech policy can be written by the Pentagon, by the government, to fit current purposes?
03:31It is surprising that we're seeing this repeat of 2018 where Google and the CEO seem to have been blindsided
03:39by the employees' reluctance to work with the Pentagon.
03:43And here we are eight years later, and it seems like a repeat of this that could have been avoided.
03:49But I think if we put ourselves in the position of how quickly AI has been moving in the last
03:55couple of years,
03:55you can see how this just becomes new territory, even though it feels like we've been here
04:02before.
04:03And so I think what was happening the last year or two is that AI was moving so quickly,
04:08and the Pentagon was moving quickly, trying to do things differently.
04:12And you can see then why, you know, and I worked in the acquisition business in the Air Force,
04:16and there was always this question, why can't we move faster?
04:19This is why you can't move faster,
04:20is because things with the federal government and national security and classified work
04:25and removing leaders from Venezuela are just not the same as coming up with a grocery list
04:32for yourself at home with ChatGPT, or in this case, Claude.
04:37Dario Amodei has been very clear that he thinks about much more than the implications of a grocery list,
04:42and he's written about the adolescence of technology writ large at the beginning of this year,
04:46thinking about the geopolitical implications, the implications for ethics,
04:50for our world of work going forward, Sarah.
04:53Just tell us a little bit about where you think there might be any room for agreement.
05:00Can the Pentagon go as far as to agree, in line with Dario Amodei's desire,
05:05to not have surveillance of U.S. citizens,
05:07or not use models for autonomous lethal strikes without a human in the loop?
05:11Is that ever something that they could be specific enough about?
05:15I think it's a great question, and I think that's the reluctance of Anthropic,
05:20which is that the U.S. has been saying all we want is to use AI, use Claude for any
05:26lawful use,
05:27and it's not lawful to use autonomous weapons, and it's not lawful to do mass surveillance.
05:32So what's the problem here?
05:34I think the worry, probably from the perspective of Anthropic, is that slippery slope.
05:39What does it mean to be fully autonomous?
05:41What does it mean to do mass surveillance?
05:43And so I think that's what they think is gray area that they want to be very careful about.
05:48And I think the view has been that they would rather be trying to influence safe use of AI from
05:54the inside
05:55rather than take a sanctimonious perspective and be on the outside
06:00and not be able to influence the implementation and deployment of AI.
06:05And so I think that's what they're trying to do.
06:07And Anthropic's leverage, I think, is that their model is really good
06:12and that the Pentagon wants to use it.
06:14So my expectation would be that they probably will find some middle ground here before 5 p.m. today.