00:00You've been thinking about really the Pentagon's reaction to this. Just talk us through your
00:05thinking as to whether the step of banning Anthropic from the supply chain in this way
00:10is the right step. It's a petulant reaction by the secretary of defense that's likely to turn
00:17out to be advantageous to Anthropic. But it's going to be bad for the Defense Department and
00:24bad for the security of the United States. The idea that you could either nationalize or blacklist
00:32a leading American company is outrageous. And it's going to discourage other leading edge tech firms
00:39from being willing to expose their business by participating in the defense ecosystem.
00:46OK, there's going to be immediate shock, as you say, from those in the space and the reaction
00:51function of the Pentagon. But more broadly, how transparent should these sorts of conversations
00:57become at this moment? These are very difficult ethical questions that Dario Amodei,
01:02the leader of Anthropic himself, has shone a significant public light on. Yeah, I think that's exactly
01:08right. I mean, this is a frontier technology that we are still understanding what its consequences
01:16are going to be. And to suggest that we have less transparency or that a firm has no right to
01:24set standards on how a cutting edge technology will be utilized by the Defense Department, I think is
01:32probably a losing proposition for DOD. What should be forced? What could be forced by Congress? We just had
01:39a representative of California at the start of the show talking about wanting to bring law to bear so that
01:45companies aren't pushed off of any sort of national supply chain in any act of retribution. But what more
01:52broadly can be done by Congress at this moment to make clearer the lines? Well, Congress really runs
01:58defense policy and they can legislate in this space and DOD will be required to be compliant with it.
02:05It's actually shocking to think that the Defense Department blacklisted an American company. That's the kind of
02:14thing we do to Chinese firms, not to patriotic American firms who are already participating in
02:21classified DOD activity. Can you give us some guiding principles on then how OpenAI and Sam Altman might be
02:29able to find appeasement in a contract? Do you think extra steps should have been taken? Is there, though, really a
02:36need at this moment for companies to see the law as it stands as where the line should be, rather than
02:43trying to insert their own language into certain contracts? Well, it's an open question. But I do think that OpenAI is
02:52saying, yes, we trust the government that they won't use our product in a way they say they won't. And
02:58Anthropic is saying we actually need proof and we need indemnification that our product is being used
03:08as we are advertising our product. I think given the low level of trust in the Trump Department of
03:15Defense, this is likely to be advantageous marketing for Anthropic and very difficult for DOD to be able to
03:24say you need to trust that we will abide by the law when so much of their behavior, including blacklisting
03:32an American company, looks to be a bending of the law, or at least an aggressive and predatory use of the law.
03:41Corrie, it's interesting. OpenAI defended itself, saying that it built a number of safeguards into the contract
03:45with the Pentagon and that, quote, the Pentagon actually agrees with these principles
03:49and reflects them in law and policy. When we're thinking about what AI's role now is in defense,
03:57in war, as we're currently seeing it unfold, can you just educate our audience as to how integral AI
04:03has now become? Yes. What AI is already proving incredibly valuable for in the defense space
04:10is surveying enormous amounts of data and, for example, identifying threat portfolios,
04:19you know, the ability to see patterns in large amounts of data. But what Anthropic is saying is
04:27we do not want it to be used for domestic surveillance of the U.S. or for decisions about the
04:36use of lethal force before we have higher confidence in the model. And those are not unreasonable standards
04:44and provide the basis for what Congress perhaps ought to put into law until we have better transparency
04:53and better understanding of what these models are capable of in future iterations.