00:00What happens when the U.S. military and a major AI company start arguing like political rivals instead of partners?
00:07The military is in a full-on standoff with Anthropic, the company behind the AI model Claude, and here's why
00:13it matters.
00:13Back in January, the military reportedly used Claude during an operation to capture Venezuela's now former president, Nicolas Maduro.
00:21Anthropic heard about it and basically asked the Pentagon,
00:24Hey, was our AI used in that mission? And if so, we need to talk about what's allowed.
00:28That question alone set off alarms inside the Department of War.
00:32Secretary Pete Hegseth fired back, saying the military needs full, unrestricted access to Claude, no second-guessing, no case-by-case approval.
00:40He then gave Anthropic a deadline to agree or risk losing its $200 million Pentagon partnership.
00:46But Anthropic isn't backing down.
00:48They say they're not trying to block any military operations,
00:51they just want confirmation that their AI won't be used for things like mass surveillance or fully autonomous weapons.
00:56When asked about those concerns, Secretary Hegseth said spying on Americans is illegal,
01:01but didn't address Anthropic's broader worries about how the tech would be used.
01:05Some critics see the Pentagon's aggressive stance as a red flag about transparency with the American people.
01:10But supporters argue the opposite, that other AI companies like OpenAI, Google, and xAI already allow the military to use
01:18their models with fewer restrictions,
01:19so Anthropic is the, quote, ethical outlier.
01:22And here's the kicker, Claude is currently the only major AI model available for use on the military's classified systems.
01:29So here's the big question, should the Pentagon have full, unrestricted access to AI models like Claude, or is Anthropic right
01:34to hold the line?
01:36Drop your answer in the comments, get the full picture on our website, and follow us here for more.