Transcript
00:00 Claude AI helped the U.S. bomb Iran. But how? Hours before the strikes, Trump had already ordered the government
00:07 to stop using Anthropic's AI after a dispute over how the Pentagon was deploying it.
00:13 But the tool was too deeply baked into the systems. It would take months to disentangle.
00:18 So U.S. Central Command ended up using Claude AI for, quote, intelligence assessments, target identification, and simulating battle scenarios
00:28 during the strikes on Iran.
00:30 But nobody has clarified what that means. Was Claude flagging locations to strike or making casualty estimates?
00:38 More alarmingly, no one is required to explain.
00:41 Artificial intelligence has long been used in warfare for things like analyzing satellite imagery, detecting cyber threats, and guiding missile
00:50 defense systems.
00:51 Remarkably, all of this has been happening in a regulatory vacuum and with technology that's known to make errors.
00:58 What happens if a chatbot hallucinates when it's translating a commander's intent into digital instructions to coordinate a fleet of
01:07 drones?
01:07 To be fair, that's not happening, as far as we know.
01:12 Anthropic actually pitched that capability to the Pentagon, but it was rejected.
01:16 But it wouldn't have been the first time unreliable AI systems were used in warfare.
01:22 Lavender was an AI-driven database, reportedly used by the Israeli military, that analyzed information to identify targets.
01:28 The problem was that it was wrong 10% of the time.
01:32 A defense and digital ethics professor at Oxford told me that 3,600 people were targeted by mistake.
01:40 Of course, military operations often have to be kept under wraps.
01:45 But defense is heavily regulated by international law and testing standards.
01:50 That should apply to AI, too.
01:52 The goal wouldn't be to disclose exactly how Claude was used in something like Operation Epic Fury,
01:58 but to release the broad strokes, especially if or when something goes wrong.