00:00Next time you visit your therapist, they might ask a simple question.
00:05How's your relationship with your AI model?
00:07And if you say, complicated, you might have a point.
00:11Recently, OpenAI, the maker of ChatGPT, reached an agreement with the US Department of War for classified military use.
00:19And this got your reporter thinking, how is AI used in warfare right now?
00:25Let's start with cybersecurity.
00:27On the digital front, the Iran conflict triggered a surge in geopolitical cyberattacks,
00:34which are now deployed right alongside physical weapons.
00:38And with the rise of AI deepfakes and highly personalized phishing emails,
00:43experts warn that you can no longer rely on what you see and what you hear.
00:47And that's great to hear, isn't it?
00:50But what about strategy simulation?
00:52Before reaching the battlefield, AI is tested in war games.
00:57And in a recent study, models including ChatGPT, Claude and Gemini were placed into simulated crises.
01:05And the results were alarming: in every game, at least one AI escalated the conflict by threatening to use
01:12nuclear weapons.
01:13Finally, during recent airstrikes on Iran, the US military relied on Anthropic's Claude to identify targets.
01:22But after Anthropic refused the Pentagon unrestricted access over ethical concerns,
01:28OpenAI immediately stepped in to take the contract.
01:32The company insists the agreement strictly prohibits domestic mass surveillance and requires human oversight for weapons.
01:40Defending the deal, CEO Sam Altman posted, and I quote,
01:44We remain committed to serve all humanity as best we can.
01:49The world is a complicated, messy and sometimes dangerous place.
01:53End quote.
01:54So it seems the deal will result in either more commitment to humanity,
01:58or more complications and dangerous places.
02:01So as I told you, it is complicated.
02:05Right ChatGPT?