Speaking with FRANCE 24's Sharon Gaffney, Elke Schwarz, Professor of Political Theory at Queen Mary University of London, says the use of AI has brought "a radical acceleration" in how quickly military targets are acquired and how quickly action is taken on those targets, which raises concerns about the lack of human oversight, especially given that some AI models have "25 to 50% reliability, which means they are wrong very often".

Visit our website:
http://www.france24.com

Like us on Facebook:
https://www.facebook.com/FRANCE24.English

Follow us on Twitter:
https://twitter.com/France24_en

Category: News
Transcript
00:02 This is, quote, "unprecedented and unlawful". It's the latest twist in the AI
00:08 firm's high-stakes battle with the US military over usage restrictions on its
00:13 technology. Anthropic has begun legal proceedings aimed at blocking the
00:18 Pentagon from placing it on a national security blacklist, after the startup
00:22 refused to remove guardrails against using its AI for autonomous weapons and
00:28 domestic surveillance. Monte Francis has the details. The Pentagon has refused to
00:36 say whether artificial intelligence was used to mistakenly target a school in
00:40 Iran near a naval base, resulting in the deaths of 175 people, mostly children. But
00:47 the Wall Street Journal first reported that the US military was indeed using AI
00:51 for a range of purposes, including to identify targets for Operation Epic Fury.
00:57 Last week, the Defense Department said it was cutting ties with its tech of
01:01 choice, Anthropic's Claude, after the company's CEO, Dario Amodei, demanded that the
01:06 government put up some guardrails for how the tech was being used, including
01:10 assurances that it would not be used to power autonomous weapons or for mass
01:15 surveillance at home. "I'm concerned about the autonomous behavior of AI models,
01:20 their potential for misuse by individuals and governments, and their potential for
01:26 economic displacement." In response, the Pentagon argued it was not up to a private company
01:31 to determine how the tech could be used on the battlefield, and Defense Secretary Pete
01:35 Hegseth went a step further, announcing that he was labeling Anthropic a supply chain risk,
01:41 prompting the company to file two lawsuits against the government on Monday, arguing the
01:46 designation violated its free speech and due process rights. The company said it would not be
01:52 swayed by intimidation or punishment, adding: "Anthropic's reputation and core First Amendment freedoms are
01:58 under attack." Last Friday, President Trump said he was ordering federal agencies to stop using
02:03 Anthropic's tech, allowing them six months to phase it out. But there are reports that Amodei is still in
02:10 talks with the Pentagon to possibly hash out a deal. Sam Altman, the CEO of the Microsoft-backed OpenAI,
02:16 has since announced a deal with the Defense Department to provide technology on the government's classified systems.
02:24 Donald Trump says the executive order formally instructing the federal government to remove
02:29 Anthropic's AI from its operations will likely be issued later this week. With more, let's bring in
02:36 Elke Schwarz, Professor of Political Theory at Queen Mary University of London. Thanks so much for being
02:42 with us on the program. Now, you've written about this feud, saying that it's unusual and that it pits
02:48 state might against corporate power. Where exactly did this dispute begin?
02:55 Well, the dispute itself seems to have been brewing over the course of at least a year. I mean, I'm
03:03 not
03:04 privy to all the communications that were going on behind the scenes, but there was a kind of uneasy
03:11 relationship, allegedly, between certain individuals at the Pentagon and certain individuals at Anthropic.
03:17 Now, we have to understand that Anthropic has had a contract, of course, with the Department of
03:24 Defense, or Department of War as it is now often referred to, since July 2025. But Anthropic has also
03:33 entered into a partnership, a contractual relationship, with Palantir, and Palantir provides
03:41 the Maven Smart System, which the US Department of Defense uses for AI targeting and AI decision support, and
03:50 that contract was entered into in 2024, I believe. So the relationship has a longer history, and it was only in
03:59 recent months that this relationship seemed to have turned. It is likely that this has to
04:05 do with the fact that the
04:07 Department of Defense, Department of War, has issued an AI strategy for the military domain, and in this strategy the
04:14 Department of Defense has
04:16 clarified specifically what it understands to be responsible AI in the military domain, and there are certain broader, let's say,
04:26 interpretations of AI and its lawfulness than was previously the case. So the Department of Defense expects to be able
04:36 to use any AI model or product for any lawful use. That's the stipulation, but that is of course open
04:44 to interpretation.
04:45 And I think this is where Anthropic was a little bit uncomfortable, because it wanted to set these two red
04:52 lines: it doesn't want its models to be used for mass surveillance of US citizens or for fully autonomous weapons systems.
05:00 So that's the foundation of this dispute, with a lot of back and forth. But really, it focuses
05:08 the attention very much onto the US and its stipulations,
05:13 and kind of shifts much more stringent ethical and legal concerns in the use of AI, specifically the use of
05:20 large language models for military purposes, into the margins. And I think that's somewhat problematic. Yeah, because as we heard
05:29 in that report, it suggested that this technology was used not only in Venezuela but also during the ongoing conflict,
05:36 the attacks on Iran. What does this whole
05:39 dispute tell us, Elke, in your view, about how artificial intelligence is likely to be used, if it's not
05:46 already being used, in warfare?
05:50 Well, artificial intelligence in warfare has been used for quite a while, actually. Those of us who have worked in
05:56 this kind of environment for the last decade, we have seen the accelerated integration of various AI tools and AI
06:05 models into the military domain.
06:09 So when we talk about artificial intelligence, we must remember we're not simply talking about large language models, although that
06:15 dominates the current consciousness
06:17 and imagination about what AI is constituted of. But AI can be many, many things. In the military domain, they
06:24 can be quite narrow models that help organize supply chains, or make logistics more efficient, or help with translations:
06:33 kind of narrow applications of artificial intelligence that are quite different from the
06:39 general-purpose large language models, such as Anthropic's Claude, for example, or the large language model
06:48 products that various other companies offer. So we've seen an increased integration of artificial intelligence into the military domain at large, but
06:58 also into the decision-making cycle, and specifically the targeting cycle. Again, not only or not primarily general
07:08 AI-type technologies, but more specific types of object recognition systems, for example. So bringing in these large language models
07:19 tells us something about the much broader, much more expanded, and much more sped-up and scaled-up integration of AI
07:27 into all aspects of military operations. And where this of course causes the greatest challenges is in the targeting cycle,
07:37 because what we're now seeing is quite a radical acceleration in terms of how quickly targets are identified, or
07:47 indeed discovered, and then how quickly those targets get actioned.
07:52 We've heard reports, in the context of Iran, of, you know, a thousand targets in the first
07:59 24 hours. That's a lot of targets, and that raises a number of questions. One question is: where is the human oversight?
08:06 Right? So human
08:08 judgment is still required in order to make sure that the targets being attacked are ultimately valid targets,
08:16 that they are not accidentally civilian targets,
08:20 and that the data and information on which these target decisions rest are sound. And again, to protect civilian populations,
08:30 civilian objects, and civilian infrastructures, there's a mandate to take precaution in the targeting process. Now, that's really,
08:39 really difficult when you're dealing with 41 targets an hour, or a thousand targets in a 24-hour period. So
08:45 that's a challenge that we're seeing with this
08:48 sped-up process: more speed and a broader scale of targeting become possible with artificial intelligence, and specifically with large
08:58 language models. And I think this was potentially Anthropic's point as well:
09:05 there's not sufficient reliability at this stage. Some of these large language models have a 25 to 50 percent reliability,
09:13 which means they're wrong very often, or they put things together that don't necessarily make sense, because these models can't
09:23 understand meaning; they don't really have an understanding like we do.
09:27 So with all of these new tools in the mix, that shifts the temporal horizons, that shifts the scale
09:34 of possible action, and that of course challenges accountability, responsibility, and human judgment.
09:41 And once such a rapidly evolving technology, as you point out, is actually out there, what can these
09:49 AI firms do to prevent their technology being misused or adapted, particularly when it's being used in classified military systems
09:58 and they don't actually know how exactly the technology is going to be implemented? What can they do about that?
10:06 Well, it depends on the type of technology that we're talking about. Very often, there are ways to potentially
10:13 safeguard against misuse, large-scale misuse, through various restrictions.
10:19 But normally, technology firms or weapons firms or military suppliers do work with the government to ensure that a weapon
10:29 system, or any kind of system that is used for military operations,
10:33 is reliable, adheres to various legal reviews, and adheres to international humanitarian law and the broader stipulations of the ethics of war.
10:46 So what we're seeing here at the moment is really a broadening out of dual-use-type technologies coming
10:54 into the military sector.
10:56 Dual use means technologies that were designed, invented, and rolled out
11:03 first for the civilian sector and now find their purpose within a military environment.
11:08 And very often the safeguards, the regulations, and the same stipulations for testing, evaluating, verifying, and validating,
11:20 all these robust processes that have to be in place for military technologies, are not necessarily in place for those
11:25 new types of digital technologies.
11:28 And so it is an interesting inversion in this Anthropic-Pentagon case, in that it should normally be the government,
11:37 the democratically elected government, that puts guardrails, restrictions, and safeguards onto the private sector and its interests,
11:45 in order to ensure that citizens don't come to harm, or that the interests of the citizens are safeguarded.
11:54 In this instance, we seem to have an inversion.
11:57 But I would venture to say there aren't really any good guys here,
12:00 because we have now seen quite quickly how the ethical bar and the legal bar have been lowered quite significantly,
12:08 down to stipulations about mass surveillance and fully autonomous weapon systems,
12:12 when really the use of artificial intelligence in targeting poses a much greater risk and challenge,
12:18 especially, always, to civilians, because of potential errors and false information and false suggestions towards the human.
12:29 So, as I see it right now, it doesn't strike me as though Anthropic necessarily wants to cut ties
12:39 with the government, or to put robust safeguards onto its products.
12:43 It should be the role of a government, or the international community, to find much more robust rules and regulations
12:51 than we have right now,
12:52 or than we are seeing in this dispute, to say: this is what AI is good for, here is
12:57 what we will use it for,
12:58 and this is where significant risks reside to accountability and human judgement, and therefore also risks for civilians in conflict.
13:09 Elke Schwarz, thanks so much for being with us on the programme this evening.
13:13 That's Elke Schwarz, Professor of Political Theory at Queen Mary University of London.