
Transcript
00:00X is in the crosshairs of regulators around the world, notably because of its AI chatbot, Grok.
00:05And this is happening as frameworks on how best to regulate the new technology are being drafted,
00:11with a global AI summit scheduled to happen in India on February 16th.
00:16In the lead up to the summit, the third iteration of the international AI safety report was released on Tuesday,
00:22drafted by dozens of experts on AI and assessing both the risks and benefits associated with the technology.
00:28Well, to make sense of this report and the state of play on AI, let's speak with Melanie Garson.
00:35As we saw, she's a leading cyber expert and associate professor in international security at University College London.
00:43Thank you so much for joining us on France 24.
00:47Good morning. Thank you for having me.
00:50So let's start with what's been in the news here in France, especially relating to deepfakes.
00:55They're improving, they're proliferating, as we saw with Grok.
01:01How big a threat is this?
01:03And is there any evidence that it's been weaponized by any kind of malicious actor yet?
01:09Well, actually, regarding the growth and proliferation of deepfakes and synthetic media, particularly over the last year,
01:21you mentioned the AI safety report.
01:23That is one of the areas where they've really doubled down, saying this has been one of the most significant areas of growth.
01:29They've become absolutely more realistic.
01:31They've been used, whether that's voice or visual media, in fraud scams.
01:38We've seen the non-consensual manipulation of images that particularly affect disproportionately women and girls.
01:46And the hazards of this are absolutely growing, partly because of the ease of access,
01:54and partly because of societal processes, or, if you want, our diminishing resilience in not being able to detect it as easily anymore.
02:04So the challenge is being able to manage those processes, both on an individual level and on a wider policy level.
02:10Another part of the report is dedicated to cyber attacks.
02:16What does AI offer in terms of being able to scale up cyber attacks?
02:20And are organizations, governments actually coping with that?
02:26It's an interesting balance with the cyber attacks.
02:29So absolutely, the ease of being able to generate code has made some coding more vulnerable.
02:38We've seen AI agents able to identify 77% of the vulnerabilities in other AI software.
02:49We've seen AI agents actually able to compete in cyber competitions.
02:55So there's been a lot of growth in that area.
02:59By the same token, there's also been increasing use of AI on the defensive side.
03:04So it's one of those areas where every part of the integration of AI into an ecosystem makes that system more vulnerable.
03:13But at the same time, the same tools that make it vulnerable can also be used to defend it.
03:19So there's a bit of a trade-off over who can be faster in this field, the defenders or the attackers.
03:26And then there's this bigger question of actually securing the models themselves.
03:32So as we use these, whether that's Grok or Claude or Mistral or whichever ones we're using,
03:40the more we integrate them, the more they also have to be secured.
03:44So they are also at risk of cyber threats as well.
03:47There's also a lot of talk in the report about AI bots being used as companions.
03:54The report outlines that this is becoming increasingly popular, notably with apps like Replika or Character.ai,
04:01but also with more general purpose ones like ChatGPT or Claude.
04:06On the face of it, this could seem harmless, but there are causes for concerns here, right?
04:12Absolutely. And you're right, the report has identified the use and proliferation of agents
04:20as probably one of the most significant growth areas since last year's report,
04:26along with how those have been integrated into multi-agent systems.
04:30And we've already seen the effects.
04:32There have been lawsuits against both ChatGPT's maker and Character.ai over young people who have taken their own lives
04:39because of their interactions with some of these agents.
04:44And there are real open questions about, if you want, human resilience, human capacity,
04:50human training to interact with these systems, to know how to engage with them appropriately,
04:56and about the potential for their manipulation.
05:00But this growth of agents and how to monitor and prepare both companies and individuals for that interaction
05:10is a significant area of risk that was identified.
05:14There's also a sense that these AI chatbot companions could exacerbate people's loneliness,
05:21because they're not a like-for-like replacement for an actual human relationship.
05:26I want to follow up on this question. China actually issued draft regulations on these AI chatbot companions,
05:35basically stating that there was a need for human intervention,
05:40for a human to be able to come into the conversation
05:46between a user and a chatbot whenever self-harm is mentioned.
05:51Do you think this is a viable solution?
05:52This is the solution that, you know, has been increasingly talked about in the media,
06:01particularly, as they say, with Meta and Character.ai:
06:04they're supposed to be able to identify when these self-harm conversations happen
06:08and to be able to cut them off.
06:10And the problems have been, to some extent, that sometimes the people interacting
06:16are creating companions that are fictional characters.
06:19So the interaction almost takes on the realm of fiction and can sometimes bypass those checks and balances.
06:26But it could work in such a way that there would be a flag, or some sort of intervention,
06:31that breaks the conversation, that puts a circuit breaker in,
06:36providing the user the opportunity to engage their own critical thinking,
06:41to step back and be aware of the interaction that they're in.
06:45What's your overall assessment of this report that was just released?
06:50Do you feel like it's a fair assessment?
06:54I mean, the report has been put together by over 100 experts,
06:58with the backing of 30 countries and organizations.
07:02It has enormous value, not just in itself,
07:06but in being able to provide that year-on-year evaluation.
07:11I mean, they say that 700 million people are using ChatGPT
07:16or AI models weekly now,
07:20and all of that only since 2022.
07:23So we're in a really short period of evaluation.
07:26So the massive value comes from the extraordinary breadth of this particular report.
07:31It's 220 pages.
07:33It's got extraordinary rigor.
07:35And it's aimed at trying to give the evidence base for policy
07:39because they talk about this policy dilemma.
07:43You have an evidence dilemma, in the sense that your scientific evidence
07:47runs a little bit behind innovation.
07:51So how do you set policy on that trajectory?
07:54How do you forecast policy?
07:56And you get that tension: not constraining innovation,
07:58but being able to make sure that what we're using is safe and fit for purpose.
08:03So it's an extraordinarily dense, evidence-rich report,
08:08with a good breadth of research and academics involved,
08:13giving some good indications of where policy should head
08:18and, certainly going towards the AI Impact Summit,
08:21of what should be at the center of the global conversations.
08:25That Global AI Impact Summit taking place in India starting on February 16th.
08:31Melanie Garson, you are a leading cyber expert
08:34and an associate professor in international security
08:36at University College London.
08:38Thank you so much for sharing your insights here on France 24.