
Transcript
00:00Deepfake videos, where reality is altered yet almost impossible to distinguish from the truth.
00:05AI-generated emails that replicate the tone and characteristics of a colleague or close friend
00:10in order to elicit private information.
00:13Social media accounts run by bots that flood feeds with lies
00:17whose roots are buried so deep
00:19that readers are left unsure exactly what's real and what is not.
00:23Our increasingly online world has led to a new war,
00:26one on information with an impact on elections, mental health and even people's bank balances.
00:32It's a battle that the French president has put high on his agenda.
00:35After spending over an hour discussing it with high school students last week,
00:39he's travelling to a local newspaper this Wednesday for a discussion with readers there.
00:43Well, to get more insight on it all, we're joined now by Romain Plonso.
00:46And Mr. Plonso, you work for a company that focuses on reputation monitoring
00:50and the study of online behaviour.
00:53So it's great to have you in studio. Thanks so much for coming in.
00:55And can I start simply by asking you, I mean, we're slightly aware of the danger of social media,
01:00of not knowing where all the information comes from,
01:02but what's the biggest danger for us on social media today?
01:06Today, we're seeing a new strategy that is run by our, let's say,
01:13opponents worldwide, who are waging a deliberate campaign of misinformation
01:21to influence our behaviours and opinions online.
01:25And we see that something big is happening at the moment.
01:30And that's probably why our president Emmanuel Macron will speak today
01:34at La Dépêche du Midi with around 200 of their readers,
01:41to let them know what's going on,
01:42and especially how we, the French, fight against disinformation.
01:50And probably, based on what Élise said, Emmanuel Macron will speak about
01:55how we can better regulate this digital world,
01:58and especially in the context of local elections.
02:00OK, so there's a big question of the laws surrounding our digital world.
02:04Let's get on to that in a minute.
02:05But for the average person at home, I mean, is it simply just dangerous
02:10because we're taking information from sources that we're not fact-checking?
02:15I mean, there's a big distrust in mainstream media.
02:17So people are getting most of their information, their news, if you like, online.
02:22You know, and I guess not fact-checking where it comes from,
02:24is that the main danger that we take it all as truth, in a way?
02:27Absolutely. And if we look at what's happening right now,
02:30we have a concrete example with CopyCop, which is based in Russia,
02:35which is a group that is launching hundreds of fake local newspaper websites
02:42and feeding them with misinformation and fake news
02:47that are then relayed on social media for virality, of course.
02:51And this strategy is clearly designed to change our behaviour
02:57and, ultimately, to have an impact on the local elections.
03:01So fact-checking is, of course, more and more important.
03:05Your role as traditional media is major as well.
03:09But we can ask much more of the social media platforms to moderate their content.
03:17That is something that was done in the past
03:20but is now done less and less, and less and less effectively.
03:23AI has a role as well, because AI is helping these organizations
03:29produce more and more content, faster.
03:32As soon as one of their websites is taken down, a new one appears.
03:37So it becomes more and more complicated to regulate.
03:42And AI has another role, because if you look at the major AIs,
03:47their sources of information are mainly Wikipedia and traditional media.
03:54So they treat these fake websites as media and, I mean, legitimate sources,
04:01and it impacts their answers.
04:04If you look at the Moldova elections, for instance,
04:09these fake news sites managed to feed false information to the AIs,
04:17and in the end one third of the answers from these AIs, these LLMs, were fake.
04:24Okay, so it does sound like, for the average person,
04:26it's almost impossible to have the time, the resources, the knowledge to fact-check.
04:31We do need these laws to be put in place.
04:32But is there a political will to do that?
04:34I mean, we seem...
04:35Apparently.
04:36So our President Macron will give a speech about that.
04:40We also have Viginum, the French agency
04:45that fights against foreign interference and influence operations online.
04:50They produced their first report, for 2024,
04:54highlighting 25 operations run in France during that period.
05:02So the first thing is to identify it, highlight it,
05:06and, of course, we have to educate our population,
05:10and that's probably the role of our government,
05:12to better educate the local populations,
05:17to let them know that it's happening,
05:19and how to identify fake news.
05:21Indeed, there was even an EU disinformation task force set up a couple of years ago
05:25to do a similar job.
05:26But the problem really is,
05:28we are seeing heads of state use deepfake videos for their own means,
05:32and it needs to be an internationally-led initiative
05:36for it to have any real effect.
05:37Absolutely.
05:38That's something that is in discussion.
05:41The role of AI is major and will have to be regulated at some point.
05:46But let's see what's coming in the coming months.
05:49And again, for people at home,
05:51I mean, we also have to be aware of our own online reputation
05:54and our presence on social media.
05:56Where do you think people are on that?
05:58And I mean, is just simply not being on social media
06:00a valid option in today's world?
06:02It depends on what your daily life is.
06:07We were discussing that before this interview.
06:11For some of us, it's essential to be online,
06:13to be recognized, and to be heard as well.
06:16If you talk about the new generation,
06:18Gen Z is fully online.
06:21It's their first source of information as well.
06:24So they won't give that up.
06:26But of course, you have to be careful about what you are sharing.
06:28We see new AIs, such as Sora,
06:31which offer you the possibility to create your own avatar
06:35with your own image, your own voice.
06:38And it can be public.
06:40We saw that with some influencers
06:42who offered the possibility to create content
06:44based on their own image.
06:46It's a big plus for virality:
06:49they were seen everywhere,
06:50even when they didn't post anything.
06:53But in the end, you lose control of your own image
06:56and your own speech, which is a major issue.
06:59So we have to be really, really careful about that
07:01and about what we are doing with our image and speech
07:04and online behavior.
07:06Indeed.
07:07It seems that when it comes to technology,
07:09both the average person's knowledge
07:11and the laws are so far behind the development.
07:15I mean, what do you see as the next big risk for us?
07:17And how can we be just that little bit more aware?
07:20There are thousands of risks, to be honest.
07:24If you look at WhatsApp, for instance,
07:27which asks us to share our private conversations
07:31to train their own AI,
07:34that can be highly risky from my point of view.
07:39But all these AI and LLM tools are based on data,
07:44and we have control over whether we share this information or not.
07:49You have to read carefully the rights
07:52that you are granting to these LLMs
07:54and be aware of what they are doing with them.
07:58But maybe explain a bit further,
08:00because we are always told,
08:01be careful of your data.
08:02But a lot of people say,
08:02oh, what does it matter if people know
08:04I'm going to X, Y, and Z, or doing X, Y, and Z?
08:06And you do have to make the effort
08:08when you're avoiding cookies,
08:10when you're Googling stuff online.
08:11It does take quite a bit of time sometimes.
08:15And you have to click sometimes
08:16on all the little boxes.
08:17How can that data be used against us, basically?
08:21You know, how important are what seem like banal facts?
08:26Let's get back to deepfakes, for instance.
08:30For you, especially as a journalist,
08:33someone can use your own image, your own voice,
08:36and make you say the complete opposite
08:39of what you are thinking.
08:41And it can have a major impact on your career
08:45or even more.
08:48So we saw deepfakes that were,
08:53I mean, pretty obvious and clear,
08:57easy to identify, with our president
09:00or influencers, but that's not the case anymore.
09:04With the new release of Sora, once again,
09:09it's almost impossible to identify
09:11that it's a fake, a deepfake.
09:14So be really, really careful
09:16about what you are sharing online
09:18and make sure you have checked
09:20that you are not letting these companies
09:24use your own data.
09:26Okay, Romain, we'll have to leave you there.
09:28We thank you so much for joining us
09:30and bringing us your insight on it.
09:30Thank you for having me.
09:32Such an uncertain world we're entering into.