00:00 Well, it's time now for some tech on the program.
00:02 France's Le Monde newspaper has signed a deal with OpenAI, allowing the company to
00:07 use the paper's content to train its AI models.
00:11 It's the opposite approach to the New York Times, which sued OpenAI for copyright infringement
00:16 after its chatbot, ChatGPT, appeared to reproduce excerpts from the newspaper's archive.
00:22 For more on this, I'm joined by our technology editor, Peter O'Brien.
00:26 Peter, good to see you.
00:27 Hi, Alison.
00:28 Good to see you.
00:29 Seek deals or seek damages: it seems that is now the choice that media
00:32 companies are facing.
00:33 Yeah, that's right.
00:35 It's almost a part two of publishers versus big tech.
00:39 We already had this dispute between publishers, regulators, and Google and Facebook, who eventually
00:46 struck deals worth hundreds of millions of euros in the EU, Canada, and Australia
00:52 because they were using publishers' content on their feeds without necessarily having permission
00:58 at first.
00:59 Now we're kind of seeing the same thing play out with AI.
01:02 If you look at the New York Times and The Intercept, they're some of the ones who have
01:05 decided to sue OpenAI, claiming that they're using their journalists' work without permission
01:12 for the data that's used to train their AI models.
01:14 It's not necessarily data that's immediately available, but it's stuff that's used to train
01:18 the models.
01:19 Now, others, like, as you say, Le Monde, the Associated Press, and Axel Springer, have
01:24 gone for a different approach.
01:25 They have actually signed deals with OpenAI, presumably in exchange for either
01:31 monetary compensation or for their own content to be promoted on AI platforms.
01:39 They've signed deals for their journalists' work to be used to train OpenAI's models.
01:44 Now, these AI companies say that they're just using publicly available data, which is fine.
01:50 I mean, Mira Murati, the chief technology officer of OpenAI, said as much this week in
01:55 an interview with The Wall Street Journal.
01:57 It was a bit of a car crash.
01:58 She couldn't actually say which kinds of data were used specifically.
02:02 And it may seem like an obvious point, but if they're saying that the data is just publicly
02:06 available so they can just use all of it, well, why are they striking deals with publishers
02:11 then?
02:12 Yeah, no, clearly they know that that's not going to fly for a lot of companies.
02:15 It also makes me wonder what the journalists think about this deal that Le Monde has made.
02:18 Peter, what about people that don't have the deep pockets or the negotiating power that
02:22 big media companies have to make these kinds of deals?
02:25 Yeah, well, you touched upon it there.
02:26 I mean, freelance journalists, anyone who doesn't have the deep pockets to strike
02:30 these deals, is feeling sort of left out.
02:33 I mean, we released a report this week about how France is throwing its weight behind AI.
02:39 And actually, we got a lot of illustrators and artists very angrily saying, well, actually,
02:44 our work has been plagiarized to train these models.
02:48 That's what they're alleging.
02:49 And now the government is just throwing us under the bus in order to try and become a
02:52 leader in AI.
02:54 And they're not the only ones who kind of feel left out in terms of how these models
02:57 are trained.
02:58 There's also content moderators, thousands of them, who look for a lot of harmful and
03:04 disturbing content to make sure it's not fed to AI systems.
03:08 Our Kenya correspondent, Olivia Bezo, released a report this week in which she talked to one
03:13 of these content moderators.
03:15 AI tools like ChatGPT work almost like magic, but there's no quick trick for building them.
03:26 Thousands of workers like Richard have spent months filtering toxic content out of the
03:31 data used to train OpenAI's models.
03:34 After reviewing scenes of child abuse and bestiality, he's been left traumatized.
03:40 You had to go through all of these statements.
03:44 You had to go through all these 250 pieces of content within a day.
03:51 And all of this is actually disturbing and traumatic content.
03:56 Richard and three other moderators decided it was time for change.
04:03 They filed a petition in parliament with the help of this lawyer.
04:07 Because this is new age work.
04:09 We've never experienced this.
04:10 We didn't prepare for this.
04:12 Our occupational laws don't even acknowledge harm to mental health as an occupational
04:17 illness.
04:18 So how do we change that?
04:20 And that's where the idea of petitioning parliament came from.
04:23 And you can find the full report on the France 24 website.
04:26 All right, Peter, thank you so much.
04:28 Always interesting stuff from you.
04:29 Makes me wonder if France 24 is going to try and sign a deal with an AI company.
04:32 I'm not going to speculate on that.
04:34 I don't know.
04:35 That'll be for another day.