Transcript
00:00 Your reaction to this first hurdle being passed by the European Parliament?
00:03 Well, it's really a day of celebration from the perspective of human rights. There are many of us
00:12 in civil society, in various NGOs, working to protect people from the most harmful uses of
00:18 technology. And today we've had a huge endorsement from the European Parliament,
00:24 one of the three legislative institutions of the EU, saying that they are willing to draw red
00:30 lines in the sand against the most unacceptably harmful uses of AI systems. So there are also
00:37 things that could be improved further, but nevertheless, there's a lot to be very happy
00:42 about today. Yeah, one of the issues going into the day was facial recognition. There was an
00:49 amendment put forth by the centre-right bloc to allow exceptions for police. That was rejected,
00:57 though it could still make its way back onto the table during negotiations with the Member
01:04 States and the Commission. Yes, well, facial recognition and whether or not to ban it has
01:11 been one of the most controversial topics in the negotiations on this EU AI Act so far. And over
01:18 the last year and a half, we have seen parliamentarians working really hard to find a
01:23 compromise between all seven of the political groups. And that was something they were able
01:28 to achieve. And what they decided was that all live facial recognition and other biometric profiling
01:34 need to be banned without exception in public spaces, and that there also need to be really
01:40 strict limits on retrospective uses, as well as on many other forms of profiling, tracking,
01:46 and other systems that are often connected to facial recognition, which have all been linked
01:51 to really egregious human rights abuses around the world. On that point, what do you
01:57 say to, for example, the French government, which says it needs facial recognition for the Olympics
02:03 next year to keep visitors safe? It's very similar to what I said to MEPs in this pushback that we
02:11 saw from the centre-right that you mentioned, which is that there is absolutely no evidence
02:16 that the use of these essentially mass surveillance technologies in public spaces does keep us safer.
02:22 When we hear those claims, they're only ever coming from private companies or from governments
02:27 without evidence being put forward that we are safer. In fact, there is a huge amount of evidence
02:33 to the contrary: that we are all less safe when our behaviours, our faces, and our bodies
02:39 are being surveilled and profiled as we try to move around public spaces, whether that's sports
02:44 venues, festivals, going to a protest, going to the doctor's, or a bar, or even a religious venue.
02:52 And these are all real examples that I've just listed that have come from the EU in the last
02:58 three years. And so we have really tangible evidence of people's human rights being harmed,
03:04 not just in the EU, but around the world, and no evidence of effectiveness. So we do not believe
03:09 that people should be treated as lab rats in experimental government pilots based on
03:16 claims from companies that this is going to keep us safer.
03:18 There's also so-called predictive policing. Can you explain for our viewers what predictive
03:25 policing is?
03:26 Predictive policing refers to a wide range of different techniques, some more low-tech,
03:33 even using things like spreadsheets, for example, all the way through to algorithmic predictions
03:39 that we might think of more commonly as artificial intelligence, but that are used to try to predict
03:46 whether somebody might be about to commit a crime, or whether they might be likely to
03:52 recommit a crime, which we call recidivism. And essentially what these systems claim to do is to
03:58 be able to tell the future. They say that they know, often based on incredibly discriminatory,
04:05 historically biased information, for example, where people live, who they associate with,
04:10 the school they went to, all sorts of sensitive data about their lives, whether or not
04:16 somebody is going to commit a crime.
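(To make the mechanism the speaker describes concrete, here is a minimal, hypothetical sketch of how such a risk score can work. Nothing below comes from any real deployed system; every feature name and weight is invented for illustration.)

# Hypothetical "predictive policing" risk score -- illustrative only.
# All features and weights are invented; no real system is described.
FEATURE_WEIGHTS = {
    "prior_police_contacts": 0.4,      # reflects where police already patrol, not actual crime
    "lives_in_flagged_postcode": 0.3,  # a common proxy for race/ethnicity in segregated cities
    "flagged_associates": 0.2,         # guilt by association
    "flagged_school": 0.1,
}

def risk_score(person: dict) -> float:
    """Weighted sum of binary features, capped at 1.0."""
    raw = sum(w * person.get(feature, 0) for feature, w in FEATURE_WEIGHTS.items())
    return min(raw, 1.0)

# Two people with identical behaviour but different postcodes receive
# different scores: the bias in the historical data becomes the prediction.
print(risk_score({"prior_police_contacts": 1, "lives_in_flagged_postcode": 1}))  # 0.7
print(risk_score({"prior_police_contacts": 1, "lives_in_flagged_postcode": 0}))  # 0.4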
04:21 And when you think about the fact that the EU has a human right to the presumption of innocence, predictive policing just really turns that on
04:26 its head, and it suggests that you can use aspects about a person's life to make these really
04:34 crucial sensitive decisions that will impact their liberty. It's another thing, like public
04:39 facial recognition, where we as civil society... Are there concrete examples? Because
04:44 it's one thing to send police somewhere they think there might be trouble, which is
04:48 normal if you're trying to keep everybody safe, but are there specific examples of the police
04:53 using this predictive policing technology to put people in preventive detention?
05:01 Well, we are starting to see around the world, actually, that these systems are increasingly
05:06 being used. There have been several cases in the US, for example, where it has contributed to people
05:12 being given a criminal sentence. And whilst we might think that wouldn't happen in the EU,
05:18 I think the reality is very different. And we only have to look to, for example,
05:22 the Dutch government to see, actually, that a lot of governments are experimenting with these kinds
05:28 of predictive technologies. And I mentioned the Dutch government because that is another one of
05:33 the red lines that fortunately the European Parliament has taken a strong stance against
05:37 today. And that's something called social scoring, which has many similarities to predictive policing.
05:44 Social scoring is a technique that we've seen the Dutch government use to predict whether people
05:48 were cheating on their benefits, on their welfare provisions. And not only was this based on,
05:55 again, very discriminatory data, not individual data, but things about where people live,
06:00 where they come from, proxies for their race, their ethnicity. But it was also completely faulty,
06:07 as we know these systems are. And in fact, as a result of that Dutch benefits scandal that was
06:14 known as the SyRI case, a lot of people lost their jobs, had their children taken away,
06:20 died by suicide as a result of the false and slanderous accusations of benefits fraud that
06:26 had been levelled against them by the use of these social scoring systems. So it's really...
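(The proxy problem the speaker describes can also be shown in a few lines of code. The following is a hypothetical sketch on entirely synthetic data: a scoring system that never sees ethnicity directly still penalises one group when postcode correlates with it. All names and numbers are invented.)

# Hypothetical proxy-variable demonstration -- all data is synthetic.
import random

random.seed(0)

# Toy population: postcode strongly correlates with a protected attribute
# that the scorer is "not allowed" to use.
people = []
for _ in range(10_000):
    in_protected_group = random.random() < 0.3
    in_flagged_postcode = random.random() < (0.8 if in_protected_group else 0.1)
    people.append((in_protected_group, in_flagged_postcode))

def fraud_score(in_flagged_postcode: bool) -> float:
    """A 'neutral' welfare-fraud score that only looks at postcode."""
    return 0.9 if in_flagged_postcode else 0.2

def average(xs):
    return sum(xs) / len(xs)

protected = average([fraud_score(p) for g, p in people if g])
others = average([fraud_score(p) for g, p in people if not g])
print(f"average fraud score, protected group: {protected:.2f}")  # roughly 0.76
print(f"average fraud score, everyone else:   {others:.2f}")     # roughly 0.27
# The protected group's average is far higher even though ethnicity
# never appears in the scoring function.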
06:31 I was just going to say, Ella, as you said at the outset, today marks a first step towards
06:39 regulation proper for artificial intelligence. France's president, attending a tech conference
06:47 here in Paris this Wednesday, was asked about the regulation of AI.
06:53 The good thing is that we have a lot of good, very good talent. We have good mathematicians,
06:59 good data scientists, a lot of talent adapted to this AI environment. We will invest like crazy
07:11 in training and research. We want to be sure that this is safe and unbiased.
07:16 All the material, I would say, and the language models we have, must not be biased. And what
07:25 should be forbidden in our society must be forbidden in these models. So we need some rules. We need this
07:32 basic regulation, and much more regulation by design. We're listening to those words for the
07:38 first time, as you are, Ella Jakubowska. Your reaction to the tone employed there by the French
07:47 president? Well, it's a little bit ambiguous, I think, but in general, pointing to the fact that
07:56 we do need safe AI is something that we've heard from heads of state across the EU. And that is
08:03 what has underpinned this move in the EU to come up with the Artificial Intelligence Act. But we
08:09 didn't always feel in the initial draft that was put forward by the European Commission, that that
08:14 was actually being achieved. And that's why we've fought so hard for several years to say, if we
08:19 really want safe and trustworthy AI, and we want a European AI market that can be profitable and
08:26 innovative, the number one thing we need to put at the heart is people, people's rights, and the
08:31 things that are actually going to improve our lives. We cannot base regulation on false promises,
08:38 on buzzwords, on unevidenced claims of safety and security. And I think that's something that the
08:44 European Parliament has really taken to heart in their position. They have drawn these red lines,
08:49 but they have also set what I would call guidelines for the safe and trustworthy and human-centric
08:55 uses of AI. And going back to your earlier point, that's something that we haven't seen
09:00 to the same extent in the position that has come out of the Council, that is, the grouping of EU
09:06 member states. And so what we will see over the next few months is the Parliament negotiating with the
09:12 Council of Member States. And we'll see what comes out of it. Just very briefly, because we're running
09:16 short on time, we wouldn't be talking about all this if the world hadn't been shocked by the
09:23 inroads made by ChatGPT. This concept of labelling when you're using things like ChatGPT,
09:37 is that going to work? Well, I think it's actually very easy to get dazzled by what we hear about
09:44 ChatGPT. And actually what the EU has tried to do is focus on the real harms today coming from often
09:50 seemingly much simpler systems. And so whilst I think there are things that need to be regulated,
09:55 and the EU's AI Act would take some steps towards, for example, transparency requirements for ChatGPT,
10:02 we need to be wary of some of the claims about human extinction, for example, and to instead
10:07 focus on very real tangible harms that we have already seen in Europe and across the world.
10:13 Ella Jakubowska of the umbrella group European Digital Rights. Many thanks for joining us from
10:19 Brussels. Thank you.