Whodunnit: AI, Ethics, and Liability
01:24Hi, I'm Jennifer Schenker, Editor-in-Chief of The Innovator, and welcome to the session on Whodunnit: AI, Ethics, and
01:34Liability.
01:34And we are really lucky today to have, I would say, you know, two of the most preeminent women in
01:44AI and ethics on the panel.
01:47To my immediate right is Kay Firth-Butterfield, who is now Executive Director of the Center for Trustworthy Technology, which
01:57is a member of the World Economic Forum's Fourth Industrial Revolution network.
02:03And she is the former head of AI at the World Economic Forum.
02:10Next to her, we have Renaud Allioux, who is founder and CTO of Preligens, a scale-up which provides AI
02:20solutions to the defense and intelligence sectors.
02:23And we have Sacha Alanoca, I hope I pronounced that right, AI governance researcher and Kennedy Fellow at Harvard University.
02:35And she is an expert for the OECD's AI Policy Observatory and was named one of the
02:44100 Brilliant Women in AI Ethics in 2022.
02:48So, we have quite a lineup for you this afternoon.
02:53As someone recently said on another panel, generative AI is like a river.
03:00It can't be stopped, but it must be directed.
03:03And pretty much everyone now agrees that some sort of oversight of generative AI is needed.
03:11And the good news is that we do now have many AI principles that have been developed that actually share
03:23some common characteristics: principles of fairness, transparency, accountability, safety, and privacy recur in many of them.
03:35But now, we need to move from principles into implementation, and that's the hard part.
03:44How do we do it?
03:46So, an hour and a half ago, there was some news on that front.
03:50So, I'm going to start with you, Kay, and have you talk a little bit about that.
03:54Yes, yes.
03:56For those of you who don't know, about an hour and a half ago, the EU Parliament actually agreed to
04:03move forward the EU AI Act.
04:06And some of you might have seen the news about a week ago, where it seemed that there was a
04:11lot of infighting, and that might not actually happen.
04:14But what happened was that many of the Conservative members had actually gone to Berlusconi's funeral, and so weren't there
04:23to vote against the Act moving forward.
04:26So, that's really exciting and historic news in terms of the governance of AI.
04:35So, let me now move to you, Sacha, because you have some points of view about the EU AI Act
04:48and how it will play in the market.
04:53If you talk to most people in the tech sector, they will tell you regulation kills innovation.
05:01Do you agree with that statement, and how do you think that the EU AI Act will, you know, work
05:10in practice?
05:11Hi, everyone. Very happy to be here.
05:13And just to highlight what Kay was just mentioning, we are living in a very historic moment.
05:18For the last three years, the European Union has been preparing the first-ever hard regulation on artificial intelligence.
05:26And so, today, the fact that the European Parliament adopted this text after, you know, multiple discussions is actually a
05:33very important milestone in terms of, you know, what AI regulation could look like.
05:36And even though conversations on ChatGPT and generative AI are new, I think, to a wide-scale audience, and
05:43maybe it was the necessary Trojan horse to sensitize a wider public to the need for
05:48AI regulation,
05:50one thing really important to highlight is that the AI governance community has been at this for the last couple of years,
05:56laying the groundwork.
05:57And so, now, when it comes... you know, today we are at this session at Viva Technology, so obviously, you
06:02know, with a lot of tech makers, and Renaud will speak more on this.
06:07But obviously, I think one of the key points is also to not fall into the trap of a
06:11dichotomy of innovation versus regulation.
06:14I think we're at the stage where things are a lot more nuanced, and, again, the AI governance community has
06:19been able to reach a more granular level on that.
06:22What I find really promising about the EU AI Act is that it has a risk-based approach to AI systems,
06:28which means that you don't have a blanket approach where you regulate everything at the highest level of compliance.
06:33This is actually something with a lot of flexibility, and it can be transferable to other regions, in
06:40the sense that what you categorize as a risk might differ from one region to another, but the
06:44model of risk classification is the part that can be transferable.
06:50What's your point of view as a technology entrepreneur?
06:55So, hi, everyone, and very, very pleased to be here.
07:00As a member of the defense community, in fact, what we call the BITD in France, the
07:06defense industrial and technological base, I don't buy the "regulation kills innovation" argument, because if you look at France,
07:17the most exported goods from France are airplanes, satellites, and defense products.
07:27And all are very regulated products, and I think regulation can actually channel innovation, especially in our sector, you know,
07:38in defense, you know, in intelligence. This kind of software, this kind of technology, you should regulate it;
07:45you know, we are regulated.
07:46We have export control regulation on some of our products, and so you should be aware of what people are
07:53doing with it.
07:53You know, it's not something that should be freely available to anyone, of course.
07:57And so I think it can channel innovation, because with regulation you're also putting up barriers to entry, you're
08:02also creating markets, which could be different, you're adding a little bit of protection, you know.
08:09But it has to be common, not something done by France alone, or Germany, or, you know, a
08:15small country.
08:16And the fact that it's a European act, I think, is very good.
08:19And, you know, I compare it to GDPR, which is something that has been very successful.
08:27I don't think it killed innovation at all.
08:29And I think it's something we should pursue.
08:33Okay, you want to follow up on that?
08:34Yeah, absolutely.
08:36And I think, you know, the success of Mistral, the French generative AI company, just coming out of nowhere this
08:46week, shows that there is going to be innovation in the EU, regardless of that regulation.
08:52But also, in the US, you know, we hear, oh, regulation kills innovation.
08:59But actually, the National AI R&D Strategic Plan has just been updated to put a lot of money towards
09:08responsible AI and checking for responsible AI.
09:13But I think you would probably agree that for the moment, you know, Brussels is sort of playing the policeman
09:19for the world.
09:20And what do you think the Brussels effect will be?
09:23Will it be similar to, like, GDPR?
09:28I actually think it will, and I actually hope that it will be, because we need good regulation.
09:35And I actually like the way that the act is set up.
09:40And so, I think that the benefit is that, as with GDPR, those countries that don't have the money and
09:50the time to put to this sort of major regulation will actually be able to adopt something.
09:57And that will be great for the world.
10:00Sacha?
10:01Yes, absolutely.
10:02I think that, similarly to the GDPR, we can definitely expect a Brussels effect for the EU AI Act, because
10:09there is momentum.
10:10I think, you know, like, today, we're very lucky: this panel could not
10:15have better timing, you know, than the day that the European Parliament is adopting the EU AI Act.
10:19But also, this has been the year where, finally, you know, we're having those discussions, which are much needed and
10:24which were so far, you know, being led in a smaller community.
10:27And just as an example, Brazil actually already adopted an AI bill, which was modeled on the risk-based
10:33classification, similar to the EU AI Act.
10:37So, what is interesting is that this is, like, another continent, a very different region, that doesn't necessarily have
10:41the same, like, background in terms of data protection rights.
10:45And yet, they are meeting the momentum.
10:47So, I really hope that this will encourage other regions, you know, to see, like, it's not innovation versus regulation,
10:52but this is the right way to have the safeguards we want.
10:55So, I think you bring up an important point, because some people say, or they're concerned, that the discussion about
11:06regulating AI has been very Western-dominated, that the Global South is being left out of the picture, but also
11:13Asia.
11:14Now, you've just come back from Singapore, where there was a conference about some of these topics.
11:20Tell us what you learned there.
11:22Yeah, certainly.
11:23So, Singapore has always been ahead in its thinking about responsible AI.
11:30And back in 2019, Singapore sent fellows to the World Economic Forum's office that I led in San Francisco to
11:39start something that at that time was called the Model Governance Framework for AI.
11:44That has developed this year into AI Verify, which is a scheme that the Singaporean government has created, which actually
11:55allows users and developers to test and audit their systems, which, of course, is really important.
12:04And so, we are seeing movement in Singapore and Asia, obviously in China as well, we could talk about that.
12:14But I wanted just to pick up on what you were saying about the Global South
12:19not being involved.
12:21We've heard that, obviously Brazil is, but if we look at large areas of the Global South, there are two
12:27problems, particularly with generative AI.
12:29One is that the data is so Northern-centric.
12:34And as you've probably all thought about, the data is not only Northern-centric, but it's very male-dominated as
12:43well.
12:43And so, it's really difficult, when we think about how we use generative AI with the Global South, not
12:53to see that as some sort of digital colonialism, and not to want to do something about that.
13:00It's a good question, and I guess what we need now is, I know that in the EU AI Act,
13:11there are some provisions, I think, for auditing the auditors.
13:16But how do you think, globally, we can actually test and set up a system to make sure that people
13:32are really complying with rules?
13:36I think that's an excellent question, and I will also tie it back to the previous points that you discussed,
13:41which is how can we make sure that we have an AI agency to hold, you know, those different
13:46applications accountable,
13:48while at the same time making sure that it reflects, you know, a coherent set of values from a representative
13:54set of actors around the world.
13:55And this is where it's like, we have a tension, right?
13:57We have those AI products, which are exported around the world, and yet, the way they may be adopted, and
14:02the threshold of acceptance, may be different according to each cultural context.
14:07For example, you know, if we take the case of facial recognition technologies, I think, you know, we have a
14:12very strict way of, you know, regulating it,
14:14now with the EU AI Act, and yet, in other jurisdictions around the world, those trade-offs may
14:19be seen more positively.
14:21And we have to acknowledge this. We have to acknowledge all of those differences.
14:24So, I think having a plurality, you know, of, like, AI governance institutions, such as the Global Partnership on AI,
14:31which has the mandate, actually, of coordinating international cooperation on AI, and also making the link between AI research and
14:38policy making,
14:39as well as the OECD AI Policy Observatory, which includes a lot more countries, you know, than the core OECD
14:45members,
14:47is a good step, you know, towards having those representative voices.
14:50Now, as for who's going to, you know, audit the auditors, how are we going to, like, control this?
14:54I think we have to see this, you know, for the hard regulation.
14:57In the EU AI Act, we have the adoption this year of the core text,
15:05but then we have a two-year buffer period for businesses to become compliant and to be able to develop
15:10their self-assessments.
15:11And this leaves us also the time to be able to find the right entities, you know, to certify and
15:16audit those models.
15:18What are the dangers during that two-year time?
15:23You know, we saw with social media the unintended consequences. There could be many others.
15:29And I sat in a conference where the chief economist for Microsoft said,
15:35"Well, you know, we should just wait and see what harm may be caused and then deal with it."
15:42That, to me, is very scary.
15:44So, you know, what's your opinion about, you know, what do we do in the interim?
15:49Because there's unintended harms, but there's also a danger of monopolies developing in that interim.
15:58And it may be too late to stop them. So...
16:02Yeah, I agree with that. And maybe it's a kind of professional bias on my part, but I'm very worried about privacy on the
16:10internet,
16:11privacy on social media, etc.
16:12And AI, generative AI, everything going on around LLMs, which is amazing scientifically and technologically,
16:23can also be very dangerous, because we know there are problems in these models with copyright,
16:28with privacy, with GDPR, etc.
16:31And this is, I think, something that should be regulated, that the EU is tackling,
16:37and that we should tackle at a global level, so this is complicated.
16:42I don't know how to do it because, you know, I'm just, like, developing software.
16:46But I think this is something which is scary, and it's scary in terms of how we
16:53think, you know,
16:54because ChatGPT is amazing.
16:56But you understand, when you are playing with it, that it changes the way you are thinking
17:01when you are developing, you know, new code, when you are writing an essay.
17:05It makes you less curious;
17:10you challenge less the information you find, just, like, listening to a machine.
17:14And I'm very worried for the young, the teenagers who will grow up with this technology.
17:21You know, they will not seek out information; they will just ask the machine
17:24and copy what the machine answers.
17:27And so, for this kind of thing, you know, there should be some studies, some research
17:37to understand what the impact is on teenagers, what the impact is on research, et cetera,
17:42to be sure that people are using it but that the impact is positive.
17:49So Kay, you have a background as a lawyer, so, you know, do we need to start from scratch,
17:56or can we apply some existing laws?
18:00What needs to be done?
18:02Yeah, absolutely.
18:03I was going to jump in there, Jennifer, but you got there first,
18:06because I was going to talk about, first of all, agile governance.
18:10I think that Singapore's AI Verify is a really important example of agile governance.
18:19So governments should be thinking about that.
18:21But to your question, yeah, we've got lots of laws on the books that we could use now.
18:28And the FTC, the DOJ, the EEOC, and a couple of other regulators in the United States
18:37actually issued a statement saying, we are going to use the existing law,
18:43and so you can't hide behind the fact that you used an AI system
18:48when you have been biased or discriminatory.
18:52And so there is certainly a move in the United States to do that.
18:58And I think also, you know, we're seeing these cases in both the UK and the US,
19:03I don't know about Europe, around some of the development of images by generative AI,
19:13new images, and the copyright cases.
19:17And just to give you an example of something that the Supreme Court did two weeks ago,
19:23in a case about Andy Warhol creating new images from a photograph of Prince.
19:34And please do, if you're interested in this, read the whole judgment.
19:39But in summary, instead of just going with fair use,
19:43which is what we thought that would protect AI-generated images,
19:50they said, no, we have to look at the commercial impact on the original artist as well.
19:59So they actually found in favor of the photographer who had created the original photograph of Prince,
20:04one picture of which Andy Warhol had licensed.
20:10So that's really, really interesting and a potential development in this discourse.
20:17So the title of this session was Whodunnit, right?
20:21So in terms of liability, if something goes wrong, who do we blame?
20:32Is it the developer? Is it the designer? Is it the company that implements?
20:39How are we going to figure this out?
20:42Sacha?
20:44That's a tough one.
20:46But actually, I will be very aligned, because we are in Europe right now,
20:50with the EU AI Act, where it is the actor who puts the AI product on the marketplace,
20:55who is liable in case of harm.
20:57And of course, you know, this approach may be adopted differently in other jurisdictions.
21:01It can also be... I was speaking with some American lawyers and activists just yesterday:
21:06maybe, you know, in the United States, there will be a plurality of liable actors.
21:09Maybe, you know, there are different segments where the AI harm is generated.
21:12And with large language models, actually, to know who's liable becomes increasingly complex.
21:17So we definitely have to take this into account to have agile governance.
21:21But in Europe, the actor putting the product on the market will be liable for harm.
21:27Do you have a point of view?
21:29Yes.
21:29I think, as a company, you should be aware of that.
21:32And you should be liable for what you are implementing.
21:35When you are reselling, that's another matter.
21:37But, you know, as a company, we are very careful about what we are doing,
21:42who we are selling to, what products we are willing to build, and what products we are not willing to build.
21:46You know, there is some stuff where we said, we are not going to do it, even if we are asked,
21:50even if people offer us money.
21:52And I think, as a company, in AI, but in tech in general, you should, like, think about it.
21:59You don't just think about the money you are taking.
22:01And especially with a very big company, a very big actor, saying, "It's not my fault,
22:07it's just, like, misuse," et cetera,
22:09you know, I'm not very fond of that.
22:11You know, because as developers of a very sensitive technology, we think about it all the time.
22:17And we have even, like, an internal committee, this kind of thing, you know.
22:21And all our developers are asking us, as leaders, you know, as managers, what our limits are,
22:27what they are developing, what the use will be, et cetera.
22:31So, we have to give them answers, just as we answer the investors who ask us.
22:35So, I think, as a company, you can't consider yourself not liable.
22:40Even if the law allows it, you know, that's something that, for me, doesn't feel right.
22:48Okay.
22:49Yeah.
22:50So, I wanted to raise that case where the lawyer in America used,
22:56I think it was ChatGPT, and then filed the brief with, I think it was six cases that ChatGPT had
23:04actually made up.
23:05And he didn't fact check this.
23:09And so, now the way that the courts in the States have dealt with it through a decision in Texas
23:15was that if you have used any form of AI in drafting your brief, you have to actually declare it.
23:23So, that's a bit of agile governance that's good.
23:27But the reason I wanted to raise this is, you know, you sort of look at that and you think,
23:32how could somebody be so stupid and he should be held negligent for that?
23:38But equally, if we don't actually push the negligence or the causality back onto the system,
23:47there's no impetus for the system to get any better.
23:50And so, as we want AI to be trustworthy, as we want to be able to use AI for all
23:57the benefits,
24:06then we have to put that downstream liability on the producer as well.
24:06Okay. So, we've talked about, you know, how to deal with these issues in the short to medium term.
24:17Because these problems are global, and we are talking about global actors,
24:24do we eventually need to think about some sort of international oversight body?
24:31And what would that look like?
24:34Would it, should it resemble ICANN, which oversees the internet?
24:39Could it resemble, you know, the IAEA, the agency that oversees atomic energy?
24:47Do we have models out there that we could build on?
24:50And do we need to change them or tweak them in a way that makes sense for generative AI?
24:56And this is to all of you.
25:00I would say that we have to be very careful of not getting our attention hijacked.
25:04And this is true, you know, for the kind of harms, AI harms that we focus on,
25:08as well as where we should start when it comes to AI governance.
25:12So, just to also loop back, you know, to a point which was previously made by Renaud, you know,
25:17about current harms: I think that right now we're hearing a lot about potential existential threats, you know,
25:22and AGI potentially coming and destroying humanity, when we actually have real, current harms happening right now.
25:28So, I think the first thing to really have clear in mind is to not get our, you know,
25:33cognitive attention hijacked by all of those big claims, and to stay focused on, you know, the current timeline.
25:39Setting the right AI governance mechanisms now is also what makes AI safety possible in the future.
25:45So, everything is connected and one lays the path for the other.
25:49Now, to the point, you know, about do we need a new agency or not, I think it's the same
25:53point.
25:53We have to be careful of not getting our attention hijacked, and stay focused on the path that the AI
25:58governance community
25:59has been paving for years.
26:02Those conversations on generative AI may be the Trojan horse for AI regulation,
26:06but they've been happening for a long time.
26:08This is not coming out of the blue.
26:11And I think that, you know, it is probably also on us, the AI governance community,
26:14to do a much better job of looping in citizens, you know, and industry experts,
26:18to also be able to democratize, you know, this conversation, because it has been happening.
26:23And so, one of the key conversations we had is whether we should have an AI agency modeled
26:29on the Intergovernmental Panel on Climate Change, the IPCC.
26:33And those conversations we already had in 2018-19, notably under the G7 presidencies of President Macron and Prime Minister Trudeau,
26:41and they paved the way for the Global Partnership on AI.
26:44So, the irony is that now, you know, we're reading all of this news that we need an AI agency,
26:49and maybe GPAI doesn't have the regulatory power, you know, that has been advocated for,
26:53but we have a first version of it.
26:55So, whether it's a model similar to it, you know, with heightened regulatory power, or a GPAI++,
27:01there's already something there.
27:06Oh, I think it could be difficult to have international cooperation on this,
27:15fully international.
27:17We see that the values of privacy, of respect, and of transparency in Europe are not shared by
27:28everyone in AI.
27:29We see some countries where AI is used to monitor citizens, to track social behavior, etc.
27:37So, I think, as you said, we should not get our attention distracted. Focusing on cooperation between countries that share the
27:48same values
27:49could be a very good first step, you know, to show that we could have Europe,
27:53and, I don't know, the US and some other countries, for example in South America, you spoke about Brazil,
27:59that share some kind of democratic values, liberal values, saying, okay, we should protect privacy,
28:03we should protect human rights, etc., and AI should not be used to attack these rights.
28:09That could form some kind of bloc, because I'm afraid that
28:17AI will be misused by some countries,
28:20some countries which use it to track their population, to track dissidents, to track this kind of thing.
28:25Thanks. Okay.
28:26Well, actually, I agree. I think it's very hard to get everybody to come together internationally,
28:33and whilst we're trying to do that, we're missing some of the problems that we have currently, and not addressing
28:40them.
28:41When I worked as head of AI at the World Economic Forum,
28:46I was asked all the time this question of, you know, how are you going to bring America, Europe, and
28:53China together?
28:55And I used to say, well, I'm not, because it's actually a geopolitical problem.
29:00And it's not just confined to AI. There are all sorts of other geopolitical problems out there.
29:07And so I think we need to concentrate on doing what we actually can do.
29:11And I agree, bringing the people who actually can do something together is a great idea.
29:18Thank you. So I now want to bring in some of the questions from the audience.
29:24So one of the questions is: open-source training of machine learning models has proved very effective,
29:36but how can that work in strategic sectors where data is hard to publish?
29:45So who wants to tackle that?
29:54Open source is the foundation of the internet and of all software development. So you cannot not be fond of it.
30:02But as a company, you have to make money, and sometimes it's hard to make money with open source.
30:08So that's the first thing. So we have to agree that sometimes closed-source helps to protect IP
30:13and to protect your differentiation as a company. And that could be important.
30:19So having the data audited by an external, authorized auditor could be a good solution.
30:27And also, for example, especially if you think about defense, but also if you think about medical imagery,
30:34if you think about applications in disease control or this kind of thing, the data will be very private, you know.
30:40And so you can't open-source it. So I think we should be ready to have some data which
30:48is not open source. That's okay.
30:51But especially for medical applications, having some ability to audit it, which is basically already the case in France at
30:59least:
31:00if you are doing AI with medical data, you have to respect certain rules, like having your data
31:07encrypted and this kind of thing.
31:08So I think this is important, yes.
31:11What about if someone uses an open-source model and something goes wrong, then where is the liability?
31:24It's a real problem and one that I don't think we're going to solve any time soon.
31:30So going back... I have one other question I'll bring in from the audience, and that is, they reference
31:38the fact that the Italian government, like, kind of put a temporary ban on ChatGPT.
31:44And then OpenAI actually was able to adjust and still, you know, offer the service.
31:51So what are your takeaways from that?
31:56It was about, you know, the decision of the Italian government to temporarily shut down ChatGPT, and then
32:03OpenAI adjusted and, you know, was still able to offer the service.
32:09So what's the takeaway from that?
32:11The takeaway is that we're living in uncertain times, and we'll need very agile governance to be able
32:15to adapt, you know, to all of those things that we will not be able to foresee.
32:20And obviously, you know, I think that regulation has to come from interdisciplinary expertise.
32:25It cannot just be, you know, policymakers and lawmakers.
32:28We also need, you know, scientists and industry leaders to be part of it, especially for those kinds of
32:34outcomes, for the things that we cannot foresee.
32:36And we would need to test, you know, and iterate on the governance to be able to get, you know,
32:40the perfect cocktail of solutions to be able to solve those kind of challenges.
32:46So, we have about two minutes to go.
32:51What are your words of advice to the young companies and large companies that may be in the audience about,
32:59you know, what should they be thinking about in terms of liability?
33:02Okay, so my words of advice for startups would be make sure you think about responsible AI now and make
33:11sure you find a funder who believes in it as well, because VCs tend to be looking at the bottom
33:19line and not thinking about the responsible piece.
33:22But there is now a lot of literature that can show that by doing it right, you actually improve your
33:30bottom line. So look for that as well. And if you can't find it, I can send it to you.
33:35And for large companies, please stop laying off your responsible AI teams.
33:42Thank you.
33:44Yeah, as a founder, I think, especially in AI, but for all products, you should do something that you believe
33:51in, in terms of impact, right? If you make a company, if you are building an AI, build something you
33:57think is useful, something you think is helping people. So this is about values. So everyone in the room
34:07will have a
34:09different opinion about that. And some people will say, okay, AI for defense and intelligence is not good. But as
34:15long as you, as a founder, think the impact is important, that's the first step.
34:21Sacha, you have the last word.
34:23The last word. I would say in general, be very careful of where your attention goes. Don't get your attention
34:30hijacked by speculative AI risk.
34:33And I think, just, you know, to reiterate what the other panelists said, I think responsible AI is not
34:39just a way of being compliant, if you're based in Europe, with the EU AI Act. It's also a way of
34:44matching your values and having a product that citizens, you know, are willing to adopt.
34:49I would much prefer, you know, to adopt a product I trust and understand than one, you know, which
34:55is actually going to cause societal damage. And I hope that we're, you know, at a stage as a generation
35:00where this is something we collectively prioritize. So again, it's not innovation versus regulation; it's something that can, you know,
35:07hopefully, together, advance societal well-being.
35:11Well, with that note, I'd like to thank our panelists. Please give them a nice round of applause.
35:34Thank you, guys.