Where Will GenAI Take Us in the Next Decade

Transcript
00:00Hello, everybody, and welcome. My name is Daphne. I'll be your moderator today. I'm a French tech reporter with Sifted,
00:09a media publication that covers European startup and tech news.
00:16And today, we'll be talking about the topic that's arguably on everyone's lips at the moment, and has been for
00:22a few months, which is AI, Gen AI.
00:27It's kind of hard to believe it's only been a year and a half since ChatGPT took the world by
00:32storm.
00:33Obviously, AI has existed for much longer than 18 months, but it does feel like the last 18 months have
00:40been news upon news upon news.
00:42And I think, more importantly, this news is now really top of mind for the general public
00:49in a way that it perhaps wasn't before, at least not to the same extent.
00:54And I think, obviously, it's for a very simple reason. Gen AI has made a lot more apparent what AI
00:59can do and what it wants to do in a way that's much more tangible, I think, for the public.
01:04And with that has come a lot of anxiety and perhaps fear of the risks tied to AI, of what
01:12AI can do, of how far AI can go.
01:15And at the same time, I think we've never heard about concepts like transparency and openness as much as we
01:24have in the past 18 months.
01:27And I'm thinking of things like the European AI Act, which was passed with a lot of noise, things like
01:33Sam Altman being ousted and then reinstated at OpenAI.
01:36So we're seeing more and more, I think, general public attention around how to make AI safe, controllable, ethical, how
01:43to make it work for everyone.
01:45So just how far can AI go?
01:48Why should we be scared of it?
01:50Should we be scared of it?
01:51And how can we make it work for everyone in an equal way?
01:55Those are the topics we'll be exploring today with our fantastic panelists.
02:00So I'm joined by, starting nearest to me, Isabelle, who is the director of the Paris AI Research Institute.
02:08We've also got Thomas, who is the co-founder of open source AI startup Hugging Face.
02:15We've also got Gabriela, who is the assistant director general at UNESCO's social and human sciences department.
02:22And finally, right down at the end of the line, we've got Connor, the founder of Conjecture, which is a
02:27startup that focuses on building AI that is safe and controllable.
02:32I'll start with a little bit of scene setting.
02:35As I said, I think there's a lot of anxiety among the general public when we discuss where AI is
02:41going, how far it can go, and to what extent it has reached its limits.
02:48There's a lot of noise around it.
02:50We hear things like artificial general intelligence, superhuman intelligence.
02:55So could you just give us a bit of an idea of how much further AI can actually go?
03:03Is there exaggeration around how far it can go?
03:06Or on the contrary, do you feel like we've really not started to see what that technology can deliver?
03:17Well, maybe just to start by saying that for me, generative AI was more of a blessing in disguise, because we
03:26are in charge of the framework on ethical governance of AI.
03:29And what happens is that you realize how the capacities of these technologies can grow exponentially.
03:39And without this ethical framework, of course, there is higher risk of not being able to control the downsides.
03:46And therefore, the question here should not be a technological discussion, even though that's what it has been.
03:54We keep trying to catch up with the algorithms and the developments.
03:58And what we need to discuss is how do we frame these technologies in a way that will help us
04:05build a more inclusive world,
04:08that will help us address the climate transition, that will help us bridge gaps, gender gaps.
04:15We know that that's one of the downsides of the technologies.
04:19And therefore, it propelled this policy action that for me was lacking: understanding better how policies can shape these
04:27markets.
04:28And it did something else.
04:30It got leaders very worried.
04:32And when leaders get worried, they get to act.
04:36And then we got the executive order, and we got the AI safety summits, and then we got the AI
04:41acts already.
04:42They were already being negotiated.
04:44So I feel now we are not in the world before generative AI, where the discussion was whether regulation was
04:52going to hinder innovation.
04:53We are in the world where we're sitting together to see what kind of policies can shape these markets better.
05:01because we know they are highly unequal: two countries developing 80% of the AI products,
05:10women not being represented, half of the world not being well connected to a stable internet,
05:18the discriminatory threats, and then all the existential threats.
05:22because what happens with generative AI is that it learns by itself.
05:27And therefore, this is an area that I feel everybody was very scared of,
05:31but that is at the same time super promising.
05:35But it depends on us, and on how we frame it, to make sure that it delivers for good.
05:41Yeah, to pick up on that, I definitely agree that the potential of this technology is incredible,
05:47and has grown at this exponential rate.
05:49I think this is kind of one of the most noteworthy things, just how fast this technology has moved.
05:56You know, a hundred years ago, developing a new technology, distributing a new technology like this could take decades or
06:02centuries even.
06:03Nowadays, we can go from, you know, having GPT-3 or GPT-2 even, which could barely string together a
06:11sentence,
06:11to having, you know, systems today that can explain quantum physics to you in perfect Shakespearean English,
06:17and all within, you know, less than a second.
06:19This is an incredible rate of progress.
06:21So, I think, to pick up on the original question asked about how far can this go?
06:26Well, I think we don't know, and we have no reason to suspect it ends anywhere nearby.
06:33As an example, chimpanzees have the same types of brains we do, just about a third the size.
06:39Turns out, if you take a chimpanzee brain, you make it three times bigger, they build nuclear weapons and go
06:44to the moon.
06:44That's a pretty big change.
06:46But what happens if you have something of that size, and you make it another time three times bigger?
06:49Well, nowadays, you can make an AI three times bigger quite easily.
06:53You just spend three times the money, you know, modulo some engineering time.
06:57So, as for the true limit of what AI technology could reach, we don't know, but it's
07:02probably far beyond what humans can do.
07:05Are humans optimal? I don't know about you, but personally, I forget things all the time.
07:10I can't do math in my head that well.
07:12I think it's very likely that these systems can become vastly superhuman, and if the exponential continues, they will probably
07:20do so very, very soon.
07:30We are now in a phase where everybody has touched AI with generative AI, but AI was already in our
07:39life.
07:39Just people didn't notice it. You have very amazing applications that are using AI in your pocket each day.
07:48We come back to this comparison between humans and AI, probably because of the way we are interacting with generative
07:57AI.
07:57It's not just the power of generative AI. It's the fact that you can talk with it, and then
08:03people again forget that this is a tool.
08:07It probably still has a lot of progress to make, so I don't know personally where it will end, but
08:15today a lot of things are still missing from AI.
08:19For example, the capacity to explain its reasoning, the capacity to plan actions, and there are a few things that
08:31are announced for the next generation.
08:33So I fully agree that it will maybe not end during our lifetime, but I'm not sure this will continue
08:41in the same direction.
08:42When you consider research, you always have a new technology, so we have a jump, and then it continues little
08:50by little to improve, to become more performant, more optimized, and then you have something new coming.
08:58So maybe it will be a more powerful generative AI, or maybe it will be something completely different. We just
09:05don't know.
09:07Yeah, the same for me, but let me frame it a bit differently. I think it's really boring the way
09:13we see that, because we lack imagination.
09:16We compare that to human, we say AGI is above human, but AI can do a lot of things we
09:21will never be able to do, right?
09:22So this week, I was doing some work with a new collaboration. We started on computational chemistry, on predicting the
09:31property of material.
09:32That's just something we cannot do as humans. We cannot predict the property of a new material. AI can do
09:37stuff like that.
09:38Voice cloning, we cannot clone voice. I cannot. AI is already above the human level in voice cloning. Maybe you?
09:44Yeah. Some people can.
09:45Now, I think sometimes we think a bit too small by just comparing to
09:49humans, which also leads us to this very boring way of thinking about AI, that we're going to replace humans with
09:55AI.
09:55That's so boring as a way to think about how to build AI. It's much more exciting if you think,
10:00okay, how can I augment this with like these new things?
10:03How can I do stuff that humans just cannot do? How can I help a human or a researcher to do
10:07what they do in a way that a human cannot? And that's fine.
10:11So yeah, I think we're going to see tremendous improvement. I think we're just at the beginning.
10:16What I found personally super exciting is how we went from, you know, this text boom to now image, to
10:22now speech, to now video. I work also a lot in robotics, seeing that go to robotics, quantum chemistry.
10:28So I think the domain, the field of application where we can reuse these kind of general techniques that we
10:34discovered can work is tremendous. So yeah, it's going to be big.
10:38Bouncing on top of that, do you feel there's a need to educate the general public about
10:46where the real risk lies?
10:48I feel like I've heard that from someone before where they were saying that a lot of the public is
10:52scared of the risk of AI in a very science fiction way.
10:56And that kind of displaces the focus of the conversation from where the real risk lies.
11:01Would you say that that's an accurate depiction of the current state of things?
11:07I think that for me, the main risk today is misuse. You give this tool to everybody, every citizen in
11:16the world. No, not every citizen, every citizen who can access it.
11:19Without explaining carefully how it works, without explaining to people that they have to keep in mind that they are the
11:29final users who decide what will be done,
11:31not just accepting results from AI without any critical sense.
11:39Fully agree. I think the main danger today, and it's a real danger, is using AI where it's not supposed
11:45to be used, just because you don't understand the limitations,
11:47just because a lot of people still lack...
11:49Well, I think the notion of hallucination and this type of issue start to be much more widespread.
11:55But if I ask my parents, who use ChatGPT, whether they understand what a hallucination is, where it comes from,
12:00and where it is more or less likely to appear in the model's generation, they have no
12:04idea.
12:05A lot of people just don't have any idea when they can trust the output and when they cannot.
12:09And this is a question of education, I think, in large part.
12:13Yeah. And just using AI everywhere, very quickly.
12:16And I think one of the scariest things about today is not that we have superhuman AI that's
12:21starting to take control already.
12:23It's that you have humans, real humans, who start to put AI everywhere without thinking too much about, yeah, is
12:29it a good place to put some AI?
12:30That's what I'm scared about.
12:33Well, I have to take a little bit of distance here, because, no, I completely agree that education,
12:44awareness, people understanding what it means to interact with the technologies, human-computer interfaces, all of these things are very
12:51nice.
12:51We should, of course, because at the end, it's the individuals who interact with the technologies usually.
12:58But the impact is systemic.
13:00It's systemic because it's affecting labor markets, because it's affecting international competition, because it's changing the way we do research,
13:09because it's changing how we produce things, because it changes how we interact.
13:14Generative AI aside, more and more AI is being used to make decisions.
13:20Who gets allocated benefits from the welfare system?
13:24Who gets access to health, education?
13:27So I feel there is a great scope to define the direction of this technological drive.
13:35And we need to get awareness in the public sector, but also in the business sector, of what are the
13:42frameworks that will allow us to ensure that these technologies are really developed for good.
13:48And that means more transparency, that means more accountability, that means more rules, and more policies.
13:56Because up to now, it has been a fantastic drive, fantastic drive.
14:01You go and get these amazing technological breakthroughs, and then you wonder, how does this affect the schools?
14:11And you get all the universities going, oh my God, do I allow my students to use GenAI or not?
14:17I think that we really need to turn the conversation around.
14:21What do we need these technologies for?
14:23And how do we ensure that we frame it with all the tools that we have, which are investments, incentives,
14:32subsidies, policies, and regulation?
14:36Very basic.
14:37If something goes wrong, and if there is harm, there should be some liability rules, which are not there anymore.
14:45If the labor market is going to change, and my skills are going to become outdated, yes, I have an
14:50individual responsibility to try to make myself a little bit more tuned with the markets.
14:55But there is also a role for the state to provide that enabling environment to take advantage of it.
15:03I'm very concerned that the way it is going now is a technological discussion.
15:08And we're just letting it go like that.
15:11It was impressive that when GenAI arrived, the President of the United States issued the Executive Order, and then it
15:20goes through all the issues included there.
15:22One of those is, we need to evaluate ex ante, before these things reach the market.
15:29It's like 101.
15:31It's due diligence.
15:32We ask that for any other sector.
15:34You don't put anything in the market you have not tested.
15:36But that is what we were doing.
15:39Now, rule number one, let's test it before we put it in the market.
15:43I mean, there are many things that we need to realize to ensure that we control the downside and that
15:50we get the best out of it.
15:52Yes, beautifully said.
15:54I couldn't agree more.
15:56The problem of AI is a systemic problem.
15:58It is not a technical problem.
16:00There are technical aspects of this problem, which are very important, such as how do you do evaluations?
16:04How do we understand these systems?
16:07How are they built?
16:08How are they funded?
16:08Et cetera.
16:09But fundamentally, there is a much deeper political and cultural problem of how these systems affect our institutions, our society,
16:17our culture, our lives.
16:18If the only problem was that some people accidentally use the system in ways that's not intended to be used,
16:24this would already be a big enough problem.
16:26But it gets much worse than this.
16:28As an example of what I mean also here by a growing systemic risk, here's a simple question.
16:34Who controls Google?
16:36Think about it for a second.
16:37Which person controls Google?
16:41The real answer is nobody.
16:43It's a system.
16:44You could say, well, the CEO does.
16:46But imagine the following.
16:48Tomorrow, the CEO of Google decides Google is a huge risk to society.
16:53It's extremely dangerous.
16:54It needs to be shut down.
16:55What happens?
16:56Well, he goes into the office the next day, jumps on the table, says, shut it down.
17:00Shut it all down.
17:01What happens?
17:03He stops being the CEO of Google.
17:05Even he can't shut down Google.
17:08There is no person.
17:09There is no person who can shut down Google, even if it was a huge risk.
17:15And this applies to any large corporation, to any complex techno-political system.
17:20These systems are self-perpetuating.
17:22They're self-controlling.
17:23Is there any individual person who controls the US government, or the Chinese government,
17:28or any other government?
17:29Well, no.
17:30And now, as these systems continue to evolve as political and technical systems,
17:35as AI gets integrated into decision-making loops, into data processing, and so on,
17:40more and more decisions are going to be made by algorithms that we fundamentally do not understand.
17:46It's very important to emphasize that AI is not normal software.
17:50Normal software is written by programmers.
17:54They have lines of code that tell the computer what to do.
17:57This is not how AI works.
17:59AI is more like grown.
18:02It's more organic.
18:03You have big piles of data, and you use big supercomputers to grow a program.
18:10It's called training on your data.
18:12But the internals of these programs don't make sense to humans.
18:16They're not something we understand or we can read.
18:19They look like huge piles of numbers.
18:21They work.
18:22They can do fantastic things.
18:23But we don't fundamentally understand them.
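To make this "written versus grown" contrast concrete, here is a minimal illustrative sketch in Python (the toy target rule and the tiny hand-rolled network are assumptions for illustration only, not anything described on stage): the explicit function can be read line by line, while the trained model's behavior ends up encoded in arrays of numbers that nobody wrote by hand.

```python
# Illustrative sketch: "written" software states the rule; "grown" software learns it from data.
import numpy as np

# "Written" software: the rule is explicit and readable.
def written_rule(x):
    return 2.0 * x + 1.0  # a human can point at the 2.0 and the 1.0

# "Grown" software: we only provide examples and let optimization find the parameters.
rng = np.random.default_rng(0)
xs = rng.uniform(-1, 1, size=(1000, 1))
ys = written_rule(xs)                              # training data generated from the target rule

w1 = rng.normal(size=(1, 16)); b1 = np.zeros(16)   # randomly initialized parameters
w2 = rng.normal(size=(16, 1)); b2 = np.zeros(1)

lr = 0.05
for _ in range(2000):                              # gradient descent "grows" the program
    h = np.tanh(xs @ w1 + b1)
    pred = h @ w2 + b2
    err = pred - ys
    # backpropagation by hand for this tiny network
    g_w2 = h.T @ err / len(xs); g_b2 = err.mean(0)
    g_h = err @ w2.T * (1 - h ** 2)
    g_w1 = xs.T @ g_h / len(xs); g_b1 = g_h.mean(0)
    w1 -= lr * g_w1; b1 -= lr * g_b1
    w2 -= lr * g_w2; b2 -= lr * g_b2

# The trained network approximates the rule, but its "source code" is just numbers.
print(np.tanh(np.array([[0.5]]) @ w1 + b1) @ w2 + b2)  # should land near written_rule(0.5) = 2.0
print(w1)  # no line here says "multiply by 2 and add 1"; the behavior is spread across the weights
```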
18:25And more and more decision-making power is being pushed into these systems, such as, for example, for trading.
18:32Think of a hedge fund.
18:33Think of a large corporation.
18:34As these systems become better and better at faster decision-making, at better decision-making, processing more information,
18:40as Thomas has talked about, having systems that can access new data modalities that humans potentially can't even process,
18:46and make decisions based on this.
18:48More and more of the decision-making power and the ability to pull the plug or make dramatic changes leaves
18:54human hands
18:55and moves into systemic or technical hands.
18:58And this is a large-scale risk.
19:03I love that analogy with Google.
19:06Do you see AI going down the same path as Google?
19:09And by that I mean being consolidated in the hands of a few actors in the same way that we
19:14see with other technologies today?
19:15Is that the situation we're heading towards?
19:18I think it's much worse than that.
19:19If Google was completely consolidated in one hand, it would be less dangerous.
19:22Because then we could find the guy and talk to him.
19:25Or if he does something wrong, you can put him in prison.
19:27If there was one guy, Mr. Google, then it would be much easier to hold Mr. Google accountable.
19:32You can find Mr. Google.
19:34You can throw him in prison.
19:35You can talk to him.
19:35But there is no Mr. Google.
19:37Even if one person does something bad, maybe they go to prison.
19:40But the institution is still capable of doing things and hiding things.
19:43This is a large problem that exists in governments and things and whatever.
19:47The more distributed power is, the harder it is to hold power accountable.
19:52So there is a very strong incentive for power structures, corporations, governments, etc.
19:57To never have any one person be in charge.
20:00The more a machine is in charge of something, the more you can absolve yourself of responsibility.
20:04You can say that the machine made this choice.
20:06The community made this choice.
20:08You know, the whatever.
20:09Someone who isn't me made the choice.
20:11Even if you benefited from this choice.
20:13So it is true that for technical reasons, such as the capital intensiveness of AI systems,
20:19it's becoming very centralized, such that only very large megacorporations are capable of building the most powerful and advanced technology.
20:25But partially, the reason they're doing this is also so they can absolve themselves of responsibility.
20:31They can say the machine made the choice.
20:32Not me.
20:33And eventually, this will be true.
20:35We can argue about whether we're already there or not.
20:37We're definitely already there, for example, for various insurance-based decisions and such.
20:40Decisions are already being made by machines.
20:42And they're seen as impartial.
20:43Because it's the machine who made it.
20:45It's not a person who made the decision.
20:46There's no one you can fire.
20:47It's the machine that did it.
20:49And this is very lucrative and very good for these companies.
20:53Especially as the systems become more powerful.
20:57Did you want to add something?
20:59No.
21:00So how do we bring back the power into the hands of the citizens?
21:08You mentioned, Gabriela, a few ways.
21:11You talked about regulation and policy.
21:14Is that the way forward?
21:15Or are there other things that we should think about?
21:18What's the role of corporations?
21:19What's the role of businesses, for example?
21:24Well, I tend to agree on this depiction.
21:28The only point is that business will always take advantage of the environment.
21:34If there are loopholes, if there are no responsibility or accountability mechanisms, they will use it.
21:41And therefore, when we go into discussing these issues as a business versus government versus citizen versus...
21:48It's not the point.
21:49It's the ecosystem that allows us to have this high concentration of power, technological and economic, in the hands of a few
21:58firms and a few countries.
22:01And therefore, what we need to ask is: is that the business model that we should be allowing to continue to
22:08exist?
22:10And I would say, no.
22:12I don't think that's the case.
22:13We cannot go into what some people have said, oh, my God, you cannot regulate because you don't understand the
22:20technologies.
22:21Nobody understands, first because of the black box and now because of GenAI, or because it's a computer and the
22:26computer is deciding something that probably even those developers didn't know.
22:30What you can do is ask how we ensure the rule of law, which is pretty straightforward, and I'm not
22:38getting technical, I'm going into a really philosophical, legally based discussion.
22:43You go for the outcomes.
22:45If there is harm, there should be compensation.
22:48If there is harm, there should be a way of me knowing that I was not given the opportunity to
22:54have a business, a job interview, because the algorithm was biased.
22:59Or because there are existential threats, I should know and I should be protected.
23:03Who is to protect people?
23:06I pay my taxes to be protected in my property, in my privacy, in my...
23:10Governments need also to increase their capacity to understand how to shape the business model that will deliver much better.
23:18Delivering much better is reducing the high concentration, allowing for more companies, businesses to develop their technologies, allowing for more
23:28countries.
23:29Core to the issue, who owns the data?
23:33Who can use the data?
23:36Who can train on the data?
23:36Because there is also a problem.
23:38The data is highly representative of countries that are capable of using it and gathering it.
23:45And then you get this very high concentration at the upstream, and then discrimination at the downstream.
23:52So I feel there are many ways in which we can start to rethink the way these things have been
23:57done.
23:57Not to cut it short, because I don't think that that's the purpose.
24:01The purpose is to see what outcomes are we getting, and try to use the frameworks, the ethical frameworks that
24:08actually UNESCO is trying to advance, to reward the good outcomes, but to curtail the bad ones.
24:16And that's nothing more or nothing less than the rule of law.
24:21So I wanted to ask you, Gabriela, I think that this is a question of speed.
24:29Because technology is evolving very, very quickly.
24:33You're right, it's a systemic problem.
24:35But all the national and international organizations, the regulations, the law, is always a little bit slow to be invented
24:45and to be realized.
24:46For example, for the AI Act, it took, what, two and a half years.
24:52And between the start of the discussion and the signature of the Act, Gen AI was invented.
24:58And maybe we have also to rethink how the international organizations are working.
25:04But no, because, you know, what I want to escape here is a sense of powerlessness.
25:10We are not powerless.
25:11I agree it's very, very complex in terms of the fine definitions when things are moving so fast.
25:18But there are very specific things that you can do in terms of institutions and rules and frameworks that already
25:28exist,
25:29that we work with: human rights frameworks, which say you should not be discriminated against,
25:35you should not be abused or manipulated, your privacy should not be taken away.
25:42All these things exist.
25:44Many countries, for example, have institutions for access to information and transparency.
25:50If the government takes a decision, or if somebody takes a decision that discriminates you,
25:56now, we, in democratic societies, we have access to those institutions.
26:00The challenge is how these institutions are going to equip themselves to address the issues that come out from these
26:10technological developments.
26:12Very easy, well, not very easy.
26:15I don't want to portray that it's very easy.
26:17But imagine that you establish the right to know when a decision is taken that affects you.
26:26Yeah.
26:27It's not there yet.
26:29Is that that difficult?
26:32I mean, we just need to rethink these frameworks.
26:36And this is what we're doing at UNESCO when we work country by country because it depends where they are.
26:41It depends on the capacities.
26:42It depends on the laws.
26:43There are countries that are very risk takers, fine.
26:46There are countries that are risk averse.
26:48But the whole point is generative AI has increased their capacity 20-fold.
26:53We citizens, we governments, we institutions, we need to increase our capacities.
26:59The fact that the UK and the US and France are creating their AI institutes, you know it.
27:07It's because we need to know.
27:09So let's not get worried about the speed, let's just speed it up for ourselves to try to understand better
27:15and put the tools in the hand to frame it much better.
27:19Maybe just for a note, a positive note, because I think a lot of these problems we see and we
27:24talk about, and that's what you say,
27:26there are things that already existed before.
27:28They are not new.
27:28Like algorithmic responsibility: if you read Weapons of Math Destruction by Cathy O'Neil, these are already things that were
27:34there.
27:35Algorithms were already doing credit ranking in the US and already having problems, you know, in how they were defined.
27:40So these are things we knew, and AI doesn't fundamentally change this.
27:44We just need to apply the things we knew and maybe widen this.
27:47It's not like Gen AI is bringing a new thing here, in my opinion.
27:52The same for malware and all these things.
27:54I think, oh, a model has a malware.
27:56Okay, we already had malware on our computer.
28:00Maybe just extend security to also AI, you know, objects, and that's it.
28:05There's a lot of things we already knew.
28:06These are risks that were there in computer.
28:08So, yeah, I'm not so worried about this.
28:12Yeah, I think I agree kind of with everyone to various degrees.
28:15There is a problem where institutions are slow to various degrees compared to the technology, or rather the technology is
28:20so fast.
28:21But, to agree with Gabriela here, I couldn't agree more.
28:25Fundamentally, I think the deeper question is, who gets to make these choices about how our society is run?
28:30Is it done by whoever can get to market first?
28:33Is that the person who gets to decide? For example, my parents, you know,
28:39still use Facebook sometimes.
28:40And there is terrible Gen AI stuff on there.
28:43Just the worst garbage.
28:45Just manipulative, disinformation, terrible things.
28:49That's obviously not good for them.
28:52Or, like, I have a young cousin who is, like, exposed to this kind of stuff.
28:52This is obviously a toxic pollutant.
28:55It's like a chemical that is poisoning people.
28:58And there's no responsibility for this.
29:00It's open source.
29:01You know, it's done by some actors who cannot be held accountable.
29:04There's no laws against this, et cetera, et cetera.
29:06So, the cost of these technologies is being borne by vulnerable people in society.
29:11And soon, the vulnerable people will include everyone in this room, and everyone on Earth, as these technologies become more
29:17powerful.
29:17This is a big problem.
29:20Now, how can we, you know, both benefit from this technology while also pricing in the externalities?
29:26This is, for me, the deeper question.
29:28Yes, we have liability laws.
29:30Why are they not being applied to model developers?
29:33That's a question I have for international governance.
29:35Why, if I develop a model, and it is used to cause harm, am I not liable for this?
29:41Currently, I can develop a terrible disinformation model or a malware-generating AI system, release it onto the internet.
29:48It gets downloaded by, you know, Russian or North Korean hackers who use it.
29:53I have zero liability.
29:54This is completely legal in the current system.
29:57And is this how we want to run our society?
29:59Maybe the answer is yes.
30:01I fully disagree there.
30:02Yeah.
30:03Fully disagree.
30:03I think, who is the person responsible on Facebook?
30:07Is it the person who is posting these ads, right?
30:10Or, like, this fake photography?
30:13Or is it Adobe because they built Photoshop, which enabled everyone to build deepfakes?
30:17Who is the real person?
30:18I love this.
30:19Also, I think there's a lot of hubris, often, because we think we are the only one who can build
30:23AI models.
30:24We think, you know, we are the kind of chosen ML scientists who understand.
30:28I think Russia and Korea have very smart people who can do this technology as well.
30:34There is a part of ML right now that has this very ivory tower thing where they think
30:39they are much smarter than all the rest of the world.
30:42But in reality, there are smart people in a lot of places.
30:45With all due respect, I think you misunderstood what I said.
30:48And I understand your business depends on this not being true.
30:51No, I also need to debate for this panel to be lively.
30:54The point I'm making is not whether it's good or bad.
30:56My point is saying, is this how we want to run society?
30:59I'm not saying this person should be responsible, or that person.
31:02I'm saying there is currently no decision being made.
31:04No one is deciding. No one is debating. No one chose, okay, this is the part of the chain that
31:11we make responsible for this.
31:13Currently, there is just nothing. There is no responsibility. There is a pollutant in the water. People are being poisoned.
31:19That should be the AI act. That's the idea we're starting to try to set up.
31:23Let me just chip here, because I think that there are very simple answers to this kind of issues.
31:30I agree with him. I think that it's very difficult to allocate responsibility if something goes wrong.
31:36And we cannot just go with the notion that because there are layers and layers of developers that are building
31:42on models and models and models,
31:44and then you cannot know why the boy died.
31:46Oh, I'm not saying you cannot know. Let me just finish. Let me just finish.
31:49There is one single policy rule in the US that I feel would be beneficial to eliminate, and that would probably
32:00help us to get more accountability,
32:02which is the Communications Decency Act, which disconnects the carriers of information from the information they carry.
32:10You don't do that with the media. You don't do that with the media.
32:14The Guardian, if they carry something that is not true and that is not contested and that doesn't have the
32:22source very well verified, they can be sued and they will be closed.
32:27These platforms not only don't abide by that very simple rule-of-law kind of thing, they have free rein.
32:36Because they have this policy decision, which is you are not responsible for the content you carry.
32:43I think that's wrong. So that's why I feel that the debate needs to be elevated, not just to be concerned
32:50about the downsides.
32:52Because we all have downsides. We all have downsides with all the technological waves before.
32:58Because you have the dual use of every single technology that humankind has invented.
33:03The point is, what are the policy choices? When you have, for example, deepfakes, no?
33:09And we had Scarlett Johansson's voice being used just lately. Why don't you ban it?
33:16That's what the US is saying. Deepfakes, banned.
33:20In my recommendation, we ban mass surveillance and social scoring. It's banned.
33:26And then you develop the rules and the institutions to make sure that you can follow through from those decisions.
33:32So I feel that the core of the issue here is how you develop these accountability frameworks that make it
33:39easier for us to understand and shape behavior.
33:41Because market actors always shape behavior with the incentives and with the rules.
33:47And therefore, I feel it's not so complex.
33:49How do you ensure the responsibility?
33:52And one of the things that the recommendation of UNESCO has is you do not grant legal personality to AI
33:59developments.
33:59And you might think that's, again, science fiction.
34:02Why would you give legal personality to Sophia or to one of these developments?
34:08They are machines. They are not...
34:11Well, there was this debate that maybe they should be responsible. Responsible for what?
34:15Well, members of UNESCO decided, no. Human responsibility.
34:22And whoever is there, we need to identify them.
34:25This is very bad, because we humanize, we anthropomorphize these models more and more when we do that.
34:29Or the CEO in China: an AI is now the CEO of, I forgot the name of the
34:34company, right?
34:35But these are things where we anthropomorphize them a lot.
34:39And then that starts to shape, you know, the public mindset, the kind of education around AI.
34:45That's maybe the worst thing you can do today.
34:48Yeah, yeah.
34:48Now, I'll just say, I don't want to...
34:51I'm not here for absolving all responsibility.
34:53That was not what I meant to say.
34:55I meant to say it's much more complex than just saying someone who built the model should be liable for
34:59everything the model can be used for.
35:02I think we should think about the system.
35:04What will you use this model for?
35:05Are you going to use it to do deep fakes?
35:08Is your model one that generates text?
35:10It can generate fake news.
35:12That's a bad thing.
35:13It can also be a writing aid when, like, you're starting a book and you need ideas.
35:18Is it a bad thing?
35:20No, I don't think so.
35:21We should be able to have writing help.
35:23That's nice.
35:24But if you generate in mass to do fake news, disinformation, that's a bad use case.
35:29But maybe that's the same model that's used for both.
35:31So the question is just, it's much more complex than just saying training a model is bad.
35:36And we should be liable for every bad thing that comes out.
35:38Can I just interrupt you here because I think it would be a good moment to open access to the
35:45audience and give them a chance to jump in and actually ask you questions about this really interesting conversation that
35:51has been taking place.
35:52So thanks very much.
35:53There are, I think, mics that are being circulated if anyone has a question.
35:58Does anyone?
35:59Yeah, we've got a question down there.
36:02Thank you.
36:03Thank you.
36:04Well, there was talk about regulation when it comes to AI.
36:09But I'd like to remind you guys that there has also been talk about regulation of social media.
36:16And until this very moment, we still haven't got a consensus on social media.
36:23And it has been there already for 20 years, to say the least.
36:28So do you guys think that we're going to take another 20 years to decide on what to do with
36:33AI?
36:34Or do you think we've learned from this "trauma", and we're going to be much
36:44faster now?
36:45Or are we seeing the iceberg and still not
36:50doing anything right at this moment?
36:53Yeah.
36:55Normally, I always answer that I am an optimist.
36:58For this question, I am pessimistic.
37:00I think you're right, because there are several international organizations trying to do things.
37:07UNESCO is one.
37:08But it's also very difficult to align all the countries on anything, and especially on ethics.
37:19And then, at some point, the discussion will always start again with new technology, new fears, and new...
37:27So we need absolutely to do something.
37:30But I'm not sure this debate will end very soon.
37:36But I have to say that, first of all, we are not guys, because you have some girls here.
37:43But second, on the social media, the fact is that it seems that we learned from it.
37:52We already learned that that's not the model that we need.
37:56That was the free-for-all model, the Wild West, first come, first served, those that own the technologies can develop
38:04and use them.
38:05Thank you very much.
38:06But I think that we have proven, since Christchurch, that in collaboration with the business sector, the government was able
38:16to develop some rules in terms of content moderation, in terms of investments in those areas.
38:23UNESCO just launched, this year, guidelines for regulating platforms, and we're bringing together the regulators to exchange views on how
38:34to do it.
38:34So I think that the social media is the exact example of why we cannot leave this in the hands
38:41of the few.
38:43Yeah, we've got another question to the front.
38:49Thank you.
38:50A quick comment.
38:51There is regulation for social media.
38:54In Europe, it's the Digital Services Act.
38:56We're waiting for it to have actual teeth.
38:58But that is happening.
38:59And actually, when we had issues, for example, in France with New Caledonia, all of a sudden, we had TikTok
39:03being suspended.
39:04So it does happen that we can press the button to turn social media off.
39:08The question is, we're talking a lot about, I would say, the technology of regulation.
39:13Because, as was pointed out, it's a race between technology and regulation.
39:19What do you see in terms of, say, new approaches, new technologies for getting quicker regulation?
39:25There are things around the idea of planned, adaptive regulation, which we see in cybersecurity.
39:32We don't talk enough about true risk scenarios, which are very important, because if we're too fixated on big risk
39:41scenarios that actually don't happen, we are wasting time and wasting, actually, the focus of people.
39:46So how can we move further into what we call the technology of regulation, which is so necessary to keep
39:52pace with the change in technology itself?
39:58He is an expert. Guilty as charged, Guy Philippe.
40:02No, I think that you're completely right. The fact is that now we need to speed it up in the
40:08understanding of how the policies affect that business environment.
40:13And I think this is what we are really focusing on. I don't think that the governments now are waiting
40:20to see how these things evolve.
40:22I think they are exactly creating institutes to understand better. They are getting together, the regulators, and I'm working with
40:31the European Commission to sit there and see what kind of institution, what kind of profile, what kind of rules
40:36to ensure that we shape the way the markets go.
40:40And I completely agree with you. There must also be the possibility of using GenAI to speed up the
40:47development of good regulations.
40:49I think this question of social technology, as I would call it, and of institutions is an extremely important one.
40:56At least since the end of World War I, multiple thinkers in sociology and philosophy have talked about how technology
41:02is on an exponential track, but our ability to coordinate, our wisdom of our society, so to speak, is not
41:08an exponential.
41:09And this is fundamentally where I think all of our problems here come from.
41:13There is no reason that a mature, well-run society cannot have extremely powerful, potentially even dangerous technology.
41:21But let's be honest here. Do you all feel that our governments have nukes under control and this is totally
41:27fine that these weapons exist?
41:29Does everyone feel it's totally safe that our governments have nuclear weapons and nothing will go wrong?
41:33I don't feel safe about this. It's not because I think they're bad people necessarily, but some of them are.
41:41But overall, building institutions is hard. Regulation is hard. Technology is complicated.
41:47And I think this is the true question of our era. How do we build social institutions that can responsibly
41:52steward technology that is as powerful or even more powerful than nuclear weapons in the future?
41:57I don't have a simple answer to this question, but I do think it is the core question of our
42:01time.
42:01That's why I also like to think about it as more of a social or institutional problem than purely a
42:07technical problem.
42:07The technology will happen whether we want it to or not. The question is not should technology happen.
42:14If we don't make a choice, a choice will be made for us by the market. The market will make
42:20a choice for us if we don't intervene.
42:22And currently, we're quite slow on this and we need to do better.
42:28Just to add one thing, because if I push your idea to the extreme, it goes in a direction I really
42:32don't want us to go,
42:33which is starting to put AI in charge of deciding law and how societies should be organized.
42:38We should not go in this direction too soon.
42:42Yeah, so there is a time for just making law and voting and discussing, which is, I think, an incompressible
42:50time unless we do something really bad.
42:52But also, I think one element there is, again, the question that we in the philosophy world would say,
43:01what do we need these technologies for? Have we answered that question?
43:07They have been developed with profit-making purposes.
43:13Should we change the main objective?
43:16Well, yes, I would say yes. How do you change that?
43:21There are ways, no, I'm not saying that you are not going to make profit or that the market should
43:26not work the way they need to work.
43:27What I'm saying is, within the limits of the possible, it can not only be about profit.
43:34I fully agree. Just to say, there are a number of non-profits also building AI.
43:38Allen AI, for example; not everyone is building AI for profit, even though VivaTech is very business-oriented.
43:43And we are also for-profit, but they are non-profit. EleutherAI is a non-profit, right?
43:48And these people are also building AI, not for business.
43:52Yeah.
43:54Talk to the big tech.
43:56By dollar amount, it's definitely a business thing.
43:59Weighted by dollars.
44:00Weighted by investment, AI is definitely a business thing.
44:04It is not a non-profit thing.
44:05And I think what Gabriela is saying, which I can only agree with, is that fundamentally, we get what we
44:11measure.
44:11We get what we reward.
44:12If we reward the maximum shareholder value, then, oh golly, will we all have a lot of shareholder value, that
44:20we're here at VivaTech and we'll maximize shareholder value, to the detriment of other things, to the externalities of other
44:27things, to the pollutants of our social media, and so on.
44:30Maybe this is a compromise we're willing to make.
44:33Maybe not.
44:34Sometimes it's strange because, here also in Paris, we have Kyutai.
44:37That's another non-profit lab, if you didn't follow it.
44:40It was just founded by Xavier Niel.
44:41So Xavier Niel is already, of course, a big business.
44:44But I think maybe what they optimize is not so much shareholder value, but kind of an attention value of
44:51human society, which is making this splash, you know, this GPT-4 kind of splash.
44:57I think there is this strange thing where we follow some really weird metrics to develop our model.
45:04I just wanted to add that, yes, this is a business, but this is also a business because citizens accept
45:09it.
45:11There is so much hype around AI that people have completely lost their sense of objectivity.
45:22It's very difficult to ask today a young guy or a young lady, do not use social networks, be careful.
45:30It's, I can't say it in English, the "golden age" effect in French.
45:36The more you have products, the more people ask for products.
45:40And we have to break this if we want to have some control over it.
45:46With that, I think we are exactly on time, actually.
45:50This has never happened to me.
45:52But thank you very much for a brilliant panel.
45:55And thanks to you guys for great questions.
45:57Please join me in a round of applause for our panelists.
45:59Have a nice day.
46:00Thank you.