  • 5 weeks ago
The New York Times Debate AI Will Reveal the Best of Humanity

Category

🤖
Technology
Transcript
00:00Hello, hello, good afternoon. Welcome, everyone.
00:05It's great to see so many of you joining us for this last session of the day, I think.
00:13So, my name is Sophie Lambin. I'm the founder and CEO of Kite Insights,
00:18and I'm the curator for the New York Times for this debate.
00:22I will be the emcee for this afternoon's session.
00:25So, before I introduce our distinguished moderator, Isha Nelson from the New York Times,
00:31I want to tell you a little bit more about this session today.
00:35First of all, this is not a panel. This is a debate.
00:41And debate really calls for a mix of intellectual agility, eloquence, empathy, even.
00:48It's designed to encourage respectful confrontation, even disagreement,
00:55in a time of increasing polarization and entrenchment of opinions.
01:01So, we are extremely grateful to our debaters, to our speakers today,
01:06for showing up and lending their minds to these very important questions and this challenging exercise.
01:12And, of course, we are very grateful to you, the audience,
01:15as you will see that the success of this session today really depends on your engagement and your opinions.
01:22So, as we delve into this world of technology and unearth some of the tensions
01:26caused by the exponential trajectory of AI,
01:29I would just like to remind you that the intent behind the formality of this debate
01:35is really to create a safe space for the debaters.
01:40Because, as you will see, it takes courage to stand here in front of you
01:44and to take a for and against motion,
01:48not the kind of conversation we usually have at cocktail parties.
01:52And we've, in fact, asked our debaters to take their arguments
01:57to what might go beyond their own convictions and beliefs.
02:02And we ask them to do this for the sake of the debate
02:05and for the sake of the audiences.
02:09So, that means that throughout this debate, they will need your encouragement.
02:15So, feel free to clap or even whoop if they say something that you like.
02:22And as we've asked them, in the spirit of this motion, to bring their full humanity to the debate,
02:32we ask you to lean into that moment and into the theatrics of it as well.
02:37I know we are all very serious people, but I think we all like a little bit of drama,
02:42particularly at the end of the day, so please don't hold back.
02:46So, without further ado, it's my great pleasure to welcome our brilliant moderator tonight,
02:53Isha Nelson, economics and business reporter for The New York Times.
02:56She will come and set the scene with us.
02:59Please, everyone, join me in welcoming our debaters and Isha.
03:15Okay, good afternoon, everyone.
03:17Thank you so much for joining us.
03:20As you can see, I have a lot of people on stage with me.
03:23There are our jury panels as well.
03:25This is a really exciting and engaging session.
03:28You will be asked to be involved.
03:30So, I'm warning you now, you're going to have to participate.
03:33So, just, like, get ready for that.
03:35So, I'm Isha Nelson.
03:37I'm one of the business and economics reporters for The New York Times.
03:40I'm based in London.
03:41And I get the nice and easy job, actually, of kind of introducing you to this topic
03:46and what we will be discussing this afternoon.
03:49So, when I saw the name of the session,
03:52This House Believes that AI Will Reveal the Best of Humanity,
03:56immediately the question is, well, what do we mean when we say humanity?
04:00What are we thinking about?
04:02And I think, well, what makes us human?
04:04What separates us from machines?
04:07And, of course, AI and these machines are ultimately created by humans.
04:12So, they will reveal something about humanity.
04:15The question is, what will they reveal?
04:18And I do really think the stakes are high,
04:21which is why we have such an esteemed set of debaters and judges with us today.
04:25I feel like many of us feel that we must adapt to AI.
04:30I know I'm practicing with generative AI at home,
04:33wondering what it means for my job,
04:35wondering what it means for my friends and my family.
04:38And some of us in this room will actually be very key players in shaping AI,
04:44whether that means designing it or regulating it.
04:47And I think for the rest of us, the question is,
04:50well, how do we continue to gain agency so that we don't lose what really matters to us,
04:57whether it's personal relationships, beautiful music, art,
05:01whoever or whatever might create it.
05:04I do think we have to think about not just being kind of users of AI,
05:09but those of us who may not be in positions of regulation
05:12or in these jobs designing it,
05:14how do we still feel like we maintain some level of control?
05:18Now, as an economics reporter,
05:20I spend a lot of my time thinking about the world of work,
05:22what it means for the labor market.
05:25And, you know, I kind of vary in how I think about this.
05:29Now, in one respect, I read a study a few years ago
05:31that was actually about automation.
05:33And it said there had really only been one job
05:36that was truly wiped out by automation,
05:39and that was elevator operators.
05:41And I thought, okay, that's bad for them,
05:45but that's not terrible.
05:46There will always be more jobs.
05:48And so on the other end of the spectrum,
05:50I hear, you know, AI will be massively productivity-enhancing,
05:55that this will revolutionize economics
05:57in the way we think about work.
05:59And I live in the UK, as I said,
06:01and its productivity has been stagnant
06:04for the last decade and a half.
06:05So I can tell you we absolutely need that productivity growth.
06:09But it really depends on us embracing lifelong education,
06:15and that's where I start to get sceptical.
06:17I very quickly wonder about our ability to reach people
06:21so that there aren't people stuck in kind of dead-end,
06:25non-creative jobs where they are just feeding the AI,
06:28or jobs where actually you're not being trained properly
06:31on how to use it,
06:32so it's spewing out misinformation and all sorts.
06:35So while I do see that there is this huge potential,
06:38there are questions around equitable access,
06:41equitable training, and education that comes alongside this.
06:45And not just kind of equality within jobs and internationally,
06:49but between towns and cities and wherever we are.
06:51So when I think about kind of what does it mean for humanity,
06:54I think about how do we embrace that access for everyone.
06:57I'm going to give you one little example.
06:59I cover central banks,
07:01and central banks are very influential organisations.
07:05You know, they regulate financial systems,
07:07they set interest rates.
07:08Whether or not you follow central banks as closely as I do,
07:11I'll tell you now, they matter to your life.
07:14But they are accountable somewhat
07:16to parliaments, to congress, to public bodies,
07:20but they're also kind of technocrats.
07:22And one thing that's happening in central banks at the moment
07:24is they are experimenting with AI as well.
07:27And they say, you know, this is really early stages.
07:30We're looking at big data sets.
07:32We're looking at the financial information
07:35we get from the banks that we regulate.
07:37But they say, it is really just early stages.
07:39Don't worry.
07:40But to me, the inevitable next question is,
07:42will we get to the stage where we use AI to set interest rates?
07:47Are we sure we want to go all the way?
07:49We're very keen to use technology
07:52to kind of eliminate human failures
07:55of judgement and those kinds of errors.
07:57But how far do we want to take it?
07:59So I think there's many ways
08:00to think about this question of what does AI mean?
08:03What does it mean for humanity?
08:05Whether it's work, whether it's creative arts,
08:07whether it's anything like your mortgage.
08:10I have had the great joy of being able to say,
08:13I'm not sure.
08:14I get to waver in each direction.
08:17Unfortunately, our debaters have no such luck.
08:20As you'll find out,
08:21they have to make really, really strong arguments
08:23in either direction.
08:25So I'm going to hand back to Sophie,
08:27who's going to explain the rules of this debate for us.
08:30Thank you, Isha.
08:32I hope...
08:33No, stay with us.
08:34Yes.
08:36Let's start practising the clapping
08:38because there's more of that coming.
08:40Thank you, Isha.
08:41Hopefully, you left the debaters something to argue about.
08:44Yes?
08:45Be good?
08:45Okay.
08:46Good.
08:46So the stakes are very high, indeed.
08:49And the main rule I'm going to explain
08:51is that, again,
08:53I really want you, the audience,
08:55to get engaged,
08:56to be ready to agree and disagree,
08:58and importantly, to keep an open mind.
09:00And that's the key point of this debate.
09:04So let me summarise the rules.
09:06First of all, in front of us here,
09:08we have three very serious and intimidating jury members.
09:12Our jury members are here to listen very carefully to the arguments on each side,
09:18and then they're going to come on stage and reflect on what they heard.
09:22Now, they are called jury, but in fact, they are impartial,
09:25at least until that point,
09:26because they're coming and they're going to reflect critically on what they heard,
09:30what they liked, what they thought might be missing,
09:34or the blind spots of the argument on each of our debater's sides.
09:40All the while they do that,
09:42debaters will take careful note,
09:44because they're going to use those insights
09:46to inform what will be their final retorts.
09:49The debaters, we have the for team here,
09:51and the against team here,
09:53will each have, in pairs,
09:56three minutes to debate for and against the motion.
09:59So they will come in pairs,
10:02argue for and against,
10:03they'll have a clock here,
10:04there'll be a gong at the end of three minutes,
10:05and they will be fighting it out and try to convince you.
10:10We'll do that three times.
10:11Then, as I said, the jury will come,
10:14and for two minutes each give their final comments.
10:19Then the teams will have a couple of minutes to confer among themselves
10:24and decide who will come and with what argument for the final retort.
10:30At which point, the vote is yours.
10:33What is very important when we ask you to vote,
10:37and that's why I'm saying keep an open mind,
10:39because we're going to ask you to vote on the team
10:41that you felt was the most compelling
10:44in making the argument for and against.
10:47We have had split votes in the past,
10:50so if that happens, we'll ask you to vote again.
10:52Does that all make sense?
10:54All right.
10:55So we're going to do a little warm-up
10:57just to make sure you got it
11:00and make sure you are fully engaged.
11:02So I'll do a quick warm-up vote
11:04just to get a bit of a sense of who is in the room
11:06for our debaters' sake.
11:08Don't be intimidated by that.
11:10It has been known to change.
11:12So if you agree with the motion
11:15that AI will reveal the best of humanity,
11:18clap now.
11:24If you don't agree with the motion
11:26that AI will reveal the best of humanity,
11:29clap now.
11:33Okay.
11:36And if you're not sure, do that with your hands.
11:42All right.
11:42So there's a few swing votes,
11:44so you stand the chance.
11:46Good.
11:47So I'm done with the rules.
11:49Isha, back to you.
11:50Thank you so much.
11:52It does sound like we're kind of wavering
11:54towards against at the moment,
11:55so the for team, good luck to you.
11:58All right.
11:59Can I invite Luke Robinson,
12:01who is director and partner at Post-Urban Ventures,
12:03and Claudia Schultz,
12:04who is the consulting director
12:06at Sustainable Impact Pivot and co.
12:09to the stage.
12:11Right.
12:11Luke, you are for the motion.
12:13You are going first.
12:14You have three minutes.
12:15I'm actually going to say,
12:16can we give Luke a round of applause
12:17because he is going first?
12:18Yes.
12:19Okay.
12:20You are great.
12:22I think we're ready.
12:23You can see the clock.
12:24When you're ready, Luke.
12:27We know there are a significant number
12:30of issues on our planet.
12:32Many are really serious ones.
12:34We see, hear, and talk about them all the time.
12:38There is a high probability AI,
12:41easy-to-access,
12:43on-tap intelligence,
12:44will be used to accelerate power,
12:46wealth, concentration,
12:49inequality,
12:50confusion,
12:52violence,
12:52and all manners of madness.
12:55However,
12:56like all powerful technologies,
12:58this new tool of on-tap intelligence
13:00can be used for good,
13:02to fight back,
13:04to do something new,
13:06to do things we have never been able to do before.
13:11There are ways access to intelligence can be used
13:14to bring out the very best in humanity.
13:16I believe the most important one
13:18is to support our democracies.
13:21The new intelligence tool
13:23can be used to support conversations,
13:26dialogue,
13:26and connection at scale.
13:29It can be used to build solidarity
13:31when we have highly polarized societies.
13:35When we are in solidarity with each other,
13:37we have trust with each other.
13:39We have shared meaning.
13:40We are going to be able to rise above our conflicts,
13:43divisions,
13:44the polarizations,
13:45to see the bigger picture.
13:47For me,
13:48the most important revolution that AI will bring
13:50is to support the creation
13:52of a new participatory democracy.
13:55At Post-Urban,
13:57we are building tools
13:58so elected officials can have
14:01one-to-one conversations
14:02with their constituents at a scale
14:05that's not been possible before.
14:07We are planning trials with the mayor in Greece.
14:09She wants to listen to her constituents
14:11so they can feel seen and heard
14:13to increase trust levels.
14:15We are creating tools for a Nordic bank.
14:18They want to increase alignment
14:20between managers and employees
14:22who feel disconnected and out of touch.
14:24We are speaking with an American workers' union
14:27with 150,000 people.
14:30They need tools to hear their members' concerns
14:33so together they can be effective
14:35at taking collective action
14:37to protect their rights.
14:39Our democratic systems are not designed
14:41to function at the scale of a planet,
14:44to solve planet-level problems under pressure.
14:48The intelligence of AI offers something new.
14:52It can unlock a new democratic tool,
14:57something we urgently need,
14:59a tool to make decisions together.
15:01It is not possible to get everyone in the same room
15:04around a shared table
15:07so we can all speak and be heard in our own language.
15:11Ticking a single box on a ballot paper
15:13with only two hopeless options every five years
15:16is totally outdated now.
15:19When democracy is pitched against filter bubbles,
15:21fake news, power-hungry autocrats,
15:25AI is shaping the future now.
15:27Let's not waste precious time.
15:29Let's use the good that AI offers
15:31to amplify the things we love the most.
15:34Thank you very much.
15:37Okay.
15:39Excellent.
15:40Oh, yeah.
15:41You've got to stay on the stage.
15:43You've got to face your opponent.
15:44And feel free to react as you hear things.
15:47I heard a little bit of whooping earlier on.
15:49You know, let the debaters know
15:51if you like what you hear.
15:53Claudia, over to you.
15:54Thank you.
15:54A bit of light heckling is always welcome.
15:56So thank you so much, Luke,
15:58for making a lot of the points
15:59that we were going to make,
16:00but I'd like to remind the for team
16:02of the motion of the house today
16:04which is that AI will reveal the best of humanity.
16:08Now, we agree here
16:10that AI can be used to benefit society,
16:12but it certainly won't reveal the best of humanity
16:15because of how it is designed.
16:17Our team holds that only humans,
16:20us, you, can reveal the best of humanity.
16:24Indeed, the fact that AI is designed,
16:27let's not forget, our coder friends,
16:29AI is designed, it is a conception.
16:31It is a programmed outcome.
16:34It is a man-made or woman-made
16:36or person-made innovation
16:38rather than something natural.
16:39And this suggests a flaw in the logic here
16:42that cannot be ignored.
16:43Designing something to reveal the humanity among us
16:47is a contradiction in terms.
16:49So how is AI designed?
16:50Let's not forget, how is it made?
16:52Well, AI, by definition,
16:54is designed to replicate human intelligence.
16:57It is designed to create something artificial
16:59from the natural, from humanity.
17:02Thus, at best, AI, what will it do?
17:05AI will reveal what we already know about ourselves.
17:09And unfortunately, that's not great.
17:13At worst, it gets worse, ladies and gentlemen,
17:16it will exacerbate the worst of humanity
17:19due to something called the alignment problem.
17:21So I'm here to talk to you
17:23about the cynical aspects of AI
17:26because I know you've all been brainwashed
17:28by two days at Viva Tech,
17:29so let's start.
17:30All right, so let's consider
17:33AI like a super-powered calculator, right?
17:35So this is at best.
17:36AI is a really powerful calculator.
17:38It is the car to the horse.
17:41It is the internet to the library.
17:43Great.
17:44It's going to do what we do.
17:45It's going to do it quicker.
17:46But at worst, AI, sophisticated AI, is generative.
17:51That means it can speak to itself,
17:54it can iterate, and it can optimise.
17:56It is a mathematician in a computer.
17:58And so what do we do
18:00if we let that mathematician in the computer
18:02code in an unsupervised fashion?
18:05Well, it will exacerbate
18:06what we know to be the worst of humanity.
18:09Some examples will be provided
18:11by my fellow debater, Alex,
18:13where AI has learned to scam, to kill,
18:16and to mislead democracies.
18:18Now, we can all agree, I believe,
18:20that systemic inequality, for example,
18:23is arguably one of the worst aspects of humanity.
18:26AI is more likely to exacerbate
18:28these problems of inequality,
18:30similar to most technologies,
18:32because it is more accessible
18:33in wealthy countries,
18:35and because of something called
18:36algorithmic colonisation,
18:39which is where AI fails to reflect
18:42the vast diversity that, unfortunately,
18:44exists in our society
18:46but not in the computer.
18:49Now, unfortunately,
18:51even if we could programme super-intelligent AI
18:53to pursue socially desirable goals,
18:55we cannot escape the technical constraint
18:57that we do not yet know how
18:59to teach AI to behave
19:01in line with human values.
19:03Thank you.
19:08Okay.
19:11Luke and Claudia,
19:13you can take a seat.
19:14I feel like that was
19:15an incredibly strong introduction there.
19:19Luke making a very strong case
19:21for how AI can support democracy,
19:23build conversations
19:24that we could not have otherwise,
19:26but a very strong rebuttal
19:28from Claudia around the risk
19:30that it could actually maybe reveal
19:32the worst parts of humanity
19:33that's already there,
19:34particularly inequality.
19:36Okay, round two.
19:37Round two.
19:37So, I have Shaina,
19:39Shaina Robana,
19:41who is the founder and president
19:42of IncoJustice,
19:43would you like to join me?
19:44And I've got Alex Boyankov,
19:47who is Ukraine's deputy minister
19:49for digital transformation.
19:51Thank you both for joining us.
19:56Okay, you can see your clock.
19:59You are ready to go.
20:00There is no historical precedent
20:02for an innovation as transformative
20:04as artificial intelligence.
20:06It is not comparable
20:07to the printing press.
20:08It is not comparable
20:09to the steam engine.
20:10It is not comparable
20:11to the cotton gin.
20:12For the first time,
20:14we are seeing exponential progress
20:15in a technology
20:16that is being trained
20:18with the explicit purpose
20:19of acquiring agency
20:20and outstripping
20:21the intelligence of its creators.
20:23As a species,
20:24we have never embarked
20:25on a project quite like this.
20:27The technology itself
20:28is without parallel.
20:29But what there is
20:30ample historical precedent for
20:32is human adaptation
20:33and resilience
20:34in the face of civilization-changing forces
20:37of all kinds.
20:39Unfortunately, however,
20:40it typically takes
20:41some kind of disaster,
20:42a sufficiently disturbing warning shot,
20:44to kill public apathy,
20:46activate collective action,
20:47and reveal the best of humanity.
20:50The Chernobyl disaster,
20:51for instance,
20:51killed about 30 people
20:53and caused about
20:54700 billion U.S. dollars
20:55in damages.
20:57It's the worst nuclear accident
20:58in history.
20:59Widespread public horror
21:01after Chernobyl
21:02devastated the global
21:03nuclear industry,
21:04spurring political cooperation
21:06between rival states
21:07and leading governments
21:08to phase out
21:09nuclear programs entirely.
21:11The work of the
21:12International Atomic Energy Agency
21:14in the wake of Chernobyl
21:15then led to the Convention
21:16on Nuclear Safety.
21:17That disaster forced a reappraisal
21:20of the path humanity
21:22was previously on.
21:23More recently,
21:24COVID-19 exposed
21:25that we weren't ready
21:27for a pandemic of its scale.
21:28For years prior,
21:30leaders had dismissed efforts
21:31to invest in pandemic preparedness.
21:33But after a deadly
21:34public health crisis,
21:35civil unrest,
21:36and a global recession,
21:37we received a wake-up call
21:39and we finally answered it.
21:41Even as a young person
21:42deeply concerned
21:43about the AI future
21:44I will soon inherit,
21:45I have hope that we will win.
21:47I have hope that AI
21:49will bring out the best of us
21:50in the end.
21:51But I worry that it will take
21:52a Chernobyl or COVID-19
21:53level disaster
21:54or worse
21:55to generate the public
21:56and political will
21:57that we need
21:58to make that happen
21:59to set meaningful guardrails
22:01on this technology.
22:02I worry that in the meantime,
22:04things like bias,
22:05disinformation,
22:06and labor displacement
22:06will continue unchecked
22:08and that only an unmistakable
22:10catastrophe
22:10will deal the decisive blow.
22:12Whether that's a debilitating
22:14cyber attack
22:14on critical infrastructure
22:16and economic collapse
22:17or a large-scale
22:18propaganda campaign
22:19enabled by advanced AI.
22:21For AI to positively shape
22:23the arc of human history,
22:25we face a choice.
22:26Will we choose to move
22:28full throttle ahead
22:29without breaks or insurance
22:30or will we choose
22:32to think more carefully
22:33when building a technology
22:34with civilizational impacts?
22:36I'm optimistic
22:37that we as a species
22:39can get this right,
22:40unlocking revolutionary
22:41innovations across healthcare,
22:43education,
22:44and more.
22:45AI will reveal
22:46the best of humanity,
22:47but after how long
22:48and at what cost?
22:50We have to act fast
22:51before our window
22:52of opportunity escapes us
22:54and before disaster
22:55befalls us.
22:56Thank you.
22:57Thank you, Shaina.
23:07As a person who comes from a country at war,
23:11I would like to say the following.
23:13Let's not fool ourselves.
23:15AI has already been weaponized to act against humanity,
23:20and it brings everything that the worst of war brings.
23:27Manipulation of the truth turns people against each other,
23:31turns nations against each other.
23:32And I'm talking about disinformation.
23:35Disinformation that is now spreading across the globe because of this war.
23:40And it's a war not literally with bombs or tanks;
23:44it's a war for the human mind.
23:46And before AI, Russian bots were really simple.
23:53You could tell: this is written by bots.
23:55But AI is fueling a propaganda revolution,
23:58using technology to build very sophisticated messages,
24:02tailoring and adjusting themselves to a person,
24:05and when one says something, you think it's a real person.
24:09It's not dumb anymore, and it can profile you.
24:12A single algorithm can now substitute
24:16for the hundreds of people who used to do propaganda.
24:19So this is really disturbing,
24:21and the further it goes, the more disturbing it gets,
24:25because now, as Ukraine, we have to fight fire with fire.
24:29We came up with algorithms that find the sources of propaganda
24:35and try to reveal who is behind these messages.
24:38So where is it going to end up? Nobody knows.
24:41And that's really something that we,
24:45as millions of Ukrainians, are facing every day.
24:47But it's not just us.
24:49There's a well-known case that some of you have probably heard of,
24:52where somebody analyzed the comments under The Guardian
24:58or some German newspaper
25:00and found that thousands of comments actually came from bots
25:06pushing specific messages,
25:08trying to manipulate everybody in Germany into stopping help for Ukraine.
25:13And it's just one case.
25:14The same thing is happening across the globe on many issues.
25:18And we don't know who's behind it.
25:20We may never know.
25:22So that's just one point.
25:25There are other things, where it goes with deepfakes and photos.
25:29For example, TikTok profiles with fake people
25:35that talk and pretend to be real
25:38are a really big issue.
25:41It's literally a bomb dropped on TikTok:
25:46millions of views of a deepfaked person
25:50spreading a specific message with the goal of misleading you.
25:56So we should be very careful about where we're going,
25:58and we already see that once the technology becomes active,
26:03it immediately turns to scamming people, misleading people.
26:06So I believe that AI is not going to benefit us;
26:10it's going to bring out the worst.
26:11Thank you.
26:18Thank you.
26:19Thank you,
26:20Shaina and Alex.
26:21I felt Shaina there was making a strong case
26:24that we have been able to adapt through time,
26:27so why not now?
26:28Although you did give a little help to the against team there
26:31by suggesting that we didn't want it to take a catastrophe
26:34to lead us there.
26:36And then Alex made the point that actually you feel
26:38it's already being weaponized,
26:40a real-time example
26:41of the fact
26:42that maybe AI
26:43is not revealing
26:44the best of humanity.
26:46Okay,
26:47our last round,
26:49this is your last
26:50opportunity as a team,
26:51well, almost your last
26:51opportunity as a team
26:52to sway the audience
26:54in your favor.
26:54Okay, so Patrick,
26:55please join me.
26:56So I have Patrick Dupoe,
26:57who is the managing
26:58director at Boston
27:00Consulting Group,
27:00and Gina Neff,
27:01who is the executive
27:02director of the
27:03Minderoo Centre for
27:04Technology and Democracy
27:06at the University of
27:07Cambridge.
27:07Patrick, your three minutes.
27:10So as we are
27:11talking about humanity,
27:13let me take you
27:14where it all started,
27:16Africa.
27:17And I can tell you
27:19that in Africa,
27:20as in most of
27:21the global south,
27:22AI comes with
27:23a little bit of worry
27:26and a lot of hope.
27:28Between 2000 and 2010,
27:30Africa went through
27:3110 years of very high
27:33economic growth.
27:34That was driven
27:35by two forces,
27:37mobile and internet.
27:40That brought massive
27:41productivity,
27:42and AI will be
27:43the next force.
27:44I believe AI will be
27:47the strongest engine
27:48of socioeconomic
27:50development in the
27:51history of humanity
27:52and will enable
27:54more progress
27:55than years of
27:56development aid.
27:57The UN estimates
27:59that AI will enable
28:0180% of the 169
28:04SDG targets.
28:06Every day,
28:07we find new
28:08applications
28:08with massive impact.
28:10A few examples.
28:12AI can help
28:13farmers monitor
28:15their soils
28:15in real time
28:16and choose
28:18what crops
28:19to grow,
28:20which fertilizers
28:21to use,
28:22and ultimately
28:23produce more,
28:25better,
28:25and with less impact
28:26on the environment.
28:28AI can help
28:29governments
28:30anticipate how
28:32climate change
28:33will impact
28:33their land,
28:35their coast,
28:36their infrastructure,
28:37their population
28:37in 10, 20 years
28:39from now
28:39and get better
28:41adaptation plans.
28:42There are dozens
28:43of examples
28:44of how AI
28:46can help leapfrog
28:47in health care,
28:49in education,
28:50in agriculture,
28:51in energy access,
28:52climate adaptation,
28:53poverty reduction,
28:54gender equality.
28:56So if you want
28:57to understand
28:57if AI
28:58will reveal
28:59the best of humanity,
29:00look at the birthplace
29:02of humanity,
29:02Africa.
29:04And so now
29:05let me wrap up
29:06on the arguments
29:08you have heard.
29:09All of them
29:10very logical,
29:11and to be frank,
29:12a lot of them
29:13you can find
29:13on ChatGPT,
29:14including mine.
29:16But since we cannot
29:17compete with OpenAI
29:19on logic,
29:20let me share
29:21my human
29:23instinctive reaction
29:24to this session
29:25when I discovered it.
29:26I thought
29:27I've been going
29:29to this type
29:30of conference
29:30for 20 years,
29:30and it's the first time
29:32I see the word
29:33humanity
29:34in the title.
29:36I've never been
29:37to a conference
29:37questioning
29:38how banking
29:39will reveal
29:40the best of humanity
29:41or how agile
29:42transformation
29:43will reveal
29:43the best of humanity.
29:45And I take it
29:46as a sign
29:46that AI,
29:48because of its
29:48exponential nature,
29:50will force us
29:51to ask ourselves
29:52the real questions.
29:54Asking the real questions
29:55of what makes us human
29:56will make us
29:57more humble
29:58about technology
29:59and progress.
30:0080 years ago,
30:01the atomic bomb
30:03forced us
30:04to think about peace.
30:05Well,
30:06beyond helping
30:07solve those SDG issues,
30:08AI could force us
30:09to become
30:10more human
30:11again.
30:14Thank you.
30:22AI won't bring out
30:25the best of humanity.
30:27Rather,
30:28it will take
30:30the best of humanity
30:32to make AI
30:34even work.
30:37Take,
30:38for example,
30:39the fact
30:40that all the buzz
30:42and all the hype
30:44that we're talking
30:45about AI right now
30:46is about
30:48its social
30:49and technical impact.
30:51This is a social
30:53and technical revolution
30:55we're talking about.
30:57People will talk
30:58about speed,
31:00scope,
31:00scale,
31:01efficiency.
31:02Those are a set
31:03of technical values
31:05of what AI
31:06can bring for us.
31:10We've heard
31:11a lot about this,
31:12but we've got
31:12lots of problems
31:13to deal with.
31:16What is going
31:16to happen
31:17to our schools?
31:19What is happening
31:20to our jobs?
31:21What is happening
31:22to our creative content?
31:24What is happening
31:25to film,
31:26art,
31:26humanity?
31:28These are problems,
31:29real problems,
31:30people are worried
31:31about right now.
31:34Deepfakes,
31:35misinformation,
31:37bias,
31:38the collapsing
31:40of our public sphere.
31:41I have news for you.
31:44AI,
31:45it's not going
31:46to solve
31:46these problems.
31:47The biggest problems
31:49we face
31:50are going to be
31:52solved by people.
31:54AI can solve
31:56incredible problems,
31:57but it cannot deal
31:59with the problems
32:00of society.
32:01That is for us
32:03to do.
32:05It will take
32:07the best of us
32:08coming together
32:09and working together
32:10to make the most
32:13of the powerful
32:14and wonderful
32:16transformative
32:16potential effects
32:18of artificial
32:19intelligence technologies.
32:21But if we don't
32:22come together
32:23and do that work,
32:26these tools
32:27will not bring out
32:29the best of us.
32:34Who is going
32:36to solve
32:36the problems
32:37of AI?
32:39Actually,
32:40it's all of us.
32:42It's people
32:43in governments,
32:45it's people
32:46in schools,
32:47it's people
32:48in companies.
32:50In short,
32:52AI is not going
32:53to reveal
32:54the best of humanity.
32:56People are going
32:57to reveal
32:58the best of AI
32:59if we're going
33:00to get there.
33:01Thank you.
33:07to the best of us.
33:07What a strong ending
33:09round.
33:10Patrick reminding us
33:11that this could be
33:12the biggest social
33:14and economic change.
33:16But as Gina said
33:18there,
33:20it is a social revolution
33:22as well,
33:23not just a technical
33:24revolution.
33:25It's a great point.
33:25So now,
33:27this is when
33:27you have to really,
33:28really listen again.
33:30So we're going
33:30to get the jury
33:31up here
33:31who are going
33:32to tell us
33:34what they made
33:35of these debaters,
33:38of what they said.
33:40And you will have
33:40to listen really carefully
33:41because they're going
33:42to give you
33:42some homework.
33:43They're going
33:43to give you
33:43something to go away
33:44and think about
33:45to come back
33:46for your closing arguments.
33:48So I think
33:48I can give you
33:49a minute actually.
33:50You can leave the stage,
33:51you can get off
33:52the spotlight
33:53for a second
33:54and ask the jury
33:55to come join me
33:56on stage.
33:57Thank you.
33:57Yeah, please.
34:00No?
34:16Okay.
34:17Just to give them
34:18a rest from the spotlight,
34:20you know.
34:20Cool off
34:21and think about
34:21their arguments.
34:22Okay.
34:22so you all have
34:25two and a half minutes
34:26each to respond
34:27to what you've heard.
34:28Shelley McKinley,
34:30you're vice president
34:31at Microsoft
34:31and the chief legal
34:32officer of GitHub.
34:33I'm going to ask you
34:34to go first
34:35if you're okay with that
34:36and you can take the floor.
34:37All right.
34:38Great.
34:39Well, thank you
34:40very much, debaters.
34:41I mean, I would give
34:42another hand for them
34:43because those were
34:45some tough points.
34:47On the one hand,
34:48we heard all the comments
34:50about how, you know,
34:51AI is like a runaway
34:52freight train
34:54that we can't corral
34:56and then on the other hand,
34:57all of the great opportunities
34:58we have with AI
34:59and so I thought
35:01everyone just did a great job
35:02of laying those out.
35:04I guess what I would ask,
35:05what I would ask
35:07of the Against team
35:09would be,
35:11could you think about
35:13and maybe dive
35:13a little bit more
35:14into detail
35:16on why all of
35:17the brilliant people
35:18in this room
35:19and all of our regulators
35:20that are out looking at this
35:21won't be able
35:22to turn this ship around
35:23if we really think
35:24this is not going to come out
35:26to help humanity.
35:28In terms of the for side,
35:30I would also just like
35:32to hear a little bit more
35:33in depth from you
35:34about what you see today
35:36as really helping ensure
35:38that we minimize
35:39the risks of AI
35:41that your colleagues
35:42have called out.
35:43Thank you.
35:44Thank you.
35:50Daniel Andler,
35:51you are a philosopher
35:52and professor
35:53at the Sorbonne University.
35:54You also have
35:55two and a half minutes.
35:56Thank you.
35:57Thank you.
35:57So,
35:59I have six points to make.
36:03Basically,
36:05there's one ambiguity
36:07that was not settled,
36:09I think,
36:09by any of the six speakers
36:10but it went very fast.
36:12So,
36:13one thing
36:15is to do good
36:16or evil
36:17to humanity.
36:19The other
36:20is
36:21to make humanity
36:23better.
36:25Look,
36:26we could,
36:27with the help of AI,
36:29we could have
36:30the best doctors,
36:32the best teachers,
36:33the best lawyers,
36:35the best everything,
36:36with the help of AI.
36:37but the teachers themselves,
36:40humanity itself,
36:42might actually go worse.
36:44Teachers might become lazy.
36:48Judges might become corrupt.
36:51They're helped by AI,
36:54but they're not helped
36:56in their being
36:57as humans
36:58or as a social population.
37:03So,
37:04that's one problem.
37:05The second problem
36:08is that we talk of AI as
36:09if we knew what it is.
37:11First,
37:12we don't know what it is.
37:13It's a bunch of instruments
37:14and some people have said,
37:15let's not use the expression AI
37:17because no one knows
37:18exactly what they're referring to.
37:20So,
37:21I think it's very important
37:22to distinguish various kinds of AI.
37:25The other thing about AI
37:27is that we don't know
37:28how they work.
37:29We don't know how they work
37:30because the engineers themselves
37:31don't quite fully understand
37:32how large language models work,
37:34but also because
37:36they're trade secrets
37:37that we know nothing about.
37:38So,
37:38we don't know
37:39what's input
37:40in the systems
37:42and therefore,
37:43we cannot really use them
37:45with full understanding
37:47of how they work.
37:48A third thing,
37:50if I have time,
37:51is to say,
37:52look,
37:53AI,
37:53I'm thinking of democracy.
37:56So,
37:56one argument was made
37:57that with the help of AI,
37:59what kind of AI,
38:00we don't exactly know,
38:01but that could be made more explicit.
38:04We are going to make democracy work
38:07in large scale.
38:09It worked in Athens
38:10because there were few citizens.
38:11It doesn't work in the United States
38:14and therefore,
38:14AI is going to help us.
38:16I agree that it's possible
38:17that AI will provide real-time information
38:25of how people really feel,
38:27what their desires are,
38:28what their discontent is.
38:30So,
38:30it will come up
38:31with lots of information.
38:33But what will we do
38:34with this information?
38:35It has to be attended to.
38:37It has to be selected
38:38among tons of information.
39:40And AI is going to make
38:42this issue of relevance
38:45even worse
38:46because it's going to come up
38:47with tons of information
38:48and then only human judgment
38:50will be able to know
38:52what information is needed.
38:55So,
38:57just to conclude,
38:58I think that
38:59there's a lot of uncertainty
39:00and it was not maybe
39:03sufficiently stressed
39:04by the speakers
39:05on either side
39:06that
39:07it's a very complex situation.
39:09It's a very complex adventure.
39:12One speaker did say
39:13that it was unique
39:14in human history.
39:16But let's be humble
39:17and realize
39:18that we just don't know
39:20how it's going to turn out
39:22and therefore,
39:22we should be
39:23very cautious.
39:24Thank you.
39:30Okay.
39:31Asmita Dubey,
39:32the Chief Digital and Marketing Officer
39:33of L'Oréal,
39:34could you wrap us up
39:35with your thoughts, please?
39:37Yeah.
39:37Hi, everybody.
39:38Great debate.
39:39I'm going to share with you
39:40three ideas
39:41that I heard
39:43on both sides
39:44for and against
39:45and I feel that
39:46you can mature
39:47and build on it
39:48even more.
39:49The first idea
39:50was this idea
39:51that in the end
39:53today,
39:53there is a democratization
39:55of AI
39:55that has happened.
39:57A democratization
39:58of AI
39:58because AI,
39:59which was the conversation
40:01of mathematicians
40:02and researchers
40:03is a conversation
40:05for both sides
40:06today
40:06and everybody
40:07can access it.
40:09Everybody can
40:10either have fun
40:11with it
40:11or do something
40:12bad with it.
40:13So AI
40:13is having
40:14a cultural moment
40:15and it's definitely
40:17having a cultural moment
40:18when you start
40:19seeing that
40:20it is being referenced
40:21in very popular shows
40:23like South Park.
40:24So why is it
40:25having this cultural moment?
40:27I think we have to think
40:28about that subject
40:29because both of you
40:30referred to it
40:31but that could end up
40:33giving some arguments
40:34to it.
40:35The second idea
40:36that I heard
40:37from both sides
40:38was this idea
40:40of...
40:41I really liked
40:43when you said
40:43people are going
40:44to reveal
40:45the best of AI.
40:47I really liked
40:48the idea
40:48of humanity
40:49and how we are
40:50talking about humanity
40:51in forums like this
40:53in the last 20 years.
40:54But all of that
40:56is alluding
40:56to the point
40:57that, as
40:59one expects,
41:00as
41:01AI rules
41:03the world,
41:04we need
41:05to start
41:06bringing rules
41:07to AI.
41:08There has to be
41:09regulation
41:10to AI
41:11and I think
41:12the second speaker
41:13this side
41:13started moving
41:15to that idea
41:16but I think
41:17we need to think
41:17more about
41:18at this moment
41:19we need to think
41:20more about
41:20what kind of rules
41:22are needed
41:22to bring to AI
41:23and who should
41:24be bringing it
41:25and how soon
41:26do we need
41:27to bring them.
41:28How should we
41:29mandate governments
41:30to act faster?
41:31It is already happening
41:33but where are we
41:33going with that?
41:34And the third
41:35and the third
41:36very powerful idea
41:37which was on both sides
41:39was this idea
41:40of technology
41:42has the power
41:43to change our lives
41:44no matter
41:45whether you said
41:46it was to change
41:47our lives
41:47for good
41:48or for bad
41:49but the idea
41:50that technology
41:51has the power
41:52to change our lives
41:53came through
41:53on both sides.
41:55The for side
41:56spoke about
41:56socio-economic
41:57development
41:58saving democracies
42:00and the side
42:01which was against
42:02talked about
42:03it's a social
42:04and a technical
42:05revolution.
42:06If technology
42:07indeed has the power
42:08to change our lives
42:09we should think
42:10about
42:12what do we need
42:13to experiment with
42:14what is the kind
42:15of time
42:15that we need
42:16so that we have
42:17a more objective
42:18assessment
42:19of how we are
42:20going to change
42:21our lives.
42:21Thank you.
42:28Thank you so much
42:29judges
42:30you've actually
42:30given our
42:31debaters
42:32many many
42:33things to think
42:33about
42:34I hope you were
42:34all listening
42:35very very carefully
42:37from you know
42:38why can't we
42:39turn this around
42:40if that's what
42:40we feel like
42:41we need to do
42:41how do we talk
42:42more about
42:43the ambiguity
42:43and I really like
42:45your point
42:46professor
42:46about the kind
42:47of the fact
42:47that it is
42:48you know
42:48within companies
42:50this is not all
42:51entirely open
42:52information
42:52and there's
42:53a reason why
42:54we don't know
42:54some of it
42:55because that would
42:55be giving away
42:57very profitable
42:58information
42:59and then, Asmita,
43:00you were
43:00just talking
43:00exactly about
43:02this question
43:02of rules
43:03and regulation
43:04and who has
43:05the power
43:05to do those
43:06how quickly
43:06can we do those
43:07and I think
43:07there is a real
43:08question about
43:09the speed
43:10technology advances
43:11at versus
43:11the speed
43:12that rules
43:13can be brought
43:13into place
43:14so I think
43:14what happens
43:15now
43:15is our debaters
43:16will join me
43:17back on stage
43:18please please
43:18please come
43:19join me
43:21and they will
43:22and then
43:23you will all
43:25confer
43:25I think you have
43:27like two minutes
43:28so it's not
43:29very long
43:30you have two minutes
43:31to think about
43:31what the judges
43:32told you
43:33and you have to
43:34pick one person
43:35who is going to
43:36come back
43:36and make the
43:37closing argument
43:38you really do
43:39need to respond
43:40to what the judges
43:41say
43:41and then this is
43:42kind of your
43:42last chance
43:43as an audience
43:43to be swayed
43:45by the responses
43:46and remember
43:47when we ask you
43:47to vote at the end
43:48you're voting
43:49based on what
43:50you heard today
43:51you know
43:51forget what you
43:52thought before
43:52when you came in
43:53you're voting
43:54based on who
43:54had the most
43:55compelling argument
43:56so I think
43:57I am going to
43:57allow
43:57they've started
43:58so I guess
43:59the two minutes
43:59begins
44:04when you
44:04understand
44:05how gravity
44:06works
44:08you start
44:08to understand
44:10our skin
44:12the process
44:13of aging
44:14and the possibilities
44:15in a glass
44:16of red wine
44:17but sake
44:19as a stimulus
44:19for Japan's economy
44:21which leads to
44:23North Korea
45:26and Dennis Rodman
44:26so now you understand
44:28this whole world
44:29of body modification
44:30the boom of
44:31cosmetic surgery
44:32what's going on
44:33in Beverly Hills
44:34and their houses
44:36and the real trials
44:37of a real housewife
44:38theories about
44:39conspiracy theories
44:41the culture
44:42of cannibals
44:43the value of meat
44:45and the once plummeting
44:46stock prices
44:47of its alternative
44:48so if you understand
44:50why stocks fall
44:51naturally
44:52you understand
44:53falling
44:54and so
44:54you understand
44:56how gravity works
45:04if you understand
45:06sneakers
45:07you start to understand
45:09squeaking
45:10which leads to
45:11hardwood
45:12and the art of Kareem's
45:13famous skyhook
45:15so you get how
45:17flying works
45:18which takes you
45:19to carbon
45:19and our constantly
45:21changing world
45:23the deep world
45:25of deepfakes
45:26and the dark fantasies
45:27of a chick
45:28which leads you
45:30to romantic dinners
45:31for two
45:31pretty sexy and all
45:33okay
45:33and what's going on
45:35in corporate cafeterias
45:36they just want you
45:37to never leave
45:37how that's evolving
45:38the workplace
45:39and what that has to do
45:41with crop tops
45:42at the office
45:43and naked dressing
45:44so you get bare feet
45:46running
45:47and human evolution
45:49which was shaped
45:50by a chick
45:51and if you understand
45:52the appeal of gum
45:53and you realize
45:54how much of it
45:55is on our streets
45:55you understand
45:56why some people
45:58never wear
45:58their favorite sneakers
46:03okay
46:05are you ready
46:07okay
46:08so now as I said
46:09you have
46:09I think it's just
46:11one minute
46:11or a minute and a half
46:12very little time
46:13to
46:14I know
46:14it's a lot to respond to
46:15this is your last chance
46:17to convince your audience
46:19you've got a minute and a half
46:20so Shana,
46:21for the for side,
46:22I'm going to ask you
46:23to go first
46:25we are on the brink
46:27of the most transformative
46:28technology in human history
46:30AI may redefine
46:32the social contract
46:32upend economies
46:34and revolutionize democracy
46:35we are also on the brink
46:37of grave risks
46:38we need regulations
46:39to restore trust
46:41and connection
46:41we need regulations
46:43to combat
46:43algorithmic bias
46:44we need regulations
46:46to protect workers
46:46from labor displacement
46:47to end the use
46:48of autonomous weapons
46:49to promote
46:50international cooperation
46:51there's a lot of work
46:52ahead of us
46:53but I believe
46:54that AI will reveal
46:55the best of humanity
46:56because it will reveal
46:57our capacity for collective action
46:59our capacity for cooperation
47:01our capacity to reimagine institutions
47:04and uplift people
47:06voting against this measure
47:08is voting against the power
47:10and the strength of humanity
47:12betting against a future
47:13where we rise up
47:14come together
47:15and take control
47:15to maximize the benefits
47:17and minimize the risks
47:18I'm 19
47:19I have no choice
47:21but to be optimistic
47:22and that bet against humanity
47:24is not one that I want to make
47:25thank you
47:26oh please
47:28please stay
47:30okay
47:31and Gina
47:33final arguments
47:34for the
47:34against team.
47:37AI will not reveal
47:39the best of humanity
47:41for three reasons
47:43one
47:44technically
47:45we do not know
47:47how to teach
47:48AI
47:49human values
47:50we're talking about
47:52a set of mathematical principles
47:55values need people
47:59this is not a tool
48:01for bringing out
48:03the best or worst of humanity
48:05this is a tool
48:06that brings out
48:07things for people
48:09and it can be used
48:10for the worst of humanity as well
48:12we've heard
48:14from my colleague
48:15from the front lines
48:17of an AI war
48:18happening right now
48:20we know these tools
48:22will be used by countries
48:23that don't share our values
48:25like Russia
48:27North Korea
48:28and Iran
48:28we know these tools
48:30can be used to scam
48:31kill
48:32mislead
48:34number two
48:35these tools won't reveal
48:37the best of
48:37they will only reveal
48:38the best of humanity
48:40if we marshal
48:41and champion
48:42the best of humanity
48:43right now
48:44that takes people
48:45AI is not going to reveal
48:47the best of us
48:48we are going to reveal
48:50the best of us
48:51and finally
48:54imagine agency
48:56would we turn our planet
48:58over to a baby
48:59what about a baboon
49:01or a frog
49:02that's more than the level
49:05of intelligence that AI
49:06represents today
49:08that's not what I want
49:10for my agency
49:12and my decisions
49:13okay
49:17so
49:17we are heading
49:20for the moment of voting
49:21the moment of truth
49:22before we do that though
49:23can I just get a big round of applause
49:25for all our speakers
49:26that was amazing
49:29okay
49:32I get to do the nice bit
49:34so now
49:34Sophie has to drag out
49:36your competitive spirit
49:37okay go for it
49:38well that was a real
49:39tour de force
49:40well done
49:41well done
49:41and I think in many ways
49:43a lot of humanity
49:44has been revealed
49:45through that debating process
49:47our ability to
49:49tackle complex issues
49:50our ability to hold
49:51opposing truths in our minds
49:53and our ability to
49:54live with ambiguity
49:56which is so important
49:57when the stakes are high
49:58so
49:58the moment of truth
50:00as I said
50:01we have for this
50:02voting
50:03a very sophisticated
50:04technological tool
50:05from the New York Times
50:06called a clapometer
50:08so that's going to be you
50:10just so that
50:11there's absolutely
50:12no confusion
50:13about what you're voting for
50:15this is the four team
50:16that AI will reveal
50:18the best of humanity
50:19this is the against team
50:22they're trying to influence you
50:25let's see
50:25all right
50:26so I'm just going to say
50:28if you think the four team
50:30has been the most compelling
50:32clap now
50:32don't
50:33and then I'm going to say
50:34if you think the against team
50:35has been the most compelling
50:36clap now
50:37okay
50:37I don't want there to be
50:39too much double counting
50:42all right
50:43ready
50:44if you think the four team
50:46has been the most compelling
50:48clap now
50:57definitely grown from the first vote
50:59now if you think that the against team
51:02has been the most compelling team
51:03clap now
51:04wow
51:14all right
51:15so I think
51:16we are computing the results
51:17we're going to have
51:19a moment of truth
51:36it's a long drum roll
51:39okay
51:40Isha I think you should reveal the results
51:43well I think it's fair to say
51:45that in this particular debate
51:48the against team
51:49took it
51:58thank you very much
52:00for everyone
52:01thank you to debaters
52:02I also want to take this opportunity
52:04to thank GitHub
52:06who's enabled this debate
52:07to happen in the first place
52:08and thank you to you Isha
52:10for a wonderful moderation
52:12so please join me
52:13in thanking everyone
52:15for their contribution
52:16thank you
52:18well done