The Algorithmic Republic: Can Democracy Survive Artificial Intelligence?
Transcription
00:00So, good morning, everyone.
00:03Thank you for attending this session
00:06after the wonderful session
00:08that we just witnessed.
00:12We are today
00:13with two special guests.
00:16Axel is here with us
00:18and online
00:19somewhere in the world,
00:22Amba Kak. I don't know if Amba hears us
00:24and can say a word.
00:25She is in the United States.
00:27Yes, we can hear you.
00:29I can hear you.
00:31Can you see me?
00:32Yes, we can hear you perfectly.
00:34Okay, wonderful.
00:35So, thanks very much.
00:36The session is called
00:38The Algorithmic Republic,
00:40Can Democracy Survive AI?
00:42So, it's one of those topics
00:44which matter a lot to us at VivaTech.
00:47As you know, VivaTech is a place
00:48for business. It's a place for meetings.
00:50It's also a place for reflections
00:52and sharing thoughts
00:54about what really matters in our
00:56world, in our democracy
00:58and also in our digital world.
01:01So, first of all,
01:02I'd like to say a few words
01:03of introduction
01:03regarding our two panelists.
01:07So, Amba,
01:08who couldn't really fly
01:10from the States
01:11and is based
01:13in the United States now.
01:15Amba gives us the pleasure
01:16of being with us.
01:17She's the executive director
01:18of AI Now Institute.
01:20Amba is a researcher
01:22with a very long experience
01:24in working in multiple regions.
01:27Previously, she was,
01:28and correct me if I'm wrong,
01:30but she was senior advisor
01:31on AI at the Federal Trade Commission,
01:34which is a very important
01:36regulatory board
01:37in the United States.
01:38She also served
01:39as a global policy advisor
01:41at Mozilla,
01:42and she served
01:43on the board of directors
01:44for the Signal Foundation,
01:46which I think is also
01:47a very important foundation
01:49in our world.
01:50And in 2024,
01:53Time Magazine listed you
01:54among the 100
01:55most influential people
01:56on AI.
01:57Is that true?
01:59Apparently.
02:00That's what they said.
02:02So, we're very,
02:03very proud to have you,
02:04and again,
02:05thank you for making it possible
02:06to be with us online.
02:11Just a few words also
02:12on Axel.
02:13Axel has been a friend
02:15for a long time,
02:15actually.
02:16We have known each other
02:17for a while.
02:18he's the founder
02:18of Make.org,
02:21Make.org Foundation.
02:23He worked for Publicis
02:24at one point.
02:25He also worked
02:26for Viva Tech
02:26at one point,
02:27which is very important
02:29for us.
02:30But today,
02:30it's especially
02:31in your new role
02:32that I want to have
02:33this conversation with you.
02:35Your new role
02:35as founder
02:36of the Worldwide Alliance
02:38for AI and Democracy.
02:39So, it's an important matter,
02:42and I think we have a lot
02:44to share today with you.
02:46So, first of all,
02:47when you take a look
02:48at the global title
02:50of the conference,
02:52it's a bit frightening
02:53in a way.
02:54Why are we asking
02:55if democracy
02:56can survive AI?
02:58Probably because we feel
03:00that AI is a hugely
03:02powerful tool,
03:04but it needs to be put
03:06in the right hands.
03:07In many ways,
03:07it's what we always say
03:09about science.
03:09Science also needs conscience.
03:12Science without conscience
03:13is the ruin of the world.
03:15And we have many times
03:18felt that AI came
03:20with a huge power.
03:22Some people say AI
03:23will take us to 1984.
03:25It will take us backwards.
03:28It will take us
03:28to the domination of robots.
03:30So, you read and you hear
03:32and you fear
03:33a lot of those things.
03:35Today, I would like to focus
03:36and ask our panelists
03:39two things
03:39about two potential risks.
03:42The first one is
03:42the segmentation
03:44of populations
03:45and opinion,
03:46which AI makes
03:47really possible.
03:48And if you really segment
03:50people into small slices,
03:52how are you going
03:53to be able
03:54to put them together again
03:55so that they make
03:56a democracy together?
03:57That's one question.
03:58And the second thing
03:59is what people describe
04:01as the potential
04:03autonomization,
04:04a difficult word,
04:05autonomization of algorithms.
04:08Algorithms are programmed
04:09by people.
04:10Will they have a conscience?
04:12Will they be autonomous?
04:13And thus giving birth
04:14to a sort of parallel
04:15virtual world?
04:17Those are the two main questions
04:18that I wanted to address,
04:19but feel free
04:20to address other issues.
04:22And so,
04:23what is your take
04:24on that topic?
03:24Amba, if I may start
04:26with you,
04:26how would you address
04:27those two topics?
04:29Thank you, Pierre.
04:30And firstly,
04:31thank you so much
04:32to VivaTech
04:33for having me
04:34and for the technology
04:35that makes it possible
04:36for me to zoom
04:37into you all
04:38from New York.
04:39I had an unexpected
04:41and very unfortunate snag
04:42while trying to fly out
04:44last night.
04:45So,
04:46instead,
04:47I'm here,
04:48you know,
04:48beaming in
04:49from New York time.
04:50The good news is
04:51that if there's
04:52one topic
04:53that can wake me up,
04:54it's AI and democracy.
04:55So,
04:57thanks again,
04:58Pierre,
04:58for that prompt.
04:59Just as some background,
05:01so at AI Now,
05:02which is the organization
05:03that I lead,
05:04which is,
05:04you know,
05:04the leading think tank
05:05on AI in the U.S.,
05:07we spend a lot
05:08of our time
05:09speaking to the public,
05:11to a lot of policymakers,
05:12and to industry
05:13about,
05:14you know,
05:14risks from AI
05:15and what to do
05:16about them.
05:17But one of the first things
05:18we end up doing
05:19in every conversation
05:21we have
05:21is reframing
05:23debates about AI
05:24from being debates
05:25about progress
05:26to being debates
05:27about power.
05:29And what I mean by that
05:31is there's a lot of talk
05:32right now
05:32about abstract,
05:33you know,
05:33milestones,
05:35you know,
05:35even, Pierre,
05:36what you just said
05:37about, you know,
05:38autonomous agents
05:39or even the fragmentation
05:41of population.
05:42There's a way
05:43in which we talk
05:43about AI often
05:44as if it has
05:45some inevitable quality,
05:47right?
05:47You know,
05:48will coders
05:49have jobs
05:49in 30 years?
05:51Will AI,
05:52will we have
05:53killer robots?
05:54Will there be
05:55autonomous agents?
05:56Will any of us
05:57have anything
05:58to do
05:58in the future?
06:00You know,
06:01as if the society,
06:02the rest of us,
06:03the general public,
06:04are sort of passive
06:05observers
06:05to this onward
06:06march of technology.
06:07And I think
06:08that's what needs
06:09a reframe
06:10because I think
06:11the question
06:11about the incoming
06:13AI future
06:14is really a question
06:15about who has power,
06:16who has more of it,
06:18who will have less of it.
06:19And I think
06:20that's why it's central
06:21to the democracy question
06:22because it should be
06:24the dominion
06:25of all of us,
06:26of the global public
06:26to try and shape
06:27which way we go.
06:28So not,
06:29you know,
06:30will coders have jobs
06:31in 30 years?
06:32Will we have,
06:33you know,
06:34will there be
06:35autonomous AI?
06:35I think it depends
06:36and it depends
06:37on all of us.
06:38So that's just,
06:39you know,
06:40my stock clarification
06:41at the beginning
06:42of any conversation.
06:43But getting to
06:45the meat of this discussion,
06:47I want to say
06:47two quick things
06:48and then I'd love
06:50to hear what Axel
06:51has to say on this too,
06:52which is
06:54one thing
06:54that's sort of
06:55less understood
06:56is that this new
06:58AI takeover
06:59is in fact
07:01concentrating power
07:02in many of the firms
07:04we already know.
07:05You know,
07:06you call them big tech.
07:07Sometimes some people
07:08are now calling it
07:08big AI.
07:09It's a slightly different
07:10but mostly
07:11the same constellation.
07:13And this is important
07:15because there's
07:15a kind of disruptive,
07:16you know,
07:18air around AI
07:19and I imagine
07:20at the venue
07:21that you all are at
07:22that there's
07:23much excitement
07:24about this particular
07:25disruptive potential.
07:26But it's equally
07:27important to remember
07:28what isn't changing
07:30or what's getting
07:31more entrenched.
07:33Now,
07:33market pundits
07:34and they did this
07:36post deep seek,
07:38you know,
07:38they told you
07:39any way you look at it,
07:41we're seeing more AI,
07:42this market
07:42is going to be huge.
07:44So as long as we can
07:45all access the benefits,
07:47it's good for the economy,
07:48right?
07:49It's good for the global economy,
07:50it's good for the ecosystem.
07:51But I think that
07:53narrating this AI future
07:55as a win-win
07:56is a little bit misleading,
07:58especially right now
07:59because what it obscures
08:01is that growing the pie,
08:03growing the AI pie
08:04as it stands currently
08:06is one way or the other
08:08eventually concentrating power
08:11in big tech AI firms,
08:12mostly firms
08:14that are domiciled
08:14in the US.
08:15And the reasons why
08:17I don't have to really
08:17explain to this audience,
08:18but just quickly,
08:19right,
08:20they have outsized access
08:21to the key inputs
08:23that determine
08:23both technical performance,
08:25but also long-term
08:26business viability,
08:27its data,
08:28compute,
08:29talent,
08:29but also access
08:31to markets
08:31via their control
08:32over the platform
08:34and devices.
08:35So the quick point here
08:37is that
08:37we already have
08:39a bunch of
08:39incumbent companies
08:40that have both
08:41the incentive
08:41and the ability
08:43to capture value
08:44from this new future.
08:45This matters.
08:46It matters not just
08:47for markets
08:48and for innovation,
08:49but it matters
08:50for democracy.
08:52And I think
08:52that's,
08:53you know,
08:54at least here in the US,
08:55I think I can imagine
08:57that this is well
08:58underway in the EU as well.
09:00But, you know,
09:01there's more consensus
09:02than ever right now,
09:03even among the general public,
09:04that taming the power
09:06of these large
09:07technology companies
09:08is essential.
09:09It's essential
09:10to democracy.
09:11But somehow
09:12that analysis
09:13or that understanding
09:14isn't porting
09:16to the AI space,
09:17which is still assumed
09:18to be disruptive
09:19and full of new entrants.
09:21And so I guess
09:22what I'm here to tell you
09:23and not as a point
09:24of cynicism,
09:24but as a point of
09:25this is what we need
09:26to do something about this,
09:28that, you know,
09:30this particular new future
09:31is making
09:32the dominance
09:34of a handful
09:35of unaccountable
09:36private players
09:37and some eccentric
09:38individuals
09:39within these companies
09:40inevitable.
09:41And that's what
09:42we need to resist
09:43through all means possible.
09:45I think that's really
09:46the challenge
09:46of our time.
09:48Thank you very much.
09:50You mentioned
09:51the idea
09:52of trying to tame
09:53the huge power
09:54of the huge giants.
09:57How would you describe
09:58the debate
09:58in the United States
09:59related to that matter?
10:01Because here in Europe,
10:02sometimes in France,
10:03people say,
10:04well, first of all,
10:04let them develop
10:05their solutions,
10:06let them grow,
10:07let them create
10:08incredible things,
10:09and then we'll see
10:10if we need regulation.
10:12But let's not,
10:13you know,
10:13harness creativity first.
10:16How would you describe
10:16this trade-off
10:17between freedom
10:18and regulation?
10:20I'm going to get
10:21to the U.S.
10:22in a second,
10:23because as you can imagine,
10:24the situation here
10:25is volatile.
10:26But to what you just said,
10:27I actually think
10:28that this is a,
10:29you know,
10:30I understand why
10:31it's a misconception.
10:32First we build,
10:33then we regulate.
10:34But I think that
10:34we should try
10:35and view regulation,
10:36particularly market-oriented,
10:38pro-competitive interventions,
10:40as laying the foundation
10:41for disruption to happen.
10:43Because I know
10:44it's not something
10:45anyone wants to hear,
10:46but at this moment,
10:47this market is not,
10:48the conditions
10:49are not ripe
10:50for disruption.
10:51Even when they seem
10:52like they are,
10:5375% of startup innovation
10:55in this market
10:56in 2023
10:57came from three firms,
10:59three big tech hyperscalers,
11:01you know their names,
11:02right?
11:02We're seeing
11:03that one way
11:04or the other,
11:04even the most
11:05kind of innovative
11:06so-called AI startups,
11:08all roads do lead
11:09back to big tech.
11:10This is not a market
11:11that long-term
11:12is going to serve us,
11:14serve the global economy.
11:15And so I think
11:15we need to think
11:16of regulation
11:17less as this passive,
11:18burdensome friction
11:19and much more
11:20as opening
11:21a Pandora's box
11:22of innovation.
11:23I think,
11:24you know,
11:24at our organization,
11:25we argue that
11:26in the cloud market,
11:27especially the U.S.
11:28should be at the forefront
11:29of pushing
11:30for structural separation.
11:31If you own the cloud,
11:33you should maybe
11:33have restrictions
11:34on not just
11:35how you play
11:36in the AI market,
11:37but also whether
11:39you play
11:40in other,
11:40you know,
11:41other stages
11:42of the AI supply chain
11:43at all
11:43because there's
11:44clear conflicts
11:44of interest
11:46and a clear incentive
11:47to cannibalize
11:48the market.
11:48And then very quickly
11:49on the U.S.,
11:51you know,
11:51as you all know,
11:53there are deregulatory
11:54headwinds globally
11:55and the Trump administration
11:56has made it very clear
11:57that, you know,
11:59especially foreign regulation
12:00will be seen
12:01as a threat
12:02to U.S. national interests.
12:03But I want to also say
12:05something that's maybe
12:05being missed,
12:06which is that
12:07the one area
12:09where we're seeing
12:10continuity
12:10in some ways
12:12from Biden
12:13to the Trump administration
12:14is actually
12:15competition regulation.
12:16The new FTC
12:17has continued
12:18many of the cases,
12:20you know,
12:20the big antitrust cases
12:21that everyone's been
12:22break them up,
12:23search cases,
12:24all of those
12:24are continuing
12:25in ways that maybe
12:26not everyone expected
12:28and there's more
12:29continuity there.
12:30And I think the reason
12:31is because,
12:32you know,
12:32we can get into
12:33the specifics of this,
12:34but at a very high level,
12:35I think the fact
12:36that the power
12:37of these corporations
12:38needs to be tamed
12:40not just for foreign governments,
12:42but for any state actor
12:43is seen as a kind of
12:46an active threat
12:47to be managed,
12:48I think,
12:49is common sense
12:50at this point.
12:51Okay.
12:52Thank you very much
12:53for your takes
12:54on that broad topic.
12:56I will give the floor
12:57in a second
12:58to Axel.
12:59We just mentioned,
13:01you know,
13:01the threats
13:02that are linked
13:03also to the progresses
13:04and created
13:05by large companies.
13:07I think you're
13:08a keen believer
13:09in the power
13:10of the people.
13:11I am keenly afraid
13:13sometimes
13:13that people
13:14are so segmented,
13:16are in cognitive tunnels,
13:18are separated
13:19from each other,
13:20are not making
13:20a crowd.
13:22Some writer wrote
13:24at one point,
13:25we are alone together,
13:26and alone together
13:27is not perfect
13:28for a democracy.
13:29You need to be
13:30really together.
13:31So,
13:31how would you address
13:32this topic?
13:33How will AI
13:35give more people,
13:36more power
13:37to the people,
13:38actually?
13:39Thank you,
13:40Pierre.
13:40I think what you say
13:41is critical.
13:43What is at stake today
13:45is not an interesting
13:47intellectual debate.
13:49We all know
13:51that our democracies
13:53are in danger
13:54currently,
13:55without AI,
13:56I mean,
13:57worldwide,
13:58in a new
13:58socio-political
14:01context,
14:02and there is not
14:04any election
14:06in the world
14:07which is not
14:08very aggressively
14:10attacked
14:11by disinformation
14:13campaigns,
14:14using mainly
14:15social media
14:16and being
14:17AI-powered.
14:19As you saw,
14:20last year,
14:21there was
14:21Romania,
14:22whose election
14:23was finally annulled;
14:24the year before,
14:25there was
14:25Slovakia.
14:27So,
14:27what we are talking about
14:28today
14:29in this question,
14:31can democracy
14:31survive
14:32artificial intelligence?
14:34It's a real question,
14:35and I'm not sure
14:36about the answer.
14:36So,
14:38the risks
14:38that are
14:39on the table
14:41are twofold.
14:42The first risk
14:43is very simple,
14:45misuse
14:46of AI
14:47to affect,
14:49to impact
14:50the legitimacy
14:51of an election.
14:54And this is
14:55everywhere.
14:56So,
14:56it could be
14:57misused
14:57by a tech company
14:59and then,
14:59specifically,
15:00social network
15:01companies.
15:02We also
15:04saw the use
15:05of X
15:05in the German
15:07election recently.
15:08It could be
15:09misused
15:09by foreign
15:11countries.
15:13There are the Russian
15:14attacks,
15:15at least
15:16in Europe,
15:17but they are
15:18not the only ones.
15:19Most countries
15:20are doing
15:21disinformation
15:22campaigns
15:23at a scale
15:24unimaginable.
15:26Unimaginable.
15:27In terms
15:28of number
15:29of attack,
15:30in terms
15:30of smartness
15:31of attack,
15:32there are
15:32thousands
15:33of people
15:33in those
15:34countries
15:34who are
15:35thinking
15:35every morning
15:36how to
15:36destabilize
15:38a country
15:38in an election.
15:39And it also
15:41can be used
15:41by politicians
15:42themselves.
15:43So,
15:44the attack
15:45on the election
15:46is the first
15:47big risk.
15:48If we lose
15:48the legitimacy
15:49of an election,
15:51we lose everything.
15:52The second
15:53field
15:53is maybe
15:55even worse.
15:57The LLMs.
15:58It's no longer
16:00big tech
16:01as social
16:01media
16:02companies;
16:03it's big tech
16:03as frontier
16:04model
16:04companies.
16:05Those LLMs
16:07embed
16:08biases
16:09and can
16:10radically
16:11change
16:11the reality
16:13of the
16:14free will
16:15of people.
16:17Who
16:18will be
16:19deciding
16:19about a position
16:20on a given
16:21controversy
16:22when AI
16:23is everywhere?
16:24So,
16:25it could be
16:26really
16:27unfortunate
16:27because
16:28the training
16:29data
16:29for LLMs
16:31is not
16:31properly curated,
16:32or it could
16:33even be
16:35weaponized
16:35by tech
16:36companies
16:37or by the
16:38owners
16:38of tech
16:39companies.
16:41What are
16:41the solutions?
16:42There are
16:42solutions involving
16:43regulation;
16:43we'll not
16:44talk about those.
16:45And then
16:45comes the
16:45question of
16:46the role of
16:47the people
16:48in this.
16:48The first
16:49solution
16:50is to
16:50build
16:51the
16:51societal
16:52resilience
16:54of
16:54citizens.
16:55How can
16:56we
16:57ensure
16:57that
16:58our
16:58citizens
16:59are not
17:00only
17:00aware
17:00but
17:01are also
17:02actors
17:03in
17:04countering
17:05those
17:06attacks?
17:07So,
17:08it means
17:08involving
17:09people
17:10everywhere,
17:11not only
17:11in the
17:11problem
17:11but also
17:12in the
17:12solution.
17:13And that's
17:14opening,
17:15as you
17:15said,
17:16many fields
17:16of
17:17democratic
17:18spaces
17:19where we
17:19did not
17:20believe
17:20they could
17:21exist
17:21before.
17:22So now,
17:23when in
17:24a given
17:24country
17:25you have
17:2610%
17:26of the
17:27population
17:27who are
17:28involved
17:29in
17:30designing
17:31public
17:32policy
17:33based on
17:33the
17:33consensus
17:34of the
17:35citizens,
17:36you are
17:37countering
17:37AI.
17:39And these
17:40democratic
17:40principles
17:41can also
17:42exist in
17:42other
17:42contexts.
17:43I will
17:44cite one example:
17:45multilateralism.
17:47Today,
17:48multilateralism is
17:49almost dead,
17:50not working
17:50and not
17:53really
17:56effective
17:57in
17:58international
17:59cooperation.
18:00We could
18:01have millions
18:01of people
18:02collaborating
18:02with governments
18:04on
18:04international
18:06cooperation.
18:07We could
18:07find some
18:08common
18:08ground
18:09on which
18:09everyone
18:10can work.
18:11And the
18:12second one
18:12is peace
18:13building.
18:13We have experimented
18:14with
18:16organizing
18:16collaboration
18:17between
18:17civilians
18:18on
18:19both sides
18:19of
18:19conflict
18:20zones.
18:21Identifying
18:22the consensus
18:24significantly
18:24reduces
18:26the level
18:27of tension.
18:28So one
18:30of the
18:30ways to
18:31counter
18:31the
18:33hyper
18:34segmentation
18:35and to
18:35counter
18:36disinformation
18:36is to
18:37work on
18:38involving
18:39the people
18:40and what
18:40they have
18:40in common.
18:41And
18:42on this
18:43field,
18:43AI
18:43becomes
18:44a solution.
18:46Today,
18:47thanks to
18:48AI,
18:49you can
18:49help
18:50anyone
18:50without
18:51bias.
18:52We can
18:53help anyone
18:54to
18:54maximize
18:55their
18:56ownership
18:57of
18:57society.
18:59They can
18:59participate,
19:00they can
19:01understand
19:01complex
19:02content,
19:04everything
19:04becomes
19:05accessible
19:06and simple
19:06at the
19:07level of
19:07understanding
19:08of anyone,
19:09and suddenly
19:10you reconnect
19:12every
19:13citizen with
19:14the complexity
19:15of the
19:15world.
19:16And I
19:17think that
19:17the very
19:18important
19:18thing is
19:20decomplexification
19:20of the
19:21world thanks
19:22to AI.
19:23And I
19:23will finish
19:24with one
19:24thing,
19:25which is that we
19:26only see
19:261% of
19:28the impact
19:28of AI
19:29on democracy.
19:31Everything
19:32is in
19:32front of
19:32us.
19:33And this
19:33is why,
19:34as you
19:34mentioned,
19:35we launched
19:36at the AI
19:37Action Summit
19:37the Worldwide
19:39Alliance for
19:39AI and
19:40Democracy.
19:40So far,
19:42it's 300
19:42organizations
19:43comprising
19:44mainly
19:45research
19:46centers,
19:46but also
19:47NGOs,
19:47institutions,
19:48think tanks,
19:49companies,
19:50which are
19:51deeply
19:51working on
19:52collaborating
19:53to find
19:53solutions
19:54for the
19:56good use
19:57of AI
19:57in democracy
19:58to protect
19:58democracy
20:00and to
20:00have a
20:01sustainable
20:01free will
20:03and a
20:03sustainable
20:04democracy.
20:05Thank you
20:05very much
20:06for those
20:06inspiring
20:07remarks.
20:08And,
20:09we have
20:09four more
20:10minutes
20:10to go.
20:12We have
20:13understood
20:13actually
20:13that with
20:14the
20:14algorithmic
20:15world,
20:16we need
20:16a new
20:17sort of
20:17grammar.
20:18How are
20:19those
20:20algorithms
20:20written?
20:21What is
20:21the new
20:21language?
20:22So,
20:23what would
20:23you propose
20:24in terms
20:24of solutions
20:25in the
20:25same way
20:26Axel
20:27tried to
20:28describe
20:29some of
20:29them?
20:29How can
20:30we make
20:31AI the
20:32thing of
20:32everybody
20:33and not
20:34a thing
20:34for a
20:35limited
20:36number
20:36of players?
20:37How can
20:38you distribute
20:39the AI
20:39power in
20:40a more
20:40democratic
20:41way?
20:43There are
20:44many questions
20:45in that one
20:45question,
20:46but let me
20:46start with
20:47how you
20:47described it,
20:48which is
20:48what is
20:48the new
20:49grammar
20:49of
20:50accountability
20:51perhaps,
20:52right?
20:52Especially
20:53when we're
20:53faced with
20:55an industry
20:55that derives
20:56its power
20:57from complexity,
20:58from obscurity,
20:59the fact that
21:00many of us
21:01in the general
21:02public perhaps
21:03don't feel
21:04expert on
21:05these systems.
21:06And so
21:06I think
21:07the first
21:07reframe
21:08and I
21:09think the
21:09work that
21:10Axel,
21:11you and
21:11others are
21:12doing is
21:13so critical
21:13here,
21:14is to
21:14reclaim that
21:15expertise and
21:16agency,
21:17that AI
21:17isn't just
21:18this far-off
21:19thing over
21:19here,
21:20it's not
21:21just a
21:21technology
21:21that's being
21:22used by
21:23us,
21:23it's being
21:23used on
21:24us,
21:25right?
21:25Eventually,
21:26whether or
21:26not,
21:27I know there
21:28are a lot
21:28of folks
21:28from the
21:28tech industry
21:29here,
21:29but as
21:30public
21:30citizens,
21:31whether it's
21:31our hospitals,
21:32our healthcare,
21:33education systems,
21:34the culture we
21:35consume,
21:36we are being
21:36subject to
21:37these technologies
21:38as well,
21:38and so I
21:39think reclaiming
21:40agency over
21:41the fact that
21:42this kind of
21:44rewriting of
21:45our economic
21:46and social
21:46foundations is
21:47not just a
21:48tech thing,
21:49it's a subject
21:50that we all
21:51need to have
21:51hooks into.
21:53I think the
21:53other thing I
21:54want to say,
21:54and this is
21:55really important
21:55in terms of
21:56a reframe,
21:57is the
21:57question or
21:58let's say the
21:59debates we
21:59should be
22:00having right
22:00now are
22:01not,
22:01you know,
22:02is chat
22:03GPT useful
22:04or not?
22:04How good
22:05is it?
22:05Can AI
22:06have good
22:06uses?
22:07Of course
22:07they can
22:08and we're
22:08all,
22:09you know,
22:09enjoying
22:10playing with
22:11these shiny
22:11new toys,
22:12but I think
22:13the real
22:13question we
22:14need to
22:14ask is
22:15whether the
22:15current AI
22:16industry's
22:17unaccountable
22:17power is
22:18good for
22:19society,
22:20right?
22:20And so I
22:20think that
22:21reframe or
22:22not debating
22:23the merits
22:23of just
22:24individual
22:24applications,
22:25but thinking
22:25wholesale about
22:26where we're
22:27headed is
22:27key.
22:28And then
22:28finally I'll
22:29say what
22:29is our
22:30toolkit?
22:30You know,
22:30we want to
22:31hold this
22:31sector to
22:32account,
22:32what tools
22:33will we have?
22:33I think
22:34the obvious
22:35answer and
22:35the answer
22:36you might
22:36expect from
22:36me is to
22:37say regulation.
22:38I think
22:38that's one
22:39of several
22:40tools in
22:41the toolkit.
22:41I'll say
22:41very honestly
22:42in the
22:43U.S.
22:43if there's
22:44one learning
22:44over the
22:45last decade,
22:46it's that
22:46we can't
22:47just rely
22:48on,
22:48you know,
22:50the kind
22:51of elite
22:51or kind
22:52of top
22:53down
22:53interventions
22:54in this
22:54space.
22:55Because at
22:55the end
22:56of the
22:56day,
22:56regulators,
22:57politicians
22:58are also
22:58part of
22:59society.
22:59They're also
23:00feeling the
23:00pressures.
23:01They also
23:02have an
23:02army of
23:03corporate
23:03lobbyists
23:03lining up
23:04at their
23:04door.
23:05And so I
23:05think they
23:06also need
23:06to feel
23:07the public
23:07pressure,
23:08the public
23:08power that
23:09essentially
23:10influences them
23:11to pay
23:12attention to
23:12the needs
23:12of the
23:13broader
23:13public,
23:13not just
23:14the tech
23:15industry,
23:16and to kind
23:16of bend
23:16the arc
23:17of this
23:17technology
23:18towards the
23:18public
23:19interest.
23:19So I
23:19think at
23:19the end
23:19of the
23:20day,
23:20we wrote
23:21a whole
23:22report about
23:22this last
23:23week.
23:24We think
23:24there's a
23:25robust
23:25toolkit.
23:26We think
23:26workers in
23:27every sector
23:27have a
23:28really important
23:29role to
23:29play.
23:30And finally,
23:32a really
23:33important thing
23:33we need to
23:34do, and I
23:34hope that there
23:35are many
23:35examples of
23:36this at
23:36VivaTech,
23:37is to start
23:38seeding
23:39visions of
23:40public interest
23:40AI that
23:41kind of doesn't
23:42just feed the
23:43incentives of
23:44this current
23:44market structure,
23:45but is able
23:47to help us
23:47imagine an
23:48alternative that
23:49is guided, not
23:50just by different
23:51incentives, but
23:52by a different
23:52vision.
23:53Thank you.
23:54Thank you so
23:54much, Amba.
23:55Thank you, Axel,
23:56for your inspiring
23:57remarks.
23:57I think we
23:58have ended
24:00this conference,
24:01but I think
24:02you heard, all
24:03of you, there
24:03is a call for
24:04action.
24:05There's a call
24:05for education.
24:06I think we
24:07really need to
24:07learn how to
24:09use this new
24:11technology, how
24:11to harness its
24:12power.
24:13There is also a
24:14call that we
24:14haven't really
24:15heard, but
24:15which is
24:16important.
24:16You need to
24:17be frugal to
24:18some extent, not
24:19use too much
24:20AI because it's
24:21going to harm
24:21the planet also.
24:23As you know, the
24:23amounts of
24:25calculations required
24:26for some specific
24:27things are very
24:28large.
24:28So we need to
24:29learn the new
24:30grammar and the
24:31new right use of
24:33AI.
24:33So thanks for
24:34being with us.
24:35I hope this was
24:35enjoyable for you.
24:36Have a great day
24:37at VivaTech.