AI Safety and Democratic Governance (remote)
Transcription
00:01 Hello, Professor. Hello. How are you? Very well, thank you, and you? Very well. Thank you for being with us.
00:11 I know it is extremely difficult to get time with you because you are extremely busy, and we are truly honored to have you with us.
00:25 I think that, amid all the discussions about AI and the risks associated with it, there is one subject we really have to cover with you: what can go wrong. You have talked about the risks, and it would be interesting to have your view on what can go wrong with AI.
01:07 In fact, there are many things. I worked on a report with 75 scientists and a panel of 30 countries, plus the European Union and the United Nations, on the risks of general-purpose AI.
01:24 And on the risks, you know, there are already risks we need to be concerned about, in terms of influencing public opinion with deepfakes and with dialogue systems. There are problems with discrimination and bias. All of these are already documented, but we also have to think about how AI is going to become more powerful.
01:52 There is a lot of discussion about things like artificial general intelligence and what that would mean in terms of other risks. So there is a lot of concern about kinds of misuse beyond the ones I just mentioned.
02:06 If terrorists or rogue states use AI to build new weapons, or to launch cyberattacks... If you go on the dark web, you can already buy systems that can be used for fraud and cyberattacks.
02:23 And then, of course, there are concerns about loss of control.
02:27 Now, whether it is misuse or loss of control, science does not have an answer to how we can make sure that AI behaves according to our norms and our laws and is not used to harm people, and so on. So, until we can do that, we have to be very careful.
02:47 So the problem is not that we know one precise scenario. I mean, many scenarios have been discussed, but there is a lot of uncertainty, and scientists do not agree on what is going to happen.
03:01 From the point of view of the public and of decision-makers, the main risk is associated with not knowing what is going to happen, given that AI is going to become very, very powerful as the advances continue.
03:15 I think the most important thing to do is to be cautious, to try to understand these risks, to study them better, and to put mitigation measures in place as we make progress in understanding them.
03:27 This kind of risk is something we have learned to handle in the past with many other technologies. Chemical weapons, nuclear weapons: their dissemination has been controlled, and up to now we have not seen a major accident, and we have not seen terrorists using nuclear or chemical weapons.
03:55 So what makes the fear that AI could be misused more serious than what we can see with these other weapons of mass destruction?
04:12 Well, for all the types of weapons you mentioned, there are treaties and mechanisms in place to help with monitoring and provide some amount of oversight. We have no such thing for AI.
04:27 The other issue is uncertainty, as I said. We have known these weapons for a while, so we have good expectations of what can go wrong, and national security services are working to try to prevent those things from happening.
04:45 In the case of AI, there are a lot of unknown unknowns, right? We don't know exactly how these systems will be used a few years from now. And we also need to put in place the kind of treaties and regulation that exist for these other risks.
05:00 So, with all of this combined, the potential magnitude of the risk could be much higher. When people talk about loss of control, they are talking about a potential magnitude of damage that could be even larger than all of these things.
05:18 When you look at AI, it has spread very quickly. We have been speaking of AI since the early 50s, and of Turing, and you know what Turing is all about, because you are one of the very few Turing Award winners, and congratulations again.
05:42 But now, you can have three people in a lab who can develop a new algorithm or new solutions, even LLMs or whatever. And it's not the same when it comes to a nuclear plant or other critical facilities.
06:06 So, how can you stop it, and I think we should not stop it, how can you control and mitigate the risk when you have three people in a room who can create something that can be detrimental to democracy or to the safety of the world?
06:31 Well, so, what the various pieces of legislation, or the U.S. executive order, are looking at is first making sure that governments have visibility on these projects. So, if you build a nuclear plant, the government has to know, and you have to get authorization for it. So, that's the first step. We need to be...
06:50 So, does it mean that people should declare the content of their algorithm and what they are doing with it?
06:56 Well, most importantly, what they're doing to make sure we get something safe, that cannot be misused by bad actors, and that they will not lose control of it.
07:08 If you build a nuclear plant, before you do it, you have to show plans to the government, with calculations that scientists can verify, showing that the chances of something bad happening are very, very small. We need something similar, in order to make sure that we don't do something that could blow up in our face, in a virtual way.
07:30 So, Yoshua, if I understand well, you are more European than many people from the American continent, because you are in favor of regulation.
07:43 Well, actually, the Biden executive order is taking those risks very seriously, I think maybe even more than the European AI Act in some ways. It's just that it's not legislation, so it doesn't have teeth.
07:57 But I think the intentions here, to make sure that those systems don't fall into bad hands and that the companies do as much as they can to evaluate the risks, are very important and should be in every legislation.
08:13 So, how can we ensure that we keep a good balance between accelerating that kind of business, which is developing very fast and which, by the way, is bringing a lot of possibilities to increase efficiency, effectiveness, innovation, help, and to create new jobs, and at the same time avoiding that it is diverted into something which can be bad for democracy, the press, or simply privacy or the use of images? How can we manage that balance?
09:07 Well, you know, we've done it in pretty much every other sector of society where we have regulation and we have innovation. It's just that the people in the computer industry have been used, for decades, to just doing whatever they want without any kind of control by the government. But, of course, computers are changing the world, and AI is going to change it even more, so there's no reason, really, that it should be an exception, that it continues to be an exception where you can just do whatever you want without thinking about the consequences.
09:34 The problem is that the market forces are not sufficient here. The cost of not being careful enough is going to be borne by the whole of society, so it's an externality from an economic point of view, which means that there is not enough incentive for companies to do the right thing to protect the public, and that's why we need regulation.
09:56 Like, you know, you have regulation for your sandwich, you have regulation, of course, for your nuclear plant, and you have regulation for almost everything in society, to make sure we strike the right balance between innovation, productivity and protecting the public.
10:13 In March or April last year you signed an open letter with Elon Musk and a few other prominent figures, and when we look at it a year later, it doesn't seem that it has had any effect, and nobody seems to have slowed down or stopped their development. How do you read that?
10:46 Actually, I disagree. It has had a huge effect. Now, it hasn't had the effect... Let me finish. It hasn't had the effect of companies making a pause to study what they were building, but it has had the effect of raising concern in public opinion and creating a debate.
11:03 You know, now we have these discussions at the G7 and the UN and the OECD and UNESCO, and we have new legislation being proposed around the world, so I think it's a huge success.
11:17 Now, when I signed this letter I didn't expect the companies to stop, because they can't, right? They're in competition with each other. It's just the way our system works: in order to survive they have to compete and be at the edge of capabilities.
11:29 So the real effect is on how collectively we decide to rein in those risks and make sure we do the right thing globally.
11:39 I agree with you. You have, and that's the reason we are talking about it, you have raised the concern, and it is something which is talked about in a lot of circles. It has not stopped development, it has not created a pause, but clearly you have raised the concern, and it is something that everyone is thinking about.
12:01 I would like now to move on to regulation. When we look at what's happening, we see a lot of countries: the U.S., and you have mentioned the Biden executive order; Europe, with Thierry Breton and his AI Act; and we see that regulations are not completely aligned, and this is leading to some kind of discrepancies or opposition. And we also see some countries, like China or India, who are not part of that kind of agreement. So what can we do? Should we have a global agency in charge of AI, as we have a global agency in charge of nuclear security?
12:51 Well, yes and no. So yes, we need global coordination, but the timeline for the AI risks is too short, and setting up hard constraints at the global level is going to take too much time. We should go in that direction, but I think what's much more practical is that countries try to harmonize with each other, and it's in their interest: from the point of view of the companies, it's going to be much easier if the various legislations are as harmonized as possible.
13:22 Also, the techniques for evaluating risks are things that can be shared across countries. So there are a number of new AI safety institutes, the UK started, and then the US, Japan, now Canada; countries are going to put together organizations that are going to be watchdogs for these systems, and they can work together to develop the right state of the art, because one thing that's also important to know is that we don't even know how to evaluate those risks properly right now, and so we need to be really efficient at it, and we can work together globally, the different countries that are invested in this, to do that.
14:02 Most importantly, you mentioned India: well, where that evaluation is most important is where the large-scale AI systems are being developed, and as these countries build up the scale of their systems, I think, hopefully, they will join the ranks of those who want to do it right.
14:24 As we are coming to a close, I would like to ask you about a very important debate which exists today, between open and closed. There are many opinions, which vary; even the same person may have an opinion favorable to open at one point in time and then to closed. So I would like to have your point of view on the differences, on the interest of being open, and on what your opinion is now on open versus closed.
15:05 Yeah, as you know, I've been a huge proponent of open science and open source for my whole career. In fact, my group put out one of the first deep learning libraries in open source, and it helped to fuel deep learning.
15:19 But the main advantage of being open, in science and in code, is that it accelerates progress, and so that's great, that's important, including, by the way, on safety, right? Because the fact that we have some of these models available to academics means they can explore how to make them safer.
15:39 Now, you know, there are positives, but there are also negatives. Once you release the weights and the code, anybody can do whatever they want with it; in particular, they can easily get rid of the safety protections, which means bad actors can use that. So that's the danger, right?
16:00 So, because there are pros and cons, and the effect will be on the whole of society, the real question is who decides whether a particular piece of code should be open or not. Up to now, it's just been the developers or the CEOs of a company, and so long as these systems are overall beneficial and there's no question about the risks, then that's the best thing to do.
16:20 In fact, for most AI, in fact all of the AI that currently exists as far as I know, we are probably better off being open. But we have to think about when we reach a threshold where the risks become larger than the advantages, and who decides where the right threshold is.
16:35 It should be a democratic choice. It shouldn't be the CEO of a company deciding: oh, I like this, I'm going to go and make my potentially dangerous tool available to all the bad actors in the world. It should be a choice, you know, maybe through international negotiations, but also in each legislation; it shouldn't be something that is left to the whims of a particular individual.
16:59 Yes, but when you look at the state of the world, getting opinions aligned is something which is extremely difficult. We have seen that on climate change. Do you believe that it can happen on AI?
17:11 No, it doesn't need to be aligned. The whole point of democracy is to aggregate the opinions of many different people, who may disagree, and then come to a decision that is as close to a consensus as we can get, right? That's what democracy is about, for every decision, and we can do the same thing here.
17:29 Now, you know, you have committees, you have people who represent different parties and different interests, and we come to an agreement that is, as much as possible, in the interest of the larger society.
17:40 Yoshua, I have one last question. When you look at the world of AI, there are two dominant forces, two giants, which are the US and China. Europe is a dwarf, and many countries in Europe are trying to build something to compete, so that AI is not only controlled by these two giant countries. And if you were to advise President Macron or the European Commission, what would you advise them to do in order to massively accelerate the speed at which we are developing AI in Europe?
18:31 So, before I answer this: we're going to be building AI systems that are going to be smarter and smarter, and intelligence gives power, so whoever controls that intelligence is going to have a lot of power, economic power but also military power. And so I think it's important for Europe to build up that power for itself, including France, of course.
18:54 Now, that being said, that power is also something that is dangerous: like, you build nuclear weapons, and eventually that can blow up in your face if you're not careful. So we have to do both. We have to build something that can help us, for example, defend against attacks; so if the Russians use AI to attack our cyber infrastructure, we may need AI to defend ourselves.
19:17 So we absolutely need to develop AI, but in such a way that we can both defend ourselves and also do it safely, because if we build something that's going to blow up in our face, then it's not good either. So we have to do both: we have to do it safely and make sure we build up the capacity to defend ourselves in case one day these things happen.
19:42 Yoshua, a big thank you, a big thank you. Thank you very much.
19:52 And next year, you have to be here in person, because I'm sure that we're going to go much farther and we can have a lot of other meetings. So I'm counting on you to have you in Paris next year at VivaTech. Thank you, thank you very much, see you soon. My pleasure. Bye, everyone.