A conversation on AI

Transcript
00:14Cédric O?
00:16Cédric O, who was the minister for digital affairs.
00:21Cédric O, you can come up. Thank you.
00:29Cédric was the minister for digital affairs, and a fantastic minister.
00:35We will miss him as a minister, but we gain a good friend.
00:46Yann LeCun, from Meta.
00:51Yann already spoke here last time.
00:55Yann, welcome.
00:58And my dear friend Eric Schmidt.
01:03Eric has been a great friend for many, many years.
01:12And many more to come.
01:14So, they are going to have a great session.
01:16Cédric, you have the responsibility of moderating these brilliant people, because they are absolutely brilliant.
01:22Thank you.
01:24Thank you for this introduction. Everything Maurice said. Thank you very much.
04:00Nice.
04:00"'So many things have happened.
04:01I think the progress in AI has, if anything, accelerated.
04:06There was sort of a cusp around 2012, 2013, when we started hearing about deep learning
04:12and things like that, when companies like Google and Meta started to take an interest in it.
04:18And then, in 2015-2016, I think what caused the revolution that Eric is
04:24talking about
04:24is a new way of training systems called "self-supervised learning".
04:31So, in fact, we no longer train a machine to accomplish a particular set of tasks.
04:38We train it, in a sense, to fill in the blanks.
04:41Large language models are trained on pieces of text in which some words
04:47have been removed.
04:47And the system is trained to predict the missing words, essentially.
04:52We are trying to do the same thing for images and video.
04:54It does not work as well yet, but there is very fast progress in that direction.
04:58And I am sure that once we are able to do this in general,
05:02not just for text, but also for things like video,
05:05we will have a path to animal-level or human-level intelligence that we do not have today.
05:12And the development of these techniques has been a real revolution
05:16for things like content moderation on social networks or search engines,
05:21where everything you see has been filtered by systems of this type.
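To make the "fill in the blanks" objective concrete, here is a minimal sketch of masked-token prediction, the self-supervised recipe described above. It is an illustration only: the tiny model, vocabulary size, masking rate, and the random stand-in "text" are all assumptions for the sketch, not the configuration of any real system.

```python
# Minimal sketch of masked-token prediction ("filling in the blanks").
# Toy sizes throughout: the vocabulary, model dimensions, masking rate,
# and random stand-in "text" are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, D_MODEL, MASK_ID = 1000, 64, 0   # reserve token id 0 as [MASK]

class TinyMaskedLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_MODEL)
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(D_MODEL, VOCAB)   # per-position token scores

    def forward(self, ids):
        return self.head(self.encoder(self.embed(ids)))

def mask_tokens(ids, rate=0.15):
    """Hide `rate` of the tokens; the model must reconstruct them."""
    ids = ids.clone()
    hidden = torch.rand(ids.shape) < rate
    # Loss is computed only at hidden positions (-100 = ignore elsewhere).
    targets = torch.where(hidden, ids, torch.full_like(ids, -100))
    ids[hidden] = MASK_ID
    return ids, targets

model = TinyMaskedLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
tokens = torch.randint(1, VOCAB, (8, 32))     # stand-in for real text
inputs, targets = mask_tokens(tokens)
logits = model(inputs)                        # (batch, seq, VOCAB)
loss = F.cross_entropy(logits.reshape(-1, VOCAB), targets.reshape(-1),
                       ignore_index=-100)
loss.backward()
opt.step()                                    # one self-supervised step
```

Real large language models apply this same objective at vastly larger scale, over real text rather than random tokens.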
05:27And the progress in the last two or three years has been phenomenal.
05:31What is interesting, by the way, is that we know that it works,
05:34but we do not completely know why it works.
05:37So, when you look at these models and the way they think,
05:41they are not thinking about the things you think about.
05:44They have learned in a different way.
05:47And my own view is that this means that the kind of intelligence
05:51we are seeing is going to be confusing to us.
05:55It is not going to make sense to us, but it is going to work.
05:59I will talk a little bit about that.
06:01And we have just talked about the past few years.
06:06I would like to ask you to talk about the future.
06:10And I cannot fail to mention LaMDA-gate.
06:14Let me put it that way.
06:15So, for those who do not know about LaMDA-gate,
06:19which happened a few days ago:
06:21LaMDA stands for Language Model for Dialogue Applications.
06:24This is a system developed by Google.
06:29And one of the engineers working there
06:32said a few days ago that its AI had achieved singularity.
06:39Singularity meaning that the artificial intelligence
06:42was self-conscious, conscious of itself.
06:46And it reminds me, Yann, of a discussion
06:48that we had four years ago with President Emmanuel Macron
06:53when you told us that the problem with artificial intelligence
06:56was that we called that intelligence,
06:59although it's not really intelligent.
07:01So how do you react to that declaration?
07:03And how do you see development of AI in the near future?
07:07Okay, so my immediate reaction to this
07:09is that the person in question
07:11who has made those declarations,
07:12I do not know anybody
07:14who is sort of seriously in the AI research
07:17and development community
07:18who has any belief
07:23that the system in question is sentient.
07:26It's not.
07:28That particular...
07:28Sentient meaning conscious of itself.
07:30Some sort of conscious.
07:32He just got a little ahead of himself.
07:34He got a little bit too far ahead.
07:37Yeah, he's a bit of a mystical kind of person
07:40and sort of perhaps has a tendency for that.
07:43He was actually kind of, you know,
07:46put on leave from Google for a while
07:50because of what he did.
07:52But nobody seriously believes any of that.
07:55Now, it is true that current systems
07:57are nowhere near human intelligence,
07:59nowhere near sentience,
08:01nowhere near having consciousness
08:03or anything like that.
08:04But there is no doubt
08:06that eventually,
08:08sometime in the future,
08:09perhaps one or two decades,
08:11we'll have systems
08:11that have all the characteristics
08:13of intelligence
08:14that we observe in humans and animals.
08:16And there is work already
08:18towards kind of, you know,
08:20moving towards that direction.
08:23In my opinion,
08:24it requires some concept and components
08:27that either don't exist
08:29or have been proposed
08:30but have not been actually implemented.
08:32In fact, it's a big debate
08:33within the AI research community.
08:35You know,
08:35do we need to just scale up
08:36the current models that we have,
08:38just make them bigger
08:38and, you know,
08:40intelligence will emerge?
08:41I believed that
08:42when I was a grad student,
08:43like, you know,
08:44a young grad student.
08:45I don't anymore.
08:48Are we,
08:49you know,
08:50do we need to just make
08:51the reinforcement learning methods
08:52more efficient?
08:53I don't believe
08:54that's sufficient either.
08:55So I actually have
08:56another proposal
08:57which I don't have time
08:58to explain here
08:59but I just finished
09:00writing a paper about it
09:01which will appear next week
09:04which is basically
09:05my plan or idea
09:06for how we should go forward.
09:08There's no doubt in my mind
09:09that eventually machines
09:10will have human-level intelligence
09:12and better pretty quickly,
09:14actually.
09:14So you said
09:15one or two decades
09:18before we can achieve that?
09:20Something of that type.
09:21I think the conceptual ideas
09:25that are necessary for this,
09:27some of the ones
09:27that we know are necessary,
09:29that I think are necessary,
09:30probably we're going to make
09:32significant progress
09:34over the next five years or so.
09:36That doesn't mean
09:37we're going to reach
09:38human intelligence
09:38after five years
09:39because there is,
09:40you know,
09:40we see the first obstacle
09:42that we have to climb.
09:43We don't know
09:44how many obstacles
09:45are behind it
09:45and it's been
09:47a constant mistake
09:50that AI researchers
09:51have made
09:52over the last five decades
09:53or six decades
09:54which is to think
09:55that the idea
09:56they just had
09:57was the solution
09:58and that that's it.
10:00You know,
10:00we discovered this idea
10:01whether it was
10:03universal search,
10:04whether it was
10:06logic-based reasoning,
10:07whether it was
10:08deep learning
10:08and backpropagation,
10:10whether it was now
10:10transformers
10:11and large language models.
10:12You know,
10:13people think that
10:13that's it,
10:14that's the solution.
10:15My opinion is that
10:16there is no single solution.
10:17It's going to be
10:18kind of slow progress
10:19and how long
10:20is it going to take?
10:21You know,
10:22at least 10 years
10:22but probably much more.
10:24Eric.
10:25So,
10:25if you ask AI researchers
10:28the median answer
10:29for when it happens
10:31is 20 years.
10:33So I'm going to say
10:3420 years from now.
10:35So I'm going to say
10:35it's going to occur
10:36in June of 2042
10:39in Paris
10:40in a heat wave.
10:42I want you to listen
10:44very carefully
10:46to Yann's language.
10:46He said,
10:46aspects of human intelligence.
10:49He didn't say
10:50a human being thinking.
10:53He was very careful in his wording,
10:55as he always is.
10:56And my own view
10:58is that we're not going
10:59to see one computer
11:00that is AGI
11:01that is super intelligent.
11:02This is my view
11:03that we're going to see
11:05different kinds
11:06of intelligence
11:07that we will work with
11:08and the problem
11:09with these intelligences
11:10is that we won't
11:12understand their limits.
11:13So if you have,
11:15if you and I
11:15are in a fight,
11:16an argument
11:17or a disagreement,
11:18I know you're human,
11:19I know you need to eat,
11:21I know you need to sleep,
11:22I know you need to breathe,
11:23I understand you have,
11:25you know,
11:25a life span
11:26and all of that.
11:27None of those concerns
11:29apply to the intelligence
11:31on the computer.
11:32We won't understand
11:34the basis
11:35of its evolution
11:36because it didn't evolve
11:37the way a human does.
11:38And so the issue
11:40of how we interact
11:41with these intelligent things
11:44which we think
11:46are human-like,
11:47they're not going
11:48to be just like humans.
11:49And that's the big issue.
11:51I'm actually going
11:52to reinforce that.
11:53I agree with
11:54what you just said.
11:55One thing that
11:56we need to realize
11:58is that there is
11:59no such thing
11:59as a general intelligence.
12:01Human intelligence
12:02is very specialized.
12:05Orangutan intelligence
12:06is very specialized.
12:07I don't know.
12:08Any animal you take
12:10has a different form
12:10of intelligence
12:11that is really
12:13constructed by evolution
12:14for the survival
12:15of their species.
12:16And it's true also
12:17of humans.
12:17We do not possess
12:18general intelligence.
12:20We're extremely specialized.
12:21This is going to be true
12:22of AI systems
12:24that we build.
12:25They will have
12:25specialized intelligence
12:27in domains
12:27that we think
12:28are useful to us.
12:29The difference
12:30with humans though
12:31is that we
12:32will have a way
12:33of tuning
12:34their objective function,
12:36the function
12:36they want to satisfy
12:37to keep themselves
12:39happy if you want.
12:40We can program
12:42that into them.
12:43We can't do this
12:43with humans.
12:44It's very difficult, right?
12:45Yes.
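As a toy illustration of what "tuning an objective function" could mean in practice, here is a hypothetical sketch in which the designer, not the system, sets the trade-offs the machine tries to satisfy. Every term, weight, and name below is invented for illustration; no real system is implied.

```python
# Hypothetical sketch: an agent's "objective function" whose trade-offs
# the designer can tune from the outside. All terms and weights are
# invented for illustration.
from dataclasses import dataclass

@dataclass
class ObjectiveWeights:
    task_reward: float = 1.0      # value placed on task success
    energy_cost: float = 0.1      # penalty per unit of resources used
    safety_penalty: float = 10.0  # penalty per unsafe action

def objective(outcome: dict, w: ObjectiveWeights) -> float:
    """Score an outcome; an agent would act to maximize this number."""
    return (w.task_reward * outcome["task_score"]
            - w.energy_cost * outcome["energy_used"]
            - w.safety_penalty * outcome["safety_violations"])

# Re-tuning the objective is a one-line change for the designer:
default = ObjectiveWeights()
cautious = ObjectiveWeights(safety_penalty=100.0)  # same agent, new priorities
outcome = {"task_score": 5.0, "energy_used": 2.0, "safety_violations": 1}
print(objective(outcome, default))   # 5*1.0 - 2*0.1 - 1*10.0  = -5.2
print(objective(outcome, cautious))  # 5*1.0 - 2*0.1 - 1*100.0 = -95.2
```

The design point is that the trade-offs are explicit and adjustable from the outside, which, as LeCun notes, is not something we can do with humans.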
12:45The problem with that
12:46is that this gets back
12:47to the singularity argument.
12:49If you look
12:50at the programming work
12:52that Microsoft,
12:53for example,
12:53has offered,
12:54Codex,
12:54the check-ins
12:56and completion
12:57of code
12:58are 30%
12:59or 40%
12:59done by the computer.
13:01At some point,
13:04computers will be able
13:05to write their own code.
13:07And at that point,
13:08I think it's a free-for-all
13:09because how will
13:11the objective functions
13:12be set?
13:13Will they be able
13:14to at least emulate
13:15objective functions?
13:16Will there be
13:17a meta-system
13:18that can set
13:18the objective functions
13:19that's AI-driven?
13:20I just don't think
13:21we know.
13:21So this is raising
13:23a lot of questions.
13:24And I would like
13:26to tackle the issue
13:27of how do we frame
13:29that progress?
13:31And this is relating
13:32to a debate
13:34or a discussion
13:35we had, Eric,
13:36on the tension
13:36between regulation
13:37and innovation
13:38in a worldwide competition,
13:40by the way.
13:41How do you see that?
13:43Because to some extent,
13:46recent democratic issues
13:48that we had here
13:49in Europe,
13:49and a little bit
13:51in the US too,
13:52were linked
13:54to technology
13:55and on some doubt
13:56that our population
13:57have on technology.
13:59And so,
13:59to give some reassurance
14:01to our population,
14:02we need to assure
14:04them that all
14:05that progress,
14:05all those innovations
14:06are framed
14:08or democratically framed
14:10and are still
14:10in line
14:11with our democratic values.
14:13But doing that,
14:15we are,
14:15to some extent,
14:16hampering innovation.
14:17So how do you see
14:18the dialectics,
14:19I would say,
14:20of innovation
14:20and regulation
14:21in the coming years?
14:23So,
14:24we have all worked hard
14:26to get European champions
14:28in technology
14:29and especially in AI.
14:32Brussels
14:33and the regulatory
14:34infrastructure
14:34has published
14:35regulations around AI
14:37which undoubtedly
14:39will slow it down.
14:41Undoubtedly
14:41will slow it down.
14:43What Europe
14:43should have done
14:44is what the U.S. did.
14:46The U.S. created
14:47a commission
14:48which I was the head of
14:49which was focused on
14:51what do we need
14:52to invest in AI
14:53to make sure
14:54we're the global leader
14:55in AI
14:55and what are
14:57the regulatory issues
14:58as well.
14:59You should not,
15:00and I'll say this,
15:02I'll say this
15:03as directly as I can,
15:04you should not
15:05just regulate,
15:06you should also innovate.
15:09And you should,
15:10when you have
15:10a program
15:11and a policy
15:12and a commission
15:13and so forth,
15:14you should say
15:15how do we become
15:15the global leaders
15:17and maintain
15:18our values
15:21and there are trade-offs.
15:24Yeah, I want to agree
15:25with Eric again.
15:27I mean, certainly
15:28there is
15:30potential impact
15:31of technology
15:33on society
15:34that needs to be regulated.
15:36If you want to
15:36deploy a new piece
15:38of technology,
15:38you have to make sure
15:39it's safe
15:40and everything, right?
15:41But that's a regulation
15:43of a particular application
15:45of a technology,
15:46not of the technology itself.
15:48It doesn't make sense
15:48in my opinion
15:49to regulate technology,
15:50and it makes even less sense
15:52to regulate research
15:52into that technology.
15:54It does make sense
15:55to make sure
15:56that whatever products
15:57are deployed
15:58are safe for people
16:01and have positive effects
16:02on society.
16:03Absolutely.
16:06So then there are
16:07things that people
16:08are afraid of.
16:09So there are things
16:10that people are afraid of
16:11that they should not
16:12be afraid of
16:13and things that people
16:14may not be that afraid of
16:15that they perhaps
16:16should worry about.
16:17So we should not
16:18be worried about
16:20a Terminator scenario
16:21that somehow...
16:22Actually, I have a proposal.
16:24Okay.
16:24We could ban
16:25killer robots.
16:27We could.
16:27Because we're not
16:28building them.
16:29We're not building them.
16:30It doesn't hurt
16:30the industry at all.
16:31That's right.
16:32Keep going.
16:33Right.
16:34So, you know,
16:35no Terminator...
16:36Well, there is a scenario
16:37by which, you know,
16:37we build a, you know,
16:40superhuman intelligent system,
16:42somehow embody it
16:43in a robot,
16:44and the robot, you know,
16:45has all the characteristics
16:46of humans
16:46and wants to dominate humans,
16:49wants to, you know,
16:51access resources, blah, blah, blah.
16:52By the way, I saw that movie.
16:53It was good.
16:53Yeah, yeah.
16:54It was fun.
16:54But, you know,
16:56it's not going to happen.
16:56And the reason it's not
16:57going to happen is,
16:58first, we don't have
16:59the technology.
17:00It's going to take
17:00at least 10, 20 years,
17:02something like that.
17:02But second,
17:04we are not going to
17:05hardwire into those systems
17:07the same desire
17:08that humans have
17:09of having kind of
17:11dominant relationships
17:12with other people,
17:13being able to influence them,
17:15having curiosity,
17:16and so not wanting
17:17to kind of, you know,
17:19be confined
17:19into a particular universe
17:21and things like that.
17:23Those are human qualities
17:24that would need
17:25to be hardwired
17:26into those systems
17:26for them to have them.
17:27And, of course,
17:28we won't do it.
17:29So we don't need
17:30to worry about,
17:31you know, Terminator,
17:32okay?
17:33We do need to worry
17:34about what is the effect
17:35on society
17:36of using intelligent systems
17:38that mediate
17:39our access to information.
17:41And let me tell you,
17:42it's going to get
17:43a lot bigger.
17:4410, 15 years from now,
17:46we will not be carrying
17:46smartphones in our pockets.
17:48We'll have augmented
17:49reality glasses
17:50that will overlay
17:51information over
17:52the real world.
17:53And because the amount
17:55of information generated
17:56by, you know,
17:57society and technology
17:58is increasing exponentially,
18:00we will need assistants
18:01to filter that information
18:03for us
18:04and only point us to
18:05things that are interesting.
18:07What are the effects
18:07of this?
18:08You know,
18:09we're seeing the effect
18:10of this already
18:10with social networks
18:11and things like this.
18:12And we're learning
18:13how to make that work.
18:16It's hard.
18:17It's very hard.
18:18It's going to get harder,
18:19but it's inevitable.
18:22Yeah.
18:24I'm forced to cut you short
18:26because there's
18:26a last topic
18:28that I would like
18:28to tackle with you,
18:30which is geopolitics of AI.
18:32I remember when we met, Eric,
18:35in DC a few years ago,
18:37you had just been appointed
18:38by the Congress, I think,
18:39to make an assessment
18:41of the, well,
18:42I would say,
18:43the US in the AI competition,
18:45especially regarding China.
18:47So what's your assessment
18:49of the different,
18:51I would say,
19:52regions of the world,
18:53US, China, Europe,
18:57regarding the fact
18:58that AI is so important
18:59in terms of economic
19:02but also technological domination?
19:04Well, if you start the day
19:06by believing
19:06that artificial intelligence
19:08in its broad form
19:09is the most important
19:11technology of all,
19:12then it's important
19:13that the West win.
19:15And the reason
19:16the West needs to win
19:17is that it reflects
19:18our values,
19:19the way we operate
19:20as democracies,
19:21personal freedom,
19:22all of those kinds of things.
19:24Our report said
19:26that the West
19:28is a little bit ahead of China
19:30but that China has a plan
19:32and our report further said
19:34that the West
19:35needs to have a plan too.
19:37And China has published a plan
19:39which basically says
19:41that they want to be dominant
19:42in artificial intelligence
19:43by 2030.
19:44Can you imagine
19:45if all of the platforms
19:47that you used
19:48here in France
19:49and in the West
19:50in general
19:50were built with Chinese rules
19:52about surveillance
19:53and information
19:54and freedom of speech
19:56and so forth.
19:57You wouldn't want that, right?
19:58You'd much rather
19:59have us control that.
20:00So our assessment was
20:02the Chinese are serious,
20:04they're putting
20:04a great deal of money in it,
20:06they're funding the companies
20:07to do AI
20:08and by the way
20:09also synthetic biology,
20:11quantum and new energy.
20:12They've shifted
20:13from the web apps
20:14and those kinds of companies
20:15really to hardcore deep tech
20:17and they're smart
20:19and they're serious.
20:20We tend to view China
20:22as behind.
20:23I don't.
20:27Yeah, of course Eric is right,
20:29he studied the question
20:30very thoroughly
20:31but there are sort of
20:32two things that can moderate
20:33the sort of
20:34possibly pessimistic view.
20:36I mean, I understand
20:37you're trying to sort of
21:38rile up interest
20:40from us.
20:41I completely agree
20:42with the fact that
20:43we need to be ahead
20:44to sort of promote
21:45our values of liberal democracy
20:47and free speech, etc.
20:50but the first element
20:52is that China
20:54is painting itself
20:55into a corner.
20:56Its ecosystem
20:58of information technology
21:00is isolated
21:01from the rest of the world.
21:02It blocks Facebook,
21:04it blocks Instagram,
21:05it blocks Wikipedia,
21:07it blocks the New York Times,
21:08it blocks Google.
21:09All the services
21:10that are used in the West
21:11that help us access information
21:13are blocked in China
21:15and there is a bit of blockage
21:17in the other direction
21:18as well
21:18which is a little
21:19sort of under the radar
21:21but it exists as well.
21:23And so
21:25we're getting into a situation
21:26a little similar to Japan
21:27in the 1990s
21:29where they were ahead
21:29of the West
21:30in cell phone technology
21:32but they had
21:32an isolated ecosystem
21:33and when smartphones
21:35appeared in the West
21:37their system just
21:38crashed,
21:39disappeared.
21:41In fact,
21:42there is another analogy
21:42with Japan
21:44which is that
21:45in the early 1980s
21:46Japan had a plan
21:47for fifth generation computers
21:49which were supposed to be
21:50the AI of the time.
21:52A new generation of computer
21:54that was going to be able
21:55to reason, etc.
21:56It was a big government program
21:57with help from industry.
21:58It was a complete failure.
22:01Not much came out of it
22:02in the end
22:03but at the time
22:04Japan was going to
22:05take over the world.
22:06What they had was
22:07a cultural shift
22:09of people
22:10which kind of
22:11made that trajectory
22:12change direction.
22:14It's possible that
22:15the same thing
22:16is going to happen in China
22:17but we can certainly
22:18not count on it.
22:19And you two
22:20did not mention Europe.
22:22So where is Europe
22:24and France perhaps
22:25in the competition of AI?
22:26You have,
22:27each of you,
22:28one minute.
22:29Well,
22:30when we talk about the West,
22:32I mean,
22:33the West includes Japan,
22:34and a number
22:36of other countries,
22:37perhaps Taiwan.
22:39So I think
22:42France and Europe
22:43in general
22:44have an excellent
22:46education system,
22:47so there's a lot of talent
22:48in France
22:50and in Europe
22:50in general.
22:51This is one of the reasons
22:53why
22:53seven years ago
22:55or six years ago
22:55I created
22:57FAIR in Paris,
22:58sort of a lab,
23:00a research lab,
23:00in Paris.
23:01Google has
23:01a lab in Paris as well;
23:03actually, my brother
23:04works there.
23:06And that's
23:07because of the talent
23:09that's available
23:09in Europe.
23:10I think
23:10where Europe
23:11fails a little bit
23:13is
23:15in giving the means
23:17for those talents
23:18to flourish,
23:20whether in
23:22academia,
23:23public research,
23:25or startup creation,
23:27although there's been
23:27a huge amount
23:28of progress
23:28in France
23:29in that dimension
23:30thanks to you
23:31in part.
23:32So,
23:34I think
23:34the trend
23:35is positive
23:36but I think
23:36there's a need
23:37for way more
23:38investment
23:38and a plan
23:39like Eric
23:40was suggesting.
23:41Eric,
23:42one minute.
23:43Cédric,
23:44you really did
23:44drive
23:45the policies
23:46here forward,
23:46so thank you
23:47for your service
23:49to your country.
23:49My own view
23:50on Europe
23:51and France
23:51is you don't
23:52have enough
23:53software people.
23:54You have universities
23:55that are incredible,
23:56but your universities
23:58are poorly funded
23:59compared to the
24:00American equivalents.
24:01You need
24:02to give more money
24:04to the universities,
24:04the private sector
24:06needs to work
24:07more closely
24:07with the universities,
24:08and you need
24:09to create more
24:10entrepreneurs,
24:11a French word.
24:13President Macron
24:14has made that
24:14a high priority,
24:15and I salute him
24:16and your leadership,
24:17and I think
24:18it's working,
24:18but it takes
24:20a long time.
24:20You need
24:21more software people,
24:22more AI,
24:23more companies,
24:24more funding
24:25to universities,
24:26and Europe
24:26will do
24:27incredibly well.
24:29Well,
24:30thank you
24:31to you two.
24:31This is
24:32a very big agenda.
24:33Please,
24:33a big round
24:35of applause
24:35for Eric,
24:36and thank you.