AI x Business Transformation
Category: 🤖 Technology

Transcription
00:00 Mark Scott: I'm not sure how to follow that introduction, to be honest. I guess I'll introduce myself: I'm Mark Scott, Politico's chief tech correspondent. If we have two Jedi Masters here, I'm not sure what that makes me; I'm not sure what other analogy from the Star Wars universe fits. Part of this panel is to take a step back. I'm not sure if you've had any time to look at the variety of startups out there today, but there is, legitimately, a lot of discussion about AI. We're going to talk specifically about what happens next, the broad vision, because it can be very easy to get caught up in the day-to-day: what's happening with your former colleagues at Google, what's happening at OpenAI. But that's not the end of this, and as we all know, technology moves pretty fast. So, Yoav, I'm going to turn to you first. If we come back five years from now, what does it look like? What does artificial intelligence look like in five years compared to what we have now?
00:51 Yoav Shoham: You're looking at me like you want me to speak.

00:54 Mark Scott: Yes.
00:56 Yoav Shoham: Five years is infinity, right? So rather than speak about where we end up, let me speak about the first derivatives I'm seeing now, the trajectories we're on. We see it already this year, and next year I think it will dominate the conversation. You can divide it into the technology part and the dynamics of the industry, but let me focus on the technology for now. I think language models aren't going away, but AI doesn't start and end in language models. We're going to see an evolution of language models. Some of it will be to make them more precise. We think of it as a matrix: you have domains, so domain-specific models, but you also have task-specific models. And often, by the way, tasks are a better way to focus a language model than domains, which is kind of boiling the ocean. But that's something you're going to see. Some of them will also be very small. Some people speak about small language models for the sake of smallness, so they can fit on your phone. You're definitely seeing this, but you'll see more and more language models that are tailored to use cases and that happen to be small. You're also going to see architectures beginning to shift. The transformer architecture, which dominates much of what's happened in recent years, has huge advantages, but it's very expensive both to train and to serve, so you're seeing evolutions there. I think the biggest change you'll see is going beyond language models. I don't know, Eric, how you see it; I'm curious. There are various terms I don't like. I don't like AGI, I don't like Gen AI, I don't like agents, even though I coined the term agent-oriented programming back in the day. But under the header of agents and AI systems, compound AI systems, you have the realization that, yes, sometimes you want to call an LLM, but sometimes, to make it very concrete: language models can do arithmetic, which is amazing, but they don't do it as well as an HP calculator from 1970, and they never will. God did not put neural nets on Earth to do arithmetic. We don't do it that way; we have a calculator. And there are various things that are better served elsewhere. So how do you take those tools, the custom code you write, the multiple calls to LLMs, and orchestrate them into an AI system? I think we're going to see that, technologically speaking, become more and more of the conversation, rather than the specific LLM. There is more to say, but I've said enough for now.
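To make the compound-AI-system idea concrete, here is a minimal Python sketch. The `call_llm` function is a hypothetical placeholder for whatever model API is in use, and the routing is deliberately simplistic; the point is only that arithmetic goes to exact code while the language model handles the language.

```python
import operator
import re

def call_llm(prompt: str) -> str:
    """Placeholder for a real language-model call (API or local model)."""
    raise NotImplementedError("wire this to your LLM of choice")

def exact_arithmetic(expression: str) -> float:
    """Deterministic arithmetic: the 'HP calculator' in the pipeline."""
    # Tiny evaluator for  <number> <op> <number>  expressions only.
    match = re.fullmatch(r"\s*(-?\d+(?:\.\d+)?)\s*([+\-*/])\s*(-?\d+(?:\.\d+)?)\s*", expression)
    if not match:
        raise ValueError(f"unsupported expression: {expression!r}")
    a, op, b = match.groups()
    ops = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}
    return ops[op](float(a), float(b))

def answer(question: str) -> str:
    """Compound pipeline: LLM for language, custom code for routing, a tool for math."""
    # 1. Ask the model to extract the raw calculation, if there is one.
    expression = call_llm(f"Extract the arithmetic expression from: {question}. Reply with the expression only.")
    # 2. Do the arithmetic with exact code rather than next-token prediction.
    value = exact_arithmetic(expression)
    # 3. Let the model phrase the final answer.
    return call_llm(f"The user asked: {question}. The computed value is {value}. Write a one-sentence answer.")
```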
03:38 Mark Scott: Yoav, maybe you're right that five years is too long a time period. But Eric, from what Yoav said, is that something you agree with, in terms of maybe going small on the LLMs and then looking beyond them at some point?
03:52 Eric Schmidt: The first thing I would say is that it's extraordinary to have this conference, under Maurice's leadership, to showcase how much innovation there is in new companies in France and in the world overall. I've participated in investing in a number of companies here, and they're just as good, just as quick, just as crazy as the U.S. companies. So what you're seeing is a global renaissance of creativity and innovation. And I will say further that I'm quite convinced the AI revolution is under-hyped, not over-hyped, which will be a shock to pretty much everyone I know who can do math, which is all of you. The reason is that you're seeing the development of a new form of intelligence that's not human that we will use. And a new form of intelligence that's non-human, that we will use and that we will connect, has enormous productivity gains across every element of society. We've been working on AI at Google for a very long time; the transformer, as you know, was invented at Google in 2017. All of these startups, all of these new ideas, and obviously OpenAI's leadership allowed us to see ChatGPT and its power at language-to-language tasks. So this is fantastic. There are lots of people who write things, and language-to-language is very powerful. But there are an awful lot more people who do things that involve math and science and engineering and calculations, which these models are not very good at. So the first thing that's going to happen, and this is all this year, is what we call infinite context windows. A context window is the query that you give the system. You can basically take the output, feed it in, take the output, feed it in, take the output, feed it in, and that's how you get a recipe: what do I do next to bake the cake? But those recipes can be arbitrarily long, thousands and thousands of steps. That's a very big deal for engineering, for problem solving, for doing good things, and for doing bad things.
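A rough sketch of the feed-the-output-back-in loop described here, assuming a generic `call_llm` placeholder; each step's result is appended to the running context so the next query can build on it.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a real language-model call."""
    raise NotImplementedError

def run_recipe(goal: str, max_steps: int = 1000) -> list[str]:
    """Chain-of-steps loop: take the output, feed it back in, repeat."""
    context = f"Goal: {goal}\n"
    steps: list[str] = []
    for i in range(max_steps):
        step = call_llm(context + "What is the single next step? Reply DONE if finished.")
        if step.strip() == "DONE":
            break
        steps.append(step)
        context += f"Step {i + 1}: {step}\n"  # the output becomes part of the next query
    return steps
```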
06:05 Eric Schmidt: The second thing that's going to happen is the agentic revolution, as it's called, where you can think of agents as large language models that have knowledge that's actually calculated. You used the math example, and you're exactly right. You're going to see, for example, math problems and math solutions handled by LLMs that have either been pre-trained on them or that call out to a math solver. What they do is, at the point at which they would make the probabilistic prediction of the next word, they instead call the solver to get the precise answer rather than a probabilistic distribution.
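One way to picture "call the solver instead of predicting the next word" is to post-process a model's draft and replace any arithmetic it attempted with an exactly evaluated result. A hedged sketch; `safe_eval` below accepts plain arithmetic only, and the pattern matching is deliberately crude.

```python
import ast
import operator
import re

_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv,
        ast.Pow: operator.pow, ast.USub: operator.neg}

def safe_eval(expr: str) -> float:
    """Evaluate pure arithmetic exactly; reject anything else."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError("not plain arithmetic")
    return walk(ast.parse(expr, mode="eval"))

def patch_arithmetic(draft: str) -> str:
    """Replace 'EXPR = GUESS' spans in a model draft with the exact value."""
    pattern = re.compile(r"([\d\.\s\+\-\*/\(\)]+)=\s*[\d\.]+")
    return pattern.sub(lambda m: f"{m.group(1).strip()} = {safe_eval(m.group(1))}", draft)
```

For example, `patch_arithmetic("17 * 23 = 400")` returns `"17 * 23 = 391"`, substituting the solver's answer for the model's guess.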
06:39 Eric Schmidt: The third thing you're going to see is text-to-action, where basically you say something and a programming solution appears. So, for example: design me a conference in Paris, call it VivaTech, figure out who should come, invite them all. Once they've all said yes, call the speakers, tell them there's this huge conference, and make sure the speakers show up. Right? And then the Python program, in this case, will organize and do all of that. So the ability to do arbitrary calculations, the ability to have programmers that do anything you want, and an infinite context window, all of which is occurring this year, sets up a revolution that is profound.
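The text-to-action shape can be sketched as: ask a model for a small program, then run it. This is only an illustration with a hypothetical `call_llm` helper; a production system would review and sandbox anything the model generates.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a real language-model call."""
    raise NotImplementedError

def text_to_action(instruction: str) -> dict:
    """Turn a natural-language instruction into code, then execute it.

    Illustration only: real systems sandbox and review generated code.
    """
    program = call_llm(
        "Write plain Python (no imports) that performs this task and stores "
        f"its result in a variable named `result`:\n{instruction}"
    )
    namespace: dict = {}
    exec(program, {"__builtins__": {}}, namespace)  # deliberately minimal builtins
    return namespace
```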
07:29 Mark Scott: So you're saying this is all going to happen this year. Are you saying VivaTech next year is going to be entirely done by AI? I'm using it as an example, but are we at a point where text-to-action is practical and something we'll be seeing this year?
07:43 Eric Schmidt: People are doing text-to-action in relatively simple ways. Take the spreadsheet: calculate things. One of my friends took a whole bunch of investment materials and said, read this and calculate the real return as opposed to the claimed return, and he was able to do that. They make mistakes, they're not completely perfect and so forth, but you can see that if this is year one, this is version one, and it's there. Version two, the next year, will be better; version three will be better. It's very clear to me this is going to happen, simply because of the amount of money and the number of people working on it. And you were talking about big companies and small companies. It's not possible to know right now where the gains will be. The big companies will do well because they have lots of training data and lots of monetization opportunities. The little companies have great technology, relatively high valuations, and no customers. At some level these systems need to be fed real data, which is why the big tech companies are all now AI first: the AI simply allows them to make more money. A simple example at Google: when Larry and Sergey founded Google, it was about all the world's information, right? So we had the ten blue links and the ads. Well, now you can do a summary and have the links. It's an obvious extension of Google; it's not a threat to Google, it makes Google better. The ad system gets better targeting, YouTube gets better targeting and summarization. You get the idea.
09:19 Mark Scott: In terms of impact and implementation, there are a lot of people in this room who are probably looking at using AI, or are already using it, in their businesses right now. How do you see the impact taking effect? In the prequel we had yesterday to discuss this panel, you said that a year and a half ago companies really weren't dealing with AI, and now everyone is AI first. What's the shift?
09:41 Yoav Shoham: That's right, and in part it has to do with the evolution and maturation of the technology. If I look back a year and a half ago, we had over 40,000 developers sign up for our platform, but maybe a handful of big companies, and really what I'd call very sporadic, cautious experimentation. Then, about a year and a half ago, somebody turned on the spigot, and we can't keep up with demand. There's not a CEO in the world who doesn't say we're an AI-first company, or we want to become one, and some of them come very thoughtfully, having mapped out hundreds of use cases. What you see now is mass experimentation. You don't yet see mass deployment; that's the next phase, and I actually think it's happening now. We see the signs: five-figure deals turning into six figures, and soon six into seven figures, initially more internal-facing than external-facing. I think several things are driving that. Number one, maybe the most important, is that companies are getting to understand better what this technology is and what it can do for them, and asking the real questions: what's the total cost of ownership, what's the ROI, so where should I really deploy it? They're getting more sophisticated. The flip side of that, and this relates to what you said, Eric, is that the technology is amazing, but if you're brilliant 95% of the time and total garbage 5% of the time, that doesn't fly; you've lost all confidence. So the industry really needs to get comfortable that they won't end up with egg on their face, and I think we're getting there, partly because of more training, but partly, frankly, because we're compensating for the weaknesses of LLMs with other tools. I think that together will take us to the next phase.
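One common way to compensate for LLM weaknesses with other tools is to validate model output against a strict schema before anything downstream acts on it, retrying or escalating when it fails. A minimal sketch; `call_llm`, the field names, and the retry policy are all hypothetical.

```python
import json

def call_llm(prompt: str) -> str:
    """Placeholder for a real language-model call."""
    raise NotImplementedError

REQUIRED_FIELDS = {"customer_id": str, "refund_amount": float}

def validated_extraction(text: str, max_retries: int = 3) -> dict:
    """Only accept output that parses as JSON and matches the expected schema."""
    prompt = f"Extract customer_id and refund_amount as JSON from:\n{text}"
    for _ in range(max_retries):
        raw = call_llm(prompt)
        try:
            data = json.loads(raw)
            if all(isinstance(data.get(k), t) for k, t in REQUIRED_FIELDS.items()):
                return data
        except json.JSONDecodeError:
            pass
        prompt += "\nYour previous reply was invalid. Return valid JSON only."
    raise RuntimeError("model output failed validation; route to a human")
```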
11:50 Mark Scott: Eric, as someone who has run a very, very large business, does that resonate with you, in terms of, again, your colleagues at Google having had issues with their recent AI?
11:58 Eric Schmidt: I don't think advertisers want the system to be learning while it's deciding which ad to show; imagine if it learned something wrong and as a result you created a huge problem. In the industry there are two fundamental debates. The first fundamental debate is: will everyone in this room use a small number of extremely powerful platform systems, the Microsofts and Googles and so forth, Gemini and so on, or will you take open-source models, which are much smaller, or work with smaller companies that are not open source, where the variability and the learning are under your control? My guess is that both will be true: for systems where you cannot accept any possible downside, you'll have a finely tuned, very special system. There's a whole bunch of companies, I'm thinking of Scale AI, Databricks, a whole bunch of them, that are really good at working with vendors to solve this problem, right? Hugging Face, originally a French startup. They're all organized around solving the problem that the enterprise has, in some form. The other big debate in the industry is the question of open source itself. I had always assumed that there would be a limit, a natural training limit, of about 100 billion parameters, and last year everyone sort of topped out. But now Meta comes along with a 400-billion-parameter model which they claim is as good as GPT-4. If that's true, then the fight between the open-source people, which is my own background, and the closed-source people, which is where all my friends have ended up, right, is another one of these battles for the ages. For those of you who are younger: I went through this in the PC industry and then in the internet industry, and this is how it feels. It feels confusing. You have enormous opportunity, enormous investment, thousands of companies. I can't even remember most of the companies in the PC age, and frankly most of the internet ones were forgettable, right? So we're in that period of companies which seem really important, and most of them we'll forget about in five years, because some will consolidate. But it doesn't necessarily follow that the consolidator is Google or Microsoft. It could be another company; it could be something you did, for example. We could buy another few companies from you.
14:25 Yoav Shoham: What a compliment to my age.
14:28 Mark Scott: In terms of that consolidation, though, given the money required to do this, even though the cost of compute is coming down, do you not see a focus on, say, five, six, seven of the big players who have access to the data to train those models to a sufficient level and to get the customer base?
14:45 Eric Schmidt: So, I've always believed, and you didn't want to talk about AGI, but I've always believed that there will eventually be computers in this space that are so smart and so valuable that they'll be put inside of military bases with people with guns around them, because the ability to solve these problems, especially in national security, is really important, and they will be national secrets. How many will there be? There will be a few in China, a few in the US, maybe one or two in Europe, maybe one in Israel; we don't know. It's not a large number. Maybe one in Italy. Sorry, not Italy: India. The question is what happens below that, and we're not seeing the end of the players below that; we're seeing an expansion of them. So it looks like the structure will be tiered, down from these extremely expensive models. And by the way, estimates of training costs are going up by about a factor of four every generation, and there's a new generation every 18 months. So if you assume the current models cost about 100 million dollars in energy, you can see very quickly why the companies that are leading in this space are raising 10 billion. Clearly they need the 10 billion; the question is how they will get the return on capital for that 10 billion, and that remains in many cases unsolved, and people are working on it.
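Taking the figures quoted here at face value (roughly $100 million per current training run, costs growing about 4x per generation, a generation every 18 months), the arithmetic compounds quickly. This is just that arithmetic, not a forecast of any particular lab's spending.

```python
cost = 100e6          # ~$100M for a current frontier training run (the figure quoted above)
growth = 4            # ~4x more expensive each generation
months_per_gen = 18

for gen in range(1, 5):
    cost *= growth
    years = gen * months_per_gen / 12
    print(f"after {years:.1f} years: ~${cost / 1e9:.1f}B per training run")
# -> ~$0.4B at 1.5 years, ~$1.6B at 3, ~$6.4B at 4.5, ~$25.6B at 6
```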
16:14 Eric Schmidt: So the most likely scenario is that the vast majority of the AI we use will not be at that top tier; it will just be embedded in everything. Right? You don't worry about the royalty for the operating system that's in your digital watch, because it's free, and it doesn't bother you that you have essentially a Unix operating system inside your watch, because they disabled all the dumb features and it just runs your clock. It's perfectly fine. So that diffusion downward, or the competitiveness upward, we don't know where it lands, but both will probably be true for a while.
16:51 Mark Scott: We've talked a lot about industry and we've talked a lot about the tech; I want to turn to people. The IMF put out a report that said between 40 and 60 percent of people's jobs globally will be affected in some way by AI: not fully replaced, but augmented, or whatever the right expression is. What do you think the impact on people is going to be, in terms of how they do their business, their jobs, their lives, given what Eric is describing?
17:22 Yoav Shoham: To be honest, I don't know that I have a unique insight into this, but the tired slogan is that AI won't replace surgeons, but surgeons who use AI will replace surgeons who don't, and I think that applies to every discipline. I think that's fundamentally true. It's a general-purpose technology that will be incorporated into many, many areas of life. It will do away with some jobs. Think about writing: there are no more copy editors; that's been automated away. We still have editors, not the software but the people who edit books and magazine articles and so on. I don't think that job is going to go away, but it will change, because editors will be able to use our software to improve the way they do the job. I've seen many different predictions that vary wildly, but there's no question it will impact the way we work, and some jobs will go away. The big question is what we need to do by way of training and education. By the way, Eric, I remember when we first met you spearheaded, I probably misremember a little bit, but it was a very important initiative on democratizing computer science education. We need to do that for AI now, and there are various efforts along those lines. I think we'll do fine. I have young kids; they'll be fine. There's probably an interim generation where retraining is needed, and we need to attend to that. But like I said, no major insights.
19:11 Eric Schmidt: So I actually don't agree with the view that somehow AI will destroy jobs. I think that on balance AI will create more jobs, and it's a very straightforward argument. We have a human problem: we're not making enough humans. All around the world the reproduction rate is declining extremely rapidly, everywhere, for many, many reasons. So we have a lack of humans to do the jobs that we want, and in most well-run economies there are jobs that no one is willing to take, either because they're too hard, too hard physically, too hard in whatever way, or people don't want to work that hard. If you look at the Asian societies where fertility is the lowest, they are running to robots as fast as they can, for a simple reason. It's called the dependency ratio: the young people have too much of a burden to pay for the health care of the old people. There are too many old people and not enough young people, and obviously you want the old people to stay around; you just need to solve this problem. So that's point one. Point two is: remember, economics is about making more money and allocating capital more efficiently. I'll give you an example. I was in Los Angeles a few weeks ago and I saw a demo of a system which takes, essentially, a young picture of an actor from years ago and puts that face onto a current actor who is impersonating that person. And I'm sitting there going, what does this mean? The answer turns out to be that studios spend an enormous amount of time building sets and doing makeup, and you can now do all of that digitally, quite well: you can do the sets digitally and the makeup digitally. So why do the studios do this? They're profit-seeking; they love it. They'll make more movies and they'll have more customers. Who wins? Well, there are more actors who get jobs. Who loses? The makeup person who was very, very good at making alien makeup: that job's going away. He or she will have to go do makeup in some other industry. There's no question he or she is harmed, right? But is the industry harmed? No. The number of jobs has shifted. And so I think, when people think about this, there's no question that there's job transition, and there's no question that there are retraining problems, and retraining usually doesn't work, so that's a real problem for these people. But the fact of the matter is that the economic efficiency of globalization and the lack of humans mean this is going to happen. We're going to use machines, whether or not they're robots, to make ourselves smarter. The simplest example: when Google came along, it made you smarter, because Google could remember 100 billion things that you had never heard of. It just made you smarter; it didn't replace you. The same thing will be true of AI.
22:23 Yoav Shoham: If I can add to that: I had actually meant to start by agreeing with the conclusion Eric started with, although basing it not on economic or reproductive arguments but just on historical precedent. I can't think of any technological advances or revolutions that didn't better the human condition and create more jobs, often jobs you couldn't anticipate. I don't see any reason why this should be any different; I don't think anyone does.
22:58 Eric Schmidt: I'd like to add one more thing. I'd like to take a pot shot at Europe, since we're in Europe. The European regulatory mindset assumes that you can regulate this stuff without inventing it. That doesn't work. One of the reasons that I'm here, and one of the reasons that France is so important, is that you're building it and also working with the regulators. It's the only path. That's why I'm so proud of what France is doing under President Macron and all of the people that I work with, and they're fighting Brussels; they're literally fighting these instincts to control. The way innovation works is that you have to invent it, and the inventors have to work with the regulators to understand its implications. If France doesn't succeed, then Europe will not be a major player in the development of this new form of intelligence, which would be a great tragedy for Europe.
23:49 Mark Scott: On that last point: I think in the past you've been somewhat critical of the regulatory approach Europe has taken, and you just said the same thing. What do you make of Europe's efforts on the AI Act, given that they are, again, potentially moving ahead without inventing the technology?
24:12 Eric Schmidt: I'm so glad you asked me that question. I've been working a lot on regulation in the United States and around the world. The US regulatory structure for AI uses a threshold of 10^26 FLOPs of training compute; at the moment, by law, you have to report that, but the government can't tell you what to do with it. That's a reasonable compromise for where we are today.
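For a sense of scale, a commonly used rule of thumb estimates training compute as roughly 6 FLOPs per parameter per training token; under that assumption, the 10^26 threshold sits above today's largest published runs. The parameter and token counts below are purely illustrative.

```python
def training_flops(params: float, tokens: float) -> float:
    """Rule-of-thumb estimate: ~6 FLOPs per parameter per training token."""
    return 6 * params * tokens

threshold = 1e26
# Illustrative only: a 400B-parameter model trained on 15T tokens
example = training_flops(400e9, 15e12)
print(f"{example:.2e} FLOPs, {example / threshold:.0%} of the 1e26 threshold")
# -> 3.60e+25 FLOPs, about 36% of the reporting threshold
```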
24:37 Eric Schmidt: The UK has set up a trust and safety group, and Korea is doing the same thing. There will be a big meeting here in France, the first one in Europe, in February of next year, which again is a very big deal, because it will bring everybody together, and this is how it works today. Too many people have seen all of the robot-kills-the-scientist movies. We've done a very detailed assessment of how dangerous these models are, and the answer is that you can see the danger coming, but they're not that dangerous now, except for disinformation. And the disinformation issue is largely now out of our control because of open source, so the fact of the matter is the disinformation problem is going to have to get regulated elsewhere. It's a real issue for democracies; it's a real issue for how the democratic process works. I'm not taking that away. And the real dangers of LLMs, which are largely in cyber and biological attacks, are not here today, but they are coming, and they're coming in three to five years.
25:39 Mark Scott: With the last six minutes we have left, I want to come back to the initial question and reframe it a little bit to look at a two-year horizon, because I think five is too much. What are you working on? What is the focus for you in terms of the next step, the next thing?
26:00 Yoav Shoham: Well, this is an area where we have to run to stay in place. It's true for everybody, and it's true for us. So we have the gas on the pedal on language models, on agent systems and AI systems, and on engaging with industry as industries learn how to deploy and adopt these systems, with an eye on real economic benefit, not just experimentation. That's interesting, but I actually wanted to say a couple of words on AI's promise and danger. I've always been much more of an optimist than a pessimist, in general, in life, and I think that's true here too. I believe that optimism doesn't always self-fulfill, but pessimism does. The doomsday scenarios that people sometimes paint, I just don't see them as based in fact. I'm not complacent, but I think they're often fueled by the interests of people who want to further careers, or who just want to be heard. The reason I bring this up is what Eric said. I think one of the biggest dangers for modern society is the erosion of the information layer that underlies it. I don't think it's only an AI issue; that's important to say. We saw it with the social networks. AI is a factor, and what I want to say about AI in this context is: yes, AI can be used for bad here, and you need to watch out, but AI can very much also be part of the solution. That's a very important area, I think, for us as a society, certainly for all Western democratic societies. It's something to attend to, both legislatively and technologically.
28:17 Mark Scott: Eric, for you, where do you see things going? What excites you? Do you share that optimism?
28:26 Eric Schmidt: Well, I'm obviously excited about everything here, because there's so much going on, both here in France and globally, and all of these fundamental questions that I'm posing will be resolved by technology or by the market over the next year or two. So that's a super interesting problem.
28:41 Mark Scott: Over the next year or two?
28:42 Eric Schmidt: Yes, because we'll know fundamentally where the money's going. I can't yet figure out exactly where the money is going to get made, and at some point we're going to figure that out. To me, and I've spent a lot of time funding this, the even bigger revolution is occurring in science. It's occurring in science because of a fundamental rule in science which everyone always forgets, which is that the graduate students do all the work. A new set of graduate students has shown up in chemistry, physics, biology, materials science, and so forth, and it's just the perfect moment for them, for their PhDs: take the equivalent of large language models and apply them to problems that have never been solved before in the basics of science, where they can calculate something but the calculation takes too long and the LLM provides an approximation, things like that. This is true in climate science; it's true in all of the really hard problems. So this revolution, which we have not talked about, and this is not necessarily a science-focused audience, is as profound as the business revolution that you're talking about. In physics they use models called diffusion models, which allow them to roughly figure out how something was constructed. In math there are people building basically non-LLM-based partial differential equation solvers, which solve in a new and more efficient way that changes the field. In biology you have very large data sets of how various components of biology work together; the system does reinforcement learning and then figures out what the path is. In chemistry, I'll give an example: I funded a robotic lab where the system read all of chemistry, and there's a lot of chemistry, and then it posed questions, and there's a robot that tests the questions, and then it feeds the results back into the system that studied chemistry to start with. So it's simulating what a graduate student does.
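The chemistry loop described here (read the literature, propose experiments, run them robotically, feed results back) has the shape of a simple closed loop. A sketch under the assumption of placeholder `propose_experiments` and `run_experiment` hooks, which would stand in for the model and the robotic lab respectively.

```python
def propose_experiments(knowledge: list[dict], n: int) -> list[dict]:
    """Placeholder: a model trained on the literature proposes candidate experiments."""
    raise NotImplementedError

def run_experiment(experiment: dict) -> dict:
    """Placeholder: the robotic lab executes the experiment and returns measurements."""
    raise NotImplementedError

def closed_loop(initial_knowledge: list[dict], rounds: int = 10, batch: int = 8) -> list[dict]:
    """Propose, test, and feed results back into the proposing system."""
    knowledge = list(initial_knowledge)
    for _ in range(rounds):
        for experiment in propose_experiments(knowledge, batch):
            result = run_experiment(experiment)
            knowledge.append({"experiment": experiment, "result": result})  # feed back
    return knowledge
```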
30:46 Eric Schmidt: So all of those will lead to an acceleration in the things that we really care about: more efficient materials, safer things, better health, and so forth. This renaissance is profound.
31:00 Mark Scott: I suppose it comes down to the point that AI is agnostic. To your point, it isn't good or bad in itself; it's about how the application is used to solve problems.
31:11 Yoav Shoham: I want to point to one area that intellectually excites me probably more than anything else. It's not necessarily related to our immediate business, or to most of the audience's business, but it has to do with understanding who we are, and I'll tell you what I mean. There's a question of whether current language models understand. They are clearly expert at putting out often really superb, intricate answers to complicated questions, but do they really understand the subject matter? Can machines, these kinds of machines, really be creative? Are they conscious? We're all aware of the somewhat ludicrous hoopla at Google about a year and a half ago, claiming that this or that system was already conscious. But can they be conscious? Can they have free will? These are all very deep questions. By the way, do you know what we're going to do when computers have free will? We're going to unplug them. Let's see who unplugs whom. But the reason I'm excited about these questions is less because I think computers will have these properties, I don't know the answer, and more because it causes us to ask what those things mean: what does it mean for me to have free will, what does it mean for me to be conscious, for me to be creative? And that, to me, just goes to the core of what we're about. So I'm excited about that.
32:55 Mark Scott: Wow. So on that very philosophical note, we're out of time. Eric, thank you so much for your time, and thank you all.

33:00 Thank you. Thank you.