A conversation on AI

Category: Technology
Transcription
00:00So, our next guests include Yann LeCun, the Turing Award-winning French computer scientist,
00:08the founding father of convolutional neural networks.
00:13He is the Silver Professor at the Courant Institute of Mathematical Sciences at New York University
00:20and the Chief AI Scientist at Meta.
00:25Our second speaker will be Jacques Attali, the French political advisor, futurologist,
00:32and president of Attali Associates and Positive Planet.
00:38This session will be moderated by Nick Thompson from The Atlantic.
00:43Nick, I can't wait to hear what you will be sharing next.
01:10What an amazing introduction, what an amazing stage, going in through the back, through the line with these two gentlemen.
01:16It's like walking with, I don't know, Mick Jagger and Paul McCartney backstage.
01:19Very exciting.
01:20So, welcome Yann, welcome Jacques.
01:22I very much look forward to this conversation.
01:24It's an honor to be on stage with you.
01:26Let's get right into it.
01:27Jacques, I'm going to start with you.
01:29So, you have a concept in your last book, Le Grand Virage,
01:33where, as I understand it, the idea is there is a whole series of things that humanity has to do
01:38if we're going to make it as a species.
01:41Explain briefly what that is and then whether AI, particularly the new version of AI
01:46that we're all so interested in that this man helped invent,
01:49whether it's going to help us or hurt us.
01:52Well, it's well known that mankind is facing many dangers in the next three or four decades.
02:00One is well known as climate, which is a terrible danger with many causes and many consequences.
02:07The second is war, which may turn into a global war in the next decades if we don't
02:14act seriously.
02:15And the third is less easy to understand, but it's very linked to what we are talking about today,
02:21which is artificialization.
02:24Artificialization of nature, if we stop having ground for agriculture, it's a disaster.
02:30artificialization of ourselves,
02:37both digital artificialization and genetic artificialization.
02:41And worse, and we may talk about it, the link between digital and genetic artificialization.
02:46And the third danger, which is linked to this artificialization, is when the artifact, the robot,
02:54genetic or digital robot, will turn against us, which may come.
03:01Then, what I know is that we are on a road, we may see ahead of us with some lights.
03:10I try to see as far as possible, and I think that as far as possible, I see a lot
03:15of dangers.
03:17The answer to it, what I call the grand virage, is very simple in terms of definition.
03:23Would we be able to do it or not? I don't know.
03:26It's to distinguish between what I call the economy of life and the economy of death.
03:30The economy of death is everything which is linked to fossil fuels,
03:35to artificial sugar, and to a lot of drugs.
03:38And if we look at it, it's more than 50% of the GDP,
03:41and more than 50% of our own consumption.
03:46Economy of life, which is the only thing that has to survive,
03:49is health, education, democracy, culture, media,
03:54renewable energy,
03:56recycling,
03:57and whatever you can name,
03:59which is, in many countries, less than 30%.
04:02In developed countries, less than 50%.
04:04The key question is,
04:05AI will be used for economy of death
04:08or for economy of life?
04:10For the moment, AI is looked at,
04:13in my view, as neutral.
04:14They don't care.
04:15They are good.
04:15Please take it, do it, whatever you want.
04:17I think it's very dangerous,
04:19because if you use AI to develop more fossil fuels,
04:23it would be terrible.
04:24If you use AI to develop more terrible weapons,
04:27it would be terrible.
04:28If you use AI to create more robots
04:32that can be used for the other development that I mentioned,
04:36it would be terrible.
04:37On the contrary,
04:38AI can be amazing for health,
04:40amazing for education,
04:42amazing for culture,
04:43amazing for regenerative agriculture,
04:45amazing for good food.
04:46but the question is,
04:48who is going to say
04:49this is used in the good direction or not?
04:52In my view,
04:53it's not the question of being open or closed.
04:55It is going to be open,
04:56whatever it takes,
04:57even if some people try to close it.
04:59But for me, it's not the key question.
05:01The key question is the use.
05:03And in our personal views,
05:04personal life,
05:06in life of companies,
05:07are companies going to be economy of life companies
05:09or economy of death?
05:10Are economy of death companies
05:12going to transition to economy of life?
05:14That's the key question.
05:15AI can be an amazing tool.
05:17All right.
05:18Well, Yann,
05:19quite a lot to respond to.
05:20How do you feel about your invention
05:22being part of the engine of the economy of death?
05:26I wouldn't feel particularly good about this.
05:29Actually, very bad.
05:30I mean,
05:30I think the problems that Jacques is identifying
05:33are problems that we're facing,
05:36you know,
05:37in the world that we have to solve.
05:38And the question is whether technology,
05:40AI in particular,
05:41can help bring solutions to that
05:43or will,
05:45on the contrary,
05:47make the problems worse.
05:48So I think
05:51at a sort of general level,
05:53AI is intrinsically good
05:55because the effect of AI
05:57is to make people smarter.
05:59You can think of AI
06:00as an amplifier of human intelligence.
06:04And when people are smarter,
06:06better things happen.
06:07People are more productive,
06:09happier.
06:11The economy thrives.
06:12People make fewer mistakes
06:14because they can plan better
06:16by being smarter.
06:18So this is really
06:19what characterizes us
06:20as humans,
06:22intelligence.
06:23And amplifying our intelligence
06:24can only have good effects.
06:26Now,
06:26there's no question
06:27that bad actors
06:28can use it for bad things.
06:31And then it's a question
06:32of whether there are more good actors
06:33than bad actors
06:34and it's going to be my good AI
06:35against your bad AI
06:36or something like that.
06:38Wait,
06:38are computers intrinsically good?
06:40Are smartphones intrinsically good?
06:42Are keyboards intrinsically good?
06:43Or is just AI intrinsically good?
06:45Well,
06:46it's the effect,
06:47the effect of AI
06:49of amplifying human intelligence
06:51is intrinsically good.
06:52And to some extent,
06:53computers already do this,
06:54even without AI.
06:56When you use
07:00simulators
07:02or high-performance computers
07:04to predict climate
07:05and the weather,
07:06it makes us smarter, right?
07:07We use computational models
07:09to predict what's going to happen.
07:11This is really
07:11the essence of intelligence
07:12is the ability to predict
07:14and then to act
07:15on those predictions
07:16to produce good outcomes.
07:18So that's already
07:19what we use computers for.
07:21They're, at the moment,
07:22not particularly smart.
07:24although if we project ourselves
07:27back in the, I don't know,
07:2817th century,
07:29what computers can do today
07:31or what even they could do
07:3220 years ago,
07:33would feel for a mathematician
07:35of the 17th century
07:36like being smart.
07:37and there's been constant discussions
07:39over the last six or seven decades
07:42in the progress of computers
07:44where people assimilated computers
07:45with intelligence
07:47and their brain power
07:48and things like this.
07:49So I think it's already
07:51in the mindset of everyone
07:54that computers make us smarter.
07:56and AI just makes us
07:57even smarter.
08:00Jacques?
08:02Unfortunately,
08:03being smart is not enough
08:04because you can make smart people
08:09in charge of bad things
08:11or bad habits
08:12or bad behavior
08:13or bad actions.
08:15And we have always been
08:17seeing in the history of mankind
08:19smart people doing terrible things.
08:22therefore the question we have here
08:24is
08:26are people going to be
08:28individually and collectively
08:30aware of the dangers?
08:32Are companies going to be
08:34aware enough of the dangers?
08:36Mankind is facing a risk of suicide
08:39in the next three decades
08:41on the three dimensions I mentioned.
08:44Are we going to be,
08:45all of us,
08:46aware enough of it
08:47to use for the best
08:49those technologies
08:50or to be,
08:51to have a kind of
08:52benign neglect
08:54on the bad use of it?
08:56Is your company
08:57going to have a benign neglect?
08:59I provide tools,
09:01just do it
09:01and it's not my role
09:03to say what to do.
09:05What's the moral thing?
09:06Or
09:06is it possible
09:07to have
09:09an evolution,
09:11an international charter
09:13that will
09:14create a democratic constraint
09:16for that evolution?
09:18It's difficult to think
09:20about doing it
09:20but what I think
09:21is what is important
09:22that these crossroads
09:23where we are
09:24is to really develop
09:26the awareness
09:27of the dangers
09:29and the awareness
09:30that there is an answer.
09:32I'm amazed
09:33at the number
09:33of start-ups
09:35and companies
09:36that are doing
09:37amazing positive discoveries
09:39and research
09:39and projects
09:41on what I call
09:42economy of life.
09:43It's huge
09:44but it's a race
09:45between the two.
09:48Unfortunately,
09:49short-term profit
09:50today is more
09:51in economy of death
09:51and that's the key.
09:53There is more
09:53short-term profit
09:54in the oil companies
09:56than in the health companies
09:57or in edtech
09:58and that's
09:59what the
10:01public sector
10:02can change.
10:03But this race
10:04between
10:05good future
10:06and bad future
10:06is here.
10:09don't think
10:10that the best
10:11is always coming.
10:12If you look at mankind
10:14if I can sum up
10:15the history of mankind
10:16it was a little bit of good
10:18a little bit of bad.
10:19More good
10:20more bad
10:21more good
10:21more bad
10:22more good
10:22more bad
10:23more good
10:24more bad
10:24and now we are very high
10:26and we may come down
10:28and never come up.
10:32Yann, I want you to respond to this
10:33but one of the key
10:35elements of responding
10:36is understanding
10:36how powerful
10:37the AIs are today
10:39and you are probably
10:41the only person
10:42in the world
10:43who has been in exactly
10:44the same spot
10:45for the past year.
10:46You were sort of
10:47more excited in some ways
10:48about the capacity of language
10:49and now you are pouring
10:51a lot of cold water
10:51on the people who say
10:53that GPT-4 is sentient
10:55and GPT-5 will
10:56exceed all human knowledge.
10:59Explain what you think
11:00the limitations are
11:01particularly of language models
11:02and why you think
11:04as you've said
11:05that they'll never be able
11:06to match human intelligence
11:07even if trained
11:08until I think you said
11:09the heat death
11:09of the universe.
11:11Right.
11:13So yes,
11:14we made a lot of progress
11:15in AI
11:15certainly in perception
11:17over the last decade or so.
11:19Speech recognition,
11:20image recognition,
11:21this kind of thing.
11:23Understanding language
11:24that has made a lot of progress
11:25over the last
11:25four or five years
11:27and then in recent times
11:29the use of large language models
11:31that are trained
11:32in self-supervised manner
11:33just to predict the next word
11:34in a text
11:35trained on gigantic amounts
11:36of text
11:37and we see this emergent property
11:39that they seem to acquire
11:41enough knowledge
11:42and to some extent
11:46kind of superficial reasoning ability
11:47that they are useful
11:48as a writing aid,
11:50for example.
11:50But those systems
11:51are still very limited.
11:53They don't have any understanding
11:54of the underlying reality
11:56of the real world
11:57because they're purely trained
11:58on text,
11:59massive amounts of text
12:00but this may surprise
12:03many people in the audience
12:04but most of human knowledge
12:05has nothing to do
12:06with language
12:07and certainly all of animal knowledge
12:09has nothing to do
12:09with language
12:10and so that part
12:11of the human experience
12:13is not captured
12:13by any of those AI systems.
12:15They don't have
12:16any physical intuition,
12:17they don't know
12:18how the world works
12:18and that basically stops them
12:21from being able
12:21to plan actions
12:23in the world.
12:24So that's one limitation.
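The training recipe Yann describes, self-supervised learning that just predicts the next word on gigantic amounts of text, can be illustrated at toy scale. This is a deliberately tiny bigram counter, not how production LLMs work (they use neural networks over billions of tokens); the corpus and function names here are invented for the example:

```python
# Toy illustration of self-supervised next-word prediction: every word in
# the text serves as a free training label for the word that precedes it.
# Real LLMs replace these counts with a neural network, but the objective
# is the same: given context, predict the next token.
from collections import Counter, defaultdict

def train_bigram(text: str) -> dict:
    """Count, for each word, which words follow it in the corpus."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model: dict, word: str) -> str:
    """Return the most frequent continuation seen during training."""
    return model[word.lower()].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept on the mat and the cat ate"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" ("the cat" occurs most often)
print(predict_next(model, "on"))   # "the"
```

The sketch also shows the limitation being discussed: the model only knows co-occurrence statistics of words, nothing about the world those words refer to.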
12:26Okay, so making LLMs bigger
12:28and training them
12:29on more data
12:30is not going to get them
12:31to reach human intelligence
12:33even if in some narrow areas
12:37they already seem super intelligent,
12:39actually superhuman
12:40in some level of performance.
12:42We have superhuman performance
12:45for things like translation
12:46of hundreds of languages
12:48in any direction
12:49or image recognition
12:53for various applications.
12:55It's superhuman.
12:56But those are narrow domains
12:57and the latest techniques
13:00in LLMs
13:00do not change that thing.
13:02So we have AI systems now
13:05that can pass the bar exam
13:08which is mostly information retrieval
13:10but we still don't have
13:12level 5 autonomous driving,
13:13something,
13:14a task that any 17-year-old
13:15can learn in about 20 hours
13:17of practice.
13:18We still don't have
13:19a domestic robot
13:20that can, you know,
13:21clear up the dinner table
13:22and fill up the dishwasher,
13:23a task that any 10-year-old
13:25can learn in minutes.
13:27So what that tells you
13:28is that we're missing
13:29something really big
13:30in AI to kind of reach
13:32not just human intelligence,
13:33even dog intelligence.
13:35But an AI,
13:36a large language model
13:37can explain
13:38how to clear the dinner table
13:40and it can in fact probably
13:41do it better
13:41than your average 10-year-old
13:43or 20-year-old.
13:44It's true.
13:44It can't do that
13:45because dexterity is hard
13:46but that's a separate problem.
13:50Explain to me
13:50why language is limited, right?
13:53So I listen to you
13:54and I hear the words
13:55and I process them
13:56and I also see the gestures, right?
13:57And I have some sense
13:59from the room
13:59which provides context
14:00and information
14:01and I see how Jacques is responding.
14:03All of those things
14:04are outside of the text
14:05so I understand that.
14:06But all of those things
14:07can be put into text
14:09so why can't
14:10an infinitely powerful
14:12language processor
14:13approximate
14:14all of these different
14:15streams of information?
14:17So not everything
14:18can be put into text.
14:19If everything
14:20could be put into text
14:21then you could become
14:23let's say a surgeon
14:24or a doctor
14:24but just reading books
14:25and you can't.
14:26You could become a mechanic
14:28but just reading books
14:29and you can't.
14:30You could learn to build
14:31I don't know
14:32anything out of wood
14:33by just reading books
14:34and you can't.
14:35You have to practice.
14:36You have to have someone
14:36that teaches you
14:37and so there's a lot
14:39of skills
14:39that include planning
14:44Any task
14:45that you do consciously
14:47essentially
14:49requires you to have
14:50a mental model
14:51of the world
14:53which includes
14:54the kind of physical qualities
14:55of the world
14:56and we acquire those models
14:57we form them
14:59in our prefrontal cortex
15:01when we were babies.
15:01we learn how the world
15:02works when we were babies
15:03by watching the world go by
15:05and then interacting with it
15:06and learning new skills.
15:09LLMs cannot do this.
15:11They don't have any
15:12sort of high bandwidth
15:13perception
15:14of what goes on
15:14in the world
15:15at the moment.
15:16So of course
15:17what people are working on
15:18including me
15:20and people working with me
15:21at Meta
15:23is providing
15:25a new generation
15:26of AI system
15:27with this ability
15:27to learn
15:28from video
15:29for example
15:30to learn how the world
15:31works on video.
15:31If you take
15:33a five-month-old baby
15:35and you show them
15:38a sort of made-up scenario
15:39where there is
15:40an object that appears
15:41to float in the air
15:42they barely pay attention
15:43to it.
15:44But if you show this
15:45to a ten-month-old baby
15:46the baby will look at it
15:48and be very, very surprised
15:50and have big eyes
15:51looking at this object
15:52floating in the air
15:53and it's because
15:54in the meantime
15:54around the age of nine months
15:56human babies learn
15:57about gravity.
15:58They learn that objects
15:59are not supposed
15:59to float in the air.
16:00They're supposed to fall.
16:01That takes nine months.
16:03We have no idea
16:04how to reproduce
16:04this capacity
16:05with machines today
16:06Until we can do this,
16:10we're not going
16:10to have human-level
16:11intelligence.
16:12We're not going
16:12to have dog-level
16:13or cat-level intelligence.
16:14So that's the missing part.
16:16And it's much more difficult
16:17it turns out
16:18to train a machine
16:19to learn how the world
16:20works by watching video
16:21than it is to train
16:23a machine
16:23to be fluent
16:24by training it
16:27on text
16:27and training it
16:28to predict the next word.
16:29Jacques, do you think
16:30he's right?
16:31Do you think
16:31at a fundamental level
16:33there is this
16:33in a way I think
16:34it's the source
16:35of Jan's optimism
16:36right?
16:37Is that an understanding
16:37that this is not
16:38actually as powerful
16:39as many people think it is.
16:41I sense from your writing
16:42that you may disagree.
16:45Well
16:46it's still the open question
16:48of what is going
16:49to be the use of it.
16:52Prediction
16:53has always been
16:54a source of power.
16:55It's not new.
16:57Artificial intelligence
16:58is an element
16:59of a long story
17:00beginning 5,000 years
17:02before us.
17:03For instance
17:04meteorology.
17:07Who was the first power?
17:08The priests.
17:09Because they can predict
17:11or hope to predict
17:12the future of life
17:13or future of afterlife.
17:15Who was the second?
17:16Which was the leaders
17:17because they can predict
17:18the generals.
17:20Because they can predict
17:21what should be done
17:22to win the battles.
17:23and after that
17:25it was the market
17:26the merchants
17:27predict on the markets
17:28and then make money
17:29and predict
17:30the future of markets.
17:32And today
17:32we enter in another dimension
17:33where we can predict
17:35more than
17:36what the generals
17:37or the priests
17:38or the markets
17:40could predict.
17:42And artificialization
17:43is just a small
17:45dimension of a long story
17:47of a power of prediction
17:48on the other dimension
17:49of power.
17:52But we need to know
17:53what is the use of it.
17:57Priests predict,
17:58that could be good,
18:00that could be bad;
18:00generals predict,
18:01that could lead to disaster;
18:02markets predict,
18:03that could be disaster.
18:04There is always
18:05a question of ethics
18:06which is behind.
18:08Ethics.
18:10And if we only look
18:12at technology
18:13without that logic
18:13of ethics,
18:14we are dead;
18:14mankind
18:15is already dead.
18:18Where is the ethics?
18:20Where is the ethics
18:21of the use of AI?
18:23What is the ethics
18:24of what we do
18:25in the long run?
18:27For instance
18:27let me take an example.
18:30I think we have
18:33artificial intelligence
18:34and genetics
18:37are the two ways
18:38of artificialization.
18:39But there is a link
18:40which is appearing now.
18:42First, as we have known
18:43for many decades,
18:45life is a language.
18:49Code is a language.
18:50Therefore
18:50it's almost evident
18:53that we can create
18:55some kind of life
18:57by combination
18:58through an artificial
19:00intelligence
19:00of different
19:01elements of life.
19:03And
19:03as we know
19:05through the technology
19:07of biotechnology,
19:07biotech robots
19:09can create
19:11and are creating
19:12new molecules
19:13for therapy.
19:15All that is good.
19:16Excellent.
19:17Excellent use.
19:20They can also
19:21it's new
19:21through CRISPR
19:22and other technologies,
19:23modify
19:25the cells,
19:27even
19:29stem cells.
19:30What happens
19:31if
19:33someone is using
19:34AI
19:35to plug the AI
19:37into a biogenetic
19:38robot
19:40and create
19:41if he's
19:42an evil mind
19:43something
19:44which is an evil
19:45composition of hybrids
19:46of humans
19:47and animals
19:48or whatever
19:48or worse than that
19:51and it already exists
19:53and I will not
19:54go further than that
19:55that an AI,
19:59independently,
20:01happens to create
20:03new kinds of molecules
20:05which have not even
20:07been
20:07imagined
20:08by the creator
20:10or by the companies,
20:11and then we link the two,
20:13and you have
20:13an open box
20:15of disaster.
20:16I don't say
20:18we should not
20:18do it,
20:19because the future
20:20of health
20:20fight against cancer
20:22and many
20:23many problems
20:24have their hope
20:25of being solved
20:26here,
20:26in the link
20:27between AI
20:28and genetic robots
20:30but who is going
20:31to put
20:31the borders
20:32the ethical
20:33and the political
20:34borders of it
20:35that means
20:36that we need
20:37counterpowers
20:38counterpowers
20:39of competence
20:40people should be
20:41aware of it
20:42they need to have
20:43a lot of people
20:43of your kind
20:45doing scientific
20:45journalism
20:47politicians
20:47knowing a lot
20:48about science
20:50scholars
20:50being scientists
20:51also
20:52and not to let
20:54the engineers
20:55do alone
20:55what they want.
20:57We have
20:58quite a difference
20:59of opinion
21:00on the powers
21:01of AI
21:01we will not have
21:02cat level intelligence
21:03and we have
21:03what Jacques just said
21:04but I think we all
21:05agree
21:05on stage
21:06or you both agree
21:07that it can be
21:08a force for good
21:09a force for evil
21:09a force for the economy
21:10of life
21:11a force for the economy
21:11of death
21:12so Jan
21:14in your research
21:15and the research
21:15of the people out here
21:16who are working in AI
21:17and they want to help
21:18steer AI
21:20whether it's language
21:21models or other forms
21:22of AI
21:22generative AI in general
21:25what are the choices
21:26and decisions
21:26that they should make
21:27and how they do their work
21:28to maximize the odds
21:30that we steer it
21:31in the next five years
21:32in a direction
21:33that is more likely
21:34to lead to economy
21:35of life
21:36than economy of death
21:36so I think two things
21:38the first thing is
21:40that I should mention
21:41is that there is no question
21:42that at some point
21:43in the future
21:44perhaps not too far
21:45we will have machines
21:46that are more intelligent
21:47than humans
21:48in all domains
21:49where humans are intelligent
21:50okay
21:51we should not see this
21:52as a threat
21:52we should see this
21:53as something very beneficial
21:56every one of us
21:57will have an AI assistant
21:58which is their best
21:59digital friend
22:00if you want
22:00that will
22:02and it will be like
22:03a staff
22:03that assists you
22:05in your daily life
22:07that is smarter
22:08than yourself
22:09it's fine
22:09I only work
22:11with people
22:11who are smarter
22:12than myself
22:12that's the best way
22:13to maximize productivity
22:15so I think
22:16we shouldn't feel
22:17threatened by this
22:18now
22:20on the way
22:21to building machines
22:23of this type
22:23we need them
22:24to be controllable
22:27and basically
22:28subservient
22:28to humans
22:31and there is a set
22:33of designs
22:34of sort of
22:36a new type
22:36of machine
22:36different from
22:37what's called
22:38autoregressive LLMs
22:39which is what
22:40you know
22:41ChatGPT
22:41and other systems
22:43of this type
22:44are
22:45I call it
22:46objective driven AI
22:48so these are AI systems
22:51whose
22:54only
22:55goal in life
22:56if you want
22:57is to produce
22:58outputs
22:59that are set
23:00to satisfy
23:00a certain number
23:01of objectives
23:01so one objective
23:02could be
23:03I ask you a question
23:04is your answer
23:05answering my question
23:07but other objectives
23:08could be
23:09is this answer
23:10factual
23:12is this answer
23:13non-toxic
23:14for whoever
23:14audience
23:15it's designed for
23:18is it understandable
23:19by a 13 year old
23:20because I'm talking
23:21to a 13 year old
23:22so you could
23:23put a bunch
23:24of those
23:25objectives
23:26that are
23:27some of which
23:28might be safety
23:28guardrails
23:29that will guarantee
23:30that whatever
23:32action the system
23:33takes or whatever
23:33answer it produces
23:34satisfies a number
23:36of objectives
23:37that make it safe
23:38and useful
23:40and also
23:41one of those drives
23:42could be
23:43to be subservient
23:44to humans
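The objective-driven design Yann sketches, where the system only returns an output that satisfies a set of objectives, some of which act as safety guardrails, can be shown in miniature. Every objective function, name, and candidate string below is a hypothetical placeholder for illustration, not Meta's actual design:

```python
# Minimal sketch of objective-driven output selection: candidate outputs
# are checked against a list of objectives (relevance, non-toxicity,
# readability for the intended audience), and only a candidate that
# satisfies all of them is returned. All checks are crude stand-ins.
from typing import Callable, List, Optional

def answers_question(text: str) -> bool:
    return len(text.strip()) > 0              # stand-in for a relevance check

def is_non_toxic(text: str) -> bool:
    return "toxic" not in text.lower()        # stand-in for a toxicity filter

def readable_by_13_year_old(text: str) -> bool:
    return all(len(w) < 12 for w in text.split())  # crude readability proxy

GUARDRAILS: List[Callable[[str], bool]] = [
    answers_question,
    is_non_toxic,
    readable_by_13_year_old,
]

def select_output(candidates: List[str]) -> Optional[str]:
    """Return the first candidate satisfying every objective, else refuse."""
    for cand in candidates:
        if all(guard(cand) for guard in GUARDRAILS):
            return cand
    return None  # no candidate is safe: refuse rather than emit a bad answer

print(select_output(["some toxic reply", "a short clear answer"]))
```

The point of the design, as described in the discussion, is that the guardrails sit between the generator and the output, so whatever the system produces has already passed the constraints rather than being filtered after the fact.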
23:44and a fear
23:47that you know
23:48has been sort of
23:49popularized by science
23:49fiction
23:50is the fact
23:52that if robots
23:52are smarter than us
23:53they're going to want
23:54to take over the world
23:54now something
23:55that we need
23:56to realize
23:56is that
23:57there is no correlation
23:58between being smart
24:00and wanting
24:00to take over
24:02in fact
24:03we have many examples
24:04on the international
24:06political scene
24:07that it's not necessarily
24:08the smartest among us
24:09that want to be
24:10the leaders
24:12and so
24:13we have to distinguish
24:14those two things
24:15we can have
24:16very smart machines
24:17that basically
24:18are subservient to us
24:19and won't actually
24:20have any desire
24:21to take over
24:22Now
24:22the second
24:24question
24:25which I think
24:25is a very important one
24:26and we are
24:27on the cusp
24:27of kind of making
24:28that choice
24:29to some extent
24:29is whether
24:31research and development
24:32in AI
24:32is going to be
24:33under lock and key
24:36basically controlled
24:37by governments
24:39as well as
24:40a small number
24:41of powerful tech companies
24:43mostly on the west coast
24:44of the US
24:47because of the potential dangers
24:49you know
24:50people may think
24:50the potential dangers
24:51are too large
24:52to let the technology
24:53be accessible
24:54to everyone
24:55in terms of research
24:56and development
24:57or the other option
24:58would be for that technology
25:00to be completely open
25:01in fact
25:01to be open source
25:03and I'm totally
25:05on that side
25:05so is Meta
25:06by the way
25:07that AI technology
25:10should be open
25:10and there is
25:11a number of reasons
25:12for this
25:12some are economical
25:15and things like this
25:16but I think
25:16the most important reason
25:17is that
25:17if we imagine
25:18a future
25:1810 years from now
25:20where everybody's interaction
25:23with the digital world
25:24will be through
25:25the intermediary
25:27of an AI system
25:30those AI systems
25:31will basically
25:32constitute
25:32a repository
25:34of all human knowledge
25:36and they will have
25:38an enormous influence
25:39on things like
25:40political opinion
25:42or general knowledge
25:44right
25:44they could be educational
25:45they could be factual
25:47everything
25:48but they could also
25:49be used to influence
25:50people
25:50and so
25:51for governments
25:52around the world
25:53to trust this technology
25:54and for people
25:55to trust it
25:55it will have to be open
25:56it will have to be transparent
25:57it will have to be a platform
25:59a bit like Wikipedia
26:02where an AI system
26:03is sort of adjusted
26:04through crowdsourcing
26:06by a lot of people
26:08and the process of that
26:09is transparent
26:10and we'll trust it
26:11because it's transparent
26:12but I'm going to go to Jacques
26:14I want you to wrap it up
26:14we're running out of time
26:16I heard something that
26:17I could almost flip around
26:19your premise
26:20and get to the same conclusion
26:21you're saying
26:21everybody has to trust it
26:22it has to be controllable
26:23therefore it must be open
26:25lots of other people
26:26would say
26:26well everybody has to trust it
26:27therefore it should be limited
26:29to a very small number
26:30of companies
26:31or governments
26:32that control it
26:32it would not happen
26:35even if you try
26:36to stop it
26:37it would not happen
26:37we agree on that
26:38it's out
26:40it's out
26:40for good or bad
26:41it's out
26:42and it can be used
26:44as good
26:44if this transparency
26:46is used
26:48with a lot of
26:50counterpowers
26:51as I said before
26:52which are able
26:52to understand it
26:53which receive enough
26:55information
26:55because transparency
26:56must be total
26:57and not partial
26:58we should know
26:59whatever is
26:59in the algorithm
27:00we should know
27:01what kind of sources
27:02it's used
27:02what are the data,
27:03where are they coming from
27:06and we have also
27:07to keep in mind
27:08that AI can do
27:10a lot of things
27:11the only thing
27:12it cannot do
27:12and will never do
27:13is be able to predict
27:15what AI will become
27:17and we need an AI of AI
27:19and the AI of AI
27:21is ourselves
27:22it's democracy
27:23alright well
27:24on that wonderful note
27:25let's all go out
27:26and enjoy the economy of life
27:27thank you so much
27:28to these fabulous,
27:29fabulous people.
27:30thank you so much