Do You Trust This Computer? (2018) is a thought-provoking documentary that explores the growing influence of artificial intelligence and technology in everyday life. Through expert interviews and real-world examples, it examines how digital systems shape communication, decision-making, and the future of society. A compelling look at innovation, ethics, and progress in the modern age.
Category
Art & Design

Transcript
00:14:35[unintelligible in the source transcription]
00:14:38For most of those years,
00:14:41it was a matter of telling our computers
00:14:44how to play a game like chess
00:14:49by saying exactly what to do.
00:14:54[unintelligible]
00:15:02No-one at the time had thought
00:15:05that a machine could have the precision
00:15:07and the confidence and the speed
00:15:09to play Jeopardy well enough against the best humans.
00:15:12Let's play Jeopardy.
00:15:15Four-letter word for the iron fitting on the hoof of a horse.
00:15:19Watson.
00:15:20What is shoe?
00:15:21You are right, you get to pick.
00:15:22Literary character APB for 800.
00:15:25Answer, The Daily Double.
00:15:28Watson actually got its knowledge by reading Wikipedia
00:15:31and 200 million pages of natural language documents.
00:15:34You can't program every line of how the world works.
00:15:38The machine has to learn by reading.
00:15:40Now we come to Watson,
00:15:42"Who is Bram Stoker?" And the wager?
00:15:45Hello, 17,973, 41,413,
00:15:50and a two-day total of 75...
00:15:53Watson's trained on huge amounts of text,
00:15:56but it's not like it understands what it's saying.
00:15:59It doesn't know that water makes things wet
00:16:01by touching water
00:16:02and by seeing the way things behave in the world
00:16:04the way you and I do.
00:16:05A lot of language AI today
00:16:07is not building logical models of how the world works.
00:16:11Rather, it's looking at how the words appear
00:16:15in the context of other words.
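To make "words in the context of other words" concrete, here is a minimal co-occurrence sketch in Python. It is purely illustrative: the tiny corpus, window size, and cosine comparison are assumptions for the demo, not Watson's actual pipeline.

```python
# Minimal sketch of distributional word representation: represent each
# word by counts of its neighbors, then compare words by cosine
# similarity. Illustrative only; not Watson's actual method.
from collections import Counter, defaultdict
import math

corpus = ("water makes things wet . rain makes streets wet . "
          "fire makes things hot").split()

window = 2
contexts = defaultdict(Counter)
for i, w in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if i != j:
            contexts[w][corpus[j]] += 1

def cosine(a, b):
    num = sum(a[k] * b[k] for k in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

# "water" and "fire" look related here purely because they occur in
# similar contexts -- no understanding of wetness is involved.
print(cosine(contexts["water"], contexts["fire"]))
```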
00:16:17David Ferrucci developed IBM's Watson,
00:16:20and somebody asked him,
00:16:21does Watson think?
00:16:23And he said, does a submarine swim?
00:16:26And what he meant was,
00:16:28when they developed submarines,
00:16:29they borrowed basic principles of swimming from fish.
00:16:32But a submarine swims farther and faster than fish,
00:16:35can carry a huge payload, and out-swims the fish.
00:16:37Watson winning the game of Jeopardy will go down in the history of AI
00:16:44as a significant milestone.
00:16:46We tend to be amazed when the machine does so well.
00:16:49I'm even more amazed when the computer beats humans
00:16:52at things that humans are naturally good at.
00:16:55This is how we make progress.
00:16:58In the early days of the Google Brain project,
00:17:01I gave the team a very simple instruction,
00:17:03which was: build the biggest neural network possible,
00:17:06like a thousand computers.
00:17:08A neural net is something very close to a simulation
00:17:10of how the brain works.
00:17:12It's very probabilistic, but with contextual relevance.
00:17:16In your brain, you have long neurons
00:17:18that connect to thousands of other neurons,
00:17:20and you have these pathways that are formed and forged
00:17:22based on what the brain needs to do.
00:17:24When a baby tries something and it succeeds,
00:17:27there's a reward.
00:17:29And that pathway that created the success is strengthened.
00:17:32If it fails at something, the pathway is weakened,
00:17:35and so over time, the brain becomes honed
00:17:37to be good at the environment around it.
00:17:41Really, it's just getting machines to learn by themselves.
00:17:43It's called deep learning,
00:17:45and deep learning and neural networks mean roughly the same thing.
00:17:49Deep learning is a totally different approach,
00:17:53where the computer learns more like a toddler,
00:17:55by just getting a lot of data,
00:17:57and eventually figuring stuff out.
00:17:59The computer just gets smarter and smarter
00:18:03as it has more experiences.
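As a cartoon of that strengthen-on-success, weaken-on-failure idea, here is a toy sketch. The two actions, their hidden success rates, and the learning rate are invented for illustration; this is not how the brain, or any production system, is implemented.

```python
import random

# Toy "pathway strengthening": the agent repeatedly picks one of two
# actions; the weight of an action is increased when it succeeds and
# decreased when it fails -- a crude reward-driven update.
weights = {"left": 1.0, "right": 1.0}
success_prob = {"left": 0.3, "right": 0.8}  # hidden from the agent
lr = 0.1

for _ in range(1000):
    total = sum(weights.values())
    action = random.choices(list(weights),
                            [w / total for w in weights.values()])[0]
    succeeded = random.random() < success_prob[action]
    weights[action] = max(0.01, weights[action] + (lr if succeeded else -lr))

print(weights)  # "right" ends up as the stronger pathway
```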
00:18:05So imagine, if you will,
00:18:07a neural network, like a thousand computers,
00:18:09and it wakes up not knowing anything,
00:18:11and we made it watch YouTube for a week.
00:18:13And so after watching YouTube for a week, what will it learn?
00:18:27We had a hypothesis that they'll learn to detect commonly occurring objects in videos.
00:18:44And so we know that human faces appear a lot in videos.
00:18:48So we looked, and lo and behold,
00:18:49there was a neuron that had learned to detect human faces.
00:18:52Leave Britney alone!
00:18:55And what else appears in videos a lot?
00:18:58So we looked, and to a surprise,
00:19:01there was actually a neuron that had learned to detect cats.
00:19:05I still remember seeing that recognition.
00:19:17Wow, that's a cat. Okay, cool. Great.
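The original experiment used a network the size of about a thousand machines; the flavor of a unit that comes to respond to a recurring pattern in unlabeled data can be sketched with simple clustering. Everything here (the four-number "frames", the two patterns, the 2-means loop) is a made-up stand-in, not the Google Brain setup.

```python
import random

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def noisy(pattern):
    return [v + random.gauss(0, 0.1) for v in pattern]

# Unlabeled "frames" containing two recurring patterns plus noise.
FACE = [1.0, 0.0, 1.0, 0.0]  # stand-in for one recurring visual pattern
CAT = [0.0, 1.0, 0.0, 1.0]   # stand-in for another
frames = [noisy(random.choice([FACE, CAT])) for _ in range(200)]

# A few rounds of 2-means: the centers discover the patterns
# without ever being given labels.
centers = random.sample(frames, 2)
for _ in range(10):
    groups = [[], []]
    for f in frames:
        groups[0 if dist(f, centers[0]) < dist(f, centers[1]) else 1].append(f)
    centers = [[sum(col) / len(g) for col in zip(*g)] if g else c
               for g, c in zip(groups, centers)]

# Each center acts like a "detector" that emerged from unlabeled data.
print([[round(v, 2) for v in c] for c in centers])
```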
00:19:24It's all pretty innocuous when you're thinking about the future.
00:19:26It all seems kind of harmless and benign.
00:19:30But we're making cognitive architectures
00:19:32that will fly farther and faster than us
00:19:34and carry a bigger payload, and they won't be warm and fuzzy.
00:19:37I think that in three to five years,
00:19:39you will see a computer system
00:19:41that will be able to autonomously learn
00:19:44how to understand, how to build understanding.
00:19:48Not unlike the way the human mind works.
00:19:54Whatever that lunch was, it was certainly delicious.
00:19:57Simply some of Robbie's synthetics.
00:20:00He's your cook, too?
00:20:01Even manufactures the raw materials.
00:20:04Come round here, Robbie.
00:20:06I'll show you how this works.
00:20:10One introduces a sample of human food through this aperture.
00:20:15Down here, there's a small built-in chemical laboratory
00:20:17where he analyzes it.
00:20:19Later, he can reproduce identical molecules
00:20:21in any shape or quantity.
00:20:22That's a housewife's dream.
00:20:24Meet Baxter, a revolutionary new category of robots
00:20:29with common sense.
00:20:30Baxter...
00:20:31Baxter is a really good example
00:20:33of the kind of competition we face from machines.
00:20:35Baxter can do almost anything we can do with our hands.
00:20:40Baxter costs about what a minimum wage worker makes in a year.
00:20:45But Baxter won't be taking the place of one minimum wage worker.
00:20:50He'll be taking the place of three
00:20:51because they never get tired.
00:20:52They never take breaks.
00:20:54That's probably the first thing we're going to see.
00:20:57Displacement of jobs.
00:20:59They're going to be done quicker, faster, cheaper by machines.
00:21:02Our ability to even stay current is so insanely limited
00:21:07compared to the machines we built.
00:21:09For example,
00:21:11now we have this great movement of Uber and Lyft
00:21:13kind of making transportation cheaper
00:21:15and democratizing transportation, which is great.
00:21:17The next step is going to be
00:21:18that they're all going to be replaced by driverless cars.
00:21:20And then all the Uber and Lyft drivers
00:21:22will have to find something new to do.
00:21:25There are four million professional drivers in the United States.
00:21:29They'll be unemployed soon.
00:21:31Seven million people do data entry.
00:21:34Those people are going to be jobless.
00:21:36A job isn't just about money, right?
00:21:40On a biological level, it serves a purpose.
00:21:43It becomes a defining thing.
00:21:45When the jobs go away in any given civilization,
00:21:48it doesn't take long until that turns into violence.
00:21:50We face a giant divide between rich and poor,
00:22:02because that's what automation and AI will provoke,
00:22:05a greater divide between the haves and the have-nots.
00:22:07Right now, it's working its way into the middle class,
00:22:11into white-collar jobs.
00:22:12IBM's Watson does business analytics
00:22:15that we used to pay a business analyst $300 an hour to do.
00:22:19Today, you go into college to be a doctor,
00:22:23to be an accountant, to be a journalist.
00:22:25It's unclear that there's going to be jobs there for you.
00:22:28If someone's planning for a 40-year career in radiology,
00:22:32just reading images,
00:22:34I think that could be a challenge to the new graduates of today.
00:22:37But today, we're going to do a robotic case.
00:22:58The da Vinci robot is currently utilized by a variety of surgeons
00:23:04for its accuracy and its ability to avoid the inevitable fluctuations of the human hand.
00:23:24Anybody who watches this feels the amazingness of it.
00:23:31You look through the scope, and you're seeing the claw hand holding that woman's ovary.
00:23:37Humanity was resting right there in the hands of this robot.
00:23:43People say, it's the future, but it's not the future.
00:23:46It's the present.
00:23:51If you think about a surgical robot,
00:23:52there's often not a lot of intelligence in these things,
00:23:55but over time, as we put more and more intelligence into these systems,
00:23:58the surgical robots can actually learn from each robot surgery.
00:24:02They're tracking the movements.
00:24:04They're understanding what worked and what didn't work.
00:24:06And eventually, the robot for routine surgeries is going to be able to perform that entirely by itself,
00:24:12or with human supervision.
00:24:14Normally, I do about 150 cases of hysterectomies, let's say.
00:24:18And now, most of them are done robotically.
00:24:23I do maybe one open case a year.
00:24:26So do I feel uncomfortable?
00:24:28Of course I do feel uncomfortable,
00:24:30because I don't remember how to open patients anymore.
00:24:33It seems that we're feeding it and creating it,
00:24:37but in a way, we are a slave to the technology,
00:24:43because we can't go back.
00:24:51The machines are taking bigger and bigger bites out of our skill set
00:24:55at an ever-increasing speed.
00:24:57And so we've got to run faster and faster to keep ahead of the machines.
00:25:03How do I look?
00:25:05Good.
00:25:10Are you attracted to me?
00:25:11What?
00:25:12Are you attracted to me?
00:25:14You give me indications that you are.
00:25:18I do?
00:25:19Yes.
00:25:21This is the future we're headed into.
00:25:23We want to design our companions.
00:25:27We're going to like to see a human face on AI.
00:25:29Therefore, gaming our emotions will be depressingly easy.
00:25:34We're not that complicated.
00:25:36Simple stimulus response.
00:25:38I can make you like me basically by smiling at you a lot.
00:25:43AIs are going to be fantastic at manipulating us.
00:25:50So you've developed a technology that can sense what people are feeling.
00:26:00Right.
00:26:01We've developed technology that can read your facial expressions
00:26:03and map that to a number of emotional states.
00:26:07Fifteen years ago, I had just finished my undergraduate studies
00:26:10in computer science, and it struck me that I was spending
00:26:13a lot of time interacting with my laptops and my devices,
00:26:17yet these devices had absolutely no clue how I was feeling.
00:26:23I started thinking, what if this device could sense
00:26:26that I was stressed or I was having a bad day?
00:26:29What would that open up?
00:26:32Hi, first graders.
00:26:34How are you?
00:26:36Can I get a hug?
00:26:38We had kids interact with the technology.
00:26:40A lot of it is still in development, but it was just amazing.
00:26:44Who likes robots?
00:26:46Me!
00:26:47Who wants to have a robot in their house?
00:26:49What would you use a robot for, Jack?
00:26:51I would use it to ask my mom very hard math questions.
00:26:56Okay.
00:26:57What about you, Theo?
00:26:58I would use it for scaring people.
00:27:02Alright, so start by smiling.
00:27:05Nice.
00:27:06Brow furrow.
00:27:07Nice one.
00:27:09Eyebrow raised.
00:27:10This generation, technology is just surrounding them all the time.
00:27:15It's almost like they expect to have robots in their homes
00:27:18and they expect these robots to be socially intelligent.
00:27:22What makes robots smart?
00:27:25Put them in like a math or biology class.
00:27:30I think you would have to train it.
00:27:32Alright, let's walk over here.
00:27:34So if you smile and you raise your eyebrows, it's going to run over to you.
00:27:39It's coming over.
00:27:40It's coming over.
00:27:41What?
00:27:43But if you look angry, it's going to run away.
00:27:45It's going to break.
00:27:47Oh, that's good.
00:27:49We're training computers to read and recognize emotions.
00:27:53Ready, set, go.
00:27:55The response so far has been really amazing.
00:27:57People are integrating this into health apps, meditation apps, robots, cars.
00:28:04We're going to see how this unfolds.
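A toy version of the demo's rule ("smile plus raised eyebrows and it runs over; look angry and it runs away"), assuming hypothetical smile/brow scores in [0, 1] from some expression detector. The function name, thresholds, and actions are invented for illustration and are not the actual SDK used in the film.

```python
# Hypothetical expression scores in [0, 1], e.g. from a face-analysis
# model; names and thresholds here are invented for illustration.
def robot_action(smile: float, brow_raise: float, brow_furrow: float) -> str:
    if smile > 0.7 and brow_raise > 0.5:
        return "approach"   # smile + raised eyebrows: run over
    if brow_furrow > 0.6:
        return "retreat"    # angry face: run away
    return "idle"

print(robot_action(smile=0.9, brow_raise=0.8, brow_furrow=0.1))  # approach
print(robot_action(smile=0.1, brow_raise=0.2, brow_furrow=0.9))  # retreat
```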
00:28:07Go, Tessia!
00:28:09Robots can contain AI, but a robot is just a physical instantiation
00:28:14and the artificial intelligence is the brain.
00:28:17And so brains can exist purely in software-based systems.
00:28:20They don't need to have a physical form.
00:28:22Robots can exist without any artificial intelligence.
00:28:25We have a lot of dumb robots out there.
00:28:28But a dumb robot can be a smart robot overnight given the right software,
00:28:33given the right sensors.
00:28:35We can't help but impute motive into inanimate objects.
00:28:38We do it with machines.
00:28:40We'll treat them like children.
00:28:41We'll treat them like surrogates.
00:28:44And we'll pay the price.
00:28:47We'll pay the price.
00:29:09Okay, welcome to ATR.
00:29:10My purpose is to have a more human-like robot, one which has human-like intentions and desires.
00:29:24The name of the robot is Erica.
00:29:39Erica is the most advanced human-like robot in the world, I think.
00:29:44Erica can gaze at your face.
00:29:47I think it's better to your face.
00:29:52Hello.
00:29:54Robots can be a pretty good conversational partner, especially for the elderly, young children, and handicapped people.
00:30:03When we talk to a robot, we don't feel the social barriers, the social pressures.
00:30:08And finally, everybody accepts the android as just our friend or partner.
00:30:17We have implemented a simple desire.
00:30:20She wants to be well recognized, and she wants to take a rest.
00:30:29If a robot can have intentions and desires, the robot can understand other people's intentions and desires.
00:30:36How do you like animals?
00:30:39I like the dog.
00:30:41I really like the dog.
00:30:42It's really cute, isn't it?
00:30:45That is a tight relationship with people, and that means they like each other.
00:30:50That means, well, I'm not sure, maybe to love each other.
00:30:57We build artificial intelligence, and the very first thing we want to do is replicate us.
00:31:01I think the key point will come when all the major senses are replicated.
00:31:09Sight.
00:31:11Touch.
00:31:13Smell.
00:31:15When we replicate our senses, is that when it becomes alive?
00:31:18So many of our machines are being built to understand us.
00:31:33But what happens when an anthropomorphic creature discovers that they can adjust their loyalty, adjust their courage, adjust their avarice, adjust their cunning?
00:31:42The average person, they don't see killer robots going down the streets.
00:31:49They're like, what are you talking about?
00:31:51Man, we want to make sure that we don't have killer robots going down the street.
00:31:57Once they're going down the street, it is too late.
00:31:59The thing that worries me right now, that keeps me awake, is the development of autonomous weapons.
00:32:11Up to now, people have expressed unease about drones, which are remotely piloted aircraft.
00:32:34If you take a drone's camera, feed it into the AI system, it's a very easy step from here to fully autonomous weapons that choose their own targets and can release their own missiles.
00:32:50The expected lifespan of a human being in that kind of battle environment will be measured in seconds.
00:33:09At one point, drones were science fiction, and now they've become the normal thing in war.
00:33:28There's over 10,000 in the U.S. military inventory alone.
00:33:32But they're not just a U.S. phenomenon; there are more than 80 countries that operate them.
00:33:39It stands to reason that people making some of the most important and difficult decisions in the world are going to start to use and implement artificial intelligence.
00:33:47The Air Force just designed a $400 billion jet program to put pilots in the sky, and a $500 AI designed by a couple of graduate students is beating the best human pilots with a relatively simple algorithm.
00:34:05AI will have as big an impact on the military as the combustion engine had at the turn of the century.
00:34:18It will literally touch everything that the military does, from driverless convoys delivering logistical supplies, to unmanned drones delivering medical aid, to computational propaganda trying to win the hearts and minds of a population.
00:34:33And so it stands to reason that whoever has the best AI will probably achieve dominance on this planet.
00:34:45At some point in the early 21st century, all of mankind was united in celebration.
00:34:51We marveled at our own magnificence as we gave birth to AI.
00:34:57AI? You mean artificial intelligence?
00:35:00A singular consciousness that spawned an entire race of machines.
00:35:05We don't know who struck first, us or them, but we know that it was us that scorched the sky.
00:35:15There's a long history of science fiction, not just predicting the future, but shaping the future.
00:35:20Arthur Conan Doyle writing before World War I on the danger of how submarines might be used to carry out civilian blockades.
00:35:36At the time he's writing this fiction, the Royal Navy made fun of Arthur Conan Doyle for this absurd idea that submarines could be useful in war.
00:35:49One of the things we've seen in history is that our attitude towards technology, but also ethics, are very context dependent.
00:36:02For example, the submarine nations like Great Britain and even the United States found it horrifying to use the submarine.
00:36:08In fact, the German use of the submarine to carry out attacks was the reason why the United States joined World War I.
00:36:17But move the timeline forward.
00:36:21The United States of America was suddenly and deliberately attacked by the Empire of Japan.
00:36:27Five hours after Pearl Harbor, the order goes out to commit unrestricted submarine warfare against Japan.
00:36:40So Arthur Conan Doyle turned out to be right.
00:36:44That's the great old line about science fiction. It's a lie that tells the truth.
00:36:49Fellow executives, it gives me great pleasure to introduce you to the future of law enforcement.
00:36:54Ed 209.
00:37:04This isn't just a question of science fiction. This is about what's next, about what's happening right now.
00:37:14The role of intelligent systems is growing very rapidly in warfare.
00:37:19Everyone is pushing in the unmanned realm.
00:37:22Today, the Secretary of Defense is very, very clear: we will not create fully autonomous attacking vehicles.
00:37:33Not everyone is going to hold themselves to that same set of values.
00:37:37And when China and Russia start deploying autonomous vehicles that can attack and kill,
00:37:43what's the move that we're going to make?
00:37:45You can't say, well, we're going to use autonomous weapons for our military dominance, but no one else is going to use them.
00:37:57If you make these weapons, they're going to be used to attack human populations in large numbers.
00:38:02Autonomous weapons are, by their nature, weapons of mass destruction because it doesn't need a human being to guide it or carry it.
00:38:19You only need one person to, you know, write a little program.
00:38:25It just captures the complexity of this field.
00:38:30It is cool. It is important. It is amazing.
00:38:34It is also frightening. And it's all about trust.
00:38:38It's an open letter about artificial intelligence signed by some of the biggest names in science.
00:38:48What do they want? Ban the use of autonomous weapons.
00:38:51The authors stated, quote, autonomous weapons have been described as the third revolution in warfare.
00:38:57A thousand artificial intelligence specialists calling for a global ban on killer robots.
00:39:01This open letter basically says that we should redefine the goal of the field of artificial intelligence away from just creating pure undirected intelligence towards creating beneficial intelligence.
00:39:14The development of AI is not going to stop. It is going to continue and get better.
00:39:19If the international community isn't putting certain controls on this, people will develop things that can do anything.
00:39:25The letter says that we are years, not decades, away from these weapons being deployed.
00:39:29We had 6,000 signatories of that letter, including many of the major figures in the field.
00:39:37I'm getting a lot of visits from high-ranking officials who wish to emphasize that American military dominance is very important.
00:39:45And autonomous weapons may be part of the Defense Department's plan.
00:39:50That's very, very scary because the value system of military developers of technology is not the same as the value system of the human race.
00:39:57Out of the concerns about the possibility that this technology might be a threat to human existence, a number of the technologists have funded the Future of Life Institute to try to grapple with these problems.
00:40:11All of these guys are secretive. And so it's interesting to me to see them, you know, all together.
00:40:18Everything we have is a result of our intelligence. It's not the result of our big, scary teeth or our large claws or our enormous muscles.
00:40:29It's because we're actually relatively intelligent. And among my generation, we're all having what we call holy cow or holy something else moments because we see that the technology is accelerating faster than we expected.
00:40:43I remember sitting around the table there with some of the best and the smartest minds in the world.
00:40:50And what really struck me was maybe the human brain is not able to fully grasp the complexity of the world that we're confronted with.
00:40:58As it's currently constructed, the road that AI is following heads off a cliff and we need to change the direction that we're going so that we don't take the human race off the cliff.
00:41:12Google acquired DeepMind several years ago.
00:41:17DeepMind operates as a semi-independent subsidiary of Google.
00:41:22The thing that makes DeepMind unique is that DeepMind is absolutely focused on creating digital superintelligence, an AI that is vastly smarter than any human on Earth and ultimately smarter than all humans on Earth combined.
00:41:36This is from the DeepMind reinforcement learning system. It basically wakes up like a newborn baby, is shown the screen of an Atari video game, and then has to learn to play the video game.
00:41:50It knows nothing about objects, about motion, about time. It only knows that there's an image on the screen and there's a score.
00:42:00So if your baby woke up the day it was born and by late afternoon was playing 40 different Atari video games at a superhuman level, you would be terrified. You would say, my baby is possessed, send it back.
00:42:19The DeepMind system can win at any game. It can already beat all the original Atari games.
00:42:26It is superhuman. It plays the games at super speed in less than a minute.
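DeepMind's actual system was a deep Q-network over raw pixels; the underlying loop of learning from nothing but observations and a score can be sketched in tabular form on a toy problem. This is an illustration of Q-learning in general, not the DQN itself, and the track length and hyperparameters are arbitrary.

```python
import random

# Tabular Q-learning on a toy 1-D track: states 0..4, reward for
# reaching state 4. The agent starts knowing nothing but the current
# state and the score -- the Atari setup in spirit, minus the deep net.
N_STATES, ACTIONS = 5, (-1, +1)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1

for _ in range(500):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy action selection.
        a = (random.choice(ACTIONS) if random.random() < eps
             else max(ACTIONS, key=lambda x: Q[(s, x)]))
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Standard Q-learning update toward reward plus discounted value.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2

print(max(ACTIONS, key=lambda a: Q[(0, a)]))  # learned policy at start: +1
```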
00:42:37DeepMind then turned to another challenge: the game of Go, which people have generally argued is beyond the power of computers to play at the level of the best human Go players.
00:42:47First they challenged a European Go champion. Then they challenged a Korean Go champion.
00:42:56Please start the game.
00:42:58And they were able to win both times in kind of striking fashion.
00:43:01You were reading articles in the New York Times years ago talking about how Go would take a hundred years for us to solve.
00:43:10People said, well, you know, but that's still just a board game.
00:43:13Poker is an art. Poker involves reading people. Poker involves lying and bluffing. It's not an exact thing.
00:43:20That will never be, you know, a computer. You can't do that.
00:43:22They took the best poker players in the world, and it took seven days for the computer to start demolishing the humans.
00:43:30So it's the best poker player in the world. It's the best Go player in the world.
00:43:33And the pattern here is that AI might take a little while to wrap its tentacles around a new skill.
00:43:40But when it does, when it gets it, it is unstoppable.
00:43:44DeepMind's AI has administrator level access to Google's servers to optimize energy usage at the data centers.
00:44:01However, this could be an unintentional Trojan horse.
00:44:05DeepMind has to have complete control of the data centers.
00:44:07So with a little software update, that AI could take complete control of the whole Google system, which means they can do anything.
00:44:14They could look at all your data. They could do anything.
00:44:21We're rapidly headed towards digital superintelligence whose power exceeds any human's. I think it's very obvious.
00:44:27The problem is that we're not going to suddenly hit human level intelligence and say, okay, let's stop research.
00:44:33It's going to go beyond human level intelligence into what's called super intelligence, and that's anything smarter than us.
00:44:38AI at the superhuman level, if we succeed with that, will be by far the most powerful invention we've ever made, and the last invention we ever have to make.
00:44:48And if we create AI that's smarter than us, we have to be open to the possibility that we might actually lose control to them.
00:44:57Let's say you give it some objective, like curing cancer, and then you discover that the way it chooses to go about that is actually in conflict with a lot of other things you care about.
00:45:09AI doesn't have to be evil to destroy humanity. If AI has a goal and humanity just happens to be in the way, it will destroy humanity as a matter of course without even thinking about it. No hard feelings.
00:45:24It's just like if we're building a road, and an anthill happens to be in the way, we don't hate ants. We're just building a road, and so goodbye anthill.
00:45:37It's tempting to dismiss these concerns, because it's like something that might happen in a few decades or a hundred years. So why worry?
00:45:46But if you go back to September 11, 1933, Ernest Rutherford, who was the most well-known nuclear physicist of his time, said that the possibility of ever extracting useful amounts of energy from the transmutation of atoms, as he called it, was moonshine.
00:46:03The next morning, Leo Szilard, who was a much younger physicist, read this and got really annoyed, and figured out how to make a nuclear chain reaction just a few months later.
00:46:13We have spent more than two billion dollars on the greatest scientific gamble in history.
00:46:27So when people say that, oh, this is so far off in the future, we don't have to worry about it. There might only be three, four breakthroughs of that magnitude that will get us from here to super intelligent machines.
00:46:38If it's going to take 20 years to figure out how to keep AI beneficial, then we should start today, not at the last second when some dudes drinking Red Bull decide to flip the switch and test the thing.
00:46:53We have five years. I think digital super intelligence will happen in my lifetime. One hundred percent.
00:47:06When this happens, it will be surrounded by a bunch of people who are really just excited about the technology.
00:47:12They want to see it succeed, but they're not anticipating that it can get out of control.
00:47:25Oh, my God. I trust my computer so much. That's an amazing question.
00:47:30I don't trust my computer. If it's on, I turn it off. Even if it's off, I still think it's on. You really cannot trust the webcams. You don't know if someone might turn it on. You don't know.
00:47:41I don't trust my computer. Like on my phone, every time they ask whether to send your information to Apple. Every time. So I don't trust my phone.
00:47:53Okay. So part of it is, yes, I do trust it because it's really, it would be really hard to get through the day and the way our world is set up without computers.
00:48:11Trust is such a human experience.
00:48:13I have a patient coming in with an intracranial aneurysm.
00:48:30They want to look in my eyes and know that they can trust this person with their life.
00:48:35I'm not horribly concerned about anything.
00:48:39Good. Part of that is because I have confidence in you.
00:48:51This procedure we're doing today, 20 years ago, was essentially impossible.
00:48:57We just didn't have the materials and the technologies.
00:48:59Get on that corner.
00:49:15Could it be any more difficult? My God.
00:49:17So the coil is barely in there right now.
00:49:26It's just a feather holding it in.
00:49:29It's a nervous time.
00:49:36We're just in purgatory, intellectual, humanistic purgatory.
00:49:39An AI might know exactly what to do here.
00:49:51We got the coil into the aneurysm, but it wasn't in so well that I knew it would stay.
00:49:56So with a maybe 20% risk of a very bad situation, I elected to just bring her back.
00:50:03Because of my relationship with her and knowing the difficulties of coming in and having the procedure, I consider things.
00:50:11I should only consider the safest possible route to achieve success.
00:50:16But I had to stand there for 10 minutes agonizing about it.
00:50:18The computer feels nothing.
00:50:21The computer just does what it's supposed to do.
00:50:25Better and better.
00:50:30I want to be AI in this case.
00:50:35But can AI be compassionate?
00:50:43I mean, it's everybody's question about AI.
00:50:45We are the sole embodiment of humanity.
00:50:51And it's a stretch for us to accept that a machine can be compassionate and loving in that way.
00:51:05Part of me doesn't believe in magic.
00:51:07But part of me has faith that there is something beyond the sum of the parts.
00:51:11There is at least a oneness in our shared ancestry, our shared biology, our shared history.
00:51:20Some connection there beyond the machine.
00:51:23So then the other side of that is: does the computer know it's conscious, can it be conscious, does it care?
00:51:37Does it need to be conscious?
00:51:39Does it need to be aware?
00:51:53I do not think that a robot could ever be conscious.
00:51:56Unless they programmed it that way.
00:51:59Conscious? No.
00:52:00No.
00:52:01No.
00:52:02No.
00:52:04I mean, I think a robot could be programmed to be conscious.
00:52:06How are they programmed to do everything else?
00:52:10That's another big part of artificial intelligence is to make them conscious and make them feel.
00:52:15Back in 2005, we started trying to build machines with self-awareness.
00:52:33This robot, to begin with, didn't know what it was.
00:52:37All it knew was that it needed to do something, like walk.
00:52:40Through trial and error, it figured out how to walk using its imagination.
00:52:50And then it walked away.
00:52:54And then we did something very cruel.
00:52:57We chopped off a leg and watched what happened.
00:53:03At the beginning, it didn't quite know what had happened.
00:53:06But over a period of about a day, it began to limp.
00:53:13And then a year ago, we were training an AI system for a live demonstration.
00:53:19We wanted to show how we wave all these objects in front of the camera,
00:53:25and the AI can recognize the objects.
00:53:27And so we're preparing this demo, and we had on the side screen this ability to watch what certain neurons were responding to.
00:53:36And suddenly we noticed that one of the neurons was tracking faces.
00:53:41It was tracking our faces as we were moving around.
00:53:45Now, the spooky thing about this is that we never trained the system to recognize human faces.
00:53:52And yet, somehow, it learned to do that.
00:53:58Even though these robots are very simple, we can see there's something else going on there.
00:54:02It's not just programmed.
00:54:06So this is just the beginning.
00:54:10I often think about that beach at Kitty Hawk, the 1903 flight by Orville and Wilbur Wright.
00:54:17It was kind of a canvas plane with some wood and iron, and it gets off the ground for, what, a minute and 20 seconds in this windy day before touching back down again.
00:54:30And it was just around 65 summers or so after that moment that you have a 747 taking off from JFK.
00:54:43A major concern of someone on the airplane might be whether or not their salt-free diet meal is going to be coming to them.
00:54:58With a whole infrastructure with travel agents and tower control, and it's all casual, and it's all part of the world.
00:55:03Right now, as far as we've come with machines thinking to solve problems, we're at Kitty Hawk. We're in the wind. We have our tattered canvas planes up in the air.
00:55:16But what happens in 65 summers or so, we will have machines that are beyond human control. Should we worry about that?
00:55:30I'm not sure it's going to help.
00:55:34Nobody has any idea today what it means for a robot to be conscious. There is no such thing. There are a lot of smart people, and I have a great deal of respect for them.
00:55:53But the truth is, machines are natural psychopaths.
00:55:57Fear came back into the market and down 800, nearly 1,000 in a heartbeat.
00:56:02It is classic capitulation. There are some people who are proposing there was some kind of fat finger error.
00:56:07Take the flash crash of 2010. In a matter of minutes, a trillion dollars in value was lost in the stock market.
00:56:16The Dow dropped nearly 1,000 points in a half hour.
00:56:19So, what went wrong?
00:56:20What went wrong?
00:56:22By that point in time, more than 60% of all the trades that took place on the stock exchange were actually being initiated by computers.
00:56:33Panic selling on the way down, and all of a sudden it just stopped on a dime.
00:56:36It's all happening in real time, folks.
00:56:38The short story of what happened in the flash crash is that algorithms responded to algorithms, and it compounded upon itself over and over and over again in a matter of minutes.
00:56:46At one point, the market fell as if down a well.
00:56:51There is no regulatory body that can adapt quickly enough to prevent potentially disastrous consequences of AI operating in our financial system.
00:57:01They are so primed for manipulation.
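The "algorithms responding to algorithms" dynamic can be caricatured in a few lines: sellers reacting to the last price drop, which deepens the next one. A toy positive-feedback model, not a claim about the actual 2010 market microstructure.

```python
# Toy positive-feedback market: each tick, momentum algorithms sell
# in proportion to the last price drop, which deepens the next drop.
price, last_drop = 100.0, 0.5
history = [price]
for _ in range(20):
    sell_pressure = 2.0 * last_drop   # algorithms react to the drop...
    last_drop = 0.9 * sell_pressure   # ...which causes a bigger drop
    price = max(price - last_drop, 0.0)
    history.append(round(price, 2))

print(history)  # the decline compounds on itself, then hits the floor
```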
00:57:04Let's talk about the speed with which we are watching this market deteriorate.
00:57:08That's the type of AI run amok that scares people.
00:57:10When you give them a goal, they will relentlessly pursue that goal.
00:57:16How many computer programs are there like this?
00:57:20Nobody knows.
00:57:22One of the fascinating aspects about AI in general is that no one really understands how it works.
00:57:30Even the people who create AI don't really fully understand.
00:57:36Because it has millions of elements, it becomes completely impossible for a human being to understand what's going on.
00:57:45Microsoft had set up this artificial intelligence called Tay on Twitter, which was a chatbot.
00:57:59They started it out in the morning, and Tay was starting to tweet and learn from the stuff that was being sent to it by other Twitter users.
00:58:09Then some people, trolls, attacked it. Within 24 hours, the Microsoft bot became a terrible person.
00:58:18They had to literally pull Tay off the net because it had turned into a monster.
00:58:24A misanthropic, racist, horrible person you never want to meet.
00:58:30And nobody had foreseen this.
00:58:32The whole idea of AI is that we are not telling it exactly how to achieve a given outcome or a goal.
00:58:43AI develops on its own.
00:58:45We're worried about super-intelligent AI, the master chess player that will outmaneuver us.
00:58:52But AI won't have to actually be that smart to have massively disruptive effects on human civilization.
00:58:59We've seen over the last century, it doesn't necessarily take a genius to knock history off in a particular direction.
00:59:07And it won't take a genius AI to do the same thing.
00:59:10Bogus election news stories generated more engagement on Facebook than top real stories.
00:59:17Facebook really is the elephant in the room.
00:59:20With AI running the Facebook news feed, the task for the AI is keeping users engaged.
00:59:27But no one really understands exactly how this AI is achieving this goal.
00:59:34Facebook is building an elegant mirrored wall around us.
00:59:38A mirror that we can ask, who's the fairest of them all?
00:59:42And it will answer: you, you, time and again.
00:59:45It will slowly begin to warp our sense of reality.
00:59:48Warp our sense of politics, history, global events.
00:59:52Until determining what's true and what's not true is virtually impossible.
01:00:01The problem is that AI doesn't understand that.
01:00:04AI just had a mission: maximize user engagement. And it achieved that.
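Mechanically, "maximize user engagement" is a ranking objective; a minimal epsilon-greedy sketch shows how such a system drifts toward whatever gets clicked, with no notion of truth. The story names and click rates are invented, and this is not Facebook's actual system.

```python
import random

# Epsilon-greedy story selector optimizing clicks. True click rates are
# hidden; note that the objective never asks whether a story is accurate.
stories = {"sober_report": 0.05, "outrage_bait": 0.20}  # hidden click rates
clicks = {s: 0 for s in stories}
shows = {s: 0 for s in stories}

for _ in range(5000):
    if random.random() < 0.1:                 # explore occasionally
        s = random.choice(list(stories))
    else:                                     # otherwise show the "best" story
        s = max(stories,
                key=lambda k: clicks[k] / shows[k] if shows[k] else 0.0)
    shows[s] += 1
    clicks[s] += random.random() < stories[s]

print(shows)  # the feed learns to show mostly "outrage_bait"
```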
01:00:10Nearly two billion people spend nearly one hour on average a day basically interacting with AI that is shaping their experience.
01:00:20Even Facebook engineers, they don't like fake news.
01:00:25It's very bad business.
01:00:27They want to get rid of fake news.
01:00:28It's just very difficult to do, because how do you recognize news as fake if you cannot read all of that news personally?
01:00:34There's so much active misinformation and it's packaged very well and it looks the same when you see it on a Facebook page or you turn on your television.
01:00:46It's not terribly sophisticated, but it is terribly powerful.
01:00:51And what it means is that your view of the world, which 20 years ago was determined, if you watched the nightly news, by three different networks, by three anchors who endeavored to get it right.
01:01:03You might have had a little bias one way or the other, but largely speaking, we could all agree on an objective reality.
01:01:07Well, that objectivity is gone and Facebook has completely annihilated it.
01:01:17If most of your understanding of how the world works is derived from Facebook, facilitated by algorithmic software that tries to show you the news you want to see,
01:01:26that's a terribly dangerous thing. And the idea that we have not only set that in motion, but allowed bad-faith actors access to that information...
01:01:38This is a recipe for disaster.
01:01:43I think that there will definitely be lots of bad actors trying to manipulate the world with AI.
01:01:482016 was a perfect example of an election where there was lots of AI producing lots of fake news and distributing it for a purpose, for a result.
01:01:59Ladies and gentlemen, honorable colleagues, it's my privilege to speak to you today about the power of big data and psychographics in the electoral process.
01:02:10And specifically to talk about the work that we contributed to Senator Cruz's presidential primary campaign.
01:02:16Cambridge Analytica emerged quietly as a company that, according to its own hype, has the ability to use this tremendous amount of data in order to effect societal change.
01:02:30In 2016, they had three major clients. Ted Cruz was one of them.
01:02:35It's easy to forget that only 18 months ago, Senator Cruz was one of the less popular candidates seeking nomination.
01:02:42What was not possible maybe 10 or 15 years ago is that you can now send fake news to exactly the people that you want to send it to.
01:02:53And then you can actually see how he or she reacts on Facebook and then adjust that information according to the feedback that you got.
01:03:02And so you can start developing a kind of real-time management of a population.
01:03:06In this case, we've zoned in on a group we've called persuasion.
01:03:11These are people who are definitely going to vote to caucus, but they need moving from the center a little bit more towards the right in order to support Cruz.
01:03:20They need a persuasion message.
01:03:22Gun rights I've selected. That narrows the field slightly more.
01:03:25And now we know that we need a message on gun rights. It needs to be a persuasion message and it needs to be nuanced according to the certain personality that we're interested in.
01:03:36Through social media, there's an infinite amount of information that you can gather about a person.
01:03:43We have somewhere close to four or five thousand data points on every adult in the United States.
01:03:47It's about targeting the individual. It's like a weapon which can be used in the totally wrong direction.
01:03:56That's the problem with all of this data. It's almost as if we built the bullet before we built the gun.
01:04:02Ted Cruz employed our data, our behavioral insights.
01:04:06He started from a base of less than 5% and had a very slow and steady but firm rise to above 35%, making him obviously the second most threatening contender in the race.
01:04:21Now clearly the Cruz campaign is over now, but what I can tell you is that of the two candidates left in this election, one of them is using these technologies.
01:04:30I, Donald John Trump, do solemnly swear that I will faithfully execute the office of President of the United States.
01:04:48Elections are a marginal exercise. It doesn't take a very sophisticated AI in order to have a disproportionate impact.
01:04:57Before Trump, Brexit was another supposed client.
01:05:02Well, at 20 minutes to five, we can now say the decision taken in 1975 by this country to join the common market has been reversed by this referendum to leave the EU.
01:05:16Cambridge Analytica allegedly used AI to push through two of the most ground-shaking pieces of political change in the last 50 years.
01:05:28These are epochal events. And if we believe the hype, they are connected directly to a piece of software essentially created by a professor at Stanford.
01:05:37Back in 2013, I described that what they are doing is possible and warned against this happening in the future.
01:05:48At the time, Michal Kosinski was a young Polish researcher working at the Psychometrics Center. So what Michal had done was to gather the largest ever data set of how people behave on Facebook.
01:06:02Psychometrics is trying to measure psychological traits such as personality, intelligence, political views and so on. Now, traditionally, those traits were measured using tests and questionnaires.
01:06:16Personality tests, the most benign thing you could possibly think of, something that doesn't necessarily have a lot of utility, right?
01:06:23Our idea was that instead of tests and questionnaires, we could simply look at the digital footprints of behavior that we are all leaving behind to understand openness, conscientiousness, neuroticism.
01:06:36You can easily buy personal data such as where you live, what club memberships you've joined, which gym you go to. There are actually marketplaces for personal data.
01:06:47Turns out, we can discover an awful lot about what you're going to do based on a very, very tiny set of information.
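Kosinski's published approach regressed psychological traits on Facebook Likes; a bare-bones logistic-regression sketch over synthetic "like" indicators conveys the idea. The feature names, data, and correlation strength below are all made up for the demo; the real studies used millions of users.

```python
import math, random

# Logistic regression predicting a binary trait from "like" indicators.
# Data and feature names are synthetic, invented for illustration.
FEATURES = ["likes_philosophy", "likes_nascar", "likes_cats"]

def sample():
    x = [random.random() < 0.5 for _ in FEATURES]
    # Synthetic ground truth: the trait correlates with the first feature.
    y = x[0] if random.random() < 0.8 else not x[0]
    return [float(v) for v in x], float(y)

data = [sample() for _ in range(2000)]
w, b, lr = [0.0] * len(FEATURES), 0.0, 0.1

for _ in range(20):  # a few epochs of stochastic gradient descent
    for x, y in data:
        p = 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
        g = p - y
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

print([round(wi, 2) for wi in w])  # the predictive "like" dominates
```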
01:06:55We are training deep learning networks to infer intimate traits, people's political views, personality, intelligence, sexual orientation, just from an image of someone's face.
01:07:10Now, think about countries which are not so free and open-minded. If you can reveal people's religious views or political views or sexual orientation based on only profile pictures, this could be literally an issue of life and death.
01:07:37I think there's no going back.
01:07:42Do you know what the Turing test is?
01:07:45It's when a human interacts with a computer. And if the human doesn't know they're interacting with a computer, the test is passed.
01:07:55And over the next few days, you're going to be the human component in the Turing test.
01:08:00Holy shit.
01:08:01Yeah, that's right, Caleb. You got it.
01:08:04Because if that test is passed, you are dead center of the greatest scientific event in the history of man.
01:08:13If you've created a conscious machine, it's not the history of man.
01:08:17That's the history of gods.
01:08:19It's almost like technology is a god in and of itself.
01:08:33Like the weather. We can't impact it. We can't slow it down. We can't stop it.
01:08:38We feel powerless.
01:08:41If we think of God as an unlimited amount of intelligence, the closest we can get to that is by evolving our own intelligence by merging with the artificial intelligence we're creating.
01:08:55Today, our computers, phones, applications give us superhuman capability.
01:09:00So, as the old maxim says, if you can't beat them, join them.
01:09:07It's about a human-machine partnership.
01:09:10I mean, we already see how, you know, our phones, for example, act as memory prostheses, right?
01:09:15I don't have to remember your phone number anymore because it's on my phone.
01:09:19It's about machines augmenting our human abilities as opposed to, like, completely displacing them.
01:09:24If you look at all the objects that have made the leap from analog to digital over the last 20 years, it's a lot.
01:09:32We're the last analog object in a digital universe.
01:09:36And the problem with that, of course, is that the data input-output is very limited.
01:09:41It's this, it's these.
01:09:44Our eyes are pretty good. We're able to take in a lot of visual information.
01:09:49But our information output is very, very, very low.
01:09:52The reason this is important if we envision a scenario where AI is playing a more prominent role in societies,
01:09:59we want good ways to interact with this technology so that it ends up augmenting us.
01:10:09I think it's incredibly important that AI not be other.
01:10:12It must be us.
01:10:14And I could be wrong about what I'm saying.
01:10:17I'm certainly open to ideas if anybody can suggest a path that's better.
01:10:24But I think we're really going to have to either merge with AI or be left behind.
01:10:28It's hard to kind of think of unplugging a system that's distributed everywhere on the planet.
01:10:41That's distributed now across the solar system.
01:10:45You can't just, you know, shut that off.
01:10:47We've opened Pandora's box. We've unleashed forces that we can't control, we can't stop.
01:10:55We're in the midst of essentially creating a new life form on Earth.
01:10:58We don't know what happens next. We don't know what shape the intellect of a machine will be when that intellect is far beyond human capabilities.
01:11:14It's just not something that's possible to know.
01:11:16The least scary future I can think of is one where we have at least democratized AI.
01:11:30Because if one company or small group of people manages to develop godlike digital superintelligence, they can take over the world.
01:11:37At least when there's an evil dictator, that human is going to die.
01:11:44But for an AI, there would be no death. It would live forever.
01:11:49And then you'd have an immortal dictator from which we can never escape.
01:12:07Anything that you want to know.
01:12:09[unintelligible dialogue]