Transcript
00:00I think that what we're doing now is, in a sense, creating our own successors.
00:05We have seen the first crude beginnings of artificial intelligence.
00:09It doesn't really exist yet at any level because our most complex computers are still morons,
00:16high-speed morons, but still morons.
00:18Nevertheless, some of them are capable of learning,
00:21and we will one day be able to design systems that can go on improving themselves
00:28so that at that stage, we have the possibility of machines which can outpace their creators
00:34and therefore become more intelligent than us.
00:40Artificial intelligence, a machine beyond the mind of man.
00:44It's science fiction, but is it also fact?
00:46Chess, for centuries a test of the human intellect.
01:00But these men are merely observers.
01:04In August 1977, 16 computer programs competed against each other
01:09in the Second World Computer Chess Championship.
01:11The best programs here can defeat 95% of all serious human players.
01:27The crowd tonight is absolutely phenomenal.
01:29I've never seen a crowd like this before at a computer chess tournament,
01:32and there's certainly about twice as many people here as ever go to watch the U.S. Open Championship
01:37or the United States Closed Championship, even when Fischer was playing.
01:40I think that probably most of you are having a good laugh tonight,
01:44and you're here out of curiosity, but in a few years' time,
01:47you'll be here because these programs are playing better than the Masters and Grandmasters
01:50in the U.S. Championship.
01:51The favorite, Northwestern University's Chess 4.6,
01:55is playing a program from Bell Labs.
01:57Now we have a move.
01:59Rook to King Rook 7.
02:01Linked by telephone to a computer in Minneapolis,
02:03Chess 4.6 quickly examines a vast number of possible moves.
02:07Okay.
02:09Eight and two.
02:11The style of play is distinctly not human.
02:16Okay, we have a move.
02:18Rook to Rook 3.
02:20Rather than the judgment, intuition, and insight characteristic of human champions,
02:24speed is the secret of success.
02:26Okay.
02:28Let's move.
02:30Eight and one.
02:31We have a move.
02:34King to Rook 1.
02:43Eight to Bishop 7.
02:44Mate.
02:49And Chess 4.6 has just given mate.
02:52Will computers ever think like people?
02:58It is a question that goes beyond chess.
03:00A central fact about computers
03:03is that computers are prodigious calculators.
03:07They can solve huge systems of differential equations,
03:10invert very large matrices,
03:12and other such mathematical things.
03:14In my view, there's an enormous difference between judgment and calculation.
03:18It is that gap, the difference between judgment and calculation that computers can't cross.
03:27From the beginning of science, there have always been people telling you that this or that boundary line is a sacred one
03:35and can never be crossed.
03:37That it would never be possible, for example, for scientists to simulate or synthesize the basis of life.
03:47Not very long ago, the first complete gene was synthesized in the laboratory.
03:54This vitalist attitude of uncrossable frontiers is behind the question of whether there are human mental abilities which could never be simulated.
04:07I personally see no more reason now to be discouraged by this vaguely expressed feeling than scientists have been in the past.
04:16It is a computer scientist, Joseph Weizenbaum, who is now the most outspoken critic of artificial intelligence.
04:22His own program, called Doctor, gave rise to his first doubts.
04:31The program parodies a psychiatric interview.
04:37The patient, here Weizenbaum, types in a complaint, and the program, in the role of psychiatrist, responds.
04:44The program became a plaything at MIT.
04:47People told it the most intimate personal details, as if they believed the program could understand.
05:06But in fact, the program doesn't understand anything.
05:09I would deny that there's any important sense, non-negligible sense, in which the program understands.
05:17It certainly creates the illusion of understanding. There's no question about that.
05:22But we have to understand that that illusion is an attribution that the person conversing with the program contributes to the conversation.
05:32It's not a function of the program itself.
05:36The program simply detects key phrases and makes routine transformations, such as turning the word I into you.
05:43When it doesn't recognize anything, it responds automatically, please go on.
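The mechanism just described, detecting key phrases, applying routine transformations such as turning "I" into "you", and falling back on "Please go on", can be sketched in a few lines. This is an illustrative toy, not Weizenbaum's actual Doctor script; the rules and reflections below are invented for the example.

```python
import re

# Minimal sketch of the Doctor-style mechanism described above:
# match a key phrase, reflect first-person words into second person,
# and fall back to "Please go on." when nothing matches.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"\bi am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.*)", re.I), "Tell me more about feeling {0}."),
    (re.compile(r"\bmy (.*)", re.I), "Why do you mention your {0}?"),
]

def reflect(fragment):
    # Routine transformation: "I" -> "you", "my" -> "your", and so on.
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(statement):
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."

print(respond("I am unhappy"))    # -> Why do you say you are unhappy?
print(respond("That is all"))     # -> Please go on.
```

The illusion of understanding comes entirely from the conversational frame; the program never represents the meaning of what was typed.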
05:47Weizenbaum was shocked when people began to confide in the program, acting as if it really were a psychiatrist.
06:06But it was when colleagues began to suggest that programs like Doctor could be used as substitutes for human psychiatrists
06:12in treating real patients that disillusionment set in.
06:16There can be no question that the response, the responses that I noticed with respect to the doctor program,
06:24particularly the idea that this could be the dawn of automatic psychiatry, that machines could perform psychiatry at all, and so on.
06:31That these awoke in me questions that applied more generally to artificial intelligence than merely to these kinds of conversational programs.
06:46Is the computer just a calculator, or is it capable of judgment?
06:51Is it only a number cruncher, or can it match the human mind?
06:54Like us, the computer has a memory, where its knowledge is stored.
06:59Its tiny circuits hold billions of pieces of information.
07:03The circuits understand only one thing, the presence or absence of electrical signals.
07:10That means all information must be put in on-off terms.
07:14So the memory or knowledge base is like a maze of lights, some on, some off.
07:18Like the dots and dashes of Morse code, the arrangement of the lights can represent numbers, letters, even words.
07:37This particular knowledge base contains information about television programs.
07:40It shows, for example, that Sesame Street is an educational series for children.
07:47Like an index, it defines relationships among the separate pieces of information.
07:51It is by means of a computer program that the knowledge can be used.
08:00Written as a series of simple steps, the program can retrieve facts and answer questions about the information in the knowledge base.
08:06It does this by issuing instructions to the part of the computer that performs calculations,
08:13to the circuits which move pieces of information around and compare them.
08:19These are simple, basic operations, and the program combines them in the most effective way.
08:26As an example, we ask it if there are any science documentaries on television.
08:31The question is translated into the machine's on-off code, then matched against the knowledge base.
08:40Step by step, the program directs the search for an answer.
08:44Step one, it scans the category list.
08:48It locates documentary, which reveals more than one documentary program as a possible solution.
08:52Step two, it looks for science, and a match is made.
09:01NOVA is the only program with arrows linking it to both documentary and science.
09:06Step three, it writes the answer.
09:09Even with a knowledge base a million times larger, in reality this process would have taken a fraction of a second.
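The three-step search just narrated, scan the categories, intersect the matches, write the answer, amounts to finding the entry linked to every requested category. A sketch under the narration's example, with one invented filler entry alongside the programs it mentions:

```python
# The knowledge base as a set of labeled links, as described above.
# "Evening News" is an invented filler entry for illustration.
knowledge_base = {
    "Sesame Street": {"educational", "children", "series"},
    "NOVA": {"documentary", "science", "series"},
    "Evening News": {"news", "documentary"},
}

def find_programs(*categories):
    wanted = set(categories)
    # Steps one and two: scan the links and keep entries matching
    # every requested category.
    return [name for name, links in knowledge_base.items()
            if wanted <= links]

# Step three: write the answer.
print(find_programs("documentary", "science"))  # -> ['NOVA']
```

With a knowledge base a million times larger the scan would be indexed rather than linear, but the matching idea is the same.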
09:15It is undeniably mechanical.
09:19But proponents of artificial intelligence argue that the fundamental processes of human brain cells are just as mechanical.
09:27As with a computer, it is their combination, the program that counts.
09:31In matters of memory and calculation, machines easily outpace the mind.
09:42But that's only a small part of human intelligence.
09:45The results of the first few experiments in artificial intelligence surprised everyone,
09:50because it turned out that relatively small programs were able to do things that everybody had thought would require a lot more intelligence.
09:58For example, some of the early programs were able to play a fairly good game of chess,
10:05or to solve pretty hard problems in college calculus.
10:09Well, everybody knows that those things require very advanced intelligence.
10:13But it was much harder to get the programs to answer simple questions in ordinary language,
10:19the kinds of things that any child can do to solve simple everyday common sense problems.
10:24We take common sense for granted, but there is knowledge and understanding behind even the simplest activity.
10:36Artificial intelligence is based on the faith that there are rules underlying every aspect of human life,
10:56rules which can be uncovered, turned into programs, and given to machines.
11:03But do such rules really exist?
11:17That's the interesting question. Why don't we just tell the computer everything that there is to say about our everyday form of life?
11:25It's because our everyday form of life is so pervasive, and so much something that we embody, not something that we know,
11:33that there wouldn't be any way of telling it.
11:35It's not a bunch of facts, any more than somebody who knows how to swim, knows the rules for swimming.
11:40We don't know the rules for being a human being, or the rules for how to move and stand up.
11:45We just embody those rules.
11:48Yeats said something very relevant to this.
11:51He said that we can embody the truth, but we cannot know it.
11:56We'd have to know it to be able to tell a computer what it was to be a human being.
12:00Well, the only difference between us and those critics is that they think it is impossible and you can't understand it,
12:07and we think that you possibly can.
12:10What makes up common sense intelligence?
12:13One important aspect is certainly language.
12:17Stanford University's Terry Winograd wrote a program to converse in everyday English.
12:23This is a program that I wrote in order to experiment with language understanding by computer.
12:27What I wanted is a world which the computer could talk about, so that while it was understanding sentences,
12:34it would actually be doing something with what was being said.
12:37You can see there's a set of objects and simple toy blocks and pyramids and a box,
12:43and a kind of a hand that can move them around.
12:46Let me give it a simple command.
12:48I can type, pick up a big red block, and the sentence appears, you can see,
12:55and it analyzes what it is that I'm asking, and then plans a sequence of commands to carry it out.
13:03The program has no intrinsic knowledge about the blocks world,
13:07so Terry Winograd has filled its knowledge base with facts about the objects it contains,
13:12their properties and their relationships to each other.
13:14It's by correctly carrying out his commands that the program proves it understands English.
13:22But even in this limited world, the process of understanding a command is no simple matter.
13:28When I type a command like this, it has to go through several different phases of analysis.
13:31First, it needs to look up the words in a dictionary it has and figure out the structure of the sentence,
13:38the kinds of things you learn in grammar school, the subject, the verb, and the object.
13:42Then it needs to analyze the meaning of that sentence in this context,
13:46which involves converting from the specific words to a set of concepts that it has about the blocks world,
13:53what the objects are, what you can do with them, what the colors are, and so on.
13:56So that it can then use that to construct a program for carrying out the action.
14:01And finally, there has to be a kind of a reasoning system which reasons about the actions
14:06in order to know what has to be done to actually carry them out.
14:10In that first one, for example, it couldn't just go pick up the big red block,
14:14it needed to clear it off first.
14:16And there's a whole set of programs which deal with what you need to do
14:19in order to manipulate objects in this kind of a simple world.
14:26In addition to knowing the rules of the blocks world, the program has to master the rules of language,
14:32which are not always so clear cut.
14:34The program really isn't focused on the moving of these blocks.
14:37It's basically concerned with the ways in which people use language to communicate.
14:41So if, for example, I type a command like grasp the pyramid,
14:46even though that makes sense in terms of the basic ideas of what's in the blocks world,
14:51in this context it doesn't because there are three different pyramids there on the screen.
14:56And I wouldn't use a phrase like the pyramid unless I had a particular one in mind.
15:00So the computer answers, I don't understand which pyramid you mean,
15:04since it has no way in this context of knowing which of those three I intended.
15:08I can give it a much more complicated command like find the block
15:12which is taller than the one you are holding, and put it into the box.
15:20In this case it needs to do a whole set of things, one of which is figure out what is meant by words like one and it.
15:28We use those in normal everyday language in a way which has to be interpreted by looking at the context in which they appear.
15:33In this case it types back out, by it I assume you mean a block which is taller than the one I am holding,
15:39which is only one of several possible things I could have meant and needed to use a set of rules of thumb
15:44about how people use words like that in order to decide in this case which one I intended.
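The definite-reference check Winograd describes, "the pyramid" is only acceptable when exactly one pyramid fits the description, is easy to sketch. The object names below are invented for illustration.

```python
# A sketch of the definite-reference rule described above: a phrase
# like "the pyramid" must pick out exactly one candidate, otherwise
# the program asks for clarification.
objects = [
    {"name": "p1", "type": "pyramid"},
    {"name": "p2", "type": "pyramid"},
    {"name": "p3", "type": "pyramid"},
    {"name": "box1", "type": "box"},
]

def resolve_definite(noun):
    matches = [o["name"] for o in objects if o["type"] == noun]
    if len(matches) == 1:
        return matches[0]
    return f"I don't understand which {noun} you mean."

print(resolve_definite("pyramid"))  # three candidates -> clarification
print(resolve_definite("box"))      # unique referent -> 'box1'
```

Resolving "it" and "one" in the longer command works the same way, except the candidate set is narrowed by rules of thumb about recent context rather than by type alone.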
15:49As skillful as it is in handling the imprecision and ambiguity of English,
15:53if you talk to Terry Winograd's program about anything but blocks, it would be incapable of responding.
15:58So language cannot be understood in a vacuum. Like us, a computer must know what it's talking about.
16:05By trying to program a computer to use language, we're forced into looking in a very clear way
16:10at what it is that people do when they use language in those same ways.
16:14We're forced to make very explicit things which seem so natural that people who don't look at language in this way
16:20don't even think they need explaining. And one of the things that we've learned from writing programs like this
16:24is the complexity of the way people understand language, the kinds of connections there are between using your knowledge about what's being talked about
16:35and your knowledge of language. The fact that you can't study language in a kind of separated, isolated way
16:41in which you look at grammar and dictionary meanings and content, but that really needs to be integrated into a much more coherent kind of theory.
16:49If language and knowledge of the world cannot be separated, how do children acquire language?
16:56I would like cheeseburger and a Coke.
17:02One theory is that even before they learn to talk, children accumulate a detailed knowledge of routine experiences called a script.
17:09Later they draw on these scripts as a basis for language and conversation.
17:20Observing the children at Yale University is Roger Schank, who contends that computers can learn to communicate in much the same way.
17:28He writes computer programs that can understand stories like this one.
17:33It's simple even for a child, but surprisingly difficult for a computer.
17:36Well, the main problem is that our computers don't have knowledge.
17:40They can do certain manipulations, but they don't know things.
17:43And if you want to tell a story to somebody and talk about something to somebody,
17:47if they don't have the same knowledge that you have, they can't understand what you're talking about.
17:51Essentially, it's like an expert talking to somebody very naive.
17:55He wouldn't be able to communicate very much.
17:57So what we have in this computer program is the problem of giving it knowledge.
18:00All right. So if we want to tell a story about what goes on in a restaurant, well, it better know about restaurants and what they're for and what goes on in them.
18:06So it can sort of fill in the blanks of what I didn't say.
18:09If people had to say every single piece of information that ever happened, that little three-line story about the restaurant would take hundreds and hundreds of lines.
18:16Because there are assumptions that we share, because as humans, having been in restaurants, we know what goes on in them.
18:21Schank believes that the programmer can compensate for the computer's lack of experience by spelling out exactly what goes on in a given situation.
18:30In other words, by providing the computer with a script, in this case for a restaurant.
18:35A script is in fact knowledge about the world.
18:38It is an attempt to codify the kind of knowledge that humans have about situations in a precise form such that we can give it to a machine.
18:45You can't just say restaurants and then tell it about restaurants in some very vague fashion.
18:51We have to give it an explicit list essentially of this happens in a restaurant and then this and then this and then this.
18:56So, for example, what we have here is a restaurant script and it has at the beginning some preconditions which says that the person eating has to have some money and has to be hungry in order for him to go into it.
19:04And then he has an entering scene. The entering scene says he goes to the restaurant, he enters the restaurant, he looks around, he sees he can go to a table, he goes to the table and he sits down.
19:15This is followed by a scene where the waiter gives him a menu and the customer reads the menu and this enables ordering where the customer tells the waiter what he wants.
19:24This enables the cook to prepare the meal and eventually the waiter will give the meal to the person who has ordered it, he will then eat it, he will then get a check and go give some money to the management and leave the restaurant.
19:37The program analyzes the restaurant story and fits it into the script. The stars show that the first sentence, John went to a restaurant, has been matched.
19:45The program then works on the second sentence of the story. He ordered lobster until it too is matched.
20:00But it's the final sentence that holds the key to this simple story.
20:04The last sentence is he paid the check and left. By that I mean they'll say, okay, he paid the check and he left and that's the end of my restaurant script.
20:11So everything in the middle must have happened. And so it goes back and traces between ordered, the last place where we saw the stars, the last thing we were explicitly told about, and the new part, which is the paying money to the management and leaving, which is where the stars are now.
20:27And it says, oh, well, what must have happened is the cook must have prepared this lobster and he must have given it to the waiter and the waiter must have given it to the person and the person must have decided to eat it and then the person must have eaten it.
20:35And so our program essentially is capable of making all those inferences and in this case has made all those inferences because it understood the important facts that surrounded the main event.
20:45But in the story, the main event was never really stated. The program was told only that John ordered lobster. But what did he eat?
20:52Schank asked the program. And the program says, uh, lobster. No, we didn't say that actually. And the story never specifically said anything about eating at all. But the program has no trouble with it any more than a person would have trouble with it because it has in fact understood the story.
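The matching-and-inference step Schank describes can be sketched directly: the script is an ordered list of scenes, stated sentences are matched to scenes, and everything between the first and last match is assumed to have happened. The scene list below is a simplified version of the one in the narration.

```python
# A simplified restaurant script, in order, as described above.
SCRIPT = ["enter", "sit", "read menu", "order", "cook prepares",
          "waiter serves", "eat", "pay check", "leave"]

def fill_in(stated_scenes):
    # Infer every scene between the first and last explicitly matched
    # one: if he ordered and later paid, everything in between must
    # have happened.
    first = SCRIPT.index(stated_scenes[0])
    last = SCRIPT.index(stated_scenes[-1])
    return SCRIPT[first:last + 1]

# "John went to a restaurant. He ordered lobster. He paid the check."
inferred = fill_in(["enter", "order", "pay check"])
print("eat" in inferred)  # -> True: eating was never stated, but is inferred
```

This is why the program can answer "lobster" to "what did he eat?" even though the story never mentions eating: the eat scene was filled in, and its object is inherited from the order.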
21:07I think we have to be clear about the fact that language understanding involves very, very much more than the mere comprehension of a string of words. Silences, for example, are very, very important.
21:22If people are understanding what is meant by emotional kinds of statements or pauses or metaphors or whatever these AI critics think is so difficult, I like to understand how they explain that people can understand them. People have to have some method of doing it.
21:39Even the most ordinary linguistic intercourse among people involves shared experiences. And the fundamental difficulty with computer understanding of language is that there are human experiences, uniquely human experiences, which the computer, by its very nature, in virtue of its structure, in virtue of the difference between its structure and the biological structure and needs and so on of human beings, can simply not share.
22:07I think that communication involves sharing.
22:10Well, but shared experience, you could make the same argument that a computer couldn't understand anything about a restaurant since it had never been in a restaurant.
22:16I think that whatever the shared experience is, you have some rule for accessing it. If you have a rule that says, well, I remember a feeling that when I was in love, then I felt this way, well, I can write that, and then I did this, I can write that same rule into a computer program.
22:29Whenever you see something about love, you can assume that the person talking might feel this way and might do this.
22:34It's just a question of understanding what people think they know and think they are understanding in a situation.
22:40Forward. Forward.
22:43If, as Schank believes, man and machine will have something to talk about, how will they actually converse?
22:49Michael Condon is paralyzed from the neck down.
22:55Fast.
22:56At NASA's Jet Propulsion Laboratory, he is testing a wheelchair that can understand human speech.
23:02Train.
23:03Out.
23:04Aided by Larry Twos, he first trains the device to recognize his voice.
23:09Raise.
23:10Lower.
23:11Turn.
23:12Turn.
23:13A mini-computer matches his particular speech patterns to a set of 35 commands.
23:18Tilt.
23:19Right turn.
23:21Left turn.
23:22End.
23:23Okay, it looks like it's trained all right now.
23:27Why don't we see if we can work with the cup?
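The train-then-match procedure described here, store a reference pattern for each spoken command, then classify a new utterance by the closest stored template, can be sketched as a nearest-neighbor matcher. The feature numbers below are made up, standing in for acoustic measurements; the real system matched against its set of 35 commands.

```python
# A sketch of speaker-dependent command recognition as described above:
# training stores one reference feature vector per command, and
# recognition picks the nearest stored template (squared Euclidean
# distance). Feature values are invented for illustration.
templates = {}

def train(command, features):
    templates[command] = features

def recognize(features):
    def distance(cmd):
        return sum((a - b) ** 2 for a, b in zip(templates[cmd], features))
    return min(templates, key=distance)

train("forward", [0.9, 0.1, 0.2])
train("halt",    [0.1, 0.8, 0.3])
train("raise",   [0.2, 0.2, 0.9])

print(recognize([0.85, 0.15, 0.25]))  # -> 'forward'
```

Training per speaker is what makes the matching tractable: the templates capture Michael Condon's particular way of saying each command, so small variations still land nearest the right one.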
23:29Close.
23:31Mounted on the wheelchair is an arm which can manipulate objects within a range of several feet.
23:37Take.
23:38Clamp.
23:39Clamp.
23:40Raise.
23:41Clamp.
23:42Up.
23:43Left.
23:44Left.
23:45Halt.
23:46Better go slow now.
23:47Flex.
23:48Flex.
23:49Flex.
23:50Up.
23:51Halt.
23:52Halt.
23:53Halt.
23:54Raise.
23:55Raise.
23:56Halt.
23:57Right.
23:58Left.
23:59Left.
24:00Up.
24:01Halt.
24:02Mask.
24:03Halt.
24:05Halt.
24:06Raise.
24:07Raise.
24:08Halt.
24:09Right.
24:10Left.
24:11Fort.
24:12Left.
24:13Halt.
24:14Halt.
24:15Right.
24:16Left.
24:19This is one of the first practical applications
24:22in which an intelligent computer can imitate
24:25and even replace human functions.
24:28Back.
24:35In 1986, this vehicle is to make its way
24:38through the barren landscape of Mars.
24:40As the representative of Earth-bound explorers,
24:50the rover embodies another aspect of common sense intelligence,
24:54the coordination of mobility and vision.
24:58It has a laser rangefinder and two television cameras.
25:03They detect a rock shown in pink and its shadow in red.
25:10The rover has been told to cross the room.
25:14Its computer plots a path to get there which avoids the obstacles.
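Plotting a path across the room while avoiding detected obstacles can be sketched as a search over a grid: rocks become blocked cells, and a breadth-first search finds a route around them. The grid, obstacle positions, and cell size below are invented for illustration; the rover's actual planner is not documented here.

```python
from collections import deque

def plot_path(grid, start, goal):
    # grid: list of strings; '#' marks a cell blocked by an obstacle.
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # no route exists

room = ["....",
        ".##.",
        "...."]
path = plot_path(room, (0, 0), (2, 3))
print(path[0], path[-1])  # start and goal, route bends around the rock
```

Breadth-first search returns a shortest route in grid steps; a real planner would also weigh terrain and the rover's turning radius.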
25:19Observed by NASA scientist Bob Cunningham, the rover moves.
25:29On Mars, it will cover 20 miles a day
25:31to collect samples of rock and soil.
25:40Here, too, it will pick up a sample from this group of five rocks.
25:47The visual information is conveyed through the television cameras to the computer,
25:52a slightly different view in each eye.
25:55The rock in the center is the target.
25:57The computer works out its rough size, shape, and distance.
26:01Then the arm goes into motion, with the computer guiding its every step,
26:06keeping track of where all of its seven joints are
26:08in relation to each other and the goal.
26:10At the end of the hand are tiny sensors that control the final approach,
26:31when the position must be computed exactly.
26:42The hand picks up the rock, but not just to store it away.
26:45People on Earth will want to know what the rovers discovered,
27:02so it will show off its samples.
27:04But during its mission, the rover will be in contact with Earth less than an hour a day.
27:22Scientists now cannot prepare it for every contingency.
27:25So they hope to endow it with what is probably the most crucial aspect
27:29of common sense intelligence, the ability to learn.
27:34Learning about simple shapes was the problem given to a computer program
27:38written at MIT by Patrick Winston.
27:41I was trying to understand if it's possible for a computer to learn in some meaningful way.
27:45And by that I don't mean a kind of rote learning,
27:47in which I just tell the computer in a very straightforward way
27:50the facts that it needs to know.
27:52Rather, I wanted the computer to be more involved in the learning process.
27:55I wanted it to do some analysis, to make some descriptions,
27:58to perhaps compare descriptions and use those comparisons
28:01to develop a kind of model of what it is that it's supposed to learn.
28:06Winston wanted the program to learn to recognize an arch.
28:09He began by giving it a model.
28:11The program itself labels and counts the parts.
28:16The diagram on the right shows the important features.
28:20It spells those out in detail and stores them in its knowledge base.
28:24The program was told only that this drawing is an arch.
28:34It had to figure out for itself the distinguishing features.
28:40In this example, the program sees something different.
28:43The two supports are touching.
28:45But it hasn't been told if it's an arch.
28:50So it asks.
28:57Winston types back that the drawing is not an arch.
29:00From the response, the program can draw an important conclusion.
29:04That the supports of an arch must not touch.
29:07This new information is added to its knowledge base.
29:13From examples like this, the program accumulates facts about arches.
29:17It acquires knowledge.
29:18But will it be able to apply it?
29:20Now that I've given the computer some examples,
29:22I want to see if it's really learned anything from them.
29:24So I'm going to give it a little test.
29:26I'm going to type in a picture here, which looks like an arch,
29:30except that the object on top is a wedge now instead of a brick.
29:36The program analyzes the drawing to see if it fulfills the minimum requirements for an arch
29:41without violating any of the conditions.
29:46This time, Winston asks the question.
29:48The program checks through its knowledge base before reaching a conclusion.
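The learning procedure shown here, in the style of Winston's arch learner, keeps a model of required and forbidden features: a "near miss" labeled not-an-arch whose only difference is touching supports adds a must-not condition, and a test drawing is checked against both lists. The feature names below are invented for illustration.

```python
# A sketch of learning from examples and near misses, as described
# above. Feature names are invented.
model = {"required": {"two supports", "top piece"}, "forbidden": set()}

def learn(features, labeled_arch):
    if not labeled_arch:
        # A near miss: whatever this example has beyond the required
        # features must be what disqualifies it, so forbid it.
        model["forbidden"] |= features - model["required"]

def is_arch(features):
    return (model["required"] <= features
            and not (model["forbidden"] & features))

# Winston answers that the touching-supports drawing is not an arch.
learn({"two supports", "top piece", "supports touch"}, labeled_arch=False)

# The test: a wedge on top still satisfies the model.
print(is_arch({"two supports", "top piece", "wedge top"}))       # -> True
print(is_arch({"two supports", "top piece", "supports touch"}))  # -> False
```

The key point the sketch preserves is that the negative example is informative precisely because it is a near miss: differing in one feature lets the learner attribute the failure to that feature alone.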
30:03What will happen when this ability to learn goes beyond arches?
30:06When computers can learn not just by example, but from experience as well?
30:11If and when computers have an ability to learn in very powerful ways,
30:15it might start a sort of chain reaction of intelligence.
30:18That is, the smart computer might be able to learn to make itself smarter.
30:22And that, in fact, would lead to a kind of intelligence
30:25that is very difficult for us to fathom.
30:28How would such an intelligence compare with our own?
30:33At the Stanford Artificial Intelligence Laboratory, Professor John McCarthy.
30:37I think it's possible to have artificial intelligence at human level or beyond,
30:42but it's very difficult to say how long it will take
30:45because I believe that some major discoveries are necessary to achieve that level.
30:52One way of putting it is to say that it takes 1.7 Einsteins and 0.3 of a Manhattan Project,
31:00and it's important to have the Einstein be first and the Manhattan Project second.
31:05I would say that on the basis of present knowledge, such claims are simply and utterly ridiculous.
31:12There is simply no basis for making them.
31:15It's not only on the basis of present knowledge, but certainly also on the basis of present achievement.
31:23The quest to build intelligent machines began long before the computer age.
31:27Traditionally, they had been cast in the image of humanity.
31:36The Green Lady was built by a 19th century craftsman to entertain the royal courts of Europe.
31:41Her complicated and graceful moves are rigidly controlled by a hidden mechanism.
32:02Early computerized robots were equally inflexible.
32:19Alpha Newt's sole function is to seek light.
32:23A sensor conveys information to a small computer which controls the robot's direction.
32:42His successor, Beta Newt, has an onboard computer which directs it through a programmed sequence of actions.
32:48Without vision, it can't distinguish the letters, let alone spell its name.
33:00A human being has carefully pre-arranged the blocks.
33:06What primitive robots have in common is that they can do only a single kind of task.
33:12But the essence of common sense intelligence is generality.
33:15Built ten years ago at the Stanford Research Institute,
33:22shaky represents a more sophisticated class of robot.
33:27Dismantled now, the versatile shaky could understand English commands
33:31and devise a way to carry them out, even in an unfamiliar environment.
33:34Here he uses his power of vision to find and retrieve a particular box.
33:51Is Shakey the forerunner of a truly general machine intelligence, or are the problems insurmountable?
34:03Most people have been skeptical about all the developments in science and technology that have occurred.
34:08I mean, look at the history of space flight.
34:11I can remember when I was a boy and first became interested in space travel back in the 1930s,
34:16that this was regarded as the most ridiculous thing you could possibly talk about.
34:20And before that, of course, the idea of heavier than air flight was ridiculed.
34:23So we've seen right down the ages this kind of skepticism.
34:29In this case, too, the concept of the intelligent computer,
34:34there's also an element of fear involved, because this challenges and threatens us,
34:39threatens our supremacy in the one area which we consider ourselves superior to all the other inhabitants of this planet.
34:44So people are not only skeptical of computers, but they're fearful of them.
34:50And perhaps even if they think it may happen because they're fearful,
34:54they'll try to pretend to themselves that it won't happen, a kind of whistling in the graveyard.
35:001926. A German film, Metropolis.
35:04The idea of the smart machine is designed to frighten.
35:08A mad scientist schemes to replace human workers.
35:12It is from fantasy like this that our image of the intelligent machine has come.
35:25And Hollywood has maintained this Frankenstein motif in an endless series of horror movies,
35:32starring robots and malevolent computers.
35:39But recently, there have been exceptions.
35:43How did we get into this mess?
35:46I really don't know how.
35:48We seem to be made to suffer. It's our lot in life.
35:53Where do you think you're going?
35:55Well, I'm not going that way.
35:57What makes you think there are settlements over there?
35:59Don't get technical with me.
36:02What mission? What are you talking about?
36:04I've just about had enough of you. Go that way. You'll be malfunctioning within a day, you near-sighted scrap pile.
36:15What will intelligent machines of the future be like? What function will they serve?
36:19It is a question to ask the creator of HAL, Arthur C. Clarke.
36:23Intelligent computers could take almost any conceivable form, and I'm sure they will according to the duties they had to perform.
36:32The commonest idea in the mind of the general public is certainly the clanking humanoid robot, like the one in Star Wars, or the ones immortalized by my friend Dr. Isaac Asimov, which look like human beings.
36:49In fact, sometimes they might even be indistinguishable from human beings.
36:53But I think that although that type may arise, most of them will tend to be just grey metal boxes sitting around and thinking and communicating instructions to all sorts of specialized tools and devices and machines which are their servants which do the jobs they're designed to perform.
37:15For example, at the Stanford Research Institute, artificial intelligence is being applied to industrial problems.
37:22Mounted on this arm is a camera which sees an object on the conveyor belt.
37:30By analyzing the image, the computer enables the arm to pick it up.
37:35As in a real industrial situation, objects go down this conveyor belt at unpredictable angles.
37:50A worker picking them up would automatically adapt himself.
37:54The robot arm must do the same thing.
37:56The arm returns to get the next object, another electrical socket cover.
38:03But the hole means it's defective.
38:17It's picked up anyway.
38:21And the computer directs it to be placed in a different bin.
38:26The potential advantage of computerized automation lies in its flexibility.
38:41The arm is now assembling a water pump.
38:48By changing the program, the same hardware can do many different jobs.
38:54Here they contend that this could mean an end to the standardized products always associated with automation.
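The sorting behavior shown in this sequence can be sketched as a short program. This is a hypothetical illustration, not the SRI code: each part coming down the belt is inspected, and good and defective parts are routed to different bins. Swapping out the `inspect` function stands in for the "program change" that lets the same hardware do a different job.

```python
# Hypothetical sketch of the inspect-and-sort loop described above.
# Each part is a dict; `extra_hole` marks the defect the narrator mentions.

def inspect(part):
    """Return 'defective' if the part has an unexpected hole, else 'good'."""
    return "defective" if part.get("extra_hole") else "good"

def sort_parts(parts):
    """Route each part id into a bin according to the inspection result."""
    bins = {"good": [], "defective": []}
    for part in parts:
        bins[inspect(part)].append(part["id"])
    return bins

belt = [
    {"id": 1, "extra_hole": False},
    {"id": 2, "extra_hole": True},   # the defective socket cover
    {"id": 3, "extra_hole": False},
]
print(sort_parts(belt))  # {'good': [1, 3], 'defective': [2]}
```

Replacing `inspect` (or `sort_parts`) while keeping the arm hardware fixed is the flexibility the narrator contrasts with conventional single-purpose automation.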
39:01In charge of robotics, Charles Rosen.
39:06For the first time, it begins to be possible to customize goods.
39:11That is, to produce goods that suit the individual.
39:15To individualize what is produced for everybody.
39:18Now, I don't pretend that it means that everybody will have an absolutely individual styled car.
39:24But it is possible to have a much larger number of things to choose from.
39:31Ones that you would prefer from simple to complex and from gaudy to non-gaudy and so forth.
39:37In the field of clothes, for instance, it looks possible to customize every suit of clothes and every dress.
39:44Using other forms of this kind of automation.
39:46You would need computer-aided design and computer-aided manufacturing and finally this programmable automation to accomplish this.
39:54And it might mean that people would go around in their own designed clothes with some help from the computer.
40:01At a price that can be similar to the mass-produced prices that we now have.
40:05Anybody here? I have a new citizen to be outfitted.
40:22Brother, you want jackets? We got jackets. You want trousers? We got trousers.
40:26This is a good time. Believe me, we're having a big sale. Tremendous.
40:32Positively the lowest prices. Maybe you need a nice double knit.
40:36Incidentally, I'm stuck for three pieces of corduroy.
40:39Something simple.
40:41We got simple. We got complicated. Why do you worry?
40:44Okay, step against the screen.
40:56This is terrible.
41:10Okay, okay. We'll take it in.
41:13Even after the bugs are ironed out, intelligent computers will join the workforce gradually because of the great expense involved.
41:19They will be applied first to dangerous and monotonous jobs unpopular with human workers.
41:26But it's inconceivable that the trend will stop there.
41:30Today, around 30 to 40% of the workforce is engaged in manufacturing goods.
41:36By the year 2000, I would think that about 5 to 10% of the present working force would be needed to manufacture the same amount of goods.
41:47And like it or not, this could mark the beginning of the much heralded age of leisure.
41:53And in its attempt to make the impossible possible, Kawasaki Heavy Industries has been struggling for the development of an unmanned assembly system.
42:02This has resulted in the acquisition of a definite outlook for the development of desired robots and software to create a robot-operated assembly line.
42:11This is no more than the final goal toward which every effort in labor-saving technology is being directed.
42:19Your new mate, Kawasaki Unimate, will find broader application in manufacturing operation in the near future as we vigorously approach the final goal of an unmanned factory.
42:31The forerunner of technology which will bring more happiness to everyone.
42:37It's perfectly obvious that the development of such computers would restructure society completely.
42:47They would clearly remove much of the mechanical, if you can use that term, the routine work, which of course has taken so much of society's time of the human race.
43:01And they're already doing this, of course, in many ways, because our society now would collapse instantly if the computers which run it were taken away.
43:10And these are very simple, low-grade computers.
43:13And this, of course, raises tremendous social and philosophical problems.
43:17Not just the question of displaced people, what will they do, what will the people who are only capable of low-grade computer-type work, what will they do in the future?
43:29The much more profound question of what is the purpose of life, what do we want to live for?
43:33And that is a question which the intelligent computer will force us to pay attention to.
43:38It is a question that will confront people from all walks of life.
43:45The decision-making capability of intelligent computers makes them as appropriate to the professions as to the workplace.
43:56Medical diagnosis is a test case.
43:58Now, this discomfort that you have, is it the kind of discomfort that would grab and then let go?
44:05No.
44:06And then it's more of a steady discomfort, except it's influenced by meals.
44:09At the University of Pittsburgh Medical School, Dr. Jack Myers is one of the country's leading diagnosticians.
44:16By asking questions and making observations, he begins to assess what's wrong with his patient.
44:20Now, why don't you show us next where you feel this discomfort?
44:24Diagnosis is often regarded as something of an art, at the very least a skill requiring the human touch.
44:29It's a fairly good-sized area there.
44:31But does it?
44:33Abdomen pain peri-umbilical.
44:37Myers conveys the information he has just gathered to a computer, programmed by artificial intelligence expert Harry Pople.
44:44Abdomen pain exacerbation with meals.
44:48These few observations allow the program to begin reducing the range of possibilities.
44:52The system, Harry, has now come back with the first stage of its analysis.
44:56And it's of interest that the first two items being considered are choledocholithiasis, which is the gallstone in the common bile duct, and carcinoma or cancer of the head of the pancreas.
45:08But there are also several other possibilities.
45:13The program will seek evidence to support or refute them.
45:17Like a physician, it will do so by asking questions.
45:21It's asking now for findings concerning the abdominal pain.
45:26Let's see what details it wants to know.
45:28Is it a colicky pain?
45:32No, it's not a colicky pain.
45:35Some of the questions are identical to those Dr. Myers asked his patient.
45:38Not really, no.
45:41It's not coincidental.
45:43For the past seven years, they've been tailoring the program to duplicate his methods.
45:47Is there a severe back pain?
45:49No.
45:54We have back another analysis, and the system is in a narrow mode, which means it has two leading contenders for the diagnosis.
46:03From many possibilities, it has narrowed the field to two.
46:08The system will now try to distinguish between these two, and almost certainly will go to more complicated studies than we've been using up to this point.
46:19Have you findings of upper GI barium meal x-rays?
46:23That has not been done as yet.
46:26The program calls for the least expensive and least painful tests first.
46:31It will draw tentative conclusions on limited information.
46:35And what about cholangiography?
46:36That has not been done as yet.
46:38But probably not a final decision.
46:40With all of these omissions in the important findings, it's pretty unlikely that the program will be able to come to any kind of a conclusion.
46:49I would guess that we'll see it deferring on this.
46:55Yeah, deferring is, in fact, the judgment of the program at this point.
47:00The program recognizes 600 diseases and 2,500 symptoms.
47:04With trillions of combinations possible, isolating the significant factors that will lead to a diagnosis would seem to require something beyond knowledge.
47:15Intuition perhaps.
47:17But years of thinking about what he does has convinced Jack Myers otherwise.
47:21My own observations that what is called art and intuition in diagnosis is generally based on knowledge and experience.
47:30Sometimes these things are hard to analyze and understand, but I think this is predominantly the application of information, the organization of information and the coming to a logical conclusion.
47:45So diagnosis can be expressed in rules.
47:49And Harry Pople had to learn what they are.
47:54Harry, here's a case I got out of the files. It's a very good one for analysis.
47:59This is an elderly...
48:00He would sit with Jack Myers for hours, trying to unravel the diagnostic process.
48:06...developed cirrhosis of the liver, and then there are many complications of this.
48:11In describing the components of the liver problem, you've skipped over some other items that are underlined.
48:17How is it that you know not to worry about those items when working on the liver problem?
48:23Well, that's a matter of medical knowledge and judgment.
48:28At the beginning, Myers found it difficult to explain.
48:31The problem is if we're going to get a program to do this, I have to understand what that professional judgment is that you're talking about.
48:39And I need to know just exactly what it is that enables you to do what you do and call professional judgment.
48:47Well, I can explain to you each of these items and as to what the item means.
48:54The result of several years of work was a system that Dr. Myers believes reproduces his own decision-making process.
49:00And then I think you'll see how they do form a pattern or a cluster.
49:06All right, let's do that.
49:08First, the generally known symptoms of a particular disease are compiled by medical students.
49:13Then, the judgments have to be assigned.
49:15Now that we've agreed upon the data for diabetes insipidus, let's put the profile into the machine.
49:21Chuck, are you ready?
49:23Sure.
49:25Age 16 to 25.
49:31O2.
49:33They are expressed in numbers, which tell the likelihood of a disease if a symptom is present
49:38and the likelihood of a symptom if the disease is found.
49:41O3.
49:42This is intuition turned into numbers, judgments converted to calculation.
49:49Diabetes insipidus family history.
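The two-way numbers being dictated here can be sketched as a small scoring scheme. The values and the "excessive thirst" entry below are invented for illustration, and the scoring rule is a simplified stand-in, not the program's actual weights or algorithm: each (disease, symptom) pair stores how strongly the symptom suggests the disease and how often the disease produces the symptom, and candidate diseases are ranked by summing the evidence.

```python
# Toy sketch of a numeric disease profile (invented values, simplified rule).
# evoke: likelihood of the disease if the symptom is present
# freq:  likelihood of the symptom if the disease is found
profiles = {
    "diabetes insipidus": {
        "age 16 to 25":     {"evoke": 0, "freq": 2},
        "family history":   {"evoke": 2, "freq": 1},
        "excessive thirst": {"evoke": 3, "freq": 5},  # hypothetical entry
    },
    "diabetes mellitus": {
        "excessive thirst": {"evoke": 2, "freq": 4},
        "family history":   {"evoke": 1, "freq": 2},
    },
}

def score(disease, findings):
    """Add evoking strength for each present finding; subtract the
    expected frequency of findings reported absent."""
    total = 0
    for symptom, present in findings.items():
        entry = profiles[disease].get(symptom)
        if entry is None:
            continue
        total += entry["evoke"] if present else -entry["freq"]
    return total

findings = {"excessive thirst": True, "family history": True,
            "age 16 to 25": True}
ranked = sorted(profiles, key=lambda d: score(d, findings), reverse=True)
print(ranked[0])  # diabetes insipidus
```

Turning judgments into numbers like this is what lets the program narrow many possibilities down to a few leading contenders, as in the consultation shown earlier.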
49:52This program is intended to help physicians, not to replace them.
49:56But won't we eventually have more faith in the computer's decisions than in our own?
50:00I suppose it's possible in the future that these systems will be considered infallible, but I certainly hope not.
50:10There is a tendency for man to believe machines more than other people, but I don't really believe this is appropriate, and I hope this won't happen.
50:19Even worse, will we abdicate responsibility for our decisions?
50:24This question becomes increasingly important as decision-making programs are implemented in medicine and other professions as well.
50:31In a world of growing complexity, governments will use computer programs to set public policy.
50:40Our cities are already cognitive cities, with many of their functions computerized.
50:45Artificial intelligence will further this trend, by giving computers the ability to program themselves, and maybe to explain to us what they're doing.
50:55Perhaps we will run things better in partnership with smart machines.
51:00Perhaps we'll no longer run things at all.
51:02We're already seeing individual functions of a city's life, the medical function, the educational, the central administration, the garbage collection, and so forth, increasingly computerized.
51:14In due course, these computer networks will begin to exchange information with each other, and we will have centralized machine regulation of cities,
51:24at a level of complexity, which none of the inhabitants can anymore explain, follow, correct, or control.
51:33And there is a risk of our species ultimately becoming parasites living in the interstices of intelligent cities of the future,
51:43which may be governing themselves according to certain criteria of efficiency,
51:49which may not always take into account, in a sensitive way, what we regard as vital human values.
51:57In this experiment, a computer is reading a woman's brain waves.
52:04The aim is for the computer to discern in what direction she is looking,
52:09not by watching her eyes, but by deciphering the electrical patterns of her brain.
52:19As the subject looks in each of four different directions, up, down, left, and right,
52:24a flashing checkerboard stimulates four corresponding brain wave patterns,
52:29which are recorded by the electroencephalograph.
52:34The differences among the patterns are so slight that no person could tell them apart.
52:39But the computer can, and it stores the results in memory.
52:43You can relax now.
52:46We have our training set.
52:49And for the next run, you're going to see the maze in your field of vision.
52:55Remember, you have to take that little mouse out of the maze step by step
52:59by fixating on the red dot that stands in the direction where you want the mouse to move.
53:06Are you ready?
53:08This is the test.
53:09The subject moves her eyes in the direction she wants an electronic mouse to move in a maze.
53:14The computer picks up the corresponding brain wave and moves the mouse accordingly.
53:21It seldom makes a mistake.
53:26In effect, the computer is reading this woman's mind.
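The train-then-classify scheme described in this experiment can be sketched as a nearest-pattern classifier. This is an illustrative simplification, not the actual EEG software: one averaged brain-wave pattern per gaze direction is stored as the "training set," and a new recording is labeled with the direction whose stored pattern it most closely matches. The feature vectors are made up for the example.

```python
# Minimal sketch of the brain-wave direction classifier (invented data).
import math

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# One stored (averaged) pattern per gaze direction: the training set.
training_set = {
    "up":    [0.9, 0.1, 0.1, 0.1],
    "down":  [0.1, 0.9, 0.1, 0.1],
    "left":  [0.1, 0.1, 0.9, 0.1],
    "right": [0.1, 0.1, 0.1, 0.9],
}

def classify(pattern):
    """Label a new recording by its nearest stored pattern."""
    return min(training_set, key=lambda d: distance(pattern, training_set[d]))

print(classify([0.2, 0.15, 0.8, 0.1]))  # left
```

The point of the demonstration survives the simplification: the differences between the four patterns are too slight for a person to see, but a stored-template comparison like this can separate them reliably enough to steer the electronic mouse.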
53:29Experiments like this might be the first steps toward a merger of mind and machine,
53:36a marriage of artificial and natural intelligence.
53:40But in such a relationship, who would be in control?
53:50Who would be the dominant partner?
53:52It is possible that we may become pets of the computers, living pampered existences like lap dogs.
54:03But I hope that we will always retain the ability to pull the plug if we feel like it.
54:07And if we don't, in fact if we do hand over everything to the computers,
54:12that will just prove the thesis that I've sometimes suggested,
54:16that the computers are designed to be our successors,
54:19and that perhaps when they come along it's our function to become obsolete,
54:24as our predecessors have become obsolete and been replaced by us.
54:27And I feel if that happens, it will serve us right.
55:14The End