00:00Think.
00:31This is an artificial cockroach.
00:34Not only does it walk like the real thing, it even thinks like it.
00:39Studying the brains of insects is giving us unexpected insights into the nature of intelligence and how to build it.
00:51We're on the verge of making computers that can tell the difference between the faces of men and women.
00:58Appreciate emotions, sadness, happiness.
01:03Give us clues to the motivations of living things.
01:10Maybe even let us build machines that can impersonate them.
01:16Perhaps even build ourselves.
01:19This isn't a computer laboratory. It's a school room.
01:30Dave Cressy is conducting the lesson.
01:34He's teaching a new sort of computer called a neural network how to tell when an underground platform is too full for comfort.
01:43The first and most important thing I have to do with a neural network is to teach it what is meant by a full, half-full or empty platform.
01:54And the way I'm going to do this is to show the neural network examples of each type of platform.
02:01And what I'm doing now is to show the neural network examples of full platforms.
02:06So images come up on the screen and the neural network actually looks at these images and learns what is meant by a full platform.
02:13I shall now go on to half-full and then finally to the empty class.
02:23What I'm going to do now is to test the neural network's performance on images that it never saw before.
02:30And this is the interesting part.
02:31So let's see how it gets on on a full image that it never saw before.
02:42It's quite obvious to the human observer that it is in fact a full platform.
02:46And now let's ask the neural network what its opinion is.
02:49And in this particular case, it's got it incorrect.
02:53You can see that the neural network is at the moment confused between the full and the half-full classes.
02:59Let's try another one.
03:05And in this particular case, the neural network gets it absolutely right.
03:09Which, considering it's only actually ever seen two full platforms before, is pretty good.
03:15Let's have a look at how it's doing on the half-full classes.
03:21And the neural network is confused between half-full and full.
03:25So far, the neural network isn't doing badly, but it really doesn't yet have enough experience to be correct often enough.
03:34And so what I'm going to do now is go back into the training phase and to give it some more examples to learn on.
03:40Now that the neural network has actually been taught 20 images from each class, I'm going to go back to the images that it got wrong before and see if it can get them right this time.
03:56Let's try on half-full.
03:57And there we are. The neural network in this particular case is very confident that it is a half-full platform. Well done.
04:08What I'm going to do now is to feed the neural network with a whole load of images taken at random that it has never seen before.
04:15And so now when faced with these random images, it appears the neural network is not making any errors at all.
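The teach-then-test procedure shown in this scene can be sketched in a few lines of Python. This is a toy stand-in, not the actual platform-monitoring system: the images are reduced to invented four-zone occupancy features, and a nearest-centroid rule plays the part of the trained network.

```python
# Toy sketch of the teach-then-test loop (not the real platform monitor):
# a nearest-centroid classifier over invented "crowd density" features.
import numpy as np

rng = np.random.default_rng(0)

def make_examples(density, n):
    # Hypothetical 4-zone occupancy measurements centred on a class density.
    return np.clip(density + 0.1 * rng.standard_normal((n, 4)), 0, 1)

# Training phase: show the classifier labelled examples of each class.
classes = {"empty": 0.1, "half-full": 0.5, "full": 0.9}
centroids = {label: make_examples(d, 20).mean(axis=0)
             for label, d in classes.items()}

def classify(features):
    # Answer with the class whose training examples it most resembles.
    return min(centroids, key=lambda c: np.linalg.norm(features - centroids[c]))

# Testing phase: a crowded platform the classifier has never seen before.
print(classify(np.full(4, 0.95)))  # → full
```

As in the demonstration, more training examples per class make the learned centroids settle and the error rate fall.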
04:22The basic design of the computers we see all around us hasn't changed for 50 years.
04:31When we type something on a keyboard, we send information to the computer's processor.
04:36Each piece of data is fed to the computer in sequence.
04:40The computer deals with it, sends it on its way, and moves on to the next item.
04:44It processes that one, then the next one, and so on step by step through the stream of data being fed to it.
04:52It's a powerful way to manipulate data, but it does have its disadvantages.
04:57A serial computer is a marvellous machine.
05:00It functions by working with a program, that is a set of rules, as to how to operate on what is stored in it, that is its data.
05:08Now the data are numbers, and it operates by adding them, or multiplying them, or dividing them.
05:12It's very effective because it can do this many times a second, billions of times a second, for example.
05:19Far faster than we could do this.
05:21And it can tackle very, very difficult problems thereby.
05:24But, it has a difficulty.
05:28Suppose we take an infant in a crowded room.
05:31That infant, at the age of a few weeks, can recognise its mother's face.
05:35Very, very effective.
05:37The computer here, in spite of its size, it couldn't do that.
05:40What's the problem?
05:43The problem, effectively, is that we're talking about an infant working without rules.
05:48It's solving problems without rules.
05:51The serial computer has to have the rules. It has to have the program.
05:55When a serial computer is used to perform a so-called intelligent task, or a task which, if done by humans, would be said to require intelligence,
06:05this has to be worked out by a programmer, step by step.
06:10The tasks I'm talking about don't require you to be a genius.
06:15There are things like recognising faces, recognising language, following a conversation very, very rapidly.
06:20It's enormously hard to get conventional computers to do that, for the simple reason that a programmer can't easily work out how this is done.
06:31And, when he does, there are usually loopholes in that which make the program rather incompetent.
06:37Neural net researchers realised that these limitations could be overcome.
06:43To build something as powerful as the human brain, they argued, you had to copy the way the brain works.
06:50The inspiration behind neural networks is the human brain.
06:54It looks like two fistfuls of porridge, but it's very complicated.
06:58It's made up of tens of billions of nerve cells, each of them very small, even less than a millimetre in size.
07:06It's because they all function in parallel, simultaneously, that this is so effective.
07:14But even more than that, when the brain goes into a new environment, the way one nerve cell affects another can be modified.
07:21We're beginning to understand the way that these connection weights, that is, the ways one nerve cell affects another, are changed by different environments.
07:32And it's through that set of rules, of changes of connection weights, that we think we'll be able to understand the whole of this complexity.
07:40How it was that even Mozart and Einstein were able to have their amazing creativity in these two fistfuls of porridge.
07:48Researchers have found that the brain processes information through this vast network of interconnecting pathways, many billions of them.
07:57Signals travel down these pathways from nerve to nerve.
08:01The strength or weight of the impulses controls how the brain stores and manipulates data.
08:07We can start to reproduce the activities of the brain artificially by modelling the single nerve cell.
08:12Now, the single nerve cell is a very small device.
08:16It acts by taking note of the total activity moving onto it, coming onto it, and responds if there is enough activity, giving out a nerve impulse.
08:28If there's too little activity, it doesn't respond.
08:30So it's a little decision unit, a very simple device.
08:33That can be modelled in silicon.
08:35The basic building block of a silicon brain is an artificial nerve cell.
08:42These cells perform a very simple function.
08:45They either pass on a signal or block it.
08:48On their own, these cells can't do much.
08:51But when they're arranged in a network, they start to show surprising abilities.
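The "little decision unit" described here can be written down directly. A minimal sketch (a threshold unit in the McCulloch–Pitts style, with invented weights):

```python
# An artificial nerve cell: sum the weighted incoming activity and fire
# (emit 1) only if it crosses a threshold; otherwise stay silent (0).
def neuron(inputs, weights, threshold):
    activity = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activity >= threshold else 0

# On its own, one cell can still act as a tiny logic gate. With these
# weights and threshold it passes a signal only when both inputs fire.
print(neuron([1, 1], [1.0, 1.0], threshold=2.0))  # → 1 (fires)
print(neuron([1, 0], [1.0, 1.0], threshold=2.0))  # → 0 (blocked)
```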
08:55The study of the brain has been very important so far in developing neural networks.
09:01Because if we didn't have the example of the brain, we wouldn't have believed that a bunch of relatively dumb little processors with variable connection strengths between them could actually learn complicated things.
09:12I mean, it stretches one's credulity to believe that a whole bunch of dumb processors with connection strengths between them could be turned loose on perceptual input coming from the world and could actually invent the correct model of how the world works.
09:29But that's what little kids do all the time.
09:31And so the brain tells us it can be done and it tells us something about what the hardware looks like.
09:36And so we know that this is a soluble problem.
09:40And that means you're much more willing to put a lot of effort into solving it.
09:45If it wasn't for the existence of the brain, I wouldn't believe this kind of thing could be done at all.
09:50Part of the power of the brain comes from its ability to process many pieces of information at the same time, what's called parallel processing.
09:58But simply connecting a lot of processors together isn't enough to make an intelligent machine.
10:05The connections have to be the right ones.
10:07To make the right connections, researchers give their machines a surprising ability.
10:12Neural networks are built in such a way that they can themselves alter the electronic connections between their artificial nerves.
10:19It's here that teaching comes in. The teacher gives the network examples, such as empty or full underground platforms.
10:29When the network gives the wrong answer, the teacher tells it to rearrange its internal connections to try a different organisation of nerve cells.
10:37When it gives the right answer, the teacher instructs it to keep the arrangement it already has.
10:42Eventually, a stable pattern of connections becomes established, and just like a real brain, the pattern forms a representation, an image of the information fed to the network.
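The teacher's keep-or-rearrange rule is essentially the classic perceptron update: wrong answers shift the connection weights, right answers leave them untouched. A minimal sketch, with invented "platform" features standing in for real images:

```python
# One artificial neuron trained by a "teacher": on a wrong answer the
# connection weights are shifted toward the right one; on a correct
# answer the error is zero and the weights are kept as they are.
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            error = target - pred          # 0 when the answer was right
            w += lr * error * xi
            b += lr * error
    return w, b

# Invented features: [crowding in zone A, crowding in zone B]; label 1 = full.
X = np.array([[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]])
y = np.array([1, 1, 0, 0])
w, b = train_perceptron(X, y)
print(1 if np.array([0.85, 0.9]) @ w + b > 0 else 0)  # → 1: full, never seen before
```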
11:01This is our version of Yorick.
11:05We've given him two eyes that are simple photo-electronic sensors.
11:09They send signals down to a neural net.
11:14This neural net doesn't have a computer in it.
11:16It's just made out of 256 silicon neurons.
11:21These silicon neurons are not only connected to the eyes, but also to one another, so they get into a state.
11:27We call it a mental state, because it's a little bit like the states that our own brains get into when we look at things or hear things.
11:35Now, the important thing here is that we can actually look at his mental state by taking his plastic head off.
11:46So far, Yorick's only learnt about two things, light and darkness, and has developed its neurons to create internal mental images of day and night.
11:59We can test this by presenting it with images that are light and dark, and look to see what's going on in his head.
12:11Now, there is a state we can create simply by cutting the sensors off, which is a bit like sleep in human beings.
12:18At that point, the neural network starts running off on its own and has flashes of the images that it's learnt.
12:27Now, that may be what dreaming is all about.
12:30The fact that the network can change its own internal organisation as it learns has important consequences.
12:36It means that nets are not mere slaves to a fixed, built-in programme.
12:42As a result, they can solve problems which have so far defeated computer programmers.
12:47Among them, many tasks human beings perform effortlessly.
12:51We can tell the difference between men and women, for instance, pretty easily.
12:55It's extremely hard to write a computer programme which can do the same thing.
12:59But a neural network called Sexnet has learned to discriminate between the sexes.
13:06It's the brainchild of Terry Sejnowski.
13:09Sexnet is one of my most favourite networks.
13:12We're interested in vision.
13:15How is it that we can look at a person and recognise that person,
13:20and even more basically, recognise what their sex is?
13:23That's a problem of pattern recognition that is very difficult to programme into a computer with a conventional approach.
13:33However, by taking examples of faces in the same way that we're able to take examples of words
13:41and feed them as input, we're able to teach the network how to make the discrimination
13:46between the features in the face that characterise males and those that characterise females.
13:53The network is given examples of faces.
13:56Now, the first time, when it's given a face, it may incorrectly identify it as female.
14:02But a teacher provides the network with the right answer, and using that knowledge,
14:08the network is able to readjust the connections within the network so that the next time it sees the face,
14:13it will give the correct answer.
14:16Now, it may not do it correctly for new faces, but gradually, with more and more examples,
14:22it's able to pick out what's common about all of the male faces
14:26and be able to assign a new face to the correct category, male or female.
14:32Now, we took away all of the secondary sexual characteristics to make it a particularly difficult problem.
14:37No hair, only from the chin up.
14:39And testing members of my own lab, humans can do around 90%.
14:46The network has achieved a performance which is a little bit better than that, about 92%.
14:52And what this means is that either neural networks are better than humans
14:56or members of my lab need more experience with discriminating sex.
14:59Sexnet learned in the same way we do, by trial and error, and by example.
15:08It takes several years for children to learn to read.
15:12I like to go to my grandmother's because she gives us candy and we eat this.
15:19Learning to connect the written word with the sound it makes and the meaning it contains is a laborious repetitive process.
15:27But eventually, the rules of language become imprinted in the mind.
15:31To my cousins.
15:34Terry Sejnowski has taught a machine to read in exactly the same way.
15:38NetTalk is a neural network that's been taught to read English out loud.
15:44The network is given letters and words, as they'd be written on a page, as the input.
15:51And the output are English speech sounds or phonemes.
15:54And the task that the network has is to associate with each letter of each word the correct sound.
15:59Now that's a very difficult task for English because the spelling, as you all know, is very irregular.
16:06And it's very difficult to learn.
16:07And it takes many years to be able to read and to spell properly.
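NetTalk's input can be sketched as a sliding window: the network never reads a whole word at once, but sees a window of letters (seven in the published NetTalk) and must output the phoneme for the centre letter. The padding character and helper below are illustrative:

```python
# Build NetTalk-style sliding windows: each training case is a window of
# letters, and the target is the phoneme for the letter at the centre.
def letter_windows(text, size=7):
    pad = "_" * (size // 2)          # "_" marks positions beyond the word
    padded = pad + text + pad
    return [padded[i:i + size] for i in range(len(text))]

for window in letter_windows("cat"):
    print(window)                    # ___cat_  then  __cat__  then  _cat___
# The centre letter of each window (c, a, then t) is the one to pronounce.
```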
16:11What you're going to hear is what the network sounds like at the very beginning of training
16:16when it's just learning the different sounds and how they might go together.
16:21The network is going to be learning as you listen to it.
16:26We call this the babbling phase, and it sounds like a little baby just beginning to make speech sounds.
16:39The network likes vowels, but it begins to intersperse the vowels with consonants.
16:43Ba, ba, ga, ga.
16:47Now this is the second stage of training after a few hours in the computer.
16:51You'll hear discrete words, but they're not English.
16:53Now you're going to hear what it sounds like when you come back in the morning.
17:09It's been training overnight.
17:10When we waffle from school, I waffle with two fries and sometimes we can't travel from school now.
17:19I like to go to my grandmother's because she gives us candy, and we eat there sometimes.
17:27It's time.
17:29Making a machine that can read is quite an achievement, but it's by no means the limit.
17:34In Toronto, a network is learning to interpret the flickering movements of the human hand.
17:39We were interested in the ability of neural networks to learn by experience.
17:45So we took a very difficult task, which is getting a person to make movements of their hand to drive a speech synthesizer so they can talk with their hand instead of with their normal vocal apparatus.
18:03Glovetalk is the mechanical equivalent of sign language, producing words from the patterns made by a hand.
18:03The Glovetalk glove feeds information to the neural network about the shape, position, and movement of the hand inside it.
18:12The basic way you tell it that you want to say a word is you form the shape with your hand, and then you move like that.
18:18The model's a bit like a conductor trying to make the drums go boom.
18:21You go like that, and when you hit the end, it's meant to go boom.
18:24Here, when you hit the end, it's meant to say the word.
18:27So there's a neural network that runs all the time that's trying to decide if you just made a movement like that.
18:32And that neural network was the trickiest one to get working, because when people hold their hand there, it sort of drifts around,
18:37and the net has to decide whether that was just sort of random drift,
18:40or whether the person really intended to say a word.
18:44And then once that system's recognized that you intended to say a word,
18:47another network reads the shape of the hand, that is, reads all these measurements coming from the glove,
18:52and decides which word it was.
18:54There's yet another net that decides which direction you went in,
18:57and that determines the ending on the word.
18:59So if you say the word short, you move one way and it says shortly,
19:02you move another way and it says shorted, and so on.
19:05The network converts the hand movements into a synthetic voice.
19:10The result, a mechanical rendition of a simple sentence.
19:14You are not my sister or their daughter.
19:26The way we initially train Glovetalk is the computer flashes a word for the user to try and produce.
19:32The user then makes the hand shape they think is correct for that word,
19:35and that happens a few times for each word.
19:38The computer then simulates the neural network and changes the weights in the neural network
19:43in such a way that when the user makes one of those shapes,
19:47the computer will respond by telling the speech synthesizer to produce that word.
19:52So because the computer told the user what word to say,
19:55it knows what the right answer is,
19:57and it can get to see all the variations that that user makes in producing that word.
20:02Then, after the user's finished the training session,
20:06the computer will then train for several hours to adjust all the weights
20:09so it produces the right response.
20:11Once the network's trained on one user,
20:14it can adapt rapidly to new users.
20:18It simply gets the new user to try and produce the various words.
20:22And then instead of having to adjust all the weights from scratch,
20:25it takes the weights it used for the previous user
20:28and just changes them slightly
20:30so as to accommodate the slight differences between the new user and the previous user.
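The warm-start idea described here can be sketched with any gradient-trained model: resume from the previous user's weights rather than from zero, and only small corrections remain. The linear model and "glove feature" numbers below are invented, not Glovetalk's actual network:

```python
# Adapting to a new user: start gradient descent from the previous user's
# weights instead of from scratch, since the two users differ only slightly.
import numpy as np

def train(X, y, w_init, lr=0.5, steps=20):
    w = np.array(w_init, dtype=float)
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)   # least-squares gradient step
    return w

X = np.eye(3)                               # toy glove measurements
w_prev_user = np.array([1.0, -2.0, 0.5])    # weights fitted to the first user
w_new_user = w_prev_user + 0.1              # the new user differs only slightly

w_warm = train(X, X @ w_new_user, w_prev_user)   # warm start: previous weights
w_cold = train(X, X @ w_new_user, np.zeros(3))   # from scratch, same step budget
print(np.linalg.norm(w_warm - w_new_user) < np.linalg.norm(w_cold - w_new_user))  # → True
```

With the same small step budget, the warm-started weights end up much closer to the new user's than training from scratch does.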
20:37Neural networks are giving us machines that can learn
20:40to read, to recognize faces,
20:42to turn patterns into sounds.
20:45Why not ones that can find their own way around?
20:52This is Frank the robot.
20:54This is Martin Snaith, his trainer.
21:02Frank has a brain, not a very large one.
21:05It's got about a hundred artificial nerve cells in it.
21:08Nevertheless, Frank can do something that most other robots can't.
21:13He can read a map his trainers programmed into him.
21:17Frank is a neural network system that's designed to run through mazes,
21:37either real-world mazes or laboratory mazes.
21:40And he is able to get from a destination point back to a start point,
21:45or from a start point to a destination point.
21:47And that's designed to allow you to do some kind of real-world task,
21:52like taking a piece of paper from one guy's desk to another, for example.
21:57Frank knows where he is in the maze because he recognizes the things that he sees as he sees them.
22:02He uses the context.
22:04So he's expecting to see certain features one after another in the maze,
22:08just like you would do when you were following a map.
22:11In fact, Frank has got a map.
22:13Now, I use that word very loosely.
22:15It may well be totally different to the type of map that you and I use,
22:18but it is a neural implementation of a map.
22:21And this particular neurocomputer here
22:23is expecting to see a certain set of features which describe the robot's goal.
22:29So the robot will carry on in the maze as long as what the robot sees in the maze
22:36lines up with what the robot expects to see.
22:38And when the robot sees the set of features that has been written out as the goal,
22:44then it plays a tune or whatever, rings a bell or gives the guy the piece of paper,
22:49whatever it's designed to do for the particular application.
22:52It can then swap the map or turn it upside down
22:55and follow the map back to where it came from.
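The "map as an expected sequence of features" idea can be sketched simply: the robot advances while what it sees lines up with what the map predicts next, and reversing the list gives the route home. The feature names are invented:

```python
# Follow a map held as an ordered list of expected features: each time the
# next expected feature is actually seen, move on; the goal is reached when
# every feature on the list has appeared in order.
def follow_map(route, observations):
    expected = list(route)
    for seen in observations:
        if expected and seen == expected[0]:
            expected.pop(0)
    return not expected      # True once all the goal features have been seen

route = ["left corner", "T-junction", "right corner", "goal door"]
print(follow_map(route, route))                          # → True: goal reached
print(follow_map(list(reversed(route)), ["goal door"]))  # → False: heading home, not there yet
```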
22:58A number of sensors on Frank's head feed him information about his surroundings.
23:04A central neural network processes this information
23:07and parcels it out to further networks.
23:10These do the donkey work of controlling Frank's movements.
23:13On a system like Frank, you need, say, about a hundred neurons to perform the kind of tasks that this robot can do.
23:22And to design a network with all of those hundred neurons all at the same time is really difficult.
23:26In fact, in many cases, it's impossible.
23:28So what we do is we cluster the neurons together into small groups of, say, ten or twelve neurons that have a particular function.
23:35Take, for example, this set of neurons here.
23:37They're designed to recognize two different types of features, a T-junction and a right-hand corner.
23:42And once they've extracted that information from the data, they can then communicate simply with the other neural computers on the robot just by one or two activity signals.
23:51Rather than a whole bunch of wires, rather than this connecting all the neurons in this card to all the neurons on the other cards,
23:57we can just connect one or two wires from a cluster of neurons to another cluster of neurons.
24:03Now, that is very similar to what happens inside an animal or an insect whereby you'll get a very complicated cluster of neurons interconnected very heavily
24:12and then that'll have a very simple connection to another complicated cluster of neurons.
24:16So it makes the design much more easy to manage and much more easy to do.
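The clustering idea can be sketched as small dense modules that export only a summary signal. The feature detectors below are invented stand-ins for the T-junction and corner recognisers on Frank's cards:

```python
# Each cluster is densely connected internally but exposes just one wire:
# an activity signal saying whether its feature was seen.
import numpy as np

def make_cluster(weights):
    def fire(sensors):
        hidden = np.tanh(weights @ sensors)    # internal, heavily interconnected
        return float(hidden.mean() > 0)        # one wire out: 1.0 or 0.0
    return fire

rng = np.random.default_rng(2)
detect_t_junction = make_cluster(rng.standard_normal((10, 4)))
detect_right_corner = make_cluster(rng.standard_normal((10, 4)))

sensors = rng.standard_normal(4)
# Downstream networks see two wires, not the state of twenty neurons.
summary = [detect_t_junction(sensors), detect_right_corner(sensors)]
print(summary)
```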
24:22Frank scores over existing robots by being flexible.
24:26He can find his way around not just the maze he's been trained in, but in any maze which is roughly the same.
24:32One of the advantages of neural network systems is that they are prototypically trained.
24:37You train them on an example of a particular situation. You don't need to train them on the exact situation.
24:42Systems like Frank are particularly flexible because they can ignore dynamic obstacles up to a certain point.
24:49So if you throw a brick at Frank, as long as the brick doesn't completely block Frank's field of view,
24:53then Frank can get through, take evasive action, go around it, and that won't really disturb the mapping function.
25:02Neural networks are acquiring a surprising number of human-like qualities.
25:07They can recognise patterns, deal with incomplete information, make decisions, learn from experience.
25:16These abilities are making them valuable to industry.
25:19WISARD was among the first to be recruited.
25:22This is the WISARD neural network, which is used for the recognition of images.
25:28It was built about ten years or so ago, so it's been around for a little while.
25:32It's actually been used in industry for recognising piece parts going past production lines.
25:38What happens with it is that the camera takes images which are sent to the WISARD,
25:46which is a bunch of a quarter of a million neurons.
25:49The neurons are trained to recognise these images by outputting a signal,
25:55which either goes to a loudspeaker or some lights, or it can even drive a robot.
26:00What it's been trained on now is to recognise my face when it's smiling, frowning, and doing other things.
26:08Let's get it to do its thing.
26:10Frowning.
26:12Smiling.
26:14Frowning.
26:16Smiling.
26:18Where's he gone?
26:20Of course this thing can be used to do things other than just recognise my face.
26:26Intruder detection, someone coming into a field that's not expected.
26:32The system can be tuned to accept some people and not others.
26:36This can be done for access to a building: recognising the faces of those who are allowed access and those who are not.
26:42Reading of any kind, reading of handwriting, block codes, things on the front of envelopes.
26:50Counting banknotes, making sure that someone hasn't slipped the wrong thing into a banknote.
26:56Signature verification on checks.
26:59A whole host of things that require the recognition of complex visual signals.
27:05Systems based on neural networks can take some of the drudgery out of repetitive or mundane tasks.
27:11In Japan, a network is employed to sort and grade apples.
27:16Others are coming which can recognise handwritten numbers, on checks for example.
27:21Or can sort mail quickly and efficiently by recognising post codes.
27:26These applications, say the researchers, are just scratching the surface of the capabilities of neural networks.
27:32Truly intelligent machines will change our lives dramatically, in the office, the factory, and at home.
27:40I think you've watched too many soap operas this week.
27:44I think it's time for something more interesting.
28:12Nature has had several million years to perfect the brain.
28:16Neural network researchers have been trying to build artificial ones for all of four
28:22decades. Some researchers have realized that they would be unwise to ignore the computing solutions
28:28built into the wetware of the brain. They don't want merely to simulate the way the nervous system
28:34works. They want to build something which is, to all intents and purposes, indistinguishable from
28:40an organic brain. Their research is directed towards copying the brain's architecture and
28:46organization and reproducing it in silicon. They hope that by doing so, they'll be able to build in
28:54the flexibility and adaptability of behavior seen in even the simplest living animal. This is the
29:00stimulus for the artificial insect project at Case Western Reserve University. Dr Randy Beer is building
29:07a mechanical housewife's horror: a neural network which can reproduce some of the brain activity
29:13of the cockroach. The artificial insect project was motivated by a desire to bring artificial
29:20intelligence research into a little closer contact with biology. Historically, artificial intelligence
29:25systems have been largely motivated by trying to reproduce very high-level human cognitive skills
29:30like language or reasoning, theorem proving, things like that. And those systems tend to be very brittle
29:36and very narrow in their expertise. Our feeling was that if we look a little closer to the biology
29:42and look to much simpler animals than people, animals that can get along very well in the real world, though
29:47they can't prove theorems or play chess, they are very good at dealing with whatever comes up in a
29:52real-world situation in the way that AI systems are not. And so what we tried to do was say, let's set
29:57out to build an artificial insect analogous to a real insect, and let's try to have it solve some of the same
30:03problems that a real insect can solve, and let's base the design of its nervous system on what's known
30:07about insect nervous systems. The result was a computer bug which can do similar things to a real insect.
30:16This is computer food.
30:20Like real insects, the neural bug is always hungry. The first thing it does is locate lunch.
30:33But in the way is a wall. When the bug hits it, it's forced into a second sort of behavior: it follows the edge.
30:44By now it's walked so far that energy is getting low, and by the time it finds its way around the wall,
30:50the bug is too far from the food to detect it. So a third behavior, random wandering, takes over until it finds another edge
30:58or senses the food again. Then the desire to feed takes control once more.
31:11Artificial bug heaven: food in sight and nothing in the way.
31:16Found: an electronic banquet for the hungriest of bugs.
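The bug's three behaviours, as narrated, form a simple priority scheme: contact with a wall forces edge-following, the scent of food triggers feeding, and random wandering is the fallback. A sketch, with invented sensing predicates:

```python
# Behaviour arbitration for the artificial bug, highest priority first.
def choose_behaviour(senses_food, touching_wall):
    if touching_wall:                 # hitting a wall forces the second behaviour
        return "follow edge"
    if senses_food:                   # the desire to feed takes control
        return "move toward food"
    return "wander randomly"          # third behaviour: wander until edge or food

print(choose_behaviour(senses_food=True, touching_wall=False))   # → move toward food
print(choose_behaviour(senses_food=False, touching_wall=True))   # → follow edge
print(choose_behaviour(senses_food=False, touching_wall=False))  # → wander randomly
```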
31:48Beer has learned enough about how the cockroach walks to try to build one.
31:53The nervous circuit responsible for driving the insect's legs was built into this robot model.
32:00One of the main advantages of basing a robot on the biology is that biological organisms are much
32:05more robust, much more flexible, than anything that we've been able to engineer from scratch
32:10within our technology. And so what we're hoping to do is transfer some of that robustness and flexibility
32:15from the biological world into the robots that we'd like to design. Most conventional robots are
32:20designed only to work in very restricted domains, for example a factory floor that's specifically designed
32:26to accommodate their needs. We're much more interested in building robots that can get around in existing
32:31environments that aren't pre-designed to accommodate them, for example the kind of environments that people
32:35live in, a home environment. And that requires a certain degree of flexibility and
32:40robustness which most current robots simply do not have.
32:44Biological organisms have those characteristics, and that's why we'd like to see ideas from biology
32:48transferred into the control of robots. Making a machine that can walk is one thing,
32:55but to build a totally independent robot, we'll need to give it senses: touch, smell, hearing and vision.
33:05At the California Institute of Technology, Carver Mead is building an artificial retina,
33:10the part of the eye which turns the light we see into nerve impulses which our brains can interpret.
33:17Here at my lab at Caltech, we build systems that are models of pieces of the nervous system, like the
33:26retina or the cochlea in your ear. What I mean by a model is a system that in some way does a similar
33:34function. And by building models this way, we use microchip technology, the technology that
33:43was designed originally for computer chips, but it lends itself well to building
33:50these systems which emulate in some way pieces of the nervous system,
33:56because down at the bottom, the physics of the transistor is not really that different from
34:00the physics of the nerve membrane. This is a chip which is seeing light and then doing computation
34:07on it. It's unlike a TV camera in that it's not trying to image as perfectly as possible what
34:13it's seeing. It's actually trying to do computation, like our brain does from the retina, and the computation
34:20it's doing is looking for changes in the intensity. And the idea there is that what's important to the
34:26brain is what's changing in the world. If something is just sitting there, it's not important,
34:30it's not interesting. It's something that's not going to eat us, or it's not moving relative to us.
34:35So what the pixels on this chip here are doing is just simply looking for whether the intensity is
34:41getting brighter or darker. So if I try to hold perfectly still,
34:47all it sees are my blinks. And if I take these patterns here of black and white squares and hold it
34:53in front of the picture, the middle of the squares is the same color as the outside of the squares,
34:58even though the middle of the squares is really black and the outside is white.
35:03So when there's no change, it just settles down to the same zero level all over. It's only when the
35:07squares move that the cells put out anything. That's like the computation that the retina is doing on the
35:15image before it gets sent to the brain, and so that's the kind of pre-processing that we're doing here.
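The per-pixel change detection Mead describes can be sketched digitally (his chip does this with analogue circuits, not code): each pixel reports only whether it got brighter or darker since the previous moment, so a static scene settles to zero everywhere.

```python
# Report, per pixel, only changes in intensity: +1 brighter, -1 darker,
# 0 when nothing has changed since the previous frame.
import numpy as np

def retina_step(frame, prev_frame, threshold=0.05):
    diff = frame - prev_frame
    out = np.zeros_like(diff)
    out[diff > threshold] = 1
    out[diff < -threshold] = -1
    return out

static = np.array([[0.0, 1.0], [1.0, 0.0]])   # black and white squares
moved = np.array([[1.0, 0.0], [0.0, 1.0]])    # the same squares, shifted

print(retina_step(static, static))  # settles to zero all over: nothing changed
print(retina_step(moved, static))   # cells respond only where intensity changed
```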
35:20we're trying to actually do computation on the image not build tv cameras but actually try to pre-process the
35:24image so that we can do something more interesting with it later like motion or whether uh we're
35:30coming towards something or whether something is moving left or right and these are the kind of
35:34things that we're going to have to be able to do if we want to build machines that can really
35:37navigate drive a car or fly a plane or things like that each of these different microcircuits
35:43performs a unique function exactly this kind of separation of functions goes on in real brains
35:49the artificial retina is closely modeled on the real thing if you take a computer and compare it
35:57with a house fly a single simple little house fly we think it's doing more computation in the brain of
36:04that house fly than in a hundred cray supercomputers and the reason for that is that it's specialized
36:10and it's optimized over billions of years of evolution to try to do those particular computations
36:15that it needs it can't predict the weather or the stock market but it can navigate in and around the
36:20world and do it very well and the reason it can do that of course is that it's doing all that
36:26processing of all the information that's coming in the incredible flow of information
36:31with specialized circuits these are the kind of circuits that we're trying to build here
36:36following nature closely has clearly brought benefits for the caltech group
36:40and a few problems such as how to find a place in the circuitry of the artificial eye
36:45from which the wires can lead out the solution was to gather them all in one spot but that spot
36:51would be blind the human eye has an exactly similar blind spot where the optic nerve carries out
36:57information to the brain the payoff from copying biology has prompted some researchers to use not the
37:06ends but the means of nature they're trying to make smarter machines by putting them through a version
37:11of the process which produced human beings evolution by natural selection martin snaith has used what's
37:18called a genetic algorithm to build igor igor has all the right equipment for walking he just doesn't know
37:26how to use it a genetic algorithm is just how it sounds it's a model of what happens in natural selection
37:33now in natural selection you take a population of animals and the ones that are successful the ones
37:38that have the correct solution to the problem of life get the chance to breed and there's a mutation
37:44of their genes which then carry on to try and approach the ideal solution to the problem of life say
37:50now in a machine system it's a bit cruder than that obviously because it's a model but we can take
37:56what the quality factor is and determine it for ourselves in natural selection it's whether you breed
38:01but in a walking robot it would be how far you walked for example so we can decide which of the
38:07members of the population that had a quick stab at how to walk were any good and then we can mutate
38:13from them to hopefully get closer to a better walking machine igor's genetic algorithm selects
38:19variations in the connections of his neural net igor's there to learn how to walk we're really far more
38:25interested in the learning than we are in the walking at this stage if i was going to design
38:30a walking machine just for walking then i wouldn't design something like igor we'd sit and do a big
38:36engineering job that was all precise and didn't have lots of wires hanging around and stuff but igor is
38:41there really to study learning we're interested in machines that can solve difficult problems and
38:47learning to walk on a chassis like igor is very difficult and so it's a good trial for a learning system
38:54like evolution genetic algorithms work by trial and error igor tries to walk he can measure how
39:00well he does by measuring how far he gets then he tries again using a different set of connections
39:07in his neural networks igor only uses the genetic part of its system when it's never seen a particular
39:17situation before so if the robot finds itself in a circumstance that isn't near any of the
39:22circumstances that it has previous knowledge of then it's got to do something
39:28now it could either just take a wild guess or it could take what it did last time and just try and
39:34mutate it a little bit and now this is the genetic flavor that igor has igor takes the current vector
39:41and twists it around a bit and tries it again and sees how well it does and if it did no good
39:47it throws it away and tries another one genetic algorithms allow the designer to try out an
39:52enormous number of different networks including many that arise randomly
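The mutate-and-keep-the-best loop Snaith describes can be sketched as a toy genetic search. The gait parameters and the fitness function below are hypothetical stand-ins invented for illustration; on the real Igor, fitness would come from actually running the chassis and measuring how far it walked:

```python
import random

# Hypothetical "ideal gait" so the toy has something to score against;
# the real robot's fitness is the distance it walks on a trial.
IDEAL_GAIT = [0.2, 0.8, 0.5, 0.3]

def distance_walked(gait):
    # Closer to the ideal gait -> walks further (1.0 is best possible).
    error = sum((g - i) ** 2 for g, i in zip(gait, IDEAL_GAIT))
    return 1.0 - error / len(gait)

def evolve_gait(generations=300, seed=0):
    rng = random.Random(seed)
    best = [rng.random() for _ in range(len(IDEAL_GAIT))]  # first wild guess
    best_score = distance_walked(best)
    for _ in range(generations):
        # "take what it did last time and just try and mutate it a little bit"
        trial = [g + rng.gauss(0, 0.05) for g in best]
        score = distance_walked(trial)
        if score > best_score:          # keep improvements...
            best, best_score = trial, score
        # ...and if it did no good, throw it away and try another one
    return best_score

print(round(evolve_gait(), 3))   # fitness climbs toward 1.0 over the generations
```

No gradient or teacher is needed: variation plus selection on a single quality factor is enough to improve the gait, which is the point of the genetic approach.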
40:01a similar technique was used to hone the performance of london underground's crowd sensing net
40:07in taking a neural net that knows nothing about platforms and passengers and making it learn to
40:15make those judgments we've really taken one trick from nature which is learning systems and in this
40:21application we can take another trick from nature and we can actually take the neural net and evolve
40:26it so that it can make those decisions even better and to do this we use a genetic algorithm
40:31which is a way of successively improving the architecture of our neural net so that it can
40:36make that decision better and better and what we've done here is to allow the neural net to change
40:43where it focuses its attention in the visual field we started off with a neural net that had its
40:48attention uniformly across the whole of the visual field and then we allowed them to mutate and
40:55to vary a little and we picked out the ones which perform best and we found after several generations
41:00of that evolution that indeed we were getting better performance and the way the neural net was getting
41:04better performance was by concentrating its attention on just those parts of the visual field where the
41:09passengers were and ignoring the parts of the visual field like the walls of the platform where
41:13essentially there was constant information so no useful information to help it make that decision
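The attention-evolving idea can be shown with a toy version of the platform problem. Everything here is a made-up stand-in for illustration: four image "regions" (the first two a bright constant wall carrying no information, the last two tracking the crowd) and a fitness score given by classification accuracy, not the underground system's actual net or data:

```python
import random

rng = random.Random(2)

# Toy "platform images": regions 0-1 are a bright wall (constant, so
# no useful information); regions 2-3 track how crowded the platform is.
def make_example():
    crowd = rng.random()                       # 0 = empty, 1 = packed
    image = [0.9, 0.9,
             crowd + rng.gauss(0, 0.05),
             crowd + rng.gauss(0, 0.05)]
    return image, 1 if crowd > 0.5 else 0      # label: 1 = full

EXAMPLES = [make_example() for _ in range(200)]

def accuracy(attention):
    correct = 0
    for image, label in EXAMPLES:
        total = sum(attention) or 1e-9
        score = sum(a * x for a, x in zip(attention, image)) / total
        correct += (1 if score > 0.5 else 0) == label
    return correct / len(EXAMPLES)

def evolve_attention(generations=400):
    best = [1.0] * 4                           # attention spread uniformly
    for _ in range(generations):
        trial = [max(0.0, a + rng.gauss(0, 0.1)) for a in best]
        if accuracy(trial) >= accuracy(best):  # keep the better performer
            best = trial
    return best

attention = evolve_attention()
print([round(a, 2) for a in attention], accuracy(attention))
```

Over the generations the weights on the constant wall regions shrink relative to those on the crowd regions, mirroring how the underground net came to ignore the platform walls.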
41:18so what we've got here is something like an animal which lives in a special habitat where some
41:22parts of its visual field are no use to it and over several generations that animal has evolved to
41:28essentially have fewer neural connections to those parts of its visual field and to concentrate its
41:32attention on just those parts which are useful for making the decision it has to make genetic
41:38algorithms can help find the best network for a particular job but they can't solve another problem
41:44who's going to teach all these knowledge hungry networks the answer might be themselves
41:51in unsupervised learning the idea is that the neural network gets to look
41:58at input coming from some environment but nobody tells it what it should be doing now you might think
42:04that since there's no definition of what it ought to be learning it couldn't possibly learn in those
42:08circumstances but there are now a number of learning algorithms that allow a network when it receives input
42:16from the environment to create internal representations of the causes of that input so let me give you one
42:25example of how unsupervised learning might be possible if you see some rigid three-dimensional object suppose my
42:34hand was rigid and you get input images of this object rotating in space then the images are all rather
42:42different from one another but there's something common about all the images namely this three-dimensional
42:48shape so a neural network that assumes that the things that it sees at neighboring times probably have
42:56some common underlying cause might actually be able to learn about three-dimensional shape by internally
43:04trying to figure out what's common to all these inputs unsupervised learning was developed by
43:10professor teuvo kohonen at the university of helsinki he used it to develop a phonetic typewriter which
43:19responds not to commands from a keyboard but to the more variable and inconsistent tones of the human voice
43:26it was around 1975 when we really started doing work on speech recognition but it was not until about 1981
43:46when i invented some new principles the self-organizing map and using these maps we were aiming at the
43:55phonetic typewriter which is able to write out from general dictation not just bound to any vocabulary
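Kohonen's self-organizing map can be shown in miniature. This sketch uses a one-dimensional map over made-up scalar "phoneme features", not the typewriter's real acoustic front end: for each input, the best-matching unit and its neighbours on the map are pulled toward that input, so nearby units come to stand for similar sounds:

```python
import random

rng = random.Random(0)

def train_som(data, n_units=10, epochs=50):
    """A minimal 1-D self-organizing map over scalar features."""
    units = [rng.random() for _ in range(n_units)]
    for epoch in range(epochs):
        rate = 0.5 * (1 - epoch / epochs)      # learning rate decays
        radius = max(1, int(n_units // 2 * (1 - epoch / epochs)))
        rng.shuffle(data)                      # present inputs in random order
        for x in data:
            # best-matching unit: the one closest to the input
            best = min(range(n_units), key=lambda i: abs(units[i] - x))
            for i in range(n_units):
                if abs(i - best) <= radius:    # neighbourhood on the map
                    units[i] += rate * (x - units[i])
    return units

# two clusters of made-up "phoneme" features, around 0.2 and 0.8
data = [rng.gauss(0.2, 0.03) for _ in range(50)] + \
       [rng.gauss(0.8, 0.03) for _ in range(50)]
units = train_som(data)
print([round(u, 2) for u in units])
```

After training, different stretches of the map have specialised, some units settling near each cluster, with no labels ever supplied, which is the sense in which the learning is unsupervised.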
44:07as the phonetic typewriter receives inputs from a voice different parts of the neural network are
44:13stimulated each part of the network has a letter assigned to it by learning to make the connection
44:20between sounds and letters the network can print the right words on the screen an internal dictionary
44:26enables it to correct the spelling as it goes along already neural networks are close to taking over
44:32some of the activities we think of as human banks are currently testing their efficiency in predicting
44:38the movement of the stock exchange they can do at least as well as human stock brokers
44:42and in upstate new york a neural network has replaced a team of technicians trained to spot cancerous
44:49cells among hundreds of thousands of healthy smear samples the machine can do the job in a fraction
44:56of the time and never gets tired or bored impressive though these abilities are neural net researchers
45:03remain unsatisfied if you ask sort of how sophisticated are the neural networks or ai systems that we have at
45:10present as compared with animals my view is that the neural networks we have at present are kind
45:18of comparable with a slug or something maybe a fraction of a slug we really are nowhere near having
45:26the sophistication of something like a cat or a dog or a rat those are much more sophisticated
45:32information processing systems it may appear that some of these neural networks are cleverer than rats
45:37because for example they can read digits which rats maybe have some difficulty doing but rats do a
45:43whole lot of other things very well so we're a very long way away from coming anywhere close to
45:50real biological intelligence and i think it'll be quite a long time before we get anywhere near it
45:55if the promise of neural network research is so far unrealized the aims are extremely ambitious
46:02the researchers will settle for no less than a conscious machine a real intelligent system would have not
46:11only the vision and hearing front ends but it would also have an interpretation system that could learn
46:18from the images and the sounds and extract information and correlate that with what it's heard and seen
46:26before and that requires a mechanism of long-term learning the neural network field is developing mechanisms
46:34like that and we're building mechanisms like that onto silicon right now it won't be too many years before
46:41we really have systems that have vision and hearing front ends that can actually learn from those scenes and from those sounds
46:52with carver mead's chips we're able to take in sensory information the current state of the world
47:00with neural networks we're able to interpret that information to figure out what's out there
47:08but the next step is to be able to use that information to plan the future to be able to
47:14decide make a decision an intelligent decision about how to interact with the world to try to anticipate
47:23problems that may come up and to try to solve them and these are problems that are going to require
47:30much larger structures beyond the network and here we may be able to get some clues from the brain
47:37what then is stopping us from building a machine which can match human intelligence i think at the
47:44present time we have enough technology to build anything we could imagine our problem is we don't
47:51know what to imagine we don't understand enough about how the nervous system computes to really make
47:58more complete thinking systems we have little tiny insights that we've gotten from studying biological
48:07systems and we can build little systems based on those and we can make them work and we can understand
48:14a little more and then we can go and ask the biology some more questions and we can learn a little more
48:20but it's slow going the brain is an enormously complicated thing even very small parts of it are way beyond
48:26anything we understand so we're really limited by our understanding not by our ability to build things
48:34people talk about building a conscious machine to some extent that will always be a fantasy and
48:41some people will never accept that anything except a human being can be conscious they won't even
48:47accept that animals are conscious because they can't use language but that's an area of philosophy which
48:53i don't think i want to get into right now with neural networks we're beginning to see a definition of
49:00something which could be the beginnings of consciousness and that is their ability to build up internal
49:07experience to which they can relate it's this relating to previously learned things being able to take
49:16actions on these things the the fact that there is something going on inside the network which makes
49:23use of information coming in this thing that's going on inside could be referred to as the beginnings of
49:31thinking or consciousness inside a highly artificial system so i would call it artificial consciousness so
49:38as not to confuse it with the human kind networks are a state of mind and i think that someday in the future
49:51your best friend may be a neural network
49:55if so then neural networks really will change the way we live but more modest consequences are equally likely
50:02on the cards are intelligent kitchen scales that can work out if you're eating too much fat
50:09coffee machines that make it just as you like it
50:13toast done to a turn perfectly every time
50:19high-tech industries around the world are looking to neural networks to power a consumer revolution
50:26they see a new generation of intelligent domestic appliances
50:29an artificial butler may still be some way off but self-steering vacuum cleaners that learn their way
50:37around the house might be just around the corner
50:50well next sunday at seven o'clock equinox looks at
51:20the engineer extraordinary ben bulby whose dyslexia is not proving a handicap
51:24in fact he's proving to be something of a surprise packet in the competitive world of motor racing
51:30and don't forget that most of the programs in the equinox series are available on video
51:34if you'd like more information please call equinox video on 0532 438 283 extension 4060 or 4075
51:43on mondays to fridays between 9 00 a.m and 5 30 p.m