00:00This is a story about preparing for the worst, maybe the worst thing any of us can imagine.
00:06A year ago, we talked with computer scientist Geoffrey Hinton just days after he won the Nobel Prize for his work in machine learning.
00:14The so-called godfather of AI has been busy since then, not developing artificial intelligence, but warning people about it.
00:21He says we've all become more aware of the risks, but knowing about them isn't enough.
00:27We need to act.
00:30Suppose that some telescope had seen an alien invasion fleet that was going to get here in about 10 years.
00:36We would be scared and we would be doing stuff about it.
00:40Well, that's what we have.
00:42We're constructing these aliens, but they're going to get here in about 10 years and they're going to be smarter than us.
00:46We should be thinking very, very hard.
00:49How are we going to coexist with these things?
00:52Coexistence and control.
00:54Two things that Geoffrey Hinton himself has been thinking very hard about.
00:58As one of the computer scientists who helped make modern AI possible, he's uniquely well-suited to consider its future and who, if anyone, can shape it.
01:08Are there companies who are doing real work on safety?
01:13I mean, we hear about Anthropic.
01:15We hear about DeepMind.
01:16Are they helping on the safety front?
01:18Yes, I think both Dario Amodei and Demis Hassabis and also Jeff Dean, they all take safety fairly seriously.
01:26Obviously, they're involved in a big commercial competition too, so it's difficult.
01:30But they all understand the existential threat that when AI gets super intelligent, it might just replace us.
01:39So they worry about it a bit.
01:41I think that some companies are less responsible than others.
01:44So, for example, I think Meta isn't particularly responsible.
01:48OpenAI was founded to be responsible about this, but it gets less responsible every day and their best safety researchers are leaving or have left.
01:57Yeah, I think Anthropic and Google are somewhat concerned with safety, and the other companies less so.
02:04As I talk to some of the people at some of the companies you're talking about and raise the question of safety, I often am told, don't worry your pretty little head about it.
02:12We have great computer scientists who are on top of this.
02:15We're far off from any real danger, and our computer scientists will know soon enough.
02:20So we're much more concerned about the race to become dominant.
02:24Yes, that's the problem.
02:26They are much more concerned about the race.
02:28They should be much more concerned about whether humanity will survive it, and also whether society will survive it if you get massive unemployment.
02:37There's one piece of good news, which is all the different countries are aligned in not wanting AI to take over from people.
02:44They're anti-aligned for things like cyber attacks or autonomous weapons.
02:50They're somewhat aligned when it comes to creating viruses.
02:52None of them really wants other countries to create viruses.
02:54On AI taking over, they will collaborate because nobody wants that.
02:58The Chinese Communist Party doesn't want AI to take over.
03:01Trump doesn't want AI to take over.
03:03They can collaborate on that.
03:04That leaves the question of how do we prevent it taking over?
03:07Even if all the countries collaborate, what do you do?
03:10And I think at present, all the big companies and governments have the wrong model.
03:17So their basic model is, I'm the CEO, and this super intelligent AI is the extremely smart executive assistant.
03:25I'm the boss.
03:27I can fire the executive assistant if she doesn't do what I want.
03:31And I just sort of say, make it so.
03:35A bit like Star Trek.
03:36And the super intelligent AI makes it so, and I get the credit.
03:41Great.
03:43It's not going to be like that when it's smarter than us and more powerful than us.
03:48That's just the wrong model, I believe.
03:51We need to look around and say, is there any model where a less intelligent thing controls a more intelligent thing?
03:58And we have one model of that.
04:00And it's a model we all know, which is a baby controlling a mother.
04:03Evolution put lots of work into allowing the baby to control the mother.
04:08And the mother is actually often more concerned about the baby than about herself.
04:12It doesn't work like that with rabbits, but it does work like that with people.
04:15That seems a much more plausible model of how to coexist with the super intelligence.
04:20But we have to accept that we're the babies and they're the mothers.
04:24Hopefully they're not Jewish mothers, but you can't imagine these tech bros accepting that model.
04:32They just don't think of the world like that.
04:34Is the United States behind China in developing generative AI right now?
04:38Not yet.
04:39The United States is still a little bit ahead, but not as far ahead as they thought.
04:43And in China, you've got a very large number of very competitive, very smart people, very well educated in science and engineering and math.
04:58They're educating far more people than the U.S. in those areas.
05:01The U.S. has basically relied on immigrants to be smart at those things.
05:06China may well overtake the U.S.
05:08And if there's one thing you could do, if you were Chinese, to ensure that China overtakes the U.S.,
05:14it would be to stop the funding of basic research in the U.S.
05:20and to attack the good research universities.
05:23Trump looks like he works for Putin, but actually in attacking the universities and attacking the funding of basic science,
05:30he's acting as if he's working for Xi.
05:33How deep is that damage?
05:34By the way, it's the immigrants you talked about as well.
05:36It's not just the direct funding for the research.
05:39It's also the brain power coming in from overseas.
05:42How deep is that damage and how immediate may we feel it?
05:45The point about attacking basic research is you don't really feel it for 10, 15, 20 years.
05:52Because what you do is you ensure that the really big conceptual breakthroughs won't happen here.
05:59And then later on, the Chinese will be way ahead.
06:04Regardless of who becomes the frontrunner in the AI race, Hinton says the risks to everyone have gone up over the past year,
06:13particularly for workers, as we saw just this week when Amazon announced it would be cutting 4% of its workforce,
06:21perhaps made both possible and necessary by unprecedented levels of AI investment.
06:27There's been an enormous amount of money put into AI since you and I spoke a year ago.
06:31I mean, an amount that I could not have conceived of actually.
06:34I mean, of the order of a trillion if you add it up over all the companies.
06:37So what is that money going for?
06:40And will it ultimately redound to anyone's benefit?
06:42These are big companies run by serious people, and presumably they wouldn't be putting all that money in unless they thought they could get a return on it.
06:52There's some ego involved.
06:54They want to be the ones to do it first, even if it's going to kill us all.
06:57So there's ego involved, but presumably they think there's returns to be made.
07:05My worry is that the obvious way to make money out of it, apart from charging fees to use the chatbots, is by replacing jobs.
07:16The way you make a company more profitable is replace the workers with something cheaper, and I think that's a big part of what's driving it.
07:25Is it a winner-take-all in the end?
07:28I mean, in terms of the basic structure.
07:29I don't know.
07:31I mean, one thing I should say is that this is sort of uncharted territory.
07:35We've never had things almost as smart as us, which we have now, or things smarter than us, which we'll have soon.
07:43We've never been there.
07:44We've had things in the Industrial Revolution that got more powerful than us, but we were always in charge of them.
07:50You know, a steam engine is just a lot more powerful than a horse.
07:53But we control the steam engine.
07:57This isn't like that.
07:58Also, if you got unemployed because you used to dig ditches and now had to do something else, you could get a job in a call centre.
08:07But now those jobs are all going to go.
08:10It's not clear where those people go.
08:13Some economists say these big changes always create new jobs.
08:18It's not clear to me that this will.
08:22And I think the big companies are betting on it causing massive job replacement by AI, because that's where the big money is going to be.
08:30As you say, some economists say, we go back in history, and new technology destroys some jobs, but creates other jobs.
08:36And net-net, you have as many or more jobs.
08:39You're saying this time is different.
08:41Can the investment, the trillion dollars or more investment, can it pay off without destroying jobs?
08:48I believe that it can't.
08:50I believe that to make money, you're going to have to replace human labour.
08:58Given the dire warnings about AI's risks to workers, economies, and humanity as a whole, one wonders whether Geoffrey Hinton has any regrets about his pivotal role in giving it life.
09:11We asked ChatGPT how it would describe its relation to the man many people call its godfather.
09:17Its answer?
09:19If I'm the mature rainforest, Hinton is one of the people who planted the first seeds and figured out how to water them.
09:25Still, the question of whether it was worth it is the one that gave him pause.
09:31To ask an unfair question, you were sort of there at the birth.
09:35If you had it within your power, understanding it's not going to happen, would you stop AI altogether, given the risk?
09:44I don't know.
09:45Because there's also, you have to remember, it's not like nuclear weapons, which are only good for bad things.
09:50It's a difficult decision because it can do tremendous good, too.
09:53In health care and education, it'll do tremendous good.
09:57And, in fact, if you think about it increasing productivity in many, many industries, that should be good.
10:04The reason it's bad is because of the way society is organised, so that Musk will get richer and a lot of people get unemployed.
10:11And Musk won't care.
10:12I'm using Musk as a sort of stand-in.
10:17That's not on AI.
10:18That's on how we organise society.
10:20I wonder if, over the last year, the economy and the markets haven't worked against you.
10:26In this sense, so much of the growth in the stock market, so much of the driving economy is investment in AI right now.
10:32Even if the public were more concerned than they are about some of the risks you've described, they're going to say, wait a second, that's what's driving our economy.
10:41We don't want to give that up.
10:42We don't want to go into a recession.
10:44Some people say that our best hope is to have AI try to take over and fail.
10:51We need something to really scare the s**t out of us.
10:57Something like Chernobyl for AI.
11:01I'm not sure I agree with that, but that's certainly a possibility.
11:04Or the Cuban Missile Crisis.
11:06Or the Cuban Missile Crisis.
11:07Because one of the questions I had was, even if the governments sort of agree in general we should do this, is there a sense of urgency?
11:12I think the Cuban Missile Crisis probably gave a sense of urgency on nuclear disarmament.
11:17Yes.
11:18We need something to make people pay more attention and put more resources.
11:26So at present the big companies aren't going to put like a third of their resources into figuring out how to make it safe.
11:32But if it tried to take over and only just failed, maybe they would.