Transcript
00:00This is a story about passion and profits.
00:03The quest for scientific discovery often begins with curiosity, but money and power can follow
00:09closely behind.
00:10We saw it with the printing press, the nuclear bomb, and now we're seeing it with artificial
00:14intelligence.
00:15When a technology becomes powerful enough to change the course of humanity, who should
00:20benefit most from it?
00:21And who has the power to control it?
00:25Maybe Demis Hassabis turns out to be the Robert Oppenheimer of the 21st century, somebody
00:31who leads a project like the Manhattan Project, brings into the world an incredible technology.
00:38It's a massive scientific achievement and it's dangerous.
00:42And I think the builders of the technology, including Demis, as I got to know him, had
00:48that same thought.
00:51Sebastian Mallaby is a senior fellow at the Council on Foreign Relations and the author
00:56most recently of The Infinity Machine.
00:59His book chronicles the rise of artificial intelligence through the lens of one of its
01:03key players, Demis Hassabis, a Nobel laureate and the co-founder of Google's AI lab, DeepMind.
01:10Right from the start, they were thinking, you know, the Manhattan Project was both good and
01:15bad and probably going to be the same with AI and we have to be careful.
01:19And so right at the beginning of DeepMind's story, Demis Hassabis meets his scientific
01:24co-founder, Shane Legg, at an AI safety lecture in London.
01:29And this is, you know, 17 years ago.
01:31You know, again, the nuclear analogy is instructive.
01:34We have a nuclear non-proliferation treaty for nuclear weapons.
01:37It's not completely airtight, but it's better than nothing.
01:39And that's what we're going to need to have with artificial intelligence.
01:43To get there, some believe that we might need a wake-up call.
01:46That's what Nobel laureate Geoffrey Hinton told us late last year here on Wall Street Week.
01:51Some people say that our best hope is to have AI try to take over and fail.
01:59We need something to really scare the s*** out of us.
02:02Something like Chernobyl for AI.
02:05I'm not sure I agree with that, but that's certainly a possibility.
02:08We need something to make people pay more attention and put more resources.
02:14So at present, the big companies aren't going to put like a third of their resources into
02:17figuring out how to make it safe.
02:20But if it tried to take over and only just failed, maybe they would.
02:24Even without an AI uprising, Mallaby thinks policymakers have become more aware of the technology's
02:31risks, especially in the wake of Anthropic's Mythos rollout.
02:35Mythos is a dangerous point because already, you know, the U.S. Treasury Secretary, the
02:39Fed Chairman have told the banks, listen, your bank accounts are going to be emptied if you
02:43don't protect yourself against these systems.
02:45So we've had hints of Cuban missile crisis already.
02:49I would point out to Jeff Hinton, who I like very much, that, you know, before the Nuclear
02:55Non-Proliferation Treaty in the 60s, there was the 1950s.
02:58And in the 1950s, that's when the IAEA, which tracks all the nuclear material, was created.
03:04In 1956, it was negotiated.
03:06So, you know, that was actually ahead of the Cuban Missile Crisis.
05:11Whatever the outcome is for AI, Mallaby thinks it will be shaped by the people who create
03:16and control it, and their individual personalities and motivations.
03:21And in Demis Hassabis' case, you know, he's building something which he himself says is
03:25dangerous.
03:26Right?
03:27Why do you do that?
03:28What makes you want to do it?
03:29And the answer in his case is scientific curiosity.
03:32He is so burningly determined to understand what he calls the fabric of reality, that
03:39he expresses that ambition in spiritual language, that to understand nature is to become closer
03:44to God.
03:45For others, Mallaby says, it's the ambition for money or power.
03:49I think if you look at the other end of the spectrum, you look at Mark Zuckerberg, for him,
03:53it's always been really commercial, right?
03:55He wants AI because it will make Instagram and Facebook be more compelling, maybe more addictive.
04:01That's what he's motivated by.
04:03I think Elon Musk just wants to be the greatest industrialist of all time.
04:06So he wants to do it for that reason.
04:08Sam Altman is kind of opportunistically riding toward something that will make him powerful.
04:13This is a man who thought of running for governor of California.
04:16He is rumored to have thought of a presidential run.
04:18So he wants power.
04:19But whether it's power or money or scientific achievement that motivates the world's AI
04:24titans, they all have at least one thing in common, the drive to be number one.
04:29I remember going to see Demis right after the launch of ChatGPT at the end of 2022.
04:35And of course, this was the moment when DeepMind and Demis Hassabis had been the leaders in
04:39global AI without dispute for a decade.
04:43And all of a sudden, this upstart in California, Sam Altman, drops this model.
04:47It goes viral.
04:48And Demis is no longer the leader.
04:49So I go see him and I say, well, how do you feel about this?
04:52And he says, Sebastian, they've parked their tanks on the lawn.
04:57This is war.
04:58You could see that competitive fury in his eyes.
05:01And yes, I think most complicated human beings have more than one personality inside them.
05:07And in Demis' case, there is the scientist.
05:09There is also the furious competitor.
05:12There's also the enormous need for money, for capital to compete, for compute, but also
05:19for the talent. It comes across in your book how much it costs to really get some
05:23of these scientists to come work with you.
05:25Does that necessarily take it out of the science and make it fundamentally a commercial phenomenon?
05:30Well, it's definitely a commercial phenomenon.
05:32And we just saw in the results this week from all the big tech firms that they've expanded
05:37what they say they're going to spend on their computing chips, the infrastructure that they
05:41need.
05:42This is just going from extremely big to crazy big.
05:45So you're totally right.
05:47It's commercial.
05:47And we can't escape that.
05:49But it could also be scientific.
05:50You could have both at the same time.
05:53And for what it's worth, Demis' view is that the future path to the kind of AI that
06:01can unlock all of science is through these large language models.
06:04There's no alternative path where you go in some totally parallel route.
06:08So we're going to build these large language models.
06:10They're going to get better and better.
06:11They're going to become more agentic where they actually take actions.
06:14They're going to understand the physical world, that spatial intelligence.
06:18And then from there, it will get into the ultimate test, which is, supposing you trained
06:23an AI and you told it everything that people knew in 1911, could it then invent by itself
06:30general relativity?
06:32The tension between the science and the business of AI has come to a head in the ongoing trial
06:38between Elon Musk and OpenAI.
06:40At the center of the trial is a debate over who should control the ChatGPT creator.
06:45And the outcome could shake up the corporate models that have underpinned the tech industry's
06:50explosive AI growth.
06:52As of right now, as we speak, there's something like $725 billion this year from the big investors.
06:58How does that pencil out?
07:00Fundamentally, what we've got at the moment is an A-plus technology with a C-minus business
07:05model.
07:05And the purpose of capital markets is to bridge from today when the technology is great but
07:11you're not making money to some beautiful future when finally you figure out the business
07:15model and you do make money.
07:16But I think we're running an experiment in the limits of that capital market function because
07:22the capital markets are not infinitely deep.
07:24So when you have OpenAI, which was spending money like completely crazy and was not attached
07:31to one hyperscaler deep-pocketed balance sheet, that was extremely precarious, is extremely
07:37precarious.
07:37I said three months ago that I thought there was a 50-50 chance that OpenAI would go bust.
07:42Basically, I mean, it would be absorbed by another company.
07:45And I still believe that, that by the summer of next year, there's a half chance that it
07:50just runs out of its ability to raise money, has to sell itself.
07:53And that's where the hyperscalers are not all created equal.
07:56Because a Google can say, we're making more money on search.
07:59And maybe Gemini as well.
08:01We've got some revenue coming in for this.
08:02And Amazon can say, we certainly have the cloud that's helping us a lot.
08:06Then you get to a meta.
08:07It's not quite so clear what they have to support this investment.
08:10Right, right.
08:10And they just seem to be quite clumsy about translating all the money they spend on AI into actual AI
08:17results.
08:18And we'll see how this plays out.
08:19But there was this moment last year in 2025, when they were doing record expenditures on
08:25the signing bonuses of AI scientists, because basically, they didn't have much of a team.
08:29And the only way they could get a team coming from behind was to just 10x people's salary.
08:36And so, you know, all the other labs were going nuts.
08:39I actually witnessed the head of one of the other companies, you know, pretty much yelling
08:43at the meta person saying, you know, you're draining all our talent.
08:47You guys are, you know, completely useless.
08:49You're never going to build a system that's really powerful because you're hopeless.
08:53But you're taking our good people away.
08:55And then the Chinese will overtake us.
08:56And then they'll kill us.
08:58It's pretty extreme.
09:00If throwing money at a problem is the clumsy solution, and often not the right one, what
09:05business model both contains ambition and allows science to flourish?
09:10Mallaby says some of the big players have tried through their corporate structures, though
09:14none has yet proven effective.
09:16You know, all of the top three labs have experimented with these governance ideas.
09:22And in fact, in my research for this book, I found out about a thing called Project Mario,
09:26which was secret hitherto.
09:28But Project Mario was essentially Demis Hassabis saying, I need safety governance.
09:33I need a kind of nonprofit board, which will oversee the powerful AI when I get it.
09:38And we can't just have the corporate board of Google deciding how it gets rolled out.
09:41That's not democratically legitimate.
09:43That's not good for humanity.
09:45We need that nonprofit structure to be grafted onto Google.
09:49So he spent three years fighting about that.
09:51As we know, in OpenAI, more in the public domain, they began as a nonprofit,
09:55and then they grafted on a for-profit later.
09:58Anthropic has its own version of this kind of hybrid for-profit nonprofit.
10:02But the harsh reality is, as you said earlier, you know, this is an extremely expensive technology
10:08to build. And so the for-profit, capital-raising, kind of red-in-tooth-and-claw capitalist thing
10:16is going to dominate, because you need money all the time.
10:19Is there anything the government can and should be doing now on the safety front?
10:23So there are three things.
10:24The first thing is, we already have a national AI sort of security monitoring body.
10:31But it doesn't have enough resources.
10:32It doesn't have the power to veto a model before it's released.
10:36We should have one which is like the Food and Drug Administration, which is properly
10:40resourced. It has expert staff. And if the drug is not safe or efficacious, you can ban the release,
10:46right? AI is just as dangerous as drugs. So we should have an FDA for AI.
10:52Second thing we should do is we should divert more money from just building the model stronger
10:58to alignment research, where you align the model with human priorities. That's a whole field of
11:04engineering. It's getting some resources at the moment, but obviously it's not in the commercial
11:09interest of the companies to invest too much in that. So public policy needs to support the public
11:15good of AI alignment. And the third thing is on the international front, because like it or not,
11:21the Chinese have very good AI models. So unless we bind in China and other countries like France,
11:28which has the AI lab Mistral and other places, we need everybody to acknowledge, just like we did in
11:34the nuclear age, that this is dangerous if it proliferates into the hands of terrorists and so forth.
11:39And so we should all come together with agreed joint common safety standards. Otherwise,
11:44if just one country is safe and the other ones aren't, we haven't made humanity any safer.
11:50The invention of the atomic bomb gave humanity a stark warning. It's up to humans to safeguard against
11:57technology of immense power endangering us all. And as the AI battle rages all around us,
12:04maybe we can take some lessons from that earlier arms race to limit the potential damage from the next one.