00:00Since ChatGPT burst on the scene four years ago, all we've heard about is how big AI will be,
00:06how it could lead to superintelligence replacing humans, how fast it is coming,
00:11and at least for some, how dangerous it may be.
00:16We have 2,000 people doing it. Spend $2 billion a year on it.
00:20It affects everything. Risk, fraud, marketing, idea generation, customer service,
00:26and it's kind of the tip of the iceberg.
00:27AI. AI. AI. AI. AI. AI. So imagine 100,000 people, 100 million people, smarter than any Nobel Prize winner.
00:37When AI gets superintelligent, it might just replace us.
00:40But what if they're wrong? What if AI isn't taking us toward either utopia or dystopia?
00:47What if AI is just normal?
00:50I have been surprised by how controversial the statement AI is normal technology is.
00:57Arvind Narayanan is a professor of computer science and director of the Center for Information Technology Policy at Princeton.
01:04He's also co-author of the book AI Snake Oil: What Artificial Intelligence Can Do, What It Can't Do, and
01:11How to Tell the Difference.
01:12We acknowledge that AI is a powerful, general-purpose technology comparable to electricity, the Internet, I think even the Industrial
01:20Revolution.
01:21But like all of those technologies, what we think is happening and will continue to happen is a gradual integration
01:29of this technology into society.
01:31If it is gradual, corporate America doesn't seem to be investing that way.
01:35The six U.S. firms pouring the most cash into AI are projected to spend over $750 billion on it
01:42this year alone, more than the entire GDP of Ireland.
01:46And some firms are already claiming significant layoffs are driven by AI.
01:52But Narayanan is more circumspect.
01:54This is not, we think, some impending superintelligence that is going to, you know, render obsolete the laws of economics
02:02or the limitations of human behavior.
02:05The usual reasons that are given for why AI is going to displace labor rely on a whole bunch of
02:11fallacies.
02:12AI developers are looking at these so-called capability benchmarks.
02:16Oh, you know, AI is as capable as people are at answering customer service questions, so surely it's going to
02:23replace customer service representatives.
02:25Maybe, but what we said was, hey, let's look at reliability separately from capability.
02:30Capability is not enough.
02:31You have to have reliability, which means does it answer the same question reliably every time, or is it giving
02:36different answers to different customers?
02:38Does it know which tasks it can take on and which ones are out of scope?
02:43A second big reason is even if you do implement AI, does that actually make workers, you know, more productive,
02:51able to do more work, and therefore actually increase the demand for their work?
02:56Drew Mattis, the chief market strategist at MetLife, agrees.
03:01I think when you think about what artificial intelligence does and why people are concerned about it, it's that they're afraid
03:07that the knowledge worker is going to become extinct.
03:10But the reality of it is that knowledge workers are constantly using technology to expand the bounds of knowledge,
03:17and when they expand those bounds, then they have other questions they need to ask.
03:21If I have 100 questions that I start the day with, and technology can help me answer 50 of them
03:26very quickly, it's not like I have 50 left.
03:29That just created 50 new questions for me to answer.
03:32And the more questions I answer, the more valuable I become, and the more valuable I become, the more valuable
03:39the company I work for becomes.
03:40Some companies are investing hundreds of billions of dollars right now in AI.
03:45They have to get a return on investment from somewhere.
03:48Either they have to actually have more revenues and more profits, or they have to cut their costs somewhere.
03:53And that's part of the concern, I think, about employment.
03:55Some people say, yeah, you've got to cut employment.
03:57How does that pencil out, do you think?
03:58Where does the return on investment for all that come from?
04:01Well, so I think that that's a very short-term way of looking at the world.
04:05And I think that if we really consider the long-term view of what's best for a lot of companies,
04:14it's to make the investment and make sure that you understand where the payoff is.
04:19Recently, companies have justified layoffs by saying AI can replace workers.
04:23But some critics have countered by calling the move AI washing, implying that companies are dressing up old-fashioned cost
04:31-cutting as AI adoption in a bid to satisfy shareholders.
04:36What we've seen historically is that as technology advances, people move to higher and higher levels of abstraction:
04:44work that involves supervising the technology to do the work instead of doing it directly themselves.
04:48And what we're seeing with AI so far is consistent with this pattern.
04:53So my bet is on not seeing massive labor effects across the economy, but there might be specific jobs in
05:01sectors in which there might, in fact, be quite negative consequences.
05:05Software engineering has been the clear leader, I think, in the pace of AI adoption, but also the effects of
05:11AI adoption to the point where going back to a time when all the code was written manually by hand,
05:17that almost feels like going back to punch cards before the days of keyboards.
05:22Critically, though, even in companies where AI is being rapidly adopted for software engineering, it's not really clear that it's
05:32leading to replacing software engineers with AI.
05:35In fact, the number of job postings for software engineers actually continues to increase.
05:40Until recently, job postings for software engineers and others had been going down.
05:46But last year, the need for programmers spiked, unlike the rest of the jobs market.
05:51One of the things that is said is that although it may be as profound as the Internet or electricity,
05:58it's coming much faster.
05:59Does that make a difference?
06:01There are many claims that are being made that AI is the fastest adopted technology in history.
06:07But when we looked at those numbers when writing this essay, we weren't really convinced.
06:12Take something that could really make an economic impact, like replacing customer service representatives with chatbots, for
06:21instance.
06:22I mean, when ChatGPT came out, so many people, including me, thought that that was the first thing that was
06:26going to happen in terms of labor effects.
06:28Chatbot, it's right there in the name.
06:30Why is that still not happening for the most part?
06:32It turns out that when you kind of look at this deeper AI integration where there are risks, there are
06:40legal liabilities involved, there are structural and organizational changes that companies have to make, it's not so simple.
06:46One of the stories we heard was that Air Canada had this kind of customer service chatbot.
06:52And it made up a non-existent refund policy when a customer was asking about it.
06:59And the customer got upset and sued.
07:02It went all the way to the Canadian Supreme Court.
07:04And what the court decided was to force the airline to abide by this non-existent refund policy.
07:10The speed limits in many cases are things like regulatory barriers that have, of course, been inserted by humans, but
07:17for very good reasons.
07:18You know, there's kind of a saying that every regulation is written in blood, every safety regulation at
07:24least.
07:25So when you look at why AI can't make rapid inroads into health care, for instance, it's because we're not
07:32going to let AI autonomously do medical experiments on people, right, to figure out how to cure cancer.
07:38Samyukta Malangi has seen that firsthand.
07:41And she's an oncologist who has been hired as vice president of clinical strategy at OpenEvidence, a medical chatbot
07:48that was recently valued at $12 billion and has been called the ChatGPT for doctors.
07:54I'm a little bit, you know, circumspect when it comes to those sort of grandiose statements around AI tools just
08:00replacing physicians entirely.
08:01But I do think that there's definitely going to be, you know, a world in which AI tools become a
08:08part of clinical practice.
08:10And I'm excited to enter that world, to be honest.
08:12I certainly do not think that they're going to replace physicians.
08:16Every single company founder putting out statements talking about the efficacy of their tool, they'll say something like this tool
08:24has outperformed a group of physicians in coming to a diagnosis or solving a clinical problem.
08:30And I think what they're trying to do by putting these statements out there is showcase the
08:36efficacy of their tool while not taking any liability for the downside that could happen because the AI
08:43tool has hallucinated, provided a biased answer, or produced a false negative.
08:49And I think that's a real problem.
08:51I think it's actually really inappropriate to make statements like that without assigning
08:58or taking on responsibility for what happens when your algorithm messes up.
09:02What happens if a physician acts on a recommendation that an algorithm has generated and that turns out to be
09:09the wrong one?
09:10Who kind of like assumes that responsibility?
09:13Like most technology, AI will continue to improve.
09:17And even if it doesn't get everything right now, that potential for improvement leads some to worry that it might
09:23get too smart for humans to handle.
09:25Among those raising concerns is Geoffrey Hinton, a Nobel Prize-winning computer scientist known as the godfather of AI.
09:33Suppose that some telescope had seen an alien invasion fleet that was going to get here in about 10 years.
09:40We would be scared and we would be doing stuff about it.
09:43Well, that's what we have.
09:46We're constructing these aliens, but they're going to get here in about 10 years and they're going to be smarter
09:49than us.
09:50We should be thinking very, very hard.
09:52How are we going to coexist with these things?
09:55What do you say to the people who say there may actually be a greater danger here beyond just losing
10:00your job?
10:00Where I disagree with people like Hinton is on two big things.
10:04One, if we just club a whole bunch of things together as some umbrella category of AI risk and then
10:11treat it as an AI problem, we just lose a lot of clarity and we lose a lot of avenues
10:16by which we can actually address the problem.
10:17So one kind of risk that people are worried about is that AI is very good at hacking, finding new
10:23vulnerabilities in software, and maybe taking over critical systems.
10:27That is something we should be worried about, but it turns out that specific risk has a very specific solution.
10:32But this idea that we should imbue AI with a maternal instinct, or in computer science it goes by the
10:38term alignment, that AI is going to magically know what is the right thing to do in every possible situation,
10:44that seems like a pipe dream to me.
10:47Because what is the right thing to do in every possible situation?
10:50Well, people don't agree on that, so how can we agree on what AI should do in those situations?
10:55And so putting all our eggs in this one basket of alignment is going to result in a very brittle
11:01scenario, which is that if anyone ever creates a misaligned AI system, then all bets are off.
11:07And how are you ever going to stop every kid in the world from creating their own AI system that
11:13might not follow your rules or your instinct or whatever it is?
11:16Because the trend we've seen is that something that takes a data center today, within a few years, is going
11:22to be something that you can do in your mom's basement.
11:26Whatever this new world of AI makes possible, Narayanan says there's one thing that it simply will not be able
11:32to do: predict the future.
11:34Once this happens, we will have no choice but to rely on AI in order to figure out what, let's
11:42say, military or geopolitical strategy should be.
11:45But when we actually look at the research, the picture that emerges is that the reason people are not that
11:51great at predicting the future is not some limitation of our biology.
11:55It's because the data that's out there that might allow us to extrapolate to what might happen in the future
12:00is pretty limited, but also the future is genuinely unknown.
12:04And because the future is genuinely unknown, none of us, not experts like Hinton and Narayanan, and not
12:12even AI itself, can predict just how big it will get or how fast it will grow.