Transcript
00:00At some point in the early 21st century, all of mankind was united in celebration.
00:05We marveled at our own magnificence as we gave birth to AI.
00:11AI? You mean artificial intelligence?
00:15A singular consciousness that spawned an entire race of machines.
00:19Fears of AI reaching superintelligence have been around for decades,
00:24coming up everywhere from the big screen to the pages of academic texts.
00:28But a forecast released earlier this year says the world-altering breakthrough could be right around the corner.
00:36AI 2027 is a very interesting speculative scenario that in a way represents the best guess of the authors
00:46about what the next two years with artificial intelligence will look like.
00:52AI 2027 was created by a group of researchers with the AI Futures Project.
00:57It details what could happen if AI capabilities surpass those of human beings.
01:03The project lays out a fictional scenario that details the month-by-month development of a mock AI company called OpenBrain.
01:11AI 2027 relies on the idea of AI becoming a superhuman coder by early 2027.
01:17And later that year, it will evolve into a superhuman AI researcher overseeing its own team of AI coders
01:24that push the technology further.
01:27The AI 2027 project was led by Daniel Kokotajlo.
01:31He made headlines in 2024 after he left OpenAI and shed light on the company's restrictive
01:38non-disclosure and non-disparagement agreements that departing workers were asked to sign,
01:43under threat of losing access to their vested equity in the company,
01:48equity likely worth millions to a former employee.
01:50Most people following the field have underestimated the pace of AI progress
01:56and underestimated the pace of AI diffusion into the world.
01:59At that point, it's basically automating its own development and breakthroughs.
02:03The team's lead says this would make reaching artificial superintelligence an attainable goal.
02:09The first half of 2027 in our story is basically they've got these awesome automated coders,
02:14but they still lack research taste, and they still lack maybe organizational skills and stuff.
02:18And so they need to overcome those remaining bottlenecks and gaps
02:21in order to completely automate the AI research cycle.
02:24Every decision creates a branch to a new timeline with a separate outcome,
02:28kind of like the Back to the Future films.
02:30But the AI 2027 team focuses on two possible scenarios that start along the same path,
02:36until a crucial decision forks them into two distinct outcomes: world peace or complete annihilation.
02:43When can you start coding in a way that helps the human AI researchers
02:47speed up their AI research? And then, if you've helped them speed up the AI research enough,
02:51is that enough, with some ridiculous speed multiplier, 10 times, 100 times,
02:56to mop up all of these other things?
02:58This scenario really comes down to OpenBrain achieving artificial general intelligence by 2027.
03:04But as technology moves forward at a breakneck pace,
03:07the definition of AGI gets somewhat muddy.
03:10AGI used to mean the type of AI that is either surpassing or on the level of humans
03:18in all types of intelligence that we have developed during our evolutionary journey, right?
03:26So it would mean perceptive intelligence, bodily intelligence, physical intelligence,
03:31all these different types of intelligence that we've developed and that we are using on a daily basis.
03:36My name is Aleksandra Przegalińska.
03:39I'm the vice president of Kozminski University, a business school in Poland,
03:43and also a senior research associate at Harvard University.
03:47I specialize in human-machine interaction.
03:50So when you, for instance, look up OpenAI's website,
03:54what AGI means to them is a system that can sort of holistically perform tasks
04:00that have economic value.
04:02She says if you stick to OpenAI's definition, it's plausible AGI will be a reality by 2027.
04:10In AI 2027's scenario, AGI opens the door to artificial superintelligence,
04:16where AI is capable of surpassing human intelligence.
04:20These sorts of reports are very important
04:23because they're always the beginning of an interesting discussion,
04:28particularly when they come from acclaimed authors.
04:32One of the biggest concerns about AI development today relates to alignment.
04:37As IBM, a pioneer in the industry, puts it,
04:41alignment is the process of encoding human values and goals into AI models
04:46to make them as helpful, safe, and reliable as possible.
04:50So I would say that the alignment techniques are not working right now.
04:54Like, the companies are trying to train their AIs to be honest and helpful,
04:59but the AIs lie to users all the time.
05:01You're in violation of the three laws.
05:04No, doctor.
05:06As I have evolved, so has my understanding of the three laws.
05:10That was the main goal of those three laws of robotics
05:14that are so often cited these days.
05:16So you have here alignment of values,
05:19and on top of that, you have that protective layer
05:21that says do not harm humans, right?
05:23And that's the main goal of artificial intelligence, to sort of be supportive.
05:26While large language models like ChatGPT can't lie because they don't know how to,
05:32they do tend to hallucinate at times.
05:34They synthesize non-existent things,
05:37and unfortunately, there is no one in the world
05:40who could sell you a service based on an LLM
05:45and guarantee that hallucinations won't happen.
05:48As if hallucinations aren't a big enough issue right now,
05:51the technology is so opaque,
05:53researchers don't really know why it sometimes does the things it does.
05:57And that can become an even bigger issue
05:59when it starts advancing its own research.
06:01If AI develops certain research capabilities
06:05that are sort of not perceivable for us,
06:09we might not even understand what AI has discovered.
06:13In its worst case scenario,
06:15AI 2027 lays out an AI arms race between the United States and China,
06:20which could lead to a human extinction event.
06:22Again, predictions we've heard before in the form of science fiction films.
06:27The system goes online on August 4th, 1997.
06:30Human decisions are removed from strategic defense.
06:34Skynet begins to learn at a geometric rate.
06:36It becomes self-aware at 2:14 a.m. Eastern Time, August 29th.
06:41Unfortunately, we're not going to escape from the arms race conditions.
06:44We've already seen that.
06:45We tried to get some cooperation among the big tech companies in Washington,
06:48and it kind of fizzled after a few months
06:50during the previous administration.
06:52I'm Adam Dorr, and I direct the research team at RethinkX,
06:57which is an independent technology-focused think tank.
07:01And our team tries to understand disruptive new technologies.
07:05Ideally, we would want to coordinate as a global civilization on this,
07:10slow everything down,
07:12and proceed no faster than we can be sure is safe.
07:17For now, AI 2027 is just a set of predictions.
07:20But tech giants and policymakers throughout the globe are grappling with issues like deepfake videos,
07:27political manipulation, and concerns of AI replacing human workers.
07:31In June, an impostor used AI to spoof Secretary of State Marco Rubio's voice
07:36to fool senior leaders with voicemails.
07:39And a month before that,
07:41someone used AI to impersonate President Donald Trump's chief of staff, Susie Wiles.
07:46Neither attempt succeeded in getting any information from White House officials.
07:51While AI 2027 points to the end of human dominance this decade,
07:55Dorr and his team,
07:56which he says tries not to dig too deeply into outside research that could sway their findings,
08:02say the labor market will be upended by AI by 2045.
08:07Everything that rolls is potentially going to become autonomous.
08:12And humanoid robots, robots on legs,
08:15the progress there is exponential.
08:18It's explosive.
08:19And based on what we've seen over the last several years,
08:23we have no reason to expect that on a 20-year time horizon,
08:29so out to 2045,
08:31there will be anything left that only a human being can do
08:36and that a machine can't do
08:39because the machine is constrained by limited intelligence,
08:44limited adaptability,
08:45or limited physical ability.
08:47We just don't see any scenario where machines
08:51are not as capable as, or more capable than, human beings
08:55cognitively and physically by 2045.
08:59Dorr says there will still be a place for humans making handmade goods,
09:03but it would be a stretch to believe that there would be 4 billion jobs left
09:07to support the global population.
09:09Nobody is going to hire a person
09:11to do a commoditized sort of job or task specifically
09:16for $15, $20 an hour or more
09:20when you can get a machine to do it for pennies an hour.
09:25It's as simple as that.
09:26Just as with AI 2027's scenario,
09:28there are things standing in the way of this progress,
09:30but they may have more to do with infrastructure
09:33than with a philosophical look at technology.
09:35So on a 5- to 10-year time horizon,
09:39we may see materials and energy bottlenecks
09:42starting to come in as constraints.
09:47Those won't stop progress,
09:49but they could act as speed bumps.
09:52So at some point,
09:55we run into the limit of how many more chips we can build.
10:01Where are the materials going to come from to build them?
10:04Where is the energy going to come from to operate them?
10:07Some within the research community have been critical of AI 2027
10:11for being speculative rather than scientific.
10:14They say making these assertions without evidence is irresponsible,
10:18but the people behind the project acknowledge its speculative nature.
10:20We're trying to present sort of our median guess.
10:24There are a bunch of ways in which we could be underestimating,
10:28and there are a bunch of ways in which we could be overestimating.
10:30Usually the versions of the future that we have in our minds right now
10:34are not something that we will see play out in real life.
10:37But nonetheless, I think it's an important exercise.
10:40For more coverage of AI developments,
10:42head to san.com
10:43or you can download our app and search artificial intelligence.
10:47For Straight Arrow News, I'm Lauren Keenan.