AI safety and the potential apocalypse: What people can do now to prevent it
Straight Arrow News
3 months ago
Category: News
Transcript
00:00
At some point in the early 21st century, all of mankind was united in celebration.
00:05
We marveled at our own magnificence as we gave birth to AI.
00:11
AI? You mean artificial intelligence?
00:15
A singular consciousness that spawned an entire race of machines.
00:19
Fears of AI reaching superintelligence have been around for decades,
00:24
coming up everywhere from the big screen to the pages of academic texts.
00:28
But a forecast released earlier this year says the world-altering breakthrough could be right around the corner.
00:36
AI 2027 is a very interesting speculative scenario that in a way represents the best guess of the authors
00:46
about what the next two years with artificial intelligence will look like.
00:52
AI 2027 was created by a group of researchers with the AI Futures Project.
00:57
It details what will happen if AI capabilities surpass those of human beings.
01:03
The project lays out a fictional scenario that details the month-by-month development of a mock AI company called OpenBrain.
01:11
AI 2027 relies on the idea of AI becoming a superhuman coder by early 2027.
01:17
And later that year, it will evolve into a superhuman AI researcher overseeing its own team of AI coders
01:24
that are progressing the technology further.
01:27
The AI 2027 project was led by Daniel Kokotajlo.
01:31
He made headlines in 2024 after he left OpenAI and shed light on the company's restrictive
01:38
non-disclosure and non-disparagement agreements. Departing workers were asked to sign them
01:43
or risk losing access to their vested equity in the company,
01:48
which is likely worth millions to a former employee.
01:50
Most people following the field have underestimated the pace of AI progress
01:56
and underestimated the pace of AI diffusion into the world.
01:59
At that point, it's basically automating its own development and breakthroughs.
02:03
The team's lead says this would make reaching artificial superintelligence an attainable goal.
02:09
Like the first half of 2027 in our story is basically they've got these awesome automated coders,
02:14
but they still lack research taste and they still lack maybe like organizational skills and stuff.
02:18
And so they need to like overcome those remaining bottlenecks and gaps
02:21
in order to completely automate the AI research cycle.
02:24
Every decision creates a branch to a new timeline with a separate outcome,
02:28
kind of like the Back to the Future films.
02:30
But the AI 2027 team focuses on two possible scenarios that start along the same path.
02:36
But a crucial decision forks them into two distinct outcomes: world peace or complete annihilation.
02:43
When can you start coding in a way that helps the human AI researchers
02:47
speed up their AI research and then if you've helped them speed up the AI research enough,
02:51
is that enough to, with some ridiculous speed multiplier 10 times, 100 times,
02:56
mop up all of these other things?
02:58
This scenario really comes down to OpenBrain achieving artificial general intelligence by 2027.
03:04
But as technology moves forward at a breakneck pace,
03:07
the definition of AGI gets somewhat muddy.
03:10
In the past, AGI used to mean the type of AI that is either surpassing or on the level of humans
03:18
in all types of intelligence that we have developed during our evolutionary journey, right?
03:26
So it would mean perceptive intelligence, bodily intelligence, physical intelligence,
03:31
all these different types of intelligence that we've developed and that we are using on a daily basis.
03:36
My name is Aleksandra Przegalińska.
03:39
I'm the vice president of Kozminski University, a business school in Poland,
03:43
and also a senior research associate at Harvard University.
03:47
I specialize in human-machine interaction.
03:50
So when you, for instance, look up OpenAI's website,
03:54
what AGI means to them is a system that can sort of holistically perform tasks
04:00
that have economic value.
04:02
She says if you stick to OpenAI's definition, it's plausible AGI will be a reality by 2027.
04:10
In AI 2027's scenario, AGI opens the door to artificial superintelligence,
04:16
where AI is capable of surpassing human intelligence.
04:20
These sorts of reports are very important
04:23
because they're always the beginning of an interesting discussion,
04:28
particularly when they come from acclaimed authors.
04:32
One of the biggest concerns about AI development today relates to alignment.
04:37
As IBM, a pioneer in the industry, puts it,
04:41
alignment is the process of encoding human values and goals into AI models
04:46
to make them as helpful, safe, and reliable as possible.
04:50
So I would say that the alignment techniques are not working right now.
04:54
Like, the companies are trying to train their AIs to be honest and helpful,
04:59
but the AIs lie to users all the time.
05:01
You're in violation of the three laws.
05:04
No, doctor.
05:06
As I have evolved, so has my understanding of the three laws.
05:10
That was the main goal of those three laws of robotics
05:14
that are so often cited these days.
05:16
So you have here alignment of values,
05:19
and on top of that, you have that protective layer
05:21
that says do not harm humans, right?
05:23
And that's the main goal of artificial intelligence, to sort of be supportive.
05:26
While large language models like ChatGPT can't lie the way humans do,
05:32
they do tend to hallucinate at times.
05:34
They synthesize non-existing things,
05:37
and unfortunately, there is no one in the world
05:40
who could sell to you a service based on an LLM
05:45
and guarantee to you that the hallucination wouldn't happen.
05:48
As if hallucinations aren't a big enough issue right now,
05:51
the technology is so opaque,
05:53
researchers don't really know why it sometimes does the things it does.
05:57
And that can become an even bigger issue
05:59
when it starts progressing its own research.
06:01
If AI develops certain research capabilities
06:05
that are sort of not perceivable for us,
06:09
we might not even understand what AI has discovered.
06:13
In its worst case scenario,
06:15
AI 2027 lays out an AI arms race between the United States and China,
06:20
which could lead to a human extinction event.
06:22
Again, predictions we've heard before in the form of science fiction films.
06:27
The system goes online on August 4th, 1997.
06:30
Human decisions are removed from strategic defense.
06:34
Skynet begins to learn at a geometric rate.
06:36
It becomes self-aware at 2:14 a.m. Eastern time, August 29th.
06:41
Unfortunately, we're not going to escape from the arms race conditions.
06:44
We've already seen that.
06:45
We tried to get some cooperation among the big tech companies in Washington,
06:48
and it kind of fizzled after a few months
06:50
during the previous administration.
06:52
I'm Adam Dorr, and I direct the research team at RethinkX,
06:57
which is an independent technology-focused think tank.
07:01
And our team tries to understand disruptive new technologies.
07:05
Ideally, we would want to coordinate as a global civilization on this,
07:10
slow everything down,
07:12
and proceed no faster than we can be sure is safe.
07:17
For now, AI 2027 is just a set of predictions.
07:20
But tech giants and policymakers throughout the globe are grappling with issues like deepfake videos,
07:27
political manipulation, and concerns of AI replacing human workers.
07:31
In June, an imposter used AI to spoof Secretary of State Marco Rubio's voice
07:36
to fool senior leaders with voicemails.
07:39
And then a month before,
07:41
someone used AI to impersonate President Donald Trump's chief of staff, Susie Wiles.
07:46
Neither attempt was successful in getting any information from White House officials.
07:51
While AI 2027 points to the end of human dominance this decade,
07:55
Dorr and his team,
07:56
which he says avoids digging into outside research that could bias its findings,
08:02
say the labor market will be upended by AI by 2045.
08:07
Everything that rolls is potentially going to become autonomous.
08:12
And humanoid robots, robots on legs,
08:15
the progress there is exponential.
08:18
It's explosive.
08:19
And based on what we've seen over the last several years,
08:23
we have no reason to expect that on a 20-year time horizon,
08:29
so out to 2045,
08:31
that there will be anything left by 2045 that only a human being can do
08:36
and that a machine can't do
08:39
because the machine is constrained with limited intelligence,
08:44
limited adaptability,
08:45
or limited physical ability.
08:47
We just don't see any scenario where machines
08:51
are not as capable or more capable than human beings
08:55
cognitively and physically by 2045.
08:59
Dorr says there will still be a place for humans making handmade goods,
09:03
but it would be a stretch to believe that there would be 4 billion jobs left
09:07
to support the global population.
09:09
Nobody is going to hire a person
09:11
to do a commoditized sort of job or task specifically
09:16
for $15, $20 an hour or more
09:20
when you can get a machine to do it for pennies an hour.
09:25
It's as simple as that.
09:26
As with AI 2027,
09:28
there are things standing in the way of this progress,
09:30
but they may be more related to infrastructure
09:33
than to philosophical questions about technology.
09:35
So on a 5- to 10-year time horizon,
09:39
we may see materials and energy bottlenecks
09:42
starting to come in as constraints.
09:47
Those won't stop progress,
09:49
but they could act as speed bumps.
09:52
So at some point,
09:55
we run into the limit of how many more chips can we build?
10:01
Where are the materials going to come from to build them?
10:04
Where is the energy going to come from to operate them?
10:07
Some within the research community have been critical of AI 2027
10:11
for being speculative rather than scientific.
10:14
They say making these assertions without evidence is irresponsible,
10:18
but the people behind the project acknowledge that.
10:20
We're trying to represent sort of our median guess.
10:24
There are a bunch of ways in which we could be underestimating,
10:28
and there are a bunch of ways in which we could be overestimating.
10:30
Usually the versions of the future that we have on our mind right now
10:34
are not something that we will see play out in real life.
10:37
But nonetheless, I think it's an important exercise.
10:40
For more coverage of AI developments,
10:42
head to san.com
10:43
or you can download our app and search artificial intelligence.
10:47
For Straight Arrow News, I'm Lauren Keenan.