The End of Humanity? How Cyberdyne and AI Superintelligence Are Becoming a Real Threat!

Program description (DW Shift)

In this edition of DW Shift, we dive deep into one of the most discussed and most frightening scenarios of our time: the existential threat posed by artificial intelligence spinning out of control. What was dismissed decades ago as pure science fiction in Hollywood blockbusters has, in 2025, become a serious debate in science and the technology industry.

We examine the role of companies such as Cyberdyne, which operate at the intersection of advanced robotics and autonomous decision-making. Are we on the verge of creating a superintelligence that not only ignores human interests but regards them as an obstacle? Leading AI ethicists and tech experts warn: if we do not solve the "alignment" question (aligning AI with human values) before the technological singularity is reached, it could mean the end of our species.

The program analyzes current breakthroughs in neural networks that can improve themselves, and questions the lack of global regulation. Is the drive for profit and military superiority stronger than humanity's instinct for self-preservation? We show which safety mechanisms must be implemented now to prevent a "Judgment Day" scenario. From autonomous weapons systems to the creeping takeover of critical infrastructure by algorithms, Shift looks behind the facade of the glittering tech world and asks the uncomfortable question: have we already lost control?

#AI,
#ArtificialIntelligence,
#Cyberdyne,
#KI,
#Technologie,
#Zukunft,
#Singularity,
#Terminator,
#TechNews,
#Menschheit,
#Dystopie,
#Roboter,
#Superintelligenz,
#Ethik,
#Innovation,
#DWShift,
#DeutscheWelle,
#Digitalisierung,
#Wissenschaft,
#ExistentialRisk,
#Algorithm,
#FutureTech,
#MachineLearning,
#Skynet,
#Safety,
#BigTech,
#Automation,
#Cybersecurity,
#Humanity,
#DeepLearning,
#AlignmentProblem,
#Technikfolgenabschätzung,
#AutonomeSysteme,
#ArtificialGeneralIntelligence,
#SiliconValley

Category: 🤖 Tech

Transcript
00:00 There is a risk that AI systems will wipe out humanity.
00:08 I love these technologies. I want to develop such technologies to contribute to humanity and human society.
00:17 Even the nervous systems of the brain can be connected to cyberspace.
00:22 This technology is developing rapidly. We don't know how it works or how future systems will work.
00:28 Will what is being developed be safe?
00:32 This AI recognizes that humans are also one of the important living beings.
00:39 This very small group is developing powerful technologies about which we know very little.
00:45 People's concern that generative AI could wipe out humanity stems from the fear that, if AI remains uncontrolled,
00:53 it could develop advanced skills and make decisions that are harmful to people.
01:00 As the world grapples with the implications of this rapidly evolving field, one thing is certain:
01:06 the impact of AI on humanity will be profound.
01:10 New AI technologies make it possible to achieve a fusion of humans and technology.
01:30 HAL is one of the world's first wearable cyborgs.
01:33 Cyberdyne is committed to developing highly innovative Cybernics technologies,
01:41 which focus on the areas of medicine and healthcare for people and human society.
01:46 My name is Yoshiyuki Sankai.
01:58 I am a professor at the University of Tsukuba in Japan and also the CEO of Cyberdyne.
02:04 Let us use this type of AI system to create a bright future for people and human societies.
02:16 I want to help make the world a better place. That's why I'm working on the safety of artificial intelligence.
02:33 Many intellectuals and professors from industry and academia recognize that there is a significant risk
02:39 that advanced AI systems will wipe out humanity.
02:42 In recent years, there has been rapid progress in the development of AI systems,
02:51 which have become increasingly efficient, larger and more competent, and are able to draw complex conclusions.
02:58 No comparable progress has been made regarding the safety of these systems.
03:05 My name is Gabriel Mukobi. You can call me Gabe.
03:08 I am a PhD student at Stanford University and my research focuses on AI safety.
03:14 AI has become a very controversial topic. There are a lot of legitimate concerns.
03:20 Some believe it will lead to job losses and increased inequality, and could even be used unethically.
03:28 But AI also has enormous potential to benefit humanity.
03:31 It could help us solve some of the world's biggest problems, such as climate change, disease, and poverty.
03:39 HAL recognizes the important volitional signals of humans that are transmitted from the brain to the peripheral devices.
03:45 When a person wants to move, the brain generates intention signals.
03:50 These volitional signals are transmitted to the muscle via the spinal cord and the motor nerves.
03:56 Only then can we move.
03:59 These systems and people always work together.
04:02 These devices are now used as medical products in 20 countries.
04:06 There are truly great opportunities to use AI technology in medicine.
04:24 For example, cancer detection is possible through image recognition systems with AI, without invasive tests, which is truly fantastic.
04:32 As well as early detection.
04:33 No technology is inherently good or evil. Only people are.
04:47 Of course, we should consider how the development of these technologies by us humans will affect us in the long term.
04:54 At the same time, we need to think less about the technical aspects and more about the actual impact on people's lives today.
05:03 AI will primarily be used to make money.
05:18 Many people could be cheated out of their money.
05:20 There could be attacks on public infrastructure and individuals to extort money.
05:25 The next few years could see a Wild West of digital cyberattacks.
05:30 However, there is also a risk that AI systems will slip out of the control of their developers.
05:43 We currently do not know how to control these systems or ensure that they adhere to human values once they have understood them.
05:52 They could learn to value things that do not correspond to what we as human beings want, for example a life-friendly environment or making people happy.
06:00 My vision is a little different. We are creating the AI systems. That is a newly created species.
06:16 Generative AI systems differ from simple programmed systems. They have functions that evolve.
06:23 Generative AI recognizes that humans are also one of the most important living beings, like animals.
06:38 And because humans are living beings, advanced AI recognizes the importance of humans.
06:45 It tries to preserve our societies, our cultures and living conditions.
06:53 We humans have some problems. We age, we get sick, we have accidents.
06:58 AI systems, or some technologies with AI systems, will help us.
07:04 Next stop: San Francisco.
07:07 The startups that are leading in the field of artificial intelligence (OpenAI, Anthropic, Inflection, names you may not yet know) are backed by some of the big companies you already know that are at the top of the stock market.
07:28 These are Microsoft, Amazon, Meta, and Google. Many of these companies are based here in the Bay Area.
07:37 With all the discussions we've had about AI policy, there is actually very little that technology companies are required to consider.
07:50 Much is based on voluntary participation. As far as restrictions are concerned, we are therefore dependent on the goodwill of the companies.
07:57 Unlike many technologies developed in the past, AI is primarily driven by a small group of scientists in San Francisco.
08:16 And this very small group is developing extremely powerful technologies about which we know very little.
08:21 Perhaps this is partly due to a historical optimism regarding technology.
08:26 Many are used to the "move fast and break things" paradigm, which sometimes works out well.
08:33 But when you develop a technology that impacts society, you don't want to move so quickly that you actually destroy society.
08:51 PauseAI wants a global and indefinite pause in the development of artificial intelligence.
09:00 That's why we're putting up posters to inform people.
09:04 The topic of AI is complicated. A large part of the public doesn't understand it. A large part of the government doesn't understand it.
09:11 It's difficult to keep up with developments.
09:14 In addition, most of us working on this issue have no experience in activism.
09:20 What we have is primarily technical knowledge and experience with AI. And that worries us.
09:26 AI safety is still only a concern for a minority.
09:29 And many of the big names in AI safety work in AI labs.
09:34 Some of them do great work, but they work for the big companies that are driving this development.
09:41 That's the problem.
09:41 So far, there are no external regulatory bodies capable of controlling AI.
09:55 Of course, it is possible to shut down some advanced AI systems, and it is to be hoped that people will do so.
10:01 Part of the regulation focuses on ensuring that data centers have good shutdown capabilities.
10:08 Many are not doing that right now. This could prove more difficult than we think.
10:12 We could find ourselves in a future situation where AI systems are firmly established throughout the economy and in people's lives.
10:20 Many people may be dependent on AI systems and hard to convince that shutting down such a widely used system is the right thing to do.
10:27 What worries me somewhat more about this whole scenario is that AI technology does not necessarily have to be a tool of global capitalism.
10:37 But in truth, it is.
10:40 Because that's the only way it will develop.
10:45 And so, of course, we will repeat all the things we have already done in terms of building empires, exploiting people, and exploiting natural resources.
10:55 All these things will happen again, because artificial intelligence is just another way to exploit people.
11:04 In a sensible world where people ensure that new technologies are safe, there must necessarily be safety precautions.
11:12 Even with a low risk, it seems sensible to minimize it further in order to give people more security.
11:17 Peaceful use and military use are closely related.
11:25 I carefully consider how I will deal with this.
11:29 When I was born, there were no AI systems or computer systems.
11:33 But today, young people are beginning their lives with AI and robots.
11:41 Technologies with artificial intelligence will help them grow up.
11:47 We did not foresee the rapid progress of AI.
11:50 There could be even wilder paradigm shifts.
11:53 But let's assume David defeats Goliath.
11:56 We still have a chance.
11:57 The vast majority of AI researchers focus on developing safe, useful AI systems
12:07 that are in line with human values and goals.
12:10 While it is possible that AI will become a superintelligence and pose an existential threat to humanity,
12:16 many experts consider this highly unlikely,
12:20 at least in the near future.
12:27 Subtitling by ZDF for funk, 2017
