At GTC 2026, NVIDIA shows how artificial intelligence is conquering the real world, from autonomous vehicles to humanoid robots. Jensen Huang's presentation gives a clear outlook on how future mobility and machines will work.

😇 Your subscription helps us: https://tublo.eu/abonnieren
✅ Source: NVIDIA
➡️ More info: https://www.tuningblog.eu/dies-u-das/nvidia-gtc-keynote-773723/

The focus is on new AI platforms, powerful computing systems, and realistic simulations that let autonomous vehicles and robots learn much faster. In virtual training environments, complex scenarios can be played through millions of times before the systems are ever deployed in the real world. That means more safety, better decisions, and more efficient development.

Especially exciting: NVIDIA combines software, hardware, and AI models into a complete ecosystem that powers both self-driving cars and intelligent machines. From traffic logic in urban driving to robots interacting with their surroundings, the technology lays the groundwork for the next generation of autonomous systems.

#NVIDIA #KI #AutonomesFahren #Robotik
#GTC2026 #FutureTech #AI #SelbstfahrendeAutos
#tuningblog - the magazine for car tuning and mobility!

Category

🚗
Motor
Transcript
00:00We also have been working on physically embodied agents for a long time.
00:05We call them robots.
00:06And the AIs that they need are physical AIs.
00:09We have some big announcements here.
00:11I'm going to just walk through a few of them.
00:14110 robots here.
00:16Almost every single company in the world, I can't think of one,
00:20that is building robots is working with NVIDIA.
00:23We have three computers.
00:24The training computer, the synthetic data generation and simulation computer.
00:28And, of course, the robotics computer that sits inside the robot itself.
00:32We have all the software stacks necessary to do so.
00:35The AI models to help you.
00:38And all of this is integrated into ecosystems around the world
00:42and all of our partners from Siemens to Cadence, incredible partners everywhere.
00:48And today we're announcing a whole bunch of new partners.
00:52As you know, we've been working on self-driving cars for a long time.
00:58The ChatGPT moment of self-driving cars has arrived.
00:58We now know we could successfully, autonomously drive cars.
01:02And today we are announcing four new partners for NVIDIA's robo-taxi-ready platform.
01:12BYD, Hyundai, Nissan, Geely.
01:18All together, 18 million cars built each year.
01:22Joining our partners from before, Mercedes, Toyota, GM.
01:29The number of robo-taxi-ready cars in the future is going to be incredible.
01:35And we're announcing also a big partnership with Uber.
01:39Multiple cities.
01:40We're going to be deploying and connecting these robo-taxi-ready vehicles into their network.
01:46And so a whole bunch of new cars.
01:49We have ABB, Universal Robots, KUKA.
01:54So many robotics companies here.
01:56And we're working with them to implement our physical AI models integrated into simulation systems
02:02so that we could deploy these robots into manufacturing lines all over.
02:07We have Caterpillar here.
02:08We even have T-Mobile here.
02:11And the reason for that is that in the future, what used to be a radio tower
02:17is going to be an NVIDIA Aerial AI-RAN.
02:20And so this is going to be a robotics radio tower.
02:23Meaning it can reason about the traffic, figure out how to adjust its beamforming
02:29so that it can save as much energy as possible
02:31and increase fidelity as much as possible.
02:35There are so many humanoid robots here.
02:38But one of my favorites, one of my favorites is a Disney robot.
02:44You know what?
02:45Tell you what.
02:46Let me just show you some of the videos.
02:48Let's look at that first.
02:57The first global rollout of physical AI at scale is here.
03:02Autonomous vehicles.
03:04And with NVIDIA Alpamayo, vehicles now have reasoning,
03:08helping them operate safely and intelligently across scenarios.
03:14We ask the car to narrate its actions.
03:16I'm changing lanes to the right to follow my route.
03:21Explain its thinking as it makes decisions.
03:24There's a double parked vehicle in my lane.
03:27I'm going around it.
03:30And follow instructions.
03:32Hey Mercedes, can you speed up?
03:35Sure, I'll speed up.
03:41This is the age of physical AI and robotics.
03:45Around the world, developers are building robots of every kind.
03:49But the real world is massively diverse, unpredictable, full of edge cases.
03:56Real world data will never be enough to train for every scenario.
03:59We need data generated from AI and simulation.
04:04For robots, compute is data.
04:08Developers pre-train world foundation models on internet scale video and human demonstrations.
04:13And evaluate the model's performance to prepare them for post-training.
04:19Using classical and neural simulation, they generate massive amounts of synthetic data and train policies at scale.
04:29To accelerate developers, NVIDIA built the open-source Isaac Lab for robot training, evaluation, and simulation.
04:37Newton for extensible and GPU-accelerated differentiable physics simulation.
04:43Cosmos World Models for neural simulation.
04:47And GR00T open robotics foundation models for robot reasoning and action generation.
04:53With enough compute, developers everywhere are closing the physical AI data gap.
05:01Paritas AI trains their operating room assistant robot in NVIDIA Isaac Lab, multiplying their data with NVIDIA Cosmos World Models.
05:10Skild AI uses Isaac Lab and Cosmos to generate post-training data for their Skild Brain.
05:17They use reinforcement learning to harden the model across thousands of variations.
05:24Humanoid uses Isaac Lab to train whole body control and manipulation policies.
05:30Hexagon Robotics uses Isaac Lab for training and data generation.
05:36Foxconn fine-tunes GR00T models in Isaac Lab.
05:40As does Noble Machines.
05:43Disney Research uses their Kamino Physics Simulator in Newton and Isaac Lab to train policies across their character robots in
05:52every universe.
07:51Using this Newton solver that runs on top of NVIDIA Warp that we jointly developed with Disney and with DeepMind
08:00that made it possible for you to be able to adapt to the physical world. Check that out.
08:08That's how smart you are.
08:11I'm a snowman, not a Snokelope.
08:17Could you imagine this? The future of Disneyland? All these robots, all these characters wandering around.
08:26You know, I have to admit though, I thought you were going to be taller.
08:30I've never seen such a short snowman, to be honest.
08:34Nope.
08:36Hey, tell you what. You want to help me out?
08:39Hooray!
08:41Okay. Usually, usually I close the keynote by telling you what I told you.
08:47We talked about inference inflection. We talked about the AI factory.
08:51We talked about the OpenClaw agent revolution that's happening.
08:55And of course, we talked about physical AI and robotics.
08:59But tell you what. Why don't we get some friends to help us close it out?
09:03Of course!
09:04All right, play it.
09:06Come on.
09:08Terminating simulation.
09:16Hello?
09:21Anybody here?
09:47The keynote's over.
09:49All was said.
09:50Jensen mapped the road ahead.
09:52AI factory's coming alive.
09:55Agents learning how to drive.
09:57From open models to robots too.
09:59Now we'll break it all down for you.
10:05Compute exploded.
10:06Well, we saw from CNNs to OpenClaw.
10:10Agents working across the land.
10:12But they need the power to meet demand.
10:14So we solved the problem.
10:15It was brilliant.
10:17We multiplied compute by 40 million.
10:27Well, once upon an AI time trend was the paradigm.
10:34Sure, it taught the models how.
10:37But inference runs the whole world now.
10:39Vera shows us who's the boss at 35 times less the cost.
10:43Blackwell makes the token sing.
10:45NVIDIA, the inference king.
10:51Yeah, our factories once took years.
10:54Vendors pulling racks and gears.
10:56Built up slowly piece by piece.
10:58No clear way to scale this piece.
11:01DSX and Dynamo know what to do.
11:05Turning power into revenue.
11:14Agents used to wait and see.
11:16Now act autonomously.
11:18But if they ever try to stray.
01:20Safeguards block and say no way.
01:22NeMo Guardrails there to guard the course.
11:26And yes, my friends.
11:30It's open source.
11:40Cars that think and droids that run.
11:42This ain't the movies.
11:43It's all begun.
01:44Alpamayo calls the shots.
11:46It's a GPT moment for the bots from sim to streets.
11:50Now watch them drive.
01:52Throw your hands up.
11:56For physical AI.
12:10Industrial age.
12:11Build what came before.
12:13Now we build for AI.
12:14Even more.
12:15Vera Rubin plus Grok.
12:16Make the inference splash.
12:17Put them together.
12:18Now it's raining cash.
12:19We build new architecture every year.
12:21Cause claws keep yelling more tokens here.
12:23The AI stacks for all to make.
12:26So let us all eat five.
12:27Lay a cake.
12:28The moment's bright.
12:29The path is clear.
12:30Cause open models led us here.
12:32When data's missing, there's no dispute.
12:34We just generate more with compute.
12:37Robots learning without flaw.
12:39Fueling the force.
12:40Scaling laws.
12:41The future's here.
12:42Won't you come and see?
12:43Welcome all to GTC.
12:58All right.
13:01Have a great GTC.
13:06Wave.
13:08Thank you, everybody.
13:11See you, love.
13:19For more videos,
13:21just subscribe.