😇 Your subscription helps us: https://tublo.eu/abonnieren ✅ Source: Boston Dynamics ➡️ More info: https://www.tuningblog.eu/?s=Boston+Dynamics+
How well does Atlas recognize and understand its surroundings? In this video we take a look at current developments in the lab and show how the humanoid robot adjusts to different environments. Perceiving objects and their context is the key.
With advanced sensors and an agile perception system, Atlas moves flexibly through a variety of scenarios, whether on the factory floor, in the warehouse, or potentially even in the household. This video offers fascinating insights into the research behind the adaptability of robots.
#Atlas #Roboter #KünstlicheIntelligenz #Robotik #Wahrnehmung #Technologie #Forschung #tuningblog - the magazine for car tuning and mobility!
02:00 Things that are simple for humans are paradoxically challenging in robotics, because motor and perception skills were developed in humans through millions of years of evolution.
02:12 In contrast, robots can perform things like quick computations instantly; these are simple tasks for machines in general, but complex for us because of our limited capacity and memory.
02:31 Atlas' perception system has to be dynamic simply because we cannot predict the state of the world and how the world reacts to what we do with it.
02:42 Imagine trying to find a remote in your living room with your kids and a dog running around. It's pretty much impossible.
02:48 So being able to perceive these changing circumstances and adapt to them is key.
02:55 Many of our viewers assume that we can just replay a pre-recorded trajectory to make the behaviors in our videos.
03:01 In reality though, small imperfections and small errors accumulate very quickly, making what we think the state of the world is diverge from reality.
03:15 Atlas perceives the world by using camera sensors.
03:18 It estimates properties and facts about the world, such as the 3D geometry of the environment, where the objects we care about are, as well as what possible obstacles could get in our way.
03:33 This is achieved by a combination of AI and classical systems working together.
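As a rough illustration of that AI-plus-classical split, here is a minimal Python sketch: a (mocked) learned detector proposes what is in the image, and simple depth-based geometry recovers where it is. Every name and number here is hypothetical; this is not Boston Dynamics code.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class Detection:
    label: str
    mask: np.ndarray   # boolean pixel mask from the learned segmenter
    score: float


def detect(rgb):
    """Stand-in for a learned detector/segmenter (the 'AI' part)."""
    mask = np.zeros(rgb.shape[:2], dtype=bool)
    mask[40:60, 40:60] = True                  # pretend a part was found here
    return [Detection("engine_cover", mask, 0.93)]


def locate(depth, det, fx=500.0, fy=500.0, cx=64.0, cy=64.0):
    """Classical geometry (the non-learned part): backproject the masked
    depth pixels with a pinhole model and take their centroid as a coarse
    3D position estimate in the camera frame."""
    v, u = np.nonzero(det.mask)
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1).mean(axis=0)


rgb = np.zeros((128, 128, 3))
depth = np.full((128, 128), 1.5)               # fake depth image, 1.5 m everywhere
for det in detect(rgb):
    print(det.label, locate(depth, det))       # -> label plus [x, y, z] in metres
```

The point of the split is the division of labor: the learned model handles open-ended recognition, while the geometric step stays debuggable and yields metric positions a controller can act on.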
03:39 We think this kind of sequencing task is a really good fit for a humanoid robot like Atlas, because it has the right blend of being just unstructured enough that you need the freedom and power of a humanoid form factor to reach really low, reach really high, and deal with a lot of environmental variation.
03:56 At the same time, it is a pretty dull, repetitive task that is physically strenuous to do day in and day out.
04:03 Solving this kind of task requires being able to do manipulation very reliably and for long periods of time without causing dramatic failures.
04:10 For a lot of these tasks, the margins for success are very slim.
04:23 For example, a lot of the cells that we need to insert into have a margin of about 5 cm; a centimeter here, a centimeter there can be grounds for failure. It's entirely impossible unless you have real-time perception running, because every time you carry something it can slip in the hand, and every time you grasp something you may not always get the best grasp, so we have to have real-time perception running in the loop.
04:32 Not only are the tasks themselves difficult, but the way the objects sit in the world is just as difficult for Atlas as well.
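To make the "perception in the loop" point concrete, here is a hedged sketch of a closed-loop insertion routine: the in-hand position is re-estimated every tick, and the motion is retargeted, or aborted for a regrasp, once estimated slip eats into the roughly 5 cm clearance. All functions are placeholder callables, not a real robot API.

```python
import numpy as np

CLEARANCE_M = 0.05      # ~5 cm cell margin mentioned above
SLIP_BUDGET_M = 0.02    # drift tolerated before triggering a regrasp


def insertion_loop(estimate_in_hand_pos, move_toward, regrasp, target, max_ticks=1000):
    """Closed-loop insertion: perception runs every tick, so accumulated
    slip is observed and corrected instead of silently diverging the way
    a replayed, pre-recorded trajectory would."""
    nominal = estimate_in_hand_pos()           # position right after the grasp
    for _ in range(max_ticks):
        current = estimate_in_hand_pos()       # fresh estimate every tick
        if np.linalg.norm(current - nominal) > SLIP_BUDGET_M:
            regrasp()                          # slip ate the margin: regrasp
            nominal = estimate_in_hand_pos()
            continue
        # Retarget using the *current* estimate, not the plan-time one.
        if move_toward(target, current, clearance=CLEARANCE_M):
            return True                        # inserted within tolerance
    return False
```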
04:48 They're not simply on a table; they're often shoved into dark cubbies, with only a small sliver of the object actually visible to the robot.
05:02 And when we go to grab those objects, Atlas often blocks the entire view with its own arm, so we have to do a lot of fun stuff to make it actually work.
05:10 There might be instances where you see Atlas shift the object in its hand, as if to get a better glance at it or to shine a bit of light on it, and it works better.
05:22 So in order to be reactive to any movement of the dollies, whether because they've been pushed or moved or are not known precisely, Atlas needs to constantly update its belief about where those fixtures are in the environment.
05:34 One way we test that is by moving the fixtures on Atlas: pushing the dolly when it's not looking, or as it's turning around, and making sure that it can still update its belief correctly and get those objects into the right shelf.
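A toy model of that belief updating, assuming a simple constant-gain filter over a fixture's planar pose (the real estimator is surely more sophisticated; this is only a sketch of the idea):

```python
import numpy as np


class FixtureBelief:
    """Belief over a fixture's planar pose [x, y, heading]."""

    def __init__(self, pose, gain=0.4):
        self.pose = np.asarray(pose, dtype=float)
        self.gain = gain                 # how much to trust each new measurement

    def update(self, measured_pose):
        """Blend the prior belief toward the latest measurement."""
        self.pose += self.gain * (np.asarray(measured_pose) - self.pose)
        return self.pose


belief = FixtureBelief([1.0, 2.0, 0.0])
# The dolly gets pushed 30 cm along x while the robot isn't looking;
# once measurements resume, the belief converges back to reality.
for measurement in [[1.3, 2.0, 0.0]] * 5:
    belief.update(measurement)
print(belief.pose)                        # -> approaching [1.3, 2.0, 0.0]
```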
05:48 Picking from the floor is one of the most dramatic examples of the kind of catastrophic failure we'd like the robot to be able to handle.
05:55 If the object has wound up on the floor, something has already gone kind of wrong; maybe the fixture wasn't exactly where we expected, so we bumped it while inserting the object.
06:03 Or maybe our grasp wasn't secure enough, so the object fell out of the hand.
06:06 No matter what, we want to get that object off the ground so that we don't trip over it or destroy it, and then go put it in some QA pile so that we can deal with it later.
06:15 Our strategy for picking off the ground is an instruction to the robot: reach your hands down, get your hands around the object, and pick it up.
06:22 The instructions are literally just: put your hands roughly here relative to the object, push them into the ground, curl the fingers, and push them together.
06:31 We rely on our pretty extensive control stack to figure out how to get the robot there reliably and get it squatted all the way down with that crazy range of motion, and we rely on our perception system to know exactly where that object actually is.
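That recipe translates almost line for line into a scripted primitive. A hedged sketch, with hypothetical command names standing in for the actual control stack interface:

```python
import numpy as np


def pick_from_floor(object_pos, send, hand_offset=0.12, press_force_n=30.0):
    """object_pos: perceived 3D position of the fallen object (world frame).
    send: callable that forwards one command to the (assumed) control stack."""
    left = object_pos + np.array([0.0, +hand_offset, 0.0])
    right = object_pos + np.array([0.0, -hand_offset, 0.0])
    send({"action": "reach", "left": left, "right": right})  # hands roughly here
    send({"action": "press", "force_n": press_force_n})      # push into the ground
    send({"action": "curl"})                                 # curl the fingers
    send({"action": "squeeze"})                              # push hands together
    send({"action": "lift", "height_m": 0.4})                # stand back up with it


pick_from_floor(np.zeros(3), send=print)    # demo: each step is just printed
```

Note how thin the script is: everything hard (balance, squatting, finger placement) is delegated to the control stack, and the only perception input is the object's position.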
06:44 Currently the biggest challenge for Atlas and other humanoids on the market is adaptability.
06:50 How many tasks can you perform with a single system?
06:54 In order to do this, the robot needs to learn more fundamental truths about the world it operates in.
07:00 And this is just a general trend that we're seeing in research and machine learning overall.
07:05 It is a shift from machine learning models trained on individual tasks or datasets towards big foundation models that are trained on large-scale datasets consisting of multiple modalities, such as video, images, or language.
07:21 Recently, the research has also been going a bit further.
07:26 We're moving past just perceiving and understanding images, towards controlling the whole robot based on language and video inputs.
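In interface terms, that shift might look like the sketch below: a single (mocked) multimodal policy that maps an image plus a language instruction to an action, instead of one model per task. The class and method names are assumptions for illustration only.

```python
import numpy as np


class MultimodalPolicy:
    """Mock stand-in for a vision-language-action foundation model."""

    def act(self, image: np.ndarray, instruction: str) -> np.ndarray:
        # A real model would encode both modalities and decode an action
        # chunk; a zero command keeps this sketch runnable.
        return np.zeros(7)                # e.g. a 7-DoF end-effector command


policy = MultimodalPolicy()
action = policy.act(np.zeros((224, 224, 3)), "put the cover on the shelf")
print(action)
```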
07:38 This shift is basically a shift from spatial AI to physical intelligence.