00:00Google DeepMind just made a robot that can play ping-pong against humans and even win some matches.
00:07Meanwhile, Boston Dynamics' Atlas robot is showing off its strength, doing push-ups and burpees like it's training for a marathon.
00:13On top of that, scientists are building a global network of supercomputers to speed up the development of artificial general intelligence,
00:20aiming to create AI that can think and learn more like humans.
00:24We're covering all these topics in this video, so stick around.
00:27But first, let's jump into the story about the AI robot taking on table tennis.
00:31So Google DeepMind, the AI powerhouse that's been behind some crazy tech, has trained a robot to play ping-pong against humans.
00:38And honestly, it's kind of blowing my mind.
00:41Alright, so here's the deal. Google DeepMind didn't just teach this robot to, like, casually hit the ball back and forth.
00:48No, they went all in and got this robotic arm to play full-on competitive table tennis.
00:54And guess what? It's actually good enough to beat some humans.
00:57Yeah, no kidding.
01:01They had this bot play 29 games against people of different skill levels, and it won 13 of them.
01:06That's almost half the matches, which, for a robot, is pretty wild.
01:10Okay, so let's break down how this all went down.
01:13To train this robot, DeepMind's team used a two-step approach.
01:16First, they put the bot through its paces in a computer simulation where it learned all the basic moves.
01:21Things like how to return a serve, hit a forehand topspin, or nail a backhand shot.
01:26Then, they took what the robot learned in the sim and fine-tuned it with real-world data.
01:31So every time it played, it was learning and getting better.
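If you want a feel for that two-step recipe, here's a toy sketch in Python: learn a single paddle-angle parameter against a cheap "simulator" objective, then fine-tune the same parameter on a slightly different "real-world" objective. The quadratic losses and the angles are made-up stand-ins, not DeepMind's actual training pipeline.

```python
# Toy illustration of sim-to-real training: optimize in simulation
# first, then warm-start fine-tuning from the sim solution on real data.

def optimize(loss, theta: float, lr: float = 0.1, steps: int = 200) -> float:
    """Plain gradient descent using a finite-difference gradient."""
    eps = 1e-5
    for _ in range(steps):
        grad = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)
        theta -= lr * grad
    return theta

sim_loss = lambda a: (a - 30.0) ** 2   # sim says a 30-degree paddle angle is best
real_loss = lambda a: (a - 33.0) ** 2  # reality disagrees slightly

theta = optimize(sim_loss, theta=0.0)         # step 1: learn the basics in sim
theta = optimize(real_loss, theta, steps=50)  # step 2: fine-tune on real play
print(round(theta, 1))  # ends up near the real-world optimum, 33.0
```

The point of the warm start is that step 2 needs far fewer iterations than learning from scratch, which is exactly why collecting a little real-world data goes a long way.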
01:33Now, to get even more specific, this robot tracks the ball using a pair of cameras, which, like, capture everything happening in real time.
01:40It also follows the human player's movements using a motion capture system.
01:45This setup uses LEDs on the player's paddle to keep track of how they're swinging.
01:50All that data gets fed back into the simulation for more training, creating this super cool feedback loop where the bot is constantly refining its game.
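The pair-of-cameras part is classic stereo vision: the same ball shows up at slightly different horizontal positions in the two images, and that disparity tells you how far away it is. Here's a minimal sketch assuming a rectified stereo pair of identical pinhole cameras; the focal length, baseline, and pixel values are invented for illustration.

```python
# Minimal stereo depth estimation, the kind of geometry behind
# tracking a ball with two cameras: depth = focal * baseline / disparity.

def triangulate_depth(x_left: float, x_right: float,
                      focal_px: float, baseline_m: float) -> float:
    """Depth in metres from the horizontal disparity between two cameras."""
    disparity = x_left - x_right  # in pixels; grows as the ball gets closer
    if disparity <= 0:
        raise ValueError("left-image x must exceed right-image x")
    return focal_px * baseline_m / disparity

# Example: 700 px focal length, 20 cm baseline, 35 px disparity
depth = triangulate_depth(x_left=420.0, x_right=385.0,
                          focal_px=700.0, baseline_m=0.20)
print(round(depth, 2))  # 4.0 metres
```

Do this at every camera frame and you get the real-time 3D ball track that gets fed back into the simulator.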
01:59But, guys, it's not all smooth sailing for our robotic ping pong player.
02:03There are a few things it still struggles with.
02:05For example, if you hit the ball really fast, send it high up, or hit it super low, the robot can miss.
02:12It's also not great at dealing with spin, something that more advanced players use to mess with their opponents.
02:18The robot just can't measure spin directly yet, so it's a bit of a weak spot.
02:22Now, something I found really interesting is that the robot can't serve the ball.
02:27So, in these matches, they had to tweak the rules a bit to make it work.
02:31And, yeah, that's a bit of a limitation, but hey, it's a start, right?
02:35Anyway, the researchers over at DeepMind weren't even sure if the robot would be able to win any matches at all.
02:40But it turns out, not only did it win, but it even managed to outmaneuver some pretty decent players.
02:46Pannag Sanketi, the guy leading the project, said they were totally blown away by how well it performed.
02:51Like, they didn't expect it to do this well, especially against people it hadn't played before.
02:55And this isn't just a gimmick, guys. This kind of research is actually a big deal for the future of robotics.
03:01I mean, the ultimate goal here is to create robots that can do useful tasks in real environments,
03:07like your home or a warehouse, and do them safely and skillfully.
03:11This table tennis bot is just one example of how robots could eventually learn to work around us and with us,
03:16and maybe even help us out in ways we haven't even thought of yet.
03:19Other experts in the field, like Lerrel Pinto from NYU, are saying that this is a really exciting step forward.
03:27Even though the robot isn't a world champion or anything, it's got the basics down, and that's a big deal.
03:33The potential for improvement is huge, and who knows?
03:37We might see this kind of tech in all sorts of robots in the near future, but let's not get too ahead of ourselves.
03:42There's still a long way to go before robots are dominating in sports or anything like that.
03:48For one, training a robot in a simulated environment to handle all the crazy stuff that happens in the real world is super tough.
03:55There are so many variables, like a gust of wind or even just a little bit of dust on the table, that can mess things up.
04:01Chris Walti, who's a big name in robotics, pointed out that without realistic simulations,
04:06there's always going to be a ceiling on how good these robots can get.
04:09That said, Google DeepMind is already thinking ahead.
04:12They're working on some new tech, like predictive AI models that could help the robot anticipate where the ball is going to go,
04:19and better algorithms to avoid collisions.
04:21This could help the robot overcome some of its current limitations and get even better at the game.
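The simplest version of "anticipating where the ball is going" is just ballistics: assume constant gravity and ignore drag and spin (the hard parts the video mentions), and you can predict the ball's height by the time it reaches the robot. All the numbers here are illustrative, not DeepMind's actual predictive model.

```python
# Toy ball-flight prediction: constant-gravity trajectory, no drag, no spin.

G = 9.81  # gravitational acceleration, m/s^2

def predict_height_at(x_target: float, x0: float, z0: float,
                      vx: float, vz: float) -> float:
    """Height of the ball when it reaches x_target, given its launch state."""
    t = (x_target - x0) / vx          # time to cover the horizontal gap
    return z0 + vz * t - 0.5 * G * t * t

# Ball leaves the paddle at x=0, 0.3 m high, 5 m/s forward, 2 m/s upward;
# its height after 2 m of horizontal travel (t = 0.4 s):
h = predict_height_at(2.0, 0.0, 0.3, 5.0, 2.0)
print(round(h, 3))  # about 0.315 m
```

A real model would have to add aerodynamic drag and the Magnus force from spin, which is exactly why spin is the robot's weak spot today.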
04:26And here's the best part, at least for me.
04:28The human players actually enjoyed playing against the robot.
04:31Even the more advanced players, who were able to beat it, said they had fun and thought the robot could be a great practice partner.
04:38Like, imagine having a robot you could play with any time you wanted to sharpen your skills.
04:44One of the guys in the study even said he'd love to have the robot as a training buddy.
04:49OK, now something interesting has surfaced about Boston Dynamics' Atlas robot.
04:53The Humanoid Hub on Twitter recently shared a video of Atlas doing push-ups, and it's part of an eight-hour-long presentation.
05:00There's not much info available yet.
05:02But it's fascinating to see Atlas performing not just push-ups, but even a burpee.
05:07The movements are incredibly fluid and almost human-like.
05:10But here's the real question. Does it get stronger after each set?
05:14I hope not, because it looks like it could do push-ups forever.
05:17Alright, now let's talk about something really fascinating that's happening right now.
05:22Scientists are working on building a global network of supercomputers to speed up the development of what's known as Artificial General Intelligence, or AGI for short.
05:32And we're not just talking about an AI that excels in one thing, like playing table tennis or generating text.
05:37It's something that can learn, adapt, and improve its decision-making across the board.
05:42It's kind of scary, but also super exciting, right?
05:45So, these researchers are starting by bringing a brand new supercomputer online in September.
05:50And that's just the beginning. This network is supposed to be fully up and running by 2025.
05:55Now, what's cool about this setup is that it's not just one supercomputer doing all the heavy lifting.
06:00It's actually a network of these machines working together, which they're calling a multi-level cognitive computing network.
06:07Think of it as a giant brain made up of several smaller brains, all connected and working together to solve problems.
06:13Now, what's really interesting is that these supercomputers are going to be packed with some of the most advanced AI hardware out there.
06:19We're talking about components like NVIDIA L40S GPUs, AMD Instinct processors, and some crazy stuff like Tenstorrent Wormhole server racks.
06:28If you're into the tech side of things, you know this is some serious muscle.
06:32Alright, so what's the point of all this?
06:34Well, according to the folks over at SingularityNET, the company behind this project, they're aiming to transition from current AI models,
06:41which are heavily reliant on big data, to something much more sophisticated.
06:45Their goal is to create AI that can think more like humans, with the ability to make decisions based on multi-step reasoning and dynamic world modeling.
06:53It's like moving from an AI that just repeats what it's been taught to one that can think on its own.
06:59Ben Goertzel, the CEO of SingularityNET, basically said that this new supercomputer is going to be a game changer for AGI.
07:05He talked about how their new neural symbolic AI approaches could reduce the need for massive amounts of data and energy,
07:12which is a big deal when you're talking about scaling up to something as complex as AGI.
07:16And if you're into the bigger picture, SingularityNET is part of this group called the Artificial Superintelligence Alliance, or ASI.
07:24These guys are all about open-source AI research, which means they want to make sure that as we get closer to creating AGI,
07:31the technology is accessible and transparent.
07:34Oh, and speaking of timelines, we've got some pretty bold predictions here.
07:37Some leaders in the AI space, like the co-founder of DeepMind, are saying we could see human-level AI by 2028.
07:45Ben Goertzel, on the other hand, thinks we might hit that milestone as soon as 2027.
07:50And let's not forget Mark Zuckerberg. He's also in the race, throwing billions of dollars into this pursuit.
07:53We're getting close to creating machines that could potentially surpass our intelligence.
07:59Whether that's a good or bad thing, we will soon find out.
08:02The next few years in AI are going to be absolutely insane.
08:06Alright, if you found this video helpful or interesting, don't forget to smash that like button, hit subscribe, and ring the bell so you don't miss any of my future videos.
08:14Thanks for watching, and I'll see you in the next one.