AMD’s Jack Huynh, SVP and GM, Computing and Graphics Group, and Mark Cerny, lead architect of PS5 and PS5 Pro, discuss the latest developments from Project Amethyst – the collaboration between Sony Interactive Entertainment and AMD focused on Machine Learning-based technology for graphics and gameplay – a shared commitment to push gaming technology forward. Mark and Jack share three gaming technology breakthroughs that will lead to benefits across the gaming industry in the future.
Transcript
00:00We've been working on some seriously cool gaming tech with Sony and I couldn't think of a better person to walk us through it than my good friend Mark Cerny. Mark, thank you so much for coming out to Austin.
00:11My pleasure.
00:13I've asked Mark to join me to pop open the box and give you a peek at some of the gaming breakthroughs we've been working on together behind the scenes.
00:22Yes, we've been busy, that's for sure.
00:23So today it seemed like it might be fun to share a bit about the brand new technologies being developed, starting with our collaboration under Project Amethyst to create the machine learning technologies of the future.
00:34As every gamer knows, it takes deep system expertise to get your setup and your games delivering the highest possible experience.
00:43As we progress together, the future brings us to real-time physics, cinematic lighting, efficient asset streaming, and keeping everything in sync across the CPU and GPU with super low latency.
00:58Trying to brute force that with raw power alone just doesn't scale.
01:01That's why we're combining traditional rasterization with neural acceleration.
01:06And machine learning isn't just a neat trick anymore.
01:08It's become a real tool for developers: smarter pipelines, cleaner visuals, smoother gameplay, and more headroom to create the worlds we all want to get lost in.
01:21And that's what FSR is all about.
01:24FSR and PSSR actually come from deep co-engineering between Sony and AMD.
01:30Co-engineering the neural networks that power both technologies.
01:33And going forward, more and more what you see on screen, the detail, the fidelity, the atmosphere, it will be touched or enhanced by ML.
01:43And that means we're not just hitting new technical benchmarks.
01:46We're getting closer to the vision of the artists and creators behind the games.
01:50And the challenge comes in how we implement these systems.
01:53The neural networks found in technologies like FSR and PSSR are incredibly demanding on the GPU.
02:00They're both computationally expensive and require speedy access to large amounts of memory.
02:06The nature of the GPU fights us here.
02:08It's made up of a large number of compute units.
02:11And problems are therefore typically broken up into bite-sized pieces to enable the individual compute units to tackle them.
02:18And there's a downside to that.
02:19Subdividing a problem can cause inefficiency or even force us to give up and find a different approach.
02:26Exactly.
02:26And that challenge got us thinking.
02:29And what came out of it is something we're calling neural arrays.
02:33Here's the idea.
02:34Instead of having a bunch of compute units all working on their own, we built a way for them to team up.
02:40To actually share data and process things together, like a single, focused AI engine.
02:46Now, we're not linking the entire GPU into one mega unit.
02:50That'll be a cable management nightmare.
02:53But we are connecting the CUs within each shader engine in a smart, efficient way.
02:58And that changes the game for neural rendering.
03:01Bigger ML models.
03:02Less overhead.
03:04More efficiency.
03:05And way more scalability as workloads grow.
03:08Neural arrays will allow us to process a large chunk of the screen in one go.
03:12And the efficiencies that come from that are going to be a game changer as we begin to develop the next generation of upscaling and denoising technologies together.
03:21With neural arrays, we're unlocking a whole new level of performance for ML.
03:25Not just faster, but more capable.
03:29That means better FSR, better ray regeneration, and brand new ML-powered features we're just starting to imagine.
03:38All working in real time, right on the GPU.
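The overhead Mark describes when a problem is subdivided across compute units can be illustrated with a toy model. This is not AMD's implementation, just a sketch of the arithmetic: a screen-space filter that needs a one-pixel halo of neighbors must re-fetch border pixels around every tile, so many small independent tiles waste proportionally more bandwidth than fewer, larger tiles processed cooperatively, which is the idea behind neural arrays.

```python
# Toy model (illustrative only): redundant border reads caused by tiling a
# screen-space operation that needs a 1-pixel halo around each tile.

def halo_overhead(screen_w, screen_h, tile, halo=1):
    """Fraction of extra pixels fetched because of per-tile halos."""
    tiles_x = screen_w // tile
    tiles_y = screen_h // tile
    fetched_per_tile = (tile + 2 * halo) ** 2   # pixels each tile must read
    useful_per_tile = tile * tile               # pixels each tile produces
    total_fetched = tiles_x * tiles_y * fetched_per_tile
    total_useful = tiles_x * tiles_y * useful_per_tile
    return total_fetched / total_useful - 1.0

# One CU working alone on small 8x8 tiles wastes far more reads than a
# group of CUs cooperating on a single 64x64 tile.
small_tiles = halo_overhead(1024, 1024, 8)    # roughly 56% extra reads
large_tiles = halo_overhead(1024, 1024, 64)   # roughly 6% extra reads
assert small_tiles > large_tiles
```

The tile sizes and the 1-pixel halo are arbitrary; real ML workloads have much deeper halos (convolution receptive fields), which makes the gap even wider.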
03:41And we're just getting started.
03:43As we look ahead, you'll also see dedicated innovations that bring cinematic rendering to an entirely new level.
03:49Another area we've been focusing on has been ray tracing.
03:53When I look at its broad usage on PlayStation 5 for reflections, shadows, and global illumination,
03:59it's difficult to believe that it's been just five years since ray tracing was introduced.
04:03Definitely, Mark.
04:04And now with path tracing becoming more central to real-time graphics, the demands on the GPU just continue to grow.
04:12That's why we've been pushing hard to go beyond the current approach to help developers bring even more realism
04:18and cinematic lighting into their games.
04:21But the challenge is that the current approach has reached its limit.
04:25To perform ray tracing today, a shader program has to juggle two very different responsibilities.
04:31One is ray traversal, digging through complex data structures to locate where the millions of rays being cast
04:37hit the millions of triangles in the scene geometry.
04:40When there are intersections, that same shader program has to also be doing its usual work of shading the scene,
04:47using texture and lighting information and the like.
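The two responsibilities Mark describes can be sketched in a few lines. This is a hypothetical pure-Python stand-in (the scene format and function names are invented, not any real shader API): one program interleaves traversal, standing in for walking a BVH, with shading. A dedicated block like Radiance Cores would take over the traversal step so the shader cores only shade.

```python
# Hypothetical sketch of the two jobs one shader program juggles today:
# traversal (finding what a ray hits) and shading (computing the color).
# The "scene" is just a list of (position, color) walls on a 1-D line.

def traverse(scene, ray_origin, ray_dir):
    """Stand-in for BVH traversal: find the nearest positive-distance hit."""
    best = None
    for pos, color in scene:
        t = (pos - ray_origin) / ray_dir
        if t > 0 and (best is None or t < best[0]):
            best = (t, color)
    return best

def shade(hit):
    """Stand-in for texturing/lighting: attenuate the color by distance."""
    t, color = hit
    return tuple(c / (1.0 + t) for c in color)

def render(scene, rays):
    # Today's model: one program alternates traversal and shading per ray,
    # so the shading hardware idles while traversal digs through the scene.
    hits = (traverse(scene, origin, direction) for origin, direction in rays)
    return [shade(h) for h in hits if h is not None]

scene = [(5.0, (1.0, 0.0, 0.0)), (2.0, (0.0, 1.0, 0.0))]
image = render(scene, [(0.0, 1.0)])  # one ray marching in +x hits the wall at x=2
```

Offloading `traverse` to fixed-function hardware is the design change described next; the shader then only ever runs `shade`.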
04:50And we spent the past two years rethinking the entire path tracing pipeline from hardware to software.
04:58Early this year at Computex, we introduced neural radiance caching, a key part of FSR Redstone.
05:05Now, we're building on that with Radiance Cores, a new dedicated hardware block designed for unified light transport.
05:14It handles ray tracing and path tracing in real time, pushing lighting performance to a whole new level.
05:21Together, these form a brand new rendering approach for AMD.
05:25Radiance Cores takes full control of ray traversal, one of the most compute-heavy parts of the process.
05:32And that frees up the CPU for geometry and simulation, and lets the GPU focus on what it does best, shading and lighting.
05:39The result? A cleaner, faster, and more efficient pipeline, built for the next generation of ray-traced games.
05:48There's a significant speed boost that comes from putting the traversal logic in hardware,
05:52and a further boost that comes from having that hardware operate independently from the shader cores.
05:57On top of those performance increases, there's other features in the works too,
06:01such as flexible and efficient data structures for the geometry being ray traced.
06:07Overall, I'm really looking forward to the time when we can get Radiance Cores into the hands of game creators.
06:13And we're excited to see how developers push ray tracing and path tracing even further with these tools.
06:19And here's the thing, whether it's ML or ray tracing, they both hit the same bottleneck.
06:26Current GPU memory bandwidth limitations hinder the seamless adoption of next-gen rendering techniques,
06:32requiring significantly more bandwidth to handle 4K+ textures and ray-tracing denoising masks for smooth asset streaming.
06:40And that's where our final piece of news comes in. And yeah, it's a big one.
06:43With current GPUs, including the ones in PlayStation 5 and PlayStation 5 Pro,
06:48we have something called DCC, or Delta Color Compression.
06:52It's a strategy that reduces the memory bandwidth consumed when the GPU is reading or writing certain data,
06:58such as textures or render targets.
07:01And what we've built for future GPUs and SoCs takes the idea of data compression much further.
07:07We call it universal compression.
07:10It's a system that evaluates every piece of data headed to memory, not just textures,
07:16and compresses it whenever possible.
07:18Only the essential bytes are sent out, which dramatically reduces memory bandwidth usage.
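The delta-compression idea behind DCC can be shown with a toy encoder. This is illustrative only, not AMD's actual on-chip format: neighboring pixels are usually similar, so storing a base value plus small per-pixel deltas needs fewer bits than storing every value in full, and the scheme must stay lossless with a raw fallback for blocks that don't compress.

```python
# Toy sketch of delta compression (not AMD's real DCC format): encode a row
# of pixel values as a base value plus signed deltas, falling back to raw
# storage when any delta is too large for a single byte. The round trip is
# lossless, which render targets require.

def delta_compress(pixels):
    deltas = [b - a for a, b in zip(pixels, pixels[1:])]
    if all(-128 <= d <= 127 for d in deltas):   # every delta fits in 1 byte
        return ("delta", pixels[0], deltas)
    return ("raw", list(pixels))                # incompressible block

def delta_decompress(blob):
    kind, *payload = blob
    if kind == "raw":
        return payload[0]
    base, deltas = payload
    out = [base]
    for d in deltas:
        out.append(out[-1] + d)                 # rebuild by accumulating deltas
    return out

row = [1000, 1001, 1001, 1003, 1002, 1004]      # smooth gradient: compresses
assert delta_decompress(delta_compress(row)) == row
```

Universal compression, as described here, generalizes this per-block "compress only when it pays off" decision from textures and render targets to every piece of data headed to memory.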
07:23That means the GPU can deliver more detail, higher frame rates, and greater efficiency.
07:29Here too, I'm really looking forward to what improvements universal compression will bring,
07:33and to what degree the effective bandwidth of the GPU will exceed its paper spec.
07:39There's a multitude of benefits from this, including lower power consumption, higher fidelity assets,
07:44and perhaps most importantly, the synergies that universal compression has with neural arrays and radiance cores,
07:52as we work to deliver the best possible experiences to gamers.
07:56Overall, it's of course still very early days for these technologies.
07:59They only exist in simulation right now, but the results are quite promising,
08:04and I'm really excited about bringing them to a future console in a few years' time.
08:08We feel the same way, Mark, and we're so excited to bring these innovations to developers across every gaming platform.
08:17Because this isn't just about silicon,
08:19it's about empowering the creators and communities that make gaming what it is.
08:23And we're just getting started.
08:24As we continue building with close partners like Sony,
08:27everything we're doing is focused on one thing,
08:30pushing games forward for all of you.
08:33Gaming has always been at the heart of what we do,
08:36and it's never meant more than it does right now.
08:39We're here for the players, the creators, and the communities that make this industry matter.
08:44And everything we're building is for you.
08:47Thank you, Mark, so much for taking the time to join me here today.
08:50Thank you, Jack.