Exactly when the PS6 is coming is unfortunately not revealed here, but Mark Cerny does at least say it should arrive in a few years. This tech talk is mainly about shared technologies such as PSSR and Project Amethyst, which Sony and AMD are driving forward together.
Transcript
00:00We've been doing some really great work together with Sony.
00:03And I can't think of anyone better to do this with
00:07than my good friend, Mark Cerny.
00:09Mark, thank you so much for coming.
00:11My pleasure.
00:12I was wondering, Mark,
00:14if you'd join me,
00:15pop open the box,
00:17and give everyone a peek
00:18at the gaming breakthroughs
00:19we've been working on together,
00:21behind the scenes.
00:22Yeah, we've spent a lot of time on those, that's for sure.
00:23So today seems like the right time to talk
00:26about the new technologies we're developing,
00:28Starting with our collaboration under Project Amethyst
00:31to create the machine learning technologies of the future.
00:34As every gamer knows,
00:36it takes a complex set of system expertise
00:39to get your setup and games
00:41to deliver the highest possible experience.
00:44As we progress together,
00:46the future brings us to real-time physics,
00:49cinematic lighting, efficient asset streaming,
00:52and keeping everything in sync across a CPU and GPU
00:55with super low latency.
00:58Trying to brute force that with raw power alone
01:00just doesn't scale.
01:01That's why we're combining traditional rasterization
01:04with neural acceleration.
01:06And machine learning isn't just a neat trick anymore.
01:09It's become a real tool for developers,
01:12smarter pipelines, cleaner visuals, smoother gameplay,
01:17and more headroom to create the worlds
01:19we want to all get lost in.
01:21And that's what FSR is all about.
01:24FSR and PSSR actually come from deep co-engineering
01:27between Sony and AMD.
01:30Co-engineering the neural networks that power both technologies.
01:33And going forward, more and more what you see on screen,
01:37the detail, the fidelity, the atmosphere,
01:41it will be touched or enhanced by ML.
01:43And that means we're not just hitting new technical benchmarks.
01:46We're getting closer to the vision of the artists and creators
01:49behind the games.
01:50And the challenge comes in how we implement these systems.
01:54The neural networks found in technologies like FSR and PSSR
01:58are incredibly demanding on the GPU.
02:00They're both computationally expensive
02:02and require speedy access to large amounts of memory.
02:06The nature of the GPU fights us here.
02:08It's made up of a large number of compute units,
02:11and problems are therefore typically broken up into bite-sized pieces
02:14to enable the individual compute units to tackle them.
02:17And there's a downside to that.
02:19Subdividing a problem can cause inefficiency,
02:22or even force us to give up and find a different approach.
02:25Exactly.
02:27And that challenge got us thinking.
02:29And what came out of it is something we're calling neural arrays.
02:33Here's the idea.
02:34Instead of having a bunch of compute units all working on their own,
02:38we built a way for them to team up,
02:40to actually share data and process things together,
02:43like a single focus AI engine.
02:46Now, we're not linking the entire GPU into one mega unit.
02:50That'll be a cable management nightmare.
02:53But we are connecting the CUs within each shader engine
02:56in a smart, efficient way.
02:58And that changes the game for neural rendering.
03:01Bigger ML models, less overhead, more efficiency,
03:05and way more scalability as workloads grow.
03:08Neural arrays will allow us to process a large chunk of the screen in one go.
03:12And the efficiencies that come from that are going to be a game changer
03:16as we begin to develop the next generation of upscaling
03:19and denoising technologies together.
03:21With neural arrays, we're unlocking a whole new level of performance for ML.
03:26Not just faster, but more capable.
03:28That means better FSR, better ray regeneration,
03:32and brand new ML-powered features we're just starting to imagine.
03:38All working in real time, right on the GPU.
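The tiling trade-off described here (bite-sized pieces per compute unit versus processing a large chunk of the screen in one go) can be made concrete with a toy calculation. This is purely illustrative Python, not AMD's implementation; the tile and halo sizes are made-up numbers chosen only to show the effect:

```python
# Toy illustration (not AMD's implementation) of why pooling compute
# units helps ML workloads: a screen-space neural network tiled into
# tiny per-CU pieces must re-fetch overlapping "halo" pixels at every
# tile border, while fewer, larger tiles re-fetch far less.

def halo_overhead(screen, tile, halo):
    """Redundant pixels fetched because of per-tile halos, as a
    fraction of the useful pixels (assumes screen % tile == 0)."""
    tiles_per_side = screen // tile
    fetched = tiles_per_side ** 2 * (tile + 2 * halo) ** 2
    useful = screen ** 2
    return fetched / useful - 1.0

# A 3x3 receptive field needs a 1-pixel halo around each tile.
small = halo_overhead(screen=2160, tile=8, halo=1)    # many tiny tiles
large = halo_overhead(screen=2160, tile=216, halo=1)  # pooled CUs, big tiles
print(f"8px tiles:   {small:.1%} redundant fetches")  # ~56%
print(f"216px tiles: {large:.1%} redundant fetches")  # ~2%
```

The point is only the scaling: halo cost is proportional to tile perimeter while useful work is proportional to tile area, so letting CUs team up on larger tiles shrinks the redundant memory traffic dramatically.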
03:41And we're just getting started.
03:43As we look ahead, you'll also see dedicated innovations that bring cinematic rendering
03:48to an entirely new level.
03:50Another area we've been focusing on has been ray tracing.
03:52When I look at its broad usage on PlayStation 5 for reflections, shadows, and global illumination,
03:58it's difficult to believe that it's been just five years since ray tracing was introduced.
04:03Definitely, Mark.
04:04And now with path tracing becoming more central to real-time graphics,
04:09the demands on the GPU just continue to grow.
04:12That's why we've been pushing hard to go beyond the current approach
04:16to help developers bring even more realism and cinematic lighting into their games.
04:21But the challenge is that the current approach has reached its limit.
04:24To perform ray tracing today, a shader program has to juggle two very different responsibilities.
04:30One is ray traversal, digging through complex data structures to locate where the millions of rays
04:36being cast hit the millions of triangles in the scene geometry.
04:40When there are intersections, that same shader program has to also be doing its usual work of
04:46shading the scene, using texture and lighting information and the like.
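The two responsibilities Mark describes one shader program juggling can be sketched in miniature. This is a toy, self-contained Python sketch with a hard-coded two-sphere scene (real traversal digs through a BVH over millions of triangles on the GPU); the point is only that today a single routine does both the traversal and the shading, which is the coupling Radiance Cores would break apart:

```python
import math

# Toy sketch (illustrative only, not real shader code) of one routine
# doing both jobs: traversal to find the hit, then shading at the hit.
SPHERES = [((0.0, 0.0, 3.0), 1.0), ((0.5, 0.5, 5.0), 1.0)]  # (center, radius)
LIGHT = (5.0, 5.0, 0.0)

def trace_and_shade(origin, direction):
    # Job 1: traversal - find the closest intersection along the ray.
    # A real engine walks an acceleration structure here.
    closest_t, closest = math.inf, None
    for center, radius in SPHERES:
        oc = tuple(o - c for o, c in zip(origin, center))
        b = 2 * sum(d * x for d, x in zip(direction, oc))
        c = sum(x * x for x in oc) - radius * radius
        disc = b * b - 4 * c
        if disc >= 0:
            t = (-b - math.sqrt(disc)) / 2
            if 0 < t < closest_t:
                closest_t, closest = t, (center, radius)
    if closest is None:
        return 0.0  # background
    # Job 2: shading - the same program then evaluates lighting
    # (here a bare Lambert term) at the intersection point.
    center, radius = closest
    hit = tuple(o + closest_t * d for o, d in zip(origin, direction))
    normal = tuple((h - c) / radius for h, c in zip(hit, center))
    to_light = tuple(l - h for l, h in zip(LIGHT, hit))
    norm = math.sqrt(sum(x * x for x in to_light))
    to_light = tuple(x / norm for x in to_light)
    return max(0.0, sum(n * l for n, l in zip(normal, to_light)))

brightness = trace_and_shade((0.0, 0.0, 0.0), (0.0, 0.0, 1.0))
```

Splitting the traversal half into dedicated hardware means the shader cores only ever see the second half of this function.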
04:50And we spent the past two years rethinking the entire path tracing pipeline from hardware to software.
04:57Early this year at Computex, we introduced neural radiance caching, a key part of FSR Redstone.
05:05Now, we're building on that with Radiance Cores, a new dedicated hardware block designed for unified light transport.
05:14It handles ray tracing and path tracing in real time, pushing lighting performance to a whole new level.
05:20Together, these form a brand new rendering approach for AMD.
05:25Radiance Cores takes full control of ray traversal, one of the most compute-heavy parts of the process.
05:32And that frees up the CPU for geometry and simulation, and lets the GPU focus on what it does best, shading and lighting.
05:40The result? A cleaner, faster, and more efficient pipeline built for the next generation of ray-traced games.
05:48There's a significant speed boost that comes from putting the traversal logic in hardware,
05:52and a further boost that comes from having that hardware operate independently from the shader cores.
05:57On top of those performance increases, there's other features in the works too, such as flexible and
06:03efficient data structures for the geometry being ray traced.
06:07Overall, I'm really looking forward to the time when we can get Radiance Cores into the hands of game creators.
06:13And we're excited to see how developers push ray tracing and path tracing even further with these tools.
06:19And here's the thing, whether it's ML or ray tracing, they both hit the same bottleneck.
06:26Current GPU memory bandwidth limitations hinder the seamless adoption of next-gen rendering techniques,
06:32requiring significantly more bandwidth to handle 4K+ textures and ray-tracing denoising math for smooth
06:39asset streaming. And that's where a final piece of news comes in. And yeah, it's a big one.
06:44With current GPUs, including the ones in PlayStation 5 and PlayStation 5 Pro,
06:48we have something called DCC, or Delta Color Compression. It's a strategy that reduces the
06:54memory bandwidth consumed when the GPU is reading or writing certain data, such as textures or render targets.
07:01And what we've built for future GPUs and SoCs takes the idea of data compression much further.
07:07We call it universal compression. It's a system that evaluates every piece of data headed to memory,
07:14not just textures, and compresses it whenever possible. Only the essential bytes are sent out,
07:20which dramatically reduces memory bandwidth usage. That means the GPU can deliver more detail,
07:26higher frame rates, and greater efficiency.
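The core idea behind delta-style compression can be shown with a toy encoder. This is a deliberately simplified Python sketch; the real DCC and universal-compression hardware are far more sophisticated and operate transparently on all memory traffic, but the principle is the same: when neighboring values are similar, only the small differences need to travel to memory.

```python
# Toy sketch of delta encoding (the principle behind DCC-style
# compression, not the actual hardware format): store one base value
# plus small deltas instead of every raw value.

def delta_compress(block):
    """Encode a block of 8-bit values as (base, deltas)."""
    base = block[0]
    deltas = [v - base for v in block[1:]]
    return base, deltas

def bits_needed(base, deltas):
    # 8 bits for the base, plus enough bits per delta to cover the
    # widest one (sign bit included); a real codec packs per block.
    widest = max((abs(d) for d in deltas), default=0)
    per_delta = max(1, widest.bit_length() + 1)
    return 8 + per_delta * len(deltas)

# A smooth gradient, as in most textures and render targets:
block = [200, 201, 203, 202]
base, deltas = delta_compress(block)
compressed = bits_needed(base, deltas)  # 17 bits
raw = 8 * len(block)                    # 32 bits
```

Decoding just adds each delta back onto the base, so the saving costs almost nothing to undo; the bandwidth win comes from only the essential bits crossing the memory bus.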
07:29Here too, I'm really looking forward to what improvements universal compression will bring,
07:33and to what degree the effective bandwidth of the GPU will exceed its paper spec.
07:39There's a multitude of benefits from this, including lower power consumption,
07:43higher fidelity assets, and perhaps most importantly, the synergies that universal compression
07:49has with neural arrays and radiance cores, as we work to deliver the best possible experiences to gamers.
07:56Overall, it's of course still very early days for these technologies. They only exist in simulation right now,
08:02but the results are quite promising, and I'm really excited about bringing them to a future console
08:07in a few years' time.
08:08We feel the same way, Mark, and we're so excited to bring these innovations to developers across
08:14every gaming platform. Because this isn't just about silicon, it's about empowering the creators and
08:20communities that make gaming what it is. And we're just getting started. As we continue building with
08:26close partners like Sony, everything we're doing is focused on one thing, pushing games forward for all of you.
08:33Gaming has always been at the heart of what we do, and it's never meant more than it does right now.
08:38We're here for the players, the creators, and the communities that make this industry matter,
08:44and everything we're building is for you. Thank you, Mark, so much for taking the time to join me here today.
08:49Thank you, Jack.