Transcript
00:02Hi, everybody.
00:04The keynote prior to this session
00:06was the first time Google pulled back the curtain on Stadia.
00:10I'm sure that's left everyone with a healthy appetite
00:13to learn more about the platform
00:14and what it's like to develop a game on it.
00:17I can tell you that id has been working with Google
00:19for two and a half years,
00:21and in light of that relationship,
00:23instead of using my time to dig deep into any one subject,
00:26I instead want to convey the larger picture
00:28of bringing a game to Stadia.
00:30Hopefully, this will provide you with more context
00:33as you move forward with bringing your own games to the platform.
00:37But first, who am I?
00:40Obviously, I'm not Marty.
00:42I'm not pretty enough.
00:45I won't be as brief as our CTO, Robert Duffy.
00:49I'm not your typical face you'd see representing our studio.
00:54Like Uswa said, nevertheless,
00:56I've been working with id now for 11 years,
00:59and for the last two and a half years,
01:00I've been working with Google
01:01and helping manage the studio's relationship.
01:06I've done a lot of things for id over my tenure.
01:09Currently, I'm the programming lead of a small internal R&D team,
01:13with Stadia being one of the projects we've been focused on.
01:17I did give a talk last year titled,
01:19Getting Explicit: How Hard Is Vulkan Really?
01:22Some of your answers were: very.
01:25So don't worry.
01:26We'll talk a bit about Vulkan today,
01:29because that's a big part of Stadia.
01:33And that talk was spawned by a blog series I wrote titled,
01:36I Am Graphics, And So Can You,
01:37which conveyed my experience porting Doom 3 to Vulkan.
01:40The blog was surprisingly successful in spite of my best efforts.
01:45It's something I'm still getting book offers over.
01:50So enough about me, it's story time.
01:52And we begin our story where every good one in our industry begins,
01:56at a press conference.
01:59It was E3 2016, and we had just shipped Doom,
02:02and we're moving forward with DLC and Doom VFR.
02:05Behind the scenes at the event,
02:07Google approached our CTO, Robert Duffy,
02:09and once he heard the words Google, Linux, Vulkan,
02:13and streaming in the same sentence, he was sold.
02:17But other than Doom's notoriety at the time,
02:20why did Google come to us specifically?
02:24First, they knew we would be an acid test for their technology.
02:27A large amount of work done by a video encoder is motion estimation.
02:31Doom is a fast-paced twitch shooter that runs at 60 FPS or higher.
02:37So if streaming couldn't be solved for us,
02:39was it worth pursuing at all?
02:41In a sense, Google was challenging themselves
02:43by front-loading one of their more problematic experiences.
02:47Second, our technology.
02:49Early on, they had decided to use Linux and Vulkan
02:51as cornerstones for their platform.
02:53Not only are we historically friendly to Linux,
02:57but we were one of the first titles to ship a Vulcan binary in the wild.
03:01And lastly, we had a reputation for overall good technology,
03:04good IP, and being willing to roll the dice on new endeavors.
03:09So Duffy returned from E3
03:11and started disseminating the news to the team.
03:14There are a lot of ways I can convey our first impressions,
03:17but I think this describes it best.
03:21Obviously, everyone was excited for a change in the landscape,
03:24any technical challenges to overcome,
03:26not so excited about the method of delivering the experience.
03:30Our main bias was not that dissimilar to everyone else's at the time.
03:33Game streaming, you mean that thing that adds latency
03:36to a project I'm working feverishly to remove latency from.
03:40But still, it was Google.
03:42How crazy could an Internet giant be?
03:44We'll hear them out.
03:47So a meeting between the two teams was scheduled
03:49at our studio on September 13th.
03:52A good majority of our engineering team was in attendance,
03:55so we had to use our theater.
03:56The product pitch we received hasn't changed really all that much
04:00between then and today.
04:01Essentially, they told us that they had been watching
04:04the YouTube numbers for years,
04:06waiting for enough of their online population
04:08to spill over into a bucket considered performant enough
04:11for great game streaming to be viable.
04:14Because of this, the technical design of the platform
04:17has been relatively consistent as well.
03:19We probably spent more time in Q&A,
04:22mainly because it evolved into a blue-sky discussion
04:25in which Google was feverishly taking notes
04:27on all of our feedback.
04:29During the presentations,
04:31a couple members of the Stadia team
04:33were busy setting up dev kits in our server room
04:34because when the talks were over,
04:36we did a department-wide code lab
04:38as a quick and dirty way to get context
04:39on working with the technology.
04:42After that, and being in Texas,
04:44Google was ready to grab some actual Tex-Mex.
04:46I remember this day vividly
04:48because our team got the wrong address
04:51from one of our own, not naming any names,
04:53and we all spent about an hour in rush hour traffic
04:56before discovering it was a restaurant by the office.
05:03After that, we started work on the bring-up
05:05of Doom onto Stadia.
05:06It took us about three weeks
05:09to produce the screenshots I have up here.
05:13There were two of us working on it full-time.
05:17To our astonishment, Vulcan and the graphics driver
05:19just worked.
05:21I should say, everyone but Axel
05:23who wrote the renderer was astonished.
05:25The system code itself was also fairly trivial
05:28because we already had Linux support,
05:30and the Stadia OS is essentially an Ubuntu kernel.
05:34The not-so-easy parts were integrating their tool chain
05:37into our build process.
05:38I'll speak more on this later.
05:41Also, the debugger had obviously not been thrown
05:44at a code base of our size,
05:45so there were a lot of issues there
05:47which we helped them iterate on.
05:48And lastly, there were also issues
05:50with their asset management pipeline
05:52for getting data to the system.
05:55And, frankly, the results were lackluster at first.
05:58Video and audio quality were good,
05:59but lag was definitely apparent,
06:01and we expected something better
06:02being on our internal network.
06:04At the end of all of this,
06:06we delivered a build to Google,
06:07and they went to work cutting the technology's teeth on it.
06:12So Google took our build
06:13and iterated and iterated on the core streaming technology,
06:16and one day we got an excited email saying
06:18they wanted to come back out
06:19and give us a hands-on demo.
06:22So they flew back in on November 15th,
06:25and we set them up in Spider Mastermind.
06:27Spider is by far our largest conference room.
06:30The demo consisted of a cloud instance in DFW,
06:37their own wireless router they had brought,
06:39our wall-mounted TV, a Chromebook,
06:42and someone's Android phone
06:43that acted as a proxy for input going to the TV.
06:49Needless to say, it was a night and day difference.
06:51We were so stunned by how much things had improved
06:54that we wanted to do a blind test
06:56with someone not involved in the meetings.
06:58We had one of our senior designers, Peter Sokol,
07:01come up and play without telling him what he was playing.
07:05After a few minutes, he said,
07:06yep, that's Doom.
07:09We asked him how it felt.
07:11He said, feels like someone just forgot
07:13to enable game mode on their TV.
07:15After that success,
07:16we went on to have another seven to eight hours of meetings
07:19before going out to a steak dinner,
07:20and this time we didn't get lost.
07:23It should be noted that it no longer feels like
07:25someone forgot to enable game mode on their TV.
07:28This was over two years ago.
07:31We've had many more on-site demos since then
07:33in which things kept getting better
07:34until we could hardly tell what was local
07:36and what was remote.
07:39Also, it's quite a jarring experience
07:41to know how well your game runs
07:43on a particular piece of hardware,
07:44and then to see it run on a potato machine
07:46you know doesn't have the silicon
07:48to match your experience.
07:49And today, the demos are going to be run
07:51from a Pixelbook just to demonstrate that.
07:57So the origin story ends at an unknown location in time,
08:00but definitely before today.
08:03Since receiving our first build,
08:05Google had set up a Pepsi challenge scenario
08:08that people could come by and test.
08:10One station was Stadia,
08:12and the other was a local piece of hardware.
08:14They did this to keep themselves honest
08:16and really drill down
08:17to eliminating perceivable differences
08:19in the play experience.
08:21They also wanted to demonstrate
08:22that Stadia could be superior
08:24to a local experience in certain ways.
08:28The results of that were successful.
08:30The experience was great,
08:31even enough to get non-gamers
08:33in the company excited,
08:34and it was exciting enough
08:35to get Google's executives
08:36to greenlight the project for production,
08:38and that is what has brought
08:40everybody here today.
08:42All right, so having seen that,
08:44let's talk about the elephant in the room
08:46for a minute besides live demos.
08:48I don't know exactly
08:49where the server I just played on is located,
08:52but I do know it's within a 64-kilometer radius.
08:55To talk about internet speeds,
08:57we need to talk about the speed of light itself.
09:00As you may know,
09:01light travels roughly 300,000 kilometers a second.
09:04This number is so ridiculous
09:06as to be meaningless to our human perceptions.
09:09To humanize it a bit,
09:10let's do an experiment together.
09:12Everyone stick out your arm,
09:15and then run your finger
09:16from your elbow to your wrist.
09:20All right?
09:21In a nanosecond,
09:22light roughly travels 30 centimeters
09:23or 12 inches,
09:25which is about the length of your arm
09:26or what you just traced out.
09:28In a microsecond,
09:29it can get from Moscone South
09:31to Moscone West
09:32because it's late for another session.
09:36Obviously, it's not going that fast in a fiber.
09:40The typical refractive index
09:42for light at fiber optic wavelengths
09:44is 1.45-ish,
09:46which means it's traveling
09:47at roughly two-thirds its normal speed.
09:49That puts the server we just used
09:51within a 320-microsecond radius.
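That arithmetic can be sketched in a few lines of Python. The constants (300,000 km/s, a refractive index of 1.45, a 64 km radius) come straight from this talk; the function name is my own:

```python
# Back-of-the-envelope one-way propagation delay through fiber,
# using the figures from the talk.
C_KM_PER_S = 300_000      # speed of light in vacuum, roughly
REFRACTIVE_INDEX = 1.45   # typical for light at fiber-optic wavelengths

def fiber_delay_us(distance_km: float) -> float:
    """One-way propagation delay through fiber, in microseconds."""
    speed = C_KM_PER_S / REFRACTIVE_INDEX  # ~207,000 km/s, about 2/3 of c
    return distance_km / speed * 1_000_000

# The server in the demo is within a 64 km radius:
print(round(fiber_delay_us(64)))  # prints 309; the talk's ~320 us figure
# uses the rounder two-thirds-of-c approximation (64 km / 200,000 km/s).
```

Of course, as noted below, this is only the physical floor; switching, congestion, and encode/decode all sit on top of it.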
09:55Now, we know those speeds aren't reality
09:59for the open internet.
10:00You have switching.
10:01You have congestion.
10:03Packet loss issues.
10:04Endpoint processing of data,
10:05such as video encode and decode,
10:07is very time-consuming as well.
10:09And this is all on top
10:10of the last-mile delivery
10:11and consumer AV hardware.
10:14The upside is these are all things
10:16that can be addressed
10:18and are continuously being addressed
10:20by the various industries involved
10:21in maintaining and building the internet.
10:24I'm not going to get into the rise
10:26of silicon photonics
10:27and networking gear
10:27or gaming-specific video encode hardware,
10:30but I did want to stop for a moment
10:32to acknowledge everyone's major concern
10:34with streaming in general.
10:34If you want to hear more details
10:37about how Google is tackling these issues,
10:39come back for the talk at 4:30
10:41titled Gaming in the Cloud:
10:43A Technical Deep Dive.
10:45Now, let's move on to talking
10:46about some of the cornerstone technology
10:49of the platform.
10:51Linux and Vulkan are a large part of Stadia
10:54and are worth talking about all on their own.
10:56Obviously, Vulkan on Linux
10:58has been on the market
10:59for several years now.
11:01Linux itself is ubiquitous,
11:02but that it doesn't mean
11:04that our industry hasn't been largely avoiding them
11:07up to this point.
11:08So I wanted to take some time
11:09to address where both technologies are
11:11and say that this is a good time
11:13to adopt them.
11:16So why was our bring-up on Stadia so fast?
11:19The TLDR is that we were basically there.
11:24For Vulkan, we shipped a 1.0 renderer in 2016.
11:27I think we shipped somewhere between 17 and 23
11:31on the minor version.
11:33Our code base is very comfortable
11:35running on GCN architecture,
11:37which also powers Stadia.
11:40Going to Linux was trivial
11:42because there were surface extensions in place
11:44and Linux was a focus for Kronos early on.
11:49Similarly, IHVs were starting fresh
11:51with their drivers
11:51and we were told that Windows and Linux drivers
11:54shared the same source code,
11:55which made porting even easier.
11:58As for Linux itself,
12:01id has always historically been associated with it.
12:03There is a legacy of it being prevalent
12:05in our code base
12:06and it has never not been present
12:07in our code in some form.
12:10For Doom 2016,
12:12we had shipped our dedicated server on Linux,
12:14specifically running in Ubuntu containers.
12:17This meant the core game loop
12:18and system-level code
12:19worked to a shipping standard.
12:21It was headless, however,
12:22so it lacked video, audio, and input.
12:25Later in 2016,
12:28a couple of our colleagues
12:28at our Frankfurt branch
12:29took it upon themselves
12:30to bring it to full completion.
12:34So why exactly did we adopt Vulkan
12:36to begin with, though?
12:39The long and short of it
12:40is mainly because it's an elegant modern API
12:43and it allows us to do things
12:44that we couldn't with legacy ones.
12:47We found its design
12:48to be very intuitive
12:49and a good mate
12:50to how we approach rendering
12:51on the other half-dozen platforms we support.
12:55One thing we found
12:56that was very useful,
12:57especially early on,
12:59was the ability
12:59to take advantage
13:00of new IHV features
13:01immediately through Vulkan's support
13:03of new extensions.
13:05Also,
13:07Khronos' goal
13:07has always been
13:08to broaden its support.
13:09Today,
13:10it supports Stadia,
13:11multiple versions of Windows,
13:13Linux, Android,
13:15and through MoltenVK,
13:16OS X,
13:16and iOS.
13:19We immediately found
13:20easy performance gains
13:21on the CPU side
13:22and over time
13:24have realized
13:25more and more GPU gains
13:27that would not have been possible
13:28with legacy APIs.
13:31When we first adopted it,
13:33there weren't really any good tools
13:34except RenderDoc,
13:35but now that situation
13:36has changed
13:37and the community
13:38has supplied a number
13:39of useful add-ons.
13:40And frankly,
13:42the documentation matters
13:43a great deal
13:43and compared to a lot of specs
13:45I've read over the years,
13:46it is relatively easy
13:47to digest
13:47and navigate.
13:51Now,
13:51we were already married
13:52to the OpenGL renderer
13:53that we had shipped
13:54with Doom
13:54and that kept us
13:55from realizing
13:56all the gains
13:56we wanted
13:57in our initial implementation.
13:58However,
13:59we did learn a great deal
14:00from that renderer
14:01which we carried forward
14:02into id Tech 7,
14:03Wolf 2,
14:04and now Doom Eternal.
14:07In Doom 2016,
14:08we immediately split up
14:10the CPU work
14:11to allow multiple threads
14:12to generate
14:12command buffers
14:13for the frame.
14:15We implemented
14:16a great deal
14:17of functionality
14:17into Async Compute,
14:19including our GPU particle
14:20simulation
14:21and post-processing.
14:23Due to that,
14:24we also took advantage
14:25of present
14:26from Compute Queue.
14:28We also did a lot
14:29of hand-tuning
14:29of our shaders
14:30for individual architectures,
14:32specifically AMD
14:33and NVIDIA,
14:35but we still managed
14:36to maintain
14:36only 350 total graphics
14:38pipelines across
14:39the entire game.
14:40We avoid shader
14:41permutations entirely.
14:42We instead go
14:44with uber shaders
14:44deployed over long
14:45portions of the frame
14:46to avoid switching
14:47and instruction cache thrashing.
14:51So where is Vulkan today?
14:53It has come a long way
14:54in the last three years.
14:56Like I said,
14:57we released our renderer
14:58early in Vulkan's life,
15:00and we did pay
15:01an early adopter's fee
15:02for that.
15:03There are always issues
15:04with a small pool of users,
15:05and Vulkan was no exception.
15:08However,
15:09Khronos,
15:10the community of ISVs,
15:11and IHVs moved rapidly
15:13to supply much-needed tools,
15:15conformance tests,
15:17shader toolchain improvements,
15:18et cetera,
15:19over the next two years.
15:20Despite this,
15:22adoption by game devs
15:24was still slow,
15:25and this really can be
15:26summed up more as,
15:27if it's not broke,
15:28don't invest a shitload
15:29of money into fixing it.
15:31Most engines
15:32have a solid rendering pipeline
15:33with a legacy API.
15:35Why would you invest heavily
15:36in switching your pipeline out
15:38and introducing that risk?
15:40The market incentive
15:41was not that strong.
15:43Still,
15:44by the time 1.1 rolled around,
15:46a good number of Vulkan games
15:47had shipped.
15:48Almost all the major engines
15:50on the market
15:51had support for it
15:52in some form or fashion,
15:53and the ecosystem itself
15:55has matured a great deal.
15:56All in all,
15:57Vulkan is really solid
15:58thanks to the talented people
15:59working on the technology.
16:01And I would be remiss
16:02to not mention
16:03that Google
16:03has had a hand
16:04in its maturation
16:05since before its release,
16:07as the Stadia project alone
16:08has several members
16:09involved directly
16:11with Khronos working groups.
16:15Now,
16:15to talk about
16:16why Stadia uses Linux.
16:22I think that sums it up,
16:24don't you?
16:25We still love you, Windows.
16:29Despite how rock-solid
16:30these technologies are,
16:31they are still outliers
16:32when it comes
16:33to the consumer side of things.
16:35Historically,
16:36our industry
16:36has been averse to Linux,
16:37with it representing
16:39anywhere from 1% to 2%
16:40of the consumer market.
16:42That 1% to 2%
16:43is propped up
16:44by a decades-long stream
16:45of articles declaring,
16:47Linux gaming
16:48doesn't suck now.
16:50It's very convincing.
16:52And even that small portion
16:53is subdivided
16:54along different distributions.
16:56So it's really no wonder
16:57that no one pays attention
16:58to poor Linux.
16:59However,
17:01it is 70% or more
17:02of the server market.
17:03Again,
17:05because it's not Windows.
17:06In essence,
17:07it is the OS
17:08of the internet itself.
17:09Every time you browse
17:10a website,
17:11stream a video,
17:12perform a search,
17:13message someone,
17:14you are likely hitting
17:15the Linux kernel
17:16in some way.
17:17Still,
17:18it is perceived
17:18as an outlier
17:19in our industry.
17:21And that goes for Vulkan, too.
17:23As I mentioned,
17:24if it isn't broke,
17:25don't fix it.
17:26And these reasons make sense.
17:28They make financial sense
17:29and they make product sense
17:31up until today.
17:34We estimate that Stadia's
17:36entry into the market
17:38expands our reachable audience
17:39by a factor of 10.
17:41The reasons for that include
17:44it gives considerably easier
17:46access to mobile.
17:47The intersection
17:48of AAA experiences
17:49that can be ported
17:50to a phone
17:51is very small.
17:53And typically,
17:53if a studio targets mobile,
17:55the product or IP
17:56is entirely a new one.
17:58Also,
17:59it runs on potato machines.
18:01Stadia has demonstrated
18:03the experience many times
18:04using its Chromebooks
18:05as you've seen today.
18:07Many people got to experience
18:08Project Stream
18:08on their MacBooks.
18:10Did anybody play
18:11in Project Stream?
18:13Yeah.
18:14Good number.
18:15All right.
18:17The market for cheap laptops
18:19and phones
18:19dwarfs the entire hardware
18:21complement
18:21we can currently address
18:22with our titles right now.
18:25And the nice thing is
18:26that all of these potato machines
18:28still have the one piece
18:29of hardware
18:30that they need
18:31to be a streaming device
18:32besides an antenna,
18:33and that is
18:34a video
18:40decoding IP core.
18:42With video streaming
18:43being ubiquitous
18:44these days
18:45so too is the hardware
18:46needed to handle
18:47this platform's delivery.
18:49With the market expansion
18:51comes a whole new generation
18:52of non-gamers
18:53ready to meet
18:54and experience our titles.
18:55This gets to
18:56the reduced friction
19:00that comes from streaming.
19:01There are almost
19:02zero upfront costs.
19:04You don't need
19:05several hundred
19:05to several thousand dollars
19:06to purchase
19:07a machine
19:08that can net you
19:09a great experience.
19:10You don't have to have
19:11the patience
19:12to research the hardware
19:13to purchase.
19:14The whole experience
19:15comes to the rest
19:16of the internet
19:16where it is.
19:18There are no downloads.
19:19There are no updates.
19:21I know those things
19:23kill me.
19:23When I come home
19:24and want to play
19:25the last thing
19:25I want to do
19:26is spend 20 minutes
19:27to an hour
19:28installing updates
19:29so I can play.
19:31And last
19:31there's just better security
19:33as the binary
19:34is not exposed
19:34to the end user at all.
19:37For a quick size comparison
19:38let's look at
19:39a couple planets.
19:40We like the planet Mars
19:42at id.
19:43If we took it
19:43as the current volume
19:45of the existing market
19:46we can address
19:47then planet Chrome
19:48would be the volume
19:49we estimate
19:50this new market
19:50to be at.
19:51This has a lot
19:52of potential
19:52for alleviating
19:53the stress points
19:54in our industry
19:54with rising development
19:56costs
19:56and relatively stagnant
19:57unit prices.
19:58Over the last decade
19:59we've seen all sorts
20:00of new attempts
20:01to address this issue.
20:02It comes in the form
20:03of free-to-play,
20:04software-as-a-service
20:05and yes,
20:06loot crates.
20:07And while some
20:07of those methods
20:08have made financial sense
20:09or expanded the market
20:11we haven't had anything
20:12this big come around
20:13since the advent
20:14of the smartphone generation.
20:17So with that in mind
20:18let's look back
20:19at the platform
20:19technology again.
20:21This is the stack
20:21that Stadia runs on.
20:23As you can see
20:24apart from your own
20:25game technology
20:26and Stadia's GGP library
20:27everything is open
20:29and available today.
20:30This is because
20:31Google does things
20:31differently
20:32than other platform
20:33providers.
20:34They believe
20:34in open source.
20:36They believe
20:36in using technologies
20:37people are used to
20:38and can contribute to.
20:39The hope is
20:41that this also
20:41speeds adoption.
20:42This is a recurring
20:44theme for Stadia
20:45because it is ingrained
20:46into Google
20:47as a company.
20:50As I mentioned before
20:51we had gotten
20:52Doom 2016
20:53running on vanilla Ubuntu.
20:55We actually used
20:56this as a stepping stone
20:57to get onto Stadia
20:58itself.
20:59It is an easy approach
21:01for you to take
21:02to start experimenting
21:04with the various
21:04technologies involved.
21:06We always recommend
21:07that you simply engage
21:08with Google directly
21:09but this is an easy
21:10way to evaluate
21:11technologies if that
21:12is preferred.
21:13What we did
21:14is we developed
21:16our own spec.
21:18For the software
21:19side of things,
21:19we took a vanilla
21:21Ubuntu 18.04 image
21:24with Clang 6
21:25as the compiler
21:26toolchain.
21:27You'll want to develop
21:29against Vulkan
21:29and PulseAudio,
21:30and then for the
21:32hardware,
21:36you can build
21:37a box comprising
21:38an Intel processor
21:39with 8 threads,
21:40an AMD Vega 10
21:41GPU with 8 gigs
21:42of VRAM,
21:438 to 16 gigs
21:44of system memory,
21:45and an SSD-like
21:46storage device.
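For reference, here is that experimentation spec encoded as data, with a rough helper for checking a candidate dev box against it. The numbers are the ones stated in the talk; the dict layout and the `meets_spec` helper are purely my own illustration:

```python
# The Stadia-like local experimentation spec from the talk, as data.
# (Keys and the meets_spec helper are illustrative, not an official schema.)
STADIA_LIKE_SPEC = {
    "os": "Ubuntu 18.04",
    "compiler": "Clang 6",
    "apis": ["Vulkan", "PulseAudio"],
    "cpu_threads": 8,
    "gpu": "AMD Vega 10",
    "vram_gb": 8,
    "ram_gb": (8, 16),       # 8 to 16 GB of system memory
    "storage": "SSD-like",
}

def meets_spec(machine: dict) -> bool:
    """Rough check of a candidate dev box against the target spec."""
    ram_min, _ram_max = STADIA_LIKE_SPEC["ram_gb"]
    return (machine.get("cpu_threads", 0) >= STADIA_LIKE_SPEC["cpu_threads"]
            and machine.get("vram_gb", 0) >= STADIA_LIKE_SPEC["vram_gb"]
            and machine.get("ram_gb", 0) >= ram_min)

print(meets_spec({"cpu_threads": 8, "vram_gb": 8, "ram_gb": 16}))  # True
```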
21:49So now that we've
21:50addressed two of the
21:51big technical pillars
21:52of Stadia
21:53let's move on to
21:53talking about the
21:54day-to-day development
21:55experience.
21:58I need some water.
22:07First, it'd be nice
22:08to know how to even
22:09think of it.
22:10Is it a console?
22:12Is it a PC?
22:13The answer is yes.
22:16Actually, you can
22:17think of it as a
22:18console with some
22:19PC aspects.
22:20These are the kits
22:21themselves.
22:22The bottom two
22:23were the Gen 0
22:24we brought Doom up
22:25on.
22:26The Gen 1
22:27are what we are
22:27currently using.
22:29I definitely would
22:30not recommend having
22:30one of these on
22:31your desk unless
22:32you wanted to use
22:33it as a riser.
22:35As you heard
22:36earlier in the
22:37keynote, the kits
22:37actually comprise
22:38multiple game
22:39instances and these
22:41can be thought of
22:41as the unit you
22:42develop against.
22:45These Gen 1 kits
22:46can handle four
22:47instances apiece,
22:49meaning we support
22:4916 developers with
22:51these four servers.
22:54So let's compare
22:55it to some things
22:56we're used to.
22:57PC is the easiest
22:58and most ubiquitous.
23:00We all work
23:01with PCs.
23:02You interact
23:02with the input
23:03device.
23:04This directs the PC
23:05which renders to a
23:06display and the
23:07photons make it back
23:08to your eyes.
23:09Not everyone has
23:11experience with a
23:12console but even if
23:13you don't you can
23:14imagine how it
23:14works.
23:15You still interact
23:16with the PC.
23:17Your PC talks to
23:18the console which
23:19has its own input
23:20devices and display.
23:23For Stadia the main
23:24difference is that the
23:25video feed comes back
23:26to your PC so you
23:28only have one display
23:29device.
23:29Also if you're playing
23:31on the same PC you're
23:32developing on then you
23:33don't need the second
23:34input device.
23:35All the input is just
23:36forwarded to the kit.
23:39So taking a closer look
23:41at the instance we can
23:42see the lines of
23:43communication between it
23:44and your development
23:44PC.
23:46On the instance is a
23:47streamer application that
23:49acts as the agent
23:50connected to your remote
23:51client.
23:51The client which is
23:52Chrome sends input data
23:54to the streamer and
23:55receives audio and
23:56video back.
23:58When you're debugging
23:59you use your local IDE
24:01to connect to the
24:02remote debugger.
24:03This of course will
24:04automatically start
24:05Chrome, deploy the
24:07binary and start the
24:08session orchestration
24:09just as if you were
24:10running the application
24:11locally.
24:12The last bit is a tool
24:13called GGP.
24:14It stands for Google
24:15Gaming Platform.
24:17This is an all-in-one
24:18tool for interacting with
24:19your instance and other
24:20services that Stadia
24:22provides.
24:22One of its primary
24:23responsibilities is giving
24:25you direct access to the
24:26instance over SSH for
24:28transferring assets.
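That SSH access means asset transfer can be scripted with ordinary tools. A minimal sketch, assuming standard `scp`; the host alias, remote path, and port here are hypothetical placeholders, since in practice the GGP tool supplies the real connection details and keys:

```python
import shlex

# Sketch: build (but don't run) an scp invocation for pushing a local
# asset directory to a dev instance over SSH. Host/paths are placeholders.
def scp_command(local_dir: str, host: str, remote_dir: str,
                port: int = 22) -> list:
    """Return the argv list for a recursive scp transfer."""
    return ["scp", "-r", "-P", str(port), local_dir, f"{host}:{remote_dir}"]

cmd = scp_command("./assets", "dev-instance", "/srv/game/assets")
print(shlex.join(cmd))  # scp -r -P 22 ./assets dev-instance:/srv/game/assets
```

Returning the argv list instead of shelling out directly keeps the sketch testable and avoids quoting pitfalls.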
24:31As I mentioned
24:32earlier, Google is all
24:33about using open or
24:34well-known technologies.
24:36For compiling, they use
24:37the Clang toolchain which
24:39is completely integrated
24:40into Visual Studio.
24:42For deploying, they also
24:44have Visual Studio
24:44integration as well as
24:46direct SSH access through
24:48GGP or other tools.
24:50On the debugging side,
24:52again, they have tight
24:53Visual Studio integration.
24:54This interacts with LLDB
24:56on the instance side.
24:57For graphics frame
24:58debugging, they have
24:59RenderDoc support and
25:00for trace-level debugging,
25:02they have Graphics API
25:03debugger.
25:04And lastly, for profiling,
25:06they have AMD's RGP,
25:08Intel's VTune, and
25:09Valgrind.
25:10These are all tools you
25:11have access to outside of
25:12Stadia itself and you may
25:14even use on a day-to-day
25:15basis.
25:18Now time for something
25:19different.
25:19We haven't even talked
25:20about the cloud aspect of
25:22development yet.
25:23Let's take the Stadia
25:25workflow from earlier and
25:26we'll add on the cloud
25:27workflow.
25:29Normally, with any
25:30platform you have, you
25:31are working with on-premise
25:33hardware.
25:34Stadia adds on to this
25:35with something called
25:36Cloud Quota.
25:37Cloud Quota is essentially
25:39a pool of instances
25:40available off-premise.
25:42They're not too
25:43dissimilar from production
25:44instances consumers
25:45would use.
25:46The workflow is
25:47consistent across both
25:49Dev Kits and Cloud Kits
25:50in terms of how you
25:51reserve and work
25:52against the instance.
25:53This gives you the
25:54ability to both work
25:55on and off-premise.
25:57In our case, we have a
25:58small batch of Dev Kits
25:59as you've seen, which
26:01are reserved for
26:01engineers.
26:02For QA and
26:03non-technical people,
26:04you would mainly want to
26:05use Cloud Quota as
26:06that's the best
26:07representation of the
26:08end-user experience.
26:10This puts a heavier
26:11burden on infrastructure
26:12within your own studio.
26:14Some houses are large
26:15enough that the extra
26:16bandwidth and power
26:17isn't a big deal.
26:19While for Indies,
26:20Cloud Quota may be
26:20the only solution for you.
26:22If you want to put one
26:23of these 1Us on your
26:24desk, you can do that
26:25as well.
26:28I don't want to paint a
26:29picture that everything
26:30was perfect from the
26:31start.
26:31In the early days,
26:32there was a lot of
26:33friction with the tool
26:34chain simply because it
26:35hadn't been put through
26:36its paces yet.
26:37For example, the asset
26:39management system was
26:40tough to work with.
26:41It behaved like a git
26:42repository, which you
26:43had to initialize and
26:44manage locally.
26:46This made it sensitive to
26:47both location and machine.
26:49Porting builds to other
26:50machines was often very
26:51difficult without completely
26:52rebuilding the manifest.
26:55The environment was based
26:56on the asset repository.
26:57There were essentially 12
26:59steps involved in
27:00configuring a new one every
27:01time you set it up.
27:02And even after that, you
27:04weren't able to run
27:04multiple users.
27:06On top of this, there were
27:08bugs as you'd expect with
27:09any project in its infancy.
27:11My favorite was the one
27:12where files over 1,750 lines
27:15would fail to retrieve
27:16symbols or really any data
27:18from the debugger.
27:19At first, it just appeared
27:20that the debugger was
27:21working intermittently.
27:22It took us a while to
27:23realize the pattern.
27:24One other bug that was a
27:26common source of headaches
27:26was orchestration issues.
27:28We would regularly have an
27:30instance become inaccessible
27:31because the debug streaming
27:33session could not be
27:34established and this required
27:35a reboot.
27:36All in all, things were
27:37what you would expect
27:38for an alpha state.
27:40Being one of the first
27:40adopters, we were a good
27:41rubber duck to help Google
27:43work through the initial
27:43growing pains.
27:46So nowadays, developers
27:47are used to two different
27:49workflows in regards to
27:50assets.
27:52We have the quick iteration
27:53loop of just throwing files
27:54onto a system and running
27:55our code against it.
27:57Then there's the publishing
27:58release pipeline, which is
28:00more involved and has
28:01several layers of metadata
28:02that go along with it.
28:04In the early days of Stadia,
28:06these two things were one
28:07and the same.
28:08The original system locked
28:10builds to a particular
28:11machine.
28:11You had to run a separate
28:12service to allow the
28:14transfer to happen.
28:15There were a lot of issues
28:16switching configurations,
28:18adding new artifacts,
28:19and dealing with stale
28:20items.
28:21What you really want for
28:23daily iteration is to have
28:24as few steps as possible
28:25between you and running
28:26the game again.
28:28I heard there's a mantra
28:29inside Google, and I don't
28:30know if it's real or not,
28:32but I'm going to use it
28:33anyways.
28:34It's okay to be wrong once,
28:36but it's not okay to not
28:38learn from it.
28:39And that's exactly what
28:41Google did next.
28:44They pivoted quickly
28:45and released
28:46an entirely new workflow
28:48called Project-Based
28:49Workflow.
28:52It was a relatively clean
28:53slate initiative.
28:54They brought in a UX team
28:56to help.
28:57On one visit, their team
28:58spent an entire day talking
29:00to every department about
29:01what they'd like to see in
29:02terms of workflow.
29:03With them on board, we
29:04rapidly started seeing our
29:06feedback integrated into
29:07subsequent SDK releases.
29:10Asset management changed
29:12completely.
29:13This is when they gave us
29:15direct access to the box
29:16via SSH.
29:18If a developer borks
29:19an instance, they can
29:20just re-image it.
29:21No harm, no foul.
29:23The environment was made
29:25global and allowed for
29:26multiple profiles.
29:27Now we simply call
29:29ggp init to change the
29:31environment.
29:31It's, frankly,
29:32glorious.
29:34Instead of 12 manual steps,
29:35you have a nice shortened
29:36prompt sequence to go
29:37through, and they even
29:38automatically generate your
29:40SSH keys for you.
29:43And overall, functionality
29:44for interacting with the
29:45platform was merged into a
29:46single tool, which we'll
29:48get to on the next slide,
29:49and it was paired with a
29:50web portal for
29:51administration.
29:52One thing that really
29:53impressed us in our
29:55relationship was the degree
29:56to which Google listened.
29:58And even as they took on
30:00more and more partnerships
30:01with other studios and
30:02publishers and developers,
30:04their attentiveness and
30:05hunger have not
30:06diminished.
30:08As more APIs came online,
30:10it became clear that a
30:11common idiom was
30:12necessary to interact with
30:13all of them from the
30:14development side of things.
30:15Project-Based Workflow
30:17solved this by combining
30:18the developer interfaces
30:19into one command line
30:20tool.
30:21The idiom for using the
30:22tool is the same across the
30:24board, and it itself is
30:25well-documented.
30:31It provides functionality for
30:32things like achievements,
30:34commerce APIs, managing your
30:36instances, managing your
30:38dev kits, managing cloud
30:40quota pools, things like
30:41that, and this isn't even a
30:42comprehensive list.
30:43As you can imagine, this is
30:45a simple and straightforward
30:46way of interacting with all
30:47the touch points of the
30:48development process.
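The "one idiom across the board" point can be sketched roughly like this. This is a minimal Python dispatcher with entirely made-up subcommand names (not the real tool's surface), showing why a uniform noun/verb shape across feature areas keeps a single tool learnable:

```python
from typing import Callable, Dict, Tuple

# Hypothetical sketch: every feature area (instances, achievements, ...)
# hangs off one entry point with the same noun/verb command shape, so
# learning one subcommand teaches you the idiom for all of them.
COMMANDS: Dict[Tuple[str, str], Callable[[], str]] = {
    ("instance", "list"): lambda: "listing instances",
    ("instance", "reserve"): lambda: "reserving an instance",
    ("achievement", "list"): lambda: "listing achievements",
}

def dispatch(argv):
    """Route a 'noun verb' argument pair to its handler, uniformly."""
    handler = COMMANDS.get(tuple(argv[:2]))
    if handler is None:
        return "unknown command: " + " ".join(argv)
    return handler()

print(dispatch(["instance", "list"]))     # listing instances
print(dispatch(["achievement", "list"]))  # listing achievements
```

The design point is that adding a new feature area means adding entries to one table, not shipping a new tool with a new interface.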
30:51If you want to go deeper
30:52into what it's like
30:53developing on Stadia, then
30:55you'll want to attend the
30:56session at 1:45, A Guide to
30:58Developing on Stadia.
31:00So that's it for
31:01development.
31:02The one topic we haven't
31:03addressed yet is the service
31:05end of the platform.
31:06Nowadays, consumers and
31:07developers expect a lot from
31:09any given first party.
31:10Social interactions have
31:12become more complex, and
31:13games have moved to take
31:14advantage of these
31:15developments.
31:16The barrier to entry for any
31:18new competitor is quite high
31:20just to reach parity.
31:21So what did Stadia's
31:22platform look like at the
31:24outset?
31:25Well, Stadia didn't have a
31:27platform.
31:28What it did have was the
31:30core streaming technology
31:31that included the development
31:33tool chain, video, audio,
31:34and input, as well as some
31:36basic dev kit and instance
31:37management.
31:39It didn't have anything else
31:40to go on top of this.
31:42It didn't even have a user
31:43identity system, which meant
31:44it also didn't have save
31:45games.
31:49It should be pointed out
31:50that just because you're
31:51Google doesn't mean you get
31:52things for free.
31:53A lot of very talented people
31:55at Google have poured their
31:56blood, sweat, and tears into
31:57this, just like anyone else
31:58would for a massive
31:59undertaking such as this.
32:01I know several people whose
32:02bodies have long forgotten
32:04what time zone they are even
32:05in.
32:09One of the first things Google
32:11asked us to do was provide a
32:12stack-ranked list of all the
32:13features we would need for
32:14shipping a commercial DOOM.
32:16In October, we shopped a
32:18document around called Stadia
32:20Platform Feature Set
32:21Prioritization and Definition
32:22for ID Tech Games.
32:23Whoever wrote that sounds like
32:25an asshole.
32:28It outlined all the functional
32:30gaps in the Stadia platform
32:31that would be required for
32:32fully bringing ID Tech titles
32:34to Stadia.
32:35Due to this, most of our
32:36working relationship centered
32:38around a milestone cycle in
32:39which we would first discuss
32:41upcoming APIs and features
32:43and what we needed
32:44from them.
32:46Second, we'd integrate APIs in
32:48development as they became
32:49available.
32:50Then we'd have multiple rounds
32:51of feedback on what we had
32:53implemented.
32:53And lastly, we delivered a
32:55build of DOOM which
32:58incorporated the new features
32:59so that they had something to
33:00evaluate on their end.
33:02As time went on, Google
33:04brought on more partners and
33:06we started seeing other lines
33:07of feedback and requests
33:08incorporated into each SDK
33:10delivery.
33:11And this is great because one
33:13game and one engine and one
33:14genre do not a platform make.
33:17The more Stadia could be
33:18defined by a good diversity of
33:19games, the better off it would
33:21be.
33:21And to that point, make sure that
33:23you come back at 2:45 for the
33:25Assassin's Creed Odyssey and
33:26Project Stream presentation.
33:30Fast forward to today, and
33:32Stadia has become a robust
33:33platform.
33:34Since those early days, Stadia's
33:36small but crafty team has
33:37turned into a small army, as you
33:39could imagine.
33:40That army has produced a
33:41complement of services and
33:43features which not only rival
33:44industry standards but even
33:46surpass them in some ways.
33:47Also, the image I've used up here
33:49isn't your typical token Google
33:52server farm image.
33:53The team hasn't just been busy
33:55building software but hardware
33:57too.
33:57Hardware capable of handling the
33:59demand of AAA launches as well as
34:02sustaining a plethora of
34:04world-class titles on the platform
34:05simultaneously.
34:08We haven't even talked about
34:11YouTube's role in all of this.
34:13I don't have to explain YouTube to
34:15anybody.
34:15It's ubiquitous.
34:16And it's not even a surprise that
34:17YouTube would be a large part of a
34:20Google gaming streaming experience.
34:23Video streaming is a big factor in
34:25determining the success for a lot of
34:26titles these days.
34:27And that is only going to increase in
34:29the years ahead.
34:30We know how intimate the space can be,
34:32especially with the personalities we
34:33follow, and their content we enjoy.
34:36As you may have seen in the keynote,
34:38there are multiple ways that you
34:39could go from viewing a content
34:42creator's page or content to going
34:44directly into the game.
34:46We're not affiliated with Markiplier
34:48directly.
34:48It just so happens that he played
34:51our game after it launched and I'm a
34:53subscriber of his.
34:55But it really hit home with us,
34:58seeing that and then seeing
34:59Stadia come along, and how all
35:03of a sudden the barrier for
35:06consumers going from viewing
35:08content to playing content was
35:10all but eliminated.
35:14Let's look at a more concrete example
35:16of how our users can interact with
35:18content creators through YouTube.
35:21Dracu is a personality we love at
35:22id Software.
35:23He's a speed runner and a damn talented
35:25one at that.
35:27CrowdChoice is an API Google has
35:29available to developers today to enable
35:31viewers to interact with the content
35:33creator.
35:34There are different types of polls the
35:35game can run which users can interact
35:37with to influence the game.
35:39Now Dracu did several Ultra Nightmare
35:41runs with his personal best being close
35:43to an hour.
35:45Imagine if we wanted to make things even
35:47more challenging for Dracu by allowing
35:49spectators to spawn extra demons on top
35:51of him by voting for the demons that
35:53they wanted to see thrown against him.
35:55This makes things even more unpredictable for
35:58him.
35:59It also creates a small metagame experience
36:01for the viewers and flips the viewing
36:02experience from being passive to active.
36:05Now this isn't an actual feature in our
36:07game right now, but CrowdChoice and
36:09other YouTube APIs as you may have seen in
36:11the keynote are something we are looking
36:12at heavily right now.
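The poll idea above can be sketched in a few lines. This assumes nothing about the real CrowdChoice API surface; it is just a hypothetical tally of viewer votes that picks the demon to spawn for a round:

```python
from collections import Counter

# Hypothetical sketch (not the real CrowdChoice API): tally viewer votes
# for which demon to spawn next and return the winner of the poll round.
def run_spawn_poll(votes, options):
    """Count only votes for valid options; return the winning demon, or
    None if no valid votes came in this round."""
    tally = Counter(v for v in votes if v in options)
    if not tally:
        return None  # no valid votes; spawn nothing extra
    # most_common(1) yields [(option, count)] for the highest tally
    return tally.most_common(1)[0][0]

options = {"Imp", "Cacodemon", "Marauder"}
votes = ["Imp", "Marauder", "Marauder", "Cacodemon", "BFG"]  # "BFG" invalid
print(run_spawn_poll(votes, options))  # Marauder
```

The interesting part is on the game side: the winner feeds into the spawn system just like any other gameplay event, which is what flips the audience from passive viewers into active participants.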
36:14If you want to know more about the APIs and
36:16features available for YouTube, please
36:18come back at 5:30 for the Engaging the
36:23Crowd with YouTube and Stadia talk.
36:26All right, that's enough about Google.
36:29Now, what have we
36:32been up to since Doom 2016?
36:34As I mentioned before, we shipped several
36:39DLC, we shipped Doom VFR, and in the
36:42meantime, a good portion of our engine team
36:44was working on Wolfenstein 2 to carry our
36:46momentum on our technology forward as we
36:48got ready to start production on Doom Eternal.
36:52As you may be well aware, Doom Eternal was
36:54announced at E3 last year and subsequently
36:56revealed at QuakeCon a few months later.
36:59It's currently deep in production.
37:01Everyone back at the studio is very busy,
37:03except me, who is out here talking.
37:06Stadia has been a first-class citizen from
37:09the beginning as it has inherited the work
37:11from Doom 2016.
37:13As opposed to Doom, which was focused on the
37:15fundamentals of Stadia, we are now focusing on
37:19how we can differentiate the title on the
37:22platform in ways not possible before.
37:25That is all I'm currently allowed to say on the subject.
37:30What I can talk more about is what we've done with
37:32idTech since then.
37:34First, it has lost some weight.
37:36Coming off of idTech 6, we removed roughly a million
37:39lines of code.
37:41We rewrote a lot of systems from the ground up
37:44so we could properly incorporate some lessons we
37:46had learned from shipping Doom while we divorced ourselves
37:48from the past.
37:50By that, I mean we deleted things such as the OpenGL
37:53renderer and went full Vulkan for everything.
37:55We removed Megatexture and went to a texture streaming
37:58solution instead.
37:59There are countless other examples, and I'm not going to
38:02bore you with the entire Perforce history.
38:05As always, we focus on running at 60 FPS or higher,
38:09which our engine lead has coined as fast as hell.
38:12If there's one religion we have internally, it is this.
38:17We've since integrated unified HDR lighting and shadowing.
38:21We have a new multi-PBR compositing, blending, and
38:25painting solution, which our artists love.
38:29And there have been massive image and geometric detail
38:32improvements across the board, including new tools,
38:35which allow us to examine our assets and pipelines with
38:37finer granularity.
38:39And lastly, we've greatly improved our global illumination
38:42quality for our lightmaps.
38:45On the Vulkan front, as I mentioned, everything uses Vulkan
38:49now.
38:50And by that, I do mean everything.
38:52The engine, id studio, even our helper tools.
38:58Our OpenGL renderer had lent itself to a lot of stateful changes
39:02at the top of a frame, as well as throughout.
39:05We've since eliminated all of those by better resource tracking
39:08across the frame, allowing for optimal descriptor set caching.
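The caching idea can be illustrated with a toy model. This is a hedged sketch (names and structure are mine, not id Tech's): identical binding sets hash to the same key, so a descriptor set is allocated once and reused rather than rebuilt every draw:

```python
# Toy model of descriptor-set caching: draws with identical resource
# bindings reuse one cached "descriptor set" instead of allocating a
# fresh one each time. Class and field names are illustrative only.
class DescriptorSetCache:
    def __init__(self):
        self._cache = {}
        self.allocations = 0  # counts real allocations actually made

    def get(self, bindings):
        # bindings: iterable of (slot, resource_name) pairs; sorting makes
        # the key order-independent, so equivalent states hit the cache
        key = tuple(sorted(bindings))
        if key not in self._cache:
            self.allocations += 1
            self._cache[key] = f"set#{self.allocations}"
        return self._cache[key]

cache = DescriptorSetCache()
a = cache.get(((0, "albedo"), (1, "normals")))
b = cache.get(((1, "normals"), (0, "albedo")))  # same bindings, other order
print(a == b, cache.allocations)  # True 1
```

In a real Vulkan renderer the key would be built from the tracked resources for the frame, which is exactly why eliminating per-frame stateful changes matters: a stable key is what makes the cache hit.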
39:13We've moved to using even more async compute, with 50% of the frame
39:17overlapped with the next.
39:22All of our resource transfers happen overlapped with rendering,
39:26as now we use transfer queues.
39:29And since 1.1, we now use subgroup operations liberally.
39:33This avoids memory sharing between threads, allows us to optimize atomic
39:37appends, and gives us things such as faster down samples.
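Here is a toy model of that last point, with the caveat that real subgroup operations run on GPU lanes, not Python lists. Each "lane" holds one texel and exchanges values directly with its neighbor (as a subgroup shuffle would), averaging pairs into a downsampled row without any shared-memory staging:

```python
# Toy model of a subgroup-style downsample: each "lane" holds one texel,
# and even lanes read their odd neighbor's value directly (standing in
# for a shuffle-down), averaging pairs with no shared-memory round trip.
def subgroup_downsample(lanes):
    """Halve a row of texel values by averaging adjacent pairs."""
    assert len(lanes) % 2 == 0, "needs an even number of lanes"
    out = []
    for lane in range(0, len(lanes), 2):
        neighbor = lanes[lane + 1]  # value "shuffled" in from lane+1
        out.append((lanes[lane] + neighbor) / 2)
    return out

row = [0.0, 1.0, 0.5, 0.5, 0.25, 0.75, 1.0, 0.0]  # 8 texels, one per lane
print(subgroup_downsample(row))  # [0.5, 0.5, 0.5, 0.5]
```

On hardware this is the win being described: the exchange happens in registers within the subgroup, so there is no shared-memory write, barrier, and read between the threads.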
39:46And we're also now using quite a bit of indirect draw and dispatch.
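To make the indirect-draw idea concrete, here is a toy model (a Python sketch, not engine code): a "GPU culling pass" writes draw commands into a buffer, and a single indirect draw consumes however many were written, so the CPU never needs to know the final draw count. The command fields mirror the layout of Vulkan's VkDrawIndirectCommand; the culling test and object data are invented for illustration:

```python
from dataclasses import dataclass

# Toy model of GPU-driven indirect draw: a culling pass emits commands
# into a buffer and one indirect draw consumes them all.
@dataclass
class DrawIndirectCommand:  # field layout mirrors VkDrawIndirectCommand
    vertex_count: int
    instance_count: int
    first_vertex: int
    first_instance: int

def cull_and_emit(objects, max_dist):
    """Stand-in for a compute culling pass: keep near objects and append
    one draw command per survivor into the 'indirect buffer'."""
    buf = []
    for obj in objects:
        if obj["dist"] <= max_dist:  # trivial distance "cull"
            buf.append(DrawIndirectCommand(obj["verts"], 1, obj["base"], 0))
    return buf

objects = [
    {"verts": 300, "base": 0,    "dist": 10},
    {"verts": 900, "base": 300,  "dist": 500},  # culled: too far away
    {"verts": 120, "base": 1200, "dist": 42},
]
buf = cull_and_emit(objects, max_dist=100)
print(len(buf))  # 2 draws survive culling
```

On the GPU the equivalent buffer is handed to vkCmdDrawIndirect (or a dispatch to vkCmdDispatchIndirect), which is what decouples the CPU submission from the visible-object count.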
39:53Here's a screenshot of id studio running full Vulkan.
39:56In this image, there are four independent swap chains married to
39:59four independent viewports.
40:00We're very proud of this accomplishment, as we haven't heard of
40:03any other development houses pushing Vulkan this far.
40:06We're also happy to be off of using OpenGL for our tools,
40:09as the performance gains allow our content creators to do more in real time.
40:15All right, I've talked enough, time for some more gameplay.
40:19I'm going to turn it over to Doom Eternal's executive producer,
40:22Marty Stratton, who's going to come up and play through a segment of the game
40:26we've never shown publicly before.
40:28Thank you.