00:06Hellblade will follow Senua's dark and harrowing journey into a hellish underworld.
00:10She needs to feel real, nuanced, complex and believable.
00:20For cutscenes in a typical AAA game, a voice artist delivers the dialogue, another actor
00:26performs the body motion, and animators hand key the face and cameras.
00:30It can take a team of 30 or 40 artists to stitch together all of the cutscenes in this way.
00:37Our entire team on Hellblade is 13 people, yet we still believe that there is a way to
00:41shoot and produce high quality cutscenes on a budget.
00:45Ever since we shot Heavenly Sword 10 years ago at Weta Digital, we've always captured
00:49face, body and voice simultaneously, and that's what we call performance capture.
01:08Heavenly Sword was the first game to use performance capture, yet it's still rarely used in games
01:13today.
01:14The perception is that it's too difficult and too expensive for most games, even in the
01:18AAA bracket.
01:19We don't have the budget to shoot cutscenes in a performance capture studio like we have
01:23done in our previous games, so instead we started prototyping homebrew hardware and software
01:29solutions to see if we could find another way.
01:32Our first prototype rig consisted of several GoPro cameras with different lenses, so pairs
01:38of cameras would capture the face in 3D, the body, and also pick up on visual markers around
01:44the room.
01:44The idea was that you'd be able to capture face, body and even the camera position anywhere
01:49in 3D space without having to have a purpose-built studio.
01:54The device was giving us data, but it soon became clear that we needed more time and more
01:58resources than we had available to make it robust enough for commercial use.
02:02So we contacted our friends at Vicon, who are leading players in high-end motion capture
02:07solutions, to see if they could help us out within our limitations on Hellblade.
02:11Thankfully, Vicon were able to find a solution that worked within our constraints.
02:15They sent us 12 of their Bonita cameras and a blade server to drive them.
02:20We don't have space in our studio for shooting, so we cleared out our largest meeting room, and
02:25to mount the Bonita cameras we bought some cheap wardrobe posts from Ikea, which did the
02:30job nicely.
02:32We did some tests, and the body data we got back was easily as good as anything we've ever
02:36captured before.
02:37So we moved on to our next challenge, which is how do you capture the face?
02:42As facial performance is captured with video, we needed good even lighting in the room.
02:47Studio lights are very expensive, but we found an alternative solution, cheap LED lights that
02:52we bought from Amazon.
02:54Now that we had good, even lighting, we could capture the face using video.
02:57And to do this, we had previously created a prototype using a cricket helmet, some umbrella
03:04wire and a webcam.
03:05Using the cricket helmet, we could capture facial expressions, and then we started developing
03:10our own facial solver.
03:12The facial solver is needed to interpret the video feed into real-time 3D facial expressions.
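[Editor's note: the talk doesn't detail how the solver works, but the core idea can be sketched as fitting the weights of a linear blendshape model to landmarks tracked in the video feed. A minimal synthetic illustration in Python — all names and numbers are hypothetical, not Ninja Theory's actual solver:

```python
import numpy as np

# Hypothetical linear blendshape model: a neutral face plus weighted
# per-expression deltas. A real solver tracks 2D landmarks in the video
# feed; here we fabricate a target expression from known weights and
# check that least squares recovers them.
rng = np.random.default_rng(0)
n_landmarks, n_shapes = 30, 4

neutral = rng.normal(size=(n_landmarks * 2,))          # 2D landmark positions
deltas = rng.normal(size=(n_landmarks * 2, n_shapes))  # per-blendshape offsets

true_weights = np.array([0.8, 0.0, 0.3, 0.5])
observed = neutral + deltas @ true_weights             # "tracked" landmarks

# Solve for blendshape weights by least squares, then clamp to [0, 1],
# since expression weights are conventionally non-negative.
weights, *_ = np.linalg.lstsq(deltas, observed - neutral, rcond=None)
weights = np.clip(weights, 0.0, 1.0)
print(np.round(weights, 3))
```

A production solver would run this fit per video frame, with landmark tracking and temporal smoothing on top, but the least-squares core is the same.]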
03:19I want each person here to give George their report by Thursday afternoon.
03:25Stop laughing, Stuart.
03:27For the shoot proper, we would need a much more robust solution than the cricket helmet.
03:27So for that, Vicon kindly lent us their high-end Cara head rig, which consists of four cameras
03:38and allows you to capture the full 3D facial expression of the actor.
03:44Despite good results, we're still looking for even more cost-effective solutions, including
03:493D printing our own helmet.
03:52So now we knew we could capture the face, but you still need to convert that video feed
03:57into a believable real-time digital double.
04:01And to create a digital double, we used a technique called photogrammetry, which is able to construct
04:07a 3D face from a series of photographs.
04:10But instead of just using still photographs, we used GoPro cameras, and that way we could
04:14capture the full range of expressions as they happen.
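[Editor's note: at the heart of photogrammetry is triangulation — the same feature seen by two calibrated cameras pins down a 3D point. A toy sketch with synthetic cameras (not Ninja Theory's pipeline) using the standard linear DLT method:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two camera views."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the null vector of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two synthetic 3x4 projection matrices: an identity camera and a
# camera translated one unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.2, -0.1, 5.0])                    # a point on the "face"
xh = lambda P, X: P @ np.append(X, 1.0)
proj = lambda P, X: xh(P, X)[:2] / xh(P, X)[2]         # image coordinates

X_est = triangulate(P1, P2, proj(P1, X_true), proj(P2, X_true))
print(np.round(X_est, 3))
```

A full photogrammetry pipeline repeats this for thousands of matched features across many views, after first solving for the camera poses themselves.]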
04:19And to capture even finer levels of detail, such as skin pores and wrinkles, we created
04:25another prototype device called the PlantPot, and it's called a PlantPot because it's actually
04:30housed within a PlantPot casing, and inside we've set up some LED lights, and it's controlled
04:35with a Raspberry Pi and some custom code.
04:37So the idea is that you could put this onto any surface, and by triggering the lights, you
04:43can capture still photographs with different lighting conditions.
04:46And from that, you can create a very high-res normal map, which captures very precise details.
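[Editor's note: the PlantPot's trick — photographing the same surface under several known light directions to recover per-pixel normals — is classic photometric stereo. A minimal synthetic sketch for a single Lambertian surface point (light directions are hypothetical, standing in for the PlantPot's LEDs):

```python
import numpy as np

# Photometric stereo: for a Lambertian surface, intensity under each
# light is I = L @ n, where rows of L are unit light directions and n
# is the (albedo-scaled) surface normal. With three or more lights,
# n falls out of a least-squares solve.
L = np.array([
    [0.0,  0.0, 1.0],
    [0.7,  0.0, 0.714],
    [0.0,  0.7, 0.714],
    [-0.5, -0.5, 0.707],
])
L = L / np.linalg.norm(L, axis=1, keepdims=True)

n_true = np.array([0.3, -0.2, 0.933])       # a known surface normal
n_true = n_true / np.linalg.norm(n_true)
I = L @ n_true                              # intensity under each LED

g, *_ = np.linalg.lstsq(L, I, rcond=None)   # albedo-scaled normal
normal = g / np.linalg.norm(g)
print(np.round(normal, 3))
```

Run per pixel over the captured photographs, this yields exactly the kind of high-res normal map described above.]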
04:53So now we had a solution for both the face and the body, but we also wanted to capture
04:57the camera itself, which is very much a part of visual storytelling.
05:01And to do that, we built a further rig.
05:04On it we attached a GoPro camera, a portable LCD screen, and put markers on the rig so that
05:11every nuance of the camera could be captured on set in the scene.
05:17Finally, to capture audio, we rented a wireless recording system.
05:21Our room isn't soundproof, but even so, we were pleasantly surprised with the result.
05:25It meant that we wouldn't have to record audio separately.
05:29But we're still pushing other areas and other types of performance capture.
05:33These are highly experimental and may or may not produce results.
05:38With just a few thousand pounds, we now had a setup in the studio that would allow us
05:42to capture a full performance – face, body, voice and cameras.
05:47We also hope to show that a small team with limited means is still able to capture high-end
05:53character performances.
05:54And this is something that, up until now, has only been the preserve of big-budget AAA studios.
06:01So to fully test our setup and see if all the pieces of the puzzle come together, we conducted
06:06a test shoot. We're still putting this together, and it's something that we hope to show
06:11you soon.
06:15Shhhh.
06:19Someone's here.
06:26What the?
06:28What?