Martin Adams, Co-founder, Metaphysic Moderator: Jeremy Kahn, FORTUNE
Transcript
00:00 Welcome back, everybody.
00:01 We've got some great sessions ahead, so let's get to it.
00:04 You probably saw the Deep Tom Cruise videos online,
00:07 the ones where he's hanging out with Paris Hilton backstage
00:10 or dancing with Diplo on New Year's Eve,
00:12 looking ever younger than before.
00:15 Well, guess what?
00:16 They aren't actually him, but an AI-generated video
00:19 from Metaphysic AI, a startup utilizing AI
00:22 to create hyper-realistic visual effects.
00:25 Here to show us how AI can help us build immersive content,
00:29 please welcome co-founder of Metaphysic AI, Martin Adams.
00:32 [MUSIC PLAYING]
00:35 Hi, Martin.
00:36 Good to see you, Martin.
00:36 Hi.
00:37 How are you doing?
00:38 Before we get started, can you tell us a little bit
00:39 how you got into this whole space?
00:41 Yeah, absolutely.
00:42 Nice to see everyone.
00:43 Thank you for having me.
00:45 So maybe a bit strangely, my background
00:47 is actually as an intellectual property lawyer,
00:50 which at this sort of conference for generative AI
00:53 definitely gets a few reactions.
00:55 I've seen the policy people sort of lean in
00:57 and some of the CTOs and co-founders kind of edge out
00:59 of the room.
01:01 But yeah, so I left my legal career in New York
01:04 behind about 20 years ago, and I've
01:06 been building sort of machine learning, big data, AI
01:08 businesses, mainly in the creative and cultural
01:10 industries ever since.
01:12 And so broadly, that's what we're
01:14 interested in doing at Metaphysic is investing
01:17 in technology that can actually build shared experiences
01:19 and kind of create standout, standout internet moments,
01:24 really.
01:24 Cool.
01:25 What do you guys actually do?
01:26 Like, give us some--
01:27 A little bit more detail.
01:28 Yeah.
So we specialize in basically sort of powering
photoreal AI content.
01:33 So what does that mean?
01:34 That means that we can scan anything that has data.
01:37 That could be my voice.
01:38 It could be my face.
01:39 That could be this room.
01:40 It could be an item of clothing.
01:42 And then our models can sort of power that
01:44 into any sort of video feed or any sort of digital experience
01:47 in a way that looks like it was made and shot with a camera.
01:51 So again, maybe that's a bit too abstract.
01:54 To bring this down a little level,
01:55 since I've been on the stage, I've
01:57 had actually a model of my own face on top of my face.
02:02 So if you look on the screens, you
may be able to see the sort of slight but very significant kind
of transition that happens there when I put
02:10 an occlusion in front of my face.
02:11 So why would I do this?
02:12 And is this interesting?
02:13 In this particular use case, no.
02:15 But what if you were trying to make a film in which you
02:18 wanted to manipulate that model to age me up or age me down?
02:21 That could be the difference between being
02:23 able to tell that story and not being able to tell that story.
02:27 So let's look at something that's
02:29 maybe a little bit more interesting and applied.
02:32 So we're going to try and take my face
02:35 and put it onto our dear Jeremy's here.
02:38 So let's see how that looked.
02:39 In theory, we should be able to do that
02:41 at the click of a finger.
02:43 Let's see.
02:44 Oh my god, look at that.
02:45 There he is.
02:45 So there's Jeremy.
I shaved this morning.
02:47 Yeah, exactly.
02:48 So we've got the beard.
02:49 I would like to apologize for what you're experiencing right
02:51 now, Jeremy.
02:52 Something I have to deal with every day.
02:54 Problem shared, problem halved, all that stuff.
02:57 But as you can see, we don't have necessarily the same facial
03:01 shape or head shape.
But you can see that those impossible crow's-feet wrinkles
03:06 that I've had for the last 10, 20 years
03:08 have been imported onto Jeremy's face.
If Jeremy smiles-- maybe try smiling there.
03:13 There you go.
03:14 And I smile.
03:15 It's not too dissimilar at all.
03:17 If you frown, you can both be sad on stage together, Jeremy.
03:22 Then it looks very similar.
03:23 And what's important here is that this is real time.
03:27 This is live.
03:28 This is dynamic.
03:29 And that's important because it really augments and respects
03:32 the underlying performance of, let's say,
03:34 an actor or a musician.
03:37 And we hear that.
03:38 We hear that from people at the absolute top of their game,
03:40 people like Tom Hanks saying, wow, this
03:43 is not an abstracted post-production process.
03:45 This is something that with a set-side monitor,
03:48 I can actually see this younger version of myself playing out.
And that can build empathy and connection and ultimately
empower the performance.
03:54 How is this actually working?
03:56 So we have a normal camera.
03:58 We have a portable laptop backstage.
04:01 So there's nothing too intensive on that side.
What essentially is happening is there are some detection
markers on your face which are identifying your face.
There's been a neural network model
on me which has encoded the different features of my face,
the composition, the symmetry, the way it's structured.
But it can also infer anything
that it hasn't managed to capture data-wise.
04:21 Just a few minutes with an iPhone
04:22 has allowed me to create this model.
04:25 And it's basically going onto your face
04:28 and filling in all those gaps.
And this is just a live version and a cut-down,
less ambitious version of what we're using with Hollywood
studios.
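The pipeline Martin describes here -- track the face in the live frame, encode it into a compact learned representation, then decode it as another identity -- can be sketched in miniature. The toy NumPy code below is purely illustrative: the linear "encoder" and "decoders", the dimensions, and the identity names are all invented stand-ins, not Metaphysic's actual models, which use far deeper networks trained on real footage.

```python
import numpy as np

# Illustrative sketch of a shared-encoder face swap: one encoder compresses
# any face into a latent code capturing expression/pose; one decoder per
# identity renders that code back as that identity's face. Swapping means
# encoding the target's frame and decoding it with the source's decoder.
# All weights here are random toy stand-ins (an assumption for illustration).

rng = np.random.default_rng(0)

FACE_DIM = 64 * 64    # flattened toy face crop
LATENT_DIM = 32       # compact code (expression, pose, lighting)

# Shared toy linear encoder.
W_enc = rng.normal(scale=0.01, size=(LATENT_DIM, FACE_DIM))

# One toy linear decoder per scanned identity (hypothetical names).
decoders = {
    "martin": rng.normal(scale=0.01, size=(FACE_DIM, LATENT_DIM)),
    "jeremy": rng.normal(scale=0.01, size=(FACE_DIM, LATENT_DIM)),
}

def encode(face: np.ndarray) -> np.ndarray:
    """Compress a face crop into its latent code."""
    return W_enc @ face

def swap(face: np.ndarray, target_identity: str) -> np.ndarray:
    """Re-render the input frame's expression with another identity's decoder."""
    return decoders[target_identity] @ encode(face)

# One camera frame of "Jeremy's" face, swapped to "Martin", frame by frame.
jeremy_frame = rng.normal(size=FACE_DIM)
swapped = swap(jeremy_frame, "martin")
assert swapped.shape == (FACE_DIM,)
```

In a real system the encoder/decoders are convolutional networks trained on footage of each identity, which is also what lets the model "infer" regions the scan never captured: the decoder fills gaps from what it learned about that face's structure.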
04:35 And how will this, do you think, change the entertainment
04:38 industry?
What is this going to enable Hollywood studios to do?
04:42 Yeah, I think it will have a dramatic impact.
04:44 I think that impact is already being felt by innovators
04:48 and pushed by innovators.
04:50 I think that it allows flexibility.
04:52 We've talked about some of those things,
both in terms of age or being able to place someone
in another location if they can't or won't travel.
04:59 It allows possibility in the sense
that if, with consent, a deceased person was
core to a storyline, you could put them back into that film.
05:08 So I think film is a big area.
05:09 We've talked about that.
05:10 I think content is another big area, broadly speaking,
05:13 branded or otherwise.
05:14 You could have a luxury fashion brand that
05:16 could film a catwalk, for example,
05:19 and then with consent and my scan,
05:21 I could be on that model walking along and build that empathy,
05:24 build that connection with the brand and the product.
05:26 Think of the network effects that
05:28 start to be unlocked when you're in the content.
05:30 And I think live concerts as well, live concerts,
05:33 live music, which is another area where we're
05:35 doing a lot of great work.
05:36 And I believe we have a video where we
05:38 could see an example of that now.
05:40 So maybe we'll cut to that.
05:41 [VIDEO PLAYBACK]
05:42 [BELL RINGING]
[MUSIC - "HOUND DOG"]
05:46 [APPLAUSE]
- (SINGING) Cryin' all the time.
You ain't nothing but a hound dog,
cryin' all the time.
Well, you ain't never caught a rabbit
and you ain't no friend of mine.
05:58 - So that was Metaphysic in the finals of America's Got Talent
06:06 last year, first AI, as far as I'm aware,
06:08 that's managed to get to the finals, placed fourth,
06:10 recreating Mr. Presley there.
06:13 And that's done with his consent and everything else.
06:16 Well, not his, but his family's, I should probably add.
06:18 - His estate's consent, yeah.
06:19 - We haven't quite got the technology that far.
06:21 - The seance technology is not quite developed yet.
06:24 - But yeah, so you can see the impact here.
06:27 And I think what's interesting is
06:29 that a very young audience in America's Got Talent,
06:32 they wouldn't have had the opportunity
06:34 to experience anything like seeing
06:35 that form of a live performance.
06:37 And now they have.
06:38 So again, impossible stories can now be told.
06:41 - In the screen actors strike that just took place,
06:44 this was a big issue about whether actors
06:46 were gonna be forced, or there was this fear from the guild
06:48 that actors would be forced to sort of sign away
06:50 their facial rights, their biometrics,
06:53 sort of for life to studios.
06:54 There's a particular concern about, I think,
06:56 younger actors who are less established
06:58 with less leverage and extras.
07:00 Were they right to be worried about this?
07:01 Do you think that that's a legitimate concern?
07:03 - Yeah, look, I think clearly there's a consent-driven
07:06 market here, which you can see here.
07:07 I was working with the Presley estate
07:09 and the rights holders there.
07:11 But I do think that they were right to be concerned.
07:13 These are powerful technologies.
07:15 And I think broadly speaking,
07:18 a lot of other industries will have their moment
and they'll look to SAG-AFTRA and say,
07:22 how did they negotiate and what did they negotiate?
07:24 And there'll be something to learn there.
07:26 I think that the balances that were struck
07:28 broadly seem right.
They've kind of enshrined a role for an innovative technology
07:32 in a creative space, which seems appropriate.
07:35 But they made sure that the consents couldn't be writ large,
07:38 couldn't be abstracted away.
07:40 They were specific and tailored to specific projects.
07:42 So really it puts more power in performers' hands,
07:46 which should be the case.
07:47 - Interesting.
07:48 And your co-founder and CEO, Tom Graham,
07:50 I know he's gone out and tried to copyright his likeness.
07:54 I think he's the first person to have done this.
07:56 Is that something we all should be doing
07:57 or we're all gonna have to copyright our likeness?
08:00 - Frankly, yes, I think it is in the long-term
08:02 something that all of us should do.
The technology is there, and there's enough data
to basically create a version
of most people online, not just famous people.
08:11 Tom and I were two of the co-founders of the business.
08:14 We met at Harvard Law School 13 years ago,
08:17 very, very kind of committed to an ethical approach
08:20 to this technology.
08:21 And it's not simply about, you know,
08:23 so we have a new product called Pro,
08:25 which is around scanning your identity.
08:27 And that's about preventing misuses
08:30 so that you could, for example, get a copyright
08:32 and then avail yourself of the existing apparatus
08:34 with DMCA takedowns.
08:36 Otherwise, you're fighting against publicity rights,
08:39 which are different state by state and country by country
08:41 or terms and conditions, which again,
08:43 are very, very hard to kind of get through quickly.
So by having a copyright, you avail yourself
of that existing apparatus.
08:49 And there are teams in all of the big platforms
that will respond to DMCA takedowns.
08:54 I think the other thing is actually the creative
08:57 and commercial uses that you can unlock.
08:58 So if you are an actor and you wanna license your likeness,
09:01 you can do that.
If you're a film director and you want to,
say with Star Trek, put some super fans into that content,
then you can actually do that now.
09:10 But only if you have the scan
09:12 that allows that sort of creative use.
09:14 So I think the principles as we see it are,
09:18 you need to have consent at the core.
09:19 You need to be looking at compensation.
09:21 You need to be giving control.
09:22 And copyright is a core way of doing that
09:24 in a kind of principled ethical way.
09:26 - Right.
09:27 Are there any other dangers?
We talked a little bit about that. Identity theft
is one, obviously, but what are some of the other dangers
09:31 of this technology?
09:32 - I mean, look, we're in 2024.
09:34 We know that there's an election coming, right?
09:36 We want to keep this sort of technology out of that space.
09:38 I think that's a combined responsibility
09:41 of people building the technology,
09:43 having these consent frameworks
09:44 and the platforms that host.
09:46 I think we're heading in the right direction,
09:47 but we need to keep talking about it.
And I think it's actually about building these creator rights
in order to get the outcomes that we want as a society.
09:56 - Great, Martin, we're just about out of time.
09:57 Thank you so much for coming
09:58 and showing us this amazing technology
09:59 and giving me your face for trying it on for a little bit.
10:02 I appreciate it.
10:03 It's good to be you for a few minutes.
10:05 - Thank you.