Natalie Monbiot, Head of Strategy, Hour One Moderator: Ellie Austin, Deputy Editorial Director, Live Media, FORTUNE
Transcript
00:00 Hello, everyone, and welcome back.
00:02 We have some really interesting conversations
00:05 ahead this afternoon, so let's get started.
00:08 Now, generative video is reimagining even
00:12 the way we create content.
00:14 Simple text can be turned into video presentations,
00:17 news content, and product demos in multiple languages
00:22 within just a matter of minutes.
00:24 And that's just the beginning of this.
00:27 This spring, Hour One, an AI video generation company,
00:31 launched Video Wizard, a feature that
00:34 automates the process of script writing and video creation
00:37 by having the user simply submit a prompt.
00:41 Here to show us how to use generative video
00:43 and to discuss what the future holds for the technology,
00:47 please welcome Hour One's head of strategy, Natalie Monbiot.
00:51 [MUSIC PLAYING]
00:55 [APPLAUSE]
00:57 Hello, Natalie.
00:58 Thank you so much for being with us.
00:59 Now, I alluded to it just then in the introduction,
01:01 but I'd love you to briefly run us
01:04 through Hour One's mission and possible use cases.
01:07 Yeah, so our mission at Hour One is to reinvent video production
01:11 and make it very easy for anybody,
01:13 without even having the skills to produce video,
01:16 to do so in just a few clicks.
01:19 And so maybe what we can do is jump into a quick example.
01:23 Let's go for it.
01:26 At Hour One, we're on a mission to replace cameras
01:28 with code for the next generation of video creation.
01:31 We replace the camera layer with AI
01:33 and combine all the generative AI tools
01:36 to enable professional video creation at your fingertips.
01:40 So you might recognize this as someone
01:43 that looks a bit like me.
01:44 This is my virtual twin.
01:45 She is entirely generated using AI.
01:48 A little bit of footage was submitted to our algorithm,
01:52 with my permission, to generate her--
01:55 and that's also my generated voice.
01:57 And I can also show you a little bit
02:00 about how this was made.
02:02 So once that's done, you can then
02:04 find my AI twin within our platform.
02:08 And so the platform has a selection,
02:10 a whole range of different people
02:12 that you could-- or virtual avatars that you could choose.
02:14 Exactly.
02:15 So you can see here, it's almost like a library of virtual humans
02:18 there on the left.
02:19 And for this video, I picked my own.
02:22 And I picked one in a style of expressiveness
02:25 that I felt was fitting to an introduction at the Fortune AI
02:28 Brainstorm.
02:29 And in this case, I entered the text
02:31 because I knew what I wanted to say.
02:33 But as you said in your intro, Ellie,
02:34 it's also possible to just generate a video from a prompt.
02:39 And that includes backgrounds that are relevant.
02:42 And in this case, you can drag and drop
02:44 your logo, your brand assets, pick your voice.
02:47 In this case, I'm picking my own AI-generated voice.
02:51 And then just in a few clicks, you're
02:53 able to create this video.
02:55 And with this panel on the left, you
02:57 can see a range of virtual humans.
02:59 And for each-- so you can create your own virtual human
03:02 or use a stock character from the platform.
03:05 And what we see this as being is the home
03:08 for your virtual human because we're constantly
03:11 enabling new functionalities and features
03:13 to not just create and protect your virtual human,
03:16 but to actually activate it in different levels
03:19 of lifelikeness and expressiveness
03:21 appropriate to the context in which you're presenting it.
03:23 And so these stock characters that we see in the library,
03:25 are they the avatars of actors that you've hired?
03:28 Could you talk a bit about that?
03:30 And then I'd also love to know how you can go to bed at night
03:33 and sleep easy knowing that your virtual avatar is only
03:35 going to be used in contexts that you
03:37 feel comfortable with.
03:38 Absolutely.
03:39 So every single one of the virtual humans on our platform--
03:42 and there have been hundreds
03:43 captured over the last four and a half years
03:46 that Hour One has existed--
03:48 is backed by an actual human.
03:51 And each of these humans has an agreement with Hour One
03:54 that allows, first of all, with permission for their likeness
03:57 to be generated.
03:59 And then also the agreement determines
04:01 use of their characters.
04:02 So for stock characters on the platform,
04:05 they basically agree to being featured in content
04:09 from our commercial customers.
04:12 And they get paid micropayments or a passive income
04:15 for their appearances.
04:17 In the case of my virtual twin and how
04:19 I can sleep easy at night, I have sole use--
04:22 and the team at Hour One, I've given them permission as well--
04:25 to generate content using my likeness.
04:27 But through agreements, we protect and sort of restrict
04:33 who has access to your virtual twin.
04:35 Now, I understand avatar Nat could speak
04:39 in multiple different languages.
04:40 I think Hour One supports 80 languages.
04:43 Is that correct?
04:44 And if so, can you explain how you pick the languages
04:47 to integrate, given that I think there's between 6,000
04:49 and 7,000 languages in the world?
04:51 How do you choose which ones to focus on?
04:52 Well, we aim to enable all of them.
04:55 So if there's any demand or we can
04:58 imagine any kind of future demand for these languages,
05:01 then we do enable that.
05:02 And just a note on that, so the fact
05:05 that my avatar is there right now,
05:07 I can create infinite amounts of content featuring my avatar
05:11 or a stock character if that's more appropriate.
05:14 But what gets me really interested and excited
05:16 about the future of the technology--
05:18 and actually, say, the future, but it exists today--
05:20 is the ability to kind of augment my skills.
05:23 So I can now have my--
05:24 I can be communicating in different languages
05:27 that I don't know or don't know so well.
05:29 And I can show you how that looks.
05:32 I never thought I'd be able to speak Mandarin,
05:34 spoken by over a billion people.
05:36 How the world has--
05:38 [NON-ENGLISH SPEECH]
05:39 [LAUGHTER]
05:40 [END PLAYBACK]
05:43 And so presumably, you have quite an international client
05:48 base at the moment.
05:49 Yeah.
05:49 So I'm a big fan of the French language.
06:09 Yeah.
06:09 So we can-- so yeah.
06:12 So basically--
06:12 [VIDEO PLAYBACK]
06:13 - Welcome to Avatar Activate, where
06:14 you can make your visual--
06:15 - That's OK.
06:15 We can let you talk and then come back to answer.
06:17 - Simply drag and drop your visual asset and the text
06:19 you wanted to say, and boom, here we are.
06:22 [VIDEO PLAYBACK]
06:23 [NON-ENGLISH SPEECH]
06:25 [LAUGHTER]
06:37 - So I have a very keen Avatar Activate
06:39 that wanted to jump in before we talked about our customer base.
06:42 But just to explain what's happening here is that--
06:45 so in my original avatar, the one that you just saw,
06:48 I was captured in a professional setting.
06:50 And it's designed well for static scenarios
06:53 and static context where I'm presenting.
06:55 But what we're working to do now and to help
06:57 fulfill our mission of reinventing video production
07:00 to enable anybody to be able to star in videos
07:04 is that we want to remove that friction point of having
07:06 to be in a professional setting to create your avatar.
07:09 So now what you can do, as my Avatar Activate actually just
07:14 sort of explained, is that you can take any video asset that
07:17 already exists, or you can just shoot
07:21 a little bit of footage on your phone, which
07:23 is what happened here.
07:24 And I can take that visual data, and I can basically
07:27 drop in any text that I want and just suddenly bring it to life.
07:32 So we're kind of lowering the barriers across the board
07:35 to enable this type of video creation at large.
07:38 We're kind of coming to the end of the sessions,
07:40 but I've got a few more questions I want to get in.
07:42 What's the cost of this?
07:43 If I'm a client and I come to you
07:44 and I want to use this technology,
07:45 what's the entry point?
07:46 So the entry point is really low.
07:49 It's kind of like single-digit dollars
07:51 if you want to just kind of get started.
07:52 You can even make a free video if you want to just
07:55 to kind of get a taste for it.
07:57 And then it escalates depending on kind of volume of video,
08:00 how many custom avatars you want to create,
08:03 sort of enterprise packages.
08:05 We work with a lot of companies of sort of enterprise size,
08:09 including J&J, AstraZeneca, Berlitz,
08:15 a number of different companies of that scale,
08:17 and we do kind of custom packages.
08:19 What's your biggest concern about the technology?
08:22 So I think it's really important that we kind of proceed
08:25 in a way where we're being very thoughtful about how
08:29 to protect and create ethically sourced avatars,
08:34 which is kind of a new term that we're banding around
08:37 the office at hour one.
08:39 First of all, that avatars are actually based on real people
08:42 that actually consent to having their avatar created
08:45 and activated.
08:47 And so that is something that we are really doubling down on,
08:51 and sort of related to that is privacy-enabled avatars
08:56 whereby you can kind of opt out of being in the data set
08:59 if you want to.
09:00 So we're kind of building in different levels
09:02 of being able to do that.
09:04 And very finally, where do you anticipate this technology
09:06 being in five years, and what use cases might there be
09:08 that we can't envisage right now, perhaps?
09:11 Yeah, so maybe I'll just flick to this one quickly,
09:15 right to the end.
09:19 If we can go to the final video, and I can just explain.
09:23 So with all of our avatars, as I've said,
09:27 each one is actually a real person behind it.
09:30 And what we're seeing is people like this.
09:33 This is actually a guy called Ian Beecraft.
09:35 He's a real-world futurist.
09:37 And his virtual twin has actually been employed
09:42 by a broadcaster, a 24-hour news broadcaster that, by the way,
09:47 doesn't have any studios, doesn't have any real anchors
09:51 on staff, and this business was able to exist
09:54 because of this technology.
09:56 I heard earlier in part of the intro to this conference,
09:58 like, what does this mean for real-world,
10:00 like, anchor jobs, journalists?
10:02 In this case, how we see the future is that people
10:06 will be able to augment themselves.
10:08 They'll be able to scale themselves to create--
10:11 to augment their presence.
10:13 In this case, Ian is a person that trades on his appearances.
10:17 He can actually be in more places at once.
10:19 He can speak languages that he doesn't speak ordinarily.
10:22 And so we see this as the ability to kind of, like,
10:25 scale yourself and also to create new businesses,
10:29 business units, whole new ideas with this new medium
10:32 of generative video.
10:33 Natalie, this is fascinating.
10:35 Thank you so much for being here today.
10:37 Thank you.
10:38 Thank you.
10:39 [APPLAUSE]