Transcript
00:00What am I going to get with 3.0?
00:02It's a bit of an overhaul here.
00:05Thank you so much, and thanks for having me.
00:06I'm excited to be back here.
00:07So 3.0 is a bunch of different things.
00:11The main thing is that we're evolving video
00:13from being kind of a static broadcast format, right?
00:15You record it once, and everybody essentially watches
00:18the exact same version of that video,
00:20into video that's conversational,
00:21and we're launching this new product called Video Agents,
00:23where you're going to be able to create your videos
00:25like you've always created your videos,
00:26but then you can insert these agents at different points in your video,
00:29and they can serve different purposes.
00:30If you're doing a training video, for example,
00:33you can have an agent that actually checks
00:35if the person watching the video has understood the content.
00:37If you're doing recruiting, for example,
00:39you could have an agent that interviews a candidate,
00:42gives them a case study, kind of on the spot.
00:44So it moves video away from being static
00:46to something that's much more dynamic and conversational.
00:49And then in the wake of that,
00:51we're also launching a whole bunch of new features
00:52to just make the core product better.
00:54So launching a new co-pilot,
00:55making it easier for people to create video
00:57in a chat-style interface,
00:58we're launching new avatar technologies
01:00where you can prompt new avatars into existence.
01:03You can take the avatar that you already have of yourself,
01:05and you can say, hey, I want to be on a boat,
01:06I want to be in a corporate office,
01:08I want to change my outfit.
01:09And it certainly just opens up
01:10a whole bunch of new creative possibilities.
01:13New possibilities.
01:14At a time when we're questioning the effectiveness
01:17of these AI agents
01:20and indeed adoption within enterprise, Victor,
01:22just talk to us about
01:23how you've been talking to your customers.
01:25We've had the MIT report saying 95% of pilots aren't working.
01:29How is it working within your clients?
01:33We've always had this mantra: utility over novelty.
01:35We've always said technology is amazing and it's cool,
01:38and of course, it's what we build our company around.
01:39But ultimately, we have to solve a problem for the end customer, right?
01:43And I think that the MIT report is pretty bang on.
01:45In my own personal experience,
01:47I think that like 80% of AI tools today simply just don't work.
01:5015% of them kind of stumbled their way there,
01:52and the last 5% actually work.
01:54When you look at our business
01:56and I think the kind of numbers that we're delivering,
01:58it's very clear that our product really truly works.
02:01We increased our NRR to 142%, up 13 points from 12 months ago.
02:07And we have, you know, four times as many customers
02:09paying more than $100K than we did 12 months ago.
02:11And all these things only happen
02:12because people actually get value out of the product, right?
02:14Now, I think what we've discovered at Synthesia
02:16is that as important as the models are, right?
02:18Like, of course, generating the avatars,
02:20generating these like video clips for people is really important.
02:22But ultimately, it's about a workflow.
02:24Why do you make a video?
02:25You make a video because you want to communicate something to someone.
02:28What we spent the last couple of years building out
02:30is a platform that helps you with that entire process.
02:32You create the video, you edit it
02:34in this sort of PowerPoint-style interface,
02:36you collaborate with your colleagues,
02:38you have content management, translation,
02:40we have a publishing platform with our own video player.
02:42And so what we've managed to do
02:43is to take that process and make it as fast as we can.
02:46You quantified the market for these tools
02:49and gave numbers and those that you feel don't work,
02:52and we as a team can go away and check that math.
02:55But for Synthesia 3.0,
02:57do you have some benchmarking data
02:59that is evidence of its performance
03:01and capability against the large field that is text-to-video?
03:06Well, I think the way you benchmark these things
03:12depends very much on the use case, right?
03:13If I'm creating a training video,
03:14that's a very different benchmark
03:15than creating a marketing video, for example.
03:18I think what we've seen very clearly,
03:19like 3.0, a lot of these things,
03:21we're literally releasing them as we go right now,
03:23so they haven't been in the market for 12 months.
03:26But what we see is that if you're doing training content,
03:28for example, engagement is increased by 30%
03:31versus sharing that as normal text.
03:33As for a benchmark against other text-to-video tools:
03:36What we're really good at is avatars.
03:37That's the thing that we deeply care about.
03:38We're now integrating other models as well.
03:40So if you want to use Veo 3 in the product,
03:42you can use that.
03:43We're adding a bunch of other models as well.
03:44So I think for us as a company,
03:46I don't really think it makes sense
03:47to benchmark against text-to-video tools, you know?
03:49For us, it's about the process
03:50of communicating something to someone,
03:52and that goes much deeper than just the models themselves.
03:55This week, the big focus has been on Meta's Vibes,
04:00a funny name,
04:02and the reporting that OpenAI is looking at a video tool
04:08in conjunction with being a social media platform.
04:11You've been on this program and discussed, you know,
04:14the risks of AI-generated video content
04:17in social media domains,
04:19but what's your reaction to those two moves
04:21by those two players?
04:23I think it's very predictable.
04:25I'm quoted, you know, I think four or five years ago
04:27in a book saying that I think by 2026,
04:3095% of all content on the internet
04:32is going to be AI-generated.
04:33I think that's where we're moving towards.
04:35Now, it feels odd.
04:37It feels weird, I think,
04:38that we're going to be watching AI-generated content,
04:40but I think, you know,
04:41ultimately, we'll care less about how something was produced.
04:43We'll care more about the content in itself.
04:45A lot of the content people make with AI today
04:47is what most people would call slop, right?
04:49It's not very high quality.
04:50It's kind of like engagement farming, engagement baiting,
04:53but I think that will eventually be replaced
04:54by people using these tools to create awesome content.
04:57There's so many great creators on the internet.
04:58They'll pick up these tools
04:59and they'll use them to create great content.
05:02As for the big platforms moving into this space,
05:04I think, again, highly predictable.
05:06These companies ultimately make their money off ads, right?
05:09And to launch an ad in one of these networks,
05:11you want that to be video in 2025
05:13because that performs way better.
05:16And these platforms,
05:17they want to offer you the tools
05:18to make those videos for those ads, right?
05:21Which is exactly like Google owning the platform
05:23to create AdWords for search engine marketing, for example.
05:26So I think it's very predictable
05:27they're going to move into this space.
05:28I think it's also predictable
05:29that they're going to offer these tools inside their apps.
05:31You know, it feels new maybe that they're adding AI video,
05:34but really they've been doing this for six, seven years,
05:36you know, like putting dog ears on yourself
05:38with face filters.
05:39All this stuff is roughly the same technology, right?
05:41So I think it's just like this sort of trajectory
05:44of more and more AI being part of the content creation flows,
05:48both in our personal lives,
05:49but also very much in our corporate lives.
05:50And boy, am I getting Sora 2-made videos coming my way,
05:54and the standalone app was announced just yesterday, Victor.
05:56But I'm interested in your story
05:59with, at the moment,
06:01avatars are being liked by NVIDIA, no less.
06:03We've said that you're already backed by NVIDIA and Adobe.
06:05Jensen just went on stage last week in the UK
06:07saying he's going to get into your next round.
06:09When's that next funding round happening?
06:13Time will show.
06:14You know, we're very appreciative
06:15of Jensen and our NVIDIA partnership.
06:17And I think, if anything,
06:19for us, it's a testament to the fact
06:21that we drive so much real value for our clients
06:23that we're way past the kind of demo stage.
06:26I think, you know, we're well capitalized.
06:27We've raised a bunch of funding rounds.
06:29Who knows when the next one is going to come?
06:31But, you know, we've given Jensen our word
06:33that he'll get to participate in it.