Transcript
00:00 Caroline and I have been discussing all morning, a bit like the Salesforce question: what does Modal do? I think the understanding is that it's more analogous to a Together AI, and maybe nearer to a neocloud, but just explain that basic question to us to start.
00:16 Yeah, totally. And it's great to be here, by the way. So we build the infrastructure layer for a lot of AI applications. If you think about all these AI applications, people building things like generative media or large language models or vibe coding platforms, they all need a software layer to run on. We basically provide that software layer by aggregating a lot of underlying physical infrastructure, a lot of GPUs all over the world, and then make it easy for engineers to build things on top of that, whether that's related to training or inference or batch processing or code execution or many other things. So we have a lot of customers like Meta, Lovable, Suno, Ramp, Scale, a lot of happy developers who basically build things using our SDK on top of this infrastructure platform.
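To make the "build things using our SDK" point concrete, here is a minimal sketch of what running a GPU function on Modal looks like with its Python SDK. The app name, GPU type, and placeholder model are illustrative choices, not anything referenced in the interview.

```python
# Illustrative sketch of a Modal app; names and model are placeholders.
import modal

# Container image with the dependencies the remote function needs.
image = modal.Image.debian_slim().pip_install("transformers", "torch")

app = modal.App("inference-example", image=image)

@app.function(gpu="A10G")  # request a GPU; Modal schedules it on its aggregated cloud capacity
def generate(prompt: str) -> str:
    from transformers import pipeline
    pipe = pipeline("text-generation", model="gpt2")  # small placeholder model
    return pipe(prompt, max_new_tokens=32)[0]["generated_text"]

@app.local_entrypoint()
def main():
    # This runs locally; `generate` executes remotely on Modal's infrastructure.
    print(generate.remote("Hello from Modal"))
```

Running `modal run script.py` executes the local entrypoint on your machine while the decorated function runs in Modal-managed cloud containers.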
01:01 So that sounds very much like AWS and AWS Lambda. Is that fair? What's the landscape here?
01:10 There are some similarities, but AWS Lambda is built for more traditional infrastructure. What we realized when we started this company was that a lot of traditional infrastructure, things like AWS Lambda or Kubernetes or Docker, just isn't built for AI. So we realized that in order to build this new layer of infrastructure to support all these applications, we basically had to throw out a lot of the existing infrastructure and build a whole new stack. But we're very happy customers of AWS. We run on AWS, we run on many other hyperscalers, and we have great partnerships with them. What they're good at is running all the physical infrastructure, all the data centers. What we're very good at is innovating in the layer above, with the developer experience, building this platform that enables engineers to build all these AI applications on top of it.
01:54 So you've got $80 million more in funding. We've raised more than $80 million. And now that takes you to $111 million. What do you need all the money for? What are your absolute costs, Eric? What's keeping you up?
02:06 Yeah, for sure. So far we've actually been pretty capital efficient, but we see a lot of demand out there. There are so many customers building these applications, and we're starting to see a lot of adoption from later-stage companies as well. A lot of enterprise companies are now taking what were maybe previously research prototypes and putting them into production, and they need a platform to run these applications. And that's when they come to you. We think there's just a massive opportunity; the market is exploding, and there's so much demand out there. So we're going to take this money and, first and foremost, hire a lot of engineers, because we need to invest in this platform. We're building a very deep platform. But we'll also invest, of course, in sales and marketing.
02:42 How hard is it to find the right talent? How expensive is it?
02:45 It's hard. It's hard. I mean, we're in New York, so I think we're actually a little bit sheltered from the very worst of it. But yeah, it's very hard to find these people who can solve these hard infrastructure problems.
02:58 Eric, we're going to bring back some video; we're just showing the platform. It still comes back to access to infrastructure, the GPUs, the underlying technology, right? Are you taking on the capital burden of that? Who's actually paying for access, to lease or buy out the capacity?
03:17 No, we're trying to be very capital efficient, in the sense that we don't actually own any underlying physical infrastructure. Like I mentioned, we run on top of a lot of different clouds. We run on all the major hyperscalers, and we're adding a bunch of neoclouds. Our view is that, in order to move fast and expand very quickly, it's much better to build on existing physical infrastructure while we focus on the layer above, the software layer. That enables us to move and expand very quickly in a more capital-efficient way.
03:41 Eric, you're going global. You've got a lot of love from the likes of Scale AI, but Lovable as well, and that's more of an overseas play. How much are you able to broaden this?
03:52 A lot more. And it's great to see customers like Lovable. I actually grew up in Sweden, and Lovable is Swedish. But yeah, most of our customers are in the US. We're seeing a very wide range of use cases, like I mentioned: things like generative media or large language model applications. But some of the stuff we've seen more recently includes a lot of computational biotech. I talked to a customer the other day who is literally using Modal to cure cancer. And then weather forecasting, and many other applications. It's super exciting.