00:00 Can you help us understand what 100,000 chips means?
00:06 I mean, it could be potato chips as far as I'm concerned, right?
00:10 Although far more expensive.
00:12 How much compute do you get from 100,000 Blackwell GPUs?
00:19 You know, if you were to ask me that a few years ago, I would have said that's a lot.
00:23 But now we are hitting clusters of about 500,000 chips.
00:27 So this is just getting bigger and bigger every day.
00:30 But, frankly speaking, you would have expected the Amazon deal because of the revised Microsoft-OpenAI pact,
00:38 under which OpenAI now has the ability to go to any cloud provider; before, Microsoft had to sign off on it.
00:46 Anurag, again, every day it's a new deal, be it Amazon and OpenAI or Microsoft and IREN.
00:53 And with these deals being inked, how many of them are actually in production?
00:58 How much data center capacity is actually shifting to gen AI use versus just general cloud?
01:04 So this is a very good question.
01:06 And the answer really varies depending on who you are asking.
01:10 If you're thinking about Oracle building a data center for OpenAI, that's going to take some time.
01:15 But today's deal basically says OpenAI will have those GPUs available on Amazon's cloud infrastructure,
01:23 and that could actually be very near term.
01:26 So if you remember, Microsoft had an agreement with OpenAI to be its exclusive cloud provider.
01:33 During that period, OpenAI was able to go to a handful of others, such as Oracle or CoreWeave.
01:40 But Amazon was not in that mix, because Amazon is the biggest cloud provider with the highest market share.
01:47 After the agreement was revised and OpenAI had the ability to go anywhere, we predicted last Thursday that there was a very strong case this would happen,
01:56 because AWS has the largest-scale distribution network and it has all the chips that people want.
02:02 So this is a very logical step for OpenAI: go to AWS and add to what they're already doing.
02:08 We see so many deals with companies saying they'll get X number of NVIDIA chips, and NVIDIA obviously doesn't have an endless supply of chips.
02:17 So how does Jensen Huang decide, you get 100,000, you get 500,000, I'll give you a million? Because that seems to be very important right now.
02:26 No, this is always the case.
02:28 So I've talked quite a bit about this with Kunjan, who covers NVIDIA for us.
02:32 In his model, he doesn't see chips being a big bottleneck, which means NVIDIA has the ability to produce as many chips this year as demand calls for.
02:41 Now, how NVIDIA distributes them also depends on how big the cloud provider is.
02:47 When it comes to Amazon and Microsoft, I don't think they will have a problem getting a lot of the chips.
02:52 I think another group of companies that will get a lot of the NVIDIA chips is the neoclouds, companies like CoreWeave, because they are, you could say, a de facto distribution arm for NVIDIA.
03:04 So I think chips are not the roadblock right now.
03:06 It's usually power.
03:08 And that power demand, again, is another thing we hear deal activity swirling around, be it nuclear this or nuclear that.
03:16 That's also a timeline that takes a very, very long time to get up and running.
03:21 So what happens when you have this mismatch of data centers coming online, but the power supply not yet there?
03:28 Yeah, I think that is the bigger problem, just because it takes a lot longer to get power online.
03:33 One of our energy analysts explained to us that if you want to open a data center in Virginia, for example, it's going to be very difficult because of the power constraints there.
03:43 But if you go to Michigan or some of the other Midwestern states, where there could be excess power at that point, there is a chance you can build data centers there in a short period of time.