Transcript
00:00 Let's break down, though, what this so-called Gemini Leap means.
00:03 Mandeep Singh is with us, Bloomberg Intelligence Senior Tech Analyst, joining us.
00:07 Is it a big leap?
00:08 It is, and when you look at some of the benchmarks they showed in the paper around visual reasoning,
00:14 I mean, everyone has been focused on multimodality.
00:17 This was the true kind of model where you could see multimodality in action in terms of,
00:23 okay, we can do code, the model can also do image generation and visual reasoning,
00:28 which is what you see in Waymo.
00:31 I mean, when I think about, you know, why they're so successful with Waymo,
00:34 yes, they have been doing it for the longest, but also some of it is AI that's coming from their models,
00:40 and I think that was reflected in the paper.
00:42 And look at how far they've come in the past two years from that Bard launch
00:47 to now the Gemini 3 model being a frontier model, so really well executed,
00:52 and I think it was all on TPUs.
00:55 That's the other thing, right?
00:56 Right.
00:56 No NVIDIA GPUs were used for training, while everyone else still relies on NVIDIA for training,
01:03 so that's a big leap.
01:05 Mandy, this is interesting.
01:06 We were reading your research this morning.
01:08 I think we're going to bring it up on the screen.
01:09 So you're basically saying that if this is evidence of the success of the TPU, Google's custom chip,
01:16 that might free up Google Cloud or GCP to take their NVIDIA allocation and then put it to work for customers,
01:24 which is a good thing for their cloud business when it comes to external-facing customers.
01:28 That's right.
01:28 And so, look, Google is still buying NVIDIA chips.
01:33 In fact, they are one of the top three customers for NVIDIA.
01:37 And so when I look at how everyone is using their NVIDIA allocation, some of the workloads are training.
01:44 In fact, for Meta, everything is being consumed inside Meta with the family of apps for training and for inferencing and recommendation systems.
01:54 In the case of Alphabet, I mean, given everything internal is running on TPUs, Google Cloud is where they deploy a lot of the NVIDIA allocation,
02:02 whether it's the latest Blackwell or the prior versions, and that's where you can rent it, you can generate revenues, the same way as neoclouds are doing it.
02:10 And so from that perspective, I do think that cloud revenue could get a lift just because there is more availability of NVIDIA GPUs over there.