Transcript
00:00 The way that I look at this is it's a very interesting go-to-market channel for you,
00:05 a sales channel. Think about all of the clients that IBM has and how you've tried to grow the
00:10 company. Explain how people will access LPUs through this or through the cloud matrix.
00:16 Absolutely. It's an extraordinary opportunity for both of us. IBM is going to have their
00:21 sellers sell a Groq SKU, and so now you'll be able to directly access our speed and the
00:27 advantages that we offer. You could think of it a little bit like offering broadband in an era
00:33 where dial-up wasn't fully rolled out and people were still trying to connect to the internet.
00:37 Our LPUs are just significantly faster, but we also keep the cost down. Just imagine
00:42 if you were to offer broadband and you charged more per bit of data sent over the line:
00:50 it would be uneconomical. Broadband increases the demand. With agentic use cases, it's particularly
00:56 important to reduce the latency. You don't want to ask a question, wait, and come
01:00 back 10 minutes later. You'd rather get the answer in under a minute. Rob, under this arrangement with Jonathan,
01:08 does IBM make any sort of financial investment into Groq, or is there some kind of sales or revenue
01:15 split? Explain the economics of this deal for you guys. Big picture is we have a lot of momentum
01:21 in AI with watsonx. As we said on our earnings call last quarter, it's a $7.5 billion book of business.
01:29 And we're trying to solve the client problem of how they deploy AI faster. So this partnership
01:36 is all about what Jonathan said, which is 5X performance at 20% of the cost. We've seen it
01:43 with watsonx running on Groq. And so we will be distributing Groq as part of our go-to-market.
01:49 And there's a revenue share as part of that. But we are really excited because clients are
01:55 already seeing an impact on how they're deploying AI because of the integration of our technology
02:00 together. Let's talk about that, Rob, a little bit more, because you're the man who's in charge of the
02:05 software business. You're also really responsible for the overall revenue and profitability of your
02:09 company. So help us understand why Groq was the obvious choice. How is it helping your clients
02:14 get answers faster on the inference side of things? We looked at every possibility in the market. And
02:21 the clients are looking for significant performance: something that changes how your call center
02:27 operates or how your supply chain runs. And then you combine that with a fraction of the cost.
02:33 Suddenly the economics make sense. AI does have a cost problem, and we think this breaks through that.
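As a back-of-envelope check on the "5X performance at 20% of the cost" figure quoted above, those two ratios compound into a 25x price-performance gain. A quick sketch with the numbers taken from the quote (illustrative, not measured benchmarks):

```python
# Back-of-envelope on the quoted claim; figures are from the interview
# quote, not measured data.
speedup = 5.0        # "5X performance"
relative_cost = 0.2  # "20% of the cost"

# Performance per dollar improves by speedup divided by relative cost.
price_performance_gain = speedup / relative_cost
print(f"{price_performance_gain:.0f}x performance per dollar")  # -> 25x performance per dollar
```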
02:39 At IBM, we've said we're going to drive four and a half billion dollars of productivity by the end of this
02:44 year. That's another example of AI truly having an impact. And the number one question I get from
02:49 clients now is, how are you doing that at IBM, and can you help us do that? And we think the combination
02:55 of IBM and Groq can make this a reality for any company. Let's dig into that a little bit
03:03 now with you, Jonathan, because the integration with watsonx Orchestrate,
03:06 what does that look like on your side? How does that happen, and happen seamlessly?
03:11 So the watsonx API is available for anyone to use today. It'll be invisible to most users. It'll
03:19 simply work. We have a compatible API, and this is something we've been working on. We will also
03:25 work on some lower-level integrations with vLLM, which is a technology that IBM is very deeply involved
03:32 in. But it should just be transparent. You should just get more speed. Just imagine one day you come
03:37 home: you had dial-up, and now you have broadband, and it costs less. Rob, where's the demand coming
03:45 from on your side? Is it IBM Granite, or some other agentic workload that clients want to run using the
03:51 Groq LPUs? Are these public sector names? Are they private sector SMEs? I'm trying to understand who
03:57 you're serving with it. As often happens, I would say financial services have been early adopters.
04:03 But the thing that has changed in the market in the last six months is that everything is moving to
04:09 multi-model. We have IBM models that we open-source, which are the Granite models.
04:14 We announced a partnership with Anthropic. We have partnerships with Mistral and Llama,
04:18 just to name a few. What is incredible about what Jonathan and team have built is that
04:22 any model can run and get an instant improvement running on the LPUs from Groq. So I think this is
04:30 a combination of a multi-model world and accelerating inference with Groq. I think this is a great
04:36 combination. Jonathan, does this capacity already exist, or are you still supply constrained? You've got
04:44 to go out and build it, whether in Saudi Arabia, Finland, or here in the States. So the entire world is supply
04:51 constrained. And I would actually expect that to continue for at least the next five to 10 years
04:55 when it comes to AI. Our advantage is that we have a supply chain that actually ramps much faster.
05:01 So customers will be able to come to IBM, put in an order, and we will be able to fulfill that
05:08 faster than you would be able to with other technologies. But the supply constraints are real.
05:13 And this is another reason to start working with IBM sooner. The sooner you get access to that capacity,
05:18 the sooner you're going to have it. I can't tell you how many startups and other companies
05:23 come to us. And they are looking for capacity because some of them are actually growing 10,
05:28 20, or even 30 percent per week or per month, which is an astronomical growth rate. But by approaching
05:35 us early, we can build to your needs. You were just mentioning, Rob, all the partnerships you
05:43 have when it comes to LLMs and the offerings that you're intertwining with yours. Will you go to
05:49 others to ensure that inference is as fast as possible? Or is this exclusive with Groq?
05:54 We are open to working with anybody in the AI ecosystem. Around what we're doing specifically
06:01 on the acceleration with Groq, we want to lean into this partnership. That's why this is the one that
06:05 we've announced today: because we have confidence working together with Groq. As Jonathan mentioned,
06:12 we're also enabling some of the lower-level technologies in open source, like vLLM. So this
06:17 is the right place to be when it comes to inference. But when you think broadly about what's happening in
06:22 AI, we have many companies working with us on agents. Last week, we announced that S&P Global is now
06:28 running on watsonx Orchestrate, as an example. So we're always open to new partnerships.
06:32 And let's just talk, Jonathan, about the go-to-market strategy here of teaming up with
06:39 the age-old juggernaut that is IBM, which has so many deep relationships across global enterprises.
06:47 Is that how you're going to work this going forward, teaming up with companies that have
06:51 those legacy relationships? Or do you still go out there and win the business yourself?
06:56 So I would say this is a peanut-butter-and-jelly sort of relationship, in the sense that
07:00 oftentimes when we meet with C-level executives, those C-level executives turn to their tech teams
07:07 and ask them to evaluate Groq. And I've been in meetings where the CTO did that and the response
07:14 from the person was, "I already use Groq. It's my default for everything." So we already have the bottoms-up
07:20 adoption. We have 2.3 million developers already building on us. For comparison, OpenAI has 4 million. Now,
07:26 you take those deep relationships from IBM and the fact that IBM is a trusted partner who's been
07:32 delivering for decades; you put those two together, and that's an amazing go-to-market motion.
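The "compatible API" Jonathan describes in the interview is, in practice, an OpenAI-style chat-completions interface, so existing client code can simply be repointed at a different base URL. A minimal sketch of what that looks like; the endpoint URL and model name below are illustrative assumptions, not details stated in the interview:

```python
import json
import os
import urllib.request

# Assumed OpenAI-compatible chat-completions endpoint (illustrative URL).
GROQ_URL = "https://api.groq.com/openai/v1/chat/completions"

def build_request(prompt, model="llama-3.1-8b-instant"):
    """Build an OpenAI-compatible chat-completions payload.

    The model name is a placeholder; any model served by the endpoint works.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("What is an LPU?")

# Only send the request if an API key is configured; otherwise just show
# the payload that any OpenAI-compatible client would produce.
api_key = os.environ.get("GROQ_API_KEY")
if api_key:
    req = urllib.request.Request(
        GROQ_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
else:
    print(json.dumps(payload, indent=2))
```

Because the request shape is the same one other OpenAI-compatible providers accept, this is the sense in which the integration is "invisible to most users": only the base URL and credentials change.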