Transcript
Host: Let's frame the opportunity, Lisa. The market reaction is very clear. But for AMD and the AI industry at large, what do you think this represents?

Lisa Su: Look, this is a huge milestone for AMD. We are thrilled with the partnership with the OpenAI team. And it's also a huge moment for the AI industry, because when you get right down to it, you need more AI compute. That's where we are today. Compute is the foundation for all of the intelligence we can get from AI, and we are a compute provider. We have spent years on our roadmap, and we've spent years working with OpenAI and the team. Together, we're now embarking on a massive buildout of six gigawatts of AI compute. It's a big deal for us, for our shareholders, for our teams, for the partnership, and for the overall AI ecosystem.

Host: Greg, I said at the top that the focus is inference. I think it's really important to be specific about what you will do with this capacity, so literally explain that part. And I'm conscious that in the first instance the target is one gigawatt, and then eventually six gigawatts. But what will you use it for?

Greg Brockman: I think the world continues to underestimate the amount of demand for AI compute. We've seen this explosion of demand with things like ChatGPT; we're at 800 million weekly active users now, and this product didn't even exist three years ago. We're in a position where we cannot launch features, we cannot launch new products, simply for lack of computational power. And we see these models continuing to get exponentially better. I think we're heading to a world where so much of the economy is going to be lifted up and driven by progress in AI, and so by default we're heading to a world that looks like a compute desert: there's just not enough compute to go around. We're trying to build as much as possible as quickly as possible. We're starting with one gigawatt simply because you've got to start somewhere. But honestly, we're building as fast as we possibly can, trying to bring as much computational power to bear for the economy and for the world.
Host: Lisa, this is such a big commitment to Instinct in particular. As a customer, does it make OpenAI the largest for that particular product?

Lisa Su: This is certainly the largest deployment that we have announced, by far: six gigawatts of compute. As Greg said, we're going to start with the first gigawatt in the second half of 2026 on our next-generation MI450 chip. The thing to understand is that these types of partnerships take years to really get comfortable with the idea that we're going to go all in together. And this is an all-in partnership in terms of building out the AI compute that OpenAI needs for everything they're offering to the world. So yes, it's a huge deal. It also says a lot about how much needs to come together for this entire ecosystem to operate. Certainly there's a lot of engineering work: our teams are working together on hardware and software, we're securing the supply chain, and all of those elements are set up and ready to deliver on this massive commitment.

Host: Greg, talk us through the players that you also need to lean on. This has been years in the making, as you say, with AMD, but what other cloud providers were involved? How are you thinking about this working with an Oracle or others out there?

Greg Brockman: We really think of this as an industry-wide effort. In general, we think compute is something that requires the entire supply chain to wake up and start building much more than people were planning on. That starts with energy, to get far more power built; things like nuclear, I think, are going to be very important to come online. The cloud providers are an important part of this as well. We're going to be deploying AMD in our own data centers, and we'll be deploying them together with cloud providers. We have a deal with Oracle, and there are lots of other cloud providers out there. You can see that we just want compute, as much compute as possible. We think this is important for the economy, for the nation, for humanity. So we're working with everyone in this whole industry to get as much compute power online as quickly as we can.

Host: Lisa, I'm sorry, specifics. Where is this data center going to be? Is it one single site? Is it Oracle that will partner with you on this?
Lisa Su: What this really is, is an announcement of what AMD and OpenAI are going to do together. OpenAI has a lot of partners in terms of where they deploy; I imagine a lot of it will be in cloud service providers. That's really up to OpenAI, and Greg and Sam and the team. But the way to think about it is, for this amount of compute, it's going to have to be in a lot of different places. It's a massive amount.
Host: Multiple locations.
Lisa Su: Multiple locations, and I would imagine multiple providers, to get this online as fast as possible.
Host: Greg, there is a lot of focus on where OpenAI is going to get the money to fund all of this. Sam Altman's big-picture commitment is well documented, and the numbers, to his mind, are in the trillions. But have you specifically thought about debt financing for this relationship with AMD? Have you thought about doing a specific equity raise? You are very committed across multiple projects.
Greg Brockman: Look, the way I would look at this is that AI revenue is growing faster than almost any product in history, and ultimately, at the end of the day, the reason this compute power is so important, so worthwhile for everyone to build, is because the revenue ultimately will be there. Now, as a company that is trying to move as fast as we can, we look at everything: equity, debt, creative ways of financing all of this. That's been a huge focus for us over the past couple of years, thinking about how we can possibly build the amount of compute required to transform this whole economy into an AI-powered economy. So I think you'll see lots of creative ideas. But fundamentally, at the end of the day, it is because we believe that...
Host: I'm sorry to jump in and interrupt. And Caro, just forgive me on this one. The condition for AMD issuing the stock to OpenAI requires you to spend money, basically, because you have to deliver that gigawatt of capacity first. Lisa, I have to ask you if you have assurances that OpenAI is good for it.
Lisa Su: Let me be clear: this deal is a win for AMD, a win for OpenAI, and a win for our shareholders. That's the way we put this together. I have full confidence in OpenAI, in Sam, Greg, Sarah. This is a massive opportunity for us right now, right here. It's about who has the most compute and how fast we can get it online, and we're committing to doing this together. The fact is, as OpenAI buys chips, that's great for AMD: our revenue goes up, our earnings go up. We expect it will also be very, very accretive to our shareholders from day one. And as we do that, we're very happy to have OpenAI as a deep partner, and we win together. It's a virtuous cycle in how we build out this big vision of having all this compute out there.

Host: Right. And yet we still have questions, as you were just talking about, Greg, on some of the other supply chain elements. You're talking about the need for nuclear, for power. Are you feeling confident enough that the rest of the supply chain is there? Is this going to be U.S.-manufactured from your perspective, or are you also looking at building out internationally with AMD?

Greg Brockman: We've been looking at all options. Our preference, and really the core thing that we try to do, is to build as much as possible in the U.S. You can see the commitments we've made over the past year, $500 billion of investment in the U.S., and that's not stopping; we're continuing to build. I do think international is also going to be important, for the world to have compute. I think compute is going to become this national-security strategic resource, and every country is going to need computational power. So we are really not limiting our sights in terms of where to build. But we do think it is important that the U.S. leads in this technology, leads in computational power, and we're expanding the supply chain. You can see that we've been working with partners across the globe to actually meet the demand that we expect in the upcoming years.

Host: Lisa, the manufacturing of these chips, will you look to Intel at all for it, do you think, in the future?

Lisa Su: As you know, the supply chain is something that we work on very, very meticulously. I think we have a very strong supply chain, and we're certainly deeply partnered with TSMC across it. To that earlier question, we're absolutely prioritizing building in the United States, because I think that's super important. This is the U.S. AI stack; we want to have as much of it in the U.S. as possible. And we continue to look at how we ensure there will be a strong supply chain going forward.

Host: Greg, Sam posted on X that this deal with AMD is incremental to what's already being done with NVIDIA. As Lisa knows, I've spent quite a lot of time looking at the MI family and the newer generations of products to come. Is there a very clear, specific benefit to using AMD technology for inference relative to the capabilities of NVIDIA? Or do you just see it broadly as some sort of diversifying factor?

Greg Brockman: I would look at it this way: there's a huge fixed cost to getting AI models running on any platform. When we look at what's out there, getting AI training to work is a huge, huge amount of lift, and that's something we've really only done the work for on NVIDIA. But for inference, the barrier to entry is much lower. And one thing we've found is that the work Lisa and team have been doing on the MI450 series is looking like it's going to be a really incredible chip. The way these things work, there are niches for different balances of memory and computational power. So as we have a diversity of workloads, we're finding that having a diversity of chips also really accelerates what we're able to do.
Host: Lisa, at the beginning of this conversation, I said there are both operational and financial milestones to be met. And Greg explained, you've got to start somewhere, so in the first instance, one gigawatt. But would you draw out the pathway to that first gigawatt? It seems like you're prepared to move quickly here.
Lisa Su: Yeah, absolutely. And maybe, Ed, if I can just build on something that Greg said: I think he's absolutely right. We're a believer that there is a diversity of workloads, and there will be a diversity of workloads across customers, models, and use cases. From that standpoint, we feel really good about how we're positioned. We love the work here because, frankly, OpenAI is the ultimate power user of our chips and tests us in very good ways. That's what gives us confidence that the technology is there. And then to your point about milestones, yes, this is clearly a case where we are tied to each other. The first gigawatt of deployment is super important. We're going to start that in the second half of next year, and we're going to build on from there. And it really is not just the technology, but commercial milestones, adoption milestones, and how we proliferate the capability going forward. I'm looking forward to building this as fast as possible. We're already working with a number of cloud service providers who are also very active on our technology, and I think this is a great catalyst to get the industry to build faster.

Host: "Tied to each other" is such an interesting turn of phrase. And Greg, look, you are seeing more AI users, chipmakers, and designers becoming more financially tied to each other. Is this going to continue? Is this the step forward for how you see this financing going forward?

Greg Brockman: I really see the world transitioning to this AI-powered economy. The interesting thing is that within OpenAI, we've really seen what it's like when your progress is limited and accelerated, as two sides of the same coin, by computational power: there are teams within OpenAI whose ability to deliver really is tied to the amount of compute they get. I think we're heading to a world where that is how the whole economy will function. And we're starting to see it, with people having access to better AI tools: if you're a coder, you're able to do far more if you have access to better AI models. We're heading to a world where, if you can have ten times as much AI power behind you, you will probably be ten times more productive. So I think this whole industry is waking up to the fact that we have just not planned for this moment, where this explosion in AI demand is happening all the way from the power to the silicon, and the whole industry has to find a way to rise to meet the occasion.

Host: Lisa, you have given us a look into the future before about how you see the total addressable market, the industry. Now that the ink is dry with OpenAI and Greg, are you rethinking either your bigger-picture analysis of the market for AI accelerators and GPUs, or do you see AMD now having an improved position in that market relative, of course, to your friends at NVIDIA?
Lisa Su: Again, as I've told you before, I believe this is a huge market. We have sized the AI accelerator TAM at over $500 billion over the next few years, and some might say I was a little conservative in that TAM analysis. The way to think about it is that there's so much need for compute; you just heard it from Greg. This is a huge pie, and you're going to see the need for more players coming into it. From my standpoint, this is a big validation of our technology and our capability. As much as we love the work with OpenAI, we're working with a lot of other customers as well. There's a lot of excitement in the industry around MI450. So we're ready for it.