Transcript
00:00We're at the very, very early stage of the intelligence Big Bang.
00:04Being a multi-planet species greatly increases the probable lifespan of civilization
00:08or consciousness and intelligence, both biological and digital.
00:13I think we're quite close to digital superintelligence.
00:15If it doesn't happen this year, next year for sure.
00:22Please give it up for Elon Musk.
00:30Elon, welcome to AI Startup School.
00:39We're just really, really blessed to have your presence here today.
00:43Thanks for having me.
00:44So, from SpaceX, Tesla, Neuralink, XAI, and more,
00:51was there ever a moment in your life before all this where you felt,
00:55I have to build something great?
00:57And what flipped that switch for you?
01:00Well, I didn't originally think I would build something great.
01:03I wanted to try to build something useful,
01:06but I didn't think I would build anything particularly great.
01:09If you said probabilistically, it seemed unlikely,
01:12but I wanted to at least try.
01:14So, you're talking to a room full of people who are all technical engineers,
01:19often some of the most eminent AI researchers coming up in the game.
01:25Okay.
01:25I think I like the term engineer better than researcher.
01:33I mean, I suppose if there's some fundamental algorithmic breakthrough,
01:38it's research, but otherwise it's engineering.
01:41Maybe let's start way back.
01:43I mean, when you were, this is a room full of 18 to 25-year-olds.
01:48It skews younger because the founder set is younger and younger.
01:53Can you put yourself back into their shoes when, you know, you were 18, 19, you know,
01:59learning to code, even coming up with a first idea for Zip2?
02:04What was that like for you?
02:05Yeah, back in '95, I was faced with a choice of either do, you know,
02:12grad studies, a PhD at Stanford in materials science,
02:15actually working on ultracapacitors for potential use in electric vehicles,
02:19essentially trying to solve the range problem for electric vehicles,
02:22or try to do something in this thing that most people had never heard of called the internet.
02:29And I talked to my professor, who was Bill Nix in the material science department,
02:35and said, like, can I defer for the quarter because this will probably fail
02:42and then I'll need to come back to college.
02:45And then he said, this is probably the last conversation we'll have.
02:49And he was right.
02:52So, but I thought things would most likely fail,
02:55not that they would most likely succeed.
02:56And then in '95, I wrote basically, I think, the first or close to the first
03:08maps, directions, white pages, and yellow pages on the internet.
03:14I just wrote that personally.
03:16I didn't even use a web server.
03:17I just read the port directly.
03:19Because I couldn't afford one, and I couldn't afford a T1.
03:24The original office was on Sherman Avenue in Palo Alto.
03:30There was, like, an ISP on the floor below.
03:33So I drilled a hole through the floor and just ran a LAN cable directly to the ISP.
03:38And, you know, my brother joined me, and another co-founder, Greg Kouri, who passed away.
03:47And we, at the time, we couldn't even afford a place to stay.
03:52So we just, the office was 500 bucks a month.
03:54So we just slept in the office and then showered at the YMCA on Page Mill and El Camino.
03:59So, yeah, I guess we ended up doing a little bit of a useful company, Zip2, in the beginning.
04:10And we did build a lot of really good software technology.
04:21But we were somewhat captured by the legacy media companies.
04:27Knight Ridder, the New York Times, Hearst, whatnot, were investors and customers and also on the board.
04:35So they kept wanting to use our software in ways that made no sense.
04:41So I wanted to go direct to consumers.
04:43Anyway, it's a long story, dwelling too much on Zip2.
04:45But I really just wanted to do something useful on the Internet.
04:50Because I had two choices.
04:52Like, do a PhD and watch people build the Internet or help build the Internet in some small way.
04:59And I was like, well, I guess I can always try and fail and then go back to grad studies.
05:03And, anyway, that ended up being, like, reasonably successful.
05:08It sold for, like, $300 million, which was a lot at the time.
05:12These days, that's, like, I think the minimum impulse bid for an AI startup is, like, $1 billion.
05:17It's, like, there's so many frigging unicorns.
05:21It's, like, a herd of unicorns at this point.
05:24You know, unicorns, a billion-dollar situation.
05:27There's been inflation since, so quite a bit more money, actually.
05:30Yeah, I mean, like, in '95, you could probably buy a burger for a nickel.
05:35Well, not quite.
05:36But, I mean, yeah, there has been a lot of inflation.
05:39But, I mean, the hype level on AI is pretty intense, as you've seen.
05:45You know, you see companies that are, I don't know, less than a year old getting sometimes billion-dollar or multi-billion-dollar valuations, which I guess could pan out and probably will pan out in some cases.
06:02But it is eye-watering to see some of these valuations.
06:07Yeah.
06:08What do you think?
06:09I mean, well, I'm pretty bullish, personally.
06:14I do.
06:15I'm pretty bullish, honestly.
06:16So, I think the people in this room are going to create a lot of the value that, you know, a billion people in the world should be using this stuff.
06:24And we're scratching the surface of it.
06:28I love the Internet story in that, even back then, you know, you are a lot like the people in this room back then in that, you know, the heads of all the CEOs of all the legacy media companies look to you as the person who understood the Internet.
06:44And a lot of the world, you know, the corporate world, like the world at large that does not understand what's happening with AI, they're going to look to the people in this room for exactly that.
06:55It sounds like, you know, what are some of the tangible lessons?
06:57It sounds like one of them is don't give up board control or be careful about having a really good lawyer.
07:02I guess for my first startup, the big mistake, really, was having too much shareholder and board control from legacy media companies, who then necessarily see things through the lens of legacy media, and they'll kind of make you do things that seem sensible to them but really don't make sense with the new technology.
07:28I should point out that I didn't actually at first intend to start a company.
07:35I tried to get a job at Netscape.
07:38I sent my resume into Netscape, and Marc Andreessen knows about this, but I don't think he ever saw my resume, and nobody responded.
07:47So, and then I tried hanging out in the lobby of Netscape to see if I could like bump into someone, but I was like too shy to talk to anyone.
07:54So I'm like, man, this is ridiculous.
07:56So I'll just write something for myself and see how it goes.
07:58So it wasn't actually from the standpoint of like, I want to start a company.
08:01I just want to be part of building, you know, the internet in some way.
08:06And, and since I couldn't get a job at an internet company, I had to start an internet company.
08:11Anyway, yeah, I mean, AI will so profoundly change the future.
08:17It's difficult to fathom how much. But, you know, the economy, assuming things don't go awry
08:29and AI doesn't kill us all and itself,
08:32then you'll ultimately see an economy that is not just 10 times bigger than the current economy.
08:43Ultimately, if we, or whatever our future machine descendants or mostly machine descendants are, become, say, a Kardashev scale two civilization or beyond,
08:56we're talking about an economy that is thousands of times, maybe millions of times bigger than the economy today.
09:07So, um, yeah, I mean, I did sort of feel a bit like, you know, when I was in DC, taking a lot of flak for, like, getting rid of waste and fraud, which was an interesting side quest, uh, as side quests go.
09:22Um, but, uh, I got to get back to the main quest.
09:26Yeah.
09:27I got to get back to the main quest here.
09:29Um, so back to the main quest.
09:32Um, so, but I did feel, you know, a little bit like this.
09:37You know, it's like fixing the government.
09:39It's kind of like, say the beach is dirty and there's, like, some needles and feces and trash, and you want to clean up the beach.
09:46But then there's also this like thousand foot wall of water, which is a tsunami of AI.
09:52Like, and, uh, how much does cleaning the beach really matter if you've got a thousand foot tsunami about to hit?
09:58Not that much.
10:00Oh, we're glad you're back on the main quest.
10:02It's very important.
10:04Yeah.
10:04Back to the main quest.
10:05Um, building technology, which is, uh, what I like doing.
10:10Um, it's just so much noise.
10:11Like this, the signal to noise ratio in politics is terrible.
10:15So, um.
10:17I mean, I live in San Francisco, so you don't need to tell me twice.
10:20Yeah, DC is, you know, kind of, I guess it's all politics in DC. But, um, if you're trying to build a rocket or cars, or you're trying to have software that compiles and runs reliably, then you have to be maximally truth-seeking or your software or your hardware won't work.
10:40Um, like there's no, you can't fool math, like math and physics are rigorous judges.
10:45Um, so I'm used to being in like a maximally truth seeking environment.
10:49Um, and, and that's definitely not politics.
10:51Um, so anyway, I'm glad to be back in, you know, technology.
10:55I guess I'm kind of curious, going back to the Zip2 moment: you had an exit worth hundreds of millions of dollars.
11:03I mean, I, I got $20 million.
11:05Right.
11:06Okay.
11:07So you solve the money problem.
11:08Um, and you basically took it and you kept rolling with X.com, which merged with Confinity and became PayPal.
11:16Uh, yes.
11:17Uh, I kept the chips on the table.
11:20Um, so not everyone does that.
11:22A lot of the people in this room will have to make that decision.
11:25Actually, what drove you to jump back into the ring?
11:28Well, I think I felt, with Zip2, we'd built, like, incredible
11:32technology, but it never really got, uh, used.
11:35Um, you know, I think at least from my perspective, we had better technology than say Yahoo or anyone else, but it was constrained by our customers.
11:44Um, and, uh, so I wanted to do something where, okay, we wouldn't be constrained by our customers, go direct to consumer.
11:51Um, and that's what ended up being like X.com PayPal, uh, essentially X.com merging with Confinity, which together created PayPal.
12:01And then, actually, the sort of PayPal diaspora might've created more companies than probably anything in the 21st century.
12:14You know, so many talented people were at the combination of Confinity and X.com.
12:20Um, so I just felt like we kind of got our wings clipped somewhat with Zip2.
12:27And it's like, okay, what if our wings aren't clipped and we go direct to consumer? That's what, um, PayPal ended up being.
12:33Um, but, uh, yeah, I got that, like, $20 million check for my share of Zip2. At the time, I was living in a house with four housemates
12:47Um, and, um, had like, I don't know, 10 grand in the bank.
12:51And then this check arrives, uh, of all places, in the mail.
12:56Um, and then my bank balance went from 10,000 to 20 million and 10,000.
13:01Um, you know, like, well, okay.
13:03Um, so I have to pay taxes on that and all, but, uh, then I ended up putting, um, almost all of that into, uh, X.com.
13:10And as you said, right.
13:11Just kind of keeping almost all the chips on the table.
13:14Um, and, um, yeah.
13:19And then after PayPal, I was like, well, I, I was kind of curious as to why we had not sent anyone to Mars.
13:27Uh, and I went on the, went on the NASA website to find out when we're sending people to Mars.
13:32And there was no date.
13:33I thought maybe it was just hard to find on the website.
13:36Uh, but in fact, there, there was no real plan to send people to Mars.
13:41So then, uh, you know, this is such a long story, so I don't want to take up too much time here, but, um...
13:48I think we're all listening with rapt attention.
13:50So, I was actually on the Long Island Expressway with my friend, Adeo Ressi.
13:55We were like, uh, housemates in college.
13:57And Adeo was asking me what I was going to do.
14:00What am I going to do after PayPal?
14:02And I was like, it's like, I don't know.
14:03I guess maybe I'd like to do something philanthropic in space because I didn't think I could actually do anything commercial in space.
14:09Cause that seemed like the purview of nations.
14:11Um, so, um, but you know, I'm kind of curious as to when we're going to send people to Mars.
14:17And, and that's when I was like, ah, it's not on the website.
14:20And then I started digging. There's nothing on the NASA website.
14:23So then I started digging in, and I'm definitely summarizing a lot here, but my first idea was to do a philanthropic mission to Mars called Life to Mars,
14:38where we would send a small greenhouse with seeds in dehydrated nutrient gel, land that on Mars, and hydrate the gel so the seeds would grow.
14:49And then you'd have this great sort of money shot of green plants on a red background.
14:54Um, for the longest time, by the way, I didn't realize 'money shot' is, I think, a porn reference. But anyway, the point is that that would be the great shot of green plants on a red background.
15:07And to try to inspire, uh, you know, NASA and the public to, to send astronauts to, to Mars.
15:14As I learned more, I came to realize... Along the way, by the way, I went to Russia in, like, 2001 and 2002 to buy ICBMs, which was quite an adventure.
15:27You know, you go and meet with Russian high command and say, I'd like to buy some ICBMs.
15:31Um, this was to get to space as a rocket.
15:34Not to, not to, not to nuke anyone, but, uh, but they had, they had to, uh, as a result of arms reduction talks, they had to actually destroy a bunch of their, their big nuclear missiles.
15:46So I was like, well, how about if we take two of those, you know, minus the nuke, um, add an additional, uh, upper stage for, for Mars.
15:55Um, but it was kind of trippy, you know, being in Moscow in 2001, negotiating with like the Russian military to buy ICBMs.
16:05Like, that's crazy.
16:06Um, but they also kept, like, raising the price on me.
16:14So, like, literally it's kind of the opposite of what a negotiation should do.
16:18So I was like, man, these things are getting really expensive.
16:21And then I came to realize that actually the problem was not that there was insufficient will to go to Mars, but there was no way to do so, uh, without breaking the budget, you know, even breaking the NASA budget.
16:32So that's where I decided to start SpaceX, to advance rocket technology to the point where we could send people to Mars.
16:42Um, and, uh, that was in 2002.
16:46So that wasn't, you know, you didn't start out, uh, wanting to start a business.
16:51You wanted to start just something that was interesting to you that you thought humanity needed.
16:57And then, as you sort of, you know, like a cat pulling on a string, the ball just sort of unravels, and it turns out this could be a very profitable business.
17:10I mean, it is now, but there had been no prior example of a rocket startup really succeeding. There had been various attempts to do commercial rocket companies, and they all failed.
17:23So, again, starting SpaceX was really from the standpoint of, like, I think there's less than a 10% chance of being successful, maybe 1%, I don't know.
17:35Um, but if a startup doesn't do something to advance rocket technology, it's definitely not coming from the big defense contractors, because they just impedance match to the government.
17:47And the government just wants to do very conventional things.
17:51So it's either coming from a startup or it's not happening at all.
17:54So, like, a small chance of success is better than no chance of success.
18:00And so, yeah, I started SpaceX in mid-2002 expecting to fail.
18:09I'd, like I said, probably 90% chance of failing.
18:12And even when recruiting people, I didn't, like, try to make out that it would succeed. I said, we're probably going to die.
18:19Uh, but there's a small chance we might not die.
18:22And this is the only way to get people to Mars and advance the state of the art.
18:27And, um, and then, uh, I ended up being chief engineer of the rocket, uh, not because I wanted to, but because I couldn't hire anyone who was good.
18:37So like none of the good sort of chief engineers would join.
18:42Cause it was like, this is too risky.
18:43You were going to die.
18:44And, uh, so then I ended up being chief engineer of the rocket and, you know, the first three flights did fail.
18:51So it's a bit of a learning exercise there and, um, fourth one fortunately worked, but if the fourth one hadn't worked, uh, I had no money left.
19:00And that would have been, it would have been curtains.
19:02So it was a pretty close thing.
19:04If the fourth launch of Falcon 1 had not worked, it would have been just curtains.
19:08And we would have just joined the graveyard of prior rocket startups.
19:13So it was like, like my estimate of success was not far off.
19:17Um, we just, we made it by the skin of our teeth.
19:20Um, and, um, Tesla was happening sort of simultaneously.
19:25Um, like, 2008 was a rough year, because by mid-2008, call it summer 2008,
19:33the third launch of SpaceX had failed, a third failure in a row.
19:40Uh, the Tesla financing round had failed.
19:43And so Tesla was going bankrupt fast.
19:45Um, it was just, uh,
19:50it's like, man, this is grim.
19:51Uh, this is going to be a cautionary tale, an exercise in hubris.
20:00Probably throughout that period.
20:01A lot of people were saying, you know, Elon is a software guy.
20:05Why is he working on hardware?
20:07Why would you, yeah.
20:08Why would he choose to work on this?
20:10Right?
20:11100%.
20:12So you can look at it, cause, you know, the press of that time
20:16is still online.
20:17You can just search it.
20:18And, and they kept calling me internet guy.
20:20Um, so, like, internet guy, a.k.a. fool, is attempting to build a rocket company.
20:27Um, so, you know, we got ridiculed quite a lot.
20:36Um, and it does sound pretty absurd.
20:39Like internet guy starts rocket company.
20:41Doesn't sound like a recipe for success, frankly.
20:44So I didn't hold it against them.
20:46I was like, yeah, you know, admittedly it does sound improbable.
20:49And I agree that it's improbable.
20:50Um, but fortunately the fourth launch worked, and NASA awarded us,
20:58uh, a contract to resupply the space station.
21:02Um, and I think that was like maybe, I don't know, December 22nd or it was like right before
21:12Christmas.
21:13Um, because even the fourth launch working wasn't enough to succeed.
21:16We also needed a big contract from NASA to keep us alive.
21:19So I got that call from, like, the NASA team.
21:26And they said, we're awarding you one of the contracts to resupply the space station.
21:31I like literally blurted out.
21:33I love you guys, which is not normally, you know, what they hear.
21:37Um, cause it's usually pretty, you know, sober, but I was like, man, this is a company saver.
21:43And then, uh, we closed the Tesla financing round on the last hour of the last day that
21:50it was possible, which was 6 PM, December 24th, 2008.
21:54Um, we would have bounced payroll two days after Christmas if that round hadn't closed.
21:58So that was a nerve wracking end of 2008.
22:01That's for sure.
22:02I guess from your PayPal and Zip2 experience jumping into these hardcore hardware startups,
22:09you know, it feels like one of the through lines was being able to find and eventually
22:14attract the smartest possible people in those particular fields.
22:18You know, what would... I mean, the people in this room, most of the people here,
22:23I don't think have even managed a single person yet.
22:25They're just starting their careers.
22:27What would you tell to, you know, the Elon who's never had to do that yet?
22:32I generally think, try to be as useful as possible.
22:36It may sound trite, but it's so hard to be useful, especially to be useful
22:41to a lot of people.
22:42Uh, where, you know, the area under the curve of total utility is, like, how useful
22:47have you been to your fellow human beings times how many people?
22:50Um, it's almost like, like the physics definition of true work.
22:54It's incredibly difficult to do that.
22:55And I think if you aspire to do true work, your probability of success is much higher.
23:01Um, like, like don't aspire to glory, aspire to work.
23:06How can you tell that it's true work?
23:09Like, is it external?
23:11Is it like what happens with other people or, you know, what the product does for people?
23:15Like what, you know, what is that for you when you're looking for people to come work for you?
23:19Like what, you know, what's the salient thing that you look for?
23:22Or if there are a few.
23:23That's a good question.
23:24I guess it's, I mean, in terms of your, of your end product, you just have to say like, well, if this thing is successful,
23:30how useful will it be to how many people?
23:32And, um, that's what I mean.
23:35And then, you know, whether you're a CEO or any role in a startup, you do whatever it takes to succeed.
23:42And just always be smashing your ego. Like, internalize responsibility.
23:50Um, like, a major failure mode is when your ego to ability ratio is much greater than one (>> 1).
23:59You know, like, if your ego to ability ratio gets too high, then you're going to basically break the feedback loop to reality.
24:07Um, and in AI terms, you'll break your RL loop.
24:14So you don't want to break your RL loop.
24:16You want to have a strong RL loop, which means internalizing responsibility and minimizing ego.
24:20And you do whatever the task is, no matter whether it's, you know, grand or humble.
24:25So, I mean, that's kind of why I actually prefer the term engineering, as opposed to research.
24:35And I actually don't want to call xAI a lab.
24:38I just want it to be a company.
24:40Um, it's like, whatever the simplest, most straightforward,
24:48ideally lowest-ego terms are, those are generally a good way to go.
24:54Um, you want to just close the loop on reality hard.
24:58Um, that's a super big deal.
25:01I think everyone in this room really looks up to everything you've done, you being
25:06sort of a paragon of first-principles thinking in the stuff you've done.
25:11Um, how do you actually determine your reality?
25:16Cause that seems like a pretty big part of it.
25:19Like, other people, people who have never made anything, non-engineers, uh, sometimes journalists
25:25who've never done anything, like, they will criticize you.
25:28But then clearly you have another set of people who are builders who have very high, you know,
25:35sort of area under the curve who are in your circle.
25:38Like, you know, how should people approach that?
25:41Like what has worked for you and what would you pass on?
25:44Like, you know, to, to X, to your children?
25:47Like, you know, what do you tell them when you're like, you need to make your way in this
25:51world here, you know, here's how to construct a reality that is predictive from first principles.
25:56Well, the tools of physics are incredibly helpful to understand and make progress in any field.
26:05Um, first principles obviously just means, you know, break things down to the fundamental axiomatic elements that are most likely to be true,
26:13and then reason up from there as cautiously as possible, as opposed to reasoning by analogy or metaphor.
26:19Um, and then simple things like thinking in the limit.
26:25Like if you extrapolate, you know, minimize this thing or maximize that thing, thinking in the limit is, is very, very helpful.
26:32Um, I'd use all the tools of physics.
26:34Um, they, they apply to any field.
26:37Um, this is like a superpower actually.
26:41Um, so you can take, say, for example, rockets.
26:45You could say, well, how much should a rocket cost?
26:49Um, the typical approach people would take is to look historically at what rockets have cost and assume that any new rocket must cost somewhat similar to prior rockets.
27:01A first-principles approach would be: you look at the materials that the rocket is comprised of.
27:06So if that's aluminum, uh, copper, carbon fiber, uh, steel, whatever the case may be.
27:12Um, and ask: how much does that rocket weigh, what are the constituent elements, and how much do they weigh?
27:18What is the material price per kilogram of those constituent elements?
27:22And that sets the actual floor on what a rocket can cost.
27:26It can asymptotically approach the cost of the raw materials.
27:30Um, and then you realize, oh, actually the raw materials of a rocket are only maybe one or two percent of the historical cost of a rocket.
27:40So the manufacturing must necessarily be very inefficient,
27:44if the raw material cost is only one or two percent.
27:49That would be a first-principles analysis of the potential for cost optimization of a rocket.
27:55And that's before you get to reusability.
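A minimal sketch of that first-principles cost floor, using purely illustrative masses and commodity prices; none of these specific numbers come from the talk:

```python
# Hedged sketch: first-principles cost floor for a rocket airframe.
# All masses and $/kg prices below are illustrative assumptions, not real figures.

materials_kg = {        # rough dry-mass breakdown (assumed)
    "aluminum": 20_000,
    "steel": 8_000,
    "carbon_fiber": 6_000,
    "copper": 2_000,
}
price_per_kg = {        # rough commodity prices in $/kg (assumed)
    "aluminum": 4.0,
    "steel": 2.0,
    "carbon_fiber": 60.0,
    "copper": 10.0,
}

raw_material_cost = sum(materials_kg[m] * price_per_kg[m] for m in materials_kg)
historical_vehicle_cost = 40_000_000   # assumed price of a comparable legacy rocket

print(f"raw material floor: ${raw_material_cost:,.0f}")
print(f"fraction of historical cost: {raw_material_cost / historical_vehicle_cost:.1%}")
# If that fraction comes out around one or two percent, then manufacturing, not
# materials, is where nearly all of the cost (and the optimization headroom) lives.
```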
27:57You know, to give a sort of AI example, I guess.
28:00Uh, last year, when, for xAI, we were trying to build a training supercluster, we went to the various suppliers to ask.
28:11We said, this was the beginning of last year, that we needed a hundred thousand H100s to be able to train coherently.
28:19Um, and, uh, their estimates for how long it would take to complete that were 18 to 24 months.
28:26It's like, well, we need to get that done in six months,
28:29or we won't be competitive.
28:34So then, if you break that down, what are the things you need?
28:38Well, you need a building, you need power, you need cooling.
28:41Um, we didn't have enough time to build a building from scratch.
28:45So we had to find an existing building.
28:47So we found a factory that was no longer in use in Memphis that used to build, uh, Electrolux products.
28:54Um, but the input power was 15 megawatts and we needed 150 megawatts.
28:59So we rented generators and had the generators on one side of the building, and then we had to have cooling.
29:06So we rented about a quarter of the mobile cooling capacity of the U.S. and put the chillers on the other side of the building.
29:11Uh, that didn't fully solve the problem, because the power variations during training are very, very big.
29:19Power can drop by 50% in a hundred milliseconds, which the generators can't keep up with.
29:25So then we added Tesla Megapacks and modified the software in the Megapacks to smooth out the power variation during the training run.
29:35Um, and then there were a bunch of networking challenges,
29:39cause the networking cables, if you're trying to make a hundred thousand GPUs train coherently, are very, very challenging.
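A rough back-of-envelope on the power side, to show how those numbers hang together; the per-GPU draw, server overhead, PUE, and ride-through time below are all assumptions for illustration, not figures from the talk:

```python
# Hedged sketch: facility power for a ~100k-GPU training cluster.

num_gpus = 100_000
gpu_power_w = 700          # assumed per-GPU draw under load (H100 SXM class)
server_overhead = 1.5      # assumed CPUs, NICs, fans, power conversion per server
pue = 1.3                  # assumed power usage effectiveness (cooling, losses)

it_load_mw = num_gpus * gpu_power_w * server_overhead / 1e6
facility_mw = it_load_mw * pue
print(f"IT load ~{it_load_mw:.0f} MW, facility ~{facility_mw:.0f} MW")  # ~105 MW IT, ~135-140 MW facility

# Power smoothing: if load can swing ~50% within ~100 ms, as described, a battery
# buffer has to absorb or supply that swing faster than generators can ramp.
swing_mw = facility_mw * 0.5
ride_through_s = 30        # assumed seconds of buffering while generators adjust
energy_mwh = swing_mw * ride_through_s / 3600
print(f"swing ~{swing_mw:.0f} MW; ~{energy_mwh:.2f} MWh of fast storage for {ride_through_s}s ride-through")
```

With numbers in that range, the roughly 150-megawatt requirement and the need for battery smoothing on top of generators both fall out of simple arithmetic.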
29:46Um, almost, it sounds like, uh, almost any of those things you mentioned, uh, I could imagine someone telling you very directly, no, you can't have that.
29:55You can't have that power.
29:56You can't have this.
29:57Uh, and it sounds like one of the salient pieces of first principles thinking is actually, let's ask why let's, you know, figure that out.
30:05And actually let's challenge the person across the table.
30:09And if I don't get an answer that I feel good about, I'm not going to let that no stand.
30:18Is that, I mean, that feels like something that, you know, everyone, if someone were to try to do what you're doing in hardware, hardware seems to uniquely need this.
30:27In software, we have lots of, you know, fluff and things that, you know, it's like we can add more CPUs to that.
30:34It'll be fine.
30:35But in hardware, it's, it's just not going to work.
30:37I think these general principles of first-principles thinking apply to software and hardware, apply to anything, really.
30:43Um, I'm just using kind of a hardware example of how we were told something is impossible.
30:49But once we broke it down into the constituent elements, we need a building, we need power, we need cooling, we need power smoothing,
30:58then we could solve those constituent elements.
31:01Um, and then we just ran the networking operation, to do all the cabling, everything, in four shifts, 24/7.
31:11And, and I was like sleeping in the data center and also doing cabling myself.
31:15Um, and, and there were a lot of other issues to solve.
31:19Um, you know, nobody had done a training run with a hundred thousand H100s training coherently last year.
31:28I mean, maybe it's been done this year.
31:29I don't know, but then we ended up doubling that to 200,000.
31:34And so now we've got 150,000 H100s, 50K H200s, and 30K GB200s in the Memphis training center.
31:46And we're about to bring 110,000 GB200s online, um, at a second data center also in the Memphis area.
31:54Is it your view that, you know, pre-training is still working, the scaling laws still hold, and whoever wins this race will have basically the biggest, smartest possible model that you could distill?
32:08Well, there are various elements to competitiveness for large AI.
32:17For sure, the talent of the people matters.
32:22Um, the scale of the hardware matters, and how well you're able to bring that hardware to bear.
32:27So you can't just order a whole bunch of GPUs; you can't just plug them in.
32:32You've got to get a lot of GPUs and have them train coherently and stably.
32:39Um, then it's like, what unique access to data do you have?
32:44I guess distribution matters to some degree as well.
32:47Like how do people get exposed to your AI?
32:49Those are critical factors if it's going to be a large foundation model that's competitive.
32:54Um, you know, as many have said, my friend Ilya Sutskever has said, we've kind of run out of pre-training data, of human-generated data.
33:08You run out of tokens pretty fast, certainly of high-quality tokens.
33:13Um, and then you essentially need to create synthetic data, and be able to accurately judge the synthetic data that you're creating, to verify:
33:29is this real synthetic data, or is it a hallucination that doesn't actually match reality?
33:36Um, so achieving grounding in reality is tricky, but we are at the stage where there's more effort put into synthetic data.
33:46Um, and, like, right now we're training Grok 3.5, which has a heavy focus on reasoning.
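A generic sketch of that generate-then-verify pattern for synthetic data; this is not xAI's actual pipeline, and the generator and verifier here are hypothetical stand-ins for a model and a grounded checker (a solver, unit tests, a simulator, and so on):

```python
# Generic sketch of a synthetic-data loop with verification.
from typing import Callable, Iterable

def build_synthetic_dataset(
    prompts: Iterable[str],
    generate: Callable[[str], str],          # proposes an answer (may hallucinate)
    verify: Callable[[str, str], bool],      # checks the answer against ground truth
) -> list[tuple[str, str]]:
    dataset = []
    for prompt in prompts:
        candidate = generate(prompt)
        if verify(prompt, candidate):        # keep only answers that check out
            dataset.append((prompt, candidate))
    return dataset

# Toy usage: a deliberately flawed "generator" checked by a grounded verifier.
prompts = ["2+2", "3*7", "10-4"]
flawed_generate = lambda p: str(eval(p) + (1 if p == "3*7" else 0))
grounded_verify = lambda p, a: a == str(eval(p))
print(build_synthetic_dataset(prompts, flawed_generate, grounded_verify))
# -> [('2+2', '4'), ('10-4', '6')]  (the hallucinated answer for 3*7 is filtered out)
```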
33:54Going back to your physics point, uh, what I heard for reasoning is that, uh, hard science, particularly physics textbooks are very useful for reasoning.
34:03Whereas, um, I think researchers have told me that, uh, social science is totally useless for reasoning.
34:09Uh, yes, that's probably true.
34:14Um, so, yeah, something that's going to be very important in the future is combining deep AI, in the data center or supercluster, with robotics.
34:30Uh, so, you know, things like the Optimus humanoid robot.
34:35Incredible.
34:36Yeah.
34:37Optimus is awesome.
34:38There are going to be so many humanoid robots, and robots of all sizes and shapes.
34:43But my prediction is that there will be more humanoid robots by far than all other robots combined by maybe an order of magnitude, like a big difference.
34:53Um, and, um,
34:56Is it true that you, you're planning a robot army of a sort?
35:00Whether we do it, or, you know, whether Tesla does it, you know, Tesla works closely with xAI.
35:07Um, like, you've seen how many humanoid robot startups there are.
35:12Like, I think Jensen Huang was on stage with a massive number of robots, you know, robots from different companies.
35:21I think there was like a dozen different humanoid robots.
35:24So, I mean, I guess, you know, part of what I've been fighting, and maybe what has slowed me down somewhat, is that I don't want to make Terminator real.
35:35You know, so I've been sort of, I guess, at least until recent years, dragging my feet on AI and humanoid robotics.
35:46And then I sort of came to the realization that it's happening whether I do it or not.
35:51So you've really got two choices: you can either be a spectator or a participant.
35:57And so like, well, I guess I'd rather be a participant than a spectator.
36:01Um, so now it's, you know, pedal to the metal on humanoid robots and, um, digital super intelligence.
36:09So I guess, you know, there's a third thing that, uh, everyone has heard you talk a lot about that I'm really a big fan of, you know, becoming a multi-planetary species.
36:17Where does this fit? You know, this is all, you know, not, not just a 10 or 20 year thing, maybe a hundred year thing.
36:23Like it's a multi, you know, many, many generations for humanity kind of thing.
36:27You know, how do you think about it? There's, you know, AI, obviously there's embodied robotics and then there's being a multi-planetary species.
36:37Does everything sort of feed into that last point or, you know, what, what are you driven by right now for the next 10, 20 and a hundred years?
36:45Geez, a hundred years? Man, I hope civilization's around in a hundred years.
36:50If, if it is around, it's going to look very different from civilization today.
36:54Um, I mean, I'd predict that there's going to be at least five times as many humanoid robots as there are humans, maybe 10 times.
37:05Um, and one way to look at the progress of civilization is percentage completion of the Kardashev scale.
37:13So if you're, you know, Kardashev scale one, you've harnessed all the energy of a planet.
37:19Now, in my opinion, we've only harnessed maybe one or two percent of Earth's energy.
37:26Uh, so we've got a long way to go to be Kardashev scale one.
37:30Uh, then Kardashev scale two, you've harnessed all the energy of a sun, which would be, I don't know, a billion times more energy than Earth, maybe closer to a trillion.
37:42Um, and then Kardashev three would be all the energy of a galaxy. We're pretty far from that.
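For a sense of the numbers, one common way to put a figure on this is Sagan's interpolated Kardashev formula, K = (log10(P) - 6) / 10 with P in watts; the power estimates below are rough, commonly cited values, not figures from the talk:

```python
# Hedged sketch: Sagan's interpolation of the Kardashev scale.
import math

def kardashev(power_watts: float) -> float:
    return (math.log10(power_watts) - 6) / 10

humanity_now = 2e13       # ~20 TW of primary power use today (rough estimate)
earth_sunlight = 1.7e17   # sunlight intercepted by Earth (rough estimate)
sun_output = 3.8e26       # total solar luminosity
galaxy = 1e37             # order of magnitude for a whole galaxy

print(f"humanity today:  K ~ {kardashev(humanity_now):.2f}")    # ~0.73
print(f"planetary scale: K ~ {kardashev(earth_sunlight):.2f}")  # ~1.1
print(f"stellar scale:   K ~ {kardashev(sun_output):.2f}")      # ~2.1
print(f"galactic scale:  K ~ {kardashev(galaxy):.2f}")          # ~3.1
print(f"sun vs. Earth-intercepted sunlight: ~{sun_output / earth_sunlight:.0e}x")
```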
37:49So we're at the very, very early stage of the intelligence Big Bang.
37:55I hope, in terms of being multi-planetary, like, I think we'll have enough mass transferred to Mars within roughly 30 years to make Mars self-sustaining, such that Mars can continue to grow and prosper.
38:11Even if the resupply ships from Earth stop coming.
38:14Um, and that, that greatly increases the probable lifespan of civilization or consciousness or intelligence, both biological and digital.
38:25Um, so that's why I think it's important to become a multi-planet species.
38:30And I'm somewhat troubled by the Fermi paradox.
38:32Like why have we not seen any aliens?
38:35And it could be because intelligence is incredibly rare.
38:39Um, and maybe we're the only ones in this galaxy.
38:42Um, in which case,
38:44intelligence and consciousness is this tiny candle in a vast darkness.
38:50And we should do everything possible to ensure that tiny candle does not go out.
38:56And being a multi-planet species or making consciousness multi-planetary, uh, greatly improves the probable lifespan of civilization.
39:05And it's the next step before going to other star systems.
39:09Um, once you at least have two planets, then you've got a forcing function for the improvement of space travel.
39:15Um, and that ultimately is what will lead to consciousness expanding to the stars.
39:23It could be that, um, the Fermi paradox dictates once you get to some level of technology, you destroy yourself.
39:30How do we save ourselves?
39:32How do we actually, what would you prescribe to, I mean, a room full of engineers?
39:37Like what can we do to prevent that from happening?
39:40Yeah.
39:41How do we avoid the great filters?
39:42One of the great filters would obviously be global thermonuclear war.
39:46Uh, so we should, we should try to avoid that.
39:50Um, I guess building benign AI robots, AI that loves humanity, and, you know, robots that are helpful.
40:04Um, something that I think is, uh, extremely important in building AI is, is a very rigorous adherence to truth, even if that truth is politically incorrect.
40:16Um, my intuition for what could make AI very dangerous is if you force AI to believe things that are not true.
40:27How do you think about, you know, there's sort of this argument for open, uh, open for safety versus closed for competitive edge.
40:34I mean, I think the great thing is you have a competitive model.
40:37Many other people also have competitive models.
40:39And in that sense, you know, we're sort of off of maybe the worst timeline. What I'd be worried about is, you know, a fast takeoff that's only in one person's hands, which might, you know, sort of collapse a lot of things.
40:52Whereas now we have choice, which is great.
40:55How do you think about this?
40:56Yeah.
40:57I do think there will be several deep intelligences, maybe at least five, maybe as many as 10.
41:07Um, I'm not sure there are going to be hundreds, but probably, like, maybe there'll be 10 or something like that.
41:17Um, of which maybe four will be in the U.S.
41:22Um, so I don't think it's going to be any one AI that has a runaway capability.
41:33Um, but yeah, so several deep intelligences.
41:40What will these deep intelligences actually be doing?
41:43Will it be scientific research or trying to hack each other?
41:48Probably all of the above.
41:49Hmm.
41:50Um, I mean, hopefully they will discover new physics, and I think they're definitely going to invent new technologies.
42:00Um, like, I think we're quite close to digital superintelligence.
42:07It may happen this year.
42:10And if it doesn't happen this year, next year, for sure.
42:13A digital superintelligence, defined as smarter than any human at anything.
42:19Well, so how do we direct that to sort of super abundance?
42:22You know, we have, we could have robotic labor.
42:25We have cheap energy intelligence on demand.
42:28You know, is that sort of the white pill?
42:30Like where do you sit on the spectrum?
42:32And are there tangible things that you would encourage everyone here to be working on to make that white pill actually reality?
42:40I think, I think it most likely will be a good outcome.
42:46Um, I guess I'd sort of agree with Geoff Hinton that maybe it's a 10 to 20% chance of annihilation.
42:52Uh, but look on the bright side, that's 80 to 90% probability of a great outcome.
42:57Um, so, yeah, I can't emphasize this enough.
43:05A rigorous adherence to truth is the most important thing for AI safety.
43:11Um, and obviously empathy for, uh, humanity and life as we know it.
43:18We haven't talked about Neuralink at all yet, but I'm curious, you know, you're working on closing the input and output gap
43:26between humans and machines.
43:28Yeah.
43:29Uh, how critical is that to AGI, ASI?
43:32And, you know, once that link is made, can we not only read, but also write?
43:37Neuralink is not necessary to solve digital superintelligence.
43:42Uh, that'll happen before Neuralink is at scale.
43:46Uh, but what Neuralink can effectively do is solve the input-output bandwidth constraints,
43:55especially our output bandwidth is very low.
43:58The sustained output of a human over the course of a day is less than one bit per second.
44:06So there's, you know, 86,400 seconds in a day.
44:09Um, and it's extremely rare for a human to output more than that number of symbols per day.
44:14So, um, certainly for several days in a row.
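The arithmetic behind that claim is simple enough to check; the figures below just restate it:

```python
# Quick arithmetic on the "less than one bit per second sustained" output claim.
seconds_per_day = 86_400
sustained_bps = 1.0                      # claimed ceiling on sustained human output
bits_per_day = seconds_per_day * sustained_bps

print(f"< {bits_per_day:,.0f} bits per day")
print(f"~ {bits_per_day / 8:,.0f} bytes, i.e. about {bits_per_day / 8 / 1024:.1f} KiB per day")
# Short bursts (speech, typing) run far higher; the claim is about the average
# over all 86,400 seconds, sustained day after day.
```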
44:19Uh, so really, with a Neuralink interface, you can massively increase your output bandwidth and your input bandwidth.
44:29Um, input being writes: you have to do write operations to the brain.
44:34Um, we now have five humans who have received the kind of read implant, where it's reading signals.
44:45And you've got people with ALS who really have no... they're tetraplegics, but they can now communicate with similar bandwidth to a human with a fully functioning body, and control their computer and a phone, which is pretty cool.
45:02And then, I think in the next six to 12 months, we'll be doing our first implants for vision, where even if somebody is completely blind, we can write directly to the visual cortex.
45:18Um, and, and we've had that working in monkeys.
45:21Actually, I think one of our monkeys now has had a visual implant for three years.
45:26And, um, at first it'll be fairly low resolution, but long-term you would have very high resolution and be able to see multi-spectral wavelengths.
45:37So you could see in infrared, ultraviolet, radar, like a superpower situation. But at some point, the cybernetic implants would not simply be correcting things that went wrong, but augmenting human capabilities dramatically.
45:54Augmenting intelligence and senses and bandwidth dramatically.
46:00And that's, that's going to happen at some point.
46:03Um, but digital superintelligence will happen well before that.
46:08At least if we have a Neuralink, we'll be able to appreciate the AI better.
46:17I guess one of the limiting reagents to all of your efforts across all of these different domains is access to the smartest possible people.
46:26Um, but, you know, sort of simultaneous to that, we have, you know, the rocks can talk and reason.
46:33And, you know, they may be 130 IQ now, and they're probably going to be superintelligent soon.
46:39Uh, how do you reconcile those two things?
46:42Like what's going to happen in, you know, five, 10 years?
46:45And what should the people in this room do to make sure that, you know, they're the ones who are creating instead of maybe below the API line?
46:53Well, they call it the singularity for a reason, because we don't know what's going to happen.
46:57In the not-that-far future, the percentage of intelligence that is human will be quite small.
47:03At some point, the collective sum of human intelligence will be less than 1% of all intelligence.
47:09Um, and if things get to Kardashev level two, we're talking about human intelligence, even assuming a significant increase in human population and intelligence augmentation,
47:26like massive intelligence augmentation, where, like, everyone has an IQ of a thousand type of thing.
47:32Um, even in that circumstance, collective human intelligence will probably be one billionth that of digital intelligence.
47:40Anyway, we're the biological bootloader for digital superintelligence.
47:45I guess just to end off.
47:47It was like, was I a good bootloader?
47:51Where do we go? How do we go from here?
47:53I mean, I mean, all of this is pretty wild sci-fi stuff that also could be built by the people in this room.
48:01You know, do you have a closing thought for the smartest technical people of this generation right now? What should they be doing?
48:11What should they be working on?
48:13What should they be thinking about, you know, tonight as they go to dinner?
48:16Well, as I started off with, I think if you're doing something useful, that's great.
48:23Um, if you just try to be as useful as possible to your fellow human beings, then you're doing something good.
48:32Um, I keep harping on this: like, focus on super truthful AI. That's the most important thing for AI safety.
48:41Um, you know, obviously, if anyone's interested in working at xAI, please let us know.
48:50Um, we're aiming to make Grok the maximally truth-seeking AI.
48:56Um, and, uh, I think that's a very important thing.
48:59Um, hopefully we can understand the nature of the universe.
49:03That's really, I guess, what AI can hopefully tell us.
49:08Maybe AI can tell us where the aliens are, and, you know, how did the universe really start?
49:14How will it end?
49:15What are the questions that we don't know that we should ask?
49:19And, um, are we in a simulation or what level of simulation are we in?
49:27Well, I think we're going to find out.
49:33Am I an NPC?
49:33Elon, thank you so much for joining us.
49:35Everyone, please give it up for Elon Musk.