Almost everything is going to be energy constrained. And so if you take a look at one of the most important technology advances in history, this idea called Moore's Law. Moore's Law started basically in my generation. And my generation is the generation of computers. I graduated in...
- Joe Rogan podcast, check it out!
- The Joe Rogan Experience.
- Train by day, Joe Rogan podcast by night, all day.
(upbeat music)
- Good job, Steve.
- Nice job.
- Good to see you again.
- We were just talking about, was that the first time
we ever spoke?
Or was the first time we spoke at SpaceX?
- SpaceX.
- SpaceX, first time.
When you were giving Elon that crazy AI chip.
- Right, did you export?
- Yeah, ooh, that was a big moment.
- That was a huge moment.
- That felt crazy to be there.
I was like, watching these wizards of tech,
like exchange information and what,
you're giving him this crazy device, you know?
And then the other time was,
I was shooting arrows in my backyard
and randomly get this call from Trump
and he's hanging out with you.
- President Trump called, and I called you.
- Yeah, we were talking about you.
- He's just, he was talking about you.
- He was talking about the UFC thing
he was gonna do in his front yard.
- Yeah.
- And he pulls out, he's like, Jensen, look at this design.
He's so proud of it.
And I go, you're gonna have a fight
in the front lawn in the White House.
He goes, yeah, yeah, you're gonna come.
This is gonna be awesome.
And he's showing his design and how beautiful it is.
And he goes, and somehow your name comes up.
He goes, do you know Joe?
And I said, yeah.
I'm gonna be on his podcast.
He's, let's call him.
(laughing)
- He's like a kid.
- I know, let's call him.
- He's so, he like, set up your old kid.
- No, he's not in trouble.
- Yeah, he's an odd guy.
Just very different.
You know, like what you'd expect from him,
very different than what people think of him.
And also just very different as a president.
Like he would just call you or text you out of the blue.
Also, when he texts, you have an Android,
so it won't go through for you.
But with my iPhone, he makes the text go big.
Like, you know what I mean?
- The USA's respected again.
Like, (laughing)
- It's all caps and it makes the text in large.
It's kind of ridiculous.
- Well, the one on one Trump,
president Trump is very different.
He surprised me.
First of all, he's an incredibly good listener.
Almost everything I've ever said to him,
he's remembered.
- Yeah, people don't, they only want to look
at negative stories about him
or negative narratives about him.
You know, you can catch anybody on a bad day.
Like, there's a lot of things he does
where I don't think he should do.
Like, I don't think he should say
to a reporter quiet piggy.
Like, that's pretty ridiculous.
Also, objectively funny.
I mean, it's unfortunate that it happened to her.
I wouldn't want that to happen to her,
but it was funny.
Just ridiculous that the president does that.
I wish he didn't do that.
But other than that, like, he's an interesting guy.
Like, he's a lot of different things
wrapped up into one person, you know?
- You know, part of his charm,
well, part of his genius is, yes, he says what's on his mind.
- Yes.
- And that's just like an anti-politician.
- Yeah, right.
- So you know what's on his mind is really what's on his mind.
- Which I think he's telling you what he believes.
- I do, I do that too.
- Some people would rather be lied to.
- Yeah, but I like the fact that he's telling you
what's on his mind.
Almost every time he explains something,
he says something, he starts with,
you could tell, his love for America,
what he wants to do for America.
And everything that he thinks through
is very practical and very common sense.
And, you know, it's very logical.
And I still remember the first time I met him.
And so this was, I'd never known him, never met him before.
And Secretary Lutnick called, and we met right before,
right at the beginning of the administration.
And he told me what was important
to President Trump: that the United States manufactures onshore.
And that was really important to him because,
because it's important to national security,
he wants to make sure that the important
critical technology of our nation is built in the United States.
And that we re-industrialize and get good
at manufacturing again, because it's important for jobs.
- It just seems like common sense, right?
- Incredible common sense.
And almost literally the first conversation
I had with Secretary Lutnick,
he started our conversation
with, Jensen, this is Secretary Lutnick.
And I just wanna let you know that you're a national treasure,
and NVIDIA is a national treasure.
And whenever you need access to the president,
the administration, you call us.
We're always gonna be available to you.
Literally, that was the first sentence.
- That's pretty nice.
- And it was completely true.
Every single time I called, if I needed something,
wanted to get something off my chest,
express some concern, they're always available.
- Incredible.
- It's just unfortunate we live in such a politically polarized
society that you can't recognize good common sense things
if they're coming from a person that you object to.
And that I think is what's going on here.
I think most people generally, as a country,
as a giant community, which we are,
it just only makes sense that we have
manufacturing in America,
that especially critical technology,
like you're talking about.
It's kind of insane that we buy so much technology
from other countries.
- If the United States doesn't grow,
we will have no prosperity.
We can't invest in anything domestically or otherwise.
We can't fix any of our problems.
If we don't have energy growth,
we can't have industrial growth.
If we don't have industrial growth,
we can't have job growth.
It's as simple as that.
- Right.
- And the fact that he came into office,
and the first thing that he said was drill baby drill,
his point is we need energy growth.
Without energy growth,
we can have no industrial growth.
And that was, it saved, it saved the AI industry.
God, I gotta tell you flat out.
If not for his pro-growth energy policy,
we would not be able to build AI factories,
we would not be able to build chip factories.
We certainly wouldn't be able to build
supercomputer factories.
None of that stuff would be possible.
And without all of that,
construction jobs would be challenged, right?
Electrical, you know, electrician jobs.
All of these jobs that are now flourishing,
would be challenged.
And so I think he's got that right.
We need energy growth.
We wanna re-industrialize the United States.
We need to be back in manufacturing.
Every successful person doesn't need to have a PhD.
Every successful person doesn't have to have
gone to Stanford or MIT.
And I think that that, that, you know,
that sensibility is spot on.
- Now, when we're talking about technology growth
and energy growth, there's a lot of people that go,
oh no, that's not what we need.
We need to, you know, simplify our lives and get back,
but the real issue is that we're in the middle
of a giant technology race.
And whether people are aware of it or not,
whether they like it or not, it's happening.
And it's a really important race.
Because whoever gets to whatever the event horizon
of artificial intelligence is, whoever gets there first,
has massive advantages in a huge way.
- You agree with that?
- Well, on the first part,
I will say that we are in a technology race
and we are always in a technology race.
We've been in a technology race with somebody forever.
- Right, right.
Since the Industrial Revolution,
we've been in a technology race.
- Since the Manhattan Project.
- Yeah.
- Or, you know, even going back to the discovery of energy.
Right?
The United Kingdom was where the Industrial Revolution
was, if you will, invented.
When they realized that they can turn steam
and such into energy and into electricity,
all of that was invented largely in Europe.
And United States capitalized on it.
We were the ones that learned from it.
We industrialized it.
We diffused it faster than anybody in Europe.
They were all stuck in discussions about policy and jobs
and disruptions.
Meanwhile, the United States was forming.
We just took the technology and ran with it.
And so I think we were always in a bit of a technology race.
World War II was a technology race.
Manhattan Project was a technology race.
We've been in a technology race ever since.
During the Cold War.
I think we're still in a technology race.
It is probably the single most important race.
It is, because technology gives you superpowers.
You know, whether it's information superpowers
or energy superpowers or military superpowers,
it's all founded in technology.
And so technology leadership is really important.
- Well, the problem is if somebody else
has superior technology, right?
That's the issue, right?
It seems like with the AI race,
people are very nervous about it.
Like, you know, Elon has famously said
there's like an 80% chance it's awesome,
20% chance we're in trouble.
And people are worried about that 20%.
- Rightly so, you know, if you had 10 bullets
in a revolver and, you know, you took out eight of them
and you still have two in there and you spin it.
You're not gonna feel real comfortable
when you pull that trigger, it's terrifying.
And when we're working towards this ultimate goal of AI,
it's just, it's impossible to imagine
that it wouldn't be of national security interest
to get there first.
We should. The question is, what's there?
This episode is brought to you by Zip Recruiter.
Today's world will look a lot different
without innovators like Guglielmo Marconi.
He was the first to transmit electrical signals
across long distances paving the way for radio, phones,
and entertainment as we know it.
In a way, we might not even have this show without him.
New innovations are key to success.
Zip Recruiter gets that and it's why they're always looking
for new and better ways to make hiring faster and easier.
See for yourself just how much of an impact
they can make. Try it for free at ziprecruiter.com/rogan.
One of the ways they're making a difference
is through their matching technology.
When you post a job, it immediately starts scouring the site
for qualified candidates in the area.
Zip Recruiter even improved its resume database recently,
making it easier to connect and talk
with any of the candidates that you're interested in.
See how Zip Recruiter's new hiring innovations
are changing the game.
Four out of five employers who post on Zip Recruiter
get a quality candidate within the first day.
And right now, you can try it for free at ziprecruiter.com/rogan.
Again, that's ziprecruiter.com/rogan.
Zip Recruiter, the smartest way to hire.
- This episode is brought to you
by the Focus Features film Hamnet.
From director Chloe Zhao and producer Steven Spielberg
and Sam Mendes, discover the untold story
behind Shakespeare's greatest masterpiece,
winner of more audience awards than any film this year.
Hamnet is a monumental cinematic experience.
And now it's the Critics Choice Award winner
for Best Actress, Jessie Buckley, Hamnet
is the best picture of the year.
Hamnet, rated PG-13, may be inappropriate
for children under 13, now playing only in theaters.
- So that was the part that-- - What is there?
- Yeah, I'm not sure.
And I don't think anybody-- - That's the problem.
- I don't think anybody really knows.
- That's crazy though. - Yeah.
- Well, I would ask you, you're the head of NVIDIA.
You don't know what's there, who knows?
- Yeah, I think it's probably gonna be
much more gradual than we think, it won't be a moment.
It won't be as if somebody arrived
and nobody else has, I don't think it's gonna be like that.
I think it's gonna be things that just get better
and better and better, better, just like technology does.
- So you are rosy about the future.
You're very optimistic about what's gonna happen with AI.
Obviously, you make the best AI chips in the world.
- We'd probably better be.
If history is a guide, we've always been concerned
about new technology.
Humanity has always been concerned about new technology.
There's always somebody who's thinking,
there are always a lot of people who are quite concerned.
And so if history is a guide, it is the case
that all of this concern is channeled
into making the technology safer.
And so, for example, in the last several years,
I would say AI technology has increased probably
in the last two years alone, maybe a hundred X.
Let's just give it a number, okay?
It's like a car two years ago was a hundred times slower.
So AI is a hundred times more capable today.
Now, how did we channel that technology?
How do we channel all of that power?
We directed it to causing the AI to be able to think,
meaning that it can take a problem
that we give it, break it down step by step.
It does research before it answers.
And so it grounds itself in truth.
It'll reflect on that answer, ask itself,
is this the best answer that I can give you?
Am I certain about this answer?
If it's not certain about the answer
or highly confident about the answer,
it'll go back and do more research.
It might actually even use a tool
because that tool provides a better solution
than it could hallucinate itself.
As a result, we took all of that computing capability
and we channeled it into having it produce a safer result,
safer answer, a more truthful answer.
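The loop he's describing, draft an answer, reflect on it, check confidence, and fall back to more research or a tool, can be sketched in a few lines of Python. This is a toy illustration, not any real product's pipeline; every function and name here is invented for the example.

```python
# Toy sketch of the "think, reflect, retry" loop described above.
# Not any vendor's actual system; all names are invented.

def draft_answer(question, knowledge):
    """Pretend model: confident only if the fact is memorized."""
    answer = knowledge.get(question)
    confidence = 0.9 if answer is not None else 0.2
    return answer, confidence

def calculator_tool(question):
    """Stand-in for a tool call (calculator, web search, etc.)."""
    if question == "what is 17 * 23?":
        return str(17 * 23)  # the tool computes instead of hallucinating
    return None

def answer_with_reflection(question, knowledge, threshold=0.5):
    # 1. Take the problem and draft an answer.
    answer, confidence = draft_answer(question, knowledge)
    # 2. Reflect: am I certain about this answer?
    if confidence < threshold:
        # 3. Not confident: do more research / use a tool.
        tool_result = calculator_tool(question)
        if tool_result is not None:
            return tool_result
        return "I don't know."
    return answer

facts = {"capital of france?": "Paris"}
print(answer_with_reflection("capital of france?", facts))  # Paris
print(answer_with_reflection("what is 17 * 23?", facts))    # 391
```

The point of the sketch is the shape of the loop: extra compute is spent on self-checking before answering, which is the "channeling power into safety" he describes.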
Because as you know,
one of the greatest criticisms of AI in the beginning
was that it hallucinated.
- Right.
- And so, if you look at the reason
why people use AI so much today,
is because the amount of hallucination has reduced.
I use it almost constantly,
well, I used it the whole trip over here.
And so I think the capability,
most people think about power
and they think about,
maybe, explosive power.
But with technology power,
most of it is channeled towards safety.
A car today is more powerful,
but it's safer to drive.
A lot of that power goes towards better handling.
I'd rather have a, well,
you have a thousand horsepower truck.
I think 500 horsepower is pretty good.
Now, is a thousand better?
I think a thousand is better.
- I don't know what's better,
but it's definitely faster.
- Yeah, no, I think it's better.
You get out of trouble faster.
I enjoyed my 599 more than my 612.
It was, I think it was better.
And more horsepower is better.
My 458 is better than my 430.
More horsepower is better.
I think more horsepower is better.
I think it's better handling,
it's better control.
In the case of technology,
it's also very similar in that way.
And so if you look at what we're gonna do
with the next thousand times of performance in AI,
a lot of it is going to be channeled towards
more reflection, more research,
thinking about the answer more deeply.
- So when you're defining safety,
you're defining it as accuracy.
- Functionality.
- Functionality.
- Okay.
- It does what you expect it to do.
And then you take the technology and horsepower,
you put guardrails on it, just like our cars.
We've got a lot of technology in a car today.
A lot of it goes towards, for example, ABS.
ABS is great.
And so traction control.
That's fantastic.
Without a computer in the car,
how would you do any of that?
And that little computer,
the computers that you have doing your traction control,
is more powerful than the computer that went on Apollo 11.
And so you want that technology,
channeled towards safety,
channeled towards functionality.
And so when people talk about power,
the advancement of technology,
oftentimes I feel what they're thinking
and what we're actually doing is very different.
- What do you think they're thinking?
- Well, they're thinking somehow,
that this AI is so powerful,
and their mind probably goes towards a sci-fi movie.
The definition of power.
Oftentimes the definition of power is military power
or physical power,
but in the case of technology power,
when we translate all of those operations,
it's towards more refined thinking,
more reflection, more planning, more options.
- I think the big fears that people have
is one, a big fear is military applications.
That's a big fear.
Because people are very concerned
that you're going to have AI systems that make decisions
that maybe an ethical person wouldn't make
or a moral person wouldn't make
based on achieving an objective versus based on,
you know, how it's gonna look to people.
- Well, I'm happy that our military is gonna use
AI technology for defense.
And Anduril, building military technology,
I'm happy to hear that.
I'm happy to see all these tech startups
now channeling their technology capabilities
towards defense and military applications.
I think we need them to do that.
- Yeah, we had Palmer Luckey on the podcast,
he was demonstrating some of the stuff
that was helpful. - Yes, incredible.
- Helmet on, and we showed some videos
how you could see behind walls and stuff.
Like it's nuts.
- He's actually the perfect guy
to go start that company.
- A hundred percent, yeah, a hundred percent.
- He's born for that, yeah.
He came here with a copper jacket on him.
He's a freak, it's awesome.
He's awesome.
But it's also an unusual intellect channeled
into that very bizarre field is what you need, you know?
- And I think it's, I think I'm happy
that we're making it more socially acceptable.
There was a time where when somebody wanted
to channel their technology capability
and their intellect into defense technology,
somehow they're vilified, but we need people like that.
We need people who enjoy that part
of the application of technology.
- Well, people are terrified of war.
- Yeah, so it makes sense.
- The best way to avoid it is excessive military might.
- Do you think that's absolutely the best way?
- Not diplomacy, not working stuff out.
- All of it, all of it.
- Yeah, you have to have military might
and you have to get people to sit down there.
- Right, exactly, all of it.
- Otherwise, they just invade.
- That's right, why ask for permission?
- Again, like you said, in history.
Go back and look at history.
When you look at the future of AI
and you just said that no one really knows what's happening.
Do you ever sit down and ponder scenarios?
Like what do you think is like best case scenario
for AI over the next two decades?
- The best case scenario is that AI diffuses
into everything that we do and everything's more efficient,
but the threat of war remains a threat of war.
Cyber security remains a super difficult challenge.
Somebody is going to try to breach your security.
You're going to have thousands, millions of AI agents
protecting you from that threat.
Your technology is going to get better.
Their technology is going to get better,
just like cyber security.
Right now, while we speak, we're seeing cyber attacks
all over the planet on just about every front door
you can imagine.
And yet, you and I are sitting here talking.
And so, the reason for that is because we know
that there's a whole bunch of cyber security technology
in defense.
And so we just have to keep amping that up,
keep stepping that up.
- This episode is brought to you by Visible.
When your phone plan is as good as Visible,
you've got to tell your people.
It's the ultimate wireless hack to save money
and still get great coverage and a reliable connection.
Get one line wireless with unlimited data
and hotspot for $25 a month.
Taxes and fees included all on Verizon's 5G network.
Plus, now for a limited time,
new members can get the visible plan
for just $19 a month for the first 26 months.
Use promo code switch26 and save beyond the season.
It's a deal so good, you're going to want to tell your people.
Switch now at visible.com/rogan.
Terms apply, limited time offer subject to change.
See visible.com for plan features
and network management details.
- That's a big issue with people,
is the worry that technology is going to get to a point
where encryption is going to be obsolete.
Encryption is just, it's no longer going to protect data.
It's no longer going to protect systems.
Do you anticipate that ever being an issue
or do you think it's as the defense grows,
the threat grows, the defense grows,
and it just keeps going on and on and on
and they'll always be able to fight off
any sort of intrusions?
- Not forever.
Some intrusions will get in, and we will all learn from it.
And you know the reason why cybersecurity works
is because of course the technology of defense
is advancing very quickly.
The technology of offense is advancing very quickly.
However, the benefit of the cybersecurity defense
is that socially the community,
all of our companies work together as one.
Most people don't realize this.
There's a whole community of cybersecurity experts.
We exchange ideas, we exchange best practices,
we exchange what we detect.
The moment something has been breached
or maybe there's a loophole or whatever it is.
It is shared by everybody.
The patches are shared with everybody.
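The sharing model he describes, one member detects a breach and the signature and patch are immediately shared with everyone, can be sketched like this. A minimal illustration only; the class names and the signature string are invented, and real threat-intel exchanges are far more involved.

```python
# Toy sketch of the shared-defense community described above.
# All names here are invented for illustration.

class ThreatExchange:
    """The community: members broadcast breach signatures and patches."""
    def __init__(self):
        self.members = []

    def join(self, member):
        self.members.append(member)

    def report(self, signature, patch):
        # The moment one member detects a breach,
        # the patch is shared with everybody.
        for member in self.members:
            member.apply_patch(signature, patch)

class Company:
    def __init__(self, name):
        self.name = name
        self.patches = {}

    def apply_patch(self, signature, patch):
        self.patches[signature] = patch

    def is_protected(self, signature):
        return signature in self.patches

exchange = ThreatExchange()
a, b = Company("A"), Company("B")
exchange.join(a)
exchange.join(b)

# Company A detects an exploit; B is patched without ever seeing it.
exchange.report("hypothetical-overflow-sig", "bounds-check fix")
print(b.is_protected("hypothetical-overflow-sig"))  # True
```

The design point is the one he makes: the defender's advantage comes from the broadcast, everyone against the threat rather than each company alone.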
- That's interesting.
- Yeah, most people don't realize this.
- No, I had no idea.
I've assumed that it would just be competitive
like everything else.
- No, we work together, all of us.
- Has that always been the case?
- It surely has been the case for about 15 years.
It might not have been the case long ago.
- What do you think started off that cooperation?
- People recognizing it's a challenge and no company
can stand alone.
And the same thing is gonna happen with AI.
I think we all have to decide working together
to stay out of harm's way is our best chance for defense.
Then it's basically everybody against the threat.
- And it also seems like you'd be way better at detecting
where these threats are coming from and neutralizing them.
- Exactly, because the moment you detect it somewhere,
you're gonna find out right away.
- It'll be really hard to hide.
- That's right.
- Yeah.
- That's how it works.
That's the reason why it's safe.
That's why I'm sitting here right now instead of
locking everything down at NVIDIA.
(laughing)
- Not only am I watching my own back.
I've got everybody watching my back
and I'm watching everybody else's back.
- It's a bizarre world, isn't it?
When you think about that cyber threats.
- And this idea about cybersecurity
is unknown to the people who are talking about AI threats.
I think when they think about AI threats
and AI cybersecurity threats,
they have to also think about how we deal with it today.
Now, there's no question that AI is a new technology,
and it's a new type of software, but in the end it's software.
It's just a new type of software.
And so it's gonna have new capabilities.
But so would the defense, you know,
where you use the same AI technology
to go defend against it.
- So do you anticipate a time ever in the future
where it's going to be impossible,
where there's not going to be any secrets,
where the bottleneck between the technology
that we have and the information that we have,
information is just all a bunch of ones and zeros.
It's out there on hard drives
and the technology has more and more access
to that information.
Is it ever gonna get to a point in time
where there's no way to keep a secret?
- I don't know.
- 'Cause it seems like that's where everything is kind
of headed and we're going back.
- I don't think so.
Well, yeah, quantum computers will make it
so that the previous
encryption technology is obsolete.
But that's the reason why the entire industry
is working on post-quantum encryption technology.
- Well, that'll look like new algorithms.
- And the crazy thing is, when you hear about
the kind of computation that quantum computing can do,
and the power that it has, where, you know,
all the supercomputers in the world
would take billions of years, and it takes
a quantum computer a few minutes to solve these operations.
Like, how do you make encryption for something
that can do that?
- I'm not sure.
- But I've got a bunch of scientists who are working on that.
- But, yeah. - They can figure it out.
- Yeah, we've got a bunch of scientists
who are expert in that.
- And the ultimate fear, that it can't be beaten,
that quantum computing will always be able to decrypt
all other quantum computing encryption?
- I don't know.
- It just gets to some point where it's like,
stop playing the stupid game, we know everything.
I don't think so.
- No?
- Because, you know, history is a guide.
- History was a guide before AI came around.
That's my worry.
My worry is this is a totally, you know,
it's like history was one thing and then nuclear weapons
kind of changed all of our thoughts on war
and mutually assured destruction got everybody
to stop using nuclear bombs.
- Yeah.
- My worry is that the thing is, Joe,
is that AI is not gonna, it's not like we're cavemen,
and then all of a sudden one day AI shows up.
Every single day we're getting better and smarter
because we have AI.
And so we're stepping on our own AI's shoulders.
So when that, whatever that AI threat comes,
it's a click ahead.
It's not a galaxy ahead, you know?
It's just a click ahead.
And so I think the idea that somehow this AI
is gonna pop out of nowhere and somehow think in a way
that we can't even imagine thinking
and do something that we can't possibly imagine
I think is far fetched.
And the reason for that is because we all have AIs
and, you know,
there's a whole bunch of AIs in development.
We know what they are and we're using it.
And so every single day we're getting closer to each other.
- But don't they do things that are very surprising?
- Yeah, but so you have an AI that does something surprising.
I'm gonna have an AI and my AI looks at your AI
and goes, that's not that surprising.
- The fear for the lay person like myself
is that AI becomes sentient and makes its own decisions.
And then ultimately decides to just govern the world,
do it its own way.
They're like, you guys, you had a good run
but we're taking over now.
Yeah, but my AI is gonna take care of me.
(laughing)
- This is the cybersecurity argument.
- Yes, you have an AI and it's super smart
but my AI is super smart too.
And maybe your AI, let's pretend for a second
that we understand what consciousness is
and we understand what sentience is.
- And we really are just pretending.
- Okay, let's just pretend for a second that we believe that.
I don't actually believe that,
but nonetheless, let's pretend we believe that.
So your AI is conscious and my AI is conscious
and let's say your AI is, you know, wants to,
I don't know, do something surprising.
My AI is so smart that it might be surprising to me
but it probably won't be surprising to my AI.
And so maybe my AI thinks it's surprising as well,
but it's so smart, the moment it sees it the first time,
it's not gonna be surprised the second time, just like us.
And so I feel like the idea
that only one person has AI, and that compared
to that one person's AI everybody else's AI is Neanderthal,
is probably unlikely.
I think it's much more like cybersecurity.
- Interesting.
I think the fear is not that your AI is gonna battle
with somebody else's AI.
The fear is that AI is no longer gonna listen to you.
That's the fear is that human beings
won't have control over it after a certain point.
If it achieves sentience and then has the ability
to be autonomous.
- That there's one AI.
- Well, they just combine.
It becomes one AI.
- That it's a life form.
But that's the, there's arguments about that, right?
That we're dealing with some sort of synthetic biology
that it's not as simple as new technology
that you're creating a life form.
- If it's a life form, let's go along with that for a while.
I think if it's a life form, as you know,
all life forms don't agree.
And so I don't think your life form
and my life form are gonna agree.
Because my life form is gonna wanna be the super life form.
And now that we have disagreeing life forms,
we're back again to where we are.
- Well, they would probably cooperate with each other.
It would just, the reason why we don't cooperate
with each other is we're territorial primates.
But AI wouldn't be a territorial primate.
We realized the folly in that sort of thinking
and it would say, listen, there's plenty of energy
for everybody.
We don't need to dominate.
We don't need, we're not trying to acquire resources
and take over the world.
We're not looking to find a good breeding partner.
We're just existing as a new super life form
that these cute monkeys created for us.
- Okay, well, that would be a super power
with no ego.
- Right.
- And if it has no ego,
why would it have the ego to do any harm to us?
- Well, I don't assume that it would do harm to us.
But the fear would be that we would no longer have control
and that we would no longer be the apex species on the planet.
This thing that we created would now be.
- Is that funny?
- No.
- I just think it's not gonna happen.
- I know you think it's not gonna happen.
But it could, right?
And here's the other thing,
we're racing towards could,
and could could be the end of human beings being in control
of our own destiny.
- I just think it's extremely unlikely.
- That's what they said in the Terminator movie.
- And it hasn't happened.
- No, not yet, but you guys are working towards it.
- The thing you're saying about consciousness and sentience,
you don't think that AI will achieve consciousness?
- Well, what is consciousness? What's the definition?
- What is the definition to you?
Consciousness, I guess first of all,
you need to know about your own existence.
You have to have experience, not just knowledge and intelligence.
The concept of a machine having an experience,
I'm not, well, first of all, I don't know what defines
experience, why we have experiences at all.
- Right.
- And why this microphone doesn't.
And so I think I know, I think I know what consciousness is.
The sense of experience, the ability to know self versus,
the ability to be able to reflect,
know our own self, the sense of ego.
I think all of those human experiences,
probably, is what consciousness is.
But why it exists versus the concept of knowledge
and intelligence, which is what AI is defined by today.
It has knowledge, it has intelligence,
artificial intelligence.
We don't call it artificial consciousness.
Artificial intelligence, the ability to perceive,
recognize, understand, plan, perform tasks.
Those things are foundations of intelligence
to know things, knowledge.
I don't, it's clearly different than consciousness.
- But consciousness is so loosely defined.
How can we say that?
I mean, doesn't a dog have consciousness?
- Yeah.
- Dog seem to be pretty conscious.
- That's right.
- So, and that's a lower level consciousness
than a human being's consciousness.
- I'm not sure.
- Yeah, right.
- Well, the question is what,
a lower level intelligence.
- It's lower level intelligence.
- But I don't know that's lower level consciousness.
- That's a good point, right.
- Because I believe my dogs feel as much as I feel.
- Yeah, they feel a lot.
- Yeah, right.
- Yeah, they get attached to you.
- That's right.
- They get depressed if you're not there.
- That's right, exactly.
- There's definitely that.
- Yeah.
- The concept of experience.
- Right.
- But isn't AI interacting with society?
So, doesn't it acquire experience
through that interaction?
- I don't think interaction is experience.
I think experience is
a collection of feelings, I think.
- You're aware of that AI,
I forget which one where they gave it some false information
about one of the programmers having an affair
with his wife, just to see how it would respond to it.
And then when they said they were gonna shut it down,
it threatened to blackmail them and reveal his affair.
And it was like, whoa, like it's conniving.
Like if that's not learning from experience
and being aware that you're about to be shut down,
which would imply at least some kind of consciousness,
or you could kind of define it as consciousness
if you're very loose with the term.
And if you imagine that this is gonna exponentially
become more powerful, wouldn't that ultimately
lead to a different kind of consciousness
than we're defining from biology?
- Let's just break down what it probably did.
It probably read somewhere,
there's probably text where, in these circumstances,
certain people did that.
- Right. - I could imagine a novel.
- Right.
- Having those words related.
- Sure. - And so inside,
it realizes it's a strategy for survival.
- It's just a bunch of numbers.
- That it's just a bunch of numbers.
That in the collection of numbers
that relates to a husband cheating on a wife,
there's subsequently a bunch of numbers
that relates to blackmail and such things,
or whatever the revenge was.
- Right.
- And so it spewed it out.
- And so it's just like, it's just as if I'm asking it
to write me a poem in Shakespeare.
It's just whatever the words are
in that dimensionality,
this dimensionality is all these vectors
in multi-dimensional space.
These words that were in the prompt
that described the affair subsequently led,
one word after another,
to some revenge and something.
But it's not because it had consciousness;
it just spewed out those words, generated those words.
- I understand what you're saying.
That it's drawing from patterns
that human beings exhibited both in literature
and in real life.
- That's exactly right.
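The vector idea he's describing, words living as points in a high-dimensional space where related words sit near each other, can be sketched with a toy example. The words and the 3-dimensional vectors below are made up for illustration; real models learn embeddings with hundreds or thousands of dimensions from data.

```python
import math

# Toy 3-dimensional "embeddings". These numbers are invented for
# illustration; real word vectors are learned from text.
vectors = {
    "affair":    [0.9, 0.1, 0.0],
    "blackmail": [0.8, 0.2, 0.1],
    "revenge":   [0.7, 0.3, 0.1],
    "poem":      [0.0, 0.9, 0.4],
}

def cosine(a, b):
    """Cosine similarity: how closely two word vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Words that co-occur in text (affair, blackmail, revenge) end up near
# each other; an unrelated word (poem) ends up farther away.
print(cosine(vectors["affair"], vectors["blackmail"]))  # high, near 1
print(cosine(vectors["affair"], vectors["poem"]))       # much lower
```

In a real model that geometry is learned from text, which is why a prompt about an affair can pull the generation toward revenge-flavored words without any understanding behind it.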
- But at a certain point in time,
one would say, okay, well, it couldn't do this two years ago
and it couldn't do this four years ago.
Like when we were looking towards the future,
like at what point in time
when it can do everything a person does,
what point in time do we decide that it's conscious?
If it absolutely mimics all human thinking
and behavior patterns, that doesn't make it conscious.
- And becomes indiscernible.
It's aware, it can communicate with you
the exact same way a person can.
Like is consciousness,
are we putting too much weight on that concept?
'Cause it seems like it's a version
of a kind of consciousness.
- It's a version of imitation.
- Imitation consciousness, right?
But if it perfectly imitates it.
- I still think it's an example of imitation.
- So it's like a fake Rolex
when they 3D print them and make them look indiscernible.
- The question is, what's the definition of consciousness?
- Yeah. - Yeah.
- That's the question
and I don't think anybody's really clearly defined that.
That's where it gets weird
and that's where the real Doomsday people are worried
that you are creating a form of consciousness
that you can't control.
- I believe it is possible to create a machine
that imitates human intelligence.
And has the ability to understand information,
understand instructions,
break the problem down, solve problems and perform tasks.
I believe that completely.
I believe that we could have a computer
that has a vast amount of knowledge.
Some of it true, some of it not true.
Some of it generated by humans,
some of it generated synthetically
and more and more of the knowledge in the world
will be generated synthetically going forward.
Until now, the knowledge that we have
or knowledge that we generate and we propagate
and we send to each other and we amplify it
and we add to it and we modify it, we change it.
In the future, in a couple of years, maybe two or three years,
90% of the world's knowledge will likely be generated by AI.
- That's crazy.
- I know, but it's just fine.
- But it's just fine.
- I know, and the reason for that is this.
Let me tell you why. It's because, what difference does it
make to me that I'm learning from a textbook
that was written by a bunch of people I didn't know,
or from a book by somebody I don't know,
versus knowledge generated by AI computers
that are simulating all of this and re-synthesizing things?
To me, I don't think there's a whole lot of difference.
We still have to fact check it.
We still have to make sure that it's based
on fundamental first principles
and we still have to do all of that,
just like we do today.
- Is this taking into account the kind of AI
that exists currently, and do you anticipate that
just like we could have never really believed,
at least speaking personally, I would never have believed
AI would be so ubiquitous,
so powerful today,
and so important today.
- You never thought that 10 years ago.
- Never thought that.
- Right.
Imagine like what are we looking at 10 years from now?
- I think that if you reflect back 10 years from now,
you would say the same thing,
that we would have never believed that,
but-- - In a different direction.
- Right, but if you go forward nine years from now
and then ask yourself what's gonna happen 10 years from now,
I think it'll be quite gradual.
- One of the things that Elon said that makes me happy
is he believes that we're gonna get to a point
where it's not necessary for people to work
and not meaning that you're gonna have no purpose in life,
but you will have, in his words, universal high income
because so much revenue is generated by AI
that it will take away this need for people
to do things that they don't really enjoy doing
just for money.
And I think a lot of people have a problem with that
because their entire identity and how they think of themselves
and how they fit in the community is what they do.
Like this is Mike, he's an amazing mechanic,
go to Mike and Mike takes care of things,
but there's gonna come a point in time
where AI is going to be able to do all those things
much better than people do
and people will just be able to receive money,
but then what does Mike do?
When Mike really loves being the best mechanic around,
what about the guy who codes?
What does he do when AI can code infinitely faster
with zero errors? Like, what happens with all those people?
And that is where it gets weird
because we've sort of wrapped our identity
as human beings around what we do for a living.
You know, when you meet someone,
one of the first things, when you meet somebody at a party:
hi, I'm Joe, what's your name? Mike. What do you do, Mike?
And you know, Mike's like, oh, I'm a lawyer.
Oh, what kind of law?
And you have a conversation, you know?
When Mike is like, I get money from the government
and I play video games, it gets weird.
And I think the concept sounds great
until you take into account human nature.
And human nature is that we like to have puzzles to solve
and things to do and an identity is wrapped around
our idea that we're very good at this thing
that we do for a living.
- Yeah, I think, let me start with the more mundane.
Okay, I'll work backwards.
- Okay, work forward.
So, one of the predictions from Geoff Hinton,
who started the whole deep learning phenomenon,
the deep learning technology trend, an incredible,
incredible researcher, professor at the University of Toronto.
He discovered and invented the idea
of backpropagation, which allows a neural network
to learn.
And, as you know, for the audience,
software historically was humans applying first principles
and our thinking to describe an algorithm
that is then codified just like a recipe
that's codified in software.
It looks just like a recipe, how to cook something.
It looks exactly the same, just in a slightly different language.
We call it Python or C or C++ or whatever it is.
In the case of deep learning,
this invention of artificial intelligence,
we put a structure of a whole bunch of neural networks
and a whole bunch of math units.
And, we make this large structure,
it's like a switchboard of little mathematical units.
And, we connect it all together.
And, we give it the input that the software would eventually
receive and we just let it randomly guess what the output is.
And, so, we say, for example, the input could be a picture
of a cat and one of the outputs of the switchboard
is where the cat signal is supposed to show up.
And, all of the other signals,
the other one's a dog, the other one's an elephant,
the other one's a tiger.
And, all of the other signals are supposed to be zero
when I show it a cat and the one that is a cat should be one.
And, I show it a cat through this big, huge network
of switchboards and math units.
And they're just doing multiplies and adds.
Multiplies and adds, okay?
And, and this thing, this switchboard is gigantic.
The more information you're gonna give it,
the bigger this switchboard has to be.
And what Geoff Hinton discovered, or invented, was a way
to train this: put the cat image in,
and that cat image could be a million numbers
because it's a megapixel image, for example.
And, it's just a whole, a whole bunch of numbers.
And, somehow from those numbers,
it has to light up the cat signal.
Okay, that's the bottom line.
And, if it, the first time you do it,
it just comes up with garbage.
And so, it says, the right answer is cat.
And so, you need to increase this signal
and decrease all of the other.
And you back-propagate that outcome through the entire network.
And then you show it another one,
now it's an image of a dog, and it guesses,
takes a swing at it, and comes up with a bunch of garbage.
And you say, no, no, no.
The answer is this is a dog.
I want you to produce dog and all of the other switch,
all the other outputs have to be zero.
And, I wanna back propagate that and just do it
over and over and over again.
It's just like showing a kid, this is an apple,
this is a dog, this is a cat,
and you just keep showing it to them
until they eventually get it.
Okay, in many ways, that big invention is deep learning.
That's the foundation of artificial intelligence.
A piece of software that learns from examples.
That's basically machine learning,
a machine that learns.
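The loop he's describing, guess, compare with the right answer, and push the error back through the network over and over, can be sketched with a single artificial neuron in plain Python. The "images" and labels below are invented for illustration; a real network stacks millions of these units in layers.

```python
import math, random

random.seed(0)

# Toy "images": 2 numbers instead of a million pixels. These examples
# are made up. Label 1 = the cat signal should light up, 0 = dog.
data = [([1.0, 0.2], 1), ([0.9, 0.1], 1), ([0.1, 0.9], 0), ([0.2, 1.0], 0)]

# One unit of the "switchboard": its weights start as random guesses,
# so the first answers are garbage.
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = 0.0

def predict(x):
    # Multiplies and adds, squashed into a 0..1 "cat signal".
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))

# Back-propagation, in miniature: compare the guess with the right
# answer and nudge each weight against the error, over and over again.
for _ in range(2000):
    for x, target in data:
        err = predict(x) - target
        w[0] -= 0.1 * err * x[0]
        w[1] -= 0.1 * err * x[1]
        b    -= 0.1 * err

print(round(predict([0.95, 0.15])))  # a cat-like input: rounds to 1
print(round(predict([0.15, 0.95])))  # a dog-like input: rounds to 0
```

A real deep network back-propagates the error through many stacked layers rather than one unit, but the guess-compare-nudge loop is the same idea.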
And so, one of the big first,
applications was image recognition
and one of the most important image recognition applications
is radiology.
And so, he predicted about five years ago
that in five years time,
the world won't need any radiologists
because AI would have swept the whole field.
Well, turns out, AI has swept the whole field.
That is completely true.
Today, just about every radiologist is using AI in some way.
And what's ironic though, what's interesting
is that the number of radiologists has actually grown.
And so, the question is why?
That's kind of interesting, right?
Yes.
And so, the prediction was, in fact,
that radiologists would be wiped out.
But as it turns out, we need them more.
And the reason for that is because the purpose
of a radiologist is to diagnose disease.
Not to study the image; studying the image
is simply a task in service of diagnosing the disease.
And so, now, the fact that you could study the images
more quickly and more precisely, without ever making a mistake,
and it never gets tired,
you could study more images,
you could study it in 3D form instead of 2D.
Because, you know, the AI doesn't care
whether it studies images in 3D or 2D,
you could study it in 4D.
And so, now you could study images
in a way that radiologists can't easily do
and you could study a lot more of it.
And so, the number of tests that people are able to do
increases.
And because they're able to serve more patients,
the hospital does better.
They have more clients, more patients.
As a result, they have better economics.
When they have better economics, they hire more radiologists
because their purpose is not to study the images.
Their purpose is to diagnose disease.
And so, the question, what I'm leading up to is,
ultimately, what is the purpose?
What is the purpose of the lawyer?
And has the purpose changed?
What is the purpose?
You know, one of the examples that I would give is,
for example, if my car became self-driving,
would all chauffeurs be out of jobs?
The answer probably is not.
Because some people who are driving you,
they could be protectors.
Some people, they're part of the experience,
part of the service.
So, when you get there, they, you know,
they could take care of things for you.
And so, for a lot of different reasons,
not all chauffeurs would lose their jobs.
Some chauffeurs would lose their jobs.
And many chauffeurs would change their jobs.
And the types of applications of autonomous vehicles
will probably increase; the usage of the technology
will, you know, find new homes.
And so, I think you have to go back to,
what is the purpose of a job?
You know, like, for example, if AI comes along,
I actually don't believe I'm going to lose my job
because my purpose isn't to,
I have to look at a lot of documents,
I study a lot of emails, I look at a bunch of diagrams,
you know, the question is, what is the job?
And the purpose of somebody probably hasn't changed.
A lawyer, for example, helped people,
that probably hasn't changed.
Studying legal documents, generating documents,
it's part of the job, not the job.
But don't you think there's many jobs that AI will replace?
If your job is the task.
Yeah, if your job is the task.
Right, so automation.
Yeah, if your job is the task.
That's a lot of people.
It could be a lot of people, but it'll probably generate,
like, for example, let's say we, let's say,
I'm super excited about the robots Elon's working on.
It's still a few years away.
When it happens, when it happens,
there's a whole new industry of technicians
and people who have to manufacture the robots, right?
And so that job never existed.
And so you're going to have a whole industry of people
taking care of, like, for example,
you know, all the mechanics and all the people
who are building things for cars, super charging cars.
That didn't exist before cars.
And now we're going to have robots.
You're going to have robot apparel.
So a whole industry of, right?
Isn't that right?
Because I want my robot to look different than your robot.
And so you're going to have a whole, you know,
apparel industry for robots.
You're going to have mechanics for robots.
And you have, you know, people who comes and maintain
your robots, no, you don't think so.
You don't think that they're all done by other robots?
Eventually, and then there'll be something else.
So you think ultimately people just adapt,
except if you are the task, which is a large percentage
of the workforce.
If your job is just to chop vegetables,
a Cuisinart's going to replace you.
Yeah.
So people have to find meaning in other things.
Your job has to be more than the task.
What do you think about Elon's belief
that this universal basic income thing
will eventually become necessary?
Many people think that.
Andrew Yang thinks that, that's a quote.
He was one of the first people to sort of sound that alarm
during the 2020 election.
Yeah, I guess, you know, both ideas probably won't exist
at the same time.
And as in life, things will probably be in the middle.
One idea, of course, is that there'll
be so much abundance of resource that nobody needs a job.
And we'll all be wealthy.
On the other hand, we're going to need universal basic income.
Both ideas don't exist at the same time.
Right.
And so we're either going to be all wealthy,
or we're all going to need it.
How could everybody be wealthy, though?
Well, not because you have a lot of dollars;
wealthy because there's a lot of abundance.
Like, for example, today, we are wealthy with information.
That's a concept: several thousand years ago,
only a few people had it.
And so today, we have wealth of a whole bunch of things,
resources, at this point in history.
And so we're going to have wealth of resources,
things that we think are valuable today,
that in the future are just not that valuable.
And so it's automated.
And so I think the question maybe partly--
it's hard to answer partly because it's
hard to talk about infinity, and it's hard to talk about
a long time from now.
And the reason for that is because there's
just too many scenarios to consider.
But I think in the next several years,
call it five to 10 years, there are several things
that I believe and hope.
And I say hope because I'm not sure.
One of the things that I believe is
that the technology divide would be substantially collapsed.
And of course, the alternative viewpoint
is that AI is going to increase the technology divide.
Now, the reason why I believe AI is going to reduce
the technology divide is because we have proof.
The evidence is that AI is the easiest application
in the world to use.
ChatGPT has grown to almost a billion users, frankly,
practically overnight.
And if you're not exactly sure how to use--
everybody knows how to use ChatGPT.
You just say something to it.
If you're not sure how to use ChatGPT,
you ask ChatGPT how to use it.
No tool in history has ever had this capability.
A Cuisinart, if you don't know how to use it,
you're kind of screwed.
You can't walk up to it and say, how do you use a Cuisinart?
You're going to have to find somebody else.
And so but an AI will just tell you exactly how to do it.
Anybody could do this.
They'll speak to you in any language.
And if it doesn't know your language,
you'll speak to it in that language.
And it'll probably figure out that it
doesn't completely understand your language,
go and learn it instantly, and come back and talk to you.
And so I think the technology divide
has a real chance, finally, that you
don't have to speak Python or C++ or Fortran.
You can just speak human.
And whatever form of human you like.
And so I think that that has a real chance
of closing the technology divide.
Now, of course, the counter-narrative would say
that AI is only going to be available for the nations
in the countries that have a vast amount of resources
because AI takes energy.
And AI takes a lot of GPUs and factories
to be able to produce the AI.
No doubt at the scale that we would like to do in the United
States.
But the fact of the matter is, your phone's
going to run AI just fine all by itself in a few years.
Today, it already does it fairly decently.
And so, in every country, every nation,
every society will have the benefit of a very good AI.
It might not be tomorrow's AI.
It might be yesterday's AI.
But yesterday's AI's freaking amazing.
In 10 years' time, nine-year-old AI is going to be amazing.
You don't need a 10-year-old AI.
You don't need frontier AI.
Like we need frontier AI because we want
to be the world leader.
But for every single country, everybody,
I think the capability to elevate everybody's knowledge
and capability and intelligence, that day is coming.
And also energy production, which is the real bottleneck
when it comes to third world countries.
That's right.
Electricity and all the resources that we take for granted.
Almost everything is going to be energy constrained.
And so if you take a look at one of the most important technology
advances in histories, this idea called Moore's Law.
Moore's Law basically started in my generation.
And my generation is the generation of computers.
I graduated in 1984.
And that was basically at the very beginning of the PC Revolution.
And the microprocessor.
And every single year, it approximately doubled.
And we describe it as every single year
we double the performance.
But what it really means is that every single year,
the cost of computing halved.
And so the cost of computing, in the course of five years,
reduced by a factor of 10; the amount of energy
necessary to do computing,
to do any task, reduced by a factor of 10.
Every 10 years, 100; then 1,000, 10,000, 100,000,
and so on and so forth.
And so each one of the clicks of Moore's Law,
the amount of energy necessary to do any computing reduced.
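The arithmetic he's describing compounds quickly. A minimal sketch, using the factor-of-10-every-5-years framing from the conversation:

```python
# Compounding Moore's-Law-style improvement: if the cost (or energy) of
# a computation drops 10x every 5 years, the factors multiply together.
factor = 1
for years in range(5, 41, 5):
    factor *= 10
    print(f"after {years:2d} years: {factor:,}x cheaper")
# After 40 years the same task costs one hundred-millionth as much.
```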
That's the reason why you have a laptop today,
when back in 1984, it sat on the desk.
You had to plug it in, it wasn't that fast,
and it consumed a lot of power. Today,
it's only a few watts.
And so Moore's Law is the fundamental technology.
The fundamental technology trend that made it possible.
Well, what's going on in AI?
The reason why NVIDIA is here is because we invented
this new way of doing computing.
We call it accelerated computing.
We started 33 years ago.
It took us about 30 years to really make
it a huge breakthrough.
And in that 30 years or so, we took computing--
probably a factor of-- well, let me just say
in the last 10 years.
The last 10 years, we improved the performance
of computing by 100,000 times.
Imagine a car over the course of 10 years
and became 100,000 times faster.
Or at the same speed, 100,000 times cheaper.
Or at the same speed, 100,000 times less energy.
If your car did that, it would hardly need energy at all.
What I mean, what I'm trying to say
is that in 10 years time, the amount of energy
necessary for artificial intelligence for most people
will be minuscule, utterly minuscule.
And so we'll have AI running in all kinds of things
in all the time, because it doesn't consume that much energy.
And so if you're a nation that uses AI for almost everything
in your social fabric, of course,
you're going to need these AI factories.
But for a lot of countries, I think you're going to have
excellent AI, and you're not going to need as much energy.
Everybody will be able to come along.
That's my point.
So currently, that is a big bottleneck, right?
It's energy.
It is the bottleneck.
The bottleneck.
So was it Google that is making nuclear power plants
to operate one of its AI factories?
Well, I haven't heard that.
But I think in the next six, seven years,
I think you're going to see a whole bunch of small nuclear
reactors.
And by small, how big are you talking about?
Hundreds of megawatts, yeah.
OK.
And that these will be local to whatever specific company
they have.
That's right.
We'll all be power generators.
Whoa.
You know, just like you're somebody's farm,
it's probably the smartest way to do it, right?
And it takes the burden off the grid.
It takes--
Yeah.
--and you could build as much as you need.
And you can contribute back to the grid.
It's a really important point that I think you just
made about Moore's Law and the relationship to pricing,
because a laptop today, you can get
one of those little MacBook Airs.
They're incredible.
They're so thin.
Unbelievably powerful battery life is very--
You never have to charge it.
Yeah.
It's crazy.
And it's not that expensive.
That's probably speaking like something like that.
I remember when--
And that's just Moore's Law.
Then there's the Nvidia Law.
Oh.
Is that right?
The computing I was talking to you about, that we invented,
the reason why we're here, this new way of doing computing,
is like Moore's Law on energy drinks.
I mean, it's like Moore's Law on Joe Rogan.
Wow.
That's interesting.
Yeah.
That's us.
So explain that.
This chip that you brought to Elon,
what's the significance of this?
Like, why is it so superior?
And so in 2012, Geoff Hinton's lab,
this gentleman I was talking about, with Ilya Sutskever and
Alex Krizhevsky, they made a breakthrough in computer vision,
literally creating a piece of software called AlexNet.
And its job was to recognize images.
And it recognized images at a new level.
Computer vision is fundamental to intelligence.
If you can't perceive,
it's hard to have intelligence.
And so computer vision is a fundamental pillar,
not the only one, but a fundamental pillar.
And so breaking through
in computer vision is pretty foundational to almost
everything that everybody wants to do in AI.
And so in 2012, their lab in Toronto made this breakthrough
called AlexNet.
And AlexNet was able to recognize images so much better
than any human-created computer vision algorithm
in the 30 years prior.
So all of these people, all of these scientists--
and we had many too--
working on computer vision algorithms.
And these two kids, Ilya and Alex, under Jeff Hinton,
took a giant leap above it.
And it was based on this thing called AlexNet,
this neural network.
And the way it ran, the way they made it work,
was literally buying two NVIDIA graphics cards.
Because NVIDIA's GPUs, we've been working on this new way
of doing computing.
And our GPU's application is basically
a supercomputing application.
Back in 1984, in order to process computer games
and what you have in your racing simulator,
you needed what was called an image generator supercomputer.
And so NVIDIA started--
our first application was computer graphics.
And we applied this new way of doing computing,
where we do things in parallel instead of sequentially.
A CPU does things sequentially.
Step one, step two, step three.
In our case, we break the problem down,
and we give it to thousands of processors.
And so our way of doing computation
is much more complicated.
But if you're able to formulate the problem
in the way that we created called CUDA--
this is the invention of our company--
if you could formulate it in that way,
we could process everything simultaneously.
Now, in the case of computer graphics,
it's easier to do because every single pixel on your screen
is not related to every other pixel.
And so I could render multiple parts of the screen
at the same time, not completely true,
because maybe the way lighting works, or the way shadow works,
there's a lot of dependency and such.
But computer graphics, with all the pixels,
you could by and large process everything simultaneously.
And so we took this embarrassingly parallel problem
called computer graphics, and we applied it
to this new way of doing computing,
NVIDIA's accelerated computing.
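The contrast he's drawing, a CPU stepping through work one item at a time versus breaking the problem into many independent pieces, can be sketched in a few lines. This uses Python threads only to show the decomposition; an actual GPU runs thousands of such workers in hardware, and CUDA is its own programming model.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy "screen": each pixel's new value depends only on that pixel,
# so the work is embarrassingly parallel, like shading in graphics.
pixels = list(range(16))

def shade(p):
    # Stand-in for a per-pixel computation.
    return p * p

# Sequential, CPU-style: step one, step two, step three...
sequential = [shade(p) for p in pixels]

# Parallel-style: hand the pieces to many workers at once.
# (A GPU does this with thousands of hardware threads; this sketch
# just shows the decomposition using Python threads.)
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(shade, pixels))

print(sequential == parallel)  # same answer, different schedule
```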
We put it in all of our graphics cards.
Kids were buying it to play games.
You probably don't know this, but we're
the largest gaming platform in the world today.
Oh, I know that.
Oh, OK.
I used to make my own computers.
I used to buy your graphics cards.
Oh, that's super cool.
Yeah, I'd set up SLI with your graphics cards.
Yeah, I love it.
OK, that's super cool.
Oh, yeah, man, I used to be a Quake junkie.
Oh, that's cool.
Yeah.
OK, so SLI, I'll tell you the story in just a second.
And how it led to Elon.
I'm still answering the question.
And so anyways, these two kids trained this model
using the technique I described earlier on our GPUs,
because our GPUs could process things in parallel.
It's essentially a supercomputer in a PC.
The reason why you used it for Quake
is because it is the first consumer supercomputer, OK?
And so anyways, they made that breakthrough.
We were working on computer vision at the time.
It caught my attention.
And so we went to learn about it.
Simultaneously, this deep learning phenomenon
was happening all over the country,
one university after another recognized
the importance of deep learning.
And all of this work was happening at Stanford,
at Harvard, at Berkeley, just all over the place.
New York University, Yann LeCun; Andrew Ng
at Stanford; so many different places.
And I see it cropping up everywhere.
And so my curiosity asked, what is so special
about this form of machine learning?
And we've known about machine learning for a very long time.
We've known about AI for a very long time.
We've known about neural networks for a very long time.
What makes now the moment?
And so we realized that this architecture
for deep neural networks, back propagation,
the way deep neural networks were created,
we could probably scale this problem,
scale the solution to solve many problems.
That is essentially a universal function approximator.
Meaning, when you're in school, you have a box.
Inside of it is a function.
You give it an input.
It gives you an output.
And the reason why I call it a universal function approximator
is that this computer, instead of you describing the function,
a function could be Newton's equation.
F equals ma, that's a function.
You write the function in software.
You give it input, mass and acceleration,
and it'll tell you the force.
And the way this computer works is really interesting.
You give it a universal function.
It's not f equals m a, just a universal function.
It's a big, huge, deep neural network.
And instead of describing the inside,
you give it examples of input and output.
And it figures out the inside.
So you give it input and output, and it figures out the inside.
A universal function approximator.
Today, it could be Newton's equation.
Tomorrow, it could be Maxwell's equation.
It could be Coulomb's law.
It could be the thermodynamics equations.
It could be Schrodinger's equation for quantum physics.
And so you could have this describe almost anything.
So long as you have the input and the output.
So long as you have the input and the output.
Or it could learn the input and output.
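The box he's describing can be made concrete. Here we pretend not to know Newton's F = m * a and fit an unknown coefficient k in F = k * m * a from input/output examples by gradient descent. The examples are made up, and a real universal function approximator is a deep network; a single coefficient just keeps the idea visible.

```python
# We pretend we don't know Newton's F = m * a, only input/output pairs,
# and fit an unknown coefficient k in F = k * m * a by gradient descent.
# The examples are invented; round numbers keep the arithmetic visible.
examples = [((2.0, 3.0), 6.0), ((1.0, 5.0), 5.0), ((4.0, 0.5), 2.0)]

k = 0.0  # the initial guess at "the inside of the box": garbage
for _ in range(200):
    for (m, a), force in examples:
        pred = k * m * a
        grad = 2 * (pred - force) * m * a  # derivative of squared error w.r.t. k
        k -= 0.01 * grad                   # nudge k against the error

print(round(k, 3))  # converges to 1.0: the law recovered from examples alone
```

The same loop, with a deep network in place of the single coefficient, is what lets the box approximate Newton today and, in principle, Maxwell or Schrodinger tomorrow, as long as input/output examples exist.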
And so we took a step back and we said,
hang on a second.
This isn't just for computer vision.
Deep learning could solve any problem.
All the problems that are interesting.
So long as we have input and output.
Now, what has input and output?
Well, the world.
The world has input and output.
And so we could have a computer that could learn almost
anything, machine learning, artificial intelligence.
And so we reasoned that maybe this
is the fundamental breakthrough that we needed.
There were a couple of things that had to be solved.
For example, we had to believe that you
could actually scale this up to giant systems.
It was running in a--
they had two graphics cards, two GTX 580s, which, by the way,
is exactly your SLI configuration.
So that GTX 580 SLI was the revolutionary computer
that put deep learning on the map.
It was 2012.
And you were using it to play Quake.
Wow, that's crazy.
That was the moment.
That was the big bang of modern AI.
We were lucky because we were inventing this technology,
this computing approach.
We were lucky that they found it.
Turns out they were gamers, and it was lucky they found it.
And it was lucky that we paid attention to that moment.
It was a little bit like that Star Trek: First Contact.
The Vulcans had to have seen the warp drive at that very moment.
If they didn't witness the warp drive,
they would have never come to Earth.
And everything would have never happened.
It's a little bit like if I hadn't paid attention
to that moment, that flash, and that flash didn't last long.
If I hadn't paid attention to that flash,
or our company didn't pay attention to it.
Who knows what would happen?
But we saw that, and we reasoned our way into--
this is a universal function approximator.
This is not just a computer vision approximator.
We could use this for all kinds of things
if we could solve two problems.
The first problem is that we have to prove to ourselves
it could scale.
The second problem we had to wait for, I guess,
contribute to and wait for is the world
will never have enough data on input and output
where we could supervise the AI to learn everything.
For example, if we have to supervise our children
on everything they learned, the amount of information
they could learn is limited.
We needed the AI, we needed the computer
to have a method of learning without supervision.
And that's where we had to wait a few more years.
But unsupervised AI learning is now here.
And so the AI could learn by itself.
And the reason why the AI could learn by itself
is because we have many examples of right answers.
Like, for example, if I want to teach an AI
how to predict the next word.
I could just grab a whole bunch of text
that we already have, mask out the last word,
and make it try and try and try again
until it predicts the next one.
Or I mask out random words inside the text.
And I make it try and try and try until it predicts it.
Like, Mary goes down to the bank.
Is that a river bank or a money bank?
Well, if you're going to go down to the bank,
it's probably a river bank.
And if it's not obvious even from that,
it might say, "and caught a fish."
Now you know it must be the river bank.
So you give these AIs a whole bunch of these examples.
And you mask out the words, and it'll predict the next one.
And so unsupervised learning came along.
These two ideas, the fact that it's scalable
and unsupervised learning came along,
we were convinced that we had to put everything into this
and help create this industry because we're
going to solve a whole bunch of interesting problems.
And that was in 2012.
By 2016, I had built this computer called the DGX-1.
The one that you saw me give to Elon
is called DGX-Spark.
The DGX-1 was $300,000.
It cost Nvidia a few billion dollars to make the first one.
And instead of two chips SLI, we connected eight chips
with a technology called NVLink.
But it's basically SLI supercharged.
And so we connected eight of these chips together instead
of just two.
And all of them work together, just like your Quake rig
did to solve this deep learning problem to train this model.
And so we created this thing.
I announced it at GTC, one of our annual events.
And I described this deep learning thing, computer vision
thing, and this computer called DGX-1.
The audience was completely silent.
They had no idea what I was talking about.
But I was lucky because I had known Elon.
And I helped him build the first computer for the Model S.
And when he wanted to start working
on autonomous vehicles, I helped him build the computer that
went into the Model S AV system, his full self-driving system.
We were basically the FSD computer version one.
And so we were already working together.
And when I announced this thing,
nobody in the world wanted it.
I had no purchase orders, not one.
Nobody wanted to buy it.
Nobody wanted to be part of it.
Except for Elon, he goes, he was at the event
and we were doing a fireside chat about the future of self-driving
cars, I think it was 2016, or maybe it was 2015.
And he goes, you know what?
I have a company that could really use this.
I was like, wow, my first customer.
And so I was pretty excited about it.
And he goes, yeah, we have this company.
It's a non-profit company.
And all the blood drained out of my face.
I just spent a few billion dollars building this thing,
cost $300,000, and the chances of a non-profit being able
to pay for this thing were approximately zero.
And he goes, you know, this is an AI company.
And it's a non-profit.
And we could really use one of these supercomputers.
And so I picked it up.
I built the first one for ourselves.
We're using it inside the company.
I boxed one up.
I drove it up to San Francisco.
And I delivered it to Elon in 2016.
A bunch of researchers were there.
Pieter Abbeel was there.
Ilya Sutskever was there.
And there was a bunch of people there.
And I walk up to the second floor where they were all kind
of in a room, smaller than your place here.
And that place turned out to have been OpenAI.
2016, just a bunch of people sitting in a room.
It's not really a non-profit anymore, though.
They're not non-profit anymore, yeah.
- Weird how that works.
- Yeah, yeah.
But anyhow, anyhow, Elon was there.
Yeah, it was really a great moment.
- Oh yeah, there you go, yeah, that's it.
- Look at you bro, same jacket.
- Look at that.
I haven't aged.
Not a lick of black hair though.
The size of it is significantly smaller.
That was the other day.
- Okay, so there you go.
- Yeah, look at the difference.
- Exactly the same industrial design.
- He's holding it in his hand.
- Here's the amazing thing.
DGX-1 was one petaflops.
Okay, that's a lot of flops.
And DGX-Spark is one petaflops, nine years later.
- Wow.
- The same amount of computing horsepower.
- And a much more shrunken down.
- Yeah.
- And instead of $300,000,
it's now $4,000.
- And it's the size of a small book.
- Incredible.
- Crazy.
- That's how technology moves.
- Anyways, that's the reason why I wanted to give him
the first one, because I gave him the first one, 2016.
- It's so fascinating.
I mean, if you wanted to make a story for a film,
I mean, that would be the story that like,
what better scenario, if it really does become
a digital life form, how funny would it be
that it is birthed out of the desire
for computer graphics for video games?
(laughing)
- Exactly.
It's kind of crazy.
- Yeah.
- Kind of crazy when you think about it that way.
- Because it turns out.
- Perfect origin story.
- Computer graphics was one of the hardest
supercomputer problems.
Generating reality.
- And also one of the most profitable to solve,
because computer games are so popular.
When NVIDIA started in 1993,
we were trying to create this new computing approach.
The question is, what's the killer app?
And the problem we wanted to,
the company wanted to create a new type of computing
architecture, a new type of computer
that can solve problems that normal computers can't solve.
Well, the applications that existed in the industry
in 1993 are applications that normal computers can solve
because if the normal computers can't solve them,
why would the application exist?
And so we had a mission statement for a company
that has no chance of success.
(laughing)
But I didn't know that in 1993.
It just sounded like a good idea.
- Right.
And so if we created this thing,
that can solve problems, it's like,
you actually have to go create the problem.
And so that's what we did.
In 1993, there was no quake.
John Carmack hadn't released Doom yet.
You probably remember that.
- Sure, yeah.
- And there were no applications for it.
And so I went to Japan
because the arcade industry had this,
at the time, Sega, you remember?
- Sure.
- The arcade machines, they came out with 3D arcade systems,
Virtua Fighter, Daytona, Virtua Cop.
All of those arcade games were in 3D
for the very first time.
And the technology they were using was from Martin Marietta,
the flight simulators.
They took the guts out of a flight simulator
and put it into an arcade machine.
The system that you have over here,
it's got to be a million times more powerful
than that arcade machine.
And that was a flight simulator for NASA.
- Whoa.
- And so they took the guts out of that.
They were using it for flight simulation with jets
and space shuttle and they took the guts out of that.
And Sega had this brilliant computer developer,
his name is Yu Suzuki.
Yu Suzuki and Miyamoto, Sega and Nintendo,
these were the incredible pioneers,
the visionaries, the incredible artists,
and they're both very, very technical.
They were the origins really of the gaming industry.
And Yu Suzuki pioneered 3D graphics gaming.
So I went, we created this company and there were no apps.
And we were spending all of our afternoons.
We told our family we were going to work,
but it was just the three of us, who's gonna know?
And so we went to Curtis's, one of the founders,
went to Curtis's townhouse. And Chris and I were both married.
We have kids.
I already had Spencer and Madison.
They were probably two years old.
And Chris's kids are about the same age as ours.
And we would go to work in this townhouse,
but when you're a startup and the mission statement
is the way we described,
you're not gonna have too many customers calling you.
And so we had really nothing to do.
And so after lunch, we would always have a great lunch.
After lunch, we would go to the arcades
and play the Sega, the Sega Virtua fighter and Daytona
and all those games and analyze how they're doing it,
trying to figure out how they were doing that.
And so we decided, let's just go to Japan.
And let's convince Sega to move those applications
into the PC and we would start the PC gaming,
the 3D gaming industry partnering with Sega.
That's how NVIDIA started.
Wow.
And so in exchange for them
porting their games to our computers in the PC,
we would build a chip for their game console.
That was the partnership.
I build a chip for your game console.
You port the Sega games to us.
And then they paid us, you know,
at the time, quite a significant amount of money
to build that game console.
And that was kind of the beginning of Nvidia getting started.
And we thought we were on our way.
And so, so I started with a business plan,
a mission statement that was impossible.
We lucked into the Sega partnership.
We started taking off,
started building our game console.
And about a couple of years into it,
we discovered our first technology didn't work.
It was flawed.
And all of the technology ideas that we had,
the architecture concepts were sound.
But the way we were doing computer graphics
was exactly backwards.
You know, instead of, I won't bore you with the technology,
but instead of inverse texture mapping,
we were doing forward texture mapping.
Instead of triangles, we did curved surfaces.
So other people did it flat, we did it round.
The technology that ultimately won,
the technology we use today, has Z-buffers.
It sorts automatically.
We had an architecture with no Z-buffer.
The application had to sort everything.
And so we chose a bunch of technology approaches,
three major technology choices,
and all three choices were wrong.
Okay, so this is how incredibly smart we were.
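The Z-buffer idea mentioned above, the one NVIDIA's first architecture skipped, can be sketched in a few lines: each pixel remembers the depth of the nearest surface drawn so far, and a new fragment only wins if it is closer, so the application no longer has to sort geometry back-to-front. A minimal illustrative sketch (buffer sizes, colors, and depths invented):

```python
# Toy Z-buffer: depth-test every fragment so draw order stops mattering.
WIDTH, HEIGHT = 4, 4
FAR = float("inf")

color_buffer = [["bg"] * WIDTH for _ in range(HEIGHT)]
z_buffer = [[FAR] * WIDTH for _ in range(HEIGHT)]

def draw_pixel(x, y, depth, color):
    """Keep a fragment only if it is nearer than what is already there."""
    if depth < z_buffer[y][x]:
        z_buffer[y][x] = depth
        color_buffer[y][x] = color

# Draw a far red surface, then a near blue one, then a mid-depth green
# one over the same pixel. Without a Z-buffer, the app would have to
# sort these; with one, any order gives the correct front surface.
draw_pixel(1, 1, depth=10.0, color="red")
draw_pixel(1, 1, depth=2.0, color="blue")
draw_pixel(1, 1, depth=5.0, color="green")  # behind blue: rejected

print(color_buffer[1][1])  # prints "blue"
```

An architecture without this test pushes the sorting burden onto every application, which is exactly the wrong choice described above.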
And so, in 1995, we realized
we were going down the wrong path.
Meanwhile, Silicon Valley was packed
with 3D graphics startups
because it was the most exciting technology at that time.
And so 3dfx and Rendition,
and Silicon Graphics was coming in.
Intel was already in there.
And you know, gosh, what added up eventually
to a hundred different startups we had to compete against.
Everybody had chosen the right technology approach
and we chose the wrong one.
And so we were the first company to start.
We found ourselves essentially dead last
with the wrong answer.
And so, the company was in trouble.
And ultimately, we had to make several decisions.
The first decision is, well, if we change now,
we will be the last company.
And even if we changed to the technology
that we believed to be right, we'd still be dead.
And so that argument, you know,
do we change and therefore be dead?
Don't change and make this technology work somehow
or go do something completely different.
That question stirred the company strategically
and was a hard question.
I eventually, you know, advocated for it.
We don't know what the right strategy is
but we know what the wrong technology is.
So let's stop doing it the wrong way
and let's give ourselves a chance
to go figure out what the strategy is.
The second thing, the second problem we had
was that our company was running out of money,
and I was in a contract with Sega
and I owed them this game console.
And if that contract would have been canceled,
we'd be dead.
We would have vaporized instantly.
And so, I went to Japan and I explained to
the CEO of Sega, Shoichiro Irimajiri, really great man.
He was the former CEO of Honda USA,
and he went back to Japan to run Sega.
And I explained to him that, I guess I was what,
30, 33 years old.
I still had acne, and I was this, you know, Chinese kid,
and I was super skinny.
And he was already kind of elder and I went to him
and I said, I said, listen, I've got some bad news for you.
And first, the technology that we promised you
doesn't work.
And second, we shouldn't finish your contract
because we'd waste all your money
and you would have something that doesn't work.
And I recommend you find another partner
to build your game console.
And so I'm terribly sorry that we've set you back
in your product roadmap.
And third, even though you're going to,
I'm asking you to let me out of the contract,
I still need the money.
Because if you didn't give me the money,
we'd vaporize overnight.
And so I explained it to him humbly, honestly.
I gave him the background,
explain to him why the technology doesn't work,
why we thought it was going to work, why it doesn't work.
And I asked him to
convert the last $5 million that they were going
to pay to complete the contract into an investment in us instead.
And he said,
but it's very likely your company will go out of business,
even with my investment.
And it was completely true.
Back then, 1995, $5 million was a lot of money.
It's a lot of money today, $5 million was a lot of money.
And here's a pile of competitors doing it right.
What are the chances that, giving NVIDIA $5 million,
we would develop the right strategy,
that he would get a return on that $5 million
or even get it back?
Zero percent.
You do the math, it's zero percent.
If I were sitting right there, I wouldn't have done it.
$5 million was a serious amount of money at the time.
And so, I told him that if you invested that $5 million
in us, it is most likely to be lost.
But if you didn't invest that money, we'd be out of business
and we'd have no chance at all.
And I told him that I,
I don't even know exactly what I said in the end,
but I told him that I would understand if he decided not to,
but it would mean the world to me if he did.
He went off and thought about it for a couple of days
and came back and said, "We'll do it."
- Wow.
- Did you have a strategy for how to correct
what you were doing wrong? Did you expect that to happen?
- Oh man, wait until I tell you the rest of it.
- It's even scarier.
- Oh no.
(laughing)
- And so, what he decided was, here's a young man he liked.
That's it.
- Wow.
- To this day.
- That's nuts.
- My whole world was that guy.
- No doubt.
- Right?
- Well, he's celebrated today in Japan.
And if he would have kept that $5 million investment,
I think it'd be worth probably about a trillion dollars today.
(laughing)
I know, but the moment we went public,
they sold it, they'd go, "Wow, that's a miracle."
(laughing)
They sold it at an NVIDIA valuation of about $300 million.
That was our IPO valuation, $300 million.
- Wow.
- And so anyhow, I was incredibly grateful.
And then now we have to figure out what to do
because we still were doing the wrong strategy,
the wrong technology.
So unfortunately, we had to lay off most of the company.
We shrunk the company all back.
All the people working on the game console, you know?
We had to shrink it all back.
And then somebody told me, but Jensen,
we've never built it this way before.
We've never built it the right way before.
We only know how to build it the wrong way.
And so nobody in the company knew how to build this
supercomputing image generator, 3D graphics thing
that Silicon Graphics did.
And so I said, "Okay, how hard can it be?"
You've got all these 30 companies, you know,
50 companies doing it, how hard can it be?
And so luckily, there was a textbook written
by the company Silicon Graphics.
And so I went down to the store.
I had 200 bucks in my pocket.
And I bought three textbooks, only three they had.
$60 a piece.
I bought the three textbooks.
I brought them back and gave one to each of the architects.
And I said, "Read that and let's go save the company."
And so they read this textbook,
learned from the giant at the time, Silicon Graphics,
about how to do 3D graphics.
But the thing that was amazing,
and what makes NVIDIA special today,
is that the people that are there
are able to start from first principles.
Learn best known art, but re-implement it in a way
that's never been done before.
And so when we re-imagined the technology of 3D graphics,
we re-imagined it in a way that manifests today
the modern 3D graphics, we really invented
modern 3D graphics.
But we learned from previous known art
and we implemented it fundamentally differently.
- What did you do to change it?
- Well, ultimately, the simple answer is that
the way Silicon graphics works, the geometry engine,
is a bunch of software running on processors.
We took that and eliminated all the generality,
the general-purposeness of it.
And we reduced it down into the most essential part
of 3D graphics.
And we hard-coded it into the chip.
And so instead of something general purpose,
we hard-coded it very specifically
into just the limited applications,
limited functionality necessary for video games.
And that capability, because we reinvented
a whole bunch of stuff, was supercharged.
That one little chip of ours was generating images
as fast as a $1 million image generator.
That was the big breakthrough.
We took a $1 million thing
and we put it into the graphics card
that you now put into your gaming PC.
And that was our big invention.
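The "strip out the generality and hard-code the essential part" idea can be illustrated with a toy fixed-function pipeline stage. This sketch (the function name, the single supported transform, and all numbers are invented here, not NVIDIA's actual design) hard-wires exactly one job, rotating a vertex and perspective-projecting it, instead of running arbitrary software on a general-purpose processor:

```python
import math

def fixed_function_transform(x, y, z, angle, viewer_distance):
    """Hard-wired stage: rotate a vertex around Y, then project it.

    A general processor could run any program; this 'unit' does one
    thing, which is what lets hardware implementations of it be so fast.
    """
    # Rotation around the Y axis, the only transform this toy supports.
    c, s = math.cos(angle), math.sin(angle)
    xr = c * x + s * z
    zr = -s * x + c * z
    # Perspective divide: farther points shrink toward the screen center.
    w = viewer_distance / (viewer_distance + zr)
    return xr * w, y * w

# With no rotation, a point at z = 0 projects to itself.
sx, sy = fixed_function_transform(1.0, 1.0, 0.0, angle=0.0, viewer_distance=4.0)
print(sx, sy)
```

Real GPUs of that era baked stages like this into silicon; the trade-off described above is exactly this one, giving up generality to make the one operation games need run orders of magnitude faster.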
And of course, the question is, how do you compete against
these 30 other companies doing what they were doing?
And there we did several things.
One, instead of building a 3D graphics chip
for every 3D graphics application,
we decided to build a 3D graphics chip for one application.
We bet the farm on video games.
The needs of video games are very different
than the needs for CAD,
needs for flight simulators.
They're related, but not the same.
And so we narrowly focused our problem statement
so I could reject all of the other complexities
and we shrunk it down into this one little focus
and then we supercharged it for gamers.
And the second thing that we did
was we created a whole ecosystem of working
with game developers and getting their games ported
and adapted to our silicon.
So that we could get, turn essentially,
what is a technology business
into a platform business, into a game platform business.
So, you know, GeForce today is really
also the most advanced
3D graphics technology in the world.
But a long time ago, GeForce was really
the game console inside your PC.
It's, you know, it runs Windows,
it runs Excel, it runs PowerPoint, of course,
those are easy things.
But its fundamental purpose
was simply to turn your PC into a game console.
So we were the first technology company
to build all of this incredible technology
in service of one audience: gamers.
Now, of course, in 1993, the gaming industry didn't exist.
But by the time that John Carmack came along
and the doom phenomenon happened and then Quake came out,
as you know, that entire community boom took off.
Do you know where the name Doom came from?
It came from this, there's a scene in the movie,
The Color of Money, where Tom Cruise,
who's this elite pool player shows up at this pool hall
and this local hustler says, "What's he got in the case?"
And he opens up this case.
He has a special pool cue that goes in there,
and he opens it up and he goes, "DOOM!"
(laughing)
That's where it came from, Chris.
Yeah, 'cause Carmack said that's what they wanted to do
to the gaming industry, that when Doom came out,
it would just be everybody be like, "Oh, we're fucked."
Oh wow.
This is Doom.
That's awesome.
Isn't that amazing?
'Cause it's the perfect name for the game.
Yeah.
And the name came out of that scene in that movie.
That's right.
Well, and then of course, Tim Sweeney and Epic Games
and the 3D gaming genre took off.
Yes.
And so, in the beginning there
was no gaming industry, so we had no choice
but to focus the company on one thing, that one thing.
It's a really incredible origin story.
Oh, it's amazing.
Like, you must look back.
A disaster averted, right?
That pivot, with that conversation
with that gentleman, if he did not agree to that,
if he did not like you, what would the world look like today?
That's crazy.
Oh wait.
Then our entire life hung on another gentleman.
And so now, here we are, we built, so before GeForce,
it was Riva 128.
Riva 128 saved the company.
It revolutionized computer graphics;
the cost-performance ratio
of 3D graphics for gaming was off the charts.
Amazing.
And we're getting ready to ship it.
Get what?
We're building it.
But we're, so as you know, $5 million doesn't last long.
And so every single month, every single month,
we were drawing down.
You have to design it, prototype it.
Get the silicon back, which costs a lot of money.
Test it with software.
Because without the software testing the chip,
you don't know the chip works.
And then you're going to find a bug, probably,
because every time you test something, you find bugs.
Which means you have to tape it out again,
which is more time, more money.
And so we did the math.
There was no chance we were going to survive it.
We didn't have that much time to tape out a chip,
send it to a foundry, TSMC, get the silicon back,
test it, send it back out again.
There was no shot, no hope.
And so the math, the spreadsheet, doesn't allow us to do that.
And so I heard about this company.
And this company built this machine.
And this machine is an emulator.
You could take your design, all of the software that
describes the chip.
And you could put it into this machine.
And this machine will pretend it's our chip.
So I don't have to send it to the fab.
Wait until the fab sends it back.
I could have this machine pretend it's our chip.
And I could put all of the software on top of this machine
called an emulator.
And test all of the software on this pretend chip.
And I could fix it all before I send it to the fab.
And if I could do that, when
I send it to the fab, it should work.
Nobody knows, but it should work.
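The emulation workflow just described, testing the software against something that pretends to be the chip before any silicon exists, can be sketched like this (the class, register addresses, and self-test are all invented for illustration; a real emulator models the actual chip design, gate by gate):

```python
# Sketch of pre-silicon verification: run the software stack against a
# model that "pretends to be the chip", so bugs surface in the model
# instead of in an expensive fabricated part.

class EmulatedChip:
    """Stands in for real silicon: same interface, software behavior."""

    def __init__(self):
        self.registers = {}

    def write_register(self, addr, value):
        # Model 32-bit hardware registers by masking the stored value.
        self.registers[addr] = value & 0xFFFFFFFF

    def read_register(self, addr):
        # Unwritten registers read back as zero in this toy model.
        return self.registers.get(addr, 0)

def driver_self_test(chip):
    """The 'software' exercised against the pretend chip."""
    chip.write_register(0x10, 0xDEADBEEF)
    return chip.read_register(0x10) == 0xDEADBEEF

print(driver_self_test(EmulatedChip()))  # prints True
```

Only once every such test passes against the model do you commit the design to the fab, which is the bet described next: going straight to production because the software already ran on the pretend chip.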
And so we came to the conclusion that let's
take half of the money we had left in the bank.
At the time, it was about a million dollars.
Take half of that money and go buy this machine.
So instead of keeping the money to stay alive,
I took half of the money to go buy this machine.
Well, I called this guy up. The company was called IKOS.
I called this company up.
And I said, hey, listen, I heard about this machine.
I'd like to buy one.
And they go, oh, that's terrific, but we're out of business.
I said, what?
You're out of business.
He goes, yeah, we have no customers.
And I said, wait, hang on, so you never made the machine?
And they said, no, no, no, we made the machine.
We have one in inventory, if you want,
but we're out of business.
So I bought one out of inventory, OK?
After I bought it, they went out of business.
I bought it out of inventory.
And on this machine, we put NVIDIA's chip into it.
And we tested all of the software on top.
And at this point, we were on fumes.
But we convinced ourselves that chip is going to be great.
And so I had to call some other gentleman.
So I called TSMC.
And I told TSMC that, listen, TSMC is the world's largest
foundry today.
At the time, it was just a few hundred million dollars
large, a tiny little company.
And I explained to them what we were doing.
And I told them I had a lot of customers.
I had one: Diamond Multimedia, probably one of the companies
you bought the graphics card from back in the old days.
And I said, we have a lot of customers,
and demand is really great.
And we're going to tape out a chip to you.
And I'd like to go directly to production
because I know it works.
And they said, nobody has ever done that before.
Nobody has ever taped out a chip that worked the first time.
And nobody starts out production without looking at it.
But I knew that if I didn't start to production,
I'd be out of business anyways.
And if I could start to production, I might have a chance.
And so TSMC decided to support me.
And this gentleman named Morris Chang.
Morris Chang is the father of the foundry industry,
the founder of TSMC, really great man.
He decided to support our company.
I explained to them everything.
He decided to support us, frankly, probably,
because they didn't have that many other customers anyhow.
But they were grateful, and I was immensely grateful.
And as we were starting the production,
Morris flew to the United States.
And he didn't ask me in so many words,
but he asked me a whole lot of questions
that were trying to tease out: do I have any money?
But he didn't directly ask me that.
And so the truth is that we didn't have all the money.
But we had a strong PO from the customer.
And if it didn't work, some wafers would have been lost.
And I'm not exactly sure what would have happened,
but we would have come short.
It would have been rough.
But they supported us with all of that risk involved.
We launched this chip, turns out to have been completely
revolutionary, knocked the ball out of the park.
We became the fastest growing technology company in history
to go from zero to $1 billion.
So, wait, they didn't test the chip?
We tested it afterwards.
Yeah, we tested afterwards, but we were already in production.
But by the way, that methodology that we developed
to save the company is used throughout the world today.
That's amazing.
Yeah, we changed the whole world's methodology
of designing chips, the whole world's rhythm
of designing chips.
We changed everything.
How well did you sleep those days?
It must have been so stressful.
What is that feeling where the world just kind of feels
like it's flying?
What do you call that feeling?
You can't stop the feeling that everything's moving super
fast.
And you're laying in bed.
And the world just feels like--
and you'd feel deeply anxious, completely out of control.
I've felt that probably a couple of times in my life.
It's during that time.
Wow.
Yeah.
It was incredible.
What an incredible success.
But I learned a lot.
I learned about-- I learned several things.
I learned how to develop strategies.
I learned how to-- and when I-- our company
learned how to develop strategies.
What are winning strategies?
We learned how to create a market.
We created the modern 3D gaming market.
We learned how-- and so that exact same skill
is how we created the modern AI market.
It's exactly the same--
Yeah.
It's exactly the same skill, exactly the same blueprint.
And we learned how to deal with crisis.
How to stay calm.
How to think through things systematically.
We learned how to remove all waste in the company
and work from first principles and doing only the things
that are essential.
Everything else is waste because we have no money for it.
To live on fumes at all times.
And the feeling-- no different than the feeling
I had this morning when I woke up--
that you're going to be out of business.
That phrase, 30 days from going out of business,
I've used for 33 years because--
You still feel--
Oh, yeah.
Oh, yeah.
Every morning.
Every morning.
But you guys are one of the biggest companies
on planet earth.
But the feeling doesn't change.
Wow.
The sense of vulnerability, the sense of uncertainty,
the sense of insecurity, it doesn't leave you.
That's crazy.
You know, we had nothing.
We had nothing.
And you still find that.
And you still find that.
Oh, yeah.
Oh, yeah.
Every day.
Every moment.
Do you think that fuels you?
Is that part of the reason why the company's so successful,
that you have that hungry mentality?
That you never rest.
You're never sitting on your laurels.
You're always on the edge.
I have a greater drive from not wanting to fail
than the drive of wanting to succeed.
[LAUGHTER]
Isn't that the opposite of what success coaches tell you?
The world has just heard me say that out loud for the first time.
But it's true.
Well, this has happened.
The fear of failure drives me more than the greed
or whatever it is.
Well, ultimately, that's probably a more healthy approach.
Now that I'm thinking about it.
Because the fear--
I'm not ambitious, for example.
I just want to stay alive, Joe.
I want the company to thrive.
I want us to make an impact.
That's interesting.
Well, maybe that's why you're so humble.
That's what-- maybe that's what keeps you grounded.
Because with the kind of spectacular success
the company's achieved, it'd be easy to get a big head.
Right?
But isn't that interesting?
If you were the guy whose main focus is just success,
you'd probably go, well, made it, nailed it, I'm the man.
It's dead.
It's dead.
You wake up, you're like, oh, we can't fuck this up.
No, exactly.
Every morning.
Every morning, not every moment.
Yeah, that's crazy.
Before I go to bed.
Well, listen, if I was a major investor in your company,
that's the guy I'd want running it.
I want a guy who's like--
That's why I work--
Yeah.
That's why I work.
That's why I work seven days a week.
Every moment, I'm awake.
You work every moment.
Every moment, I'm awake.
Wow.
I'm thinking about solving a problem.
I'm thinking about--
How long can you keep this up?
I don't know, but so far.
Could be next week.
It sounds exhausting.
It is exhausting.
It sounds completely exhausting.
Always in a state of anxiety.
Wow.
Always in a state of anxiety.
Well, Kudos to you for admitting that.
I think that's important for a lot of people to hear.
Because there's probably some young people out there
that are in a similar position to where you were
when you were starting out that just feel like all those people
that have made it, they're just smarter than me,
and they had more opportunities than me, and it's just like--
it was handed to them, or they were just
in the right place at the right time.
And Joe, I just described to you somebody
who didn't know what was going on.
Actually did it wrong.
Yeah.
Yeah.
And the ultimate diving catch, like two or three times.
Crazy.
Yeah.
The ultimate diving catch is the perfect way to put it.
It's just like the edge of your glove.
It probably bounced off of somebody's helmet
and landed at the edge.
[LAUGHTER]
That's incredible.
It's incredible, but it's also--
it's really cool that you have this perspective
that you look at it that way.
Because, you know, a lot of people have delusions
of grandeur, and their rewriting of history
oftentimes has them somehow extraordinarily smart,
and they were geniuses, and they knew all along,
and they were spot on.
The business plan was exactly what they thought.
And they destroyed the competition.
And they emerged victorious.
[LAUGHTER]
Meanwhile, you're like, I'm scared every day.
Exactly.
[LAUGHTER]
Exactly.
It's so funny.
Oh my god, that's amazing.
It's so true, though.
It's amazing.
It's so true.
It's amazing.
But I think there's nothing inconsistent
with being a leader and being vulnerable.
The company doesn't need me to be a genius, right all along,
right all the time.
Absolutely certain about what I'm trying to do
and what I'm doing.
The company doesn't need that.
The company wants me to succeed.
The thing that-- and we started out today
talking about President Trump, and I was about to say something.
And listen, he is my president.
He is our president.
We should all-- we were talking about how, just because
it's President Trump, some people want him to be wrong.
But in the United States, we all have to realize:
He is our president.
We want him to succeed.
Because--
No matter who's president, that's right, too.
That's right.
We want him to succeed.
We need to help him succeed because it helps everybody.
All of us succeed.
And I'm lucky that I work in a company
where I have 40,000 people who want me to succeed.
They want me to succeed, and I can tell.
And every single day to help me overcome these challenges,
trying to realize what I describe to be our strategy,
doing their best, and if it's somehow wrong or not perfectly
right, to tell me so that we could pivot.
And the more vulnerable we are as leaders,
the more able other people are to tell you,
you know, that's not exactly right, or have you
considered this information,
and the more able we actually are to pivot.
If we cast ourselves as having superhuman capability,
then it's hard for us to pivot strategy,
because we were supposed to be right all along.
And so if you're always right, how can you possibly pivot?
Because pivoting requires you to be wrong.
And so I've got no trouble with being wrong.
I just have to make sure that I stay alert,
that I reason about things from first principles all the time,
always break things down to first principles,
understand why it's happening, reassess continuously.
The reassessing continuously is kind of partly what causes continuous anxiety.
Because you're asking yourself, were you wrong yesterday?
Are you still right?
Is this the same?
Has that changed?
Has that condition changed?
Is that worse than you thought?
Because that mindset is perfect for your business, though,
because this business is ever changing all the time.
And there's competition coming from every direction.
So much of it is kind of up in the air.
And you have to invent a future where 100 variables are included.
And there's no way you could be right on all of them.
And so you have to be-- you have to surf.
Wow, that's a good way to put it.
You have to surf.
Yeah.
You're surfing waves of technology and innovation.
That's right.
You can't predict the waves.
You got to deal with the ones you have.
But skill matters.
And I've been doing this for 30--
I'm the longest running tech CEO in the world.
Is that true?
Congratulations.
That's amazing.
And people ask me how.
One, don't get fired.
[LAUGHTER]
That'll stop it short in a heartbeat.
And then two, don't get bored.
Yeah.
Well, how do you maintain your enthusiasm?
The honest truth is it's not always enthusiasm.
Sometimes it's enthusiasm.
Sometimes it's just good old-fashioned fear.
And then sometimes a healthy dose of frustration.
Whatever keeps you moving.
Yeah, just all the emotions.
I think CEOs, we have all the emotions, right?
And so it's probably jacked up to the maximum,
because you're kind of feeling on behalf of the whole company.
I'm feeling on behalf of everybody at the same time.
And it all kind of encapsulates into one person.
And so I have to be mindful of the past.
I have to be mindful of the present.
I've got to be mindful of the future.
And it's not without emotion.
It's not just a job.
Let's just put it that way.
No, it doesn't seem like it at all.
I would imagine one of the more difficult aspects of your job
currently now that the company is massively successful
is anticipating where technology is headed
and where the applications are going to be.
So how do you try to map that out?
Yeah, there's a whole bunch of ways.
And it takes a whole bunch of things.
But let me just start.
You have to be surrounded by amazing people.
And Nvidia is now--
if you look at the large tech companies in the world today,
most of them have a business in advertising
or social media or content distribution.
And at the core of it is really fundamental computer
science.
And so the company's business is not computers.
The company's business is not technology.
Technology drives the company.
And Nvidia is the only large company in the world
whose only business is technology.
We don't advertise. The only way that we make money
is to create amazing technology and sell it.
And so to be Nvidia today, the number one thing
is you're surrounded by the finest computer scientists
in the world.
And that's my gift.
My gift is that we've created a company culture,
a condition by which the world's greatest computer
scientists want to be part of it.
Because they get to do their life's work
and create the next thing.
Because that's what they want to do.
Because maybe they're not--
they don't want to be in service of another business.
They want to be in service of the technology itself.
And we're the largest firm of its kind
in the history of the world.
Wow.
I know.
It's pretty amazing.
Wow.
And so one, we have got a great condition.
We have a great culture.
We have great people.
And now the question is, how do you systematically
see the future, stay alert to it,
and reduce the likelihood of missing something
or being wrong?
And so there's a lot of different ways you could do that.
For example, we have great partnerships.
Fundamental research.
We have a great research lab, one
of the largest industrial research labs in the world today.
And we partner with a whole bunch of universities
and other scientists.
We do a lot of open collaboration.
And so I'm constantly working with researchers outside
the company.
We have the benefit of having amazing customers.
And so I have the benefit of working with Elon and others
in the industry.
And we have the benefit of being the only pure-play
technology company that can serve consumer internet,
industrial manufacturing, scientific computing,
health care, financial services. All the industries
that we're in, they're all signals to me.
And so they all have mathematicians and scientists.
And so I have the benefit now
of a radar system that is the broadest of any company
in the world, working across every single industry
from agriculture to energy to video games.
And so we have this vantage point:
one, doing fundamental research ourselves,
and then two, working with all the great researchers,
working with all the great industries.
The feedback system is incredible.
And then finally, you just have to have a culture
of staying super alert.
There's no easy way of being alert,
except for paying attention.
I haven't found a single way of being able to stay alert
without paying attention.
And so I probably read several thousand emails a day.
How?
How do you have the time for that?
I wake up early. This morning,
I was up at four o'clock.
How much do you sleep?
Six, seven hours?
Yeah.
And then you're up at four, reading emails for a few hours
before you get going.
That's right.
Wow.
Every day.
Every single day, not one day missed.
Including Thanksgiving and Christmas.
Do you ever take a vacation?
Yeah, but they're--
My definition of a vacation is when I'm with my family.
And so if I'm with my family, I'm very happy.
I don't care where we are.
And you don't work then?
Or do you work in a little?
No, no, I work a lot.
Even if you go on a trip somewhere--
Oh, still working.
Oh, sure.
Oh, sure.
Wow, every day, every day.
Well, my kids work every day.
You make me tired just saying this.
My kids work every day.
Both of my kids work every day.
They work every day.
Wow.
Yeah, I'm very lucky.
Wow.
It used to be just me working every day.
Now we have three people working every day.
And they want to work with me every day.
And so it's a lot of work.
Well, you've obviously imparted that ethic into them.
They work incredibly hard.
I mean, it's unbelievable.
But my parents work incredibly hard.
Yeah, I was born with the work gene, the suffering gene.
Well, listen, man, it has paid off. It's a crazy story.
I mean, it's really an amazing origin story.
It really-- I mean, it has to be kind of surreal
to be in the position that you're in now
when you look back at how many times
it could have fallen apart, and the humble beginnings.
But Joe, this is a great country.
And I'm an immigrant.
My parents sent my older brother and I here first.
We were in Thailand.
I was born in Taiwan, but my dad had a job in Thailand.
He was a chemical and instrumentation engineer, incredible engineer.
And his job was to go start an oil refinery.
And so we moved to Thailand, lived in Bangkok.
And in 1973, 1974, time frame, you know how Thailand, every so often,
they would just have a coup.
The military would have an uprising.
And all of a sudden, one day there were tanks and soldiers
in the streets.
And my parents thought, you know, probably
isn't safe for the kids to be here.
And so they contacted my uncle.
My uncle lives in Tacoma, Washington.
And we had never met him.
And my parents sent us to him.
How old were you?
I was about to turn nine.
And my older brother almost turned 11.
And so the two of us came to the United States.
And we stayed with our uncle for a little bit
while he looked for a school for us.
And my parents didn't have very much money.
And they never had been to the United States.
My father was-- I'll tell you that story in a second.
And so my uncle found a school that would accept foreign students
and was affordable enough for my parents.
And that school turned out to be in Oneida, Kentucky.
Clay County, Kentucky, the epicenter
of the opioid crisis today.
Coal country.
Clay County, Kentucky, was the poorest county in America
when I showed up.
It is the poorest county in America today.
And so we went to the school.
It's a great school.
Oneida Baptist Institute, in a town of a few hundred.
I think it was 600 at the time that we showed up.
No traffic light.
And I think it's 600 today.
It's kind of an amazing feat, actually.
The ability to hold your population
at 600 people is quite a magical thing.
However, they did it.
And so the school had a mission of being
an open school for any children who would like to come.
And what that basically means is that if you're a troubled student,
if you have a troubled family, whatever your background,
you're welcome to come to Oneida Baptist Institute,
including international kids who
would like to stay there.
Did you speak English at the time?
OK, yeah.
And so we showed up.
My first thought was, gosh, there
are a lot of cigarette butts on the ground.
A hundred percent of the kids smoked.
So right away, you know this is not a normal school.
Nine-year-olds?
No, I was the youngest kid.
OK, 11-year-olds.
My roommate was 17 years old.
Wow.
Yeah, he just turned 17.
And he was jacked.
And I don't know where he is now.
I know his name, but I don't know where he is now.
But anyways, that night, we got--
and the second thing I noticed when you
walk into your dorm room is that there are no drawers
and no closet doors, just like a prison.
And there are no locks so that people could check up on you.
And so I go into my room and he's 17.
And get ready for bed.
And he had all this tape all over his body.
And he turned out he was in a knife fight.
And he's been stabbed all over his body.
And these were just fresh ones.
And the other kids were hurt much worse.
And so he was my roommate, the toughest kid in school.
And I was the youngest kid in school.
It was a junior high, but they took me anyways.
Because if I walked about a mile across the Kentucky
River swinging bridge, the other side had a grade school
that I could go to.
And so I'd go to that school, and I'd come back,
and then I'd stay in the dorm.
And so basically, Oneida Baptist Institute
was my dorm when I went to this other school.
My older brother went to the junior high.
And so we were there for a couple of years.
Every kid had chores.
My older brother's chore was to work in the tobacco farm.
So they raised tobacco so they could raise some extra money
for the school, kind of like a penitentiary.
Wow.
And my job was just to clean the dorm.
And so I was nine years old.
I was cleaning toilets.
And for a dorm of 100 boys, I cleaned more bathrooms
than anybody.
And I just swished there.
But it was a little bit more careful.
I know.
[LAUGHTER]
But anyways, I was the youngest kid in school.
My memories of it was really good.
But it was a tough town.
It sounds like--
Yeah, town kids, they all carried-- everybody had knives.
Everybody had knives.
Everybody smoked.
Everybody had a zippo lighter.
I smoked for a week.
Did you?
Oh, yeah.
How old were you?
I was nine.
When you were nine? You were nine, and you tried smoking?
Yeah, I got myself a packet of cigarettes.
Everybody else did.
Did you get sick?
No, I got used to it.
And I learned how to blow smoke rings.
And, you know, breathe it out my nose,
take it in and out through my nose.
I mean, there was all the different things that you learned.
Yeah.
At nine.
Yeah.
Why?
You just did it to fit in or--
Yeah, because everybody else did it.
Right.
Yeah.
And then I did it for a couple of weeks, I guess.
And I'd just rather have--
I had a quarter, you know?
I had a quarter a month or something like that.
I'd just rather buy popsicles and fudgesicles with it.
I was nine, you know?
Right.
I chose the better path.
Wow.
That was our school.
And then my parents came to the United States two years later.
And we met them in Tacoma, Washington.
That's wild.
It was a really crazy experience.
What a strange formative experience.
Yeah, tough kids.
Thailand to one of the poorest places in America.
Or if not the poorest, as a nine-year-old.
That was your first experience, with your brother.
Wow.
Yeah.
Yeah.
You know, I still remember.
And what breaks my heart-- probably the only thing that
really breaks my heart about that experience was--
so we didn't have enough money to make international phone
calls every week.
And so my parents gave us this tape deck, this Aiwa tape deck,
and a tape.
And so every month, my older brother Jeff and I
would sit in front of that tape deck.
The two of us would just tell them what we did the whole month.
And we would send that tape by mail.
And my parents would take that tape
and record back on top of it and send it back to us.
Can you imagine? For two years.
If only that tape still existed, of these two kids just
describing their first experience with the United States.
I remember telling my parents that I joined a swim team.
And my roommate was really buff.
And so every day, we spent a lot of time in the gym.
And so every night, 100 push ups, 100 sit-ups,
every day in the gym.
So I was nine years old.
I was pretty buff.
And I'm pretty fit.
And so I joined the soccer team.
I joined the swim team.
Because if you join the team, they take you to meets.
And then afterwards, you get to go to a nice restaurant.
And that nice restaurant was McDonald's.
And I recorded this thing.
I said, mom and dad, we went to the most amazing restaurant
today.
This whole place is lit up.
It's like the future.
And the food comes in a box.
The food is incredible.
The hamburger is incredible.
It's McDonald's.
But anyhow, wouldn't that be amazing?
Oh my God.
Two years.
You're recording?
Yeah, two years.
What a crazy connection to your parents, too.
They're just sending a tape and them sending you in back.
And it's the only way you're communicating for two years.
Yeah.
Wow.
Yeah.
Now, my parents are incredible, actually.
They just-- they grew up really poor.
And when they came to the United States,
they had almost no money.
Probably one of the most impactful memories I have is when they came
and we were staying in an apartment complex.
And they had just rented-- back then--
I guess people still do-- rented a bunch of furniture.
We were messing around.
And we bumped into the coffee table and crushed it.
It was made out of particle wood.
We crushed it.
And I just still remember the look of my mom's face,
because they didn't have any money.
And she didn't know how she was going to pay it back.
But anyhow, that kind of tells you how hard it was for them
to come here, but they left everything behind.
And all they had was their suitcase
and the money they had in their pocket.
And they came to the United States.
How old were they?
They were in their 40s.
Yeah, late 30s.
Pursue the American dream.
This is the American dream.
I'm the first generation of the American dream.
Wow.
Yeah, it's hard not to love this country.
It's hard not to be romantic about this country.
That is a romantic story.
That's an amazing story.
Yeah.
And my dad found his job literally in the newspaper.
The ads.
And he calls people.
Got a job.
What did he do?
He was a consulting engineer in a consulting firm.
And they helped people build oil refineries, paper mills,
and fabs.
And that's what he did.
He's really good at factory design.
Instrumentation engineer.
And so he's brilliant at that.
And so he did that.
And my mom worked as a maid.
And they found a way to raise us.
Wow.
That's an incredible story, Jensen.
It really is.
All of it.
From your childhood to the perils of Nvidia almost failing.
It's really incredible, man.
It's a great story.
Yeah.
I've lived a great life.
You really have.
And it's a great story for other people to hear, too.
It really is.
You don't have to go to Ivy League schools to succeed.
This country creates opportunities.
Has opportunities for all of us.
You do have to strive.
You have to claw your way here.
But if you put in the work, you can succeed.
Nobody does it alone.
There's a lot of luck and a lot of good decision-making.
And the good graces of others.
Yes.
That's really important.
Yeah.
You and I spoke about two people who are very dear to me.
But the list goes on.
The people at Nvidia who have helped me,
many friends that are on the board, the decisions they made,
them giving me the opportunity.
Like when we were inventing this new computing approach,
I tanked our stock price.
Because we added this thing called CUDA to the chip.
We had this big idea.
We added this thing called CUDA to the chip.
But nobody paid for it.
But our cost doubled.
And so we had this graphics chip company.
And we invented GPUs.
We invented programmable shaders.
We invented everything in modern computer graphics.
We invented real-time ray tracing.
That's why it went from GTX to RTX.
We invented all this stuff.
But every time we invented something,
the market doesn't know how to appreciate it.
But the cost went way up.
And in the case of CUDA that enabled AI,
the cost increased a lot.
But we really believed it.
And so if you believe in that future,
and you don't do anything about it,
you're going to regret it for your life.
And so I always tell the team, do you believe this or not?
And if you believe it, grounded on first principles,
not random hearsay--
if we believe it, we owe it to ourselves to go pursue it.
If we're the right people to go do it,
if it's really, really hard to do, it's worth doing,
and we believe it, let's go pursue it.
While we pursued it, we launched the product.
Nobody knew-- it was exactly like
when I launched GTX 1, and the entire audience
was complete silence.
When I launched CUDA, the audience was complete silence.
No customer wanted it, nobody asked for it,
nobody understood it, and Nvidia was a public company.
What year was this?
This was-- let's see, 2006, almost 20 years ago.
2005?
Wow.
Our stock price went, phew.
I think our valuation went down to like $2 or $3 billion
from about $12 billion or something like that.
I crushed it in a very bad way.
Yeah.
What is it now, though?
Yeah, it's higher.
Very humble.
It's higher.
But it changed the world.
Yeah.
That invention changed the world.
It's an incredible story, Jensen.
It really is.
Thank you.
Like your story, it's incredible.
My story's not as incredible.
My story's more weird, much more fortuitous and weird.
OK, what are the three most important milestones
that led to here?
That's a good question.
What was step one?
I think step one was seeing other people do it.
Step one was in the initial days of podcasting.
Like in 2009, when I started podcasting,
it had only been around for a couple of years.
The first was Adam Curry, my good friend, who was the pod father.
He invented podcasting.
And then I remember Adam Carolla had a show.
Because he had a radio show.
His radio show got canceled.
And so he decided to just do the same show,
but do it on the internet.
And that was pretty revolutionary.
Nobody was doing that.
And then there was the experience
that I had had doing different morning radio shows,
like Opie and Anthony in particular.
Because it was fun.
And we would just get together with a bunch of comedians.
I'd be on the show with like three or four other guys
that I knew.
And I would always just look forward to it. It was such a good time.
And I said, God, I miss doing that.
It's so fun to do that.
I wish I could do something like that.
And then I saw Tom Green set up.
Tom Green had a set up in his house.
And he essentially turned his entire house
into a television studio.
And he did an internet show from his living room.
He had servers in his house and cables everywhere;
you had to step over cables.
This is like 2007.
I'm like, Tom, this is nuts.
And I'm like, you got to figure out a way
to make money from this.
I wish everybody on the internet could see your setup.
It's nuts.
I just wanted you guys to know that.
It's not just this.
So that was the beginning of it.
I was just seeing other people do it.
And then saying, all right, let's just try it.
And then so the beginning days, we just did it on a laptop.
Had a laptop with a webcam and just messed around.
Had a bunch of comedians come in.
We were just talking, joke around.
Then I did it like once a week.
And then I started doing it twice a week.
And then all of a sudden I was doing it for a year.
And then I was doing it for two years.
Then I was like, oh, it's starting
to get a lot of viewers, a lot of listeners.
And then I just kept doing it.
It's all it is.
I just kept doing it because I enjoyed doing it.
Was there any setback?
No.
No, there's never really a setback.
Really?
No, there must have been.
Or is yours the same kind of story?
You're just resilient.
Or you're just tough.
No, no, no, no.
Wasn't tough or hard.
It was just interesting.
So I just, the whole--
You were never once punched in the face.
No, not in the show.
No, not really.
Not doing the show.
You never did something that got big blowback?
No, not really.
No, it all just kept growing.
It kept growing.
And the thing that stayed the same from the beginning to now
is that I enjoy talking to people.
I've always enjoyed talking to interesting people.
I could tell even just when we walked in,
the way you interacted with everybody, not just me.
Yeah, people are cool.
Yeah, that's cool.
You know, it's an amazing gift
to be able to have so many conversations
with so many interesting people, because it changes
the way you see the world, because you see the world
through so many different people's eyes.
And you have so many different people
have different perspectives and different opinions
and different philosophies and different life stories.
And it's an incredibly enriching and educating
experience having so many conversations
with so many amazing people.
And that's all I started doing.
And that's all I do now.
Even now, when I book the show, I do it on my phone.
And I basically go through this giant list of emails
of all the people that want to be on the show
or that requested to be on the show.
And then I factor in another list that I have
of people that I would like to get on the show,
that I'm interested in, and I just map it out.
And that's it.
And I go, ooh, I'd like to talk to him.
If it wasn't for President Trump,
I wouldn't have been bumped up on that list.
(laughing)
Yeah, I wanted to talk to you already.
I just think what you're doing is very fascinating.
I mean, how would I not want to talk to you?
And then today, you proved to be absolutely the right decision.
Well, listen, it's strange to be an immigrant one day
going to Oneida Baptist Institute
with the students that were there.
And then here, Nvidia is one of the most
consequential companies in the history of companies.
It is a crazy story.
- It has to be.
- Yeah, that journey is-- it's very humbling.
And I'm very grateful.
- It's pretty amazing, man.
- Surrounded by amazing people.
- You're very fortunate, and you've also,
you seem very happy, and you seem like you're 100%
on the right path in this life, you know?
- You know, everybody says you must love your job,
not every day.
(laughing)
That's honest.
- You know what I'm saying?
- That's the beauty of everything.
- Yeah.
- 'Cause there's ups and downs.
- That's right, it's never just like this giant dopamine high.
- We leave this impression.
Here's an impression I don't think is healthy.
People who are successful leave the impression often
that our job gives us great joy.
I think largely it does.
That we're passionate about our work.
And that passion translates to, it's just so much fun.
I think it largely is.
But it distracts from, in fact,
a lot of success comes from really, really hard work.
- Yes.
- There's long periods of suffering and loneliness
and uncertainty and fear and embarrassment and humiliation.
All of the feelings that we least love
come with creating something from the ground up.
And Elon will tell you something similar.
Very difficult to invent something new.
And people don't believe you all the time.
You're humiliated often, disbelieved, most of the time.
And so people forget that part of success
and I don't think it's healthy.
I think it's good that we pass that forward
and let people know that it's just part of the journey.
- Yes.
And suffering is part of the journey.
- You will appreciate it so much.
These horrible feelings that you have
when things are not going so well,
you will appreciate it so much more when they do go well.
- Deeply grateful.
- Yeah.
Deep pride, incredible pride, incredible gratefulness
and surely incredible memories.
- Absolutely.
- Jensen, thank you so much for being here.
This was really fun.
I've really enjoyed it and your story is just absolutely
incredible and very inspirational.
And I think it really is the American dream.
It is the American dream.
- It really is.
- Thank you so much.
- Thank you.
- Bye everybody.
(upbeat music)
(upbeat music)