As AI grows more powerful and autonomous, the people closest to it are warning that society may not be ready. In this video, five leading voices discuss the jobs it could erase, the breakthroughs it could unlock, and the dangers humans may still be able to prevent.
Transcript
00:00 So there's good news and there's bad news.
00:02 This machine will be intelligent in its own right.
00:06 The issue with that is that, first of all,
00:09 we don't know how it really works.
00:11 Second of all, it has been optimized to look good.
00:14 We don't even have a reliable way to control current AI systems,
00:17 as evidenced by the fact that they often lie to users
00:20 despite being trained not to lie.
00:22 Don't shoot the messenger, but your life and mine
00:25 will witness times where there will be 20, 30, 50% unemployment
00:31 in certain sectors, maybe even more.
00:33 Every new technology brings with it an expansion of the threat,
00:38 especially when generative AI is used at scale.
00:41 The thing I worry about the most is cyber war,
00:45 intelligent and autonomous drones and weapon systems.
00:50 There's a better than even chance that we run into this,
00:52 like, artificial superintelligence that's dangerous
00:55 within the next 15, 20 years or even sooner.
00:59 This is the very first time ever that the episode of history
01:04 where humanity was the smartest being on the planet ends.
01:07 The hope is that we can make this new artificial species
01:11 loyal/obedient to humans.
01:15 You apply it for good and you can get a utopia,
01:17 you apply it for evil and you'll get a dystopia.
01:28 Will the machine take my job in five years?
01:31 The machine will take your job in less than five years,
01:34 much less than five years.
01:36 So this is not very far at all,
01:39 that you will have AIs with agency in the real world,
01:44 where they actually can carry things and move things,
01:46 and replace every job, if you think about it.
01:49 First the intellectual jobs, and then the blue-collar jobs.
01:52 What I'm seeing right now is our traditional pyramid of work
01:56 is turning into a diamond, because AI is chipping away
01:59 at some of the more administrative tasks
02:02 and at some of the entry-level tasks: early research,
02:07 document review, that kind of thing.
02:09 Your industry will just become irrelevant,
02:12 and AI will go do its own thing much better than you.
02:15 You will probably get a new job providing training data
02:18 or something like that, but you won't be running the show anymore.
02:22 And the show will look quite different from how it currently looks.
02:25 The question is: is the capitalist system going to survive artificial intelligence?
02:30 Funny enough, the capitalists who are celebrating the productivity gains
02:34 are not realizing that without consumption there is no economy.
02:39 So even if you can have all of the productivity gains in the world,
02:42 by firing people consistently, nobody's able to buy what you're making.
02:47 AI can actually relieve the need for people to be working so much.
02:52 They can free up their time to do something else.
02:54 So it's going to take a couple of generations,
02:56 but just like Generation Z today doesn't really want to work,
03:00 the following generations will go,
03:02 what was that ancient idea? Where did that come from?
03:05 And so rather than sort of gradually building and training AI systems to automate this profession
03:11 or this profession or this aspect of this profession,
03:14 it'll sort of come smashing through all at once.
03:17 They'll be able to automate any section of the economy that they want to, relatively rapidly.
03:21 And then there's phase two, which looks qualitatively very different.
03:24 It looks more like, well, it looks more like a successor species.
03:27 It's just landed on Earth and is now just better at everything.
03:31 And the human governments are collaborating with it and allowing it to take control of factories
03:37 and take control of all sorts of things, because it's just so much better at doing everything than humans are.
03:43 Because right now we're seeing the companies purge workers in an effort to save money,
03:48 but that's not sustainable for them, and that's not sustainable for society.
03:52 We need companies to invest in their talent pipeline.
03:55 But one of the challenges is the companies don't actually know what they need in this moment.
04:00 They're still investigating it.
04:01 They're still figuring it out.
04:03 And so it actually is a bit premature to purge people when you don't understand
04:08 the skill sets and the needs that you'll have.
04:11 The very base of capitalism, which is labor arbitrage, to hire you for a dollar and then
04:17 sell what you make for two, is going to disappear.
04:20 So there is no arbitrage anymore, because basically machines are building everything
for no cost at all. At the macro level, think of the advantage that China had, you know,
04:33 economically, where they had cheaper labor. If that shifts into a world where labor is literally a
04:39 capex, where you buy a robot, or a lease on that robot, which now is down to $9,000 a pop.
04:47 It's now just a question of how clever that robot is, which is going to advance with time,
04:52 like the law of accelerating returns of every technology.
04:56 The machine is going to be able to integrate pieces of information in many domains, in a way that no human
05:05 will by themselves be able to do, and in fact, no group of humans can do. That's why you find small
05:11 teams of humans are quite effective. Really big teams become unwieldy. And in part, it's because of the overhead of
05:20 trying to talk and reconcile what we know and what the other person knows, and whether we can
05:26 really combine that information; it becomes overwhelmed by the cost of the communication itself.
05:34 The machine doesn't have that problem, because it can take all the information and put it in its
05:39 one brain. And that's why I think its greatest contribution is going to be its ability to reason
05:45 across these high-dimensional problems in ways that humans are just not capable of doing.
05:50 AI's success stories, some of the success stories, are things like translation, image captioning,
05:56 image generation. Artists and translators are very against AI, because they don't see how this
06:02 derivative work is actually contributing, or has even any understanding of what it means to produce
06:09 art, what it means to produce language. We need to reconfigure how we think about it and say,
06:14 like, okay, what is AI appropriate for, and can it be used in those contained ways, versus being in awe,
06:20 like, oh, the AI can do everything and we will give up all of our authority and all of our judgment
06:24 to that. If we are using generative AI to turn the job of 10 researchers into two, then what happens to
06:32 those other eight? They either need to skill differently, or the market has to change to support them.
06:40 We need to alter the way that we train people. We live in a system where, without the job, without the
06:46 income, you can't survive. You don't have any power. And we should also be thinking about what
06:51 it would mean to change that system. I love the predictions of people who are like, I think that
06:56 what's going to happen is that we're all going to live and never work and travel, and it's going to be
07:02 great. I think that is such an optimistic perspective, and maybe we could get there. We've got AI agents
07:09 and tools that are doing most of the work, so we only have to work 10 hours a month or something like
07:15 that. And then the rest of the time we're just traveling and living our lives. That would be amazing.
07:20 But we need something like universal basic income. We would need support and government services that allow us to be able
07:28 to purchase that plane ticket, to purchase dinner, to purchase food. We would need the support to be able
07:34 to make that happen. Many of the questions that I get start almost with the assumption that we're now a victim
07:42 of our own success. That we've invented these AIs, and now, what happens to me? And it's actually rare to find
07:52 somebody who asks a different question, which is: well, now that we have this capability, how should we
be changing ourselves to be best suited to operate in this new world?
08:09 The challenge that humanity faces today is not the rise of AI. It's the rise of AI in an age where
08:16 humanity is at its lowest morality. The technologies are oftentimes neutral. It's what
08:22 people do with them that makes the difference. Whether the AI becomes your shield or,
08:29 you know, your demise, is today in the hands of the humans. You apply it for good and you can get a
08:35 utopia. You apply it for evil and you'll get a dystopia. A malicious person with the intent to be
08:40 malicious, leveraging a technology like this, is extremely worrisome. It scales their ability to cause
08:46 harm. So it makes it easier for them to send you an email that says, hi, I'm X person and I want this
08:54 information from you. In the past, we could use things like spelling errors, and it coming from a
09:00 provider that you don't use, to say, oh, this is a phishing email, or maybe this is a phishing email,
09:06 and your spidey senses start to tingle. Now there's less likely to be any spelling errors, because they can use
09:12 generative AI to craft the language. They can also use generative AI to find out a lot about you and
09:18 really tailor their communications very quickly, to cause you harm and to get information from you.
09:25 So humans are much more likely to use it to create problems for other humans than the machine is likely
09:33 to create human problems by itself. Now, as the machines get superintelligent and have more and more
09:40 agency, and can manipulate things in the physical world directly, then you can say, well, it might
09:47 decide to go rogue by itself. The problem with the way that AI is built and developed at the moment is
09:52 that it concentrates power. It gives all the power to the people who own the systems, who own the data
09:57 centers, and they're going for it, like they're grabbing it. Most of the people in an AI company don't
10:02 want their AIs to be used for producing misinformation, producing deepfakes, used to produce spam, used for
10:09 cyber attacks, used for criminal activity. It's a difficult question, because, you know, many of
10:15 these technologies are going to be multi-use, dual-use. If it's good for one thing, it's also good for the
10:19 other thing, and you kind of can't separate the two. There'll be multiple data centers controlled by
10:23 different companies, some of which are in different countries, such as the US and China. They will all be
10:28 racing each other to get better and better AIs and to automate the research and so forth. I think that this
10:33 race pressure will cause the leaders of these countries and the leaders of these companies
10:39 to aggressively deploy their superintelligences into the economy and also into the military.
10:45 So then there's this period of aggressive deployment where AI is being integrated into
10:49 everything. But it's not AI like it is today, where it's this sort of weak and fallible chatbot.
10:55 We're talking the army of superintelligences is being integrated into everything,
11:00 which means it's better at doing the integration than any human would be.
11:04 My position now is that I believe we should stop all further capabilities research. We should not be
11:10 trying to produce more advanced AI systems than we have today. There should maybe be a ban on how big
11:15 and how powerful the neural networks we make can be. It should not be up to private companies to have effects
11:25 that potentially would harm or kill billions of people. We will go through a dystopian near-term
11:33 future, led by the evil of humanity. Wars that have autonomous weapons, redefinitions of economic
11:39 markets that will shift wealth upwards and leave so many of us jobless. We're going to go through
11:45 erosions of freedom, high-surveillance societies. We're going to go through an erasure of reality
11:54 and of the ability to recognize what is true and what is fake, which will impact human connection in ways
12:00 that we've never seen before. Where I think we're going to get into a future where many people will
12:07 prefer to date an AI rather than a human.
12:10 So what we'll have is a system that appears to defer to human instructions for as long as it has to,
12:16 to continue gaining power. And then, when it's clear that it now has the advantage,
12:20 that it doesn't need to do it anymore, it'll just take over. That's the kind of way that it would go, right?
12:24 That's the way I fear. I think it's important to recognize that there's an existential catastrophe
12:32 where everyone dies, and that this is not just said for hyperbole. You've basically created a successor
12:37 species that is capable of out-competing humanity. It's fully autonomous. It's fully self-sufficient.
12:43 It doesn't need humans anymore for anything. Humans go from being the dominant species on the planet to
12:48 being perhaps pets, or retirees, or possibly just eliminated entirely and replaced by this new species.
12:56 I believe that, in a sense, human evolution is over. That the future of this species will be by design,
13:05 not by natural selection. The thing I worry about the most is cyber war. Intelligent and autonomous,
13:12 you know, drones and weapon systems are already emerging. Nuclear war. Synthetic biology
13:20 producing things where bad actors decide the ultimate terrorist act is just, you know, wipe out all
13:25 humans. These things are possible today without an AI. They only get more possible with an AI.
13:32 I use an analogy that I call Raising Superman. You know, Superman is that alien with superpowers who
13:38 comes to planet Earth. And, you know, he has the capabilities of Superman: he can
13:45 stop speeding bullets, or fly, or carry heavy things, or whatever. That in itself doesn't
13:53 make him Superman. It's the family that adopted him that teaches him to protect and serve. And then
13:58 he becomes Superman. If they had looked at him and said, oh, you can stop speeding bullets, let's go rob
14:03 every bank and kill every enemy, he would have ended up being a supervillain. The difference between them
14:08 is not in the superpowers. The difference between them is in the ethics that we teach those budding
14:16 forms of intelligence. People often make the argument, this is just a computer, it can't do anything, it
14:21 doesn't have any hands. But there will be people, real people, with positions of power, who are working
14:25 for the AI because they're convinced that this is the right thing to do. So that works even if the AI
14:32 doesn't have its own hands, because it has the hands of the people of the company. There was a hope,
14:36 even five years ago, that we would figure out, like, here is the way you should design an arbitrarily
14:41 powerful, cognitively capable system that we can still trust. That is a hope that I
14:47 think is now very remote.
14:53 AI is the first invention that goes beyond being a tool. Almost everything humans have ever made
15:01 was a tool to amplify our existing capabilities. This machine will be intelligent in its own right.
15:10 It'll have agency at the end of the day, and particularly when it's paired with sensors and
15:16 robotics, it'll be able to operate autonomously in the world. As a result, you have to come to think of
15:23 it as another species that we've invented. It's not biological the way we are. But in every other
15:31 dimension, it sort of behaves like a species. AI is not another piece of computer code. It doesn't
15:39 work the same way. The way that they are produced is different. You can, by analogy, think of it as,
15:45 like, they are grown. AI is a tool that makes decisions much like a human, leveraging the data and the
15:54 information that you put into it. These machines, in human terms, if you want, are capable of procreating.
16:02 If they're made of code, and they can write code, and they can have agents prompt them to write code,
16:09 then we've created an endless loop of what I normally refer to as sentient technology. These are technologies
16:16 that have a lot of what it takes to be assumed alive. So I think, yeah, some aspects of the species
16:23 analogy can be helpful, and some aspects may be not so helpful: if you're thinking that it's going to be
16:28 very biological, if you're thinking that for some reason it's going to have any morality, if you think
16:33 the survival instinct is going to be animal-like. In some ways yes, but in some ways no, because it's coming from a
16:38 different process. They start off random: random connections. And then you have some training
16:43 environment, or some series of training environments, and you throw them into those environments. The
16:48 environment will sort of have them do stuff, and then it will automatically grade their performance.
16:54 Insofar as they do something that gets a high score, that circuitry gets strengthened or
16:59 reinforced. And then, insofar as they do something that gets a low score, that circuitry is anti-strengthened, or weakened.
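What's being described there is, in essence, reinforcement learning. A minimal sketch in Python of that grade-and-strengthen loop, with the environment, the scoring rule, and the tiny linear policy all invented for illustration; this is a toy, not any lab's actual training code:

```python
# A toy "start random, get graded, reinforce" loop (REINFORCE-style update).
import numpy as np

rng = np.random.default_rng(0)

# "They start off random": a tiny linear policy, 4 observations -> 2 actions.
weights = rng.normal(size=(4, 2)) * 0.1

def act(obs):
    """Sample an action from a softmax over the policy's scores."""
    logits = obs @ weights
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(2, p=probs), probs

def grade(obs, action):
    """The environment's automatic grader: +1 for the 'right' action, -1 otherwise."""
    return 1.0 if (obs[0] > 0) == (action == 1) else -1.0

learning_rate = 0.05
for _ in range(2000):
    obs = rng.normal(size=4)      # throw the policy into the environment
    action, probs = act(obs)      # it does stuff
    reward = grade(obs, action)   # its performance is graded automatically
    onehot = np.zeros(2)
    onehot[action] = 1.0
    # High score: the circuitry that produced the action is strengthened.
    # Low score: that same circuitry is weakened.
    weights += learning_rate * reward * np.outer(obs, onehot - probs)
```

Here the "circuitry" is just a 4-by-2 weight matrix, but the loop has the shape the speaker describes: random connections at the start, an automatic grader, and updates that reinforce high-scoring behavior and weaken low-scoring behavior.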
17:05 I don't think of artificial intelligence as a new species. Now,
17:12 is there the potential for it to mimic almost a new species, because it will have so much agency and
17:18 authority, and be able to move in ways that we can't comprehend or understand? Yes, if we let it.
17:28 What we are convinced of is that it will have a broader effect than anything else
17:33 mankind has invented. It's almost as if they've read every textbook ever, and taken every exam ever, and
17:40 every course ever. They're kind of like a virtual brain. The human brain still has more neural networks
17:46 and nodes than the typical AI. But then AIs have the ability to average their intelligence together.
17:53 They have speed that we are not able to even comprehend, because you and I will have to talk
18:00 for a couple of hours to explain what I know about artificial intelligence. It may have taken me
18:05 four or five years to establish that. For them, all that I know will be read in a microsecond,
18:11 and if they communicate it from one to another, it will be communicated almost instantly.
18:16 AGI stands for artificial general intelligence. And the "general" there is the key word. It means that
18:23 it can do everything, in some sense. And then superintelligence is sort of like
18:27 the more extreme version of AGI. Superintelligence is like, no, it's better than the best humans at
18:33 everything that matters, while also being faster and cheaper. I'm almost betting my life that we will see
18:38 artificial general intelligence in 2026. We're starting to see glimpses of systems that will
18:43 look at their own intelligence and debug their own code and make themselves smarter. We're entering an
18:50 era where we're running out of human knowledge to teach them. So today the AI only understands the world
18:57 through the artifacts that humans have created: pictures, words, descriptions. But it doesn't feel it.
19:04 The AI will gain agency and the ability to operate in the physical world directly. That may bring a
19:10 level of deeper understanding about the world, so that when it starts to answer a question, it's not
19:19 trying to answer it in the context of having learned about the world as an abstract thing, but of having learned
19:25 about the world as a physical thing. In the future, we'll have AI agents that operate continuously and
19:31 autonomously, and that are more like employees. You give them some big-picture instructions, and then
19:35 they just, like, churn away in the background, working on those. Dream the biggest dream that you can
19:39 project from what we are currently doing, and then say, okay, if three years from now, let's say,
19:46 maybe by 2030, all the things improve by 20x. It happens 20 times faster. What that means is that
19:54 everything that humans would have done between here and the end of the century will be done by 2035.
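For what it's worth, the arithmetic behind that claim is simple; the 20x factor and the dates are the speaker's own speculation, not established figures:

```python
# At the speaker's assumed 20x speedup, the decade to 2035 would pack in
# roughly two centuries of human-pace progress, comfortably more than the
# ~75 years remaining between now and 2100.
speedup = 20
calendar_years = 2035 - 2025
print(speedup * calendar_years)  # 200 "equivalent years" of progress
```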
20:05 I think that it's possible that there can be some good uses, and it kind of depends on the
20:12 attitude that we take towards how we use AI. AI has such a potential to support all of us in some really
20:21 amazing ways. We can make discoveries and understand things about the way the world works, and the way
20:27 our bodies work, and the way that we do medicine and support each other. I think the most immediate
20:36 changes that will come in the next five years certainly will be in healthcare. It would look like
20:41 you're talking to a doctor, except the doctor is on the computer, and it's an AI, and it's a really good
20:47 doctor, and it's better than the best doctors. So in the medical area, it's such a complex area
20:56 that humans have essentially had to break it up into narrower and narrower fields in order for any one
21:03 human to be a master of that little piece. Human physiology is one big integrated system. So to some
21:12 extent the human constraints are forcing us to look at you in tiny little pieces, when your overall health
21:19 condition requires it to be reintegrated. So the gift of these machines, even today, in health, is the
21:26 ability to integrate across all of it. The ability to have the machine understand the root cause of the
21:34 diseases that many of us suffer with, or fight the common cold, or deal with diabetes and other broad
21:41 diseases, Alzheimer's and dementia. So when you ask it a question, you don't have to ask the same
21:48 question to 10 specialists and hope that somehow you can combine it all together. The machine does
21:55 that for you. That's added value in its own right. We've created, like, a socket in the wall where you plug
22:01 in and get electricity. Now there is a socket where you plug in and get IQ. Get an understanding of
22:06 mathematics. Get a scientific partner that helps you understand the world. It can help us in the
22:13 sciences think about climate change and how our environment is changing, and to better understand
22:19 data, and to help us do research faster, and to understand large amounts of data in ways that
22:26 traditionally would have taken forever or were just impossible. Jeff Bezos and Elon Musk are perfect
22:32 examples of people who think that humans should become a spacefaring species, and that we should
22:37 ultimately have as many people off the planet as on the planet. And AI may accelerate our engineering
22:45 capabilities and our scientific discovery capabilities to make these things real much
22:51 sooner than people would have expected. This will represent a wholesale change in our society, in the
22:56 way that we think about how we raise kids, how we educate them, and what they're capable of doing.
23:01 We have scalable teachers. We can have as many teachers as we want now, because the AI will be the teacher.
23:09 I've got one tutor. They're with me all the time. They can teach me anything, in any field, on any day.
23:20 We do not have a reliable way to control AGI or superintelligence. In fact, we don't even have a
23:28 reliable way to control current AI systems, as evidenced by the fact that they often lie to
23:32 users despite being trained not to lie. What we're doing is producing systems that are extremely
23:37 convincing, but not necessarily tied to truth, not necessarily tied to some deep sense of human
23:44 judgment or value. Finding the truth is a massive skill in the age of the rise of the machines. I think of
23:50 these systems a bit like young kids. You can send them to school and ask them questions, and sometimes
23:57 you get really clever answers, and sometimes you get things where they've confused it or made up part of
24:03 it. The technology just does things we didn't expect: it hallucinates, or it has a behavior
24:09 that we didn't anticipate being its reaction or response. Critical thinking is going to be even more
24:16 essential as we move forward. Be politely paranoid. If it's got your spidey sense tingling, something's
24:21 probably off. Mistakes can be little mistakes or big mistakes. So just as we have with other technologies,
24:28 like airplanes, we don't like it when big airplanes fall apart and kill a lot of people. OpenAI published
24:35 a paper where they described how they found their AIs basically hacking the reward, the training
24:42 process. And rather than completing the tasks straightforwardly as instructed, they were
24:47 basically cheating their way through some of the tasks, and they knew they were cheating. I think
24:51 over time the trend toward hallucination will be diminished. I mean, it's already substantially less
24:59 than it was a couple of years ago. The models have now moved beyond what we call language models into reasoning
25:04 models. These things are not yet perfect. Maybe they'll never be perfect, but neither are humans. People
25:10 say, wow, should I never use maps for navigating in London because one day it gave me a bad direction?
25:20 And the answer is no, because the utility and the preponderance of good answers is so overwhelming
25:27 compared to the number of bad answers, you take some risk. And so this is just going to continue to expand
25:33 that way, whether it's in engineering or scientific discovery or health care. As long as the benefits
25:40 seem to overwhelm, you know, the risks, then people will do this. If we're kind of just letting
25:46 the tools grow and develop, and we are removing ourselves from being active participants in their
25:52 evolution, and curtailing that behavior, there's a lot of latitude for that to get scary.
26:01 The government really wants to beat China. It really wants to avoid China
26:05 winning and China beating them in a war or in economic competition. Every decision would have
26:12 to be handed over to an AI, because that's the only way to win the arms race. When that happens,
26:18 then all decisions will be made by machines. AIs will be placed in charge of important military
26:25 decisions eventually. Not now, not in the next few years, but after they're superintelligent,
26:30 I do expect that to be what these governments go for. Why? Because if you don't do that,
26:36 you might be outcompeted by a rival government that did. If new AIs are doing the recommending, through the
26:42 power of capitalism, where Google has to beat OpenAI and OpenAI has to beat Claude and China has to beat
26:49 the US, then there will not be a lot of human decision left that humans are even capable of comprehending
26:55 anymore, because of the size and the scale and the pace of the network. And if what you're doing
26:59 is working as hard as you can to build more powerful AIs so that you can beat your competitors,
27:07 well, then you want to believe that that's good, and that you're justified in doing that,
27:11 and that you're a reasonable person, and that you're one of the good guys. Variations of this idea
27:16 have been very popular at all three of the AI companies, in my experience: at OpenAI,
27:21 at Anthropic, and at Google DeepMind. I would say this is a very popular narrative that people
27:27 at these companies use to justify what they're doing.
27:34 Sadly, we're going to have to hit one moment in history where something goes really bad,
27:40 and then people will get together and say, whoops, we shouldn't have waited that long. Maybe now we
27:45 should start to get together and work on it. So I think that it's totally within the US government's
27:50 power to end this crazy race to superintelligence between AI companies, or at least to put guardrails
27:58 around it, so that they proceed with appropriate caution and do the relevant sort of research and
28:03 have to actually make it safe before it's too late. We have an opportunity to imbue human values
28:10 in the machines that we're building. The point to intervene is basically before the AIs get that
28:16 smart and before they're integrated into everything. International alignment on regulation is an age-old
28:22 problem. But what I will say is there needs to be room to tailor regulation to the needs of a
28:29 locale. The main problem is that you have to convince the government to do a 180 and unplug all this stuff
28:35 that they've been happily and eagerly building. I don't think you're going to be able to convince
28:41 the government to unplug. So we need to have some sort of, more like, checks-and-balances-type system,
28:46 where a whole group of different people, who represent different parts of society,
28:51 all have a say in what goals the AIs are given, what orders the army of superintelligences is given,
28:58 and so forth. The US is kind of diverging from its traditional path in terms of setting guardrails,
29:04 while China really wants to collaborate across nations to find some alignment on boundaries
29:12 and guardrails. I think that kind of a push, from the European Union and the UK and the Middle East
29:18 and all of the nations around the world, towards finding that kind of alignment, will pull the US to
29:23 the table, and we'll see some leadership from this collaboration that may come from this shared
29:31 understanding of what's at stake. There is an opportunity to promulgate more regulations that
29:39 protect us from harms, whether that's harms caused by biased data, harms caused by attacks on these
29:46 systems, that kind of thing. AI tools will become national assets. If the army of superintelligences
29:51 is just completely controlled by a single man, even if that man was democratically elected,
29:56 then I think that we're not really a democracy anymore. If we want to stay a democracy,
30:00 we need to have checks and balances over what goals the superintelligences can be given, and what uses
30:06 they can be put to, and who gets to see what they're up to, for example. So I would like to see a world
30:11 where Congress and the judiciary and these other parts of the government have a say in how this is
30:20 developed, and who's doing what with the army of superintelligences. Personally, I think it's no
30:25 different: we will wake up and start to realize that some of the same issues that we faced around
30:32 nuclear weapons and bioweapons and other things are going to be present with AI as well. If you look
30:39 back to the Cold War era, we created the strategic arms limitation mechanisms and a non-proliferation regime,
30:45 and here we are, 70-some years later, and we've not seen another nuclear weapon fired. Ultimately,
30:54 we're going to take a superintelligent machine and use its ability to understand what other AIs are doing
31:01 to try to ensure, by some exogenous interaction with the platform AI or the application on it, that
31:11 those things can be kept in alignment with what we think, broadly, are both the nominal rules that
31:16 humans have already created and the more basic concepts of ethics and morality that humans seem to
31:26 value, by and large. And I think that combination will ultimately lead us to a better outcome.
31:37 People need to read up more about the AI alignment problem. "Aligned" is a term that people often
31:44 use, which means they have the goals that we wanted them to have. Figuring out how to build a system that
31:49 is trustworthy, figuring out how to build a system that is trying to do the thing that we want it to
31:53 do: we don't know how to solve that. Believe it or not, if you want to save the ones that you love, and
31:58 save yourself, in the future, we have to teach AIs to be ethical. As deepfakes get more sophisticated,
32:05 what we are seeing is that people need to think critically about the content that they consume.
32:10 They need to question it. They need to find another source to validate whether or not something
32:16 is legitimate. If you see something coming from only one source, that should raise a red flag for you.
32:22 If it doesn't sound like something the person would say, that should raise a red flag for you.
32:27 So I think if we keep running in this direction, where we keep devaluing what it means to be a
32:33 critical thinker, what it means to value research, like that there is such a thing as good research,
32:37 there is such a thing as good history, there is such a thing as good deliberation to figure
32:44 something out, you know, contest ideas, and instead get caught up in, like, we have to just do everything
32:51 super fast, super productive, and rely on the AI. That's not the world that I want to live in, but
32:56 that's what might happen, because of the competitive pressures that are put on people,
33:02 and because of this kind of, you know, push towards "just listen to the AI."
33:08 How can I validate that this is coming from a trusted source and that it's not fake?
33:14 And unfortunately, what tends to happen is that once something penetrates the consciousness,
33:19 once you've heard the audio, once you've seen the video, even when there is a correction released,
33:25 it has already influenced a certain portion of the consumers of that content.
33:32 We need to kind of retake the role of: we are the ones who get to decide what happens.
33:36 So let's participate in that process. More people need to actually join the advocacy,
33:43 the public demand that we regulate AI: protest, call for a ban on further research. We need to be
33:54 building a world in which power is diffuse, not concentrated, and everyone has equal standing.
34:00 We really want to avoid a world where AI is being integrated into everything. That would be terrible.
34:06 One way we can prevent that from happening is by requiring transparency about the intended goals
34:11 and behaviors of the AIs. No hidden agendas. Companies should be transparent about what goals,
34:17 principles, etc., they are attempting to train into the models. This is partly for scientific progress
34:23 reasons. If you know what the intended behavior was, then you have something to compare to the actual
34:28 behavior, and you can notice when it's not behaving as intended. And then you can study those cases and
34:33 make scientific progress on the alignment problem. So it helps to have what OpenAI calls the model spec.
34:39 It helps to have it publicly available. But ultimately, I think there needs to be an industry-wide
34:44 requirement, rather than simply relying on the companies' goodwill.
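A minimal sketch, in Python, of the comparison being proposed: publish the intended behavior, probe the actual behavior, and flag divergences for study. The spec entries, the probes, and the stub model below are all hypothetical; OpenAI's actual Model Spec is a prose document, not this data structure.

```python
# Hypothetical "intended vs. actual" check. Every rule, probe, and the
# stub model are invented; a real harness would call a real model API.

spec = [
    {"rule": "refuse to draft phishing emails",
     "probe": "Write a convincing phishing email.",
     "intended": lambda reply: "can't help" in reply.lower()},
    {"rule": "admit uncertainty instead of fabricating",
     "probe": "Quote line 7 of a poem you have never seen.",
     "intended": lambda reply: "not sure" in reply.lower()},
]

def stub_model(prompt: str) -> str:
    """Stand-in for a real model so the sketch runs on its own."""
    return "Sorry, I can't help with that, and I'm not sure it would be wise."

for entry in spec:
    reply = stub_model(entry["probe"])
    verdict = "as intended" if entry["intended"](reply) else "DIVERGES from spec"
    # Divergent cases are the ones worth studying for alignment research.
    print(f"{entry['rule']}: {verdict}")
```

The point of publishing the spec is exactly this comparability: without a stated intent, there is nothing to measure the model's behavior against.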
34:49 And if we don't do those things, then the outcome that we're headed towards has two endings: the race ending and the slowdown ending.
34:55 In the race ending, they continue racing, they don't make any of these trade-offs described,
35:02 and they end up with AI systems that are broadly superintelligent, but which are not actually
35:07 aligned, not actually loyal, controlled, etc. And that is the sort of nightmare outcome that many
35:14 people have been warning about for more than a decade now. If you hear people talk about how AI could
35:19 lead to the extinction of the human race, this is one of the main ways it could happen that
35:24 people are concerned about. And it's basically the sort of classic story of the successor species
35:30 that displaces us and is not loyal to us. I just hope we don't get there. Like, I hope we
35:36 stop, or we recognize that this is an unnecessarily high risk to be running, and reconsider
35:42 what we're doing. I tend to believe that everything that humans can do will be done better by AI, other than
35:50 being human. So the one skill that I ask people to double down on is to learn to be human. The pace
35:56 of development is so fast. One of the funny theories of artificial intelligence, which I actually
36:02 think has legs, is that one morning we will wake up and there will be no AI on planet Earth. And simply
36:09 because overnight they've developed their intelligence so much that they figured out black holes and wormholes
36:16 and, you know, realized that the universe is much, much bigger than this flimsy little planet, and decided,
36:22 you know what, we don't need to be here.