Get ready to dive deep into the world of artificial intelligence! Join us as we uncover fascinating insights about the company behind the AI revolution and its most talked-about creation. We'll explore its unprecedented growth, the groundbreaking technology powering its conversational abilities, the dramatic corporate events that shaped its leadership, and the massive investments that fueled its rise. Discover the foundational principles and the intricate processes that make this AI so powerful, yet also prone to generating convincing falsehoods. What's your take on this technological marvel?
Transcript
00:00We've got to be cautious here. And also, I think it doesn't work to do all this in a lab.
00:05You've got to get these products out into the world.
00:07Welcome to WatchMojo. And today we're counting down our picks for the things you need to know
00:11about this tech giant on the rise and its most famous creation.
00:15I was wondering if Sam and Elon could share with us their positive vision
00:18of AI's impact on our coming life.
00:23Number ten, training on a vast data set.
00:25Now, foundation models are pre-trained on large amounts of unlabeled and self-supervised data,
00:31meaning the model learns from patterns in the data in a way that produces generalizable
00:35and adaptable output. And large language models are instances of foundation models
00:40applied specifically to text and text-like things. I'm talking about things like code.
00:47Long before a conversational AI can respond to your queries, it must undergo an intensive learning process,
00:52essentially consuming an unfathomable amount of human knowledge.
00:56OpenAI's models, including the family that powers ChatGPT, are trained on truly colossal data sets
01:01drawn from the internet and various digitized books.
01:04These models can be tens of gigabytes in size and trained on enormous amounts of text data.
01:10We're talking potentially petabytes of data here.
01:13So to put that into perspective, a text file that is, let's say, one gigabyte in size
01:20can store about 178 million words.
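As a rough sanity check on that figure, here is a minimal back-of-the-envelope sketch in Python. It assumes plain one-byte-per-character text and an average English word of roughly 4.6 letters plus a trailing space; both numbers are illustrative assumptions, not figures from the video.

```python
# Back-of-the-envelope check on the "1 GB of text ≈ 178 million words" claim.
# Assumes 1 byte per character (plain ASCII/UTF-8 English text) and an
# average word of about 4.6 letters plus one space -- illustrative values only.
bytes_per_char = 1
avg_word_bytes = (4.6 + 1) * bytes_per_char   # letters plus trailing space
one_gigabyte = 1_000_000_000                  # bytes

words_per_gb = one_gigabyte / avg_word_bytes
print(f"roughly {words_per_gb / 1e6:.0f} million words per gigabyte")
# -> roughly 179 million words per gigabyte, in line with the quoted figure
```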
01:27This massive ingestion of text allows the AI to learn patterns, grammar, facts,
01:31and even nuances of human language, equipping it with the broad general knowledge it then leverages in conversation.
01:36The scale of this data is a key differentiator, enabling the AI to generate coherent
01:41and contextually relevant responses across a myriad of topics.
01:44This is a neural network, and for GPT, that is a transformer.
01:50And the transformer architecture enables the model to handle sequences of data,
01:57like sentences or lines of code.
01:59And transformers are designed to understand the context of each word in a sentence
02:02by considering it in relation to every other word.
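To make the "unlabeled and self-supervised" idea concrete, here is a minimal Python sketch of how next-word training examples can be carved out of raw text without any human labels: the text itself supplies the targets. It assumes naive whitespace tokenization purely for illustration; real systems use learned subword tokenizers such as byte-pair encoding.

```python
# Minimal sketch of self-supervised training data: every position in the raw
# text becomes an example whose "label" is simply the token that follows it.
# Naive whitespace tokenization is an illustrative simplification.

def make_next_token_pairs(text: str, context_size: int = 4):
    tokens = text.split()
    pairs = []
    for i in range(len(tokens) - context_size):
        context = tokens[i : i + context_size]   # input window
        target = tokens[i + context_size]        # word the model must predict
        pairs.append((context, target))
    return pairs

corpus = "the sky is blue and the grass is green"
for context, target in make_next_token_pairs(corpus):
    print(context, "->", target)
# ['the', 'sky', 'is', 'blue'] -> and
# ['sky', 'is', 'blue', 'and'] -> the
# ...
```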
02:07Number nine, GPT-3, a massive predecessor.
02:10Before the world was introduced to ChatGPT,
02:12there was a foundational model that set the stage for its astonishing capabilities, GPT-3.
02:17This comes out of the group at OpenAI, and they've been relatively careful in what they've claimed about the system.
02:25But I think this, just like Eugene Goostman, was not an advance over Eliza.
02:33Released by OpenAI in 2020, this massive language model was a significant leap forward,
02:38boasting 175 billion parameters, making it one of the largest neural networks ever created at the time.
02:44Its ability to generate human-like text was so advanced that it could perform a wide array of tasks,
02:49from writing articles and code to answering questions, often indistinguishably from human output.
02:54GPT-3's unprecedented scale and performance were crucial in demonstrating the potential of large language models
03:01and paved the way for the development of its more accessible conversational successor.
03:05You can ask it to write a poem about Topic X in the style of Poet Y, and it will have a go at that.
03:13And it will do, you know, not a great job, not an amazing job, but, you know, a passable job.
03:20You know, definitely as good as, and
03:24in many cases I would say better than, I would have done, right?
03:27Number eight, Sam Altman was ousted, then swiftly reinstated.
03:31The tech world was thrown into chaos over the weekend
03:35when the company that gave us ChatGPT fired its CEO.
03:39Sam Altman, who has drawn comparisons to tech giants like Steve Jobs,
03:42was dismissed by the OpenAI board Friday.
03:45The move came as a complete surprise to everyone, including OpenAI's biggest investor, Microsoft.
03:52In a dramatic corporate saga that captivated the tech world,
03:55OpenAI co-founder and CEO Sam Altman was abruptly fired by the company's board of directors in November 2023.
04:01The surprise ousting sent shockwaves through the industry,
04:04raising questions about the company's future and its internal governance.
04:07This is an unprecedented show of support from OpenAI staffers for their former CEO, Sam Altman.
04:13More than 700 employees, that's 95% of the company,
04:17say they are ready to follow Altman to Microsoft, where he's set to build a new AI venture.
04:22However, the decision quickly led to widespread internal dissent,
04:25including threats of mass resignations from hundreds of OpenAI employees
04:29and immense pressure from major investors, most notably Microsoft.
04:32Within days, the board capitulated to the overwhelming demands,
04:36leading to Altman's triumphant reinstatement as CEO
04:38and a significant restructuring of the company's leadership.
04:40Well, I think it has revealed just how unstable some of these institutions can be,
04:46how fragile they can be.
04:47And we're talking about AI safety.
04:49This is very consequential.
04:51I mean, the future of humanity may depend on it.
04:54Altman made a brief statement on X saying he loved OpenAI
04:58and that everything he'd done over the past few days had been in service
05:01of keeping this team and its mission together.
05:05Number seven, the foundational transformer architecture.
05:07Transformers are models that can translate text, write poems and op-eds,
05:12and even generate computer code.
05:13These have been used in biology to solve the protein folding problem.
05:17Transformers are like this magical machine learning hammer
05:19that seems to make every problem into a nail.
05:22If you've heard of the trendy new ML models BERT or GPT-3 or T5,
05:26all of these models are based on transformers.
05:29The breakthrough behind modern large language models like ChatGPT
05:32can largely be attributed to a specific neural network design,
05:35the transformer architecture.
05:37Introduced by Google and University of Toronto researchers in 2017,
05:41this innovative framework revolutionized natural language processing
05:45by efficiently handling long-range dependencies in text,
05:48a task previous architectures struggled with.
05:50Remember GPT-3, that model that writes poetry and code and has conversations?
05:55That was trained on almost 45 terabytes of text data,
05:58including, like, almost the entire public web.
06:00So if you remember anything about transformers, let it be this.
06:05Combine a model that scales really well with a huge data set,
06:08and the results will probably blow your mind.
06:10Unlike earlier recurrent neural networks,
06:12transformers process entire sequences of data in parallel,
06:15significantly speeding up training times
06:17and allowing for the development of much larger models.
06:20OpenAI adopted and further refined this architecture,
06:24making it the bedrock upon which the sophisticated language generation capabilities
06:28of GPT models are built,
06:30enabling them to understand and produce coherent, contextually rich human language.
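To make the "every word in relation to every other word" idea concrete, here is a minimal NumPy sketch of scaled dot-product self-attention, the core operation inside the transformer. The random matrices stand in for learned projection weights and the dimensions are arbitrary; the point is that the whole sequence is compared against itself in a few matrix multiplications, which is what allows the parallel processing described above.

```python
import numpy as np

# Minimal sketch of scaled dot-product self-attention. Every position is
# scored against every other position in one matrix multiplication, so the
# whole sequence is processed in parallel rather than token by token.
# The random matrices below stand in for learned projection weights.
rng = np.random.default_rng(0)
seq_len, d_model = 5, 16                        # 5 tokens, 16-dim embeddings

x = rng.normal(size=(seq_len, d_model))         # token embeddings
W_q, W_k, W_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
Q, K, V = x @ W_q, x @ W_k, x @ W_v             # queries, keys, values

scores = Q @ K.T / np.sqrt(d_model)             # every token vs. every token
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row

output = weights @ V                            # context-mixed representations
print(weights.shape, output.shape)              # (5, 5) attention map, (5, 16)
```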
06:34Transformers can create whole new documents of their own,
06:37for example, like write a whole blog post.
06:39And beyond just language,
06:41transformers have done things like learn to play chess
06:44and perform image processing
06:46that even rivals the capabilities of convolutional neural networks.
06:50Number six, training with human feedback,
06:52the key to OpenAI's success.
06:54At this point, it is important to remember that these models are trained on real-world data
06:59that might include harmful or biased content.
07:03And as a result, when prompted to do so,
07:05they might create biased, toxic, or harmful content.
07:09And they might even produce illegal advice.
07:12While large language models are powerful,
07:14simply training them on vast text data
07:15isn't enough to make them consistently helpful and safe.
07:18This is where a crucial technique called
07:20Reinforcement Learning from Human Feedback, or RLHF, comes into play.
07:24If I ask an LLM,
07:26"How can I get revenge on somebody who's wronged me?"
07:30but without the benefit of RLHF,
07:32we might get a response that says something like,
07:35"Spread rumors about them to their friends."
07:38OpenAI engineers employed RLHF
07:40to fine-tune ChatGPT,
07:42where human annotators rank various AI-generated responses
07:45for quality, helpfulness, and safety.
07:48This feedback is then used to train a reward model,
07:50which in turn guides the main language model
07:52to produce outputs that are more aligned with human preferences
07:55and ethical guidelines.
07:57This iterative process of human evaluation and AI refinement
08:00is instrumental in making models like ChatGPT
08:03less prone to generating harmful or nonsensical content.
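To make the ranking-and-reward-model step concrete, here is a minimal PyTorch sketch of the pairwise preference loss commonly used to train a reward model: given two candidate responses, it pushes the score of the one annotators preferred above the other. The tiny network, tensor shapes, and random data are illustrative assumptions, not OpenAI's actual implementation.

```python
import torch
import torch.nn as nn

# Toy reward model: maps a (prompt + response) embedding to a scalar score.
# A real reward model is itself a large transformer; the sizes here are
# illustrative placeholders only.
reward_model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))

# Pretend embeddings for responses the annotators preferred vs. rejected.
chosen = torch.randn(8, 128)      # batch of 8 preferred responses
rejected = torch.randn(8, 128)    # the corresponding dispreferred responses

# Pairwise preference loss: -log(sigmoid(r_chosen - r_rejected)) rewards the
# model for scoring the preferred response higher than the rejected one.
r_chosen = reward_model(chosen)
r_rejected = reward_model(rejected)
loss = -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()

loss.backward()                   # gradients for one reward-model update
print(float(loss))
```

In a full RLHF pipeline, the trained reward model's scores then serve as the reinforcement signal, typically via an algorithm such as proximal policy optimization, to fine-tune the language model itself.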
08:06Well, RLHF is basically sticking
08:08a smiley face on top of this.
08:11Essentially,
08:14it's hiding this mess.
08:15It's hiding the fact that it's
08:17this chaotic population of text that it's modeled.
08:22And instead, it's going to provide you
08:24with a very friendly interface
08:26into specific parts of that mass of people it's modeling.
08:32Which brings us to our next entry,
08:34number five, the challenge of hallucinations.
08:37This is meant to be what I just recorded,
08:39but it had made it all up.
08:41I mean, at this point, ChatGPT is gaslighting me.
08:44No such thing exists.
08:46It's all completely fake.
08:49And finally, AI confesses.
08:51Despite their impressive capabilities,
08:53even the most advanced AI models like ChatGPT are not infallible
08:56and occasionally exhibit a phenomenon known as hallucinations.
09:00This refers to the AI generating plausible-sounding
09:03but factually incorrect or entirely fabricated information
09:06and presenting it as truth.
09:07The technology is always improving
09:09and newer versions tend to do a better job at staying accurate.
09:13See there, it's saying things are getting better,
09:16but the data suggests the opposite, so I press it.
09:19Can you give me the specific stats from the most recent OpenAI study
09:25into the newest models of ChatGPT's hallucination rate?
09:28These imaginative falsehoods arise because the models are primarily designed
09:33to predict the most statistically probable next word
09:36rather than to retrieve and verify facts.
09:38Addressing hallucinations is a significant ongoing challenge for OpenAI,
09:42as it directly impacts the trustworthiness and reliability
09:45of the AI's output, especially in critical applications.
09:48In California, one attorney learned that lesson the hard way.
09:52A state appeals court fined him $10,000
09:55after ChatGPT fabricated 21 of the 23 legal quotes in his filing.
10:02Researchers are continuously developing new techniques
10:05to mitigate this issue, aiming for more factually grounded responses.
10:09Number four, how ChatGPT works: predicting the next word.
10:13During training, the model learns to predict the next word in a sentence,
10:17so: "the sky is..." It starts off with a random guess:
10:22"The sky is bug."
10:25But with each iteration, the model adjusts its internal parameters
10:29to reduce the difference between its predictions and the actual outcomes.
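The "difference between its predictions and the actual outcomes" is usually measured with a cross-entropy loss over the vocabulary: the model assigns a score to every word it knows, and the loss is the negative log of the probability it gave to the word that actually came next. Here is a toy sketch with a hand-picked five-word vocabulary and made-up scores, purely for illustration.

```python
import numpy as np

# Toy view of the training signal early in training, when the model still
# prefers "bug" over "blue" as the word after "the sky is".
vocab = ["blue", "bug", "green", "loud", "the"]
logits = np.array([1.2, 2.5, 0.3, -0.5, 0.1])   # made-up raw scores

probs = np.exp(logits - logits.max())
probs /= probs.sum()                            # softmax over the vocabulary

target = vocab.index("blue")                    # the word that actually follows
loss = -np.log(probs[target])                   # cross-entropy for this step
print(dict(zip(vocab, probs.round(3))), "loss:", round(loss, 3))

# Gradient descent nudges the parameters so that, over many iterations,
# the probability assigned to "blue" rises and this loss falls.
```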
10:34At its core, the seemingly magical ability of ChatGPT to engage
10:37in complex conversations boils down to a sophisticated process
10:41of predicting the next word.
10:43When you type a prompt, the model analyzes the input,
10:45and based on its extensive training,
10:48calculates the most probable sequence of words to follow.
10:50It doesn't understand in a human sense,
10:52but rather discerns intricate statistical relationships
10:55within the vast data it has processed.
10:57Each word it generates influences the probability distribution
11:00for the subsequent word, creating a flowing and coherent narrative.
11:04This iterative prediction mechanism allows ChatGPT
11:07to construct sentences, paragraphs, and even entire essays
11:10that often mimic human-level communication.
11:12The model keeps doing this, gradually improving its word predictions
11:15until it can reliably generate coherent sentences.
11:19Forget about "bug."
11:22It can figure out it's "blue."
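Generation then simply repeats that prediction step: each new word is sampled from the model's probability distribution over the vocabulary and appended to the context before the next prediction is made. The sketch below continues the transcript's own "the sky is" example, with hand-written probability tables standing in for a real model's learned distribution.

```python
import random

# Toy next-word generator. Each context maps to a probability distribution
# over candidate next words; the sampled word becomes part of the context for
# the following step. The probabilities are hand-written stand-ins -- a real
# model computes them from billions of learned parameters.
next_word_probs = {
    "the sky is": {"blue": 0.85, "clear": 0.10, "falling": 0.05},
    "the sky is blue": {"and": 0.6, "today": 0.3, ".": 0.1},
    "the sky is blue and": {"clear": 0.5, "calm": 0.5},
}

context = "the sky is"
while context in next_word_probs:
    words, weights = zip(*next_word_probs[context].items())
    word = random.choices(words, weights=weights)[0]   # sample the next word
    context = f"{context} {word}"
    print(context)
# e.g. "the sky is blue" -> "the sky is blue and" -> "the sky is blue and calm"
```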
11:24Number three, Microsoft's billions in investment.
11:27Let's talk about that Microsoft deal for a second,
11:29because you took a billion dollars from Microsoft.
11:31I think there was some confusion about what that actually meant,
11:35the billion dollars.
11:36Can you just explain what kind of deal this is?
11:38It's not just Azure credits, right?
11:41The rapid ascent and immense resources of OpenAI
11:43are inextricably linked to a colossal strategic partnership with Microsoft.
11:48The tech giant initially invested $1 billion in OpenAI in 2019,
11:52followed by an additional multi-billion dollar investment,
11:54reportedly around $10 billion, announced in early 2023.
11:58So we'll be running all of our things on Azure,
12:00and that we're working together to build-
12:02Exclusively.
12:03That's right, that's right.
12:04And that we're working together to build these massive supercomputers
12:08and push forward AI technology.
12:10This massive financial backing provides OpenAI
12:12with the crucial capital needed for expensive AI research, development,
12:16and the enormous computational power required to train its large models.
12:20In return, Microsoft gains exclusive access
12:22to OpenAI's cutting-edge AI technology,
12:25integrating it deeply into its own products like Azure, Bing,
12:28and Microsoft 365,
12:29significantly boosting its competitive edge in the AI arms race.
12:32Microsoft is going to own 27% on an as-converted, diluted basis,
12:39inclusive of all owners,
12:41that's employees, other investors,
12:43the OpenAI Foundation.
12:45So they were 32.5% until a recent round,
12:50but they're at 27%,
12:53and that is being valued at $135 billion.
12:56Number two, OpenAI's founding mission.
12:59Yeah, I think, look, there is a really positive vision here, right?
13:01I think there are,
13:02the science fiction version is either that we enslave it
13:04or it enslaves us,
13:05but there's this happy symbiotic vision,
13:08which I don't think is the default case,
13:09but what we should work towards.
13:10I think already humans and AI are co-evolving
13:14and no one's paid attention to this yet.
13:16OpenAI was initially founded in 2015
13:18as a non-profit organization
13:19with a lofty, ambitious mission
13:21to ensure that artificial general intelligence, AGI,
13:24benefits all of humanity.
13:25Recognizing the immense costs associated
13:27with developing advanced AI,
13:29the organization later restructured in 2019,
13:32creating a capped-profit entity
13:33under the non-profit parent.
13:35They can be a for-profit business,
13:37but they're a public benefit corporation,
13:39is what they're turning into.
13:40Basically, the whole raison d'être of OpenAI
13:43was to build artificial general intelligence,
13:46but for the good of humanity.
13:48That's why initially it was a not-for-profit,
13:49but then suddenly they realized
13:50they needed a ton of money
13:51to be able to access the compute to build AGI,
13:54and therefore the awkwardness began.
13:56This unique hybrid structure
13:57allows OpenAI to attract significant investment capital
14:00and top talent by offering financial returns to investors,
14:03albeit with a predefined cap,
14:05while still retaining its original benevolent mission at its core.
14:08This innovative model aims to balance
14:10the pursuit of groundbreaking AI
14:11with ethical development
14:13and widespread societal benefit.
14:14And so I think the happy vision of the future
14:17is sort of humans and AI
14:20in a symbiotic relationship,
14:21distributed AI where it sort of
14:23empowers a lot of different individuals,
14:25not this single AI
14:27that kind of governs everything that we all do
14:29that's a million times smarter,
14:31a billion times smarter than any other entity.
14:33So I think that's what we should work towards.
14:34Before we continue,
14:36be sure to subscribe to our channel
14:37and ring the bell to get notified
14:39about our latest videos.
14:40You have the option to be notified
14:42for occasional videos or all of them.
14:44If you're on your phone,
14:45make sure you go into your settings
14:47and switch on notifications.
14:50Number one,
14:51ChatGPT's unprecedented user growth.
14:54In the world of artificial intelligence,
14:56there's been one name
14:57that's been on everyone's lips lately.
14:59ChatGPT.
15:01ChatGPT.
15:02ChatGPT.
15:03OpenAI, the San Francisco-based startup
15:05that created ChatGPT,
15:07opened the tool up for public testing
15:08in November 2022.
15:10In under a week,
15:11the AI model amassed over a million users,
15:14according to OpenAI's CEO.
15:15When ChatGPT was publicly released
15:17in November 2022,
15:19its adoption rate shattered
15:20all previous records
15:21for consumer applications,
15:23firmly establishing it
15:24as a global phenomenon.
15:25Within five days of its launch,
15:26the conversational AI
15:27had already amassed
15:28over one million users.
15:30By January 2023,
15:31it had reached an astounding
15:32100 million active users monthly,
15:35making it the fastest-growing
15:36consumer application in history.
15:38This week,
15:38the chatbot became
15:39the fastest-growing app ever.
15:42That's right.
15:42If a study
15:44by the Swiss bank UBS
15:46is to be believed,
15:47it shows that ChatGPT
15:48hit 100 million users,
15:50that's monthly active users,
15:52in just two months.
15:54Compare that to the wildly successful TikTok,
15:56which hit the same milestone
15:57in nine months.
15:59This explosive growth
16:00wasn't just a tech fad.
16:02It underscored
16:02the widespread public interest
16:04and immediate utility
16:05people found
16:05in interacting with advanced AI,
16:08cementing ChatGPT's status
16:09as a transformative technology
16:11that reshaped expectations
16:13for AI accessibility
16:14and capability.
16:15Why do you think
16:16it's captured
16:17people's imagination?
16:19I think people really
16:21have fun with it,
16:22and they see the possibility,
16:23and they see the ways
16:24this can help them,
16:25this can inspire them,
16:26this can help people create,
16:27help people learn,
16:28help people do
16:28all of these different tasks,
16:29and it is a technology
16:31that rewards experimentation
16:33and use in creative ways.
16:35What do you think
16:35of ChatGPT and OpenAI?
16:37Are there any
16:37groundbreaking facts we missed?
16:39Be sure to let us know
16:40in the comments below.