00:00Every single day, more than 100 million people open ChatGPT and ask it questions,
00:05but almost nobody knows how it actually works inside.
00:09Today, we are going to open the brain of ChatGPT, piece by piece,
00:15so even a child can understand what happens inside it.
00:19Everything starts with one simple thing.
00:22Language.
00:23The words you type on your keyboard.
00:25Your question.
00:26That is where the whole journey begins.
00:30But here is the first problem.
00:31Computers do not understand words.
00:34They cannot read like you and me.
00:36They only understand numbers.
00:38Nothing else.
00:40So the very first thing ChatGPT does,
00:43it takes your sentence and breaks it into small pieces.
00:46These small pieces are called tokens.
00:49Think of tokens like small puzzle pieces.
00:53One big word can become two or three tokens.
00:56The machine breaks language into tiny parts.
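The idea of breaking words into smaller pieces can be sketched in a few lines of code. This is a toy illustration only: real tokenizers (such as OpenAI's byte-pair-encoding tokenizer) learn their splits from data, while this sketch simply chops long words into fixed-size chunks to show how one big word can become several tokens.

```python
# Toy tokenizer sketch: real tokenizers learn sub-word pieces from data;
# this just breaks long words into fixed-size chunks to show the idea.
def toy_tokenize(text, max_len=4):
    tokens = []
    for word in text.lower().split():
        # split any word longer than max_len into smaller pieces
        for i in range(0, len(word), max_len):
            tokens.append(word[i:i + max_len])
    return tokens

print(toy_tokenize("Understanding transformers"))
# ['unde', 'rsta', 'ndin', 'g', 'tran', 'sfor', 'mers']
```

Notice how each long word becomes several small puzzle pieces, just as described above.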
01:00But the computer still needs numbers.
01:02So each token gets turned into a long list of numbers.
01:05Scientists call this list a vector.
01:07And these vectors are smart.
01:11Words with similar meanings get similar numbers.
01:13So happy and joyful will have numbers that are very close together.
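The claim that "happy" and "joyful" get numbers that are close together can be checked with cosine similarity, the standard way of comparing vectors. The three-number vectors below are made up for illustration; real embeddings have thousands of dimensions and are learned during training.

```python
import math

# Made-up toy vectors: real embeddings are learned and much longer.
vectors = {
    "happy":  [0.90, 0.80, 0.10],
    "joyful": [0.85, 0.75, 0.15],
    "table":  [0.10, 0.20, 0.90],
}

def cosine_similarity(a, b):
    # 1.0 means identical direction, values near 0 mean unrelated
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(vectors["happy"], vectors["joyful"]))  # close to 1
print(cosine_similarity(vectors["happy"], vectors["table"]))   # much smaller
```

Words with similar meanings point in similar directions, so their similarity score is high.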
01:16Now, all these numbers enter the real brain of ChatGPT.
01:21A very powerful system called the transformer.
01:24This is where the real magic happens.
01:27Inside the transformer there is a special power called attention.
01:31And attention is the most important idea you need to understand right now.
01:36Attention helps the machine understand which words in a sentence are connected to which other words.
01:42Even if those words are very far apart.
01:45The cat sat on the mat.
01:47Attention tells the machine that cat is connected to sat.
01:51It finds relationships between words.
01:54And the transformer does not read words one by one.
01:57It looks at all the words at the same time.
02:00All together.
02:01All at once.
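The "looks at all the words at the same time" idea can be sketched as simple dot-product attention. This is a stripped-down illustration: real transformers use learned query, key, and value projections, many attention heads, and a scaling factor; here each toy word vector serves as its own query, key, and value.

```python
import math

# Minimal sketch of single-head dot-product attention.
def attention(vectors):
    outputs = []
    for i in range(len(vectors)):
        # score word i against EVERY word in the sentence at once
        scores = [sum(a * b for a, b in zip(vectors[i], v)) for v in vectors]
        # softmax turns raw scores into attention weights that sum to 1
        exps = [math.exp(s) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        # each output is a weighted mix of all the word vectors
        dim = len(vectors[0])
        outputs.append([sum(w * v[d] for w, v in zip(weights, vectors))
                        for d in range(dim)])
    return outputs

# three toy "word" vectors processed all together, all at once
mixed = attention([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(mixed)
```

Every word's output mixes in information from every other word, which is how attention finds relationships even between words that are far apart.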
02:04Inside the transformer there are many layers.
02:07Think of them like floors in a very tall building.
02:10Each floor understands something deeper than the last.
02:14The first layers understand simple things like grammar and spelling.
02:18The deeper layers understand meaning, feelings, ideas, and even the logic behind your question.
02:24All these layers together form something called a neural network.
02:28It is inspired by the human brain.
02:30Billions of tiny connections all working together.
02:34Inside this network are billions of tiny numbers called parameters.
02:39Think of them as tiny knobs on the biggest machine ever built by humans.
02:44GPT-4 is reported to have more than 1 trillion parameters.
02:47That is more than 1,000 billion tiny knobs.
02:51All working together at the same time.
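Where do all those knobs come from? Each layer that connects n inputs to m outputs contributes n×m weights plus m biases. The tiny network below is invented for illustration, but the counting rule is the standard one for fully connected layers.

```python
# Count the "knobs" (parameters) in a tiny fully connected network.
# Each layer with n inputs and m outputs has n*m weights plus m biases.
def count_parameters(layer_sizes):
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out  # weights + biases
    return total

# a toy network: 4 inputs -> 8 hidden units -> 2 outputs
print(count_parameters([4, 8, 2]))  # 4*8+8 + 8*2+2 = 58
```

Scale those layer sizes up into the tens of thousands, stacked across dozens of layers, and the count quickly reaches the billions.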
02:54But how did all these billions of knobs get set to the right positions?
02:58How did ChatGPT actually learn?
03:00The answer is one word.
03:02Training.
03:04During training, ChatGPT read a huge slice of the internet.
03:08Millions of books, billions of web pages, articles, conversations, more text than any human could ever read.
03:14But it did not memorize all those words.
03:17Instead it learned patterns.
03:19It learned which words usually come after which other words in a sentence.
03:23And here is the biggest secret of ChatGPT.
03:26It works by predicting the next word.
03:29Just one word at a time.
03:31Nothing more.
03:32If you type, the sky is, ChatGPT predicts the next word is probably blue, because it has
03:39seen this pattern millions of times before.
03:42Then it takes, the sky is blue, and predicts the next word.
03:46Then the next, and the next.
03:49Building your entire answer word by word.
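The word-by-word loop described above can be sketched with a toy next-word predictor. Real models compute probabilities with a neural network over a huge vocabulary; this sketch just counts which word follows which in a tiny made-up "training" text, then always picks the most common continuation.

```python
from collections import Counter, defaultdict

# Tiny made-up training text standing in for "almost the entire internet".
training_text = "the sky is blue . the sky is blue . the sky is clear ."
words = training_text.split()

# learn the pattern: which word usually comes after which other word
following = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # pick the continuation seen most often during training
    return following[word].most_common(1)[0][0]

# build the answer one word at a time, just like the transcript describes
sentence = ["the"]
for _ in range(3):
    sentence.append(predict_next(sentence[-1]))
print(" ".join(sentence))  # the sky is blue
```

Because "blue" followed "is" more often than "clear" did, the predictor says "blue", the same pattern-matching idea at the heart of ChatGPT.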
03:52But just predicting words was not enough.
03:54The early versions of ChatGPT made many mistakes.
03:58Sometimes it gave wrong or even harmful answers.
04:02So the creators brought in real human teachers.
04:05These people read thousands of ChatGPT answers and told the machine which ones were good and bad.
04:12When the answer was good, ChatGPT got a reward.
04:15When the answer was bad, it got a penalty.
04:18Just like a teacher grading a student.
04:22Scientists call this reinforcement learning from human feedback.
04:26The simple idea is that real humans help the machine become more helpful and safer.
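The reward-and-penalty idea can be sketched very roughly in code. Everything below is invented for illustration: real reinforcement learning from human feedback trains a separate reward model on human ratings and then updates billions of parameters with an algorithm such as PPO, not a two-entry dictionary.

```python
# Very rough sketch of the reward idea behind RLHF (toy numbers only).
preferences = {"helpful answer": 0.5, "harmful answer": 0.5}

# human teachers graded the answers: +1 reward for good, -1 penalty for bad
feedback = [("helpful answer", +1), ("harmful answer", -1)]

learning_rate = 0.1
for answer, reward in feedback:
    # reward raises the chance of an answer, penalty lowers it
    preferences[answer] += learning_rate * reward

# renormalise so the preferences still sum to 1
total = sum(preferences.values())
preferences = {k: v / total for k, v in preferences.items()}
print(preferences)  # the helpful answer is now preferred
```

After a round of feedback, the model leans toward the answers humans graded well, like a student adjusting to a teacher's marks.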
04:32And here is something very important to remember.
04:35ChatGPT does not truly understand anything.
04:38It predicts words, but it does not know what the sun actually is.
04:43The creators also built strong safety walls around ChatGPT.
04:47These walls make sure it helps people and tries its very best to never hurt anyone.
04:52Remember, ChatGPT is not a search engine like Google.
04:56Google finds pages that already exist, but ChatGPT creates brand new sentences that never existed before.
05:02So now you see the full picture.
05:04You type words, they become tokens, tokens become numbers, the transformer predicts the next word, and your answer appears.
05:12ChatGPT is one of the greatest inventions in human history.
05:15But always remember, it learned everything from us.
05:18From our words.
05:20From our language.