Ever wonder how your program remembers where to return after calling a function? That's the call stack's job.
In this clear, whiteboard-style explanation we cover:
- What the call stack actually is
- How call frames are built and what they contain
- Return addresses, function arguments, local variables
- Why recursion works (until you overflow the stack)
- Stack vs heap memory differences
- Why too much recursion = crash
Great for beginners to intermediate programmers who want to understand what's happening under the hood. References Ed Jorgensen's excellent (and free) x86-64 assembly book.
00:00 Introduction to the Call Stack
00:16 Prerequisites - Understanding Basic Stacks
00:41 Simple Example Program with Function Calls
01:24 Function Call Chain and Recursion Example
02:20 Visualizing the Call Sequence
03:06 Local Variables and Return Values
03:55 Quick Recap - How Generic Stacks Work
04:34 Stacks Can Hold Any Data Type
05:36 Introducing the Call Frame Concept
06:22 What Belongs Inside a Call Frame
07:00 Return Address Explained
07:44 How the CALL Instruction Works
08:32 Function Arguments on the Stack
09:57 Extra Arguments Beyond Registers
10:19 Preserving Registers (Callee-Saved)
11:12 Local Variables on the Stack
11:40 Multiple Call Frames Example
12:54 Tracing Function Calls Step by Step
13:21 Starting with Main's Call Frame
15:00 Pushing Frames for Nested Calls
16:30 Returning - Popping Frames
18:10 Why the Stack is Perfect for Returns
19:45 Recursion and Multiple Instances
21:30 Stack Grows Downward in Memory
23:10 Stack vs Heap Memory Comparison
25:12 Local Variables vs Dynamically Allocated Memory
26:16 Pointers on Stack Pointing to Heap
27:32 Automatic Cleanup of Stack Variables
28:00 Memory Leaks with Heap Allocations
28:56 Recommended Reading - Ed Jorgensen Book
29:56 Call Frame Layout in x86-64 Assembly
31:16 Process Memory Layout Overview
32:22 Stack Grows Down, Heap Grows Up
33:52 Stack Overflow from Deep Recursion
35:09 Summary - Why the Call Stack Matters
35:28 Closing Remarks and Call to Subscribe
=-=-=-=-=-=-=-=-=
Thanks for watching!
Find us on other social media here:
- https://www.NeuralLantern.com/social
- Twitter / X: https://x.com/NeuralLantern
- Rumble: https://rumble.com/c/c-3696939
- BitChute: https://www.bitchute.com/channel/pg1Pvv5dN4Gt
- Daily Motion: https://www.dailymotion.com/neurallantern
- Minds: https://www.minds.com/neurallantern/
- Odysee: https://odysee.com/@NeuralLantern:5
Please show your support!
- Buy me a coffee: https://ko-fi.com/neurallantern
- Subscribe + Sharing on Social Media
- Leave a comment or suggestion
- Subscribe to the Blog: https://www.NeuralLantern.com
- Watch the main "pinned" video of this channel for offers and extras
Transcript
00:00Hello there. In this video we're going to talk about the call stack. What is the call stack?
00:05It's an important data structure inside of your computer that helps your computer call functions inside of code. It's pretty
00:10cool.
00:16So first off, before you watch this video, you should probably already understand how a stack works,
00:21what a stack is in terms of its interface and how to manipulate it with pushes and pops and all
00:27that stuff.
00:27So see my previous video if you don't understand how to use stacks yet.
00:31In this video I'm just going to talk about the call stack, which is just a special use of the
00:36generic stack abstract data type, data structure.
00:41Okay, so let's see. First off, let's suppose for the sake of argument that I have, you know, a function
00:46called main.
00:47This is not a C++ video or a coding video, but the call stack is related to coding.
00:52So I'm going to assume you can at least kind of guess what I mean when I write down some
00:57code.
00:57This is just a simple C++ program.
01:01You can imagine that up above main, actually I don't want to use prototypes, but I'm going to put the
01:08other functions below.
01:10Please forgive me.
01:11You can imagine that main calls on the function f, so I'm just going to do a function called f
01:15here.
01:16And so execution, when it gets to that part of the program on line 3, execution is going to jump
01:20down to line 9 to just execute the f function.
01:23And then maybe f calls g, and then g calls h.
01:30And maybe h sometimes calls itself.
01:33Maybe we'll say that it's a recursive function.
01:35We'll say if, you know, something happens, then we will return a call, or we'll just call h, so it
01:42calls itself sometimes.
01:45Otherwise, we will have it call on the i function.
01:48And then whenever either of those calls return, then, you know, execution just kind of returns to the caller, which
01:55would be g in this case.
01:57And then we have the i function that just, you know, does whatever, right?
02:02So the code here doesn't really matter.
02:05What I'm really trying to show you is that one function calls another calls another.
02:08And we can sort of allow this to happen under the hood with a call stack.
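The chain being described (main calls f, f calls g, g calls h, h sometimes calls itself, then calls i) could look roughly like this in C++. The function names come from the whiteboard; the depth parameter and the call-counting return values are additions of mine, purely so the chain is observable:

```cpp
// Hypothetical sketch of the call chain described above:
// main -> f -> g -> h -> h (recursive, once) -> i.
// Each function returns how many calls happened from it downward,
// just so the chain is observable.

int i() {
    return 1;                    // deepest call: just returns to its caller
}

int h(int depth) {
    if (depth == 0) {
        return 1 + h(depth + 1); // h sometimes calls itself (recursion)
    }
    return 1 + i();              // the deeper h calls i instead
}

int g() { return 1 + h(0); }     // g calls h
int f() { return 1 + g(); }      // f calls g; main would simply call f()
```

Calling f() here counts five calls in total: f, g, the two instances of h, and i.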
02:14So I'm just going to draw this real fast.
02:15Main calls f, and f calls on g, and g calls on h.
02:22And h, we'll say that, we'll say, for the sake of argument, one time h calls on h, just so
02:27it's easier to draw later.
02:28And then h eventually calls i for some other reason.
02:30Maybe the deeper h calls i.
02:33And then eventually this starts to return, right?
02:35So if you're a programmer, you kind of take this situation for granted.
02:40You think like, well, I just kind of call a function.
02:43And then when the function's done, I just return to the place that I'm currently at, and everything is fine.
02:49If we had, let's say, local variables in here, they would be preserved.
02:54Let's say, I don't know, maybe h is like a number or something like that.
03:02Or how about, let's do g as a number.
03:06We'll say g returns some kind of a number.
03:09And then maybe in the f function, when we call g, we'll just print whatever g returned.
03:15So g calls h, and then we'll just say, like, return whatever.
03:19Just trying to show you that we might have local variables.
03:23So that means in the g function, maybe I'll have an integer a is equal to 5, and integer b
03:30is equal to 6.
03:31I'm just making up stuff here.
03:33And then when we return our data, we might say, you know, return a plus b, for whatever reason, this
03:39is a nonsense program.
03:40Again, don't take it too seriously.
03:42But I'm just trying to show you that we have functions that can call each other, sometimes call themselves, and
03:47maybe they have local variables on top of that.
03:51So if you understand what is a stack, then you understand that a stack can hold many different types of
03:57data.
03:57Just as a quick recap here, you know, a regular generic data structure stack or an abstract data type stack,
04:06we could say that the stack held integers.
04:08So if I wanted to put like an 8 on there and then put like a 5 on there, it
04:12would just kind of stack up as we're adding data.
04:15And then when eventually we decided to remove data from the stack, we'd pop it by just removing the thing
04:19from the top.
04:20But because stacks are supposed to be abstract data types that hold any type of data,
04:24we're not limited to putting integers inside of the stack.
04:28So we could invent a totally new data structure and put instances of that data structure inside of the stack.
04:34For example, if you had a custom class that you wrote yourself and you just called it my class, let's
04:41see, let me do class, my class.
04:46And I'll just put braces here.
04:48Again, this is not a C++ video, but you know, you can imagine that you sort of made a blueprint
04:52for a new type of object and you called it my class.
04:56And then later on, you're like, let's make instances of my class.
05:00Let's make an instance called MC1 and MC2 and MC3 and so forth, right?
05:06So in the stack, you could just push instances of my class.
05:09I'm just going to put MC1 here.
05:11If we created MC1 and then pushed it onto the stack and then we could have MC2 and created an
05:17instance of that and just kind of pushed it on the stack.
05:19So we can do whatever we want.
05:20We can put any type of data on the stack.
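The "my class" idea above can be sketched with `std::stack` in C++. "MyClass" and the mc1/mc2 instance names are the hypothetical names used on the whiteboard:

```cpp
#include <stack>
#include <string>

// Sketch of the idea above: a generic stack can hold any data type,
// including instances of a custom class.
class MyClass {
public:
    explicit MyClass(std::string name) : name_(std::move(name)) {}
    const std::string& name() const { return name_; }
private:
    std::string name_;
};

std::string topName() {
    std::stack<MyClass> s;            // a stack of MyClass instances
    s.push(MyClass("mc1"));           // push mc1, then mc2 on top of it
    s.push(MyClass("mc2"));
    std::string top = s.top().name(); // the most recently pushed instance
    s.pop();                          // pop removes it from the top
    return top;
}
```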
05:25And this is not exactly the process stack so far.
05:28I'm just telling you what a stack in general can do.
05:31So now, instead of imagining that we have a custom class called my class, what if we made a new
05:36class called call frame?
05:40I'll say we have like an instance, let's say like a blueprint for a class.
05:44We'll call it a call frame.
05:46Ah, my penmanship.
05:48It's because this thing, this pen, I push it down and it always like keeps drawing after I start lifting
05:54it up.
05:55I don't know what I'm doing wrong.
05:56I've been trying this for like 50 videos and it's always bad and wrong.
06:00So imagine you have a class called call frame and inside of it, you can just put some sort of
06:04data inside the call frame.
06:06You can imagine defining what an object might look like and then creating instances and filling it up with data.
06:13And then every instance, just like we did with my class, we could just put that onto a stack, right?
06:18So now let's try to understand what might be inside a data structure called a call frame.
06:23Because when we have a stack under the hood in our machine that holds instances of call frames,
06:28then we actually have a process stack or a call stack, which allows us to implement these function calls.
06:35So let's see.
06:37The first thing that we might imagine, I'm going to maybe just do like a little box here just to
06:41let you know that we're deciding what will go inside of a call frame.
06:46So this is one call frame.
06:47So you can imagine that the F function here, in order for the F function to exist, the particular call
06:57to exist, it needs information on what it has, what it is to be alive.
07:03You know, it needs its return address.
07:05I'm going to put RA here for its return address.
07:09What is a return address?
07:11Well, you know, when we were inside of the main function, let me draw the execution path real fast.
07:15When we were inside of the main function, then execution was going from top to bottom.
07:18When we hit this F function call, execution actually jumped to the first line of the F function.
07:24Under the hood, in assembly, there's a jump instruction that allows you to just kind of jump to a totally
07:29different memory location and start executing instructions over there somewhere.
07:34In assembly, also, we have a call instruction, which means I would like to jump to another location, but also
07:40I want to remember where I just came from so that it's easy for me to return later.
07:44Okay, so we'll call that the return address.
07:46The return address in this case would be, let's just call it line 4, meaning when function F returns, then
07:54the next instruction that should execute is this return 0.
07:58Well, I mean, this is C++.
08:00It's not really instructions.
08:01It's more like statements.
08:02But you can just imagine that F gets called, and when it returns, then the next statement is return 0,
08:07right?
08:07So, if we say that the memory location of wherever line 4 starts is the return address of F, then
08:15you can imagine that in order for F's call instance to exist, it has to remember its return address.
08:21So, therefore, we would put the return address inside of its call frame.
08:25So, the return address goes in there, and function arguments sometimes go in there.
08:32If you know assembly, and that's kind of what we're talking about right now in this video, then you'll know
08:38that function arguments, at least the integer arguments, usually come in certain registers like RDI and RSI.
08:44Or if you're passing float arguments, those arguments would come in XMM0 and XMM1 and
08:55so forth, right?
08:56So, you just kind of like load up registers with arguments.
09:00But if you have more arguments than you have registers, you know, there's only so many registers you can actually
09:05use to pass arguments into a function that you're calling.
09:08So, what if you run out of float registers, you run out of the general purpose registers, but you want
09:14to have more arguments than that?
09:15For example, if you look at the ABI, then the integer arguments, there's only six registers that we can use
09:23to pass integer or pointer arguments.
09:26So, if you wanted to have seven arguments or more, then, you know, you should be able to do that,
09:30but there has to be a different way to accomplish that.
09:33So, the seventh and beyond arguments will actually show up in the call frame.
09:38Oh, I guess RA should probably be blue, because the call frame, I'm just, you know, making that green.
09:43I'll say return address.
09:45So, the seventh and beyond arguments, args, we'll just say.
09:55So, the call frame contains the return address and any argument that's like the seventh argument and beyond.
10:01And if you don't have seven arguments, then there's not going to be, that's not going to be in the
10:05call frame.
10:06It'll just be blank at that point.
10:08Sometimes, when people are messing around with the stack pointer, they will kind of like save the stack pointer into
10:13the RBP register.
10:15So, basically, anytime you push a register in order to preserve it, then it will end up on the call
10:21frame, at least at the assembly level.
10:24So, suppose you're going to use the base pointer register.
10:27You probably want to preserve the base pointer before you start using it.
10:30So, we would push it onto the stack and it would become part of the call frame.
10:35Any registers that are designated as callee saved would also be pushed into the call frame, like R12.
10:41Oh, my God, R13, my penmanship.
10:46It's like it's the ones, the ones are always L's.
10:49Okay.
10:50And so forth, right?
10:51So, anything that's designated as a callee saved register that you intend to use would end up getting pushed onto
10:56the stack at the assembly level.
10:58And thus, it would end up in the call frame.
11:00But, you know, if you aren't going to use any callee preserved registers, then you don't need to preserve them.
11:05And therefore, they would not show up in the call frame.
11:08And let's see, end local variables.
11:10So, when you make local variables in assembly, you just basically end up pushing them to the stack.
11:17So, I'm going to put local variables here.
11:20And in a second, I'll show you how that kind of works in higher level languages, like on C++.
11:25That's why I wrote this down right here, line 12.
11:27We're making local variables.
11:30And let's see what else.
11:33I should emphasize that local variables are very different from heap variables.
11:37But basically, you can imagine now that at some point, let's say that when G was called,
11:44let's say execution came in here into main and then execution jumped into the F function.
11:51So, it jumped in there and then F wanted to call on G in order to get some information, right?
11:55So, we have a call frame just for G and we have a call frame just for F and we
12:00have a call frame just for main.
12:02And so, we have like three items on our stack at least just to get to that point of the
12:06program.
12:07And every call frame, you know, basically has this sort of information inside of it.
12:11The return address, so the function knows where to jump back to when it's time to return from the function.
12:18Seventh and beyond arguments, if any exist.
12:21The base pointer, if you happen to modify that at the assembly level.
12:25And then same thing for callee-saved registers and then local variables, which you could probably imagine more clearly.
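The pieces just listed can be summarized as a struct. This is strictly a conceptual model: a real x86-64 call frame is laid out implicitly by the compiler and the ABI, not declared as a type, and the field names here are mine:

```cpp
#include <cstdint>
#include <vector>

// Conceptual model only: names the pieces of a call frame listed above.
struct CallFrame {
    std::uintptr_t return_address;              // where to jump back to on return
    std::vector<std::uint64_t> overflow_args;   // 7th-and-beyond integer arguments
    std::vector<std::uint64_t> saved_registers; // callee-saved registers (RBP, R12, R13, ...)
    std::vector<std::uint64_t> locals;          // this call's local variables
};
```

A frame with no overflow arguments or saved registers would simply leave those vectors empty, matching the point that call frames grow and shrink depending on what the function needs.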
12:31So, now I'm just going to erase this call frame thing here.
12:35Oh God, this eraser needs to be improved.
12:38Oh my gosh, maybe I should make that bigger.
12:41I need like a little shortcut to make it bigger for a second.
12:44Because I don't really want to draw that thing on top again.
12:46That's just annoying.
12:47I don't want to do it.
12:48I won't do it.
12:49I won't do it.
12:52Okay, so imagine now that we are trying to sort of trace what's happening in our call graph, you know,
12:59in our program while we're calling functions.
13:02So, first imagine we have an empty stack.
13:05And again, if you don't understand stacks, you better go see my other video.
13:08Imagine we have an empty stack.
13:09The stack is not ever really going to be empty when you launch your program.
13:12It's going to be, there's going to be stuff on it already.
13:15But let's just pretend that we haven't put anything on the stack at this point.
13:19So, there would be, you know, a call frame just for the main function.
13:22I'm just going to write main here to say that this is the call frame for main.
13:26So, it contains all the information that main needs.
13:30And if you have a main function in the first place, then you're probably linking against the C runtime startup code,
13:34which means main definitely has to return somewhere inside of that runtime library.
13:39So, this kind of makes sense for our program.
13:42So, it's a call frame for main.
13:44And then when main calls on F, then another call frame gets created for F.
13:50And then when F calls on G, another call frame gets created for G.
13:55I don't know if I need to put these parentheses, but I guess I've already started to do that.
13:59And then G calls H.
14:02And then H sometimes calls itself.
14:04So, I'm going to put another call frame for H up here.
14:07Oh, my God.
14:09H's look like just crazy.
14:13Okay, that's a bit.
14:15Every time I lift up the pen.
14:17H calls itself.
14:18Sometimes we'll just say that it calls itself once in this case.
14:21And then H calls I.
14:22And then that's, let's just say that's the end of our little call graph.
14:25When I is finished doing whatever it's going to do, then it'll just return to the caller, right?
14:30So, notice how every single time we made a function call, we added a call frame for that particular instance
14:37of a call.
14:38And we called it a call frame.
14:40Something else that I should point out is that I have been describing to you a call frame as if
14:45it's a data structure with sort of like a fixed amount of data inside of it.
14:49And you would sort of blob that whole block of data on at the same time.
14:53That's not really how it works in assembly.
14:55In assembly, you kind of just push, you know, one tiny piece of data at a time.
14:59But in the abstract, you imagine that there's a call frame.
15:02Just keep that in mind.
15:04Anyway, so in the F function, you know, we've been making these calls.
15:08F and then G and then H and then H and then I.
15:11Eventually, when we got to the, let's say, what did I do wrong?
15:16Oh, I did something wrong.
15:18I put the local variables in the wrong function.
15:19Hang on a second.
15:20I want them to be, I mean, it would be okay to have them inside of the G function.
15:24But I really want them, I really want them to be inside the H function for a certain reason.
15:29So I'm just going to cut and paste here.
15:32And I'm going to do like local variables inside of the H function and not in the G function.
15:39And then G should maybe return an integer result.
15:42And then this return statement shouldn't even be in G.
15:45It's just going to be inside of H.
15:49And just, you know, pretend that I did something with the A and the B variables at some point before
15:54returning them.
15:56Modify A and B somehow.
15:58Now, maybe I can even put that comment inside of both of the blocks of the if.
16:05So like.
16:10So maybe if something happens, we'll modify them in one way.
16:13And if it doesn't happen, we'll modify them in some other way.
16:16And then at the very end, we return them just to kind of feel like we have more of an
16:19excuse to have return variables in the first place.
16:22And then when H returns something, now we have an excuse to kind of print out whatever H is doing.
16:27Okay.
16:28And the reason I moved all that inside of H and not in G is because I just want you
16:32to know that there are two instances of the H, you know, call instance on the call stack, which means
16:38the local variables, they're not linked.
16:42They're two distinct copies of each local variable inside of the stack.
16:46Think of it this way.
16:47When we call our H the first time, we have an integer A and an integer B.
16:54You could imagine that there's an integer A and an integer B sitting inside of the call frame somewhere, you
17:00know, being pushed onto the call stack.
17:05But the other called H also has a variable A and a variable B.
17:09But because those are local variables, changing one doesn't actually modify the other copy that is in the other call
17:17frame.
17:17So you could imagine this as A sub one and B sub one and then A sub two and B
17:24sub two.
17:24This is kind of important later on when you realize that your operating system is capable of launching multiple threads
17:30for your program.
17:32And local variables won't become race conditions, which is a topic for a totally different video.
17:38But for now, just understand these are totally separate instances.
17:41If I change A sub two, it's not changing A sub one and vice versa.
17:44And same thing for B.
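The two-instances-of-h point can be checked directly in C++. The names h, a, and b come from the whiteboard; the depth parameter and the concrete values assigned in the deeper call are mine, just to prove the frames are independent:

```cpp
// Sketch of the point above: each call to h gets its own copies of a and
// b in its own call frame; the deeper call's changes never touch the
// shallower call's locals.
int h(int depth) {
    int a = 5;
    int b = 6;
    if (depth == 0) {
        int deeper = h(depth + 1);  // a second, independent frame for h
        // this frame's a and b are untouched by whatever the deeper h did
        return a + b + deeper;      // 5 + 6 + whatever the deeper call returned
    }
    a = 500;                        // changes only *this* frame's a and b
    b = 600;
    return a + b;
}
```

The deeper call returns 1100, and the outer call still sees its own a == 5 and b == 6, so h(0) evaluates to 1111.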
17:47So anyway, we make all these calls and somewhere along the way, you know, we're creating a bunch of data
17:54and a bunch of call frames.
17:56Eventually, when we are done with the I function, when I wants to return here, we don't have an explicit
18:02return statement.
18:03But if you if you know C++ or higher level languages, if there's no return statement in most of the
18:08in most of these languages for a void function, it just returns when it reaches the end of the function.
18:13We don't really need to say return because it doesn't return anything.
18:16It just ends.
18:17So when I is ready to return, what happens is we look inside of its call frame and we look
18:23for the return address,
18:24which is inside of its call frame, the return address specifies exactly where we're supposed to return.
18:30For example, maybe we're supposed to return right here.
18:33Maybe we're supposed to return right here.
18:34Maybe to make things a little bit more interesting, let's put like a little code right after those two points.
18:41So maybe if we called H, then we increased A.
18:44And if not, then we increased B, something like that.
18:48So that means the the call frame for the first H function.
18:52Let's suppose, let's see if it called if it called itself, then that means this block right here is the
18:59first part of the if block.
19:01So the return address for H return address would actually be pointing to.
19:13Would actually be pointing to the next line, right?
19:15So when H called on H, it would look at line 21 or whatever the address was for that instruction
19:20to begin at, you know, increasing A.
19:23And it would just say that's the return address so that whenever H returns from itself, it ends up executing
19:29A plus plus.
19:29And then the same thing for I, so going back on I, I has a return address.
19:37So when we're inside of the I function, it's got its own call frame and it knows where it's supposed
19:42to return, which in this case would be that instruction right there or that statement right there.
19:47So whatever is the first instruction that implements that higher level language statement, that's going to be the return address
19:53inside the I call frame.
19:56So we look inside of the call frame for the return address.
19:58We know where to return.
20:01It's going to return to H at this point in time.
20:05And once we're done with that, then we just deallocate the call frame in assembly language.
20:13All we're really doing is moving the stack pointer and just sort of ignoring the call frame from that point
20:17forward.
20:18And we'll call it junk data.
20:19So the data is kind of actually still there, but we're just not using it.
20:22And then later, if we want to start adding more stuff on top of the stack, then it will just
20:26override that junk data.
20:27But for now, just imagine that it's gone anyway.
20:30So then we've returned from I back to H. And so that's line 26 right here.
20:37Imagine we perform this instruction, the B plus plus, and then execution falls through down to the very end, the
20:43return statement.
20:44So then same thing happens this version or I guess this version of H had its own.
20:51No, hang on a second. We returned from I.
20:54Yeah, yeah, this version of H knew that it was supposed to return.
21:00So that means execution will basically jump from here to right here.
21:07So it kind of jumps, you know, back to that next line like we talked about before when it returns
21:13based off the return address.
21:14Then again, you can imagine that that call frame is deallocated or just ignored as junk data.
21:18And the top of the stack is kind of like moving and then when that call on H finishes.
21:26So, you know, when a plus plus gets executed and then execution falls through down to the return statement, then
21:31again, H has its own return address in its in its call frame or that instance of H has its
21:37own call frame.
21:38Then it knows where to return and this time it knows to return to whoever called H in the very
21:44first place.
21:45So it's going to be I guess it's going to be like when it returns, it's going to go back
21:51to G.
21:54There's nothing after the C out H on G, but I guess you could imagine under the hood, you know,
21:59instead of like another statement, you can imagine more stuff is happening, right?
22:02Like when we called H and then we returned, then we had an expression that evaluated to something because H
22:08returned to value and then we would send that to the stream operator.
22:12So in assembly, there are many instructions happening after the call to H, but you could just imagine at this
22:17point, you know, it's just going to finish up that that existing statement and then move down to finish and
22:23return.
22:24So then again, you know, we just returned from H to G.
22:27So we deallocated that call frame.
22:29The top of the stack is now somewhere around there in that call frame.
22:32When G finishes and it returns, same thing.
22:35It's got a return address.
22:36So it knows where to return.
22:38And then top of the stack kind of goes down there and G knows to jump to whoever called it
22:45because of the return address.
22:47So it's going to be at this point in time.
22:48Again, there's more statements that happen after the return because there's a C out in a stream operator, you know,
22:54but, you know, basically when F is done, then again, it's going to use the return address and know where
22:59to jump back to, which is just going to be this next line right here.
23:02So when F was called originally, it like ended up calling H and then H, you know, called itself and
23:09H called itself and then it called I and then we returned from I and then we returned from H
23:14and return from H twice.
23:16And then eventually we returned to G and then we returned to F.
23:19Finally, after F and all of its calls were finished, then execution returned just to line four.
23:26And that's, that's basically saying like, oh, what happens when F is done?
23:31Let's look at its return address, pop that off the call stack.
23:35And then the top of the stack is basically just sitting inside of the main function.
23:40So, yeah, this is how we implement function calls in the machine.
23:45You know, there's, there's really no, if you think about it, you know, from a certain point of view, there's
23:50no such thing as functions inside the machine.
23:51Um, in assembly language, we certainly see that we have a label and then we have like a return instruction
23:58and we have a call instruction, but, uh, you know, there's, there's no like function.
24:02It's all just ones and zeros all over the place, right?
24:05So the idea of labels and functions and things, that's kind of an abstraction just for the human being with
24:10feelings to make it a little bit easier for us to program.
24:12So this is how we implement it with a call stack.
24:16We have a stack and we stack things on top of the stack and the things that we stack are
24:21called call frames.
24:22Call frames are kind of flexible because sometimes there's more data.
24:25Sometimes there's less data depending on what the function itself is doing, but it allows us to have a function
24:30call itself.
24:31Many times it allows us to have many different instances of function calls exist at the same time.
24:37Even if we're only looking at the, uh, the very top of the call stack in terms of execution and
24:42have all the local variables not conflict with each other.
24:45Uh, and then, you know, like we have a trail of breadcrumbs when we start returning and returning and returning,
24:50the stack makes it really, really easy.
24:51And so, you know, in the abstract, we can basically say that there are functions in our machines because this
24:58implementation helps make it happen.
25:00One last thing that I wanted to say before I go: just keep in mind that these
25:05local variables are on the stack, but dynamically allocated memory is not on the stack.
25:12Let me just show you what I'm talking about right here.
25:15You probably understand the heap already at this point.
25:17If you're watching this video, then you know a little bit about dynamic memory allocation.
25:22So let's say we have a function, void F.
25:27And inside of the F function we have an integer pointer P, and we will set
25:34P to a new instance of an integer.
25:36And maybe there's another integer A and an integer B.
25:40So just so you know, the variables A and B and also P, they're considered local variables because I just
25:47declared them.
25:48I declared a pointer on line four.
25:50I declared two integers on line seven.
25:52Maybe I should put those like at the top, I guess.
25:55Okay.
25:55So lines four and five, they declare local variables.
25:59The pointer is a special type of variable, right?
26:01That just points to a memory location.
26:03So the thing that it points to might not be a local variable, but the pointer itself is a local
26:09variable.
26:09So these three variables on lines four and five, A and B and P, those are on the stack.
26:15Any local variable gets put on the stack.
26:17And then when you allocate a new integer with dynamic memory, maybe you could imagine if you're a C programmer,
26:24this would be a call to malloc rather than new.
26:29Then what's happening under the hood is that the system goes and tries to find some free memory in a
26:34different area of your process called the heap, not the stack.
26:37It tries to find some free memory in the heap.
26:40Once it finds some, then it designates that as, you know, being in use.
26:43And then it returns to you the memory location of that new allocation.
26:47So in this case, we're allocating a new integer.
26:50So that's going to be 32 bits of data or four bytes.
26:54And so it just, you know, just finds four bytes somewhere in the heap.
26:57And it finds the memory location of the very first byte.
27:00And it returns that to the caller.
27:01And so P, its value is now going to be the memory location of the new integer that you allocated.
27:08But P itself is on the stack.
27:10So P is kind of weird, right?
27:11Like P exists on the stack, but it points to something in the heap.
27:15Keep in mind the difference.
27:16And then if we just let this function go to the very end without cleaning up that memory, we would
27:21have a memory leak.
27:23I should probably just do like a dedicated video on pointers for people who are interested in pointers in higher
27:27level languages later.
27:28But keep in mind that when the function ends and you pop the call frame off the call stack, then
27:34that automatically cleans up all the local variables.
27:37So A and B and P itself will be deallocated and not become memory leaks.
27:44However, the thing that P pointed to, that will be a memory leak.
27:47I guess that's a topic that's too advanced for this stack video.
27:50But I just wanted you to know the difference between something that exists on the stack and something that exists
27:56on the heap.
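Here's a small, self-contained C++ sketch of the F function described above (the variable names follow the video; the delete at the end is the cleanup that would prevent the leak):

```cpp
// p, a, and b are all local variables living on the stack,
// but the int that p points to is allocated on the heap.
int f() {
    int* p = new int(7); // heap allocation; p itself is on the stack
    int a = 1;           // plain local on the stack
    int b = 2;           // plain local on the stack
    int result = a + b + *p;
    delete p;            // without this line, the heap int would leak
    return result;       // a, b, and p vanish when the frame is popped
}
```

If you dropped the `delete p;` line, popping the call frame would still clean up a, b, and p themselves, but the four heap bytes p pointed to would be orphaned: a memory leak.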
27:57Okay, so then I think the last thing that I need to show you here is... let me see.
28:10Okay, so here's my favorite book.
28:12I love to pump up this book.
28:16It's awesome.
28:16The author is a brilliant and very kind man.
28:19He offers this book for free.
28:22It's got a copyleft license.
28:23You can literally just download it for free on his website or just get it from a friend.
28:28And it's legal because this is an open book, basically.
28:31It's called X86-64 Assembly Language Programming with Ubuntu by the brilliant Dr. Ed Jorgensen, PhD, or Professor Ed Jorgensen.
28:41This is the latest version that I'm working with right now.
28:43I think it's the latest version, anyway;
28:45he updates this thing all the time.
28:46It's always getting better.
28:48But if we look at this book, you can get a copy of it on your own, or you can
28:52just watch this video.
28:53I'm going to go to section 12.8.3,
28:57called The Call Frame.
29:05And here you can kind of see a description of some of the stuff that I talked about in terms
29:08of what sort of data you would expect to see in the call frame.
29:11But if you just look down a little bit, you can see that we push
29:18some preserved registers into the call frame,
29:22and we save the stack pointer into the RBP register.
29:30But before we do that, we push the old RBP register.
29:33And then notice how it shows that there's a return address here.
29:38The RIP register is actually the instruction pointer register,
29:43and the stack pointer is RSP.
29:45And then any additional arguments go in as well.
29:49So just keep in mind that we have a little layout here.
29:53And then somewhere else in the book, there was something I wanted to show you, so let me scroll for a moment.
30:12Just so you understand, this is the basic idea of the stack frame layout.
30:25Okay, I found the other diagram that I really wanted to show you.
30:30So, same book, x86-64 Assembly Language Programming with Ubuntu by Professor Ed Jorgensen, PhD.
30:38And if you look at section 9.3, there's another picture here that's kind of useful.
30:44And it basically shows, you know, the general idea of how information is laid out inside of your process.
30:50So when you launch a process in order to execute your program, then it's kind of laid out like this.
30:56There's like a little reserved area.
30:57And then there is the text section for assembly.
31:01You know, that's where you have all your instructions.
31:03And then there's the data section.
31:05And then there's uninitialized data, which is the BSS section if we're talking about Yasm assembly.
31:10And then notice how in this big blob of an area right here, we have the heap and the stack
31:17kind of sharing an area inside of available memory.
31:21It's important to understand that when you add things onto the process stack, like call frame data,
31:30you're actually moving to lower memory addresses.
31:32So the stack grows downward in memory.
31:35I think I'm going to cover this in another video, but basically right now, I just want to show
31:39you how the memory locations work.
31:42Let's pretend that we have a stack here with a couple of frames sitting on
31:46top of it.
31:48Suppose you had memory location 0x99 or something.
31:56Suppose for the sake of argument, we're only pushing one byte at a time.
31:59So it's easier for me to write.
32:01Then for the next thing that shows up a little higher on the stack, you
32:06would intuitively think that the memory location is 100 instead of 99, right?
32:10Or since this is hex, I'd increase that 9 to an A, making it 0x9A.
32:15But it actually goes down in memory.
32:18So, I mean, look at this diagram right here.
32:20The stack goes down in memory.
32:22So every time you push to the stack, you're pushing to a lower and lower memory location.
32:27So right here, I'm just going to write down 0x98,
32:30and it just keeps going down and down,
32:33one address at a time, right?
32:37And the heap is something that grows upward in memory.
32:42So the heap, when you call new allocations, like if you say new or malloc or, you know, some kind
32:47of dynamic memory, then its memory locations grow up and the stack grows down.
32:52And if they ever actually meet, then you've run out of memory.
32:56Just keep that in mind.
32:57But in modern systems, the heap can grow almost endlessly.
33:02I'm sure you've all run programs in the past and noticed that the program was consuming gigabytes of memory,
33:09right?
33:09But it's not like the process had gigabytes of memory allocated to it when you first opened it up.
33:13Like if you have a browser with like 100 million tabs open, I know some of you people do that.
33:18Then when you first open your browser, it doesn't really use that much memory.
33:22But then as you start opening more tabs, maybe some pages are doing some weird stuff and you get memory
33:26leaks.
33:26Then the heap starts to grow and grow and grow.
33:29And then you'll notice the process takes way more than it did in the beginning.
33:33So the heap can grow in an almost unbounded way.
33:36Of course, that is limited by your physical system RAM,
33:40and eventually, in the far-off future, we'll be limited by whatever 64 bits can get us.
33:46I don't think we're ever going to reach that in my lifetime.
33:48But, you know, I've been wrong many times before.
33:50However, the stack grows downward in memory, and it's limited to a relatively small size.
33:53You can actually overflow your stack pretty easily.
33:58So when the stack grows downward too much, you get a stack overflow and the program crashes.
34:04But when the heap grows, it can just be resized again and again and again;
34:09that little block can expand again and again.
34:14If you don't believe me, try writing a program with a recursive call that just goes on forever.
34:19Actually, don't do this on your boss's computer, because maybe it'll crash things;
34:27do it on your own personal computer.
34:31But imagine we have a function f, and you'll probably want to call f inside of main
34:38or something.
34:39So I'll just say int main, and we'll call f right away
34:42and then return zero. Then inside of f, you just call f, right?
34:45So this is like a horrible program.
34:47If you ran this, you'd crash the program very quickly, because every single time you call f, you're adding another
34:53call frame on top of the stack.
34:55Your call stack is going to just be filled with f call frames.
34:59And then, you know, it runs out of available memory locations to use and then you're just done.
35:05It just crashes the whole program.
35:07So let's see.
35:09I hope that by now you feel like an expert on the process stack and call frames.
35:15Thank you for watching this video.
35:17And I hope you learned a little bit and had some fun.
35:21I guess I'll see you in the next video.
35:24Bye.
35:30Hey, everybody.
35:31Thanks for watching this video again from the bottom of my heart.
35:33I really appreciate it.
35:34I do hope you did learn something and have some fun.
35:37If you could do me a small favor, could you please subscribe and follow this channel or
35:44these videos or whatever it is you do on the social media website that you're looking at right now?
35:49It would really mean the world to me and it'll help make more videos and grow this community.
35:53So we'll be able to do more videos, longer videos, better videos, or I'll just be able to keep making
35:58videos in general.
35:59So please do me a kindness and subscribe.
36:04You know, sometimes I'm sleeping in the middle of the night and I just wake up because I know somebody
36:08subscribed or followed.
36:09It just wakes me up and I get filled with joy.
36:11That's exactly what happens every single time.
36:13So you could do it as a nice favor to me or you could troll me if you want to
36:17just wake me up in the middle of the night.
36:18Just subscribe and then I'll just wake up.
36:21I promise that's what will happen.
36:24Also, if you look at the middle of the screen right now, you should see a QR code, which you
36:27can scan in order to go to the website, which I think is also named somewhere at the bottom of
36:32this video.
36:33And it'll take you to my main website where you can just kind of like see all the videos I
36:37published and the services and tutorials and things that I offer and all that good stuff.
36:44And if you have a suggestion for clarifications or errata or just future videos that you want to see, please
36:51leave a comment.
36:52Or if you just want to say, hey, what's up?
36:54What's going on?
36:55You know, just send me a comment, whatever.
36:56I also wake up for those in the middle of the night.
36:58I wake up in a cold sweat, and it would really mean the
37:03world to me.
37:04I would really appreciate it.
37:05So again, thank you so much for watching this video, and enjoy the cool music as I fade into
37:13the darkness, which is coming for us all.