00:00Have you ever wondered what's happening inside your computer when you load a
00:04program or video game? Well millions of operations are happening but perhaps the
00:09most common is simply just copying data from a solid-state drive or SSD into
00:16dynamic random access memory or DRAM. An SSD stores all the programs and data for
00:23long-term storage but when your computer wants to use that data it has to first
00:29move the appropriate files into DRAM, which takes time, hence the loading bar.
00:34Because your CPU works only with data after it's been moved to DRAM, DRAM is also
00:40called working memory or main memory. The reason why your desktop uses both SSDs
00:46and DRAM is because solid-state drives permanently store data in massive 3D
00:52arrays composed of a trillion or so memory cells yielding terabytes of storage
00:58whereas DRAM temporarily stores data in 2D arrays composed of billions of tiny
01:05capacitor memory cells yielding gigabytes of working memory. Accessing any section of
01:11cells in the massive SSD array and reading or writing data takes about 50
01:16microseconds whereas reading or writing from any DRAM capacitor memory cell takes
01:22about 17 nanoseconds which is 3,000 times faster. For comparison a supersonic jet
01:30going at Mach 3 is around 3,000 times faster than a moving tortoise so the speed of
01:3617 nanosecond DRAM versus 50 microsecond SSD is like comparing a supersonic jet to a
01:44tortoise. However, speed is just one factor. DRAM is limited to a 2D array and temporarily stores one bit per memory cell. For example, this stick of DRAM with eight chips holds 16 gigabytes of data
02:00whereas a solid state drive of a smaller size can hold two terabytes of data more than 100 times that of DRAM. Additionally DRAM requires power to continuously store and refresh
02:11the data held in its capacitors. Therefore computers use both SSDs and DRAM. By spending a few seconds of loading time to copy data from the SSD to the DRAM, and then pre-fetching, which is the process of moving data before it's needed, your computer can store terabytes of data on the SSD and then access the data from programs that were preemptively copied
02:18into the DRAM in a few nanoseconds.
02:42For example, many video games have a loading time to start up the game itself and then a separate loading time to load a save file. During the process of loading a save file, all the 3D models, textures and the environment of your game state are moved from the SSD into DRAM so any of it can be accessed in a few nanoseconds, which is why video games have DRAM capacity requirements. Just imagine:
03:11without DRAM, playing a game would be 3000 times slower.
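The roughly 3,000x figure can be checked with quick arithmetic, a sketch using the latencies quoted earlier:

```python
# Latency comparison using the figures from the video (illustrative numbers).
SSD_ACCESS_NS = 50_000    # ~50 microseconds per SSD access
DRAM_ACCESS_NS = 17       # ~17 nanoseconds per DRAM access

ratio = SSD_ACCESS_NS / DRAM_ACCESS_NS
print(f"DRAM access is roughly {ratio:,.0f}x faster")  # roughly 2,941x
```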
03:16We covered solid state drives in other videos so in this video we're going to take a deep dive into this 16 gigabyte stick of DRAM.
03:25First we'll see exactly how the CPU communicates and moves data from an SSD to DRAM.
03:32Then we'll open up a DRAM microchip and see how billions of memory cells are organized into banks and how data is written to and read from groups of memory cells.
03:43In the process we'll dive into the nanoscopic structures inside individual memory cells and see how each capacitor physically stores one bit of data.
03:53Finally, we'll explore some breakthroughs and optimizations such as the burst buffer and folded DRAM layouts that enable DRAM to move data around at incredible speeds.
04:06A few quick notes. First, you can find similar DRAM chips inside GPUs, smartphones and many other devices but with different optimizations.
04:16As examples, GPU DRAM or VRAM located all around the GPU chip has a larger bandwidth and can read and write simultaneously but operates at a lower frequency.
04:29And DRAM in your smartphone is stacked on top of the CPU and is optimized for smaller packaging and lower power consumption.
04:38Second, this video is sponsored by Crucial. Although they gave me this stick of DRAM to model and use in the video, the content was independently researched and not influenced by them.
04:51Third, there are faster memory structures in your CPU called cache memory and even faster registers.
04:58All these types of memory create a memory hierarchy with the main trade-off being speed versus capacity while keeping prices affordable to consumers and optimizing the size of each microchip for manufacturing.
05:12Fourth, you can see how much of your DRAM is being utilized by each program by opening your computer's resource monitor and clicking on memory.
05:21Fifth, there are different generations of DRAM and we'll explore DDR5.
05:27Many of the key concepts that we explain apply to prior generations, although the numbers may be different.
05:34Sixth, 17 nanoseconds is incredibly fast.
05:38Electricity travels at around one foot per nanosecond and 17 nanoseconds is about the time it takes for light to travel across a room.
05:47Finally, this video is rather long as it covers a lot of what there is to know around DRAM.
05:53We recommend watching it first at 1.25x speed and then a second time at 1.5x speed to fully comprehend this complex technology.
06:04Stick around because this is going to be an incredibly detailed video.
06:09To start, a stick of DRAM is also called a dual inline memory module or DIMM, and there are eight DRAM chips on this particular DIMM.
06:20On the motherboard, there are four DRAM slots and when plugged in, the DRAM is directly connected to the CPU via two memory channels that run through the motherboard.
06:31Note that the left two DRAM slots share these memory channels and the right two share a separate channel.
06:38Let's move to look inside the CPU at the processor.
06:42Along with numerous cores and many other elements, we find the memory controller which manages and communicates with the DRAM.
06:50There is also a separate section for communicating with SSDs plugged into the M.2 slots and with SSDs and hard drives plugged into SATA connectors.
07:01Using these sections, along with data mapping tables, the CPU manages the flow of data from the SSD to DRAM as well as from DRAM to cache memory for processing by the cores.
07:13Let's move back to see the memory channels.
07:16For DDR5, each memory channel is divided into two parts, channel A and channel B.
07:23These two memory channels, A and B, independently transfer 32 bits at a time using 32 data wires.
07:33Using 21 additional wires, each memory channel carries an address specifying where to read or write data
07:40and, using seven control signal wires, commands are relayed.
07:45The addresses and commands are sent to and shared by all four chips on the memory channel, which work in parallel.
07:52However, the 32-bit data lines are divided among the chips and thus each chip only reads or writes eight bits at a time.
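The way a 32-bit sub-channel transfer divides among four chips, eight bits each, can be sketched as follows. The byte-to-chip assignment here is an assumption for illustration:

```python
# Sketch: a 32-bit sub-channel transfer split into four 8-bit lanes, one per
# DRAM chip, per the description above. Which byte maps to which chip is an
# assumed ordering.
def split_transfer(word: int) -> list[int]:
    """Split a 32-bit word into four 8-bit lanes."""
    assert 0 <= word < 2**32
    return [(word >> (8 * i)) & 0xFF for i in range(4)]

lanes = split_transfer(0xDEADBEEF)
print([hex(b) for b in lanes])  # ['0xef', '0xbe', '0xad', '0xde']
```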
08:01Additionally, power for DRAM is supplied by the motherboard and managed by these chips on the stick itself.
08:09Next, let's open and look inside one of these DRAM microchips.
08:14Inside the exterior packaging, we find an interconnection matrix that connects the ball grid array at the bottom with a die which is the main part of this microchip.
08:25This 2-gigabyte DRAM die is organized into eight bank groups composed of four banks each, totaling 32 banks.
08:34Within each bank is a massive array, 65,536 memory cells tall by 8,192 cells across.
08:45Essentially rows and columns in a grid with tens of thousands of wires and supporting circuitry running outside each bank.
08:53Instead of looking at this die, we're going to transition to a functional diagram and then reorganize the banks and bank groups.
09:02In order to access 17 billion memory cells, we need a 31-bit address.
09:08Three bits are used to select the appropriate bank group, then two bits to select the bank.
09:14Next, 16 bits of the address are used to determine the exact row out of 65,000.
09:22Because this chip reads or writes 8 bits at a time, the 8,192 columns are grouped into sets of 8 memory cells that are all read or written together, and thus only 10 bits are needed for the column address.
09:38One optimization is that this 31-bit address is separated into two parts and sent using only 21 wires.
09:47First, the bank group, bank and row address are sent, and then after that the column address.
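The 31-bit address breakdown just described, 3 bits for the bank group, 2 for the bank, 16 for the row, and 10 for the column, can be sketched as a field split. The exact bit ordering inside a real DDR5 address is an assumption here:

```python
# Hedged sketch of splitting the 31-bit address into the fields described:
# 3 bits bank group, 2 bits bank, 16 bits row, 10 bits column. The bit
# positions chosen are an illustrative assumption, not the real DDR5 layout.
def split_address(addr: int) -> dict:
    assert 0 <= addr < 2**31
    return {
        "column":     addr & 0x3FF,           # low 10 bits
        "row":        (addr >> 10) & 0xFFFF,  # next 16 bits
        "bank":       (addr >> 26) & 0x3,     # next 2 bits
        "bank_group": (addr >> 28) & 0x7,     # top 3 bits
    }

# Row 27,524 and a sample column, bank, and bank group from the examples:
addr = (5 << 28) | (2 << 26) | (27524 << 10) | 598
print(split_address(addr))
```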
09:53Next, we'll look inside these physical memory cells, but first let's briefly talk about how these structures are manufactured as well as this video's sponsor.
10:03This incredibly complicated die, also called an integrated circuit, is manufactured on 300-millimeter silicon wafers, 2,500-ish dies at a time.
10:16On each die are billions of nanoscopic memory cells that are fabricated using dozens of tools and hundreds of steps in a semiconductor fabrication plant, or fab.
10:27This one was made by Micron, which manufactures around a quarter of the world's DRAM, including both NVIDIA's and AMD's VRAM in their GPUs.
10:37Micron also has its own product line of DRAM and SSDs under the brand Crucial, which, as mentioned earlier, is the sponsor of this video.
10:47In addition to DRAM, Micron is one of the world's leading suppliers of solid state drives, such as this Crucial P5 Plus M.2 NVMe SSD.
10:59By installing your operating system and video games on a Crucial NVMe solid state drive, you'll be sure to have incredibly fast loading times and smooth gameplay.
11:10And if you do video editing, make sure all those files are on a fast SSD like this one as well.
11:17This is because the main speed bottleneck for loading is predominantly limited by the speed of the SSD or hard drive where the files are stored.
11:26For example, this hard drive can only transfer data at around 150 megabytes a second, whereas this Crucial NVMe SSD can transfer data at a rate of up to 6,600 megabytes a second.
11:41Which, for comparison, is the speed of a moving tortoise versus a galloping horse.
11:47By using a Crucial NVMe SSD, loading a video game that requires gigabytes of DRAM is reduced from a minute or more down to a couple of seconds.
12:00Check out the Crucial NVMe SSDs using a link in the description below.
12:08Let's get back to the details of how DRAM works and zoom in to explore a single memory cell situated in a massive array.
12:17This memory cell is called a 1T1C cell and is a few dozen nanometers in size.
12:24It has two parts, a capacitor to store one bit of data in the form of electrical charges, or electrons, and a transistor to access and read or write data.
12:35The capacitor is shaped like a deep trench dug into silicon and is composed of two conductive surfaces separated by a dielectric insulator or barrier just a few atoms thick.
12:47Which stops the flow of electrons but allows electric fields to pass through.
12:52If this capacitor is charged up with electrons to 1 volt, it's a binary 1.
12:58And if no charges are present and it's at 0 volts, it's a binary 0.
13:03And thus, this cell only holds one bit of data.
13:07Designs of capacitors are constantly evolving.
13:10But in this trench capacitor, the depth of the silicon is utilized to allow for larger capacitive storage while taking up as little area as possible.
13:20Next, let's look at the access transistor and add in two wires.
13:25The word line wire connects to the gate of the transistor, while the bit line wire connects to the other side of the transistor's channel.
13:34Applying a voltage to the word line turns on the transistor.
13:38And while it's on, electrons can flow through the channel, thus connecting the capacitor to the bit line.
13:44This allows us to access and charge up the capacitor to write a 1 or discharge the capacitor to write a 0.
13:51Additionally, we can read the stored value in the capacitor by measuring the amount of charge.
13:57However, when the word line is off, the transistor is turned off.
14:01And the capacitor is isolated from the bit line, thus saving the data, or charge, that was previously written.
14:07Note that because this transistor is incredibly small, only a few dozen nanometers wide, electrons slowly leak across the channel.
14:17And thus, over time, the capacitor needs to be refreshed to recharge the leaked electrons.
14:23We'll cover exactly how refreshing memory cells works a little later.
14:28As mentioned earlier, this 1T1C memory cell is one of 17 billion inside this single die.
14:36And is organized into massive arrays called banks.
14:40So, let's build a small array for illustrative purposes.
14:44In our array, each of the word lines is connected in rows.
14:49And then the bit lines are connected in columns.
14:52Word lines and bit lines are in different vertical layers so one can cross over the other.
14:57And they never touch.
15:00Let's simplify the visual and use symbols for the capacitors and the transistors.
15:05Just as before, the word lines connect to each transistor's control gate in rows.
15:11And then all the bit lines and columns connect to the channel opposite each capacitor.
15:17As a result, when a word line is active, all the capacitors in only that row are connected to their corresponding bit lines,
15:24thereby activating all the memory cells in that row.
15:28At any given time, only one word line is active because if more than one word line were active,
15:35then multiple capacitors in a column would be connected to the bit line.
15:39And the data storage functionalities of these capacitors would interfere with one another, making them useless.
15:45As mentioned earlier, within a single bank there are 65,536 rows and 8,192 columns.
15:54And the 31-bit address is used to activate a group of just eight memory cells.
16:00The first five bits select the bank, and the next 16 bits are sent to a row decoder to activate a single row.
16:08For example, this binary number turns on the word line row 27,524, thus turning on all transistors in that row
16:18and connecting the 8,192 capacitors to their bit lines, while at the same time the other 65,000-ish word lines are all off.
16:29Here's the logic diagram for a simple decoder.
16:32The remaining 10 bits of the address are sent to the column multiplexer.
16:36This multiplexer takes in the 8,192 bit lines on the top and, depending on the 10-bit address,
16:43connects a specific group of 8 bit lines to the 8 input/output (I/O) wires at the bottom.
16:50For example, if the 10-bit address were this, then only the bit lines 4784 through 4791 would be connected to the I/O wires,
17:03and the rest of the 8,000-ish bit lines would be connected to nothing.
17:07Here's the logic diagram for a simple multiplexer.
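The multiplexer's selection in that example reduces to simple arithmetic, sketched here:

```python
# Sketch of the column multiplexer: a 10-bit column address selects which
# consecutive group of 8 of the 8,192 bit lines reaches the 8 I/O wires.
def select_bit_lines(column_address: int) -> range:
    assert 0 <= column_address < 1024   # 10-bit address
    start = column_address * 8          # 8 bit lines per column group
    return range(start, start + 8)

# Column address 598 connects bit lines 4784 through 4791, as in the example:
print(list(select_bit_lines(598)))
```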
17:11We now have the means of accessing any memory cell in this massive array.
17:15However, to understand the three basic operations, reading, writing, and refreshing,
17:21let's add two elements to our layout.
17:24A sense amplifier at the bottom of each bit line, and a read and write driver outside of the column multiplexer.
17:31Let's look at reading from a group of memory cells.
17:34First, the read command and 31-bit address are sent from the CPU to the DRAM.
17:41The first five bits select a specific bank.
17:44The next step is to turn off all the word lines in that bank, thereby isolating all the capacitors,
17:51and then pre-charge all 8,000-ish bit lines to 0.5 volts.
17:56Next, the 16-bit row address turns on a row, and all the capacitors in that row are connected to their bit lines.
18:04If an individual capacitor holds a 1 and is charged to 1 volt, then some charge flows from the capacitor onto the 0.5 volt bit line,
18:14and the voltage on the bit line increases.
18:16The sense amplifier then detects the slight change or perturbation of voltage on the bit line,
18:22amplifies the change, and pushes the voltage on the bit line all the way up to 1 volt.
18:28However, if a zero is stored in the capacitor, charge flows from the bit line into the capacitor,
18:35and the 0.5 volt bit line decreases in voltage.
18:39The sense amplifier then sees this change, amplifies it, and drives the bit line voltage down to 0 volts or ground.
18:47The sense amplifier is necessary because the capacitor is so small, and the bit line is rather long,
18:53and thus the capacitor needs to have an additional component to sense and amplify whatever value is stored.
19:00Now, all 8,000-ish bit lines are driven to 1 volt or 0 volts,
19:06corresponding to the stored charge in the capacitors of the activated row,
19:10and this row is now considered open.
19:13Next, the column select multiplexer uses the 10-bit column address to connect the corresponding 8 bit lines to the read driver,
19:21which then sends these 8 values and voltages over the 8 data wires to the CPU.
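The perturb-then-amplify read sequence above can be modeled as simple charge sharing between the cell and the pre-charged bit line. A toy sketch; the capacitance values are made-up illustrative numbers, not real device parameters:

```python
# Toy charge-sharing model of a DRAM read. Capacitances are illustrative;
# a real bit line is far larger than a cell, which is why the perturbation
# is tiny and a sense amplifier is needed.
C_CELL = 10.0       # cell capacitance (arbitrary units)
C_BITLINE = 100.0   # bit-line capacitance (much larger than the cell)

def bitline_voltage_after_sharing(cell_v: float, precharge_v: float = 0.5) -> float:
    """Bit-line voltage once the access transistor connects the cell."""
    total_charge = C_CELL * cell_v + C_BITLINE * precharge_v
    return total_charge / (C_CELL + C_BITLINE)

def sense(v: float, precharge_v: float = 0.5) -> int:
    """The sense amplifier drives the line to full rail based on the sign
    of the perturbation; here we just return the resulting bit."""
    return 1 if v > precharge_v else 0

v1 = bitline_voltage_after_sharing(1.0)  # stored 1 nudges the line up
v0 = bitline_voltage_after_sharing(0.0)  # stored 0 pulls the line down
print(round(v1, 3), round(v0, 3))        # ~0.545 and ~0.455
print(sense(v1), sense(v0))              # 1 0
```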
19:28Writing data to these memory cells is similar to reading, however, with a few key differences.
19:34First, the write command, address, and 8 bits to be written are sent to the DRAM chip.
19:41Next, just like before, the bank is selected, the capacitors are isolated, and the bit lines are pre-charged to 0.5 volts.
19:51Then, using a 16-bit address, a single row is activated, the capacitors perturb the bit line,
19:58and the sense amplifiers sense this and drive the bit lines to a 1 or 0, thus opening the row.
20:06Next, the column address goes to the multiplexer.
20:09But this time, because a write command was sent, the multiplexer connects the specific 8 bit lines to the write driver,
20:16which contains the 8 bits that the CPU had sent along the data wires and requested to write.
20:22These write drivers are much stronger than the sense amplifier, and thus they override whatever voltage was previously on the bit line,
20:30and drive each of the 8 bit lines to 1 volt for a 1 to be written, or 0 volts for a 0.
20:37This new bit line voltage overrides the previously stored charges or values in each of the 8 capacitors in the open row,
20:45thereby writing 8 bits of data to the memory cells corresponding to the 31-bit address.
20:51Three quick notes. First, as a reminder, writing and reading happen concurrently across all 4 chips on the shared memory channel,
20:59using the same 31-bit address and command wires, but with different data wires for each chip.
21:06Second, with DDR5 for a binary 1, the voltage is actually 1.1 volts. For DDR4, it's 1.2 volts.
21:16And prior generations had even higher voltages, with the bit line pre-charge voltages being half of these voltages.
21:24However, for DDR5, when writing or refreshing, a higher voltage of around 1.4 volts is applied and stored in each capacitor for a binary 1,
21:34because charge leaks out over time. However, for simplicity, we're going to stick with 1 and 0.
21:41Third, the number of bank groups, banks, bit lines, and word lines varies widely between different generations and capacities,
21:50but is always in powers of 2.
21:52Let's move on and discuss the third operation, which is refreshing the memory cells in a bank.
21:58As mentioned earlier, the transistors used to isolate the capacitors are incredibly small,
22:04and thus, charges leak across the channel. The refresh operation is rather simple,
22:10and is a sequence of closing all the rows, pre-charging the bit lines to 0.5 volts, and opening a row.
22:17To refresh, just as before, the capacitors perturb the bit lines, and then the sense amplifiers drive the bit lines and the capacitors
22:25in the open row fully up to 1 volt or down to 0 volts, depending on the stored value of each capacitor,
22:33thereby refilling the leaked charge. This process of row closing, pre-charging, opening, and sense amplifying
22:41happens row after row, taking 50 nanoseconds for each row, until all 65,000-ish rows are refreshed,
22:50taking a total of 3 milliseconds or so to complete. The refresh operation occurs once every 64 milliseconds
22:57for each bank, because that's statistically below the worst-case time it takes for a memory cell
23:03to leak too much charge to make a stored 1 turn into a 0, thus resulting in a loss of data.
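The refresh overhead follows directly from the figures above: 65,536 rows at about 50 nanoseconds each, with a full pass required every 64 milliseconds:

```python
# Refresh-overhead arithmetic using the figures from the transcript.
ROWS = 65_536
ROW_REFRESH_NS = 50
REFRESH_INTERVAL_MS = 64

pass_ms = ROWS * ROW_REFRESH_NS / 1e6           # ~3.3 ms for one full pass
fraction = pass_ms / REFRESH_INTERVAL_MS        # ~5% of the time refreshing
passes_per_second = 1000 / REFRESH_INTERVAL_MS  # ~16 passes per second
print(f"{pass_ms:.2f} ms per pass, {fraction:.1%} overhead, "
      f"{passes_per_second:.1f} passes/s")
```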
23:11Let's take a step back and consider the incredible amount of data that has moved through DRAM memory cells.
23:19These banks of memory cells handle up to 4,800 million requests to read and write data every second,
23:28while refreshing every memory cell in each bank, row by row, around 16 times a second.
23:34That's a staggering amount of data movement, and illustrates the true strength of computers.
23:40Yes, they do simple things like comparisons, arithmetic, and moving data around, but at a rate of billions of times a second.
23:50Now, you might wonder why computers need to do so much data movement.
23:55Well, take this video game for example. You have obvious calculations like the movement of your character and the horse,
24:02but then there are individual grasses, trees, rocks, and animals whose positions and geometries are stored in DRAM.
24:10And then environmental effects, such as lighting and shadows, change the colors and textures of each surface
24:17in order to create a realistic world.
24:20Next, we're going to explore breakthroughs and optimizations that allow DRAM to be incredibly fast.
24:28But, before we get into all those details, we would greatly appreciate it if you could take a second to hit that like button,
24:35subscribe if you haven't already, and type up a quick comment below, as it helps get this video out to others.
24:42Also, we have a Patreon and would appreciate any support.
24:46This is our longest and most detailed video by far, and we're planning more videos that get into the inner details of how computers work.
24:55We can't do it without your help, so thank you for watching and doing these three quick things. It helps a ton.
25:07The first complex topic which we'll explore is why there are 32 banks, as well as what the parameters on the packaging of DRAM are.
25:16After that, we'll explore burst buffers, subarrays, and folded DRAM architecture and what's inside the sense amplifier.
25:24Let's take a look at the banks.
25:27As mentioned earlier, opening a single row within a bank requires all these steps, and this process takes time.
25:33However, if a row were already open, we could read or write to any section of eight memory cells using only the 10-bit column address and the column select multiplexer.
25:44When the CPU sends a read or write command to a row that's already open, it's called a row hit or page hit, and this can happen over and over.
25:54With a row hit, we skip all the steps required to open a row and just use the 10-bit column address to multiplex a different set of eight columns or bit lines, connecting them to the read or write driver, thereby saving a considerable amount of time.
26:09A row miss is when the next address is for a different row, which requires the DRAM to close and isolate the currently open row and then open the new row.
26:19On a package of DRAM, there are typically four numbers specifying timing parameters regarding row hits, precharging, and row misses.
26:28The first number refers to the time it takes between sending an address with a row open, thus a row hit, to receiving the data stored in those columns.
26:37The next number is the time it takes to open a row if all the lines are isolated and the bit lines are precharged.
26:45Then the next number is the time it takes to precharge the bit lines before opening a row.
26:50And the last number is the time it takes between a row activation and the following precharge.
26:56Note that these numbers are measured in clock cycles.
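Since those four packaging numbers are in clock cycles, converting them to nanoseconds needs the module's clock. A sketch, assuming a hypothetical DDR5-4800 module with 40-39-39-76 timings; actual values vary by module:

```python
# Convert DRAM timing parameters from clock cycles to nanoseconds.
# DDR5-4800 with 40-39-39-76 timings is an assumed example, not a spec.
TRANSFER_RATE_MT_S = 4800                # "4800 MT/s" on the packaging
CLOCK_MHZ = TRANSFER_RATE_MT_S / 2       # double data rate: 2 transfers/clock
CYCLE_NS = 1_000 / CLOCK_MHZ             # ~0.417 ns per clock cycle

def cycles_to_ns(cycles: int) -> float:
    return cycles * CYCLE_NS

cl, trcd, trp, tras = 40, 39, 39, 76     # the four packaging numbers
print(f"row hit latency (CL): {cycles_to_ns(cl):.1f} ns")      # ~16.7 ns
print(f"row open time (tRCD): {cycles_to_ns(trcd):.1f} ns")
```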
27:00Row hits are also the reason why the address is sent in two sections.
27:04First, the bank selection and row address called RAS and then the column address called CAS.
27:11If the first part, the bank selection and row address, matches a currently open row, then it's a row hit.
27:18And all the DRAM needs is the column address and the new command.
27:22And then the multiplexer simply moves around the open row.
27:25Because of the time saved by accessing an open row, the CPU memory controller, programs, and compilers are optimized for increasing the number of subsequent row hits.
27:37The opposite, called thrashing, is when a program jumps around from one row to a different row over and over and is obviously incredibly inefficient, both in terms of energy and time.
27:49Additionally, DDR5 DRAM has 32 banks for this reason.
27:54Each bank's rows, columns, sense amplifiers, and row decoders operate independently of one another.
28:01And thus, multiple rows from different banks can be open all at the same time, increasing the likelihood of a row hit and reducing the average time it takes for the CPU to access data.
28:13Furthermore, by having multiple bank groups, the CPU can refresh one bank in each bank group at a time while using the other three, thus reducing the impact of refreshing.
28:25A question you may have had earlier is why are banks significantly taller than they are wide?
28:31Well, by combining all the banks together, one next to the other, you can think of this chip as actually being 65,000 rows tall by 262,000 columns wide.
28:45And by adding 31 equally spaced divisions between the columns, thus creating banks, we allow for much more flexibility and efficiency in reading, writing, and refreshing.
28:57Also, note that on the DRAM packaging are its capacity in gigabytes, the number of millions of data transfers per second, which is two times the clock frequency, and the peak data transfer rate in megabytes per second.
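The peak transfer rate on the packaging follows from the transfer rate and the module width. A sketch, assuming a DDR5-4800 module; the two 32-bit sub-channels described earlier give 8 bytes per transfer:

```python
# Peak transfer rate arithmetic: transfers per second times bytes per
# transfer. DDR5-4800 is an assumed example module.
transfers_per_second = 4800 * 1_000_000  # 4800 MT/s
bytes_per_transfer = 8                   # 2 sub-channels x 32 data wires
peak_mb_per_s = transfers_per_second * bytes_per_transfer // 1_000_000
print(peak_mb_per_s)                     # 38400 MB/s peak
```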
29:13The next design optimization we'll explore is the burst buffer and burst length.
29:19Let's add a 128-bit read and write temporary storage location called a burst buffer to our functional diagram.
29:27Instead of eight wires coming out of the multiplexer, we're going to have 128 wires that connect to these 128-bit buffer locations.
29:37Next, the 10-bit column address is broken into two parts.
29:41Six bits are used for the multiplexer and four bits are for the burst buffer.
29:46Let's explore a reading command.
29:48With our burst buffer in place, 128 memory cells and bit lines are connected to the burst buffer using the six column bits, thereby temporarily loading or caching 128 values into the burst buffer.
30:03Using the four bits for the buffer, eight quickly accessed data locations in the burst buffer are connected to the read drivers and the data is sent to the CPU.
30:13By cycling through these four bits, all 16 sets of eight bits are read out and thus the burst length is 16.
30:21After that, a new set of 128-bit lines and values are connected and loaded into the burst buffer.
30:28There is also a write burst buffer which operates in a similar way.
30:33The benefit of this design is that 16 sets of eight bits per microchip, totaling 1024 bits across all eight chips, can be accessed and read or written extremely quickly as long as the data is all next to one another.
30:47But at the same time, we still have the granularity and ability to access any set of eight bits if our data requests jump around.
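The 10-bit column address split just described, 6 bits for the multiplexer and 4 bits for the burst buffer, can be sketched as follows. Which bits serve which role is an assumption for illustration:

```python
# Sketch of the burst-buffer column addressing: 6 bits pick which group of
# 128 bit lines loads into the buffer, 4 bits pick the byte slot within it.
# Treating the high bits as the multiplexer bits is an assumed convention.
def split_column_address(col: int) -> tuple[int, int]:
    assert 0 <= col < 1024
    mux_bits = col >> 4      # 6 bits: one of 64 groups of 128 bit lines
    burst_bits = col & 0xF   # 4 bits: one of 16 byte slots in the buffer
    return mux_bits, burst_bits

# A burst of length 16 walks every byte slot of one buffered group:
group = 37
burst_cols = [group * 16 + i for i in range(16)]
assert all(split_column_address(c) == (group, i)
           for i, c in enumerate(burst_cols))
print(split_column_address(598))  # (37, 6)
```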
30:56The next design optimization is that this bank of 65,536 rows by 8192 columns is rather massive and results in extremely long word lines and bit lines, especially when compared to the size of each trench capacitor memory cell.
31:16Therefore, the massive array is broken up into smaller subarrays, 1024 by 1024 cells each, with intermediate sense amplifiers below each subarray, subdivided word lines, and a hierarchical row decoding scheme.
31:32By subdividing the bit lines, the distance and amount of wire that each tiny capacitor is connected to as it perturbs the bit line to the sense amplifier is reduced and thus the capacitor doesn't have to be as big.
31:46By subdividing the word lines, the capacitive load from 8000-ish transistor gates and channels is decreased and thus the time it takes to turn on all the access transistors in a row is decreased.
31:59The final topic we're going to talk about is the most complicated.
32:03Remember how we had a sense amplifier connected to the bottom of each bit line?
32:07Well, this optimization has two bit lines per column going to each sense amplifier and alternating rows of memory cells connected to the left and right bit lines, thus doubling the number of bit lines.
32:20When one row is active, half of the bit lines are active while the other half are passive and vice versa when the next row is active.
32:28Moving down to see inside the sense amplifier, we find a cross-coupled inverter. How does this work?
32:35Well, when the active bit line is a 1, the passive bit line will be driven by this cross-coupled inverter to the opposite value of 0.
32:43And when the active is a 0, the passive becomes a 1.
32:47Note that the inverted passive bit line isn't connected to any memory cells and thus it doesn't mess up any stored data.
32:54The cross-coupled inverter makes it such that these two bit lines are always going to be opposite one another and they're called a differential pair.
33:03There are three benefits to this design. First, during the pre-charge step, we want to bring all the bit lines to 0.5 volts and by having a differential pair of active and passive bit lines,
33:15the easiest solution is to disconnect the cross-coupled inverters and open a channel between the two using a transistor.
33:23The charge easily flows from the 1 bit line to the 0 and they both average out and settle at 0.5 volts.
33:31The other two benefits are noise immunity and a reduction in parasitic capacitance of the bit line.
33:37These benefits are related to the fact that by creating two oppositely charged electric wires with electric fields going from one to the other,
33:45we reduce the amount of electric fields emitted in stray directions and relatedly increase the ability of the sense amplifier to amplify one bit line to 1 volt and the other to 0 volts.
33:58One final note is that when discussing DRAM, one major topic is the timing of addresses, command signals and data and the related acronyms DDR or double data rate and SDRAM or synchronous DRAM.
34:13These topics were omitted from this video because it would have taken an additional 15 minutes to properly explore.
34:20That's pretty much it for the DRAM and we're grateful you made it this far into the video.
34:29We believe the future will require a strong emphasis on engineering education and we're thankful to all our Patreon and YouTube membership sponsors for supporting this dream.
34:39If you want to support us on YouTube memberships or Patreon, you can find the links in the description.
34:46A huge thanks goes to Nathan, Peter and Jacob who are doctoral students at the Florida Institute for Cybersecurity Research for helping to research and review this video's content.
34:59They do foundational research on finding the weak points in device security and whether hardware is compromised.
35:06If you want to learn more about the FICS graduate program or their work, check out the website using the link in the description.
35:14This is Branch Education and we create 3D animations that dive deep into the technology that drives our modern world.
35:22Watch another Branch video by clicking one of these cards or click here to subscribe.
35:28Thanks for watching to the end.