Understanding PC Memory – CompTIA A+ 220-801: 1.3


PC memory consists of many different operating features and specifications. In this video, you’ll learn about memory speeds, memory latency, error correcting memory, multi-channel memory, and much more.



In our computers, the memory architecture is very important, because there’s so much communication that takes place between the CPU and the memory modules themselves. Our memory is there, and it’s usually managed by something called the Northbridge. You might also hear this referred to as the Memory Controller Hub. This Memory Controller Hub manages the communication process between the CPU and the memory itself.

On our most modern CPUs, we’ve even taken the memory controller aspects of the Northbridge and integrated them directly into the CPU, so that we can go directly from the CPU to the memory itself, thereby making that process even faster.

There are a lot of different kinds of memory that we might use inside of our computer. Some of it is on the CPU itself, inside of that chip, and is used to store information. Those are called registers inside of the CPU. There’s not a lot of space set aside for registers; there’s just enough for that CPU to be able to operate. And every CPU has a certain set of registers that can be used.

Another type of memory that is often on the CPU itself, and sometimes just off the CPU, is cache memory. There are level one, level two, and level three caches that you might find. This cache is generally built from static RAM. It’s very, very fast memory, because of the speed requirements between the CPU itself and the dynamic RAM that’s inside of our computers.

This dynamic RAM is where a lot of the information is stored on our system. Whenever we have 2 gig or 4 gig or 8 gigabytes of memory, we’re really referring to this dynamic memory that’s inside of our computer. Sometimes we’ll set aside some of the memory inside of our computer to use for paging or to use for virtual memory. So there’s many places inside of our computer, either directly on the CPU or on the motherboard itself, where we’re taking advantage of that memory.

Whenever we’re transferring information in and out of memory, we’ll sometimes refer to the bandwidth, or the amount of data that we can move in and out of our memory systems. This is related to the width of the memory bus, the total number of bits or bytes that we can transfer on a single clock cycle of the computer.

If you ever look at the numbers in the specifications for your computers, especially as it’s related to the front side bus– or FSB– that’s generally referring to the total amount of information that can be transferred during a normal clock cycle. And sometimes you’ll hear that referred to as the bandwidth, or the amount of information that could be transferred during that time frame.

Sometimes we’re referring to bandwidth as the total number of bits that can be transferred back and forth during a normal clock cycle. That width might be 8, 16, or 32 bits, and on the most modern memory we’re transferring 64 bits at a time in and out of that memory.
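
As a rough sketch of that math (the numbers below are illustrative and not from the video), peak bandwidth is just the bus clock multiplied by the width of the bus in bytes:

```python
# Illustrative peak bandwidth math (assumed example numbers):
# peak transfer rate = bus clock * bus width in bytes, one transfer per cycle.
bus_clock_mhz = 100                    # e.g. classic PC100 SDRAM
bus_width_bits = 64                    # a 64-bit wide memory bus

peak_mb_per_s = bus_clock_mhz * (bus_width_bits // 8)
print(peak_mb_per_s)                   # -> 800 megabytes per second
```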

If you ever look at the memory itself, there are a lot of different chips on the memory modules. The total number of chips is really irrelevant to how much information is going to be transferred back and forth. That specification is simply built into the architecture of the memory modules and the motherboard that you happen to be using.

If you’re looking at the specifications of memory that’s on your motherboard, or maybe you need to upgrade the memory that’s inside of your system, one of the primary specifications you’ll find is the clock speed of the memory. That’s because with Synchronous Dynamic Random Access Memory, the memory is synchronized to the clock rate of the memory bus itself. So you’ll see a throughput number associated with that.

If we’re talking about SDRAM, the number that’s associated with the memory, like PC100, means that the memory clock rate is 100 megahertz. DDR, DDR2, and DDR3 memory do not use the memory bus clock rate as the designation of the memory. If you’re talking about DDR memory, it will have a PC in front of it. For DDR2, it’s PC2. And for DDR3, it’s PC3.

And then the number that’s immediately after that refers to the throughput of that memory in a single clock cycle, and it’s measured in megabytes per second. So PC-1600 means that this is DDR memory and it’s rated at a throughput of 1,600 megabytes per second. If it’s a PC3-6400, that’s DDR3 memory, and it’s rated with a throughput of 6,400 megabytes per second.
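
Here’s a rough sketch of how those module names map back to the clock rate (illustrative numbers): DDR transfers data on both edges of the clock, and each transfer moves 64 bits, or 8 bytes.

```python
# Illustrative DDR throughput math: transfers per second = 2 * bus clock,
# and each transfer moves 64 bits (8 bytes).
def ddr_throughput_mb_per_s(bus_clock_mhz):
    return bus_clock_mhz * 2 * 8

print(ddr_throughput_mb_per_s(100))    # DDR-200  -> 1600 MB/s, sold as PC-1600
print(ddr_throughput_mb_per_s(400))    # DDR3-800 -> 6400 MB/s, sold as PC3-6400
```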

When you’re purchasing memory or upgrading the memory in a computer, another thing you’ll find is something called a latency number. It’s usually referred to as a CAS number, which stands for Column Address Strobe, or sometimes Column Address Select. The specification is usually abbreviated as CL, which stands for CAS latency. This is the number of clock cycles between the time when a request is made to the memory and the time when you start getting data back from the memory bus.

So the lower the number, the less latency you’re going to have, and the faster that communication is going to be. For example, a DDR2 memory module rated at a 667 megahertz speed with a CL of 4 is going to be faster than the same speed memory with a CL of 5. The lower the latency, the faster the data comes back.
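
To put that in perspective (a rough sketch with assumed numbers), you can convert CAS latency from clock cycles into time by dividing by the bus clock. DDR2-667 runs its I/O bus at roughly 333 megahertz:

```python
# Converting CAS latency from clock cycles to time (assumed, illustrative numbers).
bus_clock_mhz = 333                    # approximate DDR2-667 I/O bus clock
for cl in (4, 5):
    latency_ns = cl / bus_clock_mhz * 1000
    print(f"CL{cl}: about {latency_ns:.1f} ns")
# CL4 -> about 12.0 ns, CL5 -> about 15.0 ns: same clock rate, but CL4 answers sooner
```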

Another type of memory you’ll run into, especially in really important systems like web servers or database servers, is memory that’s able to check itself. One common type is called parity memory. That is memory that has an additional parity bit on the memory module itself, and it’s constantly checking the communication in and out of that memory module. If anything comes through and it does not match the parity, then it’s going to flag an error and stop communication so that that particular error does not propagate itself to the rest of the system. It can’t fix the problem, but it can stop the process and give you an opportunity to restart things.
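
Here’s a minimal sketch of the idea behind a parity check (illustrative only, not how a real memory controller is implemented): an extra bit makes the count of one bits even, so a single flipped bit can be detected, but not located or repaired.

```python
# Even parity, sketched in software: the stored parity bit makes the total
# number of 1 bits even, so a single flipped bit can be detected (not fixed).
def parity_bit(byte):
    return bin(byte).count("1") % 2    # 1 if the data has an odd number of 1 bits

stored_data = 0b1011_0010
stored_parity = parity_bit(stored_data)

corrupted = stored_data ^ 0b0000_1000  # simulate one bit flipping in the module
if parity_bit(corrupted) != stored_parity:
    print("Parity error: halt before the bad data spreads")
```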

On very important servers, you’ll find a different kind of memory called Error Correcting Code memory. As the name implies, this is very similar to parity memory, but it can correct errors itself, which means if it sees an error, it will correct that error and still allow the process to continue. If you are running a database server or some other type of machine that constantly needs to have all of the uptime possible, where we never want downtime for that system, it’s probably going to use something like Error Correcting Code memory.
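
As a toy illustration of how single-bit correction can work, here is a classic Hamming(7,4) sketch. This is not the exact scheme any particular ECC module uses (real modules protect 64 data bits with extra check bits), but it shows the same idea: the check bits point to the bad bit so it can be flipped back.

```python
# Toy Hamming(7,4) single-error correction, the same idea behind ECC memory.
# Codeword positions 1..7 hold: p1 p2 d1 p3 d2 d3 d4.
def encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4               # covers positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4               # covers positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4               # covers positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def correct(c):
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3 # 0 = no error, else the 1-based error position
    if syndrome:
        c[syndrome - 1] ^= 1        # flip the bad bit back
    return [c[2], c[4], c[5], c[6]] # recover the original data bits

word = encode([1, 0, 1, 1])
word[5] ^= 1                        # simulate a single-bit memory error
print(correct(word))                # -> [1, 0, 1, 1], corrected transparently
```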

Some motherboards may be configured to use multi-channel memory. This is memory that reaches its maximum throughput when we fill up two or even three slots of memory on the motherboard. These modules are usually installed in pairs or trios, and the memory slots on the motherboard are color-coded so that we can see exactly where to put them.

So if we had three memory modules, we would try to find three memory modules that were exactly the same, and we would put them into the colored slots that matched. We wouldn’t simply put them into the first three slots; we would install them so that we can maximize the memory bus on that computer. You’ll see these colored slots very often when you run into multi-channel memory, and that should be your cue to make sure that you install the memory in pairs or in trios, depending on what that motherboard requires.
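
A rough sketch of why the matched pairs matter (illustrative numbers): each channel is its own 64-bit path to the memory controller, so filling two channels roughly doubles the peak bandwidth of one.

```python
# Illustrative dual-channel math: two populated channels, two 64-bit paths.
single_channel_mb_per_s = 6400               # e.g. one PC3-6400 module
dual_channel_mb_per_s = 2 * single_channel_mb_per_s
print(dual_channel_mb_per_s)                 # -> 12800 MB/s peak with a matched pair
```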

When you’re looking at specifications of memory, you may also see it referred to as single-sided memory and double-sided memory. This does not mean that the chips themselves are on one side of the module or on both sides of the module. That would be much too easy, of course. This is really referring to how the memory is accessed. It uses something called ranks of memory. The memory modules– the memory chips on the module itself– are arranged into groups.

Sometimes there is a single rank that is accessed by the memory controller. Sometimes the memory controller can access multiple ranks on a particular memory module. When there are two ranks on a memory module, it’s called double-sided. If there’s only one rank of memory on a memory module, it’s called single-sided. And if we look at some of the documentation we’d find, they may be called rows, they may be called sides, or they may be called ranks, but it’s all referring to whether the module is single-sided or double-sided.

If you want to see a good practical example of this, you can go to Intel’s website. There is an 875P Chipset Memory Configuration Guide there. It has a very long URL associated with it, but it’s a PDF file that you can download that talks about one rank being a single-sided DIMM and two ranks being a double-sided DIMM. That’s one good example of where you might find the memory and the different ranks that would be used.

So check your motherboard documentation to see what type of memory it’s using, and then you can go back and look at the memory configuration guide for your motherboard to determine if it’s single-sided memory or double-sided memory. In your documentation, you may see a table like this that talks about the different speeds of the memory, such as DDR 266 or 333.

It talks about the total number of DIMMs that you would put inside of your computer, and it talks about the number of ranks per DIMM that can be accessed. If there are two ranks, it’s a double-sided DIMM. If there is one rank, the documentation here clearly states that it’s a single-sided DIMM. So don’t be thrown by the terms single-sided and double-sided into thinking they refer to the physical sides of the memory module. We’re really talking about how the memory controller is accessing different parts of that memory.