Understanding CPU Characteristics – CompTIA A+ 220-801: 1.6

Modern CPUs provide a number of advanced computing characteristics. In this video, you’ll learn about CPU speeds, caching, hyperthreading, virtualization support, integrated GPUs, and much more.

<< Previous Video: An Overview of CPU Socket Types | Next: CPU Cooling Techniques >>

If you look at the specifications for a computer’s CPU, you’ll see that it’s rated by speed. That speed is usually measured in hertz, where one hertz is one cycle per second. So if you have a CPU rated in megahertz, that’s the number of millions of cycles it can perform in a second. And a gigahertz processor is one that can perform billions of cycles in one second.

Those cycles, as they go through, give us an idea of how fast the CPU is. But of course, the actual speed of the computer relies on a lot of different factors. You can look at things such as the architecture of the CPU itself and how it’s able to handle all of the different processes inside of it. There’s the speed of the bus, the cache sizes, the capabilities of the operating system; there are so many variables involved.

You can’t just look at the CPU speed to determine how fast a computer is really going to be. And most CPU manufacturers recognize this. They’re trying to move their marketing away from specifying how fast a CPU is and instead talking about the overall capabilities of the processor itself.

This makes it a little more difficult for us to determine the actual performance of a processor, of course. So you may want to find a test that works for you. There are a number of benchmarking utilities out on the internet. Or maybe it’s loading a particular spreadsheet or running a certain program that helps you understand the exact performance you’re going to get from this processor.
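If you want to roll your own quick test, a simple wall-clock benchmark is easy to sketch. This Python example is just an illustration, not a substitute for a real benchmarking utility; the workload and run count are arbitrary choices made up for the demo.

```python
import time

def benchmark(task, runs=5):
    """Run a workload several times and return the best wall-clock time."""
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        task()
        best = min(best, time.perf_counter() - start)
    return best

# A made-up workload: sum the first million integers.
elapsed = benchmark(lambda: sum(range(1_000_000)))
print(f"best of 5 runs: {elapsed:.4f} s")
```

Taking the minimum of several runs helps filter out interference from other processes sharing the CPU, which is exactly the kind of variable that makes raw clock speed a poor predictor on its own.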

Another important specification in today’s modern processors is the number of cores inside of the chip itself. We used to have older-style processors that handled one instruction, and one instruction only, at a time. These days, multiple sets of instructions can execute simultaneously, each on a different core of the processor.

There might be two cores for a dual-core or four cores in a quad-core processor or even more in the same physical chip. And these act as independent processing units. They even have their own cache memory available.

And you can see an example of this here. You might have one physical chip. This is, for instance, a dual-core, which has two separate CPUs inside of it. It has perhaps its own L1 cache inside of each individual core. And then there might for instance be a shared L2 cache that’s outside of the cores themselves.

Here’s an example of a multi-core processor. This is Intel’s Sandy Bridge processor series. This is a quad-core. You can see the four cores right there across the top. These four cores have inside of them their own L1 and L2 cache.

And you can see down here at the bottom is a shared L3 cache. So you have a lot of computing capability, all in each one of these cores. So this processor can effectively be performing calculations on four separate instructions at the same time.
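To actually benefit from those four cores, software has to split its work into independent pieces that can run at the same time. Here’s a sketch in Python that divides a summation across one worker process per logical processor; the workload is made up purely for illustration.

```python
import os
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum the integers in [lo, hi) -- one independent chunk of the work."""
    lo, hi = bounds
    return sum(range(lo, hi))

if __name__ == "__main__":
    cores = os.cpu_count() or 1          # logical processors reported by the OS
    n = 4_000_000
    step = n // cores
    # Split [0, n) into one half-open range per core.
    chunks = [(i * step, n if i == cores - 1 else (i + 1) * step)
              for i in range(cores)]
    with Pool(processes=cores) as pool:
        total = sum(pool.map(partial_sum, chunks))
    assert total == n * (n - 1) // 2     # same answer as a single-core sum
    print(f"summed 0..{n - 1} across {cores} worker processes")
```

Each worker here behaves like the independent processing units described above: it gets its own chunk of data and computes with no coordination until the results are combined at the end.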

We’ve talked a little bit about that cache, but what is that really doing for us? Cache memory is really, really fast memory. And we’re storing the most used instructions or the most used data within those caches. So instead of having to go all the way out to memory and pull that information in from memory, we’ve got it right there in the processor itself.

So I mentioned that you may have, for instance on that Sandy Bridge processor, each core has within it a first level cache and a second level cache to give even faster performance. And then there’s even a shared L3 cache right here on the processor itself. Sometimes these cache memory sections are just off the processor. In the case of the Sandy Bridge, we’ve put the L3 cache right on the CPU itself.
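You can sometimes observe cache behavior from software by comparing memory access patterns. This Python sketch traverses the same matrix row by row and then column by column; on many machines the row-major walk is faster because consecutive elements tend to be cache-friendly, but in an interpreted language the timing also includes indexing overhead, so treat this as a rough illustration rather than a precise cache measurement.

```python
import time

N = 1_000
matrix = [[1] * N for _ in range(N)]   # N x N matrix of ones

def row_major():
    """Walk the matrix a row at a time -- accesses follow the storage order."""
    total = 0
    for row in matrix:
        for value in row:
            total += value
    return total

def col_major():
    """Walk the matrix a column at a time -- each access jumps to a new row."""
    total = 0
    for j in range(N):
        for i in range(N):
            total += matrix[i][j]
    return total

for name, fn in (("row-major", row_major), ("column-major", col_major)):
    start = time.perf_counter()
    result = fn()
    print(f"{name}: {time.perf_counter() - start:.3f}s (sum = {result})")
```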

Another term you may hear when working with the performance of a CPU is something called hyper-threading. This is an Intel term that you’ll also see abbreviated as HTT. It takes a single CPU and it splits it up to make it look like there are actually two processors inside of that CPU. It does not double the performance of a system. But it does give you a little boost of speed.

So as the processor is waiting for a retrieval from memory, at the same time maybe there’s another instruction that it can do in the meantime with data that it already has. And ultimately, you get a performance increase of somewhere around 15% to 30%. Your operating system itself has to be written for HTT. And these days most modern operating systems can take advantage of this Intel hyper-threading technology.
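One way to see hyper-threading from software is to compare the number of logical processors the operating system reports with the number of physical cores. This Python sketch assumes a Linux-style /proc/cpuinfo for the physical count and falls back gracefully elsewhere; the parsing is best-effort, for illustration only.

```python
import os

def logical_cores():
    """Logical processors the OS sees -- hyper-threads count individually."""
    return os.cpu_count() or 1

def physical_cores():
    """Distinct physical cores parsed from /proc/cpuinfo (Linux only).

    Returns None when the file is unavailable or the fields are missing,
    e.g. on a non-Linux system or inside some virtual machines.
    """
    try:
        with open("/proc/cpuinfo") as f:
            text = f.read()
    except OSError:
        return None
    cores = set()
    phys = core = None
    for line in text.splitlines() + [""]:   # trailing "" flushes the last block
        if line.startswith("physical id"):
            phys = line.split(":", 1)[1].strip()
        elif line.startswith("core id"):
            core = line.split(":", 1)[1].strip()
        elif not line.strip() and phys is not None:
            cores.add((phys, core))
            phys = core = None
    return len(cores) or None

print("logical processors:", logical_cores())
print("physical cores:", physical_cores())
```

On a hyper-threaded system you’d expect the logical count to be twice the physical count; when the two match, the extra hardware threads either aren’t present or aren’t enabled.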

You may be familiar with virtualization technologies that you use in your desktop or that we’re using in data centers these days. And it’s a technology that has evolved through the years. It used to be that we would run multiple operating systems in our computer and the entire process was emulated in software. It gave us some limited performance. But at least we could run multiple operating systems at the same time, right there on our desktops or on our server.

Well, to take advantage of this capability, the processor manufacturers began integrating virtualization hardware right into the processor itself. So instead of trying to simulate or emulate that virtualization in software, we can have the processor of our computer do that for us. Intel calls this Intel Virtualization Technology, or VT. And AMD calls this AMD Virtualization, or AMD-V.
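On Linux, you can check whether the processor advertises these hardware virtualization extensions by looking for the vmx (Intel VT) or svm (AMD-V) flags in /proc/cpuinfo. A small best-effort sketch in Python:

```python
def virtualization_flags():
    """Return the hardware virtualization flags found in /proc/cpuinfo.

    'vmx' indicates Intel VT, 'svm' indicates AMD-V. Returns None when the
    file is unavailable (e.g. on a non-Linux operating system).
    """
    try:
        with open("/proc/cpuinfo") as f:
            text = f.read()
    except OSError:
        return None
    for line in text.splitlines():
        if line.startswith("flags"):
            words = line.split(":", 1)[1].split()
            return {w for w in ("vmx", "svm") if w in words}
    return set()

flags = virtualization_flags()
print("hardware virtualization flags:", flags or "none found")
```

Note that a missing flag doesn’t always mean the hardware lacks the feature; it may simply be disabled in the BIOS or hidden by a hypervisor.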

In today’s CPUs, you’ll often see them referred to as a 32-bit processor or a 64-bit processor. This refers to the amount of data that we’re able to deal with at any one particular time, the width of the data that we have going through the system. When we’re talking about a processor, we’re referring to the size of the registers and to how much memory can be addressed at any one particular time.

If we’re talking about the data bus itself, the highway of paths that we have on a motherboard, we’re effectively referring to the size of those pathways. So it may be a 32-bit bus or a much wider 64-bit bus.

Even if your processor is a 32-bit processor that handles 32-bit integers and addresses memory with a 32-bit addressing structure, it may still use the wider 64-bit data bus. So if you’re looking at the data bus of a motherboard and it shows a 64-bit data bus between the CPU and the memory, don’t assume that your CPU is automatically going to be 64-bit. It may still only be a 32-bit processor.

If you begin comparing processors and you compare a 32-bit with a 64-bit, it may seem obvious that the 64-bit is going to give you so much more memory that you can address. And it’s going to be able to calculate so much more information at any one particular time. And to some degree, that’s correct. But you also have to keep in mind that your operating system has to be written and optimized for that particular architecture.

When you’re installing an operating system, you’re either installing a 32-bit operating system or a 64-bit operating system. And for a 64-bit operating system, it needs drivers. It needs applications and other components that are going to be optimized for 64-bit.

You can’t simply install a 64-bit architecture and expect all of your drivers for your printer and your scanner and your other devices to automatically work with the 32-bit drivers. They need brand new drivers and brand new capabilities. So if you’re upgrading or moving from 32-bit to 64-bit, check all of your components and make sure that they’re also compatible to work in a 64-bit environment as well.
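Before migrating, it can help to confirm what you’re actually running. This Python sketch reports the bitness of the current interpreter and the machine architecture; note that it describes this particular process’s build, which can be 32-bit even on a 64-bit operating system.

```python
import platform
import struct

# Pointer size in bits: 4-byte pointers mean a 32-bit build, 8 bytes mean 64-bit.
bits = struct.calcsize("P") * 8
print(f"this interpreter is {bits}-bit")

# A 32-bit address space can address at most 2**32 bytes (4 GiB).
print(f"addressable bytes at {bits} bits: {2 ** bits:,}")

# The reported hardware architecture, e.g. 'x86_64' or 'i686'.
print("machine architecture:", platform.machine())
```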

As human beings, we need to see the results of what we’re doing on a screen. And so each computer has inside of it a graphics processing unit, or GPU. This GPU takes information, renders it, and presents it on some type of display for us. In the past, this graphics rendering was done on a separate adapter card. Sometimes it’s integrated into chips that are on the motherboard. But either way, it requires a lot of dedicated hardware.

These days, if you look at the latest series of processors, like the Sandy Bridge, you’ll notice that there is an entire section of the chip that is the GPU. The graphics processing is done on the CPU itself. You don’t need extra chips on the motherboard. You don’t need an extra adapter card.

So this is becoming increasingly common. And of course, as you might expect, an integrated GPU on the CPU chip itself is not going to be as powerful as a separate adapter card that you might get.

If you’re doing gaming or video editing, or you need some type of high-end processing for your video, maybe you would still need a separate adapter card. But the vast majority of people just need to browse the internet and check their email, and the GPU that’s integrated into the CPU is certainly enough to handle their needs, without any additional hardware inside of the computer.