Understanding CPU Characteristics – CompTIA A+ 220-901 – 1.6

Our modern processors can do much more than traditional CPUs. In this video, you’ll learn about the speeds of CPUs, on-board caches, hyper-threading, virtualization features, and much more.



If you’re looking to buy a new computer, or you’re buying a CPU to go into a computer that you’re building, very often it will tell you the type or the brand of the CPU. And then it tells you what the speed of that CPU happens to be. And these days that speed is usually listed in gigahertz.

If you look at some older CPUs in older computers, those CPUs may be measured in megahertz. But that is the overall clock speed that that particular CPU runs at. In fact, you’ll see on this CPU here it’s even marked: 3.8 gigahertz is the speed of that particular component.

Of course, the actual speed of your computer may not necessarily be related to that single CPU speed. There are so many other components inside of your computer that all have an effect on the overall performance of the machine. Clock speed, CPU architecture, bus speed, the width of the buses, the size of the caches, and the operating system optimizations themselves will all affect what the actual speed of your computer happens to be.
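
If you’d like to see what clock speed your own system reports, here’s a minimal sketch using the third-party psutil library (assumed to be installed with pip install psutil). The numbers the operating system reports can differ from the speed marked on the chip because of power management and turbo or boost features.

```python
# Minimal sketch: read the reported CPU clock speed with psutil (third-party,
# assumed installed). Values are in MHz and may vary with power management.
import psutil

freq = psutil.cpu_freq()  # namedtuple with current, min, and max in MHz, or None
if freq is not None:
    print(f"Current: {freq.current:.0f} MHz")
    print(f"Maximum: {freq.max:.0f} MHz")  # roughly the advertised clock speed
else:
    print("CPU frequency information is not available on this platform")
```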

Early on when personal computers were just hitting the market, manufacturers were using the CPU speed to try to tell people what systems were better or worse than others. But these days, we tend to look at the capabilities of an overall system. We don’t tend to see marketing based on a total CPU speed any longer.

These days it might focus on a particular category of CPU, or a particular manufacturer of CPU, but the emphasis is not necessarily solely on that particular clock speed. There really is no single metric that we can point to as describing the overall performance of a computer. That’s because there may be different things that are important to different people. The graphics performance may be more important to one person, and the storage input and output speeds may be more important to another. You should find a benchmark that works for you, and then you’ll be able to compare that across different systems.
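
If you just want a rough, do-it-yourself comparison, a short timing script is one place to start. Here’s a minimal sketch using only Python’s standard library; the loop size and the workload itself are arbitrary choices, and a real benchmark suite would also cover graphics, storage, and the other subsystems you care about.

```python
# Minimal sketch of a do-it-yourself CPU timing test. This only exercises one
# CPU-bound loop; it is not a substitute for a full benchmark suite.
import time

def cpu_bound_work(n: int = 5_000_000) -> int:
    total = 0
    for i in range(n):
        total += i * i
    return total

start = time.perf_counter()
cpu_bound_work()
elapsed = time.perf_counter() - start
print(f"CPU-bound loop took {elapsed:.3f} seconds")
```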

Early processors in our personal computers had one single CPU core. There was one set of calculations that would take place inside of that physical CPU. But these days we’ve added on additional cores to the CPU. And the CPUs themselves have become much more complex.

Here’s an example of a quad core CPU. There are actually four cores inside of a single physical processor. Not only do you have four CPUs, each one of those cores might have a level one cache. There might also be a level two cache on the CPU core itself. And in this particular case, the processor might have a level three cache that’s shared across all of those separate cores. And of course, there could be more than four cores. There are certainly octa-core and 16-core CPUs and much larger CPUs. And we will continue to see these CPUs become more and more complex as time goes on.
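
If you want to see how many cores and logical processors your own system reports, here’s a minimal sketch. It assumes the third-party psutil library is installed; the standard library alone can only report the logical processor count.

```python
# Minimal sketch: count logical processors and physical cores. On a quad-core
# CPU with hyper-threading you would typically see 4 physical cores and
# 8 logical processors.
import os
import psutil  # third-party, assumed installed

print("Logical processors (os):", os.cpu_count())
print("Physical cores:         ", psutil.cpu_count(logical=False))
print("Logical processors:     ", psutil.cpu_count(logical=True))
```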

There is a microscopic world of components that we are now fitting into a single CPU. This is just one example of a CPU die. You can see it is a mixture of different components all put together on the same physical CPU.

If we were to look at this in a little bit more abstract form, you might look at a CPU and see something like this. There can be multiple cores on that CPU, with each core having its own level one and level two cache. You can see there is a shared level three cache on this particular abstraction. The memory controller, which used to be in the northbridge, is now integrated into many of these CPUs. And in fact, this CPU itself has its own set of processor graphics built in. So you don’t need separate components on the motherboard or a separate video adapter card. All of the processing for the graphics takes place right on the same physical CPU die.

As I mentioned before, there are a number of caches that you’ll find on a CPU. And these caches are incredibly important. They dramatically increase the amount of throughput we can get into one of these CPUs. It is very, very fast memory, and it’s designed to increase how much information we’re able to get into the CPU and out of the CPU. It’s really holding instructions, data, and the results of calculations. They’re very, very small pieces of memory. Usually there is a level one cache. It’s the first check, or the first storage area, you’ll find. And usually it’s integrated into, or very close to, the main CPU core itself.

These days, we’ll often see a level two cache that contains secondary data. And even in the case of the CPUs that we’ve been looking at here, there’s a level three cache. In this case, it’s still on the chip itself. This one happens to be shared across multiple cores. You’ll have to look at the architecture of the CPU you’re interested in to really understand how many different levels of caching there are on the CPU and how large those caches happen to be.
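
If you’re curious how large those caches are on your own machine, here’s a minimal, Linux-specific sketch that reads the cache hierarchy from sysfs. The paths shown are the ones Linux typically exposes; other operating systems report this information through their own tools.

```python
# Minimal sketch (Linux-specific): list the cache levels, types, and sizes
# that the kernel exposes for the first CPU under sysfs.
from pathlib import Path

cache_dir = Path("/sys/devices/system/cpu/cpu0/cache")
for index in sorted(cache_dir.glob("index*")):
    level = (index / "level").read_text().strip()
    ctype = (index / "type").read_text().strip()   # Data, Instruction, or Unified
    size = (index / "size").read_text().strip()
    print(f"L{level} {ctype} cache: {size}")
```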

As we began trying to optimize our CPUs, Intel realized very early on that we needed some way to get as much information going through the CPU as possible. There’s a lot of time when the CPU is waiting to gather information from memory, or to send information back to memory. So they created a technology called hyper-threading.

This HTT, or Hyper-Threading Technology, takes a single physical CPU but makes it look and work as if it is two separate CPUs. You of course are not going to be able to double your speed from a single physical CPU, but you can gain some performance improvements. And generally, this Hyper-Threading Technology gives you about a 15% to 30% improvement over having a CPU that’s not using the HTT.

To be able to take advantage of this, though, you need an operating system that understands hyper-threading. Operating systems like Windows XP and later are very good at understanding how to take advantage of HTT if your processor supports it.
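
Here’s a minimal, Linux-specific sketch of one way to check whether hyper-threading is active, by comparing the “siblings” count (logical processors per package) against the “cpu cores” count (physical cores per package) in /proc/cpuinfo. Other operating systems expose the same information through their own tools.

```python
# Minimal sketch (Linux-specific): compare logical and physical core counts
# from /proc/cpuinfo to see whether hyper-threading (SMT) is active.
def read_cpuinfo_field(name: str) -> int:
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith(name):
                return int(line.split(":")[1])
    raise KeyError(name)

siblings = read_cpuinfo_field("siblings")   # logical processors per package
cores = read_cpuinfo_field("cpu cores")     # physical cores per package
print("Hyper-threading (SMT) active:", siblings > cores)
```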

A technology that has revolutionized the data center is virtualization. And virtualization is a way that we can run different operating systems on a single physical device. This means that we don’t have to have multiple CPUs, multiple computers all doing different things. We can combine all of those capabilities into a single physical box.

We realized early on that running this virtualization in software had some inherent performance limitations. By taking a number of the virtualization functions and moving them into the hardware of the CPU, we were able to improve the overall performance across all of the virtual machines.

If you’re running an Intel architecture, you want to look for Intel Virtualization Technology, or VT. And AMD architecture is going to use AMD Virtualization, or AMD-V. By using these functions in the CPU, we’re able to get better performance, especially when we’re virtualizing our operating systems.
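
Here’s a minimal, Linux-specific sketch of checking whether the CPU advertises these features: the “vmx” flag in /proc/cpuinfo indicates Intel VT-x, and the “svm” flag indicates AMD-V. Keep in mind that even when the flag is present, virtualization may still need to be enabled in the BIOS or UEFI firmware.

```python
# Minimal sketch (Linux-specific): look for the hardware virtualization flags
# in /proc/cpuinfo. "vmx" = Intel VT-x, "svm" = AMD-V.
flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags = set(line.split(":")[1].split())
            break

if "vmx" in flags:
    print("Intel VT-x supported")
elif "svm" in flags:
    print("AMD-V supported")
else:
    print("No hardware virtualization flag found")
```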

Another important CPU characteristic is whether it is a 32-bit CPU or a 64-bit CPU. This is referring to the total amount of data the CPU can process at a single time. Obviously, the 64-bit CPU can process twice as much information as a 32-bit CPU. It’s a larger data path. It can have larger integer sizes. The memory addresses are much larger. So overall, a 64-bit CPU is a much more powerful processor.
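
As a quick worked example of why the address size matters, here’s the arithmetic on how much memory can be addressed directly at each width.

```python
# Quick comparison of directly addressable memory: 2^32 bytes is 4 GiB,
# while 2^64 bytes is about 17 billion GiB (16 EiB), far more than any
# system actually installs.
GIB = 2 ** 30

print(f"32-bit address space: {2 ** 32 // GIB} GiB")
print(f"64-bit address space: {2 ** 64 // GIB:,} GiB")
```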

You can also consider that you’re able to move a lot more data at one time. We have previously talked about the data paths and buses inside of our computer as being 64-bit buses or 32-bit buses. And the same thing applies to CPUs as well. If we have a 64-bit data bus, we can move twice as much information as we can over a 32-bit data bus.

Notice that the data buses in your computer may be different than the CPU bus that’s being used. For example, it’s very common to see a memory bus of 64 bits regardless of whether it is a 32-bit CPU or a 64-bit CPU that’s in use.

If you are running an operating system that can take advantage of a 64-bit CPU, then every aspect of that operating system needs to be optimized for that 64-bit platform. So your operating system needs to be a 64-bit operating system. The drivers that you’re using to connect to your hardware need to be 64-bit hardware drivers. And the applications that are running on your operating system need to be written for a 64-bit OS.
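
Here’s a minimal sketch, using only Python’s standard library, of checking whether the interpreter and platform you’re running on report a 64-bit environment.

```python
# Minimal sketch: check whether the current platform and interpreter build
# are 64-bit, using only the standard library.
import platform
import struct
import sys

print("Machine type:        ", platform.machine())        # e.g. x86_64 or AMD64
print("Pointer size (bits): ", struct.calcsize("P") * 8)  # 64 on a 64-bit build
print("64-bit interpreter:  ", sys.maxsize > 2 ** 32)
```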

One of the more recent innovations in CPU architectures is the ability to embed the graphics hardware directly on the CPU itself. Normally, we would have a separate video adapter or there would be a separate set of chips on the motherboard handling the video. By integrating the graphics processor directly in the CPU, we’re able to get rid of those other components and optimize the performance of our graphic subsystem.

That’s because the graphics subsystem requires quite a bit of work to be able to show you the high resolution images and to be able to work with the graphical operating systems that we use these days. And even though we’ve integrated this graphics capability into the CPU, it’s still not as powerful as having a separate dedicated adapter that’s providing the graphics function. So if you’re a gamer, or you’re doing video editing, or you have some very high-powered graphics application, you may want to consider using a separate adapter, rather than the one that’s built into the CPU.

The security of our computing platforms is certainly of the highest concern these days. And our CPUs have also integrated some security functions within them as well. One of these is the NX bit, or No-eXecute bit. Intel calls it the XD bit, which stands for eXecute Disable. If you look at an AMD processor, the name is much more literal. They call it the Enhanced Virus Protection capability.

This functionality allows your CPU to designate memory areas where code might be executed. And it will designate other areas as protected areas where code cannot run. This is especially useful to prevent viruses and malware from executing in sections of memory where, normally, applications simply wouldn’t run.

To be able to use this capability, our operating system must be aware of this NX bit. If you look in Windows, you’ll see a feature called Data Execution Prevention, or DEP. This is turned on automatically by default. And it’s already using the capability of your CPU to keep all of the executable code running in its own space.
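
Here’s a minimal, Linux-specific sketch of confirming that the CPU reports the NX bit, by looking for the “nx” flag in /proc/cpuinfo. On Windows, the DEP policy can typically be inspected with `wmic OS Get DataExecutionPrevention_SupportPolicy`.

```python
# Minimal sketch (Linux-specific): check whether the CPU reports the NX
# (No-eXecute) capability in its feature flags.
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            print("NX bit present:", "nx" in line.split())
            break
```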