Network Architectures – N10-008 CompTIA Network+ : 1.7

There are many different ways to design a network. In this video, you’ll learn about the three-tier architecture, software defined networking, spine and leaf configurations, and the difference between north-south traffic and east-west traffic.


In this video, we’ll look at a number of different architectures we use when designing our networks. We’ll start with a very common type of architecture. This is the three-tier architecture, and we’ll go through what each of those tiers consists of. We’ll start with the core of the network. Sometimes we refer to this as the center of the network. And this is usually where our major services are located.

We might have web servers, database servers, name servers, and other important services would be contained in this core. Almost everybody in the organization will need access to these services, so it makes sense to put these services in the middle of the network. The users, though, don’t connect directly to the core. There’s a midpoint called the distribution layer. This distribution tier manages the communication between all of the end users and the core of the network.

This not only provides the users with a way to connect to the core, but it also provides redundancy and control of traffic into and out of the core. And the last tier is the access tier. This is where the users are located, and usually there is an access switch somewhere close to where those users are located. In fact, it's very common to have multiple access switches on a floor. That floor would roll up to a distribution switch, and that distribution switch would provide access to the core.

The idea of this three-tier architecture is very similar to the way we lay out our cities. For example, there might be a core or downtown area of a city, but this is often where our office buildings are, and we don’t commonly live in the core of the city. Instead, we have a way to get from our home into the center of the city through some type of distribution network or distribution highway. We would then use those distribution highways to get in and out of the core of the city.

And of course, we would live outside of the city. We would take the distribution network to be able to gain access to the core, and then back to our homes. And usually, everything in our local neighborhood is something we might need immediate access to. For example, if we wanted to go to the local grocery store or a neighbor's house, we would only stay in that local access area. But if we needed to gain access to any other part of the city, we would use our distribution highways to gain access to the core.

If we were to show this as a network diagram, this is the topology you would have. All of your users are down here at the bottom, and those users are connected to access switches. The access switches are then connected to distribution switches. And you can see there are multiple distribution switches, and the access switches might connect to those multiple switches to provide redundancy. The distribution switches are then finally connected to the core, providing that final tier between the users and the services in the core.

On larger networks, you can expand this three-tier architecture to even work between different buildings. So you might have an access switch on each floor of a building. Those access switches connect to distribution switches, and those distribution switches finally connect to the core. The same thing would occur in the other building with access switches and distribution switches, meaning everybody in all of these buildings would be able to gain access to the services they need in the core.
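One way to picture the redundancy in this design is as a small adjacency map, where each access switch has uplinks to more than one distribution switch. This is a minimal sketch; all of the switch names here are hypothetical examples, not real device names.

```python
# Three-tier topology as an adjacency map: each key lists the switches
# one tier below it. Both distribution switches uplink both access
# switches, so either distribution switch can fail without isolating users.
topology = {
    "core1": ["dist1", "dist2"],
    "dist1": ["access1", "access2"],
    "dist2": ["access1", "access2"],  # redundant uplinks for each access switch
}

def uplinks(switch):
    """Return the switches one tier above the given switch."""
    return [up for up, downs in topology.items() if switch in downs]

# Each access switch can reach the core through either distribution switch.
print(uplinks("access1"))  # ['dist1', 'dist2']
```

If "dist1" were removed from the map, `uplinks("access1")` would still return a path through "dist2", which is exactly the redundancy the distribution tier is there to provide.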

In recent years, we’ve taken this idea of physical networking components and we’ve tried to virtualize those systems, similar to what we’ve done with virtual servers. We’ve been able to take the different functions of these networking devices and separate them into separate functional planes of operation. There are three primary planes of operation– the data plane, the control plane, and the management plane. And all of these together act as SDN, or software-defined networking.

This design fits perfectly with our cloud-based architectures. We’re able to take these networking components, break them up into individual functional pieces, and be able to manage each of those separately. For example, we might create an infrastructure layer– we often refer to this as the data plane– as the part that’s doing the real work of the networking component. For example, if this is a switch, it may be processing network frames and network packets. If it’s a firewall or router, it could be performing forwarding, or trunking, or encryption, or network address translation.

All of that work to be able to forward traffic between locations is handled by the data plane. There has to be something to manage what the data plane is doing, and that's managed through the control layer or the control plane. Keeping track of routing tables, switching tables, or where network address translation is occurring is all handled by the control layer. But of course, we as network administrators need some way to control these devices, and we control them through the management plane. This is the application layer of SDN, and this is where you, the network administrator, would control and manage those networking devices.
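We can sketch how these three planes divide the work of a single switch in a few lines of code. This is purely conceptual, a toy model of the separation of duties, not a real SDN controller API.

```python
class SwitchModel:
    """Toy model of one switch, split into SDN's three planes."""

    def __init__(self):
        # Control plane state: which port each MAC address was learned on.
        self.mac_table = {}

    def learn(self, mac, port):
        """Control plane: update the switching table."""
        self.mac_table[mac] = port

    def forward(self, dst_mac):
        """Data plane: forward a frame using the control plane's table."""
        return self.mac_table.get(dst_mac, "flood")

    def show_mac_table(self):
        """Management plane: let an administrator inspect device state."""
        return dict(self.mac_table)

sw = SwitchModel()
sw.learn("aa:bb:cc:dd:ee:01", 1)          # control plane update
print(sw.forward("aa:bb:cc:dd:ee:01"))    # data plane lookup
print(sw.forward("ff:ff:ff:ff:ff:ff"))    # unknown destination is flooded
```

The point of the sketch is that each method touches a different plane: the data plane only reads the table, the control plane only writes it, and the management plane only reports on it.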

Let’s overlay this SDN architecture on what is a traditional physical switch. So instead of having a physical switch, we’ll start by breaking out these individual interfaces. This is the data plane, or the infrastructure layer where all of the forwarding really occurs. All of our frames and packets are being moved around thanks to the data plane. We can then take our routing tables, our switching tables, or our network address translation tables and manage those through the control layer or control plane.

And of course, we have our traditional section of the switch that we would connect to be able to manage the device, and that would be pulled into the application layer or the management plane. Now we can remove the physical components that we used to connect with and deal only with the virtualized or the cloud-based architecture. For example, we’d use the infrastructure layer or the data plane to be able to transfer data between network devices. We can then reference the control layer or the control plane to be able to provide updates to routing tables or switching tables.

And then lastly, we need to manage these devices through the application layer or the management plane, and we might use an SSH console to be able to manage the device. Or perhaps this is more programmatic. We might use SNMP or API calls to be able to manage these cloud-based SDN architecture devices.
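A programmatic management-plane change usually means sending a structured request to a controller rather than typing commands over SSH. Here's a hedged sketch of what such a request body might look like; the action names and fields are illustrative, not any real vendor's API.

```python
import json

# Hypothetical management-plane API request: ask the controller to add a
# VLAN to a distribution switch. Field names are examples only.
change = {
    "device": "dist1",
    "action": "add_vlan",
    "params": {"vlan_id": 100, "name": "engineering"},
}

# Serialize to JSON, as an HTTPS POST to a controller's REST endpoint would.
body = json.dumps(change)

# The controller would parse the same body on the other side.
received = json.loads(body)
print(received["params"]["vlan_id"])  # 100
```

The same change could be made interactively over SSH or queried through SNMP; the advantage of the API approach is that the request is structured data you can generate, validate, and repeat across many devices.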

Another popular architecture for network connectivity is the spine and leaf architecture. This is where you would have services that connect to leaf switches that ultimately connect to spine switches. Each one of these spine switches on the top connects to all of the leaf switches that are in the network. And the leaf switches don't connect to each other. They all connect back to the spine, and then the spine determines where the traffic goes from there.

You’ll also notice that the spine switches don’t connect directly to each other, that all of the communication is either occurring from leaf to spine or spine to leaf. It’s common to associate the spine and leaf architecture with what we call top of rack switching. This is referring to the physical network rack that might be in your data center. So you can think of all of these leaf switches as being on the top of a particular 19-inch rack, and within the rest of the rack, you might have image servers, directory servers, web servers, or some other type of service.

This allows you to have some very simple cabling between the leaf and the spine. You’ve got built-in redundancy for all of these connections, and this provides some very efficient and very fast communication. However, if you add another rack to your network, which requires another leaf switch, you’ll have to create additional connections for all of the spine switches. So adding additional switches could rapidly increase the cost associated with this connectivity.
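That scaling cost is easy to quantify: because every leaf connects to every spine, the total link count is just the product of the two, and each new rack adds one link per spine. A quick sketch, with hypothetical switch counts:

```python
def spine_leaf_links(spines, leaves):
    # Every leaf connects to every spine; leaves never connect to each
    # other, and spines never connect to each other.
    return spines * leaves

# A hypothetical fabric with 4 spines and 8 leaf (top-of-rack) switches:
print(spine_leaf_links(4, 8))  # 32 links

# Adding one more rack (one more leaf) adds a link to *every* spine:
print(spine_leaf_links(4, 9) - spine_leaf_links(4, 8))  # 4 new links
```

So each additional rack costs as many new connections (cables, optics, and spine ports) as there are spine switches, which is why growth can get expensive quickly.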

When you’re working inside a data center, it’s useful to know where data is originating and where the destination is. We refer to this path between source and destination in directional terms. For example, an east-west traffic is traffic that is going between devices within the same data center. So communication between an image server and a web server inside the same data center is east-west traffic.

The other type of traffic may be going outside of our data center, and we refer to that traffic as north-south traffic. Since this north-south traffic is going outside of our data center, and therefore, outside of our control, we may have different security postures for north-south traffic than we would use for east-west traffic, which all stays within our controlled network. As a network administrator, you may be installing equipment in many different locations.
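The east-west versus north-south distinction comes down to whether both endpoints sit inside your data center's address space. Here's a small sketch using Python's `ipaddress` module, assuming a hypothetical internal prefix of 10.1.0.0/16:

```python
import ipaddress

# Hypothetical data center prefix; substitute your own internal range.
DATA_CENTER = ipaddress.ip_network("10.1.0.0/16")

def traffic_direction(src, dst):
    """East-west if both endpoints are inside the data center, else north-south."""
    inside = lambda ip: ipaddress.ip_address(ip) in DATA_CENTER
    return "east-west" if inside(src) and inside(dst) else "north-south"

print(traffic_direction("10.1.4.10", "10.1.9.22"))    # east-west
print(traffic_direction("10.1.4.10", "203.0.113.5"))  # north-south
```

A firewall policy might then apply much stricter inspection to anything classified north-south, since that traffic crosses the boundary of the network you control.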

There may be users in a branch office that need local devices. There might be a local switch, router, or firewall, or you may be installing client devices in that branch office. You might also install equipment in an on-premises data center. This is an in-house data center that you're responsible for. You manage the cooling, you manage the electrical systems, and you're responsible for the ongoing monitoring of those systems.

Your organization also might contract with a third party to use their data center, or a portion of their data center, through something called co-location. This is where multiple companies may have their equipment, and all of them are running within the same facility. You can see there are cages and locked doors set up so that only your organization would have access to your equipment, and you’d be protected from anyone else who might be entering the data center. Usually there’s a third-party company that runs the co-location center, and they’re responsible for the ongoing monitoring and the security of those systems.