Advanced Networking Devices – N10-008 CompTIA Network+ : 2.1

The core of a data center network can contain many different networking devices. In this video, you’ll learn about layer 3 switches, wireless LAN controllers, load balancers, and more.


The manufacturers of network infrastructure devices have realized through the years that if they could put more capabilities into a single device, they could sell more devices. A good example of this combination is the Layer 3 capable switch. You may hear this referred to as a Layer 3 switch or a multilayer switch. This combines a Layer 2 switch and a Layer 3 router within the same physical device. And although we call it a Layer 3 switch, we’re not really switching at Layer 3. Switching occurs at Layer 2. And at Layer 3, we would be routing.

The manufacturers combined both switching and routing inside of the same device. You’ll still have one set of configuration settings to make for the switching side and a completely different set of configuration settings for the routing side.
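To picture that split, here’s a minimal Python sketch of the decision a multilayer switch makes for each incoming frame: forward at Layer 2 using the MAC address table, or, when the frame is addressed to the switch’s own routed interface, look up the destination in the routing table at Layer 3. The tables, addresses, and port names are invented for illustration.

```python
# Hypothetical multilayer switch forwarding decision. Tables, MAC addresses,
# and port names are invented for illustration.
import ipaddress

mac_table = {                         # Layer 2: destination MAC -> egress port
    "aa:bb:cc:00:00:01": "Gi0/1",
    "aa:bb:cc:00:00:02": "Gi0/2",
}

route_table = [                       # Layer 3: destination network -> next hop
    ("10.2.0.0/16", "10.0.0.254"),
    ("0.0.0.0/0", "203.0.113.1"),     # default route
]

SVI_MAC = "aa:bb:cc:ff:ff:ff"         # the switch's own routed interface

def forward(frame):
    """Switch at Layer 2 unless the frame is addressed to the routed interface."""
    if frame["dst_mac"] == SVI_MAC:
        dst = ipaddress.ip_address(frame["dst_ip"])
        # Longest-prefix match against the routing table (simplified)
        for network, next_hop in sorted(
            route_table, key=lambda r: ipaddress.ip_network(r[0]).prefixlen, reverse=True
        ):
            if dst in ipaddress.ip_network(network):
                return f"route to next hop {next_hop}"
    # Layer 2: look up the destination MAC; flood if it's unknown
    return f"switch out {mac_table.get(frame['dst_mac'], 'flood to all ports')}"

print(forward({"dst_mac": "aa:bb:cc:00:00:02", "dst_ip": "10.1.0.5"}))  # switched at Layer 2
print(forward({"dst_mac": SVI_MAC, "dst_ip": "10.2.3.4"}))              # routed at Layer 3
```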

If you have wireless networks in your office, then you probably have a lot of wireless access points that might be in the ceiling or on different floors. And everywhere you go within the organization, you can connect to a wireless network.

Although it might be easy to manage one or two access points, imagine managing all of these access points across the entire enterprise at all of these different locations. Every time you would need to make a configuration change, you would need to go to every single access point to make that modification.

Another important consideration is that these wireless networks should be seamless to the end users. They should be able to move between floors or between buildings and always remain connected to the wireless network.

The way that you provide centralized management of all of these access points is by using a wireless LAN controller. You can think of this as using a single pane of glass. That means we have a single management screen. And we can make configuration changes and manage our entire access point infrastructure from one central management station.

Using our wireless LAN controller, we can then deploy new access points by simply sending out the configuration to that device. We can perform monitoring of the performance and the security of these wireless devices. We can make configuration changes and make those configuration changes apply to all of the access points across multiple locations.
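As a rough sketch of that single pane of glass, the example below applies one configuration change to every access point the controller knows about. The access point inventory and the push_config function are hypothetical; a real controller would use CAPWAP or a vendor-specific protocol.

```python
# Hypothetical sketch of a wireless LAN controller pushing one change
# to every managed access point. The inventory and the API are made up.

access_points = [
    {"name": "AP-Floor1-01", "ip": "10.10.1.11"},
    {"name": "AP-Floor2-01", "ip": "10.10.2.11"},
    {"name": "AP-Branch-01", "ip": "10.20.1.11"},
]

new_settings = {"ssid": "CorpWiFi", "channel_width": "40MHz", "security": "WPA3"}

def push_config(ap, settings):
    # In a real controller this would be a CAPWAP or vendor-specific call;
    # here we just print what would be sent.
    print(f"Applying {settings} to {ap['name']} ({ap['ip']})")

for ap in access_points:              # one change, applied everywhere
    push_config(ap, new_settings)
```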

You can then create reports on how many access points are in use or how many users may be connected to a particular access point. And these wireless LAN controllers give you complete control of your wireless infrastructure.

Usually, these controllers are proprietary to the type of access point that you’re using. So if you’re using one manufacturer’s access point, you’re probably also using that same manufacturer’s wireless LAN controller.

If you’re on the Professor Messer website right now, then you’re probably taking advantage of a load balancer. The load balancer allows me to have multiple web servers behind the scenes. And when you connect to my site, it chooses one of those servers to provide you with the web pages. This means that I could have many people accessing the site simultaneously and be able to spread that load across multiple individual web servers.

This allows for very large-scale implementations. I could have tens or even hundreds of web servers and be able to scale up or scale down the number of resources I’m managing based on the load that’s coming into these web services.

This also means that if I happen to have one of those web servers fail, the other web servers are still up and running. And people connecting to the load balancer will be automatically redirected to the web servers that are still operating.

Here’s a basic overview of a load balancer. You might have users on the internet, and they need to connect to a web server. But instead of connecting directly to the server, they’re connecting to a load balancer, which then connects them to an individual server on the inside.

In this design, there are four separate servers. So anyone connecting to the load balancer might be redirected to any of these available servers. Not only are we balancing the load across these servers, we’re keeping an open TCP connection between these servers and the load balancer, which makes the network communication much more efficient.
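Here’s a minimal Python sketch of that idea: requests rotate across a pool of backend servers, and any server marked as failed is skipped, so users keep reaching a working server. The server names and health flags are made up for the example; a real load balancer would run active health checks.

```python
import itertools

# Hypothetical backend pool; the "up" flags stand in for real health checks.
servers = [
    {"name": "server-a", "up": True},
    {"name": "server-b", "up": True},
    {"name": "server-c", "up": False},   # this one has failed
    {"name": "server-d", "up": True},
]

rotation = itertools.cycle(servers)

def pick_server():
    """Round-robin across the pool, skipping servers that are down."""
    for _ in range(len(servers)):
        candidate = next(rotation)
        if candidate["up"]:
            return candidate
    raise RuntimeError("no healthy servers available")

for request_id in range(5):
    print(f"request {request_id} -> {pick_server()['name']}")
```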

We can also provide SSL offloading on the load balancer, so that traffic remains encrypted as it crosses the internet between the users and the load balancer. The load balancer then decrypts the traffic and sends it to the servers. This means the servers don’t carry the overhead of the encryption and decryption process. Instead, that work is pushed off to the load balancer, which has been designed to provide that encryption and decryption.

We might also cache on our load balancer. So if someone from the internet requests a page from server A, that information would be cached on the load balancer. The second person who asks for that same page would simply be provided that page from the load balancer without having to access an individual server.
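That caching behavior can be pictured with a short sketch like this one; the cache dictionary and the fetch_from_server function are placeholders rather than a real balancer’s internals.

```python
# Minimal caching sketch: serve repeat requests from the load balancer's
# cache instead of asking a backend server again. Names are hypothetical.

cache = {}

def fetch_from_server(path):
    print(f"cache miss: fetching {path} from a backend server")
    return f"<html>content of {path}</html>"

def handle_request(path):
    if path not in cache:
        cache[path] = fetch_from_server(path)   # first request hits a server
    else:
        print(f"cache hit: serving {path} from the load balancer")
    return cache[path]

handle_request("/index.html")   # goes to a backend server
handle_request("/index.html")   # served straight from the cache
```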

Many load balancers can also provide quality of service. So they may prioritize certain pages or certain applications over others. This means that high-priority applications or web pages would be provided first by the load balancer, and all other pages would have a lower priority.

We might also have our load balancer provide content switching. This means the load balancer chooses which server to use based on what applications are available on those servers. Some users may be requesting an application that’s only on server A and server B, while other users might be requesting an application that’s only on server C and server D. As the load balancer receives those requests, it can switch each one to the correct server based on its content.
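Content switching can be pictured as a lookup from the requested application (or URL path) to the subset of servers that actually host it. The mapping below is invented for illustration.

```python
# Hypothetical content-switching table: application path -> servers that host it.
app_servers = {
    "/payroll": ["server-a", "server-b"],
    "/orders":  ["server-c", "server-d"],
}

def content_switch(path):
    for prefix, pool in app_servers.items():
        if path.startswith(prefix):
            return pool[hash(path) % len(pool)]   # simple spread within that pool
    return "server-a"                             # default pool for everything else

print(content_switch("/payroll/2024"))   # only server-a or server-b
print(content_switch("/orders/1234"))    # only server-c or server-d
```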

If you’re concerned about the security of the information within your network traffic, then you’ll want to use an IDS or an IPS. An IDS is an Intrusion Detection System. And an IPS is an Intrusion Prevention System. These provide similar functionality. But the IDS only alerts if it happens to find an attack. And an IPS is able to stop that traffic before it enters your internal network.

These systems are looking for an intrusion. This might be a known exploit against an operating system or an application, or it could be a more generic intrusion type, such as a buffer overflow or cross-site scripting.

As you can imagine, it’s much more common to find an intrusion prevention system, because many network administrators and security professionals would like to block this traffic before it enters the inside of the network. But there may be times when you don’t want to block any traffic, or you might be concerned about blocking good traffic. In those scenarios, you may want to set the system up in detection mode, so you’re either alerting or logging if you happen to identify a known attack.
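One way to picture the difference between the two is the mode flag in this sketch: the same signature match either raises an alert (detection) or drops the packet (prevention). The signatures and packets are made up for the example.

```python
# Hypothetical signature check illustrating IDS vs. IPS behavior.
signatures = [b"<script>alert(", b"\x90\x90\x90\x90"]   # e.g. cross-site scripting, NOP sled

def inspect(packet, mode="prevention"):
    for sig in signatures:
        if sig in packet["payload"]:
            if mode == "detection":          # IDS: alert or log only
                print(f"ALERT: {packet['src']} matched signature {sig!r}")
                return "forwarded"
            else:                            # IPS: block before it enters the network
                print(f"BLOCKED: {packet['src']} matched signature {sig!r}")
                return "dropped"
    return "forwarded"

attack = {"src": "198.51.100.7", "payload": b"GET /?q=<script>alert(1)"}
print(inspect(attack, mode="detection"))
print(inspect(attack, mode="prevention"))
```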

Another type of security control is a proxy. A proxy is a device that sits between one part of the network and another to be able to allow or disallow certain traffic types to traverse the network. In many environments, the proxy handles end user communication. So there may be a user that wants to communicate to a web server. That request is made to the proxy. And then the proxy communicates to that web server.

The proxy then receives the response from that web server. It then examines that response to make sure that everything within it is safe. And then it sends that response down to the originating station.

Since this proxy sits in the middle of the communication between the end user and the web server, it’s a perfect place to cache information or make security decisions about whether a person might be allowed or not allowed to visit that URL.

Some proxies are configured as explicit proxies. That means that the application you’re using to communicate through the proxy has to be configured with the details of this proxy configuration. In many environments, though, the proxy is invisible to the end users. We call this a transparent proxy. And you don’t have to have any special application or any special configurations on the end user workstation.

Most of the proxies that you’ll use are application-level proxies. This means that the proxy itself understands exactly the way the application should operate. A web proxy, for example, understands the communication that a client makes to a web server and understands the response it should be receiving from that web server. And if your proxy only understands HTTP or HTTPS, then it is a web proxy. But you could also have other applications that a proxy might understand. For example, the proxy might understand FTP or mail and act as a proxy for those applications as well.
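The flow described above (the client asks the proxy, the proxy fetches and inspects the response, and only then passes it back) can be sketched roughly like this. The blocklist and the safety check are hypothetical stand-ins for a real proxy’s policy engine.

```python
# Rough sketch of a web proxy's request flow. The blocklist and the
# "safe response" check are placeholders, not a real security engine.

blocked_sites = {"malware.example", "gambling.example"}

def fetch_from_web(url):
    # A real proxy would make the outbound HTTP/HTTPS request here.
    return f"<html>response from {url}</html>"

def looks_safe(response):
    return "<script>evil" not in response        # stand-in for content inspection

def proxy_request(client, url):
    host = url.split("/")[2]                     # crude host extraction
    if host in blocked_sites:
        return f"{client}: access to {host} denied by policy"
    response = fetch_from_web(url)               # the proxy talks to the web server
    if not looks_safe(response):
        return f"{client}: response from {host} blocked"
    return response                              # safe content returned to the client

print(proxy_request("10.1.1.50", "http://intranet.example/page"))
print(proxy_request("10.1.1.50", "http://malware.example/download"))
```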

If you work from home, then you’re probably very familiar with a VPN, or a Virtual Private Network. A VPN creates an encrypted tunnel between your device and the location that you’re connecting to.

Your device is usually connecting to a VPN concentrator. Sometimes you’ll hear this referred to as a VPN head-end. This is a device that is purpose-built to provide that encryption and decryption capability, and usually is able to do it very fast in the hardware of the concentrator.

Although we often see the VPN concentrator built into a firewall or running as a purpose-built appliance, you might also find VPN concentrators configured as software that’s running on a server. This is often integrated with client software on your local device. And very often, that software is built into the operating system that you’re using.
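Conceptually, the tunnel just means traffic is encrypted before it crosses the internet and decrypted at the concentrator. The sketch below shows that idea with symmetric encryption; it assumes the third-party cryptography package is installed, and a real VPN would use IPsec or TLS rather than Fernet.

```python
# Conceptual sketch only: a real VPN uses IPsec or TLS, not Fernet, but the
# idea is the same. Assumes the third-party "cryptography" package.
from cryptography.fernet import Fernet

shared_key = Fernet.generate_key()        # negotiated during tunnel setup
client_side = Fernet(shared_key)
concentrator_side = Fernet(shared_key)

packet = b"GET /payroll HTTP/1.1"                  # original traffic on the laptop
protected = client_side.encrypt(packet)            # encrypted before crossing the internet
print("on the wire:", protected[:24], b"...")

delivered = concentrator_side.decrypt(protected)   # decrypted at the VPN concentrator
print("inside the corporate network:", delivered)
```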

In many organizations, we used to have analog telephones that we would connect together using a Private Branch Exchange, or a PBX. Sometimes, we simply refer to this as the phone switch. This allowed us to connect all of the phones in our office to this one single PBX. And then the PBX connected to the phone company’s network.

These days, we’ve updated the type of phones we use to Voice over IP phones. So the type of Private Branch Exchange that we would use would be a Voice over IP PBX. This allows us to connect all of our Voice over IP phones to each other within our organization. And then we can also combine this with a voice gateway that converts between Voice over IP and the traditional phone lines. We refer to those as the PSTN, or Public Switched Telephone Network. Often, this voice gateway functionality is built into the Voice over IP PBX so that we can easily move between our Voice over IP network and the traditional analog telephone network.

In many of our organizations, we’re connecting directly to the internet. And to be able to provide security for that connection, we might put a firewall between the inside of our network and the internet side of our network. These firewalls have traditionally filtered traffic based on a port number or IP address. But modern firewalls are able to recognize the application in use and make forwarding decisions based on the type of application that we’re using. These application-aware firewalls are referred to as Next-Generation Firewalls, or NGFW.

Many firewalls will also have VPN concentrator functionality built into the firewall itself so that we can encrypt traffic to our end users or between different sites. And in many cases, these firewalls can also act as a router or a Layer 3 device. Since these firewalls are often sitting between the inside and outside of our network, we can route between those different subnets. We can provide network address translation. So we can use private IP addresses on the inside of our network. And we might also have dynamic routing functionality. So we can use BGP or other dynamic routing protocols to be able to connect to the internet.
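A very small sketch of that difference: a traditional rule matches only addresses and ports, while a next-generation rule can also match the application the firewall has identified. The rules and packet fields below are invented for illustration.

```python
# Hypothetical firewall rules. A traditional firewall stops at port/IP;
# a next-generation firewall also matches the identified application.

rules = [
    {"app": "ssh",        "dst_port": 22,   "action": "allow"},
    {"app": "bittorrent", "dst_port": None, "action": "deny"},    # any port
    {"app": None,         "dst_port": 443,  "action": "allow"},   # classic port-based rule
]

def evaluate(packet):
    for rule in rules:
        app_ok  = rule["app"] is None or rule["app"] == packet["app"]
        port_ok = rule["dst_port"] is None or rule["dst_port"] == packet["dst_port"]
        if app_ok and port_ok:
            return rule["action"]
    return "deny"                    # implicit deny at the end of the rule base

print(evaluate({"app": "bittorrent", "dst_port": 443}))   # denied despite using port 443
print(evaluate({"app": "https",      "dst_port": 443}))   # allowed
```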