There are many methods of maintaining the uptime and availability of networks. In this video, you’ll learn about load balancing, NIC teaming, port aggregation, and more.
One of the easiest ways to maintain uptime and availability on the network is to include a load balancer as part of the network infrastructure. The load balancer, as its name implies, balances the load between multiple servers. A user connects to the load balancer, and the load balancer decides which server is able to provide that particular service. You’ll commonly have multiple servers on a load balancer that are active, which means if anybody is making requests to the load balancer, those servers will be available and provide a response to those requests.
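That distribution process can be sketched in a few lines of Python. This is a minimal illustration, not a real load balancer; the server names and the simple round-robin policy are assumptions made for the example.

```python
from itertools import cycle

# Hypothetical pool of active servers behind the load balancer's virtual IP.
active_servers = ["server-a", "server-b", "server-c"]
next_server = cycle(active_servers)

def route_request() -> str:
    """Return the next active server, rotating through the pool."""
    return next(next_server)

# Six incoming requests are spread evenly across the three active servers.
assignments = [route_request() for _ in range(6)]
print(assignments)
```

Round-robin is only one policy; real load balancers may also weight servers by capacity or current connection count.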
There might also be other servers connected on this load balancer that are up and running and ready to go into action but are currently in a standby mode, and the load balancer will not send any of the traffic to those standby servers. The load balancer is always sending hello messages and checking in with those servers that are active. If any of those active servers suddenly don’t respond back to these health checks, then the load balancer makes a decision to disable the connection to that server and perhaps use a standby server as a replacement.
Here’s an example of a user on the internet that is accessing these servers. When this hits the virtual IP address on this load balancer, the load balancer then determines that server A is available, and it sends the traffic to server A. Most load balancers will also remember which servers were being used by a user, so if this request comes from the same user, the load balancer will remember that this user was using server A, and it will send that traffic to the server A device.
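That “remembering” behavior is often called a sticky session, or source affinity. One common way to get it is to hash the client’s address, so the same client always maps to the same server. A minimal sketch, with hypothetical server names and addresses:

```python
import hashlib

servers = ["server-a", "server-b"]

def pick_server(client_ip: str) -> str:
    # Hash the client address; the same address always yields the same index,
    # so repeat requests from one client land on the same server.
    digest = hashlib.sha256(client_ip.encode()).digest()
    return servers[digest[0] % len(servers)]

first = pick_server("203.0.113.10")
second = pick_server("203.0.113.10")
```

Real load balancers may instead keep a session table or use cookies, but the effect is the same: one client keeps talking to one back-end server.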
Of course, there may be times when server A becomes unavailable. This might be because of a hardware failure, there might be a power supply that goes out, or the software that’s running on the server suddenly crashes and is no longer able to provide the service. In that case, the load balancer will recognize that that server has failed, and it will put that server into an offline mode. Since this load balancer also has standby servers, it can enable one of those standby servers to be online and available, and any future requests from devices on the internet will use the new server instead of going to the one that’s currently unavailable.
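The failover logic described above can be simulated as follows. This is an illustrative sketch only; the server names and the one-strike health-check policy are assumptions.

```python
# Hypothetical server pools managed by the load balancer.
active = ["server-a", "server-b"]
standby = ["server-c"]
offline = []

def health_check(server: str, responded: bool) -> None:
    """If an active server misses a health check, take it offline
    and promote a standby server in its place."""
    if not responded and server in active:
        active.remove(server)
        offline.append(server)
        if standby:
            active.append(standby.pop(0))  # bring a standby server online

# server-a stops responding to the load balancer's health checks.
health_check("server-a", responded=False)
```

In practice, a load balancer usually requires several consecutive missed health checks before declaring a server down, to avoid failing over on a single dropped packet.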
Even if you don’t have a load balancer, you can still provide redundancy to a server using multiple network interface cards on that device. We refer to this as NIC teaming. You might also see this referred to as load balancing and failover, or LBFO. This allows us to plug in and use these multiple connections to a server, but instead of having a primary connection and a standby connection, we can use both of those connections simultaneously and aggregate the bandwidth between both of them. This provides us with increased throughput, and it also provides us with a way to have redundant paths should one of those connections fail.
On the server, this is usually configured by installing multiple network interface cards, and in the server operating system, those cards are bound together to look as if they are one single interface. We’ll also configure the switch side of this connection so that it treats the traffic going to and from those interfaces as part of the NIC team on the server. Just as the load balancer sent hello messages to make sure it would get a response back from those servers, we have the same functionality within the server. The server is going to have the network interface cards talk to each other, usually over a multicast connection. Those multicast hello messages will go out periodically, and the other interface cards on that server will listen and respond to those hello messages.
If for some reason a network connection becomes unavailable, perhaps a switch has failed or someone accidentally unplugs a cable, that hello message will not get a response from the interface card that’s been disconnected. The server will recognize that that card is no longer talking on the network. It will administratively turn it off and use the other available network interface cards to provide the redundancy and the connectivity for all of the users.
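The hello-and-timeout mechanism can be simulated like this. The timeout value and interface names are hypothetical; real NIC teaming is handled by the operating system or driver, not application code.

```python
import time

HELLO_TIMEOUT = 3.0  # hypothetical: seconds of silence before a NIC is disabled

now = time.monotonic()
# nic1 last answered a hello 10 seconds ago -- well past the timeout.
last_hello = {"nic0": now, "nic1": now - 10.0}
enabled = {"nic0": True, "nic1": True}

def prune_silent_nics(current_time: float) -> list[str]:
    """Administratively disable any NIC that has gone silent, and
    return the interfaces still carrying traffic."""
    for nic, last_seen in last_hello.items():
        if current_time - last_seen > HELLO_TIMEOUT:
            enabled[nic] = False
    return [nic for nic, up in enabled.items() if up]

surviving = prune_silent_nics(now)
```

Once the unplugged cable or failed switch is restored, the teaming software can detect the hellos again and return that interface to service.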
Here’s an example of using this port aggregation where we have a server with two network interface cards, and both of those network interface cards connect to a switch. We’ll configure the server to perform port aggregation across both of those interfaces, and we’ll also configure the switch for port aggregation so that both sides of this connection recognize and understand that this is one single logical connection rather than two separate physical interfaces. This allows multiple users to send traffic through the network and to have a higher bandwidth and throughput between the server and that last switch.
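One common way an aggregated link spreads traffic is to hash each flow onto a member link, so packets within one flow stay in order while different flows can use both interfaces at once. A rough sketch, with hypothetical link names:

```python
links = ["nic0", "nic1"]

def link_for_flow(src_ip: str, dst_ip: str, dst_port: int) -> str:
    # Hash the flow tuple: one flow always uses one member link,
    # while different flows are balanced across the team.
    return links[hash((src_ip, dst_ip, dst_port)) % len(links)]

# Two lookups for the same flow always pick the same link.
first = link_for_flow("10.0.0.5", "192.0.2.20", 443)
again = link_for_flow("10.0.0.5", "192.0.2.20", 443)
```

Real switches and bonding drivers use standardized hash policies over MAC, IP, and port fields, typically negotiated with LACP as part of the port-channel configuration.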
But as you can see from this diagram, there are some places where the failure of a single switch could cut off connectivity to the server. For example, there’s a single switch that both of these connections are plugged into, and if that switch fails, then we lose connectivity to the entire server. To provide another level of redundancy, we can connect those interfaces to different switches. We’ll still have the same two network interface cards in the server, but instead of plugging both of those into the same physical switch, we’ll separate them into different switches. That way, if we lose either one of these switches due to a hardware problem or a connectivity issue with the cables, we’ll have a separate path that can be used for all of our users to maintain connectivity to that server.