Network Segmentation – SY0-601 CompTIA Security+ : 3.3

Network segmentation is a common security control, and there are many ways to implement segmentation. In this video, you’ll learn about VLANs, screened subnets, extranets, intranets, and more.

IT security is all about segmentation. It’s about allowing or disallowing traffic between different devices. We can do this in a number of different ways, but if we’re segmenting across the network, we can segment physically between devices, we can create logical separation within the same device, or we can do virtual segmentation with virtual systems. It’s also common to segment application instances into their own separate private segments. This is especially useful for applications that have high bandwidth requirements and need the highest throughput possible.

We can also set up segmentation for security. For example, we might have database servers that contain sensitive information, and we may segment our users so they can’t talk directly to those servers. Or perhaps we have application instances running inside of the core of our network, but the only protocols that should be in that core are SQL-type traffic and SSH traffic, and we can segment other types of traffic to remain outside of the network core. Or we might be segmenting the network because rules or regulations legally require it. For example, PCI compliance mandates segmentation to prevent any type of user access to credit card information.
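The core-protocol example above can be sketched as a simple filter. This is a minimal illustration, not a real firewall: the port numbers are assumptions (22 for SSH, 1433 for Microsoft SQL Server), and a production device would match on much more than protocol and port.

```python
# Hypothetical filter for traffic entering a network core that should
# carry only SQL and SSH traffic. Port numbers are assumed examples:
# 22 (SSH) and 1433 (Microsoft SQL Server).
ALLOWED_CORE_PORTS = {22, 1433}

def permit_into_core(protocol: str, dest_port: int) -> bool:
    """Return True only for the protocols the core is meant to carry."""
    return protocol == "tcp" and dest_port in ALLOWED_CORE_PORTS
```

With this default-deny approach, SSH and SQL sessions are permitted into the core while everything else, such as web traffic on port 80, is segmented to remain outside.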

One obvious way to segment the network is to have physically separate devices. For example, we might have a switch A and a switch B, and you can see there is a physical separation. We sometimes refer to this as an air gap because there is no direct connection between these devices; the only thing in the middle of these switches is the air itself. If we did need to communicate between switch A and switch B, we would then need some type of physical connection. We would need to run a cable between these switches, or we might put a router or firewall in the middle and then send all of our traffic through that device so that we can communicate between those two physically separate devices.

We might also implement this type of physical segmentation to keep devices separate from each other. For example, one switch may contain all of our web servers, and the other switch may contain all of our database servers. If these are on separate switches, then we know the web servers could never accidentally communicate with a database server, and the database servers would have no access to the web servers. Or maybe we’re concerned about customer data mixing, so we keep customer A on one switch and customer B on the other.

Here’s a practical design for keeping our customers separated: we have the customer A switch on the left and the customer B switch on the right. The devices for customer A can communicate with each other and the devices for customer B can communicate with each other, but you can see that there is no direct connection between these two switches, so customer A has no way to communicate with anything that’s on customer B and vice versa. There are a number of challenges with this design. One is that we have two separate physical switches that both have to be separately maintained, separately upgraded, and separately powered. We also have a number of interfaces on these switches that we probably aren’t using, so we’re spending a lot of money for a switch but not using all of the capabilities of that switch.

Instead of physically separating these devices, we can do logical segmentation using VLANs, or Virtual Local Area Networks. VLANs provide the same functionality, where we can have one customer on one part of the switch and another customer on another part of the switch. Because we’ve configured this with separate VLANs, these two customers still cannot communicate directly with each other. It’s as if we had two separate physical devices, but instead we now have them logically separated inside of the same device. As with physical segmentation, if we did need communication between these VLANs, we would send that traffic through a separate device, such as a router, that can allow us to communicate between these separate VLANs.

It’s very common to install services on our local network that we would like to provide to people who may be on the internet. But we don’t want to provide access to the internal part of our network for people who are coming in from the internet. So instead, we’ll build a completely separate network just for that incoming traffic. This is sometimes referred to as a screened subnet; you may also hear this referred to as a Demilitarized Zone, or DMZ. This allows people to come in from the internet, usually connecting to a firewall, and the firewall redirects them to the screened subnet. Instead of accessing our internal network, which would be on this side, all of the users access the services on the screened subnet, and we can set additional security to make sure that no one has access to the inside of our network while still providing access to the applications that are on our network.
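The firewall decision described above can be sketched as a simple destination check. This is a hedged illustration only: both address ranges are example RFC 1918 prefixes chosen for the sketch, and a real firewall would also filter by port, protocol, and state.

```python
import ipaddress

# Sketch of a screened-subnet policy: connections arriving from the
# internet may reach hosts on the screened subnet, but never the
# internal network. Both prefixes are assumed example values.
SCREENED_SUBNET = ipaddress.ip_network("192.168.100.0/24")
INTERNAL_NET = ipaddress.ip_network("10.0.0.0/8")

def permit_from_internet(dest_ip: str) -> bool:
    """Decide whether an inbound internet connection may proceed."""
    ip = ipaddress.ip_address(dest_ip)
    if ip in SCREENED_SUBNET:
        return True    # public-facing services live here
    if ip in INTERNAL_NET:
        return False   # the internal network is never directly reachable
    return False       # default deny for anything else
```

A request to a web server on the screened subnet is allowed through, while the same request aimed at an internal address is dropped, which is exactly the separation the screened subnet is built to enforce.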

You may find that an extranet is a very similar design to a screened subnet. We have the internet, there’s a firewall, and a separate network that has been designated as an extranet. We still have our internal network, where all of our internal resources are, but we’ve built out this separate extranet for vendors, suppliers, and other partners of ours that need access to our resources. Unlike a screened subnet, an extranet commonly requires additional authentication. So we would not allow open access to our extranet from the internet; instead, there would be an authentication process or a login screen that would then gain you access to the extranet.

Your organization might also have an intranet. An intranet is very different from a screened subnet or an extranet, because an intranet is only accessible from the inside of your network. So you might be at your headquarters network, your remote site number one, or your remote site number two, and from all of those networks you can access the internal part of your network and gain access to the intranet. This intranet commonly has internal servers that provide company announcements, employee documents, and other company information, and it’s only accessible by employees of the company. Since your intranet usually contains company private information, you would never want to make it available to others outside of the network. The only way to access the intranet is if you are on an internal network already, or you’re accessing the internal network through a VPN connection.

If you’re managing data flows within a data center then you have another set of segmentation challenges. There may be hundreds or thousands of devices inside of your data center and there may be hundreds of thousands of users who are accessing those services. For all of these applications running in the data center, it’s important to know what the data flows happen to be. You need to understand where data is coming from, where people are connecting from, and where you’re sending the data to. In a data center it’s very common to refer to these data flows with directions. For example East-West traffic would be traffic between devices that are in the same data center, and because they’re local inside of that same building, you usually get very fast response times between those devices. North-South traffic refers to data that is either inbound or outbound from our data center, and we are usually setting up different security policies for that type of traffic because very often that’s coming from an unknown or an untrusted source.

Here’s a better picture of our data center. We have an internet connection that’s coming into some core routers, those core routers are then connecting to redundant switches, and the switches are connecting to file servers, web servers, directory servers, and image servers. If any of these devices inside of our data center are communicating outside of the data center, then we’re referring to that as North-South traffic. So we may have traffic coming inbound and then going back outbound over the internet, and that would be North-South traffic. Anything that’s communicating internally within our data center, for example our web servers may be communicating to directory services or file servers, all of that traffic that stays within the building is referred to as East-West traffic.
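The East-West versus North-South distinction above can be expressed as a short classification function. The data center prefix below is an assumed example; in practice a data center spans many prefixes, but the idea is the same: if both endpoints are inside the building, the flow is East-West, otherwise it is North-South.

```python
import ipaddress

# Classify a flow by direction relative to the data center.
# The data-center prefix is an invented example value.
DATA_CENTER = ipaddress.ip_network("10.10.0.0/16")

def classify_flow(src: str, dst: str) -> str:
    """Return 'east-west' if both endpoints are inside the
    data center, otherwise 'north-south'."""
    def inside(ip: str) -> bool:
        return ipaddress.ip_address(ip) in DATA_CENTER
    return "east-west" if inside(src) and inside(dst) else "north-south"
```

A web server talking to a directory server in the same facility classifies as East-West, while a connection arriving from an internet address classifies as North-South and would typically be evaluated against a stricter security policy.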

One of the challenges with data centers and other network configurations is once you get inside of the network, we don’t tend to have a lot of security controls. Traditionally, if you’ve been inside the network then there is an inherent level of trust that we’ve associated with that device. But as we’ve seen, once malicious software gets on the inside of your network, having no security controls effectively means that software can spread to every other device inside of your network. Instead of trusting every device, we’ve changed the model to be zero trust, which means you trust nothing else on your network and there has to be additional authentication and additional methods in place to make sure that the data flows that are occurring are data flows that should be occurring for those applications.

With zero trust every device, every application, and every data flow is considered to be untrusted. This means that the flows that would normally go across our network without any checks whatsoever, are now suddenly subject to authentication, encryption, and other security controls. So we might set up multifactor authentication between our devices where no authentication existed before. Maybe we’re including encryption on the core of our network, where normally we didn’t bother with encryption. And of course we’ll have additional permissions, perhaps additional firewalls and other security controls, to make sure that we can verify every data flow that’s occurring on the inside of our network using this zero trust model.
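A toy policy check can make the zero trust contrast concrete. Everything here is hypothetical: the flow fields, host names, and allow-list are invented for the sketch. The point is that no flow passes by default; it must be authenticated, encrypted, and explicitly permitted, even between two internal devices.

```python
from dataclasses import dataclass

# Toy zero-trust policy check: nothing on the network is trusted by
# default. A flow is permitted only if the peer has authenticated,
# the channel is encrypted, AND an explicit rule allows that exact
# source/destination pair. All names are illustrative.
@dataclass
class Flow:
    src: str
    dst: str
    authenticated: bool
    encrypted: bool

# Explicit allow-list of verified data flows (example entries only).
ALLOWED_PAIRS = {("web-01", "db-01")}

def permit(flow: Flow) -> bool:
    """Deny unless every zero-trust condition is satisfied."""
    return (flow.authenticated
            and flow.encrypted
            and (flow.src, flow.dst) in ALLOWED_PAIRS)
```

Under the old perimeter model, an unauthenticated internal flow would have been implicitly trusted; here the same flow is denied unless it authenticates, encrypts, and matches an approved data flow.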