Now that you’ve acquired your security technology, you’ll need somewhere to place it in your network. In this video, you’ll learn about some of the best practices for placing sensors, firewalls, SSL concentrators, and other security technologies.
Most networking and security professionals will have sensors and collectors placed at critical spots in their network. These may be built-in sensors that are part of the existing infrastructure, integrated into switches, routers, servers, and firewalls. And since all of this information is already available in these devices, the collectors can simply gather the data from these sensors and report on the metrics.
The type of sensors and the information those sensors can gather can vary widely from one system to another. For example, the type of information you get out of an intrusion prevention system is going to be very different from the authentication logs generated by an authentication server. The same diversity applies to web server access logs, database transaction logs, email logs, and all of the other types of services and logs that you’re collecting in your environment.
The sensor’s job is to provide the raw data. It’s the collector’s job to bring all of that data back to one place and somehow make sense of this very diverse information. Sometimes these are proprietary consoles, such as the console you use for your IPS or your firewall, or it might be a more generic SIEM console that you can use to bring lots of different kinds of data back to one single collection point.
An interesting aspect of many SIEMs is that they not only collect all of this data and store it in a central database, they can also begin correlating this information together. For example, they may find related events on a switch, a router, and a firewall, and piece these together to indicate that an attack may be occurring on your network.
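To make that correlation idea concrete, here’s a minimal sketch of the kind of logic a SIEM might apply. The event records, device names, and the threshold are all hypothetical; real SIEM correlation rules are far richer, but the core idea of grouping events by a shared attribute looks something like this:

```python
from collections import defaultdict

# Hypothetical, simplified log events; real SIEM records carry many more fields.
events = [
    {"device": "switch-01",   "src_ip": "10.0.0.99", "msg": "port scan detected"},
    {"device": "router-01",   "src_ip": "10.0.0.99", "msg": "unusual routing probe"},
    {"device": "firewall-01", "src_ip": "10.0.0.99", "msg": "blocked outbound SSH"},
    {"device": "firewall-01", "src_ip": "10.0.0.5",  "msg": "blocked outbound SSH"},
]

def correlate(events, threshold=3):
    """Flag source IPs that were reported by at least `threshold` distinct devices."""
    seen = defaultdict(set)
    for e in events:
        seen[e["src_ip"]].add(e["device"])
    return [ip for ip, devices in seen.items() if len(devices) >= threshold]

print(correlate(events))  # ['10.0.0.99']
```

Here 10.0.0.99 shows up on the switch, the router, and the firewall, so the correlation flags it, while 10.0.0.5 only appears on one device and stays below the threshold.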
If you’re looking for a way to allow or block certain types of traffic from going through your network, then you’re probably looking to implement either a packet filter or a firewall. Packet filters usually follow a very simple line of logic. They don’t track any network state, so there’s usually one set of rules for traffic going in one direction and a completely different set of rules for traffic going the other direction. It’s common to use a packet filter in Linux with iptables, the Linux firewall, and this is something that usually runs inside of a server or a single device rather than on an appliance connected to the network.
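Here’s a toy model of that stateless behavior. The rule sets and the packet fields are simplified assumptions, but they show why a stateless filter needs separate rules per direction: each packet is judged on its own, with no memory of any connection.

```python
# Toy stateless packet filter: rules are checked in order, per direction,
# with no connection tracking, so each direction needs its own rule set.
INBOUND_RULES = [
    {"proto": "tcp", "dst_port": 80,  "action": "accept"},
    {"proto": "tcp", "dst_port": 443, "action": "accept"},
]
OUTBOUND_RULES = [
    {"proto": "tcp", "dst_port": 25, "action": "drop"},   # block outbound SMTP
]

def filter_packet(packet, rules, default="drop"):
    """Return the action of the first matching rule, else the default policy."""
    for rule in rules:
        if packet["proto"] == rule["proto"] and packet["dst_port"] == rule["dst_port"]:
            return rule["action"]
    return default

print(filter_packet({"proto": "tcp", "dst_port": 443}, INBOUND_RULES))  # accept
print(filter_packet({"proto": "tcp", "dst_port": 22},  INBOUND_RULES))  # drop
```

The first-match-wins evaluation and the default policy at the end mirror how rule chains behave in tools like iptables, though the real rule language is much more expressive.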
More advanced state-based filtering occurs in a firewall. Firewalls are able to provide filtering based on IP addresses, port numbers, applications, and other criteria as well. This is usually a purpose-built piece of hardware, although there are software-based firewalls as well. It’s common to have a firewall at the ingress/egress point of your network, usually where your internet is connecting, so that you can set up rules to protect everyone on the inside of your network. There are occasions where organizations will put firewalls on the inside of their network. A good example of this would be firewalling all of your users away from the data that’s in your data center.
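The difference state tracking makes can be sketched in a few lines. In this toy model (all names and tuples are illustrative), an outbound connection creates a state entry, and inbound traffic is only accepted if it matches an established entry, which a stateless filter cannot express:

```python
# Toy stateful firewall: an outbound connection creates a state table entry,
# and an inbound packet is accepted only if it's a reply to a tracked connection.
state_table = set()

def outbound(src_ip, src_port, dst_ip, dst_port):
    state_table.add((src_ip, src_port, dst_ip, dst_port))
    return "accept"

def inbound_reply(src_ip, src_port, dst_ip, dst_port):
    # A legitimate reply reverses the tuple: it comes from the original destination.
    if (dst_ip, dst_port, src_ip, src_port) in state_table:
        return "accept"          # part of an established connection
    return "drop"                # unsolicited inbound traffic

outbound("10.0.0.5", 50000, "93.184.216.34", 443)
print(inbound_reply("93.184.216.34", 443, "10.0.0.5", 50000))  # accept
print(inbound_reply("203.0.113.7", 443, "10.0.0.5", 50000))    # drop
```

A real firewall also tracks protocol details such as TCP flags and sequence numbers and ages entries out of the table, but the reversed-tuple lookup is the heart of the idea.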
In a previous video, we looked at the design of proxy servers and how they operate. Proxy servers are designed to be an intermediate point between the user and the service they’re trying to access. The client makes its request to the proxy server, and the proxy server then makes that request to the service on the client’s behalf. It receives the response from the service, examines the response to make sure it’s safe for the user, and then sends the answer down to the original requester.
The proxy is not only handling the traffic flow between one device and another, it’s also able to add security controls on top of that. You can define access control, caching, URL filtering, and other types of content filtering in the proxy itself. If you’re installing a proxy to protect your inside users from things on the internet, then you’re usually installing a forward proxy.
The forward proxy is installed somewhere within your organization, and you configure all of your users to communicate with the proxy instead of communicating directly out to the internet. The proxy will receive communication from the user, send that out to the resource on the internet, receive the response, and then forward that response to the user. So you can see, the proxy is a perfect middle point to be able to provide security and filtering of this data.
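That request/response flow can be sketched as a few plain functions. The block list, the domain names, and the fetch function are all hypothetical stand-ins; the point is where the policy check sits, between the client’s request and the proxy’s fetch on the client’s behalf:

```python
# Toy forward-proxy flow: the client hands its request to the proxy,
# the proxy applies policy (URL filtering here), fetches on the client's
# behalf, inspects the response, and returns it to the client.
BLOCKED_DOMAINS = {"malware.example"}   # hypothetical block list

def origin_fetch(url):
    """Stand-in for the real request the proxy makes to the internet service."""
    return f"response body for {url}"

def forward_proxy(url):
    domain = url.split("/")[2]          # crude host extraction, fine for this sketch
    if domain in BLOCKED_DOMAINS:
        return "403 Forbidden: blocked by proxy policy"
    body = origin_fetch(url)            # the proxy makes the request itself
    # ...a real proxy would also scan/inspect the response body here...
    return body

print(forward_proxy("http://www.example.com/index.html"))
print(forward_proxy("http://malware.example/payload"))
```

Because every request funnels through `forward_proxy`, this is also the natural place to hang caching and content inspection, which is exactly why the proxy makes such a convenient security checkpoint.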
If you need to configure VPN communications into and out of your network, then you’re probably going to implement a VPN concentrator. This is usually a purpose-built device, either a standalone VPN appliance or integrated into another technology, such as a firewall. It’s common to install one of these on one side of an internet connection and another VPN appliance on the other end of the VPN connection. You can then configure a VPN tunnel so that all of the information between these two locations is encrypted between the VPN appliances.
If you’re sending encrypted communication to a web server, then you’re probably communicating with the HTTPS protocol. This requires a cryptographic handshake prior to sending the encrypted data, and that handshake takes up a number of CPU cycles. If you have many people all hitting a web server at the same time, that can put quite a load on the web server just to set up and maintain these cryptographic communications.
Instead of having the user communicate directly with the web server, you can put an SSL accelerator in the middle of the communication. This means the client is going to communicate to the SSL accelerator over HTTPS, and the SSL accelerator usually communicates back to the web server over HTTP. It’s effectively offloading this HTTPS encryption process onto a purpose-built appliance. And usually, it is a hardware-based system that is designed to handle this level of encryption. That way, you can have many people communicating simultaneously to your web server, they’re all communicating over an encrypted channel to the SSL accelerator, and the web server is simply handling the web services.
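The offload split can be modeled very simply. The `decrypt`/`encrypt` stand-ins below are not real TLS (a real accelerator does certificate handling and record-layer crypto in hardware); they just show which side of the boundary the expensive work lives on:

```python
# Toy model of SSL/TLS offload: the accelerator handles the expensive
# decrypt/encrypt work, and the web server only ever sees plain HTTP.
def web_server(http_request):
    return "HTTP/1.1 200 OK\r\n\r\nhello"     # no crypto anywhere in here

# Stand-ins for the real TLS record processing done by the appliance:
def decrypt(data): return data.removeprefix("tls:")
def encrypt(data): return "tls:" + data

def ssl_accelerator(encrypted_request):
    http_request = decrypt(encrypted_request)  # CPU-heavy work lives here
    http_response = web_server(http_request)   # plain HTTP to the backend
    return encrypt(http_response)

print(ssl_accelerator("tls:GET / HTTP/1.1"))
```

The design point is the boundary: everything in front of `web_server` can be scaled or accelerated independently, which is why this function so often ends up built into a separate appliance or load balancer.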
You often see this SSL termination or SSL acceleration function built into load balancers. That’s because these load balancers will sit between the internet side and all of your web services. The load balancer is designed to take these requests from the internet and spread the load across multiple servers, thereby increasing the capacity you have to provide these types of web services. These web servers can be added and removed at any time. That means that if it’s busy during the day, you can spin up more servers, and as things calm down at night you can decrease the number of servers that are currently active.
The load balancer is also checking to make sure that all of these servers are up and running. If any of these servers run into a problem and stop communicating, the load balancer will identify that server as being down and will stop sending requests off to the server that’s having the problem. When the server is fixed and comes back up and running, the load balancer automatically recognizes that it’s working again, and begins distributing the load back to that server.
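A minimal sketch of that behavior, with hypothetical server names and a health map standing in for real health checks: requests are distributed round-robin, but only across servers currently marked healthy, so a failed server simply drops out of the rotation.

```python
import itertools

# Toy load balancer: round-robin across the servers that pass a health check.
servers = ["web-01", "web-02", "web-03"]
healthy = {"web-01": True, "web-02": False, "web-03": True}  # web-02 is down

def pick_server(_counter=itertools.count()):
    """Return the next healthy server in round-robin order."""
    up = [s for s in servers if healthy[s]]
    if not up:
        raise RuntimeError("no healthy servers available")
    return up[next(_counter) % len(up)]

print([pick_server() for _ in range(4)])  # ['web-01', 'web-03', 'web-01', 'web-03']
```

Because the healthy list is rebuilt on every pick, flipping `healthy["web-02"]` back to `True` (the equivalent of the server recovering and passing its health check) automatically puts it back into the rotation.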
It can be difficult to stop a distributed denial of service attack, but there are some things you can do to help mitigate the impact of this type of attack. One popular method of DDoS mitigation is through a cloud-based provider. You can have all of your users connecting to a reverse proxy, and that reverse proxy then determines whether the traffic is legitimate. If it’s good traffic, the proxy sends it down to your servers, and the response then goes back through the proxy.
You might also have tools on-site that can help with DDoS mitigation. It’s common to have IPS rules that can recognize very popular types of DDoS attacks, and many firewalls have DDoS mitigation built into the firewall functionality. In all of these cases, this DDoS mitigation functionality will sit between you and everybody else in the world. So you want to design your network so that you can filter or proxy traffic and be prepared for these types of attacks.
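One common building block behind that filtering is a per-client rate limit. This sketch (limits, window size, and IPs all assumed for illustration) shows a sliding-window limiter of the sort a reverse proxy or firewall might apply: a client that exceeds the allowed request rate starts getting dropped.

```python
import time
from collections import defaultdict, deque

# Toy sliding-window rate limiter: drop a client that exceeds
# `limit` requests inside a window of `window` seconds.
class RateLimiter:
    def __init__(self, limit=5, window=1.0):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)     # client IP -> recent request times

    def allow(self, client_ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[client_ip]
        while q and now - q[0] > self.window:
            q.popleft()                    # expire hits outside the window
        if len(q) >= self.limit:
            return False                   # likely flood traffic: drop it
        q.append(now)
        return True

rl = RateLimiter(limit=3, window=1.0)
print([rl.allow("198.51.100.9", now=t) for t in (0.0, 0.1, 0.2, 0.3)])
# [True, True, True, False]
```

Real mitigation combines this with many other signals (SYN cookies, signature matching, reputation lists), but rate limiting per source is a simple first line of defense.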
Our networks are designed a lot like the way that we live. We have the core of our network, which is very much like the downtown area of our large cities, but we also have users connected very far away from the core. So just as we have highways that connect us to the suburbs, we have connections between the core and our end users. Once we get off of those highways we have our small neighborhoods, and those would be the network connections where our users are located. Just as highways connect all of these areas, in our networks we’re connecting copper and fiber together to create this network architecture.
This is a common three-tiered architecture that you might find on a network. We have the core of our network, which would be the downtown area of our world. We have the distribution and aggregation layer– this is where we have larger switches that are able to then connect between the core and our users– these would be the highways that are leading into downtown. And then we have all of the users out in the suburbs. This is our access layer, and this is where the end users are located. All of the end users connect to their neighborhood switch which then connects back to a distribution or aggregation point, and all of those aggregation points are then connected to the core.
If you’re working in information technology, at some point you’re going to need to capture packets. And getting these packets from the network may be a bit of a challenge. There are a number of different ways that you can use to get the raw packet data from the network into an analysis tool. One common way is with a physical tap, like the one I have here. You would disconnect the existing link, put the tap in the middle, and now you’re able to see all of the traffic going back and forth over that single physical connection.
These could be active taps that allow you to switch between many different connections and provide an additional boost of signal as the traffic is going through, or maybe a passive tap, which simply splits the signal and sends a portion of it off to your analysis tool. There is also a software-based version of a tap called a port mirror. You might see this referred to as port redirection or a SPAN, which stands for switched port analyzer.
This is a function that’s built into your network switches. So you can plug a network analysis device into your switch and programmatically tell the switch to take all of the traffic from one particular set of interfaces and send a copy of that traffic down to your analysis tool. This port mirroring functionality may have limitations based on what your switch can handle and the amount of bandwidth that you want to send down to the network analysis tool. But this can be a very useful tool to have when you have no other choices available.
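Whether the traffic arrives from a tap or a SPAN port, the analysis tool ultimately sees raw frames as bytes. This sketch decodes the Ethernet header and the IPv4 addresses from a hand-built sample frame (a real tool would read these frames from the wire, typically via a capture library):

```python
import struct

# Decode a captured Ethernet frame: 14-byte Ethernet header, then,
# for IPv4 (ethertype 0x0800), pull the source and destination addresses
# out of the 20-byte IPv4 header that follows.
def parse_frame(frame):
    dst_mac, src_mac, ethertype = struct.unpack("!6s6sH", frame[:14])
    info = {"ethertype": hex(ethertype)}
    if ethertype == 0x0800:                        # IPv4 payload
        ip = frame[14:34]
        info["src_ip"] = ".".join(map(str, ip[12:16]))
        info["dst_ip"] = ".".join(map(str, ip[16:20]))
    return info

# Minimal fabricated frame: two zeroed MACs, IPv4 ethertype, 20-byte IPv4 header.
ip_header = bytes(12) + bytes([10, 0, 0, 5]) + bytes([93, 184, 216, 34])
frame = bytes(6) + bytes(6) + b"\x08\x00" + ip_header

print(parse_frame(frame))
# {'ethertype': '0x800', 'src_ip': '10.0.0.5', 'dst_ip': '93.184.216.34'}
```

Tools like Wireshark do exactly this kind of field-by-field decoding, just across hundreds of protocols instead of one.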
Here’s a closer look at a physical tap. This is one where you would break the connection and put this tap in the middle. Normally, these red connections would be connected to each other, and the white connections would be connected to each other. But instead, we’re putting the tap in the middle. So the red connection will go into the tap and simply rotate around and come right back out of the tap again. There will be an extra copy that is made inside of the tap that is sent to an outgoing port on the other side. The same thing happens on the white connection. So now you have these extra monitoring ports that you can then use to plug into your analysis tool and capture all of the data between those two devices.