Managing Security – SY0-601 CompTIA Security+ : 2.1

There are various security requirements that need to be considered when managing an organization’s data. In this video, you’ll learn about geographical considerations, SSL inspection, hashing, and API security considerations.




We often think of our technology as being in the cloud or at a data center, but we really do have to think about where this technology is located and how that affects the way we secure the applications and the data. From a legal perspective, we have to understand how security is affected by this difference in location, whether we're doing business across state lines or internationally. We also have to think about whether we need to get to another country for recovery, for maintenance, or for anything else we need to do. There are additional legal concerns.

Your passport, for example, is an important legal document that we use when traveling to another country. All of the personnel associated with performing functions relating to that business process would probably need to have a passport, and that process should already be taken care of before you need to use it.

Every business has some type of legal representation, whether it's someone internal to the company or a third party that you're contracting with, and they should always be involved in the legalities of how you're managing and controlling the data and systems in your network. Some of these same geographical considerations apply to our backups. Where are your backups stored? Are your backups stored on site or off site, and if they are stored off site, what type of access and control do you have over that data? Perhaps even more importantly, what type of access and control does a third party have over the data at the location where that information is stored?

And these geographical considerations become important if we need to declare a disaster and we need to bring up a new data center at a remote location. We need to make sure we have a way to get people to that remote location and we have to make sure that the data and processes we’re managing at that location fall under the legalities for that particular geography. It’s also important that we manage the recovery process if we need to respond to an attack. This has become a commonplace issue across all organizations and we need to have a formal process in place so everyone knows what the next steps will be.

Before any type of response to, or recovery from, an attack, we need to document the entire process from beginning to end. This is probably going to be the most time-consuming part of this entire process, but it's one of the most important parts as well. We need documented processes that help us understand and identify when an attack is occurring, and we need to make sure that if an attack is identified, we're able to contain it and limit its scope.

If we've identified an attack, then we obviously weren't able to prevent it from occurring, but we can prevent it from gaining access to information that could be more damaging. For example, we need to make sure we're able to limit how people can get data out of our network. That exfiltration process is important to the attacker, because it allows them to move that data out of your network and into their own facility.

We also want to be sure that if an attacker does get onto our network, that they are limited in the type and amount of data they have access to. So if there is any sensitive data, we want to be sure that we store it or maintain security to that data so that an attacker would not have access to that sensitive information.

We often talk about the need to secure data as we are sending it across our network, and if we're in a browser connecting to a third-party site, we often refer to SSL, or what is now referred to as TLS, to encrypt that data. SSL is Secure Sockets Layer, and TLS is the newer Transport Layer Security that effectively replaced it. It would actually be unusual to find anyone still using SSL, because it is much older and has long since been superseded by TLS, but we often refer to this pair of protocols as SSL even though most of the time we really mean TLS.
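As a small sketch of how a modern client refuses the legacy protocols, Python's standard `ssl` module (assuming Python 3.7 or newer) lets you set a minimum protocol version on a client context:

```python
import ssl

# A default client context already disables SSLv2 and SSLv3
# in modern Python/OpenSSL builds.
context = ssl.create_default_context()

# Explicitly require TLS 1.2 or newer, rejecting the legacy
# SSL and early-TLS protocols discussed above.
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Any handshake made with this context now refuses older protocols.
assert context.minimum_version >= ssl.TLSVersion.TLSv1_2
```

This is the same idea a browser applies internally: even though we casually say "SSL," the connection that's actually negotiated is TLS.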

One of the challenges for the security professional is that there may be information within this encrypted data that could be malicious and something that we may want to block from coming into, or out of our network. Unfortunately, all of the data is encrypted and if it’s encrypted we have no idea what’s inside of that SSL encrypted information. There are ways though that we can perform SSL or TLS inspection where we can view the information within this encrypted data to be able to determine if there’s anything malicious inside.

Now obviously, this is not something that can be done very easily and it has to be something that has been specially configured on your computer to be able to perform this SSL inspection. But this is a very useful tool for security professionals and you’ll find that many organizations will perform SSL inspection to be able to maintain the security of their data.

The ability to perform this SSL inspection is all based on trust. Your browser trusts the device that it's connecting to across the network and is able to perform the encryption from end to end. What we're able to do with SSL inspection is put ourselves in the middle of the conversation, but continue to have the trust on both the client side and the server side. This trust inside of your browser comes from a list of certificate authorities that is embedded in the browser on your device.

If I were to look at the number of certificate authorities trusted by my browser, I would see over 170 different certificates, and my browser trusts any site whose certificate has been signed by one of those CAs. This means that for every site you visit on the internet, there has to be some interaction with a trusted certificate authority. That trusted certificate authority has to sign the certificate for that web server, and that is the certificate the web server is providing to you.
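As a rough illustration (assuming Python 3 and an operating system that provides a certificate trust store), you can enumerate the certificate authorities your own system trusts with the standard `ssl` module:

```python
import ssl

# A default client context loads the operating system's trusted
# certificate authority store, much like a browser's built-in list.
context = ssl.create_default_context()

# Each entry is a decoded CA certificate. On a typical desktop OS
# this list contains well over a hundred trusted root CAs.
trusted_cas = context.get_ca_certs()

print(len(trusted_cas))
for ca in trusted_cas[:3]:
    print(ca.get("subject"))
```

The exact count varies by operating system and browser, but the principle matches the description above: a pre-installed list of roots that every signed site is checked against.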

You are doing the verification and checking the signature to make sure that the signature on the website matches the signature that would come from one of these certificate authorities, and that is the trust we talk about when we refer to encryption and using SSL. Since we’re simply performing a check to see if this has been signed, we are also trusting that the certificate authority has done their job to make sure the site that they are giving the certificate to is a site that is legitimate.

So they’ve done some checks to make sure that they’re sending the certificate to a trusted email address, they might have made phone calls or performed additional verifications, and only after doing those checks do they sign the web server certificate that you’re seeing when you visit that site. Each time you visit one of these locations, your browser looks at the certificate on that web server, it looks at the signature, and then it checks from its list of over 170 trusted CA certificates, to see if that signature is matched by any one of those. And if that matches, your browser encrypts the information and continues to work normally. If something doesn’t match up, there will be an error message on the screen and a notification that there’s some type of certificate error.

So how can we get into the middle of this conversation and look at the unencrypted data, even though both the server and the client are sending encrypted information from those devices? The way you do it is to first have a device in the middle of the conversation; this is often a firewall or some type of SSL decryption device that is specifically designed to provide this proxy functionality. Inside this device, we're going to add our own internal certificate authority. This would be one that we create just for use internally within our organization, and it would not be used for anything that might be outward facing.

We would then add our internal CA certificate to every user’s device on the network. This means that we become another one of those 170 plus trusted CAs that is listed in a browser. And we’ve done it for every browser, on every device, on our internal network. Now that our internal devices trust our internal CA just as much as they trust an external CA, we can begin the process of communicating securely between a user and a server.

The first thing that happens when a user tries to communicate using this encryption is a hello message sent to the server. So in this case, the user is sending the Hello to the Professor Messer web server. That Hello message is intercepted by our SSL decryption device, and instead of passing that Hello message straight through, that device is going to create a message as a proxy and send that message to the server. The Professor Messer web server is now going to respond with information about its web server certificate, which has been signed by a public CA. The SSL decryption device will check that certificate and make sure that everything in it is valid. If it is, it will create a new certificate for use on the inside of the network. So we're effectively decrypting traffic so that we can look into the data, and then re-encrypting it with our own internal certificate and sending that to the user. That allows us to sit in the middle of the conversation, and everything going outbound and inbound can be decrypted, analyzed, and verified to make sure that nothing inside that data flow is malicious.
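To make the flow concrete, here is a toy simulation of the re-signing step; all the names and structures are hypothetical and this is not real cryptography. The inspection device validates the server certificate against the public CA list, then issues a replacement certificate signed by the internal CA that every client on the network has been configured to trust.

```python
# Toy model of SSL/TLS inspection; certificates are plain dicts,
# and "signing" is just a label check, not real cryptography.
PUBLIC_CAS = {"PublicRootCA"}          # CAs trusted on the internet
INTERNAL_CA = "CorpInternalCA"         # our proxy's private CA

# Clients on our network trust the public list plus our internal CA,
# because we pushed the internal CA certificate to every device.
CLIENT_TRUST = PUBLIC_CAS | {INTERNAL_CA}

server_cert = {"subject": "www.example.com", "signed_by": "PublicRootCA"}

def inspection_proxy(cert):
    """Validate the real server certificate, then re-sign for the inside."""
    if cert["signed_by"] not in PUBLIC_CAS:
        raise ValueError("untrusted upstream certificate")
    # Re-issue a certificate for the same subject, signed by our CA,
    # so the client's encrypted session terminates at the proxy.
    return {"subject": cert["subject"], "signed_by": INTERNAL_CA}

inside_cert = inspection_proxy(server_cert)
assert inside_cert["signed_by"] in CLIENT_TRUST   # the client accepts it
```

The key point the sketch captures is that there are two separate trust relationships: proxy-to-server via a public CA, and client-to-proxy via the internal CA.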

One of the things you've probably noticed in this course is that we use hashing for so many different things. Hashing is used during encryption, digital signatures, and other cryptographic processes. The hashing process itself simply takes an amount of data and represents it as a short string of text; we often refer to that short string as a message digest. This hashing process is a one-way trip. Unlike encryption, where we can encrypt data and then decrypt it to recover the original plain text, you can't do that with hashing.

Once you hash the data and create that message digest, there's no way to undo the process to somehow get back the original data. This is why we use a hashing function for passwords, so that no one looking at our hashed information would be able to discern what the original password might have been. If you've ever downloaded documents or an application from the internet, you may have noticed a hash posted next to that download. It's there so that you can verify integrity once you receive a copy of that download. You can perform your own hashing function and compare it to the hash posted on the public website to make sure that both of those match. If the hash is the same on both sides, then you know the document you've received is exactly the same document that was used to create the original hash.
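A minimal sketch of that verification step, using Python's standard `hashlib` module (the file contents and published hash here are placeholders, not a real download):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 message digest of data as a hex string."""
    return hashlib.sha256(data).hexdigest()

# Publisher's side: hash the original file and post the digest.
original = b"application installer contents"
published_hash = sha256_hex(original)

# Our side: hash what we actually received and compare.
received = b"application installer contents"
assert sha256_hex(received) == published_hash      # integrity verified

# A tampered copy produces a different digest, so the check fails.
tampered = b"application installer contentz"
assert sha256_hex(tampered) != published_hash
```

If the two digests match, the copy you received is byte-for-byte the file the publisher hashed.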

This hashing is also used for digital signatures, because we are able to use the integrity built into the hash, to confirm that we’re receiving the same message that was sent. We can confirm that this message was sent from a particular individual using the authentication features in a digital signature, and we can also verify that it must have come from a particular person using the non-repudiation functionality of the digital signature.

One of the most important characteristics of a hash, and one of the reasons we will often retire older hashes, is that we run into problems with collisions. When you create a hash of a particular value there should only be one way to get that particular hash, and that’s if you were able to hash that original value. You should never take two very different messages, hash them, and somehow end up with the same hash. We call that a collision and it usually indicates that there is some type of cryptographic vulnerability associated with that hash algorithm.

Let's take an example of a hash using a good hashing algorithm like SHA256. SHA256 produces a 256-bit message digest, which works out to 64 hexadecimal characters. Let's hash a simple phrase such as "My name is Professor Messer." with a period at the end. If we perform a SHA256 hash of that, we get a message digest in return. If we perform another hash of some text that's very similar, "My name is Professor Messer!" ending in an exclamation mark, we also get a SHA256 hash, but notice that the hash values are very, very different between the two. If we put them right next to each other, you can see they are dramatically different, even though all we really changed was one single character of the original message. This is why we use the hashing function to provide additional security for our data: if only one single character is changed, we get a dramatically different hash, and we know immediately that we should not trust the information we've received.
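You can reproduce this comparison yourself with Python's standard `hashlib` module; the exact digest values are omitted here, so run it to see how different the two outputs are:

```python
import hashlib

# The two near-identical phrases from the example above.
a = "My name is Professor Messer.".encode("utf-8")
b = "My name is Professor Messer!".encode("utf-8")

digest_a = hashlib.sha256(a).hexdigest()
digest_b = hashlib.sha256(b).hexdigest()

# SHA-256 always yields 256 bits, i.e. 64 hexadecimal characters.
assert len(digest_a) == 64 and len(digest_b) == 64

# Changing one character changes the digest completely.
assert digest_a != digest_b

print(digest_a)
print(digest_b)
```

This behavior, where a tiny input change scrambles the whole output, is often called the avalanche effect.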

With the advent of mobile devices and cloud-based technologies, we're taking more and more advantage of APIs, or Application Programming Interfaces. Not only are we communicating with and controlling applications through these APIs, but we also have to put security methods in place to protect the data being sent and received in these API calls. This means that there can be multiple entry points to the same functionality. There could be interactive users logging in through a login page, and of course we have to make sure that process is secure. But there may also be an application or a mobile app that is accessing or logging into the system, not through the login page but through the application programming interface. Although the functionality is very similar, logging into the app, the method used to perform that login is very different between a login page and the API. So we have to make sure that we are managing the security not only for our interactive users, but also for API-based applications.

One method used by attackers to be able to intercept and perhaps even change this API information, is an on-path attack. This is when the attacker sits in the middle of the conversation and is able to view all of the traffic going back and forth, and in many cases modify or replay some of the traffic it has seen in these API calls. This means that an attacker can sit in the middle of the conversation, get an understanding of the way the API operates, and then put their own data into the API flows.

An attacker might see a bank transfer take place and then create their own API call that transfers funds into their own account. This is called API injection, and it's another concern when making sure that our API applications are secure. And if an API has not been properly developed or tested, an attacker may be able to find a vulnerability that doesn't give them access to the application's functionality but might cause the application to fail. This would be a denial of service attack, and it would prevent anyone else from gaining access to that application. This is why many organizations will have additional security for their API-based applications, especially as it relates to authentication.
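One common mitigation against an on-path attacker modifying or injecting API calls, sketched here with Python's standard `hmac` module (the secret and request format are hypothetical), is to sign each request with a shared secret that the attacker never sees:

```python
import hashlib
import hmac

# Hypothetical per-client secret, provisioned out of band.
SHARED_SECRET = b"per-client secret provisioned out of band"

def sign(message: bytes) -> str:
    """Attach an HMAC-SHA256 tag that the server can recompute."""
    return hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Constant-time comparison avoids a timing side channel."""
    return hmac.compare_digest(sign(message), tag)

request = b"POST /transfer?to=checking&amount=100"
tag = sign(request)
assert verify(request, tag)                 # legitimate request passes

# A modified or injected request fails verification,
# because the attacker cannot compute a valid tag.
forged = b"POST /transfer?to=attacker&amount=100"
assert not verify(forged, tag)
```

Combined with encrypted transport, this means an on-path attacker who observes API traffic still can't craft a valid call of their own.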

Use of the API will be limited to only those applications or users who are authorized, and communication to that API will occur only over encrypted protocols. The API should have security controls that limit what it can do based on the user's rights and permissions. For example, if a user should only have read-only access to data, then those should be the only API calls accessible by that user. And just as we have a next-generation firewall that we might use for our network traffic, we have a web application firewall that we commonly use for our application traffic. So if we're using web-based or API-based applications, we can configure our web application firewall to monitor and control all of the communication to that particular user's data.
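A minimal sketch of that rights-and-permissions check (the user names, methods, and resources are hypothetical), where the API authorizes every call before doing any work, so a read-only user can never invoke a write operation:

```python
# Hypothetical rights table: which methods each user may call.
PERMISSIONS = {
    "report_viewer": {"GET"},                     # read-only access
    "account_admin": {"GET", "POST", "DELETE"},   # full access
}

def handle_api_call(user: str, method: str, resource: str) -> str:
    """Authorize the call against the user's rights before any work happens."""
    allowed = PERMISSIONS.get(user, set())        # unknown users get nothing
    if method not in allowed:
        return "403 Forbidden"                    # write denied for read-only user
    return f"200 OK: {method} {resource}"

# A read-only user can read but not write.
assert handle_api_call("report_viewer", "GET", "/reports/1") == "200 OK: GET /reports/1"
assert handle_api_call("report_viewer", "POST", "/reports") == "403 Forbidden"
```

In practice this enforcement often lives in the web application firewall or an API gateway rather than in each application, but the principle is the same: the API call set visible to a user matches that user's permissions.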