A vulnerability scan can identify many different vulnerability types. In this video, you’ll learn some of the most common types of vulnerabilities.
When a security researcher finds a vulnerability in an operating system or an application, they qualify the type of vulnerability that it is. There are many different kinds of vulnerabilities. Some are digital and based in code, and others are physical and based in the world around us.
They cover a very broad scope. Many of them are based on programming or coding errors. Some of them might be network design problems. And others might be a procedural problem that causes the vulnerability to occur. Of course, the bad guys only need one vulnerability to gain access to your systems. Sometimes they might even combine multiple vulnerabilities together, so you have to be very diligent about making sure none of these vulnerabilities are available to them.
A race condition is a coding problem. And that’s because on the systems we use these days, there are usually multiple users performing multiple functions all at the same time. And if your coding has not taken into account that these multiple things could happen simultaneously, you will run into a race condition.
Let’s take an example of two bank accounts, Account A and Account B, each with $100 in them. You also have User One and User Two, and both of them transfer $50 from Account A to Account B at the same time. If the system is performing the right checks, one of two things should happen: either only one transfer goes through, leaving Account A with $50 and Account B with $150, or both transfers are processed in sequence, leaving Account A with zero and Account B with all $200.
But in the case of a race condition like this, there has to be validation in place. And if there isn’t validation, you might get a race condition situation like this. User One and User Two check the account balances. There’s $100 in each account. User One transfers $50 from Account A into Account B, so Account A now has $50 and Account B now has $150.
At the same time, User Two also transfers $50 from Account A. With no checks in place, User Two’s transfer is calculated from the stale $100 balance, so Account A is written down to $50 again, and Account B now has $200 inside of it. Because these validation checks weren’t in place and a race condition occurred, we now have two accounts that, combined, hold $250, which is more money than should be possible after both of these transfers.
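The lost update above can be sketched in a few lines of C. This is a minimal illustration, not production banking code; `Accounts` and `transfer_unchecked` are hypothetical names, and the stale-balance parameter stands in for a balance the user read before the other transfer completed.

```c
typedef struct { int a; int b; } Accounts;

// Models the unchecked transfer from the example: the new balance for
// Account A is computed from a balance that was read earlier, so a
// concurrent transfer's update to Account A is silently overwritten.
void transfer_unchecked(Accounts *acct, int stale_balance_a, int amount) {
    acct->a = stale_balance_a - amount;  // stale read: the lost update
    acct->b = acct->b + amount;
}
```

Replaying the interleaving from the example: both users read a $100 balance, each transfers $50, and the accounts end at $50 and $200, a combined $250. A real fix serializes the transfers (a lock or a database transaction) so each one recomputes from the current balance.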
These race conditions can cause significant problems. Take the case of the Spirit rover, which landed on Mars in January of 2004 and began rebooting over and over again once it arrived. There was a problem with the file system, and the rover was set up to automatically reboot if there were any problems. Since the problem was with the file system that was accessed during the reboot process, we had a reboot loop: it would reboot, find the problem with the file system, and reboot again. Ultimately, the controllers back here on Earth were able to put the rover into a safe mode that bypassed the file system, boot into a minimal configuration, and correct the problem.
GE Energy also had a race condition with their energy management system. This happened when the management system that is supposed to identify when problems occur ran into a problem where three power lines failed simultaneously. And you ultimately had a race condition where none of those three events were able to get through, because all three of them were waiting for the other one to complete. This is the ultimate race condition.
Because nobody was able to see these alert messages, this ultimately caused the Northeast Blackout of 2003, where 10 million people in Ontario and over 45 million people in the United States were without power. In some areas, it took a week or two for these folks to get their power back.
And a race condition in a radiation therapy machine in the 1980s, the Therac-25, was deadly. The operators of these therapy machines were very quick, and the software was not prepared for how fast those changes would be made. The software interlocks in these systems ran into a race condition and did not put the proper safety precautions in place. Some patients received 100 times the normal dose of radiation; six were seriously injured, and three of them died, all because of a software race condition.
A vulnerability that might sneak up on you is an end-of-life vulnerability. This is when a device, a component, or a piece of software is no longer under support from the vendor. And usually when this happens, the vendor stops providing any type of security patches. We saw this in March of 2017, when Microsoft patched Windows to protect against an SMB vulnerability. Except when they did that, Windows XP, Windows 8, and Windows Server 2003 were end-of-life and were not included in any of those security patches. In May 2017, the WannaCrypt ransomware arrived and took advantage of those vulnerabilities. Hundreds of thousands of computers were infected, and all of those end-of-life systems were wide open. Microsoft had to go back and create patches for those end-of-life systems because this was such a vast vulnerability. Ultimately, the answer for an end-of-life vulnerability is to not allow the end-of-life to happen in the first place: always upgrade to the latest version of the software or the operating system so that you continue to receive security patches.
We use embedded systems all the time. Just in our homes, we have doorbells, microwave ovens, and wireless routers that use embedded operating systems. We don’t have any direct access to the operating system. We’re simply using a user interface on the front of that device.
These devices very often are connected to the internet, which makes it really convenient if somebody wants to gain access to these systems. Usually, they’re running an outdated version of software, because embedded systems aren’t usually upgraded. The system is self-contained. It’s one that rarely changes, and therefore upgrades aren’t always necessary. But of course, if these embedded systems aren’t updated with the latest security patches, they could be vulnerable to new exploits.
We saw a very good example of how an embedded system could be exploited. In June 2017, Wikileaks released some CIA files called Vault7. And in those files, we found that the CIA in the United States was taking advantage of unknown vulnerabilities on both Linksys and D-Link routers. They could gain administrative access to those systems. And at that point, they were installing their own firmware that allowed them to watch all of the traffic going through those devices.
It’s important that the vendors that create this software and hardware are always there to maintain it. But of course, they have to exercise constant diligence to look for vulnerabilities and to provide those patches. Those vendors, especially of these embedded systems, are really the only ones who can fix these problems. They’re the only ones with access to the code, and ultimately, it’s their responsibility to patch these vulnerabilities.
We have a renewed concern with vendor support because of all of the Internet of Things devices that are connected to our networks these days. A good example of this is a thermostat from Trane, the Comfortlink II. You can control the temperature from your phone and monitor the temperature in your home. Trane was notified of three vulnerabilities in April of 2014. It took them until April of 2015 to patch two of them and until January of 2016 to patch the third. This is where the vendor support becomes very important to be able to provide timely updates and security patches.
When we human beings use an application, we’re usually putting input into the application, and the application is giving us something in the output. All of the input going into that application should be considered malicious by the application developer.
You should always perform checks of all of the input and trust nobody who’s using that application. If somebody is able to put in a SQL injection, a buffer overflow, or provide some type of input that will cause the application to crash and cause a denial of service, then you have a significant security problem. It can take a lot of time and a lot of effort for security researchers, or the bad guys, to find these vulnerabilities in these applications. But if they can find the right kind of vulnerability with that input, they might be able to gain access to your systems.
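One common way to distrust input is an allow-list check before the value ever reaches a query or a buffer. Here is a minimal sketch in C, assuming a field that should only ever contain digits; `is_valid_account_id` is a hypothetical name for illustration.

```c
#include <ctype.h>
#include <stdbool.h>
#include <string.h>

// Treat all input as hostile: accept only what the field should contain.
// Here, an account ID must be 1 to 10 digits and nothing else.
bool is_valid_account_id(const char *input) {
    size_t len = strlen(input);
    if (len == 0 || len > 10)
        return false;                              // empty or oversized: reject
    for (size_t i = 0; i < len; i++)
        if (!isdigit((unsigned char)input[i]))
            return false;                          // non-digit: reject
    return true;
}
```

Anything that isn’t exactly what the field expects, including SQL injection strings and oversized values, is rejected before it can do any harm.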
With any operating system or application, there will be errors. And there will be error messages that appear on the screen to give you information about where the problem is occurring. We want to be sure that these error messages provide just the right amount of information and not too much detail that could be used for malicious purposes.
You don’t want to provide a lot of network details or memory information. You certainly don’t want to provide stack traces or details about the database. You want all of the information to be just detailed enough so that the application developers can resolve these particular error messages. This is obviously an easy one to find and fix. And it really starts and ends with the developers of the applications to make sure that the error messages provide the right amount of detail.
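A common pattern that satisfies both goals is to split the detail: full diagnostics go to an internal log, and the user sees only a generic message with a reference ID that support can correlate. A minimal sketch, with hypothetical names:

```c
#include <stdio.h>
#include <string.h>

// Full diagnostic detail goes to an internal log only; the
// user-facing message is generic, carrying just a reference ID.
void handle_db_error(FILE *log, char *user_msg, size_t user_msg_size,
                     int ref, const char *internal_detail) {
    fprintf(log, "[ref %d] %s\n", ref, internal_detail);
    snprintf(user_msg, user_msg_size,
             "An error occurred (ref %d). Please contact support.", ref);
}
```

The stack traces and database details the developers need still get recorded, but they never appear on the user’s screen.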
If there is a weak link in the chain, the bad guys are going to find it. You have to be very diligent about making sure that all of your systems are configured properly and that you haven’t left any doors or windows open. A good example of a misconfiguration occurred in September of 2015, when Patreon was compromised because a debugger used to provide troubleshooting information had been left running on their web servers.
Ideally, that debugger would only be accessible from inside of Patreon; it’s not something that should be exposed to the internet. People were able to use this debugger to perform remote code execution, which effectively allowed them to run whatever process they wanted on the Patreon servers. Gigabytes of customer data were not only stolen but also released online.
An example of a weak configuration was found in June 2017 when a security researcher found 14 million Verizon customer records were exposed publicly to the internet. They were actually left on an Amazon S3 cloud repository, but were left wide open with no authentication credentials at all. Fortunately, this researcher found the data before anybody else could gain access to it, but you can see how a single misconfiguration can turn into a significant security breach.
Every device that you put on your network to manage has a default username and a default password. Unfortunately, not everybody changes the defaults. And very often, the defaults are there and available for anybody to use. A good example of someone who takes advantage of these defaults is the Mirai botnet. It checks all of these different Internet of Things devices that you might have in your home and tries every possible default username and password combination.
It knows well over 60 different default configurations, ranging across cameras, routers, doorbells, garage door openers, and all of those other devices that we now have connected to our networks. To layer on additional problems, the Mirai botnet was released as open source software, which allows anyone to grab that code and customize it for their own purposes. I have a feeling we haven’t heard the end of this particular botnet type.
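Defending against this starts with knowing whether a device is still on factory credentials. The sketch below screens a username/password pair against a small illustrative list of well-known defaults; the real Mirai dictionary is much longer, and `uses_default_credentials` is a hypothetical helper name.

```c
#include <stdbool.h>
#include <string.h>

// Illustrative subset of factory defaults tried by Mirai-style botnets.
static const char *known_defaults[][2] = {
    {"admin", "admin"}, {"root", "root"}, {"admin", "password"},
    {"root", "12345"},  {"admin", ""},
};

// Returns true if the pair matches a well-known factory default.
bool uses_default_credentials(const char *user, const char *pass) {
    size_t n = sizeof known_defaults / sizeof known_defaults[0];
    for (size_t i = 0; i < n; i++)
        if (strcmp(user, known_defaults[i][0]) == 0 &&
            strcmp(pass, known_defaults[i][1]) == 0)
            return true;
    return false;
}
```

A device audit that flags any match forces the defaults to be changed before the botnet finds them first.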
One type of physical vulnerability is an untrained user. It only takes one person to cause a breach. And if you aren’t training your people on security techniques, then you may be open for a significant security incident. We usually recommend when you’re performing these security trainings that it’s not something done remotely. It’s not something done over email. It’s helpful if everybody is in person and paying attention to what’s going on.
And although it may be expensive to get everybody there in person and in a single room for half a day, it’s much less expensive than a security breach that might happen later. This is something that’s ongoing training as well, usually done once a year. And you can bring everyone in and run through different scenarios and see what their response might be to different kinds of attacks.
An improperly configured account can be a significant vulnerability. This is usually more of a process problem than it is a technical issue, but it’s one you have to constantly make sure that you stay on top of. This is one where there might be an account that really isn’t used for anything. Maybe it was set up originally for a particular use but is now no longer used. So of course, you should make sure that those accounts are either disabled or that they’re deleted from your systems.
There might be a number of accounts that have administrative access that probably should not have administrative access. In fact, there should be very few accounts that have that level of power in your systems. So you should make sure that all of your accounts have exactly the amount of access they need to perform those functions. And nobody should be able to log in directly as an administrator unless they are on the console of the server itself.
If your organization doesn’t have a set of checks and balances to be able to handle your most significant business processes, the bad guys may be able to take advantage of that. A good example is with something called SWIFT. This is the Society for Worldwide Interbank Financial Telecommunication. It’s a way that banks can electronically send payments to each other.
Well, in February of 2016, at Bangladesh Bank, malware was used to steal the credentials used for these transactions. The hackers, though, misspelled the word “foundation” as “fundation” in one of the transfer requests, and this caught the eye of someone who thought something might not be quite right. Although most of the attempted $951 million was never transferred, the attackers did manage to steal $81 million, and SWIFT had to completely redesign the process to prevent anything like that from happening again.
We often say to protect our data you should use encryption. But the reality is, there are hundreds and hundreds of different ways to encrypt data. And if you’re not using a strong cipher suite, you may be encrypting data that could be easily decrypted by the bad guys.
A cipher suite consists of the encryption protocol itself (this could be AES, 3DES, or a number of other encryption protocols) and includes the length of the encryption key. Generally, the longer the key, the stronger the encryption. And usually, the suite will include a hash so that you can perform an integrity check of the data when it’s received. This hash might be SHA or MD5 or one of the many other hashing algorithms out there.
The problem is that some cipher suites are easier to crack than others. You want to make sure that you’re using the strongest ones and staying updated so that you can avoid any vulnerabilities in those suites. One of the most common places you’ll run into cipher suites is TLS (or the older SSL) that we use on our web servers. There are over 300 different cipher suite combinations defined for TLS.
Well, the challenge for a security professional, then, is to figure out which ones are good and which ones are bad. You should, of course, avoid anything with weak or null encryption, anything with a key shorter than 128 bits, for example, and make sure that you’re using the latest hashing technologies. Don’t use hashes like MD5 that have known vulnerabilities associated with them.
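As a rough illustration of that screening, a suite’s name encodes its cipher and hash, so obviously weak markers can be flagged by inspection. This is only a sketch with a hypothetical `suite_looks_weak` helper; a real policy should come from current hardening guidance, not substring checks.

```c
#include <stdbool.h>
#include <string.h>

// Flags suite names carrying well-known weak markers, e.g.
// "TLS_RSA_WITH_NULL_SHA" or anything using RC4, DES/3DES, or MD5.
bool suite_looks_weak(const char *suite_name) {
    return strstr(suite_name, "NULL")   != NULL ||
           strstr(suite_name, "EXPORT") != NULL ||
           strstr(suite_name, "RC4")    != NULL ||
           strstr(suite_name, "DES")    != NULL ||  // matches DES and 3DES
           strstr(suite_name, "MD5")    != NULL;
}
```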
The applications that we use are executing in memory. So if we can manipulate the memory, we should be able to manipulate the application. One type of memory vulnerability is a memory leak. This is when memory is allocated during the execution of a program and is never freed when the program is finished using it. As you use the application more and more, it continues to consume more and more memory. It slowly grows to use up all available memory and ultimately crashes either the application or the computer it’s running on.
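A memory leak can be made visible with a simple allocation counter. This sketch wraps `malloc`/`free` in hypothetical `tracked_alloc`/`tracked_free` helpers; the leaky handler allocates on every call and never frees, so the count of live blocks only grows.

```c
#include <stdlib.h>

static int live_allocations = 0;   // illustrative count of un-freed blocks

void *tracked_alloc(size_t n) { live_allocations++; return malloc(n); }
void  tracked_free(void *p)   { if (p) { live_allocations--; free(p); } }

// Leaky handler: allocates scratch space on every call and never
// releases it, so a long-running process slowly exhausts memory.
void leaky_handler(void) {
    char *scratch = tracked_alloc(4096);
    (void)scratch;   // used for a while, then forgotten: no tracked_free()
}
```

Tools like Valgrind or AddressSanitizer do essentially this bookkeeping for real programs.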
Another type of memory vulnerability is an integer overflow. This is when you’re trying to put a very large number into a place that has a very small allocated area. So the question is, what happens to that extra number, and how does it affect the execution of that application? Does it allow people to gain access to the system or the data that’s running inside of that system? This is what the bad guys are looking for, is to find ways to manipulate that memory to gain them an advantage inside of that system.
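In C, that “extra number” is simply discarded: storing a large value into a small unsigned field keeps only the low bits, so the stored quantity silently wraps around. A minimal sketch with a hypothetical `store_quantity` helper:

```c
#include <stdint.h>

// Storing a large value into an 8-bit field keeps only the low
// 8 bits, so the value silently wraps around (300 becomes 44).
uint8_t store_quantity(int requested) {
    return (uint8_t)requested;
}
```

If an application later uses a wrapped value like this for a length or a price check, an attacker can slip a huge number past a “small enough” test.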
Another type of vulnerability that’s very similar to the integer overflow is a buffer overflow. This is when a certain amount of space has been allocated to store a variable in an application. If the application lets you store a value larger than that allocated space, the data spills over into other memory areas and potentially allows more access to the system than would normally be available.
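The classic vulnerable pattern is an unbounded `strcpy` into a fixed-size buffer; the fix is to check the input against the allocated space first. A minimal sketch with a hypothetical `safe_copy` helper:

```c
#include <string.h>

// The vulnerable pattern looks like:
//   char buf[8]; strcpy(buf, input);   /* no bounds check: overflow */
// The bounded version refuses anything that won't fit.
int safe_copy(char *dst, size_t dst_size, const char *input) {
    size_t len = strlen(input);
    if (len >= dst_size)
        return -1;                // would spill past the buffer: refuse
    memcpy(dst, input, len + 1);  // +1 copies the terminating NUL
    return 0;
}
```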
Another type of memory vulnerability, one that usually causes the application to crash or a denial of service, is a NULL pointer dereference. This is where the program dereferences a pointer expecting it to refer to valid memory, except in this case the pointer is NULL, there’s nothing at that address to dereference, and the application crashes.
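The defensive pattern is simply to check the pointer before dereferencing it. A minimal sketch, with a hypothetical `balance_or_default` helper standing in for any lookup that can fail:

```c
#include <stddef.h>

// A lookup that can fail returns NULL; dereferencing that NULL would
// crash the application, so the pointer is checked first.
int balance_or_default(const int *balance_ptr) {
    if (balance_ptr == NULL)
        return 0;           // guard: nothing there to dereference
    return *balance_ptr;
}
```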
Another type of vulnerability that you commonly see in an operating system is a DLL injection. These are libraries used by applications. And the bad guys will put their own libraries in place so that when the application references the library, they are effectively referencing the bad guys’ code.
Today’s virtual environments allow us to create a workstation, a server, or any type of other system with just the click of a mouse. Instantly, we have a new service running. And instead of having to wait to get physical devices in and set up in a rack, we can simply click a few buttons and we’ve got an entire infrastructure ready to go.
As you can imagine, in those environments where you’re able to build systems instantly, it can be very difficult to keep track of all of these operating systems that are running in your environment. Of course, from a security perspective, we have to make sure that all of our systems are updated to the latest security patches. So you also have to keep track of where all of these virtual systems live.
We not only have to keep track of our virtual systems, but we also have to keep track of our physical systems. In environments where we have laptops that move around, mobile devices, and even machines that might be sitting under somebody’s desk, we’ve got to make sure that we’re able to cover all of those with security patches. It’s these forgotten or misplaced systems that don’t get a security patch and become vulnerable to the bad guys. The attackers gain a foothold, and at that point, they have access to your internal network.
If you didn’t put locks on the front door of your building, that would be a significant security weakness. We have the same problem with our operating systems and our applications. We have to make sure we have the proper locks on those systems, so the bad guys don’t simply walk in the door.
You have to make sure you examine every part of the network: the place where your internet connection comes in, your VPN connectivity, and any third parties that are allowed access into your network. And you have to think about the front door and the conference rooms, because there are network connections there that could be used for malicious purposes.
Security professionals are always worried about when the next zero day attack is going to happen. It’s the vulnerabilities we don’t know about in our operating systems and our applications that can really come back to bite us. Some vulnerabilities have been sitting in these versions of software for years, and then suddenly someone will find them and begin exploiting them. Normally, we find these and are able to patch them very quickly. But it’s when we either don’t patch a system or don’t patch it fast enough that we really get burned by a zero day vulnerability.
A good example of this was with WannaCry. It was a very prominent ransomware that hit on May the 12th of 2017 and took the world by storm. Hundreds of thousands of systems were infected with WannaCry, even though the patch for the underlying vulnerability had been available since March the 14th. Because it takes time to roll these patches out, some people were not protected, and the bad guys were able to take advantage of this vulnerability.
The foundation for encryption in your organization is usually handled through a centralized set of certificate authorities. And of course, there are a number of keys and certificates that you have to manage. But how is this done in your organization, and is there a formal process to manage all of these? If you’re somebody just starting out, you will want to think about what the certificate authority is going to be, what software is going to run that certificate authority, and what structure will it have in your organization.
How will you protect the keys and the content on those certificate authorities, and how will you create and manage intermediate certificate authorities in your organization? Will there be a formal process to create, validate, and sign the certificates for your organization, and what will that process look like? And of course, there will be many other questions about the certificates as you begin rolling these out in your organization.
Category: CompTIA Security+ SY0-501