Code Quality and Testing – CompTIA Security+ SY0-501 – 3.6

Now that you’ve written your application, how can you determine if your code is secure? In this video, you’ll learn how to test and evaluate your application code.



If you’re a developer who would like to test your code for security vulnerabilities, you can run it through a static application security testing (SAST) tool. This static code analysis tool will go through your source code and try to find vulnerabilities, such as buffer overflows and database injections.

But of course, not everything can be found by simply having an automated tool read through your source code. There might be authentication issues, or an implementation of cryptography that makes the application insecure. So you can’t rely on an automated tool to find every possible security vulnerability in your code. Static code analyzers are very good at finding security vulnerabilities, but they’re not perfect. You need to examine the output from one of these tools and confirm that each finding really does correlate back to a security problem.

Here’s an example of some static code analyzer output. You can see the name of the file that was tested, and you can see the issue that was found, for instance, that a particular line does not check for buffer overflow. It gives you some explanation of what that means, and then some options for what you could use instead of the code you’re currently using.
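To make that more concrete, here’s a small sketch (not taken from the analyzer output above) of the kind of issue a static analysis tool flags and the kind of fix it typically suggests. The function names and the users table are made up for illustration.

```python
import sqlite3

# Hypothetical example of the kind of code a static analyzer flags.
# Building the query with string formatting allows SQL injection;
# most SAST tools will report this line and suggest an alternative.
def find_user_unsafe(conn: sqlite3.Connection, username: str):
    query = "SELECT id, email FROM users WHERE name = '%s'" % username  # flagged
    return conn.execute(query).fetchall()

# The typical suggested fix: a parameterized query, where the driver
# handles escaping and the input can't change the query's structure.
def find_user_safe(conn: sqlite3.Connection, username: str):
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")
    print(find_user_safe(conn, "alice"))
```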

Once you’ve written your application, the source code doesn’t change, which is why a static code analyzer can examine it without ever running the program. But the input into a running application can vary quite a bit. For those types of tests, we need dynamic analysis, or fuzzing.

With fuzzing, we take the input to the application and send random information just to see what the application will do. You might hear this referred to as fault injection or syntax testing, but all of these are looking for something out of the ordinary. We’re looking to see whether the application can handle all of this data, whether it produces a server error, or whether it crashes completely, anything that would be an exception to the normal operation of the application. The concept of fuzzing originated in 1988 with a class project at the University of Wisconsin called Operating System Utility Program Reliability, and from that project came the fuzz generator.

Today, there are many different options for fuzzing, because there are many different kinds of applications. You can get a fuzzer that’s made for a particular platform or a particular language, and use it to see what happens with that application. All of these fuzzers constantly feed in data and evaluate the results of that input. They use a lot of processor time and resources, and it takes quite a while to work through every random iteration you’d like to test just to see what happens with the application.
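Here’s a minimal sketch of that idea, just to show the shape of a fuzzing loop. It throws random bytes at Python’s built-in JSON parser and records anything unexpected; real fuzzers generate input far more intelligently, but the feed-input-and-watch-for-exceptions cycle is the same.

```python
import json
import random

# A toy fuzzing loop: generate random byte strings, hand them to the
# function under test, and record anything other than a clean result
# or an expected, handled error.
def fuzz_json_parser(iterations: int = 10_000, seed: int = 0) -> list:
    rng = random.Random(seed)
    findings = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 64)))
        try:
            json.loads(data)
        except ValueError:
            pass                      # expected: the parser rejected bad input cleanly
        except Exception as exc:      # any other exception type is a finding
            findings.append((data, exc))
    return findings

if __name__ == "__main__":
    print(f"unexpected failures: {len(fuzz_json_parser())}")
```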

If you’d like to run some fuzzing tests yourself, you can download a virtual machine from Carnegie Mellon. Their Computer Emergency Response Team, or CERT, publishes the Basic Fuzzing Framework, or BFF. You can download it from professormesser.link/bff.

Here’s the Carnegie Mellon fuzzer at work. You can see that it’s feeding random input into this application and evaluating the results of that fuzzing.

Now that we’ve performed tests of the source code and of the running application to look for security problems, let’s try increasing the load on the application to see what happens. We can do this with a stress testing tool, one that can physically or virtually simulate anywhere from one user to thousands of users, all using the application simultaneously. Once you start to hit the limits of what an application is capable of, you start to see unintended results. You may get error messages. Application and version information that you never intended to show the user may now be displayed on the screen. Or the application may simply crash, dumping kernel and memory information to the screen.

There are many different options available to perform this stress testing function. Some of them automate individual workstations that already exist in your user community. Others simulate large workstation loads without needing the physical workstations at all. But in all of these cases, you get extensive reporting on response times and on how the stress test affected the application.
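If you wanted to see the idea without a commercial tool, a very rough sketch might look like the following. The URL and user counts are placeholders, and a real stress testing product would add ramp-up schedules, realistic user behavior, and much better reporting.

```python
import concurrent.futures
import time
import urllib.request

# A bare-bones load generator: simulate many users hitting the same URL
# at once and report how response times and errors change as concurrency grows.
def stress(url: str, concurrent_users: int, requests_per_user: int = 10) -> None:
    def one_user() -> list:
        timings = []
        for _ in range(requests_per_user):
            start = time.perf_counter()
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    resp.read()
                timings.append(time.perf_counter() - start)
            except Exception:
                timings.append(None)          # count failed requests separately
        return timings

    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        futures = [pool.submit(one_user) for _ in range(concurrent_users)]
        results = [t for f in futures for t in f.result()]

    ok = [t for t in results if t is not None]
    errors = len(results) - len(ok)
    avg = sum(ok) / len(ok) if ok else float("nan")
    print(f"{concurrent_users:>4} users: avg response {avg:.3f}s, {errors} errors")

if __name__ == "__main__":
    for users in (1, 10, 100):                    # ramp up the simulated load
        stress("http://localhost:8080/", users)   # hypothetical test endpoint
```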

When you’re developing an application, it’s common to have a sandbox available for code testing. There’s also a sandbox you can create during the testing process, and it’s very different from the one you were using during development. For testing, you build a sandbox that looks very similar to what you run in production. You aren’t using any production systems, and you aren’t touching any production data, but everything else looks and feels as if it’s running in a production environment. This means your quality assurance team can perform overload testing, stress testing, fuzzing, or anything else they want to do to test the capabilities of this application without worrying that they’ll bring down any production systems.
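One common way to build that kind of sandbox, sketched below, is to have the application read its endpoints from configuration so the QA environment keeps the same shape as production while pointing at isolated systems. The environment names, variable names, and URLs here are hypothetical.

```python
import os

# Hypothetical configuration: the sandbox has the same structure as
# production, but every endpoint points at isolated, non-production systems.
ENVIRONMENTS = {
    "production": {
        "database_url": "postgres://db.internal.example.com/app",
        "payment_api":  "https://payments.example.com/v1",
    },
    "sandbox": {
        "database_url": "postgres://db.sandbox.example.com/app",
        "payment_api":  "https://payments-sandbox.example.com/v1",
    },
}

def load_config() -> dict:
    # Default to the safe environment; production must be selected explicitly.
    env = os.environ.get("APP_ENV", "sandbox")
    return ENVIRONMENTS[env]

if __name__ == "__main__":
    print(load_config())
```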

Once you reach the end of the development cycle, it’s time to bring everything around full circle and perform verification and validation. At the beginning of this process, you ideally started with a set of objectives, and from those objectives you built this new application. So the first thing you might want to do is perform a verification of the software. Is the software working properly? Are there any bugs that need to be addressed in the code? And is everything being built properly, so that the application is performing as expected?

From a broader perspective, there is also validation that needs to occur. We know there were requirements created at the very beginning of the project. Is this application meeting the requirements that were originally set? Is this the correct product that was originally intended to be created? Both verification and validation are important. We need to make sure not only that the application is performing the way it should, but that it’s the correct application that should have been created in the first place.
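As a small illustration (the shipping function and the requirement number are invented), verification might look like unit tests confirming the code behaves correctly at its boundaries, while validation ties a test directly back to one of the original requirements.

```python
def shipping_cost(order_total: float) -> float:
    return 0.0 if order_total >= 50 else 5.99

def test_verification_boundary_conditions():
    # Verification: did we build it right? The code works at its edge cases.
    assert shipping_cost(49.99) == 5.99
    assert shipping_cost(50.00) == 0.0

def test_validation_requirement_3_1_free_shipping():
    # Validation: did we build the right thing? Hypothetical requirement 3.1
    # said orders of $50 or more ship free.
    assert shipping_cost(75.00) == 0.0

if __name__ == "__main__":
    test_verification_boundary_conditions()
    test_validation_requirement_3_1_free_shipping()
    print("verification and validation checks passed")
```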

A lot of the software we run in our operating systems is compiled code. This is when the developer has taken all of the source code and compiled it into an executable. Once that occurs, you don’t get to see the source code; you’re only provided with the single executable. All of this code has been compiled for a very specific operating system and CPU, so you have to make sure that when you compile the software, it matches the platform the end user is expecting.

During compilation, the compiler will tell you if there are any logical errors or bugs in the software that need to be corrected. You can then resolve them, recompile, and provide the end user with, hopefully, a more bug-free application.

Sometimes the software you’re using is not compiled code; instead, it’s runtime code. For example, you may have purchased a PHP-based application, and PHP is runtime code. The source code for this runtime application is viewable, and the instructions execute when the application runs. That means you don’t have the luxury of a compiler that can check for logical problems in the software ahead of time.

If there are any bugs in this application, you’ll find them while the runtime code is executing. That’s different from compiled code, where logical and syntactical problems could be found before the software was ever provided to the end user.
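As a quick illustration, here’s a hypothetical Python example rather than PHP, since the idea is the same for any runtime code: the misspelled variable below would stop a compiled program from ever building, but here it only surfaces when a user finally exercises that branch.

```python
# With runtime (interpreted) code, nothing checks this file before it runs.
# The bug below lives in a branch that may not execute for weeks; a compiler
# would have rejected the undefined name, but here the error only appears
# when someone finally takes this path.
def apply_discount(price: float, is_member: bool) -> float:
    if is_member:
        return price * 0.9
    else:
        return prise        # NameError, only discovered when a non-member checks out

print(apply_discount(100.0, True))    # works fine; the bug stays hidden
print(apply_discount(100.0, False))   # crashes at runtime with a NameError
```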