Securing Compute Clouds – SY0-601 CompTIA Security+ : 3.6

Our compute cloud instances are the core of a cloud-based application instance. In this video, you’ll learn about security groups, dynamic resource allocation, VPC endpoints, and container security.


When we’re creating our cloud-based applications, we need some components that will perform the actual calculations. These are our compute cloud instances. A good example of this would be the Amazon Elastic Compute Cloud, often referred to as EC2. In Google Cloud, you have the Google Compute Engine, or GCE. And in Azure, you have Microsoft Azure Virtual Machines.

We commonly manage these compute instances by launching a virtual machine, or perhaps launching a particular instance as a container. We can allocate additional resources to that compute instance. And when we’re done using it, we can disable it or remove it completely from the cloud.

One common way to manage access to these compute instances is to control network connectivity using security groups. It’s common to have a firewall that sits just outside of the compute instance, and you can use it to control what traffic is inbound and outbound from that instance.

Since this is a firewall, we can commonly control access based on a TCP or UDP port number. This would be something working at OSI layer 4. Or, of course, we can use OSI layer 3, which would be an IP address, either an individual IP address or an entire block of addresses. You can usually add these to the firewall using CIDR block notation. And, of course, we can manage both IPv4 addressing and IPv6 addressing.
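The layer 3 and layer 4 matching described above can be sketched in a few lines of Python. This is a simplified illustration of how an inbound security group rule set might be evaluated, using a made-up rule format; a real cloud platform evaluates these rules inside its own infrastructure.

```python
# Sketch: evaluating inbound security group rules (hypothetical rule
# format). Security groups default-deny anything that doesn't match.
import ipaddress

def packet_allowed(rules, src_ip, dst_port, protocol="tcp"):
    """Return True if any inbound rule permits this packet."""
    for rule in rules:
        if rule["protocol"] != protocol:
            continue
        # Layer 4: is the destination port inside the rule's range?
        if not (rule["from_port"] <= dst_port <= rule["to_port"]):
            continue
        # Layer 3: does the source address fall in the CIDR block?
        # The ipaddress module handles both IPv4 and IPv6 notation.
        if ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["cidr"]):
            return True
    return False  # no rule matched: deny

rules = [
    {"protocol": "tcp", "from_port": 443, "to_port": 443, "cidr": "0.0.0.0/0"},
    {"protocol": "tcp", "from_port": 22,  "to_port": 22,  "cidr": "10.0.0.0/8"},
]

print(packet_allowed(rules, "203.0.113.9", 443))  # HTTPS from anywhere
print(packet_allowed(rules, "203.0.113.9", 22))   # SSH from outside 10/8
```

Notice that changing `"10.0.0.0/8"` to a single address like `"10.1.2.3/32"` narrows the SSH rule to one host, which is exactly the individual-address-versus-block choice described above.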

These are the network security configuration settings for an EC2 instance. You can see that there are a number of different criteria you can choose for the firewall. You can specify your own TCP or UDP port number, or specify all TCP or all UDP. And there may be individual applications that are already pre-configured, where you simply select one to allow or disallow that traffic.

These computing resources can be created on demand so that they’re only active when they’re needed. And then you can disable or deprovision them once the work is complete. This means that as the demand on the application increases, you can provision additional resources automatically to be able to cover the additional load.

You can even automate this process so that you are constantly monitoring the load of the application. And if the application needs more resources, they can automatically be provisioned. And when the load decreases and the number of people accessing that application slows down, you can deprovision those resources.

We refer to this as rapid elasticity. And it’s a very common way to maintain the uptime and availability of your cloud-based applications. This also allows you to minimize how much you’re paying for the application, because you commonly pay for these resources as you use them. That means when the application is busy, you pay a little bit extra to have more resources available. And when the load on the application decreases, you can remove some of those resources so that you’re not having to pay for them.

It’s common to combine this dynamic resource allocation with some type of monitoring system that can identify when the load increases and when it decreases. For example, you may want to monitor CPU utilization. And when that utilization reaches a certain amount, you can provision additional resources to spread that load across multiple compute instances.
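As a sketch of that decision logic, here is a minimal threshold-based scaler in Python. The thresholds, limits, and function name are illustrative assumptions, not any particular cloud provider’s defaults.

```python
# Sketch: a simple threshold-based scaling decision, the kind a cloud
# autoscaler applies automatically as it monitors CPU utilization.

def desired_instances(current, cpu_percent, scale_up_at=70, scale_down_at=30,
                      min_instances=1, max_instances=10):
    """Add an instance under heavy load, remove one when load drops."""
    if cpu_percent > scale_up_at and current < max_instances:
        return current + 1   # provision another compute instance
    if cpu_percent < scale_down_at and current > min_instances:
        return current - 1   # deprovision so you stop paying for it
    return current           # load is in the comfortable range

print(desired_instances(2, 85))  # busy: scale out
print(desired_instances(3, 10))  # quiet: scale in
```

A real autoscaler would also average the metric over a time window and wait out a cooldown period between changes, so a momentary spike doesn’t cause the instance count to thrash.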

Sometimes it’s useful to have very granular security controls and be able to manage exactly what type of traffic is flowing across your cloud configuration. This means you can identify what’s in the data of the application flows and make security decisions based on that.

For example, your organization might be using box.com for sharing files in the cloud. And if users are storing data in the corporate file share, you may have policies that would allow personal information to be stored in that share. And you may allow any department to put information into the corporate file share.

But there may be other shares that require additional security. For instance, there may be a personal file share with policies that are already pre-configured by your corporate headquarters, which would allow you to store graphics files in your personal file share but would deny any spreadsheets.

It might also prevent you from storing any files that would have sensitive information, such as credit card numbers. And there may be security monitoring for these files so that if anybody does try to store this information, an alert is sent to your security team.
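The kind of policy described above can be sketched as a simple content check. The blocked file extensions and the card-number pattern here are simplified, hypothetical examples; a production data loss prevention system is far more sophisticated and validates numbers before alerting.

```python
# Sketch: a file-share policy check like the one described above.
# Deny spreadsheets, and flag anything that looks like a card number.
import re

BLOCKED_EXTENSIONS = {".xls", ".xlsx", ".csv"}       # no spreadsheets
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){15}\d\b")  # 16 digits, loosely

def upload_allowed(filename, contents):
    """Return (allowed, reason) for a proposed file upload."""
    name = filename.lower()
    if any(name.endswith(ext) for ext in BLOCKED_EXTENSIONS):
        return False, "spreadsheets are not allowed in this share"
    if CARD_PATTERN.search(contents):
        # A real system would also send an alert to the security team.
        return False, "possible credit card number detected"
    return True, "ok"

print(upload_allowed("logo.png", "image data"))
print(upload_allowed("budget.xlsx", ""))
print(upload_allowed("notes.txt", "card: 4111 1111 1111 1111"))
```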

Application developers can build cloud-based systems that might be designed for internal use only. That means your internal employees would have applications that they’re able to use, and the data being accessed by those applications is private data that’s not available to other people on the internet.

It’s very common to have this private application stored in the cloud, and to have the private data stored in a file share that’s in the cloud as well. To allow access between both of those private locations, so that the application can access the data and vice versa, you need a VPC (virtual private cloud) gateway endpoint. This allows you to have private access between the application instance and the data, and restrict access from anyone else.

This also means that you don’t need internet connectivity just to be able to access these applications. To make that work, we add another component between our application instance and the data that we’re storing. That additional component is a VPC endpoint.

Connectivity between different components is a bit easier when the internet happens to be in the middle of the conversation. If this is a public application, then there would be a public subnet with virtual machines or other application instances, and a gateway that provides access to the internet. That effectively allows the virtual machine to communicate with the cloud storage, because the cloud storage is also on the internet. This makes it very easy for your application to reach the data it needs inside of a bucket on that cloud storage, because you have this internet connectivity in the middle.

But if this application instance is on a private subnet and needs access to this data, we don’t have an internet connection in the middle to allow that. So we’re going to add a VPC endpoint. This gives us connectivity between a private subnet and another service in the cloud, in this case, a storage network, so that our application can access that data even though there’s no public internet connection in the middle.
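Conceptually, the VPC endpoint shows up as a route in the private subnet’s route table. The sketch below models that routing decision; the address ranges and next-hop names are hypothetical (real platforms typically use managed prefix lists for storage services, and the private subnet deliberately has no default route to an internet gateway).

```python
# Sketch: longest-prefix-match routing for a private subnet. Storage
# traffic goes to the VPC endpoint; everything else has no internet path.
import ipaddress

ROUTES = [
    ("10.0.0.0/16",     "local"),         # traffic inside the VPC
    ("198.51.100.0/24", "vpc-endpoint"),  # storage service prefix (example)
]

def next_hop(dst_ip):
    """Pick the most specific route that matches the destination."""
    addr = ipaddress.ip_address(dst_ip)
    matches = [(net, hop) for net, hop in ROUTES
               if addr in ipaddress.ip_network(net)]
    if not matches:
        return "unreachable"  # no 0.0.0.0/0 route on a private subnet
    return max(matches, key=lambda m: ipaddress.ip_network(m[0]).prefixlen)[1]

print(next_hop("198.51.100.7"))  # storage traffic -> vpc-endpoint
print(next_hop("8.8.8.8"))       # arbitrary internet host -> unreachable
```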

Instead of using separate virtual machines for every application instance that we would like to deploy, these days, we commonly use containers. But just because we’re removing a VM component from our applications doesn’t mean that our application is necessarily more secure. Our containers have similar security concerns as any other application deployment method.

There might be bugs in the application that would introduce vulnerabilities. You could have missing or insufficient security controls built into the application, which would allow others to have access to the data. Or there might be misconfigurations of the application, which could allow others access to the data that’s not intended.

If you’re running containers on top of an existing operating system, it might be a good idea to instead use an OS that is specifically built for containerization. This might be a minimal or hardened operating system that is specifically designed to run containers on top of it.

And we might want to group our containers together so that containers running similar services are contained within a single host, and another host contains a different type of service. That allows us to focus our security posture on the specifics of the service running in those containers. And by not mixing different types of containers on the same host, we may be able to limit the scope of any intrusion.
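That grouping idea can be sketched as a simple placement function: every container lands on a host dedicated to its service type, so a compromise of one host only exposes one kind of service. The container and host names here are made up for illustration.

```python
# Sketch: grouping containers so each host runs only one service type,
# limiting the blast radius of an intrusion on any single host.
from collections import defaultdict

def place_containers(containers):
    """Assign each (name, service) container to a per-service host."""
    hosts = defaultdict(list)
    for name, service in containers:
        # One host per service: web containers never share a host
        # with database containers, and so on.
        hosts[f"{service}-host"].append(name)
    return dict(hosts)

containers = [
    ("web-1", "web"), ("web-2", "web"),
    ("db-1", "database"),
    ("cache-1", "cache"),
]
print(place_containers(containers))
```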