Designing the Cloud – SY0-601 CompTIA Security+ : 2.2

Cloud technologies have added new capabilities to application deployments and modularized applications. In this video, you’ll learn about virtual machines, application containerization, microservices, serverless architectures, and more.

The cloud has forever changed how we deploy applications. We can click a button and have access to amazing amounts of computing power. The cloud also provides elasticity. We can deploy new application instances to handle the load, and then as that load decreases, we can remove those instances.

We can also provide access to our applications from anywhere in the world, using these cloud computing technologies. To provide this functionality, we have to spend a lot of time planning how our applications will be developed, and how we can deploy them in this cloud-based infrastructure.

One place to start with these cloud technologies is to take all of the systems that would normally run on our desks and run those systems in the cloud. We’re able to do this by using a thin client.

Instead of having a full-blown computer, we can simply have one that provides just enough computing power to connect to a desktop that is running in the cloud. You’ll sometimes hear this referred to as Virtual Desktop Infrastructure or VDI, or if it’s provided through a third-party cloud service, as Desktop as a Service or DaaS.

What you would be running locally is a single device that allows us to connect a keyboard, a mouse, and a monitor, and that device then connects to our desktop that’s running in the cloud. That means this device doesn’t need a high-end CPU or a lot of memory, because the application is running in the cloud.

We just need a system with enough power to provide us with the remote desktop view of what’s happening with those desktops running on the cloud service. There’s obviously an important network connection here, and we need to be sure we have the bandwidth and the speed to support running our desktop in the cloud.

The services that we’re using in the cloud are often running on many different operating systems, but all of those operating systems may be executing on a single physical piece of hardware. This is virtualization, and it allows us to run many different operating systems on the same physical device.

If we were to look at a block diagram of this virtualization, it starts with the hardware itself. So we have the infrastructure in place, and on top of that we’re running a piece of software called a hypervisor. The hypervisor is the management software that manages all of the different operating systems running on this computer.

And on top of the hypervisor are these guest operating systems. You might have one virtual machine with an operating system and an application, a separate virtual machine with another guest operating system and application, and a third virtual machine with yet another guest operating system and application.

You can see that we’re having to run an entire guest operating system for every virtual machine. This requires additional CPU, additional storage, and additional memory for each virtual machine, even if we’re running similar guest operating systems for each of these applications.

From this perspective, we consider virtualization to be relatively expensive because of the resources required to have each separate operating system running simultaneously. But what if we could run these applications with a single operating system, instead of having separate guest operating systems for each one?

And that’s exactly what we do by using containerization. Application containerization means that we’d still have our physical infrastructure and a single host operating system, and then we would use some type of container software, such as Docker, to run multiple applications simultaneously in their own separate sandboxes, without a separate operating system for each one of those.
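
To make this a bit more concrete, here’s a minimal sketch of launching two isolated containers from Python. It assumes the Docker SDK for Python (the docker package) and a local Docker daemon; the alpine image and echo commands are just illustrative placeholders.

import docker

client = docker.from_env()  # connect to the local Docker daemon

# Run two self-contained applications as separate containers. Each gets its
# own isolated filesystem and process space, but both share the one host
# operating system kernel rather than booting a full guest OS.
output_a = client.containers.run("alpine", 'echo "hello from application A"')
output_b = client.containers.run("alpine", 'echo "hello from application B"')

print(output_a.decode().strip())
print(output_b.decode().strip())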

Each one of these applications is self-contained. So everything you would need to run application A is in the application A container. These applications also can’t interact with each other, because they have no idea that those other applications are there. Application A’s container has no idea that any of these other containers are running at the same time.

These application containers are also in a standard format. So we could take this container off of this particular system, and move it to any other system to be able to create additional instances for this application.

Deploying an application this way is also relatively lightweight, because you don’t need to deploy a separate operating system every time you want to deploy a new application. Instead, you’re using the kernel of the operating system that’s already there and simply deploying containerization software on top of it, which then runs the separate applications. This allows us to have a very streamlined container and allows modularity between different systems.

Here are both virtualized and containerized applications next to each other. You can see that with the virtualized applications we have a separate guest operating system for each one of those applications. With containerized applications, there’s a single host operating system, and we can deploy separate application containers on top of that.

Many of the applications we use from day to day are monolithic: one very large application that is built on a single codebase. This application does everything using the enormous amount of code that has been programmed into that app.

Everything within this application is self-contained in the app. So everything associated with the user interface, with moving data into and out of the application, and any business logic is all contained within the same codebase. As you can imagine, having a very large codebase that handles all of these different functions creates additional complexity. It also creates complexity when you need to upgrade or update just one part of the application.

There’s no method in these monolithic applications that allows you to update just a single feature. Instead, you have to replace the entire codebase to be able to use those new functions. It would be much more efficient if we could take each of these individual pieces of the application and break them out into separate services.

That’s exactly what we’ve done with the microservice architecture, which uses APIs, or Application Programming Interfaces, to break up the application into individual services. These are microservices. There’s usually an API gateway that manages the communication between the client we’re using on our systems and all of those different functions built into the application.

There might be multiple databases or a single shared database that can be accessed through this API gateway. If we need to add new features to the application, we can simply add new microservices into this view, and if we need to increase the scalability of an application, we only need to scale up the microservices that are being used the most. This also means that if certain microservices become unavailable, the entire application doesn’t fail, only that particular service within the application.
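
As a rough illustration, here’s a minimal sketch of one microservice that an API gateway could route requests to, written with Flask. The inventory route, port, and sample data are illustrative assumptions rather than anything specified in the video.

from flask import Flask, jsonify

app = Flask(__name__)

# A small stand-in for the data this one service owns; in a monolith this
# logic would be buried inside one large codebase.
INVENTORY = {"widget": 42, "gadget": 7}

@app.route("/inventory/<item>")
def get_inventory(item):
    # The API gateway would route /inventory/* requests to this service.
    return jsonify({"item": item, "on_hand": INVENTORY.get(item, 0)})

if __name__ == "__main__":
    app.run(port=5001)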

And this segmentation allows you to have much tighter control of data security, since you can limit which microservices have access to different types of data. If an organization needs to roll out additional inventory tracking services, it can increase the number of inventory microservices available.

And if there’s very limited use of a report writing service, we might only need a single microservice to manage report writing. If any one of those needs to be changed, updated, or have new features added, you only need to change the microservice associated with those particular features.

We can expand on this segmentation of functions using a serverless architecture. This allows us to take the operating system completely out of the equation, and instead perform individual tasks based on the functions that are requested by the application.

The developer of the application will then take each individual function of that application and deploy it into what we call a stateless compute container. These compute containers are simply there to process and respond to our API requests. So our application will send an API request to the compute container, and the results of that API request are sent back to the client.

This allows us to have compute containers that are only available as we need them. So as people are doing inventory management, we may have a lot of inventory compute containers that are being built and torn down as people are accessing those services. If nobody is using any of those inventory features of the application, then you don’t have to keep a separate server running and maintained for something that’s no longer in use.

If a user does need to perform an inventory function, we can spin up an individual compute container, perform that request, and then disable that compute container, meaning that it is ephemeral or temporary in use. These containers might run for a single event, and once that event is done, the container disappears. It’s very common to have the serverless architecture running at a third party. So that third party would be in charge of the security of the data and the applications used for this app.
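
Here’s a hedged sketch of what one of these single-purpose functions might look like, written in the AWS Lambda handler style. The event fields and the inventory logic are illustrative assumptions.

import json

def lambda_handler(event, context):
    # Runs only when a request arrives; the stateless compute container that
    # executes this function is ephemeral and is torn down afterward.
    params = event.get("queryStringParameters") or {}
    item = params.get("item", "unknown")
    return {
        "statusCode": 200,
        "body": json.dumps({"item": item, "status": "inventory checked"}),
    }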

It’s very common for organizations to build cloud-based services that run at a third-party cloud service provider but are only for internal use. We would commonly separate these into Virtual Private Clouds or VPCs. A VPC is a pool of application instances contained within one single VPC container. As we build more applications, we might build more VPCs to contain those applications, effectively creating separate clouds for each of those apps.

The challenge we have is getting all of our users, wherever they might be, access to the application instances that are running in each of these virtual private clouds. The way we provide that access is through something called a Transit Gateway. You can think of this Transit Gateway as a router in the cloud. This provides us with the connectivity we need to connect all of our users.

Normally, we would have the users connect into this Transit Gateway using some type of virtual private network connection. This means that we could have users at home, or in our offices, connect through this Virtual Private Network into the Transit Gateway, where they would then have access to all of the application instances running on the multiple Virtual Private Clouds.
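
As a hedged sketch of how this might be scripted in AWS, the following uses boto3 to create a Transit Gateway and attach two VPCs to it. All of the resource IDs are placeholders, and the VPN attachment for remote users is left out for brevity.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the Transit Gateway -- the "router in the cloud".
tgw = ec2.create_transit_gateway(Description="hub for our VPCs and VPN users")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Attach each application's VPC so traffic arriving over the VPN can reach
# the application instances inside it.
vpc_attachments = [
    ("vpc-0aaa1111bbb22222a", ["subnet-0aaa1111bbb22222a"]),
    ("vpc-0ccc3333ddd44444b", ["subnet-0ccc3333ddd44444b"]),
]
for vpc_id, subnet_ids in vpc_attachments:
    ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw_id,
        VpcId=vpc_id,
        SubnetIds=subnet_ids,
    )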

When you start deploying these different cloud-based applications, there’s a need to protect the data that’s associated with these applications, and a need to protect the applications individually. We commonly do this using resource policies in our cloud services. For example, in Microsoft’s Azure cloud, you can configure a resource policy that would specify which resources may be provisioned by an individual user.

This would limit the functionality a certain user would have for building out and creating new services within the Azure cloud. So a user might be allowed to create a service in a particular region of the Azure cloud, say the North America region, while the policy denies any type of service being built anywhere else in the Azure cloud.
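
For illustration, here’s a hedged sketch of what such an “allowed locations” rule looks like, expressed as a Python dictionary that mirrors the Azure Policy JSON structure; the region names are placeholders.

allowed_locations_rule = {
    "if": {
        "not": {
            "field": "location",
            "in": ["northcentralus", "centralus"],  # regions this user may use
        }
    },
    "then": {"effect": "deny"},  # block provisioning anywhere else
}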

A resource policy in the Amazon cloud would allow someone to specify a particular resource and then determine which actions are permitted for that individual resource. For example, you might allow access to an API gateway from a particular IP address range but deny access from any other IP addresses.
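
Here’s a hedged sketch of that IP-range example as an IAM-style resource policy on an API gateway, written as a Python dictionary; the ARN and address range are placeholders.

import json

api_gateway_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "arn:aws:execute-api:us-east-1:123456789012:a1b2c3d4e5/*",
            # Only requests from this address range are allowed in.
            "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
        }
    ],
}

print(json.dumps(api_gateway_policy, indent=2))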

And another resource policy feature of the Amazon cloud would be to allow a list of users access to a particular resource. So you could specify the individual users who can access an application instance, and deny access to anyone else.

As you can imagine, organizations that are rolling out these cloud services may be deploying services from multiple cloud providers simultaneously. Some application instances might be running in the Azure cloud, other application instances in the Amazon cloud, and yet another set of instances in the Rackspace cloud. There needs to be some way to consolidate the view of all of these different services into one single management interface.

To be able to do that, we use Service Integration and Management or SIAM. This is the natural next step when you begin deploying these different application instances to multiple providers. This is called multi-sourcing, and it’s a way to ensure that your application will stay up and running and available, regardless of the status of any individual cloud provider.

The problem is that the Azure cloud, the Amazon cloud, and the Rackspace cloud, all work very differently. It’s a different process to deploy application instances. It’s a completely different process to monitor those instances.

A Service Integration and Management Console would allow you to bring all of those service providers into a single view and manage them from one single interface. This makes the management process much simpler for organizations that need to deploy and monitor these cloud-based applications.

And of course, this is a constantly changing playing field, with different methods of deploying applications and different service providers. So your Service Integration and Management Console will be able to bring all of that down into a single unified view, making it much easier to manage these applications on an ongoing basis.