What do you think of when you hear the word "containers"?
You probably think of a container that holds goods and gets shipped from one location to another. You are right! That is exactly what a container is. The goods get loaded into a container, the container is loaded onto a ship, and it is sent to its destination port. There it gets unloaded, attached to an eighteen-wheel truck, and driven to its customers. This basic formula has been followed by many businesses, and it has worked for decades.
But what does that have to do with information technology (IT)?
Well, if you think about it for a moment, what happens when an application is developed? Most application development follows the process below:
1. An application developer writes the code. (The development process that builds the product.)
2. The developer compiles the code and checks it in to a version control system. (A complete product, ready to ship.)
3. A build master builds the package in Jenkins, TFS, or another build system. (The compiled code is wrapped into a package, the final product to ship to the selected environments.)
4. The package is deployed to an environment such as DEV, Testing, Staging, or PROD, either manually or through an automated process. (A release or Ops team deploys it to the environment it is requested to go to. A final destination!)
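As a concrete sketch, the compile-and-package steps above might look like this on the command line (the app name, version, and file names here are purely illustrative):

```shell
# Hypothetical build-and-package step; names and version are illustrative.
mkdir -p build

# Stand-in for the compile step (a real build would invoke a compiler here).
echo "compiled application binary" > build/app.bin

# Wrap the compiled output into a single shippable package.
tar -czf app-1.0.0.tar.gz -C build app.bin

# The package is now ready to hand off to the release/Ops team.
tar -tzf app-1.0.0.tar.gz
```

The tarball plays the role of the shipping container here: one sealed artifact that moves unchanged from the build system to its destination environment.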
This basic process is followed throughout the application life cycle. Of course there will be some variation depending on your business needs, but at a high level the basic concept is always the same.
If you look at step 2, a developer compiles the code, which is then stored in a version control system. That is the "goods" that gets built into a package in step 3 and made available for the release or operations team to deploy to an environment where the application will be hosted and accessible to users. So step 4 is your vessel: it moves your package from the version control system to its final destination, where the code is deployed and made available for users to access.
So that is the high-level picture. Let's dig further into step 4.
What is needed to run an application?
- A machine, such as a virtual machine (VM) or a physical machine.
- A compatible operating system, such as Windows Server or Red Hat Linux.
- A platform that will run your application, such as IIS, Apache, or NGINX, if it is a web application. (A desktop or console application is a different story.)
- All the dependencies needed to run your application.
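All four requirements above can be declared in one container image definition. Here is a minimal sketch, assuming a static web app served by NGINX (the base image and paths are illustrative):

```dockerfile
# Base image supplies the minimal OS and the web platform (NGINX).
FROM nginx:alpine

# Copy the compiled application and its static assets into the image.
COPY ./dist /usr/share/nginx/html

# Any extra dependencies would be installed here, for example:
# RUN apk add --no-cache curl

# The platform listens on port 80 inside the container.
EXPOSE 80
```

Everything the application needs, OS layer, platform, and dependencies, travels together in the image, instead of being provisioned separately on each machine.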
The process above, from developing and building to deploying and running an application, is long and can get messy. Human error can also creep in if it is not followed correctly: the right dependencies are not available, something was not deployed correctly, and so on. There are many challenges here, but I will not go into them in depth in this article.
It is always good to understand how your current environment works and how it can be improved before getting into containers. Now that we understand how the process works for a non-container application, let's get started with containers!
What are Containers?
A container is a stripped-down, minimal OS environment. It contains the resources your application requires, such as IIS, Apache, or NGINX, plus the other dependencies needed to run your web application, all compiled into one container image.
A container shares the kernel with the main operating system (OS) running on the virtual or physical machine. The kernel provides access to the network, storage, and any other hardware resources your container needs.
A container runs your application as an isolated process that is not shared with other applications. It is OS-level virtualization: the container communicates with the main OS running on your VM or physical machine to acquire the resources it needs.
Let's take the example of a shared hosting (non-container) platform. You host your application with a provider such as GoDaddy or SiteGround. Your application is now accessible over the Internet, but you are not alone: other applications are running on that server alongside yours. In shared hosting, you do not get all the hardware resources your application requires; whoever gets there first gets the computing power. Nowadays, hosting providers will also throttle your application's processes so they can fit more apps on the same server, which can slow your application down, or take it offline entirely if the server goes down. There are other challenges too: an upgrade to the main OS can corrupt the dependencies your application relies on if the updates are not compatible. If the hosting provider is not careful, it can break your application without you ever knowing what happened.
On the other side, suppose you host your application on a dedicated virtual or physical machine. You will probably see good performance and have dedicated resources for your application, which is great. But you are unhappy because it costs a lot of money to run an application on dedicated resources, and most likely you are not fully utilizing the hardware that is available to your application.
Most corporations today run their applications on VMs with allocated hardware and scale the infrastructure as more resources are needed. But when you look at VM performance, you will probably see less than 50% resource utilization in most cases. So what happens to the free resources that are never used? They just sit there unused, and you probably paid a lot of money to make them available.
So, as I mentioned earlier, a container is OS-level virtualization: it is isolated from the main OS, you can configure how many resources your application needs, and you do not have to worry about the platform underneath your container. The orchestration engine behind a container cluster, such as Kubernetes or Docker Swarm, will decide where to place your container based on your needs.
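In Kubernetes, for example, you declare how many resources a container needs and the scheduler decides which node to place it on. A minimal sketch (the names, image, and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-web-app
spec:
  containers:
    - name: my-web-app
      image: myregistry/my-web-app:1.0.0
      resources:
        requests:          # minimum the scheduler must find on a node
          cpu: "250m"
          memory: "128Mi"
        limits:            # hard cap the container may not exceed
          cpu: "500m"
          memory: "256Mi"
```

The requests/limits split is what lets the orchestrator pack containers tightly onto hardware while still guaranteeing each one the resources it asked for.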
A container can run anywhere without major changes and without worrying about changes made to the main OS, because your container carries all of its dependencies, which are neither touched by nor shared with the main OS. When you do need to update your container, for example to fix a security vulnerability, you have full control: you make the update in your build process and verify that it works before it runs in the production environment.
A container is secure and does not share your resources with others, because it is an isolated process and it only accesses the resources it needs from the main OS.
Why are containers becoming so popular?
Containers have gained popularity by providing OS-level virtualization and by fully utilizing hardware resources. If the hardware is not fully utilized, the orchestration engine can schedule more containers onto it as needed.
You can scale your containers without going through the long process of creating a VM, allocating IP addresses, storage, and so on. The orchestration engine creates a new container that is an exact replica of the container image you built. This improves availability and lets you scale easily as needed.
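With Kubernetes, for instance, scaling is a one-line change rather than provisioning a new machine. A sketch of a Deployment excerpt (names and image are illustrative):

```yaml
# Raising replicas tells the orchestrator to run five identical copies
# of the same container image, each an exact replica of your build.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-web-app
spec:
  replicas: 5        # was 1; the engine starts four more copies
  template:
    spec:
      containers:
        - name: my-web-app
          image: myregistry/my-web-app:1.0.0
```

Compare that to scaling a VM-based application: requesting a new VM, waiting for the OS install, allocating an IP, and reinstalling dependencies before the new instance can serve traffic.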
Containers can also get dedicated resources on virtual or physical hardware, based on the configuration you request. Containers are secure and do not expose your application processes to others, because they are isolated.
Containers are easy to move from one platform to another. If you are running your container in Google Cloud or AWS and want to move to Azure, you can push your container image to Azure with minimal changes.