Containerization Technology Defined

Jul 18, 2016

Written by Nenad Rokic, Technical Systems Architect at HighVail Systems Inc.

Recently, I attended the DockerCon conference in beautiful Seattle, focusing my attention on new and emerging trends in IT. In the short span of two or so years, this event has gained a tremendous amount of popularity. Docker is the world’s leading containerization platform, and containerization technology promises new ways to build and distribute applications, and to shape IT operations, in the future. The new world of IT has seen quite an influx of young people who see and seek the immediate benefits of adopting this model. IT companies are racing to set standards for others to follow; containerization is one of the fastest-growing technology adoptions in IT. So, what does that mean for people in the industry such as you or me? And what is a container, anyway? Let me try to explain some of the mystery, at least in the way that I understand it.

The easiest way to understand the technology is through an everyday analogy: think of a standard shipping container loaded with multiple goods, all packaged separately. The container is sealed until it reaches its final destination, and is then forklifted onto different modes of transportation to reach the customer or consumer. Do we care how the goods inside the container are stacked or packaged? No, we don’t. But we do care how quickly that container is delivered. We care about quality and agility. What if we need, say, ten, one hundred, or even one thousand containers?

To turn this analogy to the IT world: Docker is a platform for developing, shipping, and running applications using container technology. Instead of the monolithic, pack-it-all-into-a-single-basket approach to application development, Docker containers enable developers to build agile, distributed applications, declaring everything the application needs to run somewhere else or to connect to other containers. The container knows what libraries it needs, how its network connects to other containers, and whether it depends on other containers or services. Once packaged and ready to ship (using something like Docker Compose, for example), the application is “forklifted” into a private trusted registry or a public hub such as Docker Hub, where it becomes available for consumption as part of a catalogue. Automation tools enable the user to build multi-container apps quickly and fairly easily: define the YAML file, and off you go.
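As a rough sketch of what that declarative packaging looks like, here is a minimal Compose file for a hypothetical two-container application, a web front end plus a Redis cache. The service names, image tags, and ports are illustrative assumptions, not anything from a real project:

    version: "2"
    services:
      web:
        build: .              # build the web image from the Dockerfile in this directory
        ports:
          - "8080:80"         # publish container port 80 on host port 8080
        depends_on:
          - cache             # declare a dependency on the cache service
      cache:
        image: redis:3.2      # pull a stock Redis image from Docker Hub

Running docker-compose up against a file like this builds the web image, pulls the Redis image, wires the two containers onto a shared network, and starts them together.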

Containerization uses the kernel of the host operating system to run multiple root file systems. Each of these root file systems is called a container, and each container has its own memory, processes, devices, and network stack. This gives containers some obvious benefits over VMs (a short illustration follows the list):

  • Containers are lightweight.
  • No need to install a guest OS.
  • Less strain on physical infrastructure, CPU, and memory.
  • Greatly improved utilization and density.
  • Portability—built in one place, consumed in another.
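As a quick, hedged illustration of that kernel sharing (using the small stock Alpine image purely as an example), you can compare what the host and a container report:

    # the container reports the same kernel release as the host,
    # because it shares the host kernel instead of booting its own
    uname -r
    docker run --rm alpine uname -r

    # inside the container, however, the process table is isolated:
    # only the container's own processes are visible
    docker run --rm alpine ps

A container like this starts in seconds; the equivalent virtual machine would need a full guest OS to be installed and booted first.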

At the heart of this technology is something called the Docker Engine. The Docker Engine is essentially a program that enables containers to be distributed and run, using cgroups and namespaces to isolate workloads. Like most distributed solutions, it is based on a client/server architecture. The application is packaged with all its dependencies into an image, aka the container template, which is built by a developer (me or you) for someone else to consume. The image is stored in Docker Hub or another registry, much as source code is stored in a repository like GitHub. Think of it as an executable file: you download it to a platform, a laptop, or a server in order to run it. As long as you have the Docker Engine installed, you will be able to run what was packaged for you.
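As a minimal sketch of that client/server workflow on a machine with the Docker Engine installed (the nginx image from Docker Hub is used here only as an example):

    # the docker client asks the engine to pull the image from Docker Hub
    docker pull nginx:latest

    # run a container from that image, publishing container port 80 on host port 8080
    docker run -d --name web -p 8080:80 nginx:latest

    # list the running containers managed by the engine
    docker ps

The docker command is just the client; the engine does the pulling, unpacking, and running behind its API, whether it sits on your laptop or on a remote server.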

Now, operations can take over managing applications and images, responding in real time to demand and building a meaningful lifecycle in collaboration with developers. Depending on who you ask, you may get different answers on what to use for orchestration, deployment, and lifecycle automation. What would be the best choice? Should you use Kubernetes, OpenShift with Ansible Tower, DC/OS, Apache Mesos (Mesosphere), Apcera, or Docker Cloud (with Docker Swarm)? The answer is that it depends. This article will not go into detail on how and why one should be picked over another; there are a number of tools that can help with that decision. The best advice is to start with one (preferably open source) that you can evaluate to see if it fits your needs. The latest version of Swarm (the clustering technology for Docker containers), for example, is closely integrated with the Docker Engine and offers a seamless user experience. A large number of enterprise vendors innovate with Docker containers, most notably Red Hat, while contributing significantly to the community.
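To give one small, hedged example of what that integration looks like, the swarm mode built into recent Docker Engine releases can turn an engine into a cluster node and run replicated services. The service name, image, and replica counts below are placeholders:

    # initialize a single-node swarm on this engine
    docker swarm init

    # create a service that runs three replicas of the example image
    docker service create --name web --replicas 3 --publish 8080:80 nginx:latest

    # scale the service up or down as demand changes
    docker service scale web=5

    # list the services running in the swarm
    docker service ls

The same commands apply once more engines join the swarm; the engine then spreads the replicas across the cluster for you.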

As you set out on your journey to the cloud, system integrators and consultants with experience, like HighVail, will be great sources of information. Containerization is only one part of a DevOps evolution, and HighVail can help you in your transformation. We have partnerships with many of the key emerging players in the DevOps space, and open source is what we do every day. We’re certain that we can help you decipher much of what appears to be a daily onslaught of “the next great tool,” come to the right decisions, select the right technology, and be your partner for tomorrow’s success. For more information, please contact us at (416) 867-3000, or at info@highvail.com. You can also visit us on Facebook, LinkedIn, Twitter @HighVail, and Instagram @HighVailSystems.

– Nenad Rokic

Technical Systems Architect at HighVail Systems Inc.