A book published in 1981, called Nailing Jelly to a Tree, describes software as “nebulous and difficult to get a firm grip on.” That was true in 1981, and it is no less true four decades later. Software, whether it is an application you bought or one that you built yourself, remains hard to deploy, hard to manage, and hard to run.
Docker containers provide a way to get a grip on software. You can use Docker to wrap up an application in such a way that its deployment and runtime issues—how to expose it on a network, how to manage its use of storage and memory and I/O, how to control access permissions—are handled outside of the application itself, and in a way that is consistent across all “containerized” apps. You can run your Docker container on any OS-compatible host (Linux or Windows) that has the Docker runtime installed.
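As a sketch of what that wrapping looks like in practice, here is a minimal Dockerfile for a hypothetical Node.js web app (the base image tag, file layout, and port are illustrative assumptions, not from this article):

```dockerfile
# Start from an official, versioned base image (assumed tag)
FROM node:18-alpine
WORKDIR /app

# Copy the dependency manifest first so the install layer can be cached
COPY package*.json ./
RUN npm install --production

# Copy the application code itself
COPY . .

# Document the port the app listens on; publishing happens at run time
EXPOSE 3000
CMD ["node", "server.js"]
```

Building and running it is the same pair of commands on any Docker host: `docker build -t myapp .` followed by `docker run -p 3000:3000 myapp`.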
Docker offers many other benefits besides this handy encapsulation, isolation, portability, and control. Docker containers are small (megabytes). They start instantly. They have their own built-in mechanisms for versioning and component reuse. They can be easily shared via the public Docker Hub or a private repository.
Docker images are also immutable, which has both security and operational benefits. Any change to a containerized application must be built and deployed as an entirely new, differently tagged image; running containers are never patched in place.
In this article we’ll explore how Docker containers make it easier to both build and deploy software—the issues containers address, how they address them, when they are the right answer to the problem, and when they aren’t.
Before Docker containers
For many years, enterprise software was typically deployed either on “bare metal” (i.e. installed on an operating system that has complete control over the underlying hardware) or in a virtual machine (i.e. installed on an operating system that shares the underlying hardware with other “guest” operating systems). Naturally, installing on bare metal made the software painfully difficult to move around and difficult to update—two constraints that made it hard for IT to respond nimbly to changes in business needs.
Then virtualization came along. Virtualization platforms (also known as “hypervisors”) allowed multiple virtual machines to share a single physical system, each virtual machine emulating the behavior of an entire system, complete with its own operating system, storage, and I/O, in an isolated fashion. IT could now respond more effectively to changes in business requirements, because VMs could be cloned, copied, migrated, and spun up or down to meet demand or conserve resources.
Virtual machines also helped cut costs, because more VMs could be consolidated onto fewer physical machines. Legacy systems running older applications could be turned into VMs and physically decommissioned to save even more money.
But virtual machines still have their share of problems. Virtual machines are large (gigabytes), each one containing a full operating system. Only so many virtualized apps can be consolidated onto a single system. Provisioning a VM still takes a fair amount of time. Finally, the portability of VMs is limited. After a certain point, VMs are not able to deliver the kind of speed, agility, and savings that fast-moving businesses are demanding.
Docker container benefits
Containers work a little like VMs, but in a far more specific and granular way. They isolate a single application and its dependencies—all of the external software libraries the app requires to run—both from the underlying operating system and from other containers.
All of the containerized apps share a single, common operating system (either Linux or Windows), but they are compartmentalized from one another and from the system at large. The operating system provides the isolation mechanisms (on Linux, kernel features such as namespaces and control groups) that make this compartmentalization happen. Docker wraps those mechanisms in a convenient set of interfaces and metaphors for the developer.
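Docker exposes those kernel-level controls as simple flags on `docker run`. The image name and limits below are illustrative, not prescriptive:

```shell
# Run a container capped at 256 MB of RAM and half a CPU core,
# isolated from other containers and from the host at large
docker run -d --name web \
  --memory=256m \
  --cpus=0.5 \
  nginx:alpine
```

The same flags work for any containerized app, which is what makes resource management consistent across all of them.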
The benefits of Docker containers show up in many places. Here we list some of the major advantages of Docker and containers.
Docker enables more efficient use of system resources
Instances of containerized apps use far less memory than virtual machines, they start up and stop more quickly, and they can be packed far more densely on their host hardware. All of this amounts to less spending on IT.
The cost savings will vary depending on which apps are in play and how resource-intensive they are, but containers almost always work out as more efficient than VMs. It’s also possible to save on software licensing costs, because you need far fewer operating system instances to run the same workloads.
Docker enables faster software delivery cycles
Enterprise software must respond quickly to changing conditions. That means both easy scaling to meet demand and easy updating to add new features as the business requires.
Docker containers make it easy to put new versions of software, with new business features, into production quickly—and to quickly roll back to a previous version if you need to. They also make it easier to implement strategies like blue/green deployments.
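A sketch of what rollout and rollback look like with image tags (the registry host, app name, and version numbers here are hypothetical):

```shell
# Deploy version 2.0 of the app
docker pull registry.example.com/myapp:2.0
docker stop myapp && docker rm myapp
docker run -d --name myapp -p 80:8080 registry.example.com/myapp:2.0

# Roll back: the previous image still exists in the registry, unchanged,
# so reverting is just running the old tag again
docker stop myapp && docker rm myapp
docker run -d --name myapp -p 80:8080 registry.example.com/myapp:1.9
```

Because each version is a complete, immutable image, rolling back does not require rebuilding or un-patching anything.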
Docker enables application portability
Where you run an enterprise application matters—behind the firewall, for the sake of keeping things close by and secure; or out in a public cloud, for easy public access and high elasticity of resources. Because Docker containers encapsulate everything an application needs to run (and only those things), they allow applications to be shuttled easily between environments. Any host with the Docker runtime installed—be it a developer’s laptop or a public cloud instance—can run a Docker container.
Docker shines for microservices architecture
Lightweight, portable, and self-contained, Docker containers make it easier to build software along forward-thinking lines, so that you’re not trying to solve tomorrow’s problems with yesterday’s development methods.
One of the software patterns containers make easier is microservices, where applications are constituted from many loosely coupled components. By decomposing traditional, “monolithic” applications into separate services, microservices allow the different parts of a line-of-business app to be scaled, modified, and serviced separately—by separate teams and on separate timelines, if that suits the needs of the business.
Containers aren’t required to implement microservices, but they are perfectly suited to the microservices approach and to agile development processes generally.
Problems Docker containers don’t solve
The first thing to keep in mind about containers is the same piece of advice that applies to any software technology: This isn’t a silver bullet. Docker containers by themselves can’t solve every problem. In particular:
Docker won’t fix your security issues
Software in a container can be more secure by default than software run on bare metal, but that’s like saying a house with its doors locked is more secure than a house with its doors unlocked. It doesn’t say anything about the condition of the neighborhood, the visible presence of valuables tempting to a thief, the routines of the people living there, and so on. Containers can add a layer of security to an app, but only as part of a general program of securing an app in context.
Docker doesn’t turn applications magically into microservices
If you containerize an existing app, that can reduce its resource consumption and make it easier to deploy. But it doesn’t automatically change the design of the app, or how it interacts with other apps. Those benefits only come through developer time and effort, not just a mandate to move everything into containers.
If you put an old-school monolithic or SOA-style app in a container, you end up with, well, an old app in a container. That doesn’t make it any more useful to your work; if anything, it might make it less useful.
Containers by themselves don’t have the mechanisms to compose microservice-style apps. You need a higher level of orchestration to accomplish this. Kubernetes is the most common example of such an orchestration system. Docker swarm mode can also be used to manage many Docker containers across multiple Docker hosts.
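As a minimal illustration of what orchestration adds, here is swarm mode in three commands (the service name and replica counts are arbitrary):

```shell
# Turn the current Docker host into a single-node swarm manager
docker swarm init

# Run three replicas of a service; the swarm schedules them across
# whatever nodes have joined and restarts any that fail
docker service create --name web --replicas 3 -p 80:80 nginx:alpine

# Scale up without redeploying anything
docker service scale web=5
```

Scheduling, replication, and self-healing live at this orchestration layer, not in the containers themselves.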
Docker isn’t a substitute for virtual machines
One persistent myth of containers is that they make VMs obsolete. Many apps that used to run in a VM can be moved into a container, but that doesn’t mean all of them can or should. If you’re in an industry with heavy regulatory requirements, for instance, you might not be able to swap containers for VMs, because VMs provide more isolation than containers.
The case for Docker containers
Enterprise development work is notorious for being hidebound and slow to react to change. Enterprise developers chafe against such constraints all the time—the limitations imposed on them by IT, the demands made of them by the business at large. Docker and containers give developers more of the freedom they crave, while at the same time providing ways to build business apps that respond quickly to changing business conditions.
Copyright © 2023 IDG Communications, Inc.
Originally posted on January 4, 2023 @ 12:57 pm