Taking Docker to Production with Confidence
A well-implemented, promotion-based model takes immutable binaries through security gates.
by Baruch Sadogursky

While Docker has enabled an unprecedented velocity of software production, it is all too easy to spin out of control. A promotion-based model is required to control and track the flow of Docker images as much as it is required for a traditional software development lifecycle. A well-implemented, promotion-based model takes immutable binaries through security gates. With Docker that is more easily said than done. This article describes the challenges and the solutions in implementing a rock-solid, immutable, promotion-based release model for Docker images.

Many organizations developing software today use Docker in one way or another. If you go to any software development or DevOps conference and ask a big crowd of people, "Who uses Docker?" most people in the room will raise their hands. But if you then ask the crowd, "Who uses Docker in production?" most hands will fall immediately. Why is it that such a popular technology that has enjoyed meteoric growth is so widely used during the early phases of the development pipeline, but rarely used in production?

Software Quality: Developer Tested, Ops Approved

A typical software delivery pipeline looks something like this (and has for over a decade!):

Figure 1. Typical software delivery pipeline (source: Hüttermann, Michael. Agile ALM. Shelter Island, N.Y.: Manning, 2012. Print.)

At each phase in the pipeline, the representative build is tested, and the binary outcome of a build can pass through to the next phase only if it passes all the criteria of the relevant quality gate. By promoting the original binary, we guarantee that the same binary we built in the continuous integration (CI) server is the one deployed or distributed. By implementing rigid quality gates, we guarantee access control for untested, tested, and production-ready artifacts.

The Unbearable Lightness of $ docker build

Because running a Docker build is so easy, the image is simply rebuilt at each phase of the pipeline instead of passing through a quality gate to the next phase.

"So what," you say? So plenty. Let's look at a typical build script.

Figure 2. Typical build script
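
A typical build script of this kind might look something like the following hypothetical Dockerfile (the image name, package list, and download URL are illustrative):

FROM ubuntu:latest
# Pull in whatever the newest versions of the build tools happen to be today
RUN apt-get update && apt-get install -y openjdk-8-jre wget
# Fetch the "latest" application archive from a (hypothetical) location
RUN wget -O /opt/myapp.tar.gz http://example.com/releases/myapp-latest.tar.gz
RUN tar -xzf /opt/myapp.tar.gz -C /opt
CMD ["/opt/myapp/bin/run.sh"]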

To build your product, you need a set of dependencies, and the build will normally download the latest version of each dependency you need. But because each phase of the development pipeline is built at a different time, you can't be sure that the same version of each dependency that went into your development build also went into your production build.

But we can fix that. Let's use the following instead:

FROM ubuntu:14.04

Done. Or are we?

Can we be sure that the Ubuntu version 14.04 downloaded into the development pipeline will be exactly the same as the one built for production? No, we can't. What about security patches or other changes that don't affect the version number? But wait, there is a way. Let's use the fingerprint (the SHA256 digest) of the image. That's rock solid! We'll specify the base image as follows:

FROM ubuntu@sha256:0bf3461984f2fb18d237995e81faa657aff260a52a795367e6725f0617f7a56c
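
(If you need to discover an image's digest in the first place, the Docker client will show it for any locally pulled image:)

$ docker images --digests ubuntu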

But, what was that version again? Is it older or newer than the one I was using last week?

You get the picture. Using fingerprints is neither readable nor maintainable and, in the end, nobody really knows what went into the Docker image.

And what about the rest of the Dockerfile? Most of it is just a bunch of implicit or explicit dependency resolution, either in the form of apt-get commands or wget commands that download files from arbitrary locations. For some of the commands you can nail down the version, but with others you can't even be sure how the dependency resolution is done. And what about transitive dependencies?
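
Consider this hypothetical fragment: the first command pins a version (though not the versions of its transitive dependencies), while the second can fetch different bits every time the image is rebuilt:

# Version pinned explicitly, but its transitive dependencies are not
RUN apt-get update && apt-get install -y curl=7.35.0-1ubuntu2
# No version at all; whatever sits at this (hypothetical) URL today gets baked in
RUN wget -O /opt/tool.tar.gz http://example.com/downloads/tool-latest.tar.gz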

So you end up with this:

Figure 3. Rebuilding the Docker image at every phase

Basically, by rebuilding the Docker image at each phase in the pipeline, you are actually changing it, so you can't be sure that the image that passed all the quality gates is the one that got to production.

Stop Rebuilding and Start Promoting

What we should be doing is taking our development build and—rather than rebuilding the image at each stage—promoting it as an immutable and stable binary through the quality gates to production.

Figure 4. Promoting the development build

Sounds good. Let's do it with Docker.

Wait—not so fast.

Docker Tags Are a Drag

This is what a Docker tag looks like:

Figure 5. Reference documentation of the docker tag command
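
The essential part of that reference is that the target image name can embed a registry host. Roughly, the syntax boils down to this, shown with a hypothetical example:

$ docker tag IMAGE[:TAG] [REGISTRYHOST/][USERNAME/]NAME[:TAG]
$ docker tag myapp:1.0 docker-dev.example.com/myapp:1.0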

The Docker tag limits us to one registry per host: the registry's hostname is baked into the image name, so a single host can expose only one Docker registry. How do you build a promotion pipeline if you can work with only one registry?

"I'll promote using labels," you say. "That way I need only one Docker registry per host." That will work, of course, to some extent. Docker labels (plain key:value properties) might be a fair solution for promoting images through minor quality gates, but are they strong enough to guard your production deployment? Because you can't manage permissions on labels, probably not. What's the name of the property? Did QA update it? Can developers still access (and change) the release candidate? The questions go on and on. Instead, let's look at an example for a more robust promotion solution: one that uses JFrog Artifactory.

Virtual Repositories: Tried and True

Virtual repositories have been in Artifactory since version 1.0. More recently, the capability to deploy artifacts to a virtual repository was added. This means that virtual repositories can be a single entry point for both uploading and downloading Docker images.

Figure 6. Uploading and downloading Docker images using Artifactory

Here's what we're going to do:

  1. Deploy our build to a virtual repository that functions as our development Docker registry.
  2. Promote the build within Artifactory through the pipeline.
  3. Resolve production-ready images from the same (or even a different) virtual repository now functioning as our production Docker registry.

This is how it works:

Our developer (or our Jenkins CI server) works with a virtual repository that wraps a local development repository, a local production repository, and a remote repository that proxies Docker Hub (as the first step in the pipeline, our developer might need access to Docker Hub in order to create our image). Once our image is built, it's deployed through the docker-virtual repository to docker-dev-local.
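
In practice, the push side might look like this (docker-virtual.example.com is a hypothetical hostname; use whatever your reverse proxy exposes for the docker-virtual repository):

$ docker login docker-virtual.example.com
$ docker tag myapp:1.0 docker-virtual.example.com/myapp:1.0
$ docker push docker-virtual.example.com/myapp:1.0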

Figure 7. Using a virtual repository to wrap other repositories

Now, Jenkins steps in again and promotes our image through the pipeline to production.
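
One way to script that step is Artifactory's Docker promotion REST API. A sketch, assuming the repository names used above and a hypothetical server URL (check the documentation of your Artifactory version for the exact endpoint and payload):

$ curl -u user:password -X POST \
    "https://artifactory.example.com/artifactory/api/docker/docker-dev-local/v2/promote" \
    -H "Content-Type: application/json" \
    -d '{"targetRepo": "docker-prod-local", "dockerRepository": "myapp", "tag": "1.0"}'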

Figure 8. Promoting the image through the pipeline

At any step along the way, you can point a Docker client at any of the intermediate repositories and extract the image for testing or staging before promoting to production. Once your Docker image is in production, you can expose it to your customers through another virtual repository functioning as your production Docker registry. You don't want customers accessing your development registry or any of the other registries in your pipeline. You want them to access only the production Docker registry. There is no need for them to access any other repositories, because unlike other package formats, the point of a Docker image is that it has everything it needs.
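
From the customer's point of view, the production registry is just another Docker registry, so consuming the image is a one-liner (the hostname is again hypothetical):

$ docker pull docker-prod.example.com/myapp:1.0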

Figure 9. Exposing the production image to customers

So, we've done it. We built a Docker image, promoted it through all phases of testing and staging, and once it passed all those quality gates, the exact same image we created in development was deployed to production servers and made available for download by end users—without incurring the risk of a noncurated image being deployed.

What About Setup?

You might ask whether getting Docker to work with all these repositories in Artifactory is easy to set up. Well, it's now easier than ever with the new Reverse Proxy Configuration Generator. If you use Artifactory with NGINX or Apache HTTP Server, you can easily expose all of your Docker registries and start promoting Docker images to production.
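
The generated configuration boils down to routing the Docker registry API to the matching Artifactory repository. A minimal hand-written NGINX sketch of the same idea, assuming Artifactory listens on localhost:8081 and using a hypothetical hostname and certificate paths:

server {
    listen 443 ssl;
    server_name docker-virtual.example.com;            # hypothetical hostname
    ssl_certificate     /etc/nginx/ssl/example.crt;    # hypothetical paths
    ssl_certificate_key /etc/nginx/ssl/example.key;

    # Forward Docker registry API calls to the docker-virtual repository
    location /v2/ {
        proxy_pass http://localhost:8081/artifactory/api/docker/docker-virtual/v2/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}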

About the Author
Baruch Sadogursky (aka JBaruch) hangs out with JFrog's tech leaders, writes code around the JFrog platform and its ecosystem, and then speaks and blogs about it all. He is a professional conference speaker on DevOps, Java, and Groovy topics, and he is a regular at the industry's most prestigious events, including JavaOne (where he was awarded a Rock Star award), DockerCon, Devoxx, DevOpsDays, OSCON, QCon, and many others. His full speaker history is available on Lanyrd. Sadogursky is on Twitter and blogs at http://www.jfrog.com/blog/ and http://blog.bintray.com.