Top 5 Operational Challenges for Docker Applications
The challenges for container-based applications are shifting from developer adoption to operational management as Docker applications move into enterprise production environments.
by Bob Quillin

The worldwide explosion of container-based application development at the developer grassroots level has given rise to a next wave of challenges for operations and DevOps teams. It's a classic systems and application management phenomenon that has plagued the industry for decades: new technologies are introduced to much buzz and excitement, but the dreary task of managing and operating them is left as a secondary, next-phase exercise for operations and, now, DevOps groups.

Why Now: Containers, DevOps, CI/CD, and the Cloud

Compared with earlier generations of application technologies, Docker and container-based applications benefit from some significant advantages in today's software development environments. The first major difference is the rise of DevOps methodologies and processes over the last five years. One of the basic premises of DevOps is to eliminate friction between development and operations functions through collaboration and coordination, to deploy and deliver software faster and more efficiently. Containerization is a killer app for DevOps. By leveraging the fundamental container design principle of portability, developers can create container-based artifacts on their laptops or development environments that can in turn be handed off down the test, QA, staging, and operations pipeline without the system configuration, patch, and release discrepancies of the past. Those discrepancies created churn, delays, and finger-pointing, and they drove the adoption of tools such as Chef and Puppet that are designed to avoid drift and keep all these system configurations in sync.

The second major difference is the adoption of continuous integration (CI) and continuous delivery (CD) tools that automate the flow through the pipeline. Docker images can be stored and managed in registries that CI and CD tools curate and control through the development, test, and deployment process.
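
As a rough sketch of that flow, the snippet below uses the Docker SDK for Python (the docker package) to promote an image from one pipeline stage's repository to the next. The registry address, repository layout, and image names are hypothetical, and in practice a CI/CD tool would gate each promotion:

  import docker

  REGISTRY = "registry.example.com"  # hypothetical private registry

  def promote(image_name, version, from_stage, to_stage):
      """Pull a tested image from one stage's repository and push it,
      unchanged, to the next stage's repository."""
      client = docker.from_env()
      source = f"{REGISTRY}/{from_stage}/{image_name}"
      target = f"{REGISTRY}/{to_stage}/{image_name}"
      image = client.images.pull(source, tag=version)  # fetch the exact tested artifact
      image.tag(target, tag=version)                   # retag it for the next stage
      client.images.push(target, tag=version)          # a CD tool would gate this step

  # For example: promote("orders-api", "1.4.2", from_stage="test", to_stage="staging")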

The final difference is the availability of cloud-based services that facilitate and, in many ways, democratize development by making compute, storage, network, and now container management services available to developers and development teams big and small, from students to startups to the largest enterprises. Most, if not all, of these cloud-based services are themselves built on container technologies and still use them to build, run, and scale their cloud platforms. From Docker, which emerged from the dotCloud platform-as-a-service (PaaS) offering, to Kubernetes, which drew from Google's famed internal Borg platform, containerization and its related operational patterns have grown out of the cloud and are well suited to the elasticity and scalability that the cloud provides. But even given all these powerful enablers, operational challenges abound.

Operational Challenges Loom Ahead

The native strengths of containerization also create inherent challenges for operations teams. Docker, container technologies, and microservices-based architectures are in many ways more complex than traditional monolithic or n-tier approaches because there are more (sometimes many more) services to manage, these services are ephemeral and mobile, and they are intelligent and self-managing (via designed-in cluster management, scheduling, and orchestration policies). From operations teams more familiar with traditional application technologies, we typically hear five major areas of concern:

  • Monitoring and visibility
  • Configuration
  • Control
  • Integration
  • Security

The Visibility Conundrum

There is some debate within the container and Docker ecosystem about whether container-based applications need to be monitored and visualized at all. Docker developers rely on a broad array of command-line tools to deploy and manage their applications during development and test, so Docker-based applications are often handed off to operations with little or no monitoring designed in.

To compensate, many monitoring tools quickly added container instrumentation and container monitoring to their arsenals. But these added capabilities lacked the context to understand the deeper container dependencies required to help operations teams map a container or image problem to the related application or tier, to the virtual machine (VM) or host the container was running on, or to the storage volumes or services the container was mapped to. Just as application performance management (APM) tools such as AppDynamics or New Relic understood the underlying dependencies of more-traditional Java, Ruby, or Node applications, operations teams should seek solutions that provide that deeper level of clarity for Docker and container environments, while also supporting the monitoring, dashboard, alerting, and reporting standards teams require to support live services in production rather than in dev/test environments.
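
As a minimal sketch of what that context can look like, the snippet below uses the Docker SDK for Python to sample per-container stats and join them with labels; the app and tier labels are hypothetical conventions a team would have to adopt to map containers back to applications and tiers:

  import docker

  def snapshot():
      """Take a point-in-time sample for each running container, joined with
      the labels that map it back to an application and tier."""
      client = docker.from_env()
      for c in client.containers.list():
          stats = c.stats(stream=False)   # one-shot stats from the Docker API
          mem = stats["memory_stats"].get("usage", 0)
          labels = c.labels               # assumes the team sets app/tier labels
          print(f"{c.name}: image={c.image.tags}, mem={mem} bytes, "
                f"app={labels.get('app', '?')}, tier={labels.get('tier', '?')}")

  snapshot()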

Configuration Management, the Next Generation
The mainstays of configuration management over the last 10 years have been tools such as Puppet and Chef, designed to automate system configuration so your applications run in consistent environments as they move through the development pipeline. Fast-forward to the container generation: system configuration consistency still matters, but Docker and containerization have eliminated one of the primary drivers for those tools by ensuring portability across systems. Instead of treating the symptom, containers went right after the cause. So, problem solved, right? Well, kind of, sort of.

As we all know, problems typically shift up the stack over time, and this is no different. The challenges now center on the support system surrounding the containers: cluster management, load balancing, resource pooling, access policies, and application lifecycle management for container-based applications. How do DevOps teams optimize host, VM, and container clustering for specific applications to balance cost, performance, reliability, and security concerns? Load balancing operates not only at the host level but also at the container level, and it must address a wide array of potential networking challenges. Operations teams need access to the knobs and dials, or at least visibility into configurations that are most often buried deep inside container configuration YAML or JSON files.
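
As one illustration, a minimal sketch using the Docker SDK for Python can surface the effective resource settings for each running container, whatever YAML or JSON file originally set them:

  import docker

  def show_limits():
      """Print the effective resource 'knobs and dials' for each running
      container, as reported by the Docker inspect API."""
      client = docker.from_env()
      for c in client.containers.list():
          host_cfg = c.attrs["HostConfig"]          # the same data `docker inspect` shows
          mem = host_cfg.get("Memory", 0)           # bytes; 0 means no limit set
          cpus = host_cfg.get("NanoCpus", 0) / 1e9  # 0 means unconstrained
          print(f"{c.name}: memory={mem or 'unlimited'}, cpus={cpus or 'unconstrained'}")

  show_limits()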

Controlling the Uncontrollable
Container orchestration and scheduling policies enable self-healing and self-managing applications, a huge potential benefit of the container application pattern. When a microservices-based application is managing hundreds, if not thousands, of microservices at a time, manual control is impossible and automation is imperative. But operations teams also have cost, security, scaling, and deployment constraints that have to be factored in. These constraints can involve the use of specific availability domains, image shapes, resource assignments, and cost/performance trade-offs. Setting caps and constraints on orchestration rules can help operations teams gain some level of control and feel confident that self-management is happening within appropriate boundaries, as the sketch below illustrates. Also, container services that are integrated with or are part of an underlying cloud platform can access the native controls, domain, geography, and identity rules that are part of the organization's overall cloud deployment.
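
A minimal sketch of such guardrails, assuming a hypothetical image name, uses the Docker SDK for Python to launch a container with explicit memory, CPU, and restart caps:

  import docker

  client = docker.from_env()

  # Hypothetical image; the caps below are the kind of boundaries an
  # operations team might require before self-managing workloads go live.
  container = client.containers.run(
      "registry.example.com/orders-api:1.4.2",
      detach=True,
      mem_limit="512m",           # hard memory ceiling
      nano_cpus=1_000_000_000,    # at most one CPU core
      restart_policy={"Name": "on-failure", "MaximumRetryCount": 3},
  )
  print(container.name, "started with caps applied")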

Integration Matters
There are good and bad things about new technologies. On the good side, they can solve significant problems and push the industry forward, as containers have done, often in new and innovative ways. Unfortunately, that often comes at the expense of leveraging and integrating with everything else that a development, DevOps, or cloud team uses on a daily basis. So a requisite maturation phase typically follows rapid innovation, and containers and Docker are following that model. There are hard problems to figure out now on the networking and storage fronts. Equally difficult is keeping an on-premises dev/test environment and a cloud dev/test environment consistent, and integrating the flow between the two in such a hybrid cloud setup. Standardizing on Docker and maintaining a consistent database platform is a great start; CI, CD, and container registry integration is another simple next step.
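
One small consistency check is sketched below: assuming two hypothetical Docker hosts, one on-premises and one in the cloud, it uses the Docker SDK for Python to confirm that both environments resolved the same image reference to the same content digest:

  import docker

  # Hypothetical endpoints for an on-premises host and a cloud host.
  ON_PREM = docker.DockerClient(base_url="tcp://onprem.example.com:2376", tls=True)
  CLOUD = docker.DockerClient(base_url="tcp://cloud.example.com:2376", tls=True)

  def same_artifact(image_ref):
      """Return True if both hosts hold the same content digest for image_ref."""
      on_prem = set(ON_PREM.images.get(image_ref).attrs.get("RepoDigests", []))
      cloud = set(CLOUD.images.get(image_ref).attrs.get("RepoDigests", []))
      return bool(on_prem & cloud)

  # For example: same_artifact("orders-api:1.4.2")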

Be Secure
Security starts from the bottom up, and best practices for container security begin with the security of the underlying platform you are running on: compute, networking, and storage. The Docker daemon requires root privileges, so only privileged, trusted users should have that access. In containers as a service (CaaS), this issue is eliminated because the container management system typically controls the Docker daemon, and users cannot (and actually do not have to) access the system as root. Furthermore, a trusted registry enables you to securely store and manage which images can run as containers on your systems. Finally, if a multitenant cloud-based container service concerns you, look for single-tenant CaaS offerings that let you run your container-based workloads exclusively on your own clusters of VMs.
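
As a small illustration of bottom-up hygiene, this sketch uses the Docker SDK for Python to flag running containers whose main process runs as root, one common check in a container security audit:

  import docker

  def audit_root_containers():
      """Warn about running containers whose process runs as root
      (an empty, '0', or 'root' User field in the container config)."""
      client = docker.from_env()
      for c in client.containers.list():
          user = c.attrs["Config"].get("User", "")
          if user in ("", "0", "root"):
              print(f"WARNING: {c.name} ({c.image.tags}) is running as root")

  audit_root_containers()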

Finally, Something Dev and Ops Can Agree On

Container-based applications might present new challenges in visibility, configuration, control, integration, and security, but the pros far outweigh the cons. Operations teams have been adopting DevOps approaches over the last few years and are much more ready to adapt to new platforms than they were just five years ago. And developers have been eating up Docker and containers as much as they can. So, ironically, Docker and containers are something that both development and operations teams can agree on, because these technologies benefit both teams and make the dream of DevOps much easier to attain. In the end, driving end-to-end adoption of Docker apps from development through production will create the glue that cements a true DevOps culture and a rock-solid platform for future success.

About the Author
As vice president of software development for the Oracle Container Group, Bob Quillin is responsible for the products and the team from the StackEngine acquisition by Oracle in December 2015. Based in Austin, Texas, the group develops container-based services designed to help developers and DevOps teams build, orchestrate, and scale enterprise-grade container apps. In 2016, the group launched Oracle Container Cloud Service, a container management service built from the StackEngine technology for building and running Docker applications on Oracle Cloud. You can find him at @bobquillin.

 
