
Sep 10 2019
Data Center

Best Practices for Using Container Technology in Government

Deploying containers when it makes sense to do so can produce big benefits for agencies eyeing the cloud.

Far from a passing fad, containers are a logical outgrowth of the huge success of virtualization and can help to solve a wide range of operational problems, including deployment, scalability and patching.

Government IT managers with a broad portfolio of existing applications should explore how to take advantage of the benefits of container technology. When an application moves from one computing environment to another, it may not always run as expected. Containers address this by collecting code and all related dependencies into one portable package, so an application runs smoothly wherever it’s deployed, from one cloud to another.

Here are some best practices for optimizing container use to achieve quick wins in your environment.

Pick Applications to Put in Containers Carefully

Containers are best suited to applications under active maintenance. They deliver the most value when an application is large, has many moving parts (such as microservices), might need to scale on short notice, or has an active development team using rapid deployment methodologies. Focus on those cases — even though touching a moving target carries risk — because the payoff will outweigh the extra costs and delays of introducing a new technology.

The flip side is also true: Applications that don’t fit this mold aren’t the best use of containers. Legacy applications should be migrated to dedicated virtual machines, where the isolation provided by virtualization acts as a container of its own. Those applications will need attention sooner or later, but containers don’t deliver enough benefit to justify the resources required to deploy them.
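To make that tradeoff concrete, here is a minimal sketch of the selection logic in Python; the attribute names and the two-trait threshold are illustrative assumptions, not a formal standard.

```python
# A rough sketch of the selection criteria above; the attribute names
# and the two-trait threshold are illustrative, not prescriptive.
def is_container_candidate(app: dict) -> bool:
    """Favor actively maintained apps showing several traits above."""
    if not app.get("actively_maintained", False):
        return False  # legacy apps fit better on dedicated virtual machines
    traits = ("is_large", "has_microservices",
              "scales_on_short_notice", "uses_rapid_deployment")
    return sum(app.get(trait, False) for trait in traits) >= 2

# Example: a large microservices app with an active development team
print(is_container_candidate({
    "actively_maintained": True,
    "is_large": True,
    "has_microservices": True,
}))  # True
```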


Build Monitoring Infrastructure for Containers

Container-encapsulated applications run on servers, just like any other type of application, so some monitoring tools will continue to provide good information. However, containers are often just one part of a larger strategy to move applications out of local data centers and into cloud-based service providers. When that happens, simple performance management tools fall behind very quickly.

For example, traditional IT management tools might look at CPU and memory use to determine if a server is overloaded. That doesn’t make sense in the world of container-encapsulated microservices and cloud-based providers. Instead, it’s more important to measure service response time to uncover resource bottlenecks and pinpoint any potential performance problems.
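As a simple illustration, the sketch below measures service response time using only the Python standard library; the endpoint URL and alert threshold are hypothetical placeholders to be tuned to a real service-level target.

```python
# A minimal response-time probe using only the standard library;
# the endpoint URL and threshold below are hypothetical placeholders.
import time
import urllib.request

SERVICE_URL = "http://app.agency.example/healthz"  # hypothetical endpoint
THRESHOLD = 0.5  # seconds; tune to the service-level target

start = time.perf_counter()
with urllib.request.urlopen(SERVICE_URL, timeout=5) as response:
    response.read()
elapsed = time.perf_counter() - start

status = "WARN" if elapsed > THRESHOLD else "OK"
print(f"{status}: {SERVICE_URL} answered in {elapsed:.3f}s")
```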

Even IT managers who aren’t planning an immediate migration to the cloud should consider monitoring tools specifically designed to collect (and interpret) performance metrics and event logs from container-focused environments. Then, when a cloud migration does occur, the “lift and shift” will be simpler and more transparent. 
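In container-focused environments, a collector can pull metrics and event logs straight from the container runtime. The sketch below uses the Docker SDK for Python (the docker package) and assumes a reachable local Docker daemon; it is an outline, not a production monitoring agent.

```python
# A minimal metrics-and-logs collector built on the Docker SDK for
# Python; assumes the "docker" package and a reachable Docker daemon.
import docker

client = docker.from_env()

for container in client.containers.list():
    stats = container.stats(stream=False)  # one-shot snapshot, not a stream
    mem_bytes = stats["memory_stats"].get("usage", 0)
    print(f"{container.name}: {mem_bytes / 2**20:.1f} MiB in use")
    for line in container.logs(tail=5).splitlines():  # recent event lines
        print(f"  {line.decode(errors='replace')}")
```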


Manage Containers with a Watchful Eye

Application developers have adopted containers enthusiastically because they simplify the process of building large and complicated applications. Typical multitier (front-end, business logic, back-end) architectures are here to stay, but the internals of each tier are no longer monolithic pieces with complicated dependencies. These modern applications often use microservices: small processes that launch, perform a single task (or small set of tasks) and then shut down — all within seconds. Containers are designed explicitly to support this kind of architecture, allowing for quick startup and shutdown of capacity across many servers.
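The pattern is easy to demonstrate. This sketch, again using the Docker SDK for Python against a local daemon, launches a throwaway container that performs one task and is gone seconds later; the alpine image is simply a convenient small example.

```python
# A minimal short-lived container: start, do one task, shut down.
# Assumes the "docker" package, a local daemon and the alpine image.
import time
import docker

client = docker.from_env()

start = time.perf_counter()
output = client.containers.run(
    "alpine", ["echo", "task complete"],
    remove=True,  # delete the container as soon as the task finishes
)
elapsed = time.perf_counter() - start

print(output.decode().strip())                # -> task complete
print(f"full lifecycle took {elapsed:.2f}s")  # typically a few seconds
```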

44%

The percentage of survey respondents who plan to replace some virtual machines with containers

Source: Diamanti, "2018 Container Adoption Benchmark Survey," July 31, 2018

Using containers and microservices, developers can easily avoid brittle designs that fall apart when a library is upgraded or patched, and they gain free scalability in the process. For them, it’s win-win-win.

But for IT managers, containers bring a lot of moving pieces, and much potential for deployment complexity. For example, containers can be “packed” more tightly into servers to use resources more efficiently. But if two dependent services are running on different hosts, they may create network or inter-VM traffic that must be managed and controlled. The whole idea of DevOps — a set of practices that automates processes between the app development and IT teams — shifts some operational responsibilities to the development team, but in the long run, IT managers still have to see the big picture.

Configuration management databases and application inventories have to drill down deeper into application architecture: What services are needed, how do they interact, and what kind of traffic patterns are expected? Prepare to train operations teams as well as developers when shifting to containers.
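A deeper inventory can be as simple as a structured record of services and their dependencies that operations teams can query before packing containers onto hosts. The service names and traffic figures in this sketch are illustrative placeholders.

```python
# An illustrative service inventory; names and traffic figures are
# placeholders, not a real agency deployment.
SERVICES = {
    "web-frontend": {"depends_on": ["auth", "catalog"], "expected_rps": 200},
    "auth":         {"depends_on": ["user-db"],         "expected_rps": 80},
    "catalog":      {"depends_on": ["catalog-db"],      "expected_rps": 150},
}

def dependency_pairs(services):
    """Every caller -> callee pair whose traffic must be planned for."""
    return [(name, dep)
            for name, spec in services.items()
            for dep in spec["depends_on"]]

for caller, callee in dependency_pairs(SERVICES):
    print(f"{caller} -> {callee}")
```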

Container Security Requires Greater Scrutiny

Containers run on hosts, and IT managers already know how to secure hosts. Everything learned over the years about how to secure the underlying hosts for Unix and Windows applications applies here. In fact, there are guidelines available from the Center for Internet Security specific to hosts running containers.

But container images require a new level of scrutiny, because they are invariably built on top of other open-source components — all of which may have their own security vulnerabilities. An entire supply chain stands behind each container image developers build, and without controls and tools to verify the source and security status of the underlying software, there is a risk of introducing insecure components into otherwise secure hosts.
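Simple controls can catch the worst cases. The sketch below enforces two common supply-chain rules, a trusted-registry allowlist and digest pinning; the registry name and digest are hypothetical placeholders, and real pipelines add signature verification and scanning on top.

```python
# Two basic supply-chain checks on an image reference; the registry
# allowlist and the digest below are hypothetical placeholders.
TRUSTED_REGISTRIES = ("registry.agency.example/",)

def image_is_acceptable(image_ref: str) -> bool:
    """Require a trusted registry and an immutable sha256 digest
    rather than a mutable tag such as :latest."""
    return (image_ref.startswith(TRUSTED_REGISTRIES)
            and "@sha256:" in image_ref)

pinned = "registry.agency.example/app@sha256:" + "0" * 64  # dummy digest
print(image_is_acceptable(pinned))                            # True
print(image_is_acceptable("docker.io/library/nginx:latest"))  # False
```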

Developers may resist the notion of having IT security sit between their fast-moving development cycle and production deployment, but some type of gatekeeper is critical to maintaining secure operations. Of course, IT managers have to do their part — the security and validity checks must be automated and fast so that any vulnerability is identified quickly and clearly.
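One way to automate that gate is a short pipeline step that runs an image scanner and refuses to promote the image on serious findings. The sketch below shells out to the open-source Trivy scanner, which supports an exit-code-on-findings mode; treat the exact flags as an assumption to verify against your scanner’s documentation.

```python
# A minimal deployment gate: scan the image, block on HIGH/CRITICAL
# findings. Assumes the open-source Trivy CLI is installed; verify
# the flags against your scanner's documentation.
import subprocess
import sys

image = sys.argv[1]  # image reference handed in by the pipeline

result = subprocess.run(
    ["trivy", "image", "--exit-code", "1",
     "--severity", "HIGH,CRITICAL", image],
)
if result.returncode != 0:
    sys.exit(f"blocked: {image} has unresolved HIGH/CRITICAL findings")
print(f"cleared: {image} may move to production")
```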

Illustration by Rob Doby