Using a container as a lightweight VM

OS container

Throughout a lot of this blog I have been pushing the use of containers as an “application execution environment”: you put only the minimal necessary stuff inside the container, treat it as an immutable image, never log in to it… the sort of thing that’s perfect for a 12-factor application.

However, there are other ways of using containers. The other main approach is to treat a container as a lightweight VM. This is sometimes called an “OS container” because you have a complete OS in there (everything except the kernel), and you treat it as if it were a full OS install.
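
To make that concrete, here is a minimal Python sketch (my own illustration, not taken from any particular tool) that you could run inside a container to see which model you are looking at: an application container typically runs the application itself as PID 1, whereas an OS container boots a full init system, just like a VM would.

    # Minimal sketch: inspect PID 1 from inside a running container (Linux only).
    from pathlib import Path

    def pid1_command() -> str:
        """Return the command name of PID 1, read from /proc."""
        return Path("/proc/1/comm").read_text().strip()

    if __name__ == "__main__":
        cmd = pid1_command()
        if cmd in ("systemd", "init"):
            print(f"PID 1 is '{cmd}': this looks like an OS container (or a full system)")
        else:
            print(f"PID 1 is '{cmd}': this looks like an application container")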

Indeed, this is how earlier container technology was used; on Solaris, the zones technology is essentially used to build lightweight VMs, and a number of the cheaper virtual hosting providers use container technology such as Virtuozzo or OpenVZ to present something that looks like a VM.

Characteristics of an OS container

A container used in this way has a number of characteristics that are almost the opposite of what I’ve previously been describing. So an OS container may have the following characteristics:

  • Persistent
    • Storage is allocated and dedicated to the container. You can treat /home and /usr and /opt the same as you would on any server; reboot the container and the files will remain.
  • Long lived
    • These containers may have uptimes of weeks or months, and even after a reboot the container comes back as it was before.
  • Mutable
    • The contents change. Applications may write log files, users may store data, database datafiles may be present…
  • Accessible
    • People may be able to ssh into these things.
  • Manual changes
    • If people can log in and touch files then we have manual processes.

To me, this sounds exactly like a traditional virtual machine (VM). And that means we must treat these containers like traditional VMs.
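
As a rough illustration (my own sketch, with assumed paths and process names rather than anything from a specific product), you can spot these VM-like traits from inside a container: a long uptime, a running sshd, and user data under /home are all signs that you are really operating a small VM.

    # Rough sketch: look for "OS container" traits from inside a container (Linux only).
    from pathlib import Path

    def uptime_days() -> float:
        # First field of /proc/uptime is seconds since boot. Note that in some
        # container setups this reflects the *host's* uptime unless something
        # like lxcfs is mounted over /proc/uptime.
        return float(Path("/proc/uptime").read_text().split()[0]) / 86400

    def process_names() -> set:
        # Collect the command names of all running processes from /proc/<pid>/comm.
        names = set()
        for entry in Path("/proc").iterdir():
            if entry.name.isdigit():
                try:
                    names.add((entry / "comm").read_text().strip())
                except OSError:
                    pass  # process exited while we were looking
        return names

    if __name__ == "__main__":
        print(f"Uptime: {uptime_days():.1f} days")            # long lived?
        print(f"sshd running: {'sshd' in process_names()}")    # accessible?
        home = [p.name for p in Path("/home").iterdir()] if Path("/home").is_dir() else []
        print(f"/home contents: {home or 'empty'}")            # persistent user data?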

Controls

Now in the VM world we have a number of controls that take place inside the VM. You may have things like:

  • Identity Management
  • Privilege Escalation
  • Logging
  • Change management
  • Auditing
  • Backups
  • Patching
  • Monitoring
  • Vulnerability scanning
  • Intrusion detection
  • … and everything else!

For an OS container you will need all of these as well. Container technology doesn’t give you any special wins here over traditional VM technology: because you’re running an OS, you have to manage an OS.
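
For example, “patching” means exactly what it does on any other server. Here is a hedged Python sketch of a crude pending-patches check; the use of apt is an assumption about the distribution inside the container, so substitute dnf, yum or zypper as appropriate.

    # Hedged sketch: count pending package upgrades inside a Debian/Ubuntu-based
    # OS container. "apt" is an assumption about the distro; adapt as needed.
    import subprocess

    def pending_upgrades() -> list:
        result = subprocess.run(
            ["apt", "list", "--upgradable"],
            capture_output=True, text=True, check=True,
        )
        # The first line of output is a "Listing..." header; the rest are packages.
        return [line for line in result.stdout.splitlines()[1:] if line.strip()]

    if __name__ == "__main__":
        packages = pending_upgrades()
        print(f"{len(packages)} packages pending upgrade")
        for line in packages[:10]:
            print("  " + line)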

Now we also have controls around the management of VMs:

  • Inventory management
    • What VMs are present, who owns them, what their purpose is
  • IP address allocation, DNS, etc.
  • Hypervisor access
  • Console controls
  • Control plane access
    • Changing the VM size, attached devices, power management
  • Affinity rules
    • Production and DR should not be on the same physical machine
  • … and everything else!

We’ll need all of these in the OS container model as well, although some of these controls may look different. It’s easy to control access to the hypervisor, but in the container world the equivalent is access to the parent OS. You have to treat those parent OSes as highly privileged, every bit as privileged as you treat your ESX hypervisor (for example).
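
As a small illustration of what that control might look like, here is a Python sketch that reports who is in the container-management groups on the parent OS. The group names (“docker”, “lxd”) are assumptions about common platforms; membership of them is effectively root on the host, so it deserves the same scrutiny as hypervisor admin access.

    # Sketch: list members of groups that grant control of containers on the host.
    # The group names are assumptions ("docker" and "lxd" are common); adjust to taste.
    import grp

    SENSITIVE_GROUPS = ["docker", "lxd"]

    def privileged_members() -> dict:
        members = {}
        for name in SENSITIVE_GROUPS:
            try:
                members[name] = list(grp.getgrnam(name).gr_mem)
            except KeyError:
                members[name] = []  # group doesn't exist on this host
        return members

    if __name__ == "__main__":
        for group, users in privileged_members().items():
            print(f"{group}: {', '.join(users) or '(no members)'}")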

This means building out a lot of infrastructure and tooling to manage these containers; the sort of tooling you already have in place for VMs.
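
As one hedged starting point for that tooling, here is a Python sketch that pulls the list of containers from the host and prints it in a form you could feed into whatever inventory or CMDB you already use for VMs. It assumes an LXD host where “lxc list --format json” is available; adapt it to whichever container manager you actually run.

    # Hedged sketch: dump a basic container inventory from an LXD host.
    # Ownership and purpose still have to come from your own records.
    import json
    import subprocess

    def list_containers() -> list:
        result = subprocess.run(
            ["lxc", "list", "--format", "json"],
            capture_output=True, text=True, check=True,
        )
        return json.loads(result.stdout)

    if __name__ == "__main__":
        for container in list_containers():
            # "name" and "status" are fields in LXD's JSON output.
            print(f"{container.get('name')}: {container.get('status')}")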

So why do it?

Since you don’t really gain any of the normal container benefits, why would you use the technology this way? Some of the answers I’ve received to this question are:

  • Increased application density
    • We can fit more OS containers into a machine than we can VMs.
  • Easier for developers
    • In a strictly development environment we can let dev teams spin up OS containers and experiment in them, perhaps even offline on their laptops (to me this points to a failing of existing VM build processes).
  • Lift and shift migration path
    • Build your app into a container, test it, get it working, then migrate it to a public cloud service.

Are there other use cases? If you can think of some then please let me know in the comments!

Conclusion

I commonly hear “you can’t pass audit” type statements with respect to OS containers. I’m not sure that’s true. You can, as long as you have the right controls in place. Since you’re using these containers as a form of lightweight VM, you must treat them as if they were VMs.

All of the existing controls must be in place, and the control layer properly secured. For example:

  • You don’t allow anyone to create VMs without some checks and balances; you shouldn’t allow them to spin up OS containers without the same controls.
  • You don’t allow just anyone to access the hypervisor; you shouldn’t allow just anyone to access the host OS.
  • You don’t allow unpatched vulnerabilities to exist on VMs; you shouldn’t allow them to exist inside OS containers.

We’re in the early days of OS containers (despite their age), especially when it comes to enterprise management of them. We learned how to manage VM farms; we’re going to need to learn to manage OS container farms to the same level of control. Indeed, I foresee management planes that will cover both VMs and OS containers in the same way.

Remember that security isn’t absolute; there are gaps in hypervisor security (KVM, Xen, ESX… they’ve all had security holes in the past). Containers are newer, and the shared kernel exposes a much larger attack surface, so there are likely more gaps. However, I can see a day when the line between a VM and an OS container blurs, and the traditional VM ends up being reserved for running different OSes (e.g. Windows, Solaris, Linux) on the same hardware, while OS containers become the predominant way of running multiple Linux installs.