Using Containers Securely in Production

It's the process that matters, not the technology

This is the content of a presentation I put together for Cloud Expo NY 2016. The final presentation had a lot of this ripped out and replaced with stuff from my co-presenter (Sesh Murthy from Cloud Raxak), because he had information he wanted to present as well and we only had 35 minutes. The resulting presentation was, I think, a good hybrid.

This is the original story I wanted to tell. You may recognise some of this content from other blog posts I’ve written :-)

For each section I’ve added, in italics, my thinking and the sort of thing I’d have talked about.

I attach the LibreOffice Impress presentation for those who want to see the original.


What is a container?

  • Shared kernel

    • Segmented view of resources
    • Separate process ID space
    • Separate filesystem mount space
    • Separate IPC memory segments
    • Separate view of the network
  • Multi-platform

    • Linux VServer (from 2001!)
    • OpenVZ
    • AIX WPARs
    • Solaris ~Containers~ Zones (from Solaris 10, 2005)
    • Linux containers (LXC, cgroups, etc)
    • Docker
    • Warden

The thinking here is that there’s a lot of discussion about containers, but there are different ways of thinking about them. I wanted to focus on the lower technology layer, rather than the packaging, at this point to show that this type of technology wasn’t new. I’d used the VServer patches in 2001 to create “bastion” hosts on my firewall.
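
To make the “shared kernel, but separate views” idea concrete, here’s a minimal sketch of my own (not from the deck), in Go and Linux-only: it starts a shell that shares the host kernel but gets its own PID space, mount space, hostname, IPC segments and network view. Run as root, the shell believes it is PID 1 in its own namespace (try echo $$), even though the host just sees another process.

```go
// Minimal namespace demo (Linux only; needs root or the right capabilities).
// The child process shares the host kernel but gets separate views of
// processes, mounts, hostname, IPC and the network.
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWPID | // separate process ID space
			syscall.CLONE_NEWNS | // separate filesystem mount space
			syscall.CLONE_NEWUTS | // separate hostname
			syscall.CLONE_NEWIPC | // separate IPC memory segments
			syscall.CLONE_NEWNET, // separate view of the network
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```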


Container as a VM

Let’s look at the worst case scenario. A container may have:

  • A cut down (maybe!) version of the OS
    • Glibc, bash, web server, support libraries, tools
    • Maybe different versions than those running on the host!
  • Filesystems
    • Private to the container?
    • Shared with the host?
  • Access to the network
    • Bridged to the main network?
    • NAT?
  • Processes that run
  • Network listeners
  • User logins

This sounds like a whole operating system to me. The only difference is the shared kernel.

So, in the worst case, we need to control this the same as the OS. And deal with the unique challenges.

I call this “the hard way”. You can treat containers like this; indeed Solaris zones are exactly like this. But effectively you’re creating a form of “virtual machine”, and so need to control it that way.


Challenges with this model

For each class of threat: how it looks on a virtual machine, how it looks in a container, and what’s new.

  • "Parent Access"
    • Virtual machine: Hypervisor attack; one VM may be able to use hypervisor bugs to gain access to another VM
    • Container: Parent OS attack; a process may be able to escape the container
    • New threat: More people may have access to the host OS than to the hypervisor, so more threat actors. The shared kernel is bigger than a hypervisor: a bigger attack surface. The kernel may allow dynamically loaded modules: a variable attack surface!
  • "Shared resources"
    • Virtual machine: Rowhammer, noisy neighbour, overallocation, ...
    • Container: Mostly the same
    • New threat: Resource separation is now in the kernel; it is not as well segregated, and not all resources may be segregated
  • "Unique code"
    • Virtual machine: Each VM may run a different OS, different patch revisions, different software
    • Container: Each container may run different software versions, different patch levels, different libraries
    • New threat: A scaling issue; we can run many more containers than VMs
  • "Inventory management"
    • Virtual machine: What machines are out there? What OS are they running? Are they patched? What services are they running? ...
    • Container: Mostly the same
    • New threat: We typically have strong controls over who can create new VMs, but we allow anyone to spin up a new container 'cos it's so quick and easy!
  • "Rogue code"
    • Virtual machine: Is the software secure? Acceptable license terms? Vulnerability scanned?
    • Container: Mostly the same
    • New threat: Anyone can build a container; it may have different versions of core code (eg glibc) than the core OS. Developers may introduce buggy low-level routines without noticing

If you are going to use a container as if it’s a VM then you need to apply all your existing VM-level controls to the container; you control VM creation, so you’re going to need to control container creation to the same level! Otherwise you run the risk of “Shadow IT” with unknown software and potential vulnerabilities (shellshock, heartbleed etc).

Why are you using containers this way? Are you just using them as a packaging construct?

I don’t like spending a lot of time on this page, but people typically lean forward and pay attention. Maybe it’s the wall of text that makes them think there’s good content! The point here is to emphasise “the hard way”. And if you’re using containers as a VM technology then you might want to rethink your approach.


What makes a container different?

  • Very dynamic
    • Harder to track, maintain inventory
    • If you don’t know what is out there, how can you patch?
    • How many containers may suffer shellshock, heartbleed or CVE-2015-7547 (glibc) issues because we don’t know where they run?
  • Technology specific issues
    • Docker daemon running as root?
    • SELinux interaction
    • Root inside a container may be able to escape (eg via /sys)
  • Kernel bugs
    • Ensure your parent OS is patched!

What is now highlighted is that the existing solutions may not scale properly. What works for 1,000 (or even 10,000) VMs won’t work for 100,000 or millions of containers.

Can your solutions deal with containers spinning up, running for a few minutes, then shutting down again? Probably not.

This helps distinguish a container from a VM, especially the scaling factor and the dynamic nature.


The Easy way

Think of a container as

  • Transient
    • There is no persistent storage
  • Short lived
    • Anywhere from microseconds upwards
  • Immutable
    • You don’t change the running container; you build a new image and deploy that
  • Untouchable
    • You NEVER LOG IN!
  • Automated
    • If you can’t log in, you’d better automate the builds

Now we can start to focus on what matters: the application. Don’t allow people to create containers; have an “application delivery platform” that does it for them.

Short lived also means it has a drop-dead date; this may be minutes, hours, weeks… Have a chaos monkey do it, or use a risk-based evaluation… but a container should die. They’re not persistent.

The “immutable” and “never log in” parts of this can be hard to swallow. It means a change in AppDev processes (closer to 12-factor applications; it means the app needs to instrument and log stuff better). But this is key to elastic compute.


Minimize attack surface

Basic hygiene rules:

  • Harden the kernel
    • You don’t have to use the vendor provided one
  • Ensure the OS is patched
  • Integrate into your CI/CD process
    • No manual startup of containers
    • Only those that passed testing, vuln scanning, etc can be run
    • No external code without approval
    • Standard code hygiene (provenance, licensing, etc)
  • Automate, automate, automate!

A hardened kernel doesn’t mean “create your own patches”. You can take a kernel direct from www.kernel.org and configure it with a minimal set of options built in. e.g. turn off dynamically loadable modules (they open a large attack surface); only add in drivers necessary for your hardware (easy if you always deploy to a VM platform); don’t add features you don’t need.
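
As a small, optional illustration of the loadable modules point (my own sketch, not from the deck): on Linux, /proc/modules only exists when the kernel was built with module support, and the kernel.modules_disabled sysctl can switch loading off for the rest of the boot. A quick check might look like this:

```go
// Rough check of the host kernel's module-loading attack surface.
// Assumptions: Linux, and that absence of /proc/modules means the kernel
// was built without CONFIG_MODULES.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// No /proc/modules: the kernel has no loadable module support at all.
	if _, err := os.Stat("/proc/modules"); os.IsNotExist(err) {
		fmt.Println("OK: kernel built without loadable module support")
		return
	}

	// Modules are compiled in; see whether loading is disabled for this boot.
	data, err := os.ReadFile("/proc/sys/kernel/modules_disabled")
	if err == nil && strings.TrimSpace(string(data)) == "1" {
		fmt.Println("OK: module loading disabled at runtime")
		return
	}
	fmt.Println("WARNING: kernel can still load modules; attack surface is variable")
}
```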


Container life cycle

The contents of a container are also important

In general there are a number of steps:

  1. Build good containers
    • Known content, internal repos
  2. Deploy containers
    • Orchestration layer, autoscaler, track deployments
  3. Scan existing containers
    • Keep on top of new found bugs
  4. Replace bad containers
    • Replace stuff known to be broken

We’ll go through each of these steps in the next few slides.


Build good containers

You already know how to do this… extension of your existing processes!

  • Don’t create a whole copy of the OS to put inside the container
    • If you need an OS image then use a common base (the same base as you use for traditional compute?). Don’t have a RedHat base, a Debian base, an Ubuntu base… stick with one image
    • If you use golang, how much do you actually need inside the container? A few /dev files? (See the sketch below.)
  • Don’t pull code direct from the internet.
    • npm “left pad” chaos
    • Use internal source repositories
    • If using docker repos, use internal ones
    • Curate the content that goes into them
  • Code scan
    • Source code scanning
    • Binary object scanning
    • CVE detection, OWASP best practices
  • Automated deployment
    • Final image push to a registry; deployed to dev (automated)
    • Promoted to QA/UAT
    • Promoted to prod

Really this is meant to be an extension of the best practices that people are already using for code creation and deployment, applied to the container build process. So don’t pull docker images from a public registry; use your own private one… in exactly the same way that you shouldn’t pull code live from the internet but have an internal source repo.

You can then start to automate things; e.g. have a Jenkins job that’s triggered from a code checkin that does the code scanning, builds the image from the internal repos/registries, validates it passes all checks, pushes into the dev registry. Consider that image immutable and your code promotion becomes easier; just promote good images.
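
To illustrate the golang bullet above (my example, not part of the original deck): something like the service below, built with CGO_ENABLED=0 so it compiles to a single static binary, needs little more than itself inside the image, perhaps plus a few /dev entries and CA certificates if it speaks TLS.

```go
// Hypothetical example service: a self-contained Go binary, so the container
// image around it needs no libc, shell or package manager.
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		// Trivial health endpoint; a real service would expose its own routes.
		fmt.Fprintln(w, "ok")
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

That is the sort of application where the “cut down version of the OS” from the earlier slide shrinks to almost nothing.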


Deploy containers

Orchestration/Automation needs to

  • Deploy only containers from registry
  • Track which containers are running where
    • Effectively maintain “inventory” of code running

Key to deployment at scale

  • Auto scaling
  • Auto repair
  • Blue/Green deployments

This is a small slide, but important. Your orchestration layer (Kubernetes, Cloud Foundry, Swarm… whatever you use) needs to be able to track and report on what images are running, and how those images were built. We’ll see later why…
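
As a rough idea of the raw data behind that inventory, here’s a sketch of mine using the Docker Engine Go SDK (an assumption; whatever orchestration layer you run will have its own API) that lists the containers on one host and the exact image IDs they were started from:

```go
// Minimal per-host inventory sketch: list running containers with the image
// tag and the exact image ID each was started from. An orchestration layer
// would aggregate this across the whole estate.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}

	containers, err := cli.ContainerList(context.Background(), types.ContainerListOptions{})
	if err != nil {
		log.Fatal(err)
	}

	for _, c := range containers {
		// Image is the tag it was started from; ImageID pins the exact build.
		fmt.Printf("%s\t%s\t%s\n", c.ID[:12], c.Image, c.ImageID)
	}
}
```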


Scan Existing Containers

Target state isn’t static

New CVEs are found in:

  • Libraries
  • Base OS image
  • Compiler
  • JVM

Keep scanning everything in the registry. When a new bug (eg shellshock) is found we can identify every bad image.

There’s a new OpenSSL bug every other month. Just because your container was clean when you built it doesn’t mean it’ll remain clean. So we need to keep scanning these container images in case a new bug is found anywhere in the application stack.
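
Purely as an illustration of that re-scan loop (every name and version below is hypothetical; in practice you would drive a real image scanner from the registry contents plus a CVE feed):

```go
// Toy re-scan loop: given the package inventory recorded for each image at
// build time, flag every image that now contains a package version with a
// newly published CVE. All image names here are hypothetical.
package main

import "fmt"

// imagePackages would come from the registry / build metadata.
var imagePackages = map[string][]string{
	"registry.internal/myapp:42":   {"glibc-2.17", "openssl-1.0.1e"},
	"registry.internal/billing:17": {"glibc-2.22", "openssl-1.0.2g"},
}

// newlyVulnerable would come from the latest CVE feed.
var newlyVulnerable = map[string]string{
	"openssl-1.0.1e": "CVE-2014-0160 (Heartbleed)",
}

func main() {
	for image, pkgs := range imagePackages {
		for _, p := range pkgs {
			if cve, bad := newlyVulnerable[p]; bad {
				fmt.Printf("image %s contains %s: %s\n", image, p, cve)
			}
		}
	}
}
```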


Replace bad containers

Your scanning detected a problem

Your orchestration layer identifies where these images are running

Risk based approach

This is where the importance of the orchestration layer comes in. It’s all fine and good knowing the “myapp” image is bad, but if you don’t know where it is running then you can’t replace the running instances.

How quickly you replace it is up to you; if you have an internet-facing shellshock then you might want to cycle your containers pretty quickly, but an internal public-data service may be able to wait until the next container recycle.
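
And, again purely illustrative (hypothetical structures; the real decision belongs in your orchestration layer), the risk-based part is conceptually no more than this:

```go
// Toy risk-based replacement decision: containers running a bad image are
// recycled immediately if internet facing, otherwise left for the normal
// drop-dead recycle. All names and data here are hypothetical.
package main

import "fmt"

type running struct {
	id             string
	image          string
	internetFacing bool
}

func main() {
	badImages := map[string]bool{"registry.internal/myapp:42": true}

	estate := []running{
		{"c1a2b3", "registry.internal/myapp:42", true},
		{"d4e5f6", "registry.internal/myapp:42", false},
		{"a7b8c9", "registry.internal/billing:17", false},
	}

	for _, c := range estate {
		if !badImages[c.image] {
			continue
		}
		if c.internetFacing {
			fmt.Printf("recycle %s now (internet facing, image %s)\n", c.id, c.image)
		} else {
			fmt.Printf("recycle %s at next scheduled refresh (image %s)\n", c.id, c.image)
		}
	}
}
```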


Nothing new!

  • You’re already doing this for regular compute.
  • Right?
    • Right?
  • Nasty secret… it’s where we want to be, but large organisations take time to pivot

Use this to fix existing procedures

Same solution can work on VMs

  • Internal IaaS
  • Amazon AWS via AMI images

There’s nothing really unique about containers here! You can build OS images the same way, put an orchestration layer on top, and manage virtual machines the same way! Have we been managing VMs wrong for the past 20 years?


Technology

  • Nothing here is technology focused
    • Process is the key
  • Only need to care about low levels if building your own
  • When using docker/Apprenda/CloudFoundry/$XYZZY treat it like any other vendor product
    • Track upstream bugs, patch
    • Follow best practices
    • 1,000s of web pages on “how to docker”
  • Control the environment
    • Automation
    • Hands free deployment
    • You need to do this, anyway!

That’s why this was sub-titled “It’s the process that matters, not the technology”.


Big wins

  • How many licenses are you using? You know because your automation tools will tell you what is running
  • A CVE has been found; you know what images are vulnerable and where they are running
    • Your image is more lightweight so may not be vulnerable in the first place!
  • Patching doesn’t exist; you just redeploy
  • Change management is simpler
    • No need for “tripwire”; no one can log in to change contents
    • No need for central ID admin; no accounts to manage
  • Financial regulatory controls become easier
    • But may require educating auditors and regulators on the “new world”
  • Controls end up being pushed to “code management”
    • Which you need to control, anyway!

And another benefit… if your app can be delivered this way, you can also start to deploy VMs this way. A Linux OS without any form of login? Let’s see someone brute-force bad ssh passwords that way :-)

Teaching regulators and auditors about this new world is going to be interesting; they’re gonna freak out!


Conclusion

  • Use the right tools for the job
    • If you want a VM then use a VM
  • If you treat a container as if it’s a VM then you need to control it as if it’s a VM
    • Heavy process
    • Not always scalable
  • Must control containers
    • Avoid Shadow IT
    • Must not allow a “wild west”
  • Not all tasks are suitable for containers

Reminder

Container security is weaker than VM security; the kernel has a large surface area to be attacked.

VM security is weaker than physical machine security; the hypervisor provides an attack surface.

Network security is weaker than air-gap separation.

Air-gapped machines are weaker than a server covered in cement and dropped to the bottom of the ocean.

Your security stance depends on your needs and risk evaluation. Pick the technology that is right for you and use it appropriately.

I was told to always end a presentation with a joke