Container

Where to run Docker?

I was asked an interesting question: “I am about to investigate Docker. We are moving to AWS too. So, in your opinion, should I put my energy into EC2 Container Service or into Docker on EC2? Which is better?” I find this type of question interesting because there’s not really a “one size fits all” answer. It depends on your use cases.

Docker High Level Challenges with vendor containers

In previous posts I’ve gone into some detail around how Docker works and some of the ways we can use and configure it. These have been aimed at technologists who want to use Docker, and at security staff who want to control it. It was pointed out to me that this doesn’t really help leadership teams. They’re getting shouted at: “We need Docker! We need Docker!” They don’t have the time (and possibly not the skills) to delve into the low levels the way I have.

Secrets management with Docker Swarm

One of the big problems with a cloudy environment is how to allow the application to get the username/password needed to reach a backend service (e.g. a MySQL database). With a normal application the operations team can inject these credentials at install time, but a cloudy app needs to be able to start/stop/restart/scale without human intervention. This can get worse with containers because these may be started a lot more frequently.
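
As a rough sketch of how Swarm itself addresses this (illustrative commands; the secret and service names here are made up, not taken from the post):

$ echo "s3cr3t" | docker secret create db_password -
$ docker service create --name app --secret db_password myapp:latest

The secret then shows up inside the container as a read-only file at /run/secrets/db_password, so the application can pick it up at start time without any human injecting it.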

Using placement constraints with Docker Swarm

As we’ve previously seen, Docker Swarm mode is a pretty powerful tool for deploying containers across a cluster. It has self-healing capabilities, built-in network load balancers, scaling, private VXLAN networks and more. Docker Swarm will automatically try and place your containers to provide maximum resiliency within the service. So, for example, if you request 3 running copies of a container then it will try and place these on three different machines.
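
As an illustrative sketch (the node label and service name are mine, not from the post), you can also steer placement yourself by labelling nodes and adding a constraint to the service:

$ docker node update --label-add datacenter=east node1
$ docker service create --name web --replicas 3 \
    --constraint 'node.labels.datacenter==east' web-server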

A look at Docker Swarm

In my previous entry I took a quick look at some of the Docker orchestration tools. I spent a bit of time poking at docker-compose and mentioned Swarm. In this entry I’m going to poke a little at Swarm; after all, it now comes as part of the platform and is a key foundation of Docker Enterprise Edition. Docker Swarm tries to take some of the concepts of the single-host model and convert them to a cluster.
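
As a rough illustration of that single-host-to-cluster shift (indicative commands only, not copied from the post):

$ docker swarm init                   # this host becomes a one-node swarm manager
$ docker service create --name web --replicas 3 -p 80:80 web-server
$ docker service ls                   # the swarm schedules and tracks the replicas for you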

Simple Docker Orchestration

In earlier posts I looked at what a Docker image looks like and dug into how it looks at runtime. In this entry I’m going to look at ways of running containers beyond a simple docker run command.

docker-compose

This is an additional program to be installed, but it’s very commonly used. Basically, it takes a YAML configuration file. This can describe networks, dependencies, scaling factors, volumes and so on.
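
A small, hypothetical docker-compose.yml along those lines (the service names and images are assumptions for illustration only):

$ cat docker-compose.yml
version: "3"
services:
  web:
    image: web-server
    ports:
      - "80:80"
    volumes:
      - ./web_base:/var/www/html
    depends_on:
      - db
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
$ docker-compose up -d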

Looking at how a Docker container runs

In the previous entry we looked at how a Docker container image is built. In this entry we’re going to look a little at how a container runs. Let’s take another look at the container we built last time, running apache:

% cat Dockerfile
FROM centos
RUN yum -y update
RUN yum -y install httpd
CMD ["/usr/sbin/httpd","-DFOREGROUND"]
% docker build -t web-server .
% docker run --rm -d -p 80:80 -v $PWD/web_base:/var/www/html \
    -v /tmp/weblogs:/var/log/httpd web-server
63250d9d48bb784ac59b39d5c0254337384ee67026f27b144e2717ae0fe3b57b
% docker ps
CONTAINER ID  IMAGE       COMMAND              CREATED  STATUS  PORTS  NAMES
63250d9d48bb  web-server  "/usr/sbin/httpd -.

What is a Docker container?

Container technology, specifically Docker, is becoming an important part of any enterprise. Even if you don’t have development teams targeting Docker you may have a vendor wanting to deliver their software in container form. I’m not so happy with that, but we’re going to have to live with it. In order to properly control the risk around this I feel it helps to have a feeling for the basics of what a Docker container is, and since I come from a technical background I like to look at it from a technology driven perspective.

Cloud Inventory

One of the golden rules of IT security is that you need to maintain an accurate inventory of your assets. After all, if you don’t know what you have then how can you secure it? This may cover a list of physical devices (servers, routers, firewalls), virtual machines, software… An “asset” is an extremely flexible term and you need to look at it from various viewpoints to ensure you have good knowledge of your environment.

Persistent Applications

A while ago I wrote about some of the technology basics that can be used for data persistency. Apparently this is becoming a big issue, so I’m revisiting this from another direction. Why does this matter? In essence, an application is a method of changing data from one state to another; “I charge $100 to my credit card” fires off a number of applications that result in my account being debited, and the merchant being credited.

Big bugs have lesser bugs

The Siphonaptera has various versions. The version I learned as a kid goes:

Big bugs have little bugs,
Upon their backs to bite 'em,
And little bugs have lesser bugs,
and so, ad infinitum.

We make use of this fact a lot in computer security; a breach of the OS can impact the security of the application. We could even build a simple dependency list:

The security of the application depends on
The security of the operating system depends on
The security of the hypervisor depends on
The security of the virtualisation environment depends on
The security of the automation tool.

LXD and machine containers

A few months back I was invited to an RFG Exchange Rounds taping on containers. There were a number of big-name vendors there. I got invited as an end user with opinions :-) The published segment is on YouTube under the RFG Exchange channel. Unknown to me, Mark Shuttleworth (Canonical, Ubuntu) was a “headline act” at this taping and I got to hear some of what he had to say, in particular around the Ubuntu “LXD” implementation of containers.

Intel Clear Containers

Containers aren’t secure… but neither are VMs

An argument I sometimes hear is that large companies (especially financial companies) can’t deploy containers to production because they’re too risky. The people making this argument focus on the fact that the Linux kernel only provides software segregation of resources. They compare this to virtual machines, where the CPU can enforce segregation (e.g. with VT-x). I’m not convinced they’re right. It sounds very very similar to the arguments about VMs a decade ago.

Building an OS container

In a previous blog entry I described some of the controls that are needed if you want to use a container as a VM. Essentially, if you want to use it as a VM then you must treat it as a VM. This means that all your containers should have the same baseline as your VM OS, the same configuration, the same security policies. Fortunately we can take a VM and convert it into a container.
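
A minimal sketch of that conversion, assuming you can read the VM's root filesystem over ssh (the host name and image tag are made up):

$ ssh root@my-centos-vm 'tar -cz --one-file-system -C / .' | docker import - centos-vm-base
$ docker run -it centos-vm-base /bin/bash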

Using a container as a lightweight VM

In a lot of this blog I have been pushing for the use of containers as an “application execution environment”. You only put the minimal necessary stuff inside the container, treat them as immutable images, never login to them… the sort of thing that’s perfect for a 12-factor application. However there are other ways of using containers. The other main version is to treat a container as a lightweight VM.

Persistent data

In this glorious new world I’ve been writing about, applications are non-persistent. They spin up and are destroyed at will. They have no state in them. They can be rebuilt, scaled out, migrated, replaced and your application shouldn’t notice… if written properly! But applications are pointless if they don’t have data to work on. In traditional compute an app is associated with a machine (or set of machines). These machines have filesystems.
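
One hedged example of giving a container state that outlives it, using a named Docker volume (the names are illustrative):

$ docker volume create dbdata
$ docker run -d --name db -v dbdata:/var/lib/mysql mysql:5.7
$ docker rm -f db        # the container is destroyed...
$ docker run -d --name db -v dbdata:/var/lib/mysql mysql:5.7        # ...but the data survives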

See me present!

Sesh Murthy from Cloud Raxak asked me to co-present at Cloud Expo NY June 2016. I’ve never done such a thing before, so this was a big deal for me. I put together a base presentation that Sesh modified. The video of this is now on YouTube. My part starts at 8m30, and there was a little Q/A at the end (31m35). “Enjoy” watching me do my first ever public talk!

Container Identity

Containers and other elastic compute structures are good ways of deploying applications, especially if you follow some of the guidelines I’ve made in other posts on this topic. However they don’t exist in a vacuum. They may need to call out to “external” services. For example, an Oracle database, or Amazon S3, or another API service provided by other containers. In order to do this the container needs to authenticate to that service.

Network Microsegmentation

A major problem many environments have is a lack of real network control inside the perimeter. They may have large hard border controls (multi-tier DMZs; proxy gateways; no routing between tiers), but once inside traffic is unconstrained. This is sometimes jokingly referred to as “hard shell soft center” network design. If you’re lucky then your prod/dev/qa environments may be segmented. More likely there’s no restriction at all; dev programs may accidentally talk to a prod database.
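
In container land one simple way to claw some of that segmentation back (an illustrative sketch, not a prescription from the post) is to put workloads on separate Docker networks, so a dev container has no route to a prod one:

$ docker network create prod_net
$ docker network create dev_net
$ docker run -d --name prod-db --network prod_net mysql:5.7
$ docker run -d --name dev-app --network dev_net web-server

Containers attached only to dev_net have no path to containers on prod_net, and vice versa.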

Using Containers Securely in Production

This is the content of a presentation I put together for Cloud Expo NY 2016. The final presentation had a lot of this ripped out and replaced with stuff from my co-presenter (Sesh Murthy from Cloud Raxak), because he had information he wanted to present as well and we only had 35 minutes. The resulting presentation was, I think, a good hybrid. This is the original story I wanted to tell.

Building a small docker container

In previous posts I’ve written about small containers; don’t bundle a whole OS image with your app, just have the minimum necessary files and support. The Go language makes it easy to build a static executable, so let’s use this for an example:

$ cat hello.go
package main

import "fmt"

func main() {
	fmt.Println("Hello, World")
}
$ go build hello.go
$ strip hello
$ ls -l hello
-rwxr-xr-x. 1 sweh sweh 1365448 Jun 4 13:48 hello*

We can use this as the basis of a docker container (I’m using “docker” here because it’s a very common technology that’s used by lots of people):
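
The Dockerfile itself is cut off in this excerpt; a minimal sketch of what it could look like (my reconstruction, not necessarily the exact file from the post):

$ cat Dockerfile
# Start from an empty image; the static binary needs no OS files around it
FROM scratch
COPY hello /hello
CMD ["/hello"]
$ docker build -t hello .
$ docker run --rm hello
Hello, World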

Container technology

I’ve spent a few posts talking about the ecosystem required to keep a container secure; hands-off automation, code provenance, and the like. But a number of people have asked me about the technology. Mostly they talk about “docker” and the security concerns. I’ve been loath to talk about technology specifically because it changes. Yesterday the docker daemon ran as root; tomorrow it may not. Yesterday the kernel exposed a problem; tomorrow it won’t.

Maybe containers are VMs after all

Back in Container security I said that we need to think about containers as VMs. I then looked at an easier way of looking at containers, by not treating them as VMs. Hopefully, at this point, some of you were thinking “Hmmm!”. Finally I discussed the processes and workflows outside of the container implementation that are needed to keep containers safe (build processes, etc). We can turn what we’ve learned on its head.

Keeping containers safe

In a previous post I showed that if you stop treating containers as if they were VMs then container security is easy. Now we need to look at how to keep the contents of containers safe. In general there are a number of steps:

Build good containers
Scan existing containers
Replace bad containers

Build good containers

This should just be an extension of your existing source control process; your CI/CD process; your “test driven” processes.
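
For the scanning step, a hedged example using an open source image scanner such as Trivy (this excerpt doesn't name a tool; this is just one possibility):

$ trivy image --severity HIGH,CRITICAL web-server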

Container Security is Easy

People think container security is hard. But it’s not… if you think about it the right way. And that’s where people tend to go wrong, and that’s why they think it is hard. So let’s follow a thought pattern… First we need to consider what a container is and what distinguishes it from a virtual machine. In general a container has the following properties:

Shared kernel
Segmented view of resources
Separate process ID space
Separate filesystem mount space
Separate IPC memory segments
Separate view of the network
…

Multi-platform

Linux VServer (from 2001!

Container security

It started with a set of slides by a friend: My first thought was to wonder how heartbleed, shellshock, cve-2015-7547 and the like fit into this story. He answered “rebuild the world and redeploy”. Which I felt missed the problem. You also need a level of control around what goes into containers, who can build containers, where they get deployed. We have decades of history of knowing that self-run machines are badly patched and badly maintained; if the bug isn’t in the application code then it’s mostly invisible to the developer.