Docker

Where to run Docker?

I was asked an interesting question: “I am about to investigate Docker. We are moving to AWS too. So, in your opinion, should I put my energy into EC2 Container Service or into Docker on EC2? Which is better?” I find this type of question interesting because there’s no real “one size fits all” answer. It depends on your use cases.

Docker High Level Challenges with vendor containers

In previous posts I’ve gone into some detail around how Docker works and some of the ways we can use and configure it. These have been aimed at technologists who want to use Docker and at security staff who want to control it. It was pointed out to me that this doesn’t really help leadership teams. They’re getting shouted at: “We need Docker! We need Docker!” They don’t have the time (and possibly not the skills) to delve into the low levels the way I have.

Secrets management with Docker Swarm

One of the big problems with a cloudy environment is how to allow an application to get the username/password it needs to reach a backend service (e.g. a MySQL database). With a normal application the operations team can inject these credentials at install time, but a cloudy app needs to be able to start/stop/restart/scale without human intervention. This gets worse with containers, because they may be started far more frequently.
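
Docker Swarm’s built-in secrets support is one answer to this; a minimal sketch (the secret and service names here are illustrative):

    # Store the credential in the swarm's encrypted store
    % echo "s3cretpassword" | docker secret create db_password -
    # Grant a service access to it
    % docker service create --name web --secret db_password my-web-app
    # Inside the container the app reads it from the in-memory file
    #   /run/secrets/db_password

The secret never appears in the image or in environment variables; it only exists as a file inside containers that were explicitly granted it.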

Using placement constraints with Docker Swarm

As we’ve previously seen, Docker Swarm mode is a pretty powerful tool for deploying containers across a cluster. It has self-healing capabilities, built-in network load balancers, scaling, private VXLAN networks and more. Docker Swarm will automatically try to place your containers to provide maximum resiliency within the service. So, for example, if you request 3 running copies of a container then it will try to place them on three different machines.
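
Sometimes you want to override that default spreading, and placement constraints are how you tell the scheduler where a service may run. A sketch (the label, node and service names are illustrative):

    # Label a node, then pin a service to nodes carrying that label
    % docker node update --label-add storage=ssd node1
    % docker service create --name db --replicas 3 \
          --constraint 'node.labels.storage == ssd' mysql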

A look at Docker Swarm

In my previous entry I took a quick look at some of the Docker orchestration tools. I spent a bit of time poking at docker-compose and mentioned Swarm. In this entry I’m going to poke a little at Swarm; after all, it now comes as part of the platform and is a key foundation of Docker Enterprise Edition. Docker Swarm tries to take some of the concepts of the single-host model and extend them to a cluster.
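
The basics are pleasantly simple; a sketch of bringing a cluster up and deploying a service across it:

    # On the first machine
    % docker swarm init
    # On each additional machine, using the join token that init printed
    % docker swarm join --token <token> <manager-ip>:2377
    # Back on the manager: three copies of a service, spread over the cluster
    % docker service create --name web --replicas 3 -p 80:80 nginx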

Simple Docker Orchestration

In earlier posts I looked at what a Docker image looks like and dug into how it looks at runtime. In this entry I’m going to look at ways of running containers beyond a simple docker run command.

docker-compose

This is an additional program to be installed, but it’s very common in use. Basically, it takes a YAML configuration file. This can describe networks, dependencies, scaling factors, volumes and so on.
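
A minimal sketch of such a file (the service names and images are illustrative):

    # docker-compose.yml
    version: "2"
    services:
      web:
        image: my-web-app
        ports:
          - "80:80"
        depends_on:
          - db
      db:
        image: mysql
        volumes:
          - db-data:/var/lib/mysql
    volumes:
      db-data:

With this in place, docker-compose up -d starts both services in dependency order, and docker-compose down tears them back down.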

Looking at how a Docker container runs

In the previous entry we looked at how a Docker container image is built. In this entry we’re going to look a little at how a container runs. Let’s take another look at the container we built last time, running apache:

    % cat Dockerfile
    FROM centos
    RUN yum -y update
    RUN yum -y install httpd
    CMD ["/usr/sbin/httpd","-DFOREGROUND"]
    % docker build -t web-server .
    % docker run --rm -d -p 80:80 -v $PWD/web_base:/var/www/html \
          -v /tmp/weblogs:/var/log/httpd web-server
    63250d9d48bb784ac59b39d5c0254337384ee67026f27b144e2717ae0fe3b57b
    % docker ps
    CONTAINER ID  IMAGE       COMMAND              CREATED  STATUS  PORTS  NAMES
    63250d9d48bb  web-server  "/usr/sbin/httpd -…
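
Once it’s running there are a few standard commands for seeing what the container is doing from the host (shown against the container above; output will of course vary on your machine):

    % docker top 63250d9d48bb        # processes inside the container, as the host sees them
    % docker logs 63250d9d48bb       # anything the container wrote to stdout/stderr
    % docker inspect 63250d9d48bb    # full JSON metadata: mounts, network, config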

What is a Docker container?

Container technology, specifically Docker, is becoming an important part of any enterprise. Even if you don’t have development teams targeting Docker you may have a vendor wanting to deliver their software in container form. I’m not so happy with that, but we’re going to have to live with it. In order to properly control the risk around this I feel it helps to have a grasp of the basics of what a Docker container is, and since I come from a technical background I like to look at it from a technology-driven perspective.
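
One quick way to get that grasp is to look inside a running container. This sketch (assuming a stock centos image is available) produces output along these lines, showing that a container is really just an isolated process tree:

    % docker run --rm centos ps -ef
    UID        PID  PPID  C STIME TTY          TIME CMD
    root         1     0  0 13:48 ?        00:00:00 ps -ef

The ps command sees itself as PID 1; the host’s other processes simply aren’t visible from inside the namespace.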

Persistent Applications

A while ago I wrote about some of the technology basics that can be used for data persistence. Apparently this is becoming a big issue, so I’m revisiting it from another direction. Why does this matter? In essence, an application is a method of changing data from one state to another; “I charge $100 to my credit card” fires off a number of applications that result in my account being debited and the merchant being credited.

LXD and machine containers

A few months back I was invited to an RFG Exchange Rounds taping on containers. There were a number of big-name vendors there; I got invited as an end user with opinions :-) The published segment is on YouTube under the RFG Exchange channel. Unknown to me, Mark Shuttleworth (Canonical, Ubuntu) was a “headline act” at this taping, and I got to hear some of what he had to say, in particular around Ubuntu’s “LXD” implementation of containers.

Intel Clear Containers

Containers aren’t secure… but neither are VMs

An argument I sometimes hear is that large companies (especially financial companies) can’t deploy containers to production because they’re too risky. The people making this argument focus on the fact that the Linux kernel provides only a software segregation of resources. They compare this to virtual machines, where the CPU can enforce segregation (e.g. with VT-x). I’m not convinced they’re right. It sounds very similar to the arguments made about VMs a decade ago.

Docker in production

In previous posts, and even at Cloud Expo, I’ve been pushing the idea that when it comes to container security it’s the process that matters, not the technology. I’ve tried not to make claims that are tied to a specific solution, although I have made a few posts using Docker as a basis. I was recently asked my thoughts on “Docker in Production: A History of Failure”. Basically they can be boiled down to “it’s new; if you ride the bleeding edge you might get cut”.

Building an OS container

In a previous blog entry I described some of the controls that are needed if you want to use a container as a VM. Essentially, if you want to use it as a VM then you must treat it as a VM. This means that all your containers should have the same baseline as your VM OS, the same configuration, the same security policies. Fortunately we can take a VM and convert it into a container.
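
A minimal sketch of that conversion, assuming you can get at the VM’s root filesystem (the paths and image name here are illustrative):

    # On the VM: tar up the root filesystem, skipping virtual filesystems
    % tar --numeric-owner -cpf /tmp/vm-root.tar \
          --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/tmp /
    # On the Docker host: turn the tarball into an image and test it
    % docker import /tmp/vm-root.tar my-os-base
    % docker run --rm -it my-os-base /bin/bash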

Using a container as a lightweight VM

In a lot of this blog I have been pushing for the use of containers as an “application execution environment”. You only put the minimal necessary stuff inside the container, treat the images as immutable, never log in to them… the sort of thing that’s perfect for a 12-factor application. However, there are other ways of using containers. The other main approach is to treat a container as a lightweight VM.

Persistent data

In this glorious new world I’ve been writing about, applications are non-persistent. They spin up and are destroyed at will. They have no state in them. They can be rebuilt, scaled out, migrated, replaced, and your application shouldn’t notice… if written properly! But applications are pointless if they don’t have data to work on. In traditional compute an app is associated with a machine (or set of machines), and those machines have filesystems.
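
With containers the rough equivalent of those filesystems is the volume, which outlives any one container; a minimal sketch (the names are illustrative):

    # Create a named volume and attach it to a container
    % docker volume create db-data
    % docker run -d --name db -v db-data:/var/lib/mysql mysql
    # Destroy and recreate the container; the data survives
    % docker rm -f db
    % docker run -d --name db -v db-data:/var/lib/mysql mysql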

Container Identity

Containers and other elastic compute structures are good ways of deploying applications, especially if you follow some of the guidelines I’ve made in other posts on this topic. However, they don’t exist in a vacuum. They may need to call out to “external” services: for example, an Oracle database, Amazon S3, or another API service provided by other containers. In order to do this they need to authenticate to those services.

Network Microsegmentation

A major problem many environments have is a lack of real network control inside the perimeter. They may have large hard border controls (multi-tier DMZs; proxy gateways; no routing between tiers), but once inside traffic is unconstrained. This is sometimes jokingly referred to as “hard shell soft center” network design. If you’re lucky then your prod/dev/qa environments may be segmented. More likely there’s no restriction at all; dev programs may accidentally talk to a prod database.
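
Container platforms give you one simple tool here: user-defined networks, which only connect the containers attached to them. A sketch (the names are illustrative):

    # Separate networks for prod and dev
    % docker network create prod-net
    % docker network create dev-net
    % docker run -d --name prod-db --network prod-net mysql
    % docker run -d --name dev-app --network dev-net my-dev-app
    # dev-app shares no network with prod-db, so it cannot reach it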

Using Containers Securely in Production

This is the content of a presentation I put together for Cloud Expo NY 2016. The final presentation had a lot of this ripped out and replaced with stuff from my co-presenter (Sesh Murthy from Cloud Raxak), because he had information he wanted to present as well and we only had 35 minutes. The resulting presentation was, I think, a good hybrid. This is the original story I wanted to tell.

Building a small docker container

In previous posts I’ve written about small containers; don’t bundle a whole OS image with your app, just have the minimum necessary files and support. The Go language makes it easy to build a static executable, so let’s use this for an example:

    $ cat hello.go
    package main

    import "fmt"

    func main() {
        fmt.Println("Hello, World")
    }
    $ go build hello.go
    $ strip hello
    $ ls -l hello
    -rwxr-xr-x. 1 sweh sweh 1365448 Jun 4 13:48 hello*

We can use this as the basis of a docker container (I’m using “docker” here because it’s a very common technology that’s used by lots of people):
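
As a sketch of where this leads (the Dockerfile and image name here are my illustration, not necessarily the post’s exact files), a static binary lets you start FROM scratch:

    $ cat Dockerfile
    # "scratch" is the empty base image; the binary is all we ship
    FROM scratch
    COPY hello /hello
    CMD ["/hello"]
    $ docker build -t hello-small .
    $ docker run --rm hello-small
    Hello, World

The resulting image is barely larger than the 1.3MB binary itself, because scratch contributes no files at all.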