A couple of weeks back I got a new case for my PC. Previously I was using a generic mini-tower and then had an external 8-disk tower (Sans Digital TR8MB) connected via an eSATA concentrator (4 disks per channel).
It’s been working OK for years, but every so often the controller would reset (especially under write loads); no data lost, but annoying. Also, after a power reset (eg a failure, or maintenance), frequently one or two disks (slot 2 in both halves!
Containers aren’t secure… but neither are VMs An argument I sometimes hear is that large companies (especially financial companies) can’t deploy containers to production because they’re too risky. The people making this argument focus on the fact that the Linux kernel provides only a software segregation of resources. They compare this to virtual machines, where the CPU can enforce segregation (eg with VT-x).
I’m not convinced they’re right. It sounds very very similar to the arguments about VMs a decade ago.
My home server I was doing an upgrade on my home “server” today, and it made me realise that design choices I’d made 10 years ago still impact how I build this machine today.
In 2005 I got 3×300GB Maxtor drives. I ran them in a RAID 5; that gave me 600GB of usable space. It worked well.
In 2007 I upgraded the machine to 500GB disks. This required additional SATA controllers, so I got enough to allow new and old disks to be plugged in at the same time (cables galore).
In previous posts, and even at Cloud Expo, I’ve been pushing the idea that it’s the process that matters, not the technology, when it comes to container security. I’ve tried not to make claims that are tied to a specific solution, although I have made a few posts using Docker as a basis.
I was recently asked my thoughts on Docker in Production: A History of Failure.
Basically they can be boiled down to “it’s new; if you ride the bleeding edge you might get cut”.
In previous articles I’ve explained how to use traditional SSH keys and why connecting to the wrong server could expose your password. I was reminded of a newer form of authentication supported by OpenSSH: CA keys.
The CA key model is closer to how SSL certs work; you have an authority that is trusted by the servers and clients, and a set of signed keys.
Creating the CA key Creating a certificate authority key is pretty much the same as creating any other key
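As a sketch of the process (the file names and the identity “alice” below are illustrative; in production you’d protect the CA private key with a passphrase rather than `-N ""`):

```shell
# Generate the CA keypair (file name "ssh_ca" is an example)
ssh-keygen -t ed25519 -f ssh_ca -N "" -C "Example SSH CA"

# An ordinary user key, created exactly as you normally would
ssh-keygen -t ed25519 -f alice_key -N "" -C "alice"

# Sign the user's public key with the CA: key identity "alice",
# principal "alice", valid for 52 weeks; produces alice_key-cert.pub
ssh-keygen -s ssh_ca -I alice -n alice -V +52w alice_key.pub

# Inspect the resulting certificate
ssh-keygen -L -f alice_key-cert.pub
```

Servers then trust any key signed by the CA via a `TrustedUserCAKeys` line in `sshd_config`, rather than needing every user’s public key in `authorized_keys`.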
Modern web browsers have a number of settings to help protect your site from attack. You turn them on by use of various header lines in your HTTP responses. Now when I first read about them I thought they were not useful; a basic rule for client-server computing is “don’t trust the client”. An attacker can bypass any rules you try to enforce client side.
But then I read what they do and realised that they are primarily there to help protect the client and, as a consequence, protect your site from being hijacked.
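To give a flavour of what these headers look like, here’s a hypothetical nginx fragment setting some of the common protective ones (the header names are standard; the values are examples you’d tune for your own site):

```nginx
# Only load resources from our own origin
add_header Content-Security-Policy "default-src 'self'" always;
# Refuse to be embedded in a frame (clickjacking protection)
add_header X-Frame-Options "DENY" always;
# Don't let the browser second-guess content types
add_header X-Content-Type-Options "nosniff" always;
# Require HTTPS for the next year (HSTS)
add_header Strict-Transport-Security "max-age=31536000" always;
```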
(Side note: in this post I’m going to use TLS and SSL interchangeably. To all intents and purposes you can think of TLS as the successor to SSL; most libraries do both).
You can think of security as a stack. Each layer of the stack needs to be secure in order for the whole thing to be secure. Or, alternatively, you can think of it as a chain; the whole thing is only as strong as the weakest link.
In my previous post I wrote about some automation of static and dynamic scanning as part of the software delivery pipeline.
However nothing stays the same; we find new vulnerabilities, or configurations get broken, or stuff previously considered secure turns out to be weak (64-bit block ciphers can be broken, for example).
So as well as scanning during the development cycle, we also need to do repeated scans of deployed infrastructure; especially if it’s externally facing (but internal-facing services may still be at risk from the tens of thousands of desktops in your organisation).
In many organisations an automated scan of an application is done before it’s allowed to “go live”, especially if the app is external facing.
There are typically two types of scan:
Static scan
Dynamic scan
Static scan A static scan is commonly a source code scan. It will analyse code for many common failure modes. If you’re writing C code then it’ll flag common buffer overflow patterns.
In an earlier blog I wrote that SSH keys need to be managed. It’s important!
In the comments I was asked about keytabs. I felt this was such a good question that it deserved its own blog entry.
The basics of Kerberos auth In essence, Kerberos is a “ticket based” authentication scheme. You start by requesting a TGT (“ticket granting ticket”); this, typically, is where you enter your password. The TGT is then used to request service tickets to access resources.
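The flow is visible with the standard client tools (the realm EXAMPLE.COM and the principals below are made up, and you need a reachable KDC, so treat this as an illustration rather than something to paste in):

```
# Request a TGT; this is the step that prompts for your password
kinit alice@EXAMPLE.COM

# The credential cache now holds krbtgt/EXAMPLE.COM@EXAMPLE.COM -- the TGT
klist

# Service tickets are normally fetched transparently (eg by a kerberised
# ssh or NFS client); kvno requests one explicitly for a service principal
kvno host/server.example.com@EXAMPLE.COM
```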