Ramblings of a Unix Geek

I've been doing this for a long time... I ramble!

Always Listening Devices

Recently we heard news that the police had requested Alexa recordings to assist with a murder enquiry. The victim had an Amazon Echo and the police feel there’s useful data to be obtained. This leads to speculation about what sort of information these devices record, and how secure they are. What type of devices are we talking about? There are a number of devices out there these days which you can talk to in order to request things.

SSH keeps disconnecting

This blog post is of a more practical nature, and may be of use for people at home who ssh into servers and then come back later to find their session disconnected. It might also help some people in offices with nasty firewalls! Basically the scenario goes something like: ssh into a server; lock your screen and go away for a few hours; come back and unlock your screen; the ssh session has been disconnected. So how does this happen, and what can we do to stop it?
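The usual fix (a sketch, assuming the OpenSSH client and a firewall or NAT box that drops idle TCP connections) is to have the client send periodic keepalives, for example in ~/.ssh/config:

```
Host *
    ServerAliveInterval 60
    ServerAliveCountMax 3
```

With this the client sends an application-level probe every 60 seconds, so the connection never looks idle; if 3 probes in a row go unanswered the client gives up and disconnects cleanly. Both options are standard ssh_config settings; the 60/3 values are just illustrative.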

Backup and restore

Have you tested your backups recently? I’m sure you’ve heard that phrase before. And then thought “Hmm, yeah, I should do that”. If you remember, you’ll stick a tape in the drive and fire up your software, and restore a dozen files to a temporary location. Success! You’ve proven your backups can be recovered. Or have you? What would you do if your server was destroyed? Do you require specialist software to recover that backup?

Using Letsencrypt for TLS

In previous posts I pointed out why TLS is important, how to configure Apache to score an A+ and how to tune HTTP headers. All this is dependent on getting an SSL cert.

Some jargon explained: before we delve into a “how to”, some basic jargon should be explained. SSL/TLS: TLS (“Transport Layer Security”) is the successor to SSL (“Secure Socket Layer”). SSL was created by Netscape in the mid 90s (I remember installing “Netscape Commerce Server” in 1996).

LXD and machine containers

A few months back I was invited to an RFG Exchange Rounds taping, on containers. There were a number of big-name vendors there; I got invited as an end user with opinions :-) The published segment is on YouTube under the RFG Exchange channel. Unknown to me, Mark Shuttleworth (Canonical, Ubuntu) was a “headline act” at this taping and I got to hear some of what he had to say, in particular around Ubuntu’s “LXD” implementation of containers.

Building my home server

A couple of weeks back I got a new case for my PC. Previously I was using a generic mini-tower and then had an external 8-disk tower (Sans Digital TR8MB) connected via an eSATA concentrator (4 disks per channel). It’s been working OK for years, but every so often the controller would reset (especially under write loads); no data lost, but annoying. Also, after a power reset (eg a failure, or maintenance) one or two disks (slot 2 in both halves!!) frequently weren’t detected and needed reseating and re-adding to the RAID6 (yay for write-intent bitmaps, so recovery is quick!).

Intel Clear Containers

Containers aren’t secure… but neither are VMs. An argument I sometimes hear is that large companies (especially financial companies) can’t deploy containers to production because they’re too risky. The people making this argument focus on the fact that the Linux kernel segregates resources only in software. They compare this to virtual machines, where the CPU can enforce segregation (eg with VT-x). I’m not convinced they’re right. It sounds very very similar to the arguments about VMs a decade ago.

Technical Debt

My home server: I was doing an upgrade on my home “server” today, and it made me realise that design choices I’d made 10 years ago still impact how I build this machine today. In 2005 I got 3*300Gb Maxtor drives. I ran them in a RAID 5; that gave me 600Gb of usable space. It worked well. In 2007 I upgraded the machine to 500Gb disks. This required additional SATA controllers, so I got enough to allow new and old disks to be plugged in at the same time (cables galore).

Docker in production

In previous posts, and even at Cloud Expo, I’ve been pushing the idea that it’s the process that matters, not the technology, when it comes to container security. I’ve tried not to make claims that are tied to a specific solution, although I have made a few posts using docker as a basis. I was recently asked my thoughts on Docker in Production: A History of Failure. Basically they can be boiled down to “it’s new; if you ride the bleeding edge you might get cut”.

Using SSH certificates

In previous articles I’ve explained how to use traditional SSH keys and why connecting to a wrong server could expose your password. I was reminded of a newer form of authentication supported by OpenSSH: CA keys. The CA key model is closer to how SSL certs work; you have an authority that is trusted by the servers and clients, and a set of signed keys.

Creating the CA key: creating a certificate authority key is pretty much the same as creating any other key:

    $ mkdir ssh-ca
    $ cd ssh-ca
    $ ssh-keygen -f server_ca
    Generating public/private rsa key pair.
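To show where that CA key leads, here’s a sketch of the whole signing flow in a throwaway directory; the key names, identity and principal (server_ca, user_key, demo-user, alice) are illustrative, and passphrases are omitted only for the demo (a real CA key should be passphrase-protected):

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"

# 1. Create the CA key (as in the post, just non-interactive here).
ssh-keygen -q -t rsa -f server_ca -N ''

# 2. Create an ordinary user key, as any user would.
ssh-keygen -q -t rsa -f user_key -N ''

# 3. Sign the user's public key with the CA.
#    -I is the certificate identity (logged server-side);
#    -n lists the principals (login names) the cert is valid for.
ssh-keygen -s server_ca -I demo-user -n alice user_key.pub

# The result is user_key-cert.pub, which the ssh client offers at login.
ssh-keygen -L -f user_key-cert.pub
```

On the server side you’d then point sshd at the CA’s public key with `TrustedUserCAKeys /etc/ssh/server_ca.pub` in sshd_config, and any certificate signed by that CA is accepted for its listed principals without per-user authorized_keys entries.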