I got asked another question. I’m going to paraphrase the question for this blog entry. Given the Russian invasion of Ukraine and the response of other nations (sanctions, asset confiscation, withdrawal of services, isolation of the Russian banking system…) there is a chance of increased cyber attacks against Western banking infrastructure in retaliation. How can we be 100% sure our cloud environments are secure from this? Firstly, I want to dispel the “100%” myth.
What is IP Allow-Listing? Typically when you want to access a remote resource (e.g. login to a server) you need to provide credentials. It might be a simple username/password, it could be via SSH keys, it could use Mutual TLS with client-side certificates… it doesn’t really matter. One concern is “what happens if the credential is stolen?”. IP allow-listing is a way of restricting where you can use that credential from.
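At its core the idea is just a membership test: is the client address inside one of the permitted networks? A minimal sketch (the networks here are made-up example ranges, not a recommendation):

```python
import ipaddress

# Hypothetical allow-list: only the office network and a VPN range may
# use this credential, even if the password itself is valid.
ALLOWED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),   # example office range
    ipaddress.ip_network("198.51.100.0/24"),  # example VPN range
]

def ip_allowed(client_ip: str) -> bool:
    """Return True if client_ip falls inside any allow-listed network."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)

print(ip_allowed("203.0.113.42"))  # True: inside the office range
print(ip_allowed("192.0.2.7"))     # False: not on the list
```

In practice you’d enforce this at the firewall or load balancer rather than in application code, but the check being performed is the same.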
Previously I described a relatively modern set of TLS settings that would give an A+ score on SSLtest. This was based purely on an RSA certificate. There exists another type of certificate, based on Elliptic Curve cryptography. You may see this referenced as ECC or, for web sites, ECDSA. An ECDSA certificate is smaller than an RSA cert (e.g. a 256-bit ECDSA key is roughly equivalent to a 3072-bit RSA one).
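The “roughly equivalent” claim comes from the comparable-strength tables in NIST SP 800-57; writing them down makes the 256-vs-3072 comparison concrete:

```python
# Approximate comparable key sizes at a given symmetric security level,
# per NIST SP 800-57 comparable-strength guidance.
COMPARABLE_STRENGTH = {
    # security bits: (ECC key bits, RSA key bits)
    112: (224, 2048),
    128: (256, 3072),
    192: (384, 7680),
    256: (521, 15360),
}

ecc_bits, rsa_bits = COMPARABLE_STRENGTH[128]
print(f"{ecc_bits}-bit ECDSA is comparable to {rsa_bits}-bit RSA")
```

Note how the RSA size grows much faster than the ECC size as the security level rises; that gap is why ECDSA certificates stay small.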
Back in 2016 I documented how to get an A+ TLS score. With minor changes this still works. But times have changed. In particular, older versions of TLS aren’t good; at the very least you must support nothing less than TLS 1.2. Consequences of limiting to TLS 1.2 or better If you set your server to deny anything less than TLS 1.2 then sites like SSL Labs tell us that older clients can no longer connect.
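The usual place to enforce this is the web server config (nginx’s `ssl_protocols`, Apache’s `SSLProtocol`), but if your server happened to be Python-based the same floor is one property on the TLS context:

```python
import ssl

# Build a server-side context and refuse anything older than TLS 1.2.
# (ssl.TLSVersion is available from Python 3.7 onwards.)
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Clients offering only SSLv3, TLS 1.0 or TLS 1.1 now fail the handshake.
print(ctx.minimum_version)
```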
It’s an article of faith that the development process starts in the part of the network set aside for development work. Then the code may go to the QA area for QA testing, UAT area for UAT, production area for production. That statement almost looks like a truism; development work is done in DEV. So a corporate network may be divided along dev/uat/prod lines, with firewalls between them so that development code can’t impact production services.
A number of years back I saw a mail thread around Java performance. One person made the claim that Java was slow; if you wanted performant apps then write in a different language. This matched my experience, so I felt a little confirmation bias. However the reply really made me think; this may have been true 10 years ago (and, indeed, it’d been over 10 years since I’d done any real Java) but modern techniques meant that Java was perfectly fast.
One of the golden rules of IT security is that you need to maintain an accurate inventory of your assets. After all, if you don’t know what you have then how can you secure it? This may cover a list of physical devices (servers, routers, firewalls), virtual machines, software… An “asset” is an extremely flexible term and you need to look at it from various viewpoints to ensure you have good knowledge of your environment.
Identity and Access Management (IAM) historically consists of the three A’s:
Authentication: what account is being accessed?
Authorization: is this account allowed access to this machine?
Access Control: what resources are you allowed to use?
Companies spend a lot of time and effort on the Authentication side of the problem. Single sign-on solutions for web apps, Active Directory for servers (even Unix machines), OAuth for federated access to external resources, two-factor for privileged access… there are a lot of solutions around and many companies know what they should be doing, here.
Part of any good backup strategy is to ensure a copy of your backup is stored in a secondary location, so that if there is a major outage (datacenter failure, office burns down, whatever) there is a copy of your data stored elsewhere. After all, what use is a backup if it gets destroyed at the same time as the original? A large enterprise may do cross-datacenter backups, or stream them to a “bunker”; smaller businesses may physically transfer media to a storage location (in my first job mumble years ago, the finance director would take the weekly full-backup tapes to her house so we had at most 1 week of data loss).
Have you tested your backups recently? I’m sure you’ve heard that phrase before. And then thought “Hmm, yeah, I should do that”. If you remember, you’ll stick a tape in the drive and fire up your software, and restore a dozen files to a temporary location. Success! You’ve proven your backups can be recovered. Or have you? What would you do if your server was destroyed? Do you require specialist software to recover that backup?
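One step beyond “restore a dozen files” is verifying the restored data bit-for-bit against the originals. A minimal sketch of that check (directory layout and helper names are my own, purely illustrative):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large files needn't fit in RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(original_dir: Path, restored_dir: Path) -> list[str]:
    """Return relative paths whose restored copy is missing or differs."""
    failures = []
    for orig in original_dir.rglob("*"):
        if not orig.is_file():
            continue
        rel = orig.relative_to(original_dir)
        restored = restored_dir / rel
        if not restored.is_file() or sha256_of(orig) != sha256_of(restored):
            failures.append(str(rel))
    return failures
```

Of course this only proves the files came back intact; it says nothing about whether you could rebuild the server that reads them, which is the harder question.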
A few months back I was invited to an RFG Exchange Rounds taping, on containers. There were a number of big name vendors there. I got invited as an end user with opinions :-) The published segment is on YouTube under the RFG Exchange channel. Unknown to me, Mark Shuttleworth (Canonical, Ubuntu) was a “headline act” at this taping and I got to hear some of what he had to say, in particular around the Ubuntu “LXD” implementation of containers.
A couple of weeks back I got a new case for my PC. Previously I was using a generic mini-tower and then had an external 8-disk tower (Sans Digital TR8MB) connected via an eSATA concentrator (4 disks per channel). It’s been working OK for years but every so often the controller would reset (especially under write loads); no data lost but annoying. Also after a power reset (e.g. a failure, or maintenance) then frequently one or two disks (slot 2 in both halves!
My home server I was doing an upgrade on my home “server” today, and it made me realise that design choices I’d made 10 years ago still impact how I build this machine today. In 2005 I got 3*300Gb Maxtor drives. I ran them in a RAID 5; that gave me 600Gb of usable space. It worked well. In 2007 I upgraded the machine to 500Gb disks. This required additional SATA controllers, so I got enough to allow new and old disks to be plugged in at the same time (cables galore).
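The usable-space figure follows from RAID 5 giving up one disk’s worth of capacity to parity, so usable space is (n − 1) × disk size. A quick check of the numbers above:

```python
def raid5_usable(disks: int, size_gb: int) -> int:
    """RAID 5 stores one disk's worth of parity: usable = (n - 1) * size."""
    if disks < 3:
        raise ValueError("RAID 5 needs at least 3 disks")
    return (disks - 1) * size_gb

print(raid5_usable(3, 300))  # the 2005 array: 600 Gb usable
print(raid5_usable(3, 500))  # the 2007 upgrade: 1000 Gb usable
```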
Modern web browsers have a number of settings to help protect your site from attack. You turn them on by use of various header lines in your HTTP responses. Now when I first read about them I thought they were not useful; a basic rule for client-server computing is “don’t trust the client”. An attacker can bypass any rules you try to enforce client side. But then I read what they do and realised that they are primarily there to help protect the client and, as a consequence, protect your site from being hijacked.
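To make it concrete, here is a sketch of the kind of response headers involved. The header names are real; the policy values are illustrative placeholders, not a recommendation for any particular site:

```python
# Browser-protection headers: each one asks the *client* to enforce a
# restriction, which in turn makes the site harder to hijack.
SECURITY_HEADERS = {
    "Content-Security-Policy": "default-src 'self'",  # limit where scripts/images load from
    "X-Frame-Options": "DENY",                        # block clickjacking via framing
    "X-Content-Type-Options": "nosniff",              # stop MIME-type guessing
    "Strict-Transport-Security": "max-age=31536000",  # force HTTPS on future visits
}

for name, value in SECURITY_HEADERS.items():
    print(f"{name}: {value}")
```

Any web server or framework can emit these; the interesting part is working out policy values that don’t break your own pages.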
(Side note: in this post I’m going to use TLS and SSL interchangeably. To all intents and purposes you can think of TLS as the successor to SSL; most libraries do both). You can think of security as a stack. Each layer of the stack needs to be secure in order for the whole thing to be secure. Or, alternatively, you can think of it as a chain; the whole thing is only as strong as the weakest link.
In a previous blog entry I described some of the controls that are needed if you want to use a container as a VM. Essentially, if you want to use it as a VM then you must treat it as a VM. This means that all your containers should have the same baseline as your VM OS, the same configuration, the same security policies. Fortunately we can take a VM and convert it into a container.
In a lot of this blog I have been pushing for the use of containers as an “application execution environment”. You only put the minimal necessary stuff inside the container, treat them as immutable images, never login to them… the sort of thing that’s perfect for a 12-factor application. However there are other ways of using containers. The other main version is to treat a container as a light-weight VM.
A phrase you might hear around cloud computing is lift and shift. In this model you effectively take your existing application and move it, wholesale, into a cloud environment such as Amazon EC2. There’s no re-architecting of the application; there’s no application redesign. This makes it very quick and very easy to move into the cloud. It’s not much different to a previous p2v (physical to virtual) activity that companies performed when migrating to virtual servers (e.g. VMware ESX).
Building a secure web application has multiple layers to it. In previous posts I’ve spoken about some design concepts relating to building a secure container for your app, and hinted that some of the same concepts could be used for building VMs as well. You also need to build secure apps. OWASP is a great way to help get started on that. I’m not going to spend much time on this blog talking about application builds beyond some generics because I’m not really a webdev.
Shadow IT isn’t a new thing. Any large corporation has seen it. Sometimes called “server under desk” or “production desktop”. Sometimes it grows out of a personal project that started on a spare machine and that gradually morphed into a mission critical machine… but without any of the controls and tools normally associated with production infrastructure (patches, backups, DR, access admin, security scanning…). Other times it grows out of a desire to do things quickly; all of those controls and tools take time and can hinder the developer experience.
I have a linode and a Panix v-colo. These servers do everything I want. But OVH do a physical server rental program. It’s not necessarily the best server in the world, but it’s pretty good. The So You Start (SYS) server starts at around $50/month. Now a linode is $20/month, with a 10% discount for paying 1 year in advance, so $18/month. To compare: for $18/month I get an 8 vCPU, 1Gb RAM, 48Gb of disk on my linode.