My home server
I was doing an upgrade on my home “server” today, and it made me realise that design choices I’d made 10 years ago still impact how I build this machine today.
In 2005 I got 3*300GB Maxtor drives. I ran them in a RAID 5; that gave me 600GB of usable space. It worked well.
In 2007 I upgraded the machine to 500GB disks. This required additional SATA controllers, so I got enough to allow new and old disks to be plugged in at the same time (cables galore). I copied the data over. But the “architecture” was roughly the same.
I bought a new PC and moved the disks over. I got an external disk array (8 bays) using an eSATA concentrator. Switched to 5*1TB disks. Replaced the PC (quad core!), upgraded the memory, switched to 2TB disks, then to 4TB disks.
Today I bought a new case with 10 5.25″ bays, and put three “four 3.5″ drives in three 5.25″ bays” hot-swap cages in it. Used some SAS controllers.
So today I have 4*2TB in a RAID 10, 8*4TB in a RAID 6 and 2*500GB SSDs in a RAID 1.
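For anyone wondering how those raw disk counts turn into usable space, the arithmetic for the common mdraid levels is simple; a quick sketch (the function name is mine, just for illustration):

```python
def raid_usable(level, disks, size_tb):
    """Usable capacity (in TB) for common Linux mdraid levels."""
    if level == 1:           # mirror: capacity of a single disk
        return size_tb
    if level == 5:           # one disk's worth of parity
        return (disks - 1) * size_tb
    if level == 6:           # two disks' worth of parity
        return (disks - 2) * size_tb
    if level == 10:          # striped mirrors: half the raw space
        return disks * size_tb / 2
    raise ValueError(f"unsupported RAID level: {level}")

print(raid_usable(5, 3, 0.3))   # the 2005 array: 3*300GB RAID 5 -> 0.6 TB
print(raid_usable(10, 4, 2))    # 4*2TB RAID 10 -> 4.0 TB
print(raid_usable(6, 8, 4))     # 8*4TB RAID 6  -> 24 TB
print(raid_usable(1, 2, 0.5))   # 2*500GB RAID 1 -> 0.5 TB
```

That 8*4TB RAID 6 is where the 24TB figure below comes from.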
But the design choice was set 10 years ago, simply because I picked Linux mdraid, and KVM for virtualisation a few years later.
This means I can’t easily switch to another OS base. I’d actually like to try out SmartOS because it has Solaris-based ZFS, which strikes me as a much better way to handle this level of storage. But how do I migrate 24TB of data without buying another 8 disks?
That’s technical debt; I could overcome it by spending money (basically buying a whole new machine, with all the disks and controllers). It might even be a good thing (I could upgrade from my old 2010-era Core i5 750 machine to something more modern). But it’s a lot of money.
Instead I’ve been spending smaller chunks of money to do incremental upgrades of my existing machine. Most likely my next upgrade will be a new motherboard, memory, CPU… which will go into the existing case with the existing disks and existing controllers. I can’t easily switch to a newer, better solution.
So what does all the above have to do with software development?
We also have technical debt here; we’ve built processes and procedures. We may have switched from waterfall to agile (and done it badly, abusing the “scrum” approach because the larger organisation is still waterfall oriented). We’ve added test-driven development methodologies. We’ve done pair programming, extreme programming…
But they all focus on small aspects of the overall picture. We’re still looking at deploying operating systems, databases, software. We may have support and change management constraints.
This technical debt is stopping us from using new technologies better; in particular cloud computing.
The number of times I’ve heard “lift and shift” when talking about moving to the cloud is scary. All you’re doing there is outsourcing the datacenter. You’re not making use of new technologies; you’re just replicating existing structures. This means you’re still going to need a DR solution, HA data replication, testing, failover. You still need to patch systems and have reboot and maintenance windows. People still log in and su/sudo.
This isn’t cloud computing; this is traditional computing on virtual servers stored off premise.
Just like my “duplicate and rebuild from scratch” home server option, we may need to do the same with our software. Build with a 12-factor focus. Design for failure. Scale horizontally. Design for hands-off compute, automated deployment, stateless applications, attached storage.
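A tiny sketch of what “stateless, config in the environment” looks like in practice (the names and URLs here are illustrative, not from any real application):

```python
import os

# 12-factor style: config comes from the environment, not from files
# baked into the deployment. Two instances started with the same
# environment are interchangeable -- there is no local state to migrate.
DATABASE_URL = os.environ.get("DATABASE_URL", "postgres://localhost/app")
STORAGE_URL = os.environ.get("STORAGE_URL", "s3://example-bucket")

def handle_request(payload):
    """A stateless handler: everything it needs arrives as input or
    lives in attached backing services, so any instance can serve any
    request, and a failed instance is simply replaced, not repaired."""
    return {"db": DATABASE_URL, "store": STORAGE_URL, "echo": payload}
```

Because nothing is stored on the instance itself, “failover” stops being an event you test and becomes the normal way the system runs.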
Now we can start to make use of the new technologies. We don’t need a DR solution because the design, itself, is resilient. We don’t do failover testing because there is no failover. We don’t need to patch the OS; we redeploy…
There are a lot of benefits to be gained from refactoring your app and your infrastructure. However, getting there isn’t necessarily cheap. We’ve a tonne of historical restrictions to overcome (design, architecture, process, support). We could spend a whole year rewriting our code to the new paradigm, which is time not spent improving the product.
Can you afford to do this? Can you afford not to do this?