In the past I’ve pushed for automation of server builds, of application configuration.
Indeed, for my home setup, I’ve been using ansible for over a decade; I still see a config file for CentOS 6 postfix dated 2015.
I’ve started a migration from CentOS/Rocky Linux (i.e. RedHat) to Debian for my personal servers. And I’ve realised this automation is causing me more problems than it’s solving:
Differences between distributions
Debian is configured very differently to RedHat. This shows in many places, but one of the most obvious is with Apache. With RedHat the configuration files all live in /etc/httpd; with Debian it’s /etc/apache2. With Debian you manage symlinks with commands such as a2enmod and a2ensite; with RedHat it’s more “just put stuff into the config directory”.
Now you can put stuff into the Debian directory, but it’s different (/etc/apache2/sites-enabled, /etc/apache2/conf-enabled and /etc/apache2/mods-enabled, vs /etc/httpd/conf.d and /etc/httpd/conf.modules.d).
And the contents have to be different as well; e.g. the log directory on RedHat Apache is just logs/ but on Debian it’s ${APACHE_LOG_DIR}.
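To make the difference concrete, here’s roughly what adding a new vhost looks like on each (just a sketch; “example.conf” is a hypothetical vhost file):

    # Debian: put the vhost into sites-available, then enable the symlink
    cp example.conf /etc/apache2/sites-available/
    a2ensite example
    systemctl reload apache2

    # RedHat: just drop the file into the config directory
    cp example.conf /etc/httpd/conf.d/
    systemctl reload httpd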
Apache is just one example. Other tools have similar configuration differences.
All this means that the configs I built for RedHat pretty much need to be rewritten from scratch for Debian. It was bad enough having to deal with differences between RHEL 6, 7, 8 and 9, but at least there was sufficient consistency between those releases.
Maybe everything is a pet
When I looked at all my playbooks I realised a large number of them were targeting single machines. Which kinda makes sense; in my home environment every OS instance is doing something different; it might be my desktop, or a media player, or a Plex server, or a Home Assistant server, or a bastion host, or a router, or…
Pretty much only two machines were configured “the same” (or close to, allowing for different hosting providers). Everything else was custom.
My Apache config for my bastion “reverse proxy” is very different to my config for the server hosting this blog. (They’re on different OS releases, to start with!)
So although I had tried to automate the build for each of these pets, I kinda never used those playbooks again ‘cos I never rebuilt the servers; if there had been a failure I would have restored from backup.
Reduced build automation
So rather than rewrite my playbooks and create either duplication (different playbooks and config files for different OSes) or horrendous conditional logic in the config files (which makes them much harder to read, and brings in tech debt that will need to be removed as I finally move off the older OSes), I’m mostly doing things by hand.
The Debian OS deployment itself is automated with a netinstall image and a preseed configuration. In this preseed I have a post-install step that deploys common configs (e.g. telling postfix to point to my mailhost), handles some custom scripting, and so on.
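As a sketch of the idea (the hostnames and paths here are made up, not my actual setup), that post-install step boils down to a script the installer runs in the new system:

    #!/bin/sh
    # Run inside the installed system; invoked from the preseed with
    # something like:
    #   d-i preseed/late_command string in-target sh /root/postinstall.sh

    # Point postfix at the central mailhost (hypothetical hostname).
    postconf -e 'relayhost = [mail.internal.example]'

    # ...deploy other common configs and custom scripts here.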
But from then on, the upper layers are manual. One server got a docker deployment; another got grafana and influxdb (the configs and data for those got migrated from the old machine to the new); the reverse proxy got apache2; and so on.
Maintenance automation
There is still some requirement for automation, most commonly to redeploy new TLS certificates every 80 days. Because most of my servers can’t be reached from the internet, and because those that are accessible may not have DNS pointing to them (warm standby), I can’t just use certbot to manage them simply. So, instead, I use dehydrated to get the certs and then deploy them to all the servers and services (apache, postfix, grafana, etc etc) with a set of ansible playbooks.
Whether I’ll keep using ansible for this “maintenance” work or write a set of scripts that run via ssh, I’m not sure.
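If I did go the script route, it’d probably be little more than this (a sketch, with made-up hostnames and paths):

    #!/bin/sh
    # Renew the certs with dehydrated, then push them out and reload
    # the affected service on each server.
    dehydrated --cron

    certdir=/etc/dehydrated/certs/www.example.com
    for host in web1.example.com proxy.example.com; do
        scp "$certdir/privkey.pem" "$certdir/fullchain.pem" \
            "root@$host:/etc/ssl/local/"
        ssh "root@$host" systemctl reload apache2
    done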
Summary
In an enterprise environment there can be little doubt that automated processes are a win. They can allow for hands-off deployments; repeatable, testable processes; controlled access and more. An enterprise also shouldn’t really have much in the way of pets (at the very least there should be a BC/DR environment!). Yes, if you have 1000 apps you might have 1000 playbooks, but each app team should be maintaining their own.
In a small business or a home environment, though, the case for full automation isn’t so clear. When each server is unique then maybe documentation of how a server is built, along with good backups to enable a restore in case of failure, might be a simpler and quicker solution.
But even in my pet-heavy environment there is still a need for a level of automation, just not full automation!