Can't Patch, Won't Patch

Alternate approaches to vulnerabilities

Whenever a new “critical” vulnerability is found, the cry goes out across the land:

Patch!
Patch!
Patch!

Whenever a major incident is caused by known vulnerabilities, the question is always:

Why didn’t they patch?
We’ve known about this for months!
They should have patched!

Sometimes this is valid criticism, and learning why the organisation hadn’t patched can lead to some insights into failure modes.

Other times, however, it may not be possible.

Why can’t we always patch?

Sometimes patching raises challenges that aren’t always obvious. An awkward case is where the computer controls some regulated piece of equipment. Take the X-ray machine at your dentist’s office; I’ve seen some of those controlled by software running on Microsoft Windows. I don’t know about you, but I’d rather not have non-experts apply a Windows patch that causes radiation exposure 100x greater than planned! Further, in the medical field, there may be computers controlling CAT or MRI machines, which cost megabucks to replace. Now you may argue that users shouldn’t buy into this sort of hardware in the first place, but when it’s the industry standard they don’t have a lot of choice. Here we’re definitely into “can’t patch” territory.

Related to this are “appliance”-type devices, where you are forced to wait for the vendor to release updates. You can’t patch until that happens!

Another scenario is that legacy hardware needs to be supported; the software that drives it doesn’t work under newer versions of the OS, so you’re stuck with old, unsupported, unpatchable installs. In some cases there may be an upgrade path, but the cost is prohibitive; in other cases a legal requirement keeps the old setup around (document retention requirements are big here; a need to restore backups from 7 years ago means maintaining equipment that can drive the hardware to do the restore). Here we may be in a combination of “can’t patch” and “won’t patch”.

And, of course, there’s the potential resource constraint: how long would it take you to patch your entire estate?

Whatever the reason, sometimes patching just isn’t feasible. Pointing fingers at victims of a cyber breach and shouting “you shoulda patched!” without knowing the full set of constraints the organisation was under doesn’t help you learn from these incidents and prevent them in your own organisation.

Compensating controls

So if we can’t always patch, then what can we do? This is where “defense in depth” controls can come in handy.

Web servers should be behind a WAF, for example. Can’t patch against Shellshock because you have a gazillion instances? If your WAF can block the exploit pattern, it should do so. Beware of false positives, of course…
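
To make that concrete, here’s a toy sketch in Python of the kind of pattern-blocking such a rule performs, written as a minimal WSGI middleware rather than any real WAF; the “() {” string is the well-known Shellshock marker, and a production WAF (ModSecurity, a cloud WAF, etc.) would be far more thorough about where it looks and how it avoids false positives.

    # Toy WSGI middleware that rejects requests carrying the classic
    # Shellshock marker "() {" in any HTTP header. Illustrative only;
    # a real WAF rule is more careful about coverage and false positives.

    SHELLSHOCK_MARKER = "() {"

    class BlockShellshock:
        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            # WSGI exposes request headers as HTTP_* keys in environ.
            suspicious = any(
                SHELLSHOCK_MARKER in value
                for key, value in environ.items()
                if key.startswith("HTTP_") and isinstance(value, str)
            )
            if suspicious:
                start_response("403 Forbidden", [("Content-Type", "text/plain")])
                return [b"Blocked\n"]
            return self.app(environ, start_response)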

Legacy devices may need to be placed on a protected network behind firewalls, requiring explicit permission (jumphosts, firewall ingress rules, whatever) to be reachable from the core network. Of course, such devices may need to be protected from each other, leading to a proliferation of firewall rules!
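
As a rough illustration of the intent (the host names below are invented, and in reality this policy lives in firewall rules and jumphost config rather than application code), the logic boils down to an allow-list:

    # Toy model of the segmentation policy: legacy hosts are reachable only
    # from designated jumphosts, and never from each other. In practice this
    # is expressed as firewall ingress rules, not Python.

    JUMPHOSTS = {"jumphost01"}                 # hypothetical host names
    LEGACY_HOSTS = {"xray-pc", "mri-console"}  # hypothetical host names

    def connection_allowed(src: str, dst: str) -> bool:
        if dst in LEGACY_HOSTS:
            # Only jumphosts may reach legacy kit; legacy-to-legacy is denied.
            return src in JUMPHOSTS
        return True  # other traffic is governed by other rules

    assert connection_allowed("jumphost01", "xray-pc")
    assert not connection_allowed("random-desktop", "xray-pc")
    assert not connection_allowed("xray-pc", "mri-console")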

Emulation may work in some cases; running the older app under an emulation layer may mitigate some of the issues…

There’s no one solution to these problems; the mitigation will be unique to the challenge.

Risk management

In some cases no mitigation may be possible (e.g. an MRI scanner that expects to write results to a standard Windows share; WannaCry will happily encrypt all that data!). So now we need to look at managing the risk.

The first step, as always, is to be aware that you have a risk. Inventory tracking, patch state… all the standard vulnerability management stuff is a prerequisite; you can’t evaluate what you don’t know.
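
To illustrate the point (the field names here are invented rather than taken from any particular tool), even a trivial inventory gives you something to reason about; the unpatchable entries are exactly the ones that need compensating controls and a risk review:

    # Minimal asset-inventory sketch: without a record of what runs where,
    # there is nothing to evaluate. Fields are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class Asset:
        name: str
        os: str
        patchable: bool     # can vendor patches be applied at all?
        last_patched: str   # ISO date, or "never"

    inventory = [
        Asset("dentist-xray-pc", "Windows 7", patchable=False, last_patched="never"),
        Asset("web-frontend-01", "Ubuntu 22.04", patchable=True, last_patched="2024-01-10"),
    ]

    # These are the machines that need compensating controls and risk review.
    for asset in (a for a in inventory if not a.patchable):
        print(f"Review risk for {asset.name} ({asset.os})")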

Perform an assessment; is an unpatched device really at risk? My TiVo runs Linux at its heart; is it at risk from Meltdown or Spectre? Given the software stack on top, I’d guesstimate “low risk” (even if the hardware were susceptible, which it may not be) due to the limited input channels to the OS. Is your Windows 95 machine, running your door swipe system, at risk?

Determine what mitigation is possible. Stop the caretaker from using the door-entry PC to surf the web :-)

And so on. Y’know… all the standard risk management stuff.

At the end of the day you end up with an evaluation of the criticality of the exposure, the likelihood of it happening, and the consequences. Outside of a VM farm (including clouds), is Meltdown a big risk to your organisation? Possibly not. Shellshock may have been, though.

And then you decide whether to carry that risk, or plan a remediation strategy (spend money, upgrade, convert backups to a new format…).
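
If it helps to put rough numbers on that decision, a back-of-the-envelope sketch might look like the following; the 1–5 scales and the threshold are purely illustrative assumptions, not any formal standard:

    # Back-of-the-envelope risk scoring: likelihood x impact, each on a
    # 1 (low) to 5 (high) scale. The threshold is illustrative only.

    def risk_score(likelihood: int, impact: int) -> int:
        return likelihood * impact

    def decision(score: int, threshold: int = 12) -> str:
        return "plan remediation" if score >= threshold else "carry the risk, review periodically"

    # e.g. Meltdown on a standalone appliance vs. Shellshock on exposed web servers
    print(decision(risk_score(likelihood=1, impact=3)))   # carry the risk, review periodically
    print(decision(risk_score(likelihood=4, impact=4)))   # plan remediation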

And perhaps consider insurance, just in case you’re wrong! (Although can you insure against reputational loss when you leak a gazillion credit cards?)

Summary

Not every issue can be patched. Not every issue needs to be patched. But organisations need to be aware of their risk and determine an approach.

Risk management and vulnerability management need to be critical functions in any organisation. Doing them wrong can cost lots of money (either in wasted effort from an emergency “patch now, patch now!” mentality, or in incident response after a breach). They’re not side issues to be minimally funded just because some NIST or SANS chart says they should be there.