Technology is not enough

People are involved

“To summarise the summary of the summary: people are a problem” - Douglas Adams, The Restaurant At The End Of The Universe

The above quote is one of my favourite jokes (I’ve used it in a previous post); it highlights how people can complicate any situation. We can try to avoid this by automating as much as possible but, at the end of the day, there’s always a human involved somewhere, even if it’s the team that manages the automation!

A few weeks back I was down the pub with some friends and they got to talking about a Red Team exercise run against the solution they managed. I was interested in this because I’d been involved in some of the design and implementation of that solution.

It had great technology in place: firewalls, segregated networks, jumphosts, keystroke logging, multi-factor authentication requirements and, of course, automation at multiple levels of the stack.

This team had developed a robust set of processes and procedures. Key credentials were stored in a password vault that only non-operations staff could access, thus enforcing “two person” controls. All admin access was audited, and tickets were generated for review and sign-off.

Now there was a gap: the product’s admin interface was accessed via the same server as the applications running on the platform. This isn’t uncommon; in a PaaS, many of the admin tools run on the PaaS itself. That made it hard to segregate admin API traffic from application traffic: it couldn’t usefully be blocked at layer 4, and filtering at layer 7 would have introduced complications and performance problems, especially for microservice architectures where adding overhead to every service call really hurts.
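To make the layer 4 versus layer 7 trade-off concrete, here’s a minimal sketch in Python (hypothetical addresses, ports and paths, not the actual platform’s configuration) of why a packet filter can’t separate admin calls from application calls when they share a host and port, while an HTTP-aware filter can, at the cost of inspecting every request:

```python
# Hypothetical illustration: the admin API and application traffic share the
# same endpoint (e.g. api.paas.example:443), so a layer 4 filter has nothing
# to distinguish them by.

def layer4_allow(src_ip: str, dst_ip: str, dst_port: int) -> bool:
    """A packet filter only sees addresses and ports. Admin and application
    requests both arrive at the same dst_ip:dst_port, so blocking admin
    traffic here would block the applications too."""
    return (dst_ip, dst_port) == ("203.0.113.10", 443)


def layer7_allow(path: str, src_ip: str, jumphost_ips: set[str]) -> bool:
    """An HTTP-aware proxy can see the request path, so it can restrict
    admin routes to the jumphosts, but it has to parse every request,
    which is where the per-call overhead comes from."""
    if path.startswith("/v2/admin"):   # hypothetical admin route
        return src_ip in jumphost_ips
    return True                        # ordinary application traffic


if __name__ == "__main__":
    jumphosts = {"10.0.0.5"}
    # Layer 4 cannot tell an app call and an admin call apart:
    print(layer4_allow("198.51.100.7", "203.0.113.10", 443))           # True for both
    # Layer 7 can, because it sees the path:
    print(layer7_allow("/v2/apps", "198.51.100.7", jumphosts))         # True
    print(layer7_allow("/v2/admin/users", "198.51.100.7", jumphosts))  # False
```

That per-request inspection is exactly the overhead that made layer 7 filtering unattractive for a chatty microservice architecture.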

So the team accepted this risk.

I’m sure you can see where this is going…

The Red Team were unable to break the design; the technology held up. The endpoints had no exploitable flaws (no SQLi, no XSS, no buffer overflows), and they couldn’t break the interactive admin pathway either.

So they turned to the staff, and this path bore fruit. An intern in the Ops team followed a phishing link and entered his AD credentials. The Red Team used those to access his VDI, where they found a text file on his desktop with the admin password in it. They used that password to access the admin API.

At this point the detective audit controls tripped, but it was too late: an attacker had admin access to the environment, so it had to be considered compromised. The best that could be done was to limit the spread of damage and determine whether any sensitive data had been exposed.

It’s tempting to point fingers. Why did an intern have the admin password? Why did he store it on his desktop? Who was his supervisor? But that doesn’t help; finger-pointing doesn’t solve the underlying problem. Stop, step back, take a minute, slow down.

The human weakness

Any technology deployment needs to recognise that, at some point, humans will be involved and that humans will make mistakes.

Many organisations have “cyber awareness training” processes; it’s even a requirement in some regulatory environments. But this tends to be “one size fits all” training that doesn’t take individual circumstances into account: an operations person with admin access to a PaaS has a different risk profile than an admin assistant, and the consequences of succumbing to an attack are different, so why do they get the same training?

Additionally, this training is typically done on a yearly basis via some form of online “click click click” course. It becomes more of a box-ticking exercise than a true training exercise. I really loved learning about “Know Your Customer” requirements and “structuring money transfers”; that training was so relevant to my job!

Also, our poor intern probably fell between training cycles and never did any of the courses; he hadn’t been there long enough! Even new hires typically have a window in which to complete the training, and in the meantime they may have access to production systems.

What can be done?

Earlier I described a number of technology controls this team had in place to protect their service. What was lacking was a set of non-technology controls. An obvious one might be “no one gets admin access to production systems until they’ve completed these training courses”. Another might be “people with privileged access to production systems require additional, role-specific courses”. Yet another might be requiring some people to take refresher courses more frequently than the minimum standard.
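Controls like these don’t have to remain paper policies; the first one, for example, could be enforced at provisioning time. Here’s a minimal sketch of what such a training gate might look like, using hypothetical course names and record structures rather than any particular IAM or learning-management product:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical training gate: a real system would pull these records from an
# HR or learning-management system rather than hard-coding them.

@dataclass
class Person:
    name: str
    completed_courses: set[str] = field(default_factory=set)
    last_refresher: date | None = None

REQUIRED_FOR_ADMIN = {"security-basics", "privileged-access", "phishing-awareness"}
REFRESHER_MAX_AGE_DAYS = 180  # stricter than the yearly minimum, for privileged roles

def may_grant_admin(person: Person, today: date) -> bool:
    """Refuse to provision admin access until the role-specific training is
    complete and the most recent refresher is recent enough."""
    if not REQUIRED_FOR_ADMIN <= person.completed_courses:
        return False
    if person.last_refresher is None:
        return False
    return (today - person.last_refresher).days <= REFRESHER_MAX_AGE_DAYS

if __name__ == "__main__":
    intern = Person("new starter")  # no courses completed yet
    admin = Person("ops admin",
                   completed_courses=set(REQUIRED_FOR_ADMIN),
                   last_refresher=date(2024, 1, 10))
    print(may_grant_admin(intern, date(2024, 2, 1)))  # False: blocked at the gate
    print(may_grant_admin(admin, date(2024, 2, 1)))   # True
```

The point isn’t the code; it’s that “has done the training” becomes something the provisioning workflow can check, rather than something a supervisor is assumed to remember.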

Of course there are reasons why organisations don’t do this, primarily cost. If someone can’t do their job until they’ve completed a handful of courses then that’s a week or two of time lost. Who has that kind of slack? I would counter that these are false economies which may open you up to greater expense in the future.

This also means that the people creating the courses need training themselves, in how to build effective security programmes. SANS calls this Securing The Human.

Conclusion

I’m not a security training expert. I don’t know the answer. I can create great technology controls, but if I don’t take human fallibility into account then those controls will not be sufficient. I can teach people on a one-to-one basis (and I do!) but that training doesn’t easily propagate through the enterprise.

Organisations need to create a “secure human” process that encourages culture change so that new people automatically get inducted into the correct way of doing things, both via tollgates and via co-workers.

And this means more than pop-up boxes saying “By accessing this system I promise not to be a bloody idiot [ OK ]”. They don’t work. But that’s a rant for another day.