Lessons from a pentest run

Over on Twitter, @TinkerSec live-tweeted a pentest and collected it into a Moments thread. It’s fascinating reading, and even non-technical people should be able to get something out of it. I like that it’s a form of insider attack (industrial espionage by a newly hired employee? A disgruntled employee? A vendor allowed unaccompanied access?) rather than an external attack. One of the things that typically comes out of an event like this is a series of action items.

Offsite Backups in the cloud

Part of any good backup strategy is to ensure a copy of your backup is stored in a secondary location, so that if there is a major outage (datacenter failure, office burns down, whatever) there is a copy of your data stored elsewhere. After all, what use is a backup if it gets destroyed at the same time as the original? A large enterprise may do cross-datacenter backups, or stream them to a “bunker”; smaller businesses may physically transfer media to a storage location (in my first job, mumble years ago, the finance director would take the weekly full-backup tapes to her house, so we had at most 1 week of data loss).
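As a minimal sketch of the idea (the directory names are hypothetical, and in practice the offsite target would be an `aws s3 cp` or `rclone` destination rather than a local path), something like this creates a dated archive and drops it in a second location:

```python
import pathlib
import tarfile
from datetime import date

def make_offsite_copy(source_dir: str, offsite_dir: str) -> pathlib.Path:
    """Archive source_dir and place the archive in offsite_dir.

    A local directory stands in for the offsite location here; the
    point is that the copy lives somewhere the original disaster
    can't reach.
    """
    offsite = pathlib.Path(offsite_dir)
    offsite.mkdir(parents=True, exist_ok=True)
    archive = offsite / f"backup-{date.today().isoformat()}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        # arcname keeps paths inside the archive relative, not absolute
        tar.add(source_dir, arcname=pathlib.Path(source_dir).name)
    return archive
```

The weekly-tapes-at-the-finance-director's-house scheme is exactly this pattern, just with a car instead of a network link.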

Managing the cloud management layer

A typical cloud engagement has a shared responsibility model. There’s stuff that can be considered “below the line”, which is the responsibility of the cloud service provider (CSP), and there’s stuff above the line, which is the responsibility of the customer. Amazon have a good example for their IaaS. Where the line lives will depend on the type of engagement; the higher up the abstraction stack (IaaS -> PaaS -> SaaS), the more responsibility the CSP takes on.

Big bugs have lesser bugs

The Siphonaptera has various versions. The version I learned as a kid goes:

Big bugs have little bugs,
Upon their backs to bite 'em,
And little bugs have lesser bugs,
and so, ad infinitum.

We make use of this fact a lot in computer security; a breach of the OS can impact the security of the application. We could even build a simple dependency chain:

The security of the application depends on
the security of the operating system, which depends on
the security of the hypervisor, which depends on
the security of the virtualisation environment, which depends on
the security of the automation tool...

Always Listening Devices

Recently we heard news that the police had requested Alexa recordings to assist with a murder enquiry. The victim had an Amazon Echo, and the police feel there’s useful data to be obtained. This leads to speculation: what sort of information do these devices record, and how secure is it? What type of devices are we talking about? These days there are a number of devices out there which you can talk to in order to request things.

Backup and restore

Have you tested your backups recently? I’m sure you’ve heard that phrase before. And then thought “Hmm, yeah, I should do that”. If you remember, you’ll stick a tape in the drive and fire up your software, and restore a dozen files to a temporary location. Success! You’ve proven your backups can be recovered. Or have you? What would you do if your server was destroyed? Do you require specialist software to recover that backup?
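A restore test worth the name compares the restored data against the original, file by file. Here is a rough sketch of that check (the function name and layout are mine, not from any particular backup tool):

```python
import hashlib
import pathlib

def verify_restore(original_dir: str, restored_dir: str) -> list[str]:
    """Compare a restored tree against the original.

    Returns the relative paths that are missing or differ, so an
    empty list means the restore test passed.
    """
    problems = []
    original = pathlib.Path(original_dir)
    restored = pathlib.Path(restored_dir)
    for path in original.rglob("*"):
        if not path.is_file():
            continue
        rel = path.relative_to(original)
        twin = restored / rel
        if not twin.is_file():
            problems.append(f"missing: {rel}")
        elif (hashlib.sha256(twin.read_bytes()).digest()
                != hashlib.sha256(path.read_bytes()).digest()):
            problems.append(f"differs: {rel}")
    return problems
```

Note that this only proves the data came back intact; it says nothing about whether you could run it on a bare replacement server, which is the harder question the excerpt is asking.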

Deep scanning your deployment

In my previous post I wrote about some automation of static and dynamic scanning as part of the software delivery pipeline. However, nothing stays the same; new vulnerabilities are found, configurations get broken, and stuff previously considered secure becomes weak (64-bit ciphers can be broken, for example). So as well as doing your scans during the development cycle, we also need to do repeated scans of deployed infrastructure; especially if it’s externally facing (but internal-facing services may still be at risk from the tens of thousands of desktops in your organisation).
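As one concrete instance of the “previously secure, now weak” problem: a repeated scan might verify that clients refuse old protocol versions and 64-bit block ciphers such as 3DES (broken by the SWEET32 attack). A sketch of that check using Python’s ssl module:

```python
import ssl

def strict_client_context() -> ssl.SSLContext:
    """Build a TLS context that refuses things now considered weak:
    anything below TLS 1.2, and 64-bit block ciphers such as 3DES."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    # Exclude 3DES explicitly; modern OpenSSL defaults usually drop it
    # already, but a repeated scan should verify rather than assume.
    ctx.set_ciphers("HIGH:!3DES:!aNULL:!eNULL")
    return ctx
```

The same principle applies to the server side: a scheduled job that probes your deployed endpoints and alerts when yesterday’s acceptable configuration becomes today’s weak one.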

Scanning your code

In many organisations an automated scan of an application is done before it’s allowed to “go live”, especially if the app is external facing. There are typically two types of scan: a static scan and a dynamic scan. A static scan is commonly a source code scan. It will analyse code for many common failure modes. If you’re writing C code then it’ll flag on common buffer overflow patterns. If you’re writing Java with database connectors it’ll flag on common password exposure patterns.
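A real static scanner uses proper parsing and data-flow analysis, but the flavour of it can be sketched with a couple of deliberately crude, illustrative regex rules (the pattern names and rules are mine, not from any real tool):

```python
import re

# Toy rules: one password-exposure pattern, one unsafe-C-copy pattern.
# A production scanner matches these far more carefully.
RISKY_PATTERNS = {
    "hardcoded password": re.compile(
        r"password\s*=\s*[\"'][^\"']+[\"']", re.IGNORECASE),
    "unsafe C copy": re.compile(r"\bstrcpy\s*\("),
}

def static_scan(source: str) -> list[tuple[int, str]]:
    """Return (line_number, finding) pairs for each risky line."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for finding, pattern in RISKY_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, finding))
    return findings
```

The point of the toy is the shape of the output: a static scan reads the code without running it, and reports line-level findings a developer can triage.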

Man in the middle attacks

A fair number of security advisories mention Man In The Middle (MITM) attacks. It’s quite an evocative phrase, but it’s a phrase meant mainly for the infosec community; it doesn’t help your typical end user understand the risks. So what is a MITM attack, and how can I avoid becoming a victim? Before we get into technology, let’s look at something we all know about: the boring snail-mail postal system.
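To make the TLS case concrete: the main client-side defence against a MITM is certificate and hostname verification. Python’s default ssl context does both; the “insecure” variant below, which turns them off (a depressingly common copy-paste), is exactly the configuration that lets an interceptor in:

```python
import ssl

# The default context verifies the server's certificate chain AND
# checks that the certificate actually names the host we dialled.
secure = ssl.create_default_context()

# The insecure variant accepts any certificate from anyone -- which
# is precisely what a man in the middle will present.
insecure = ssl.create_default_context()
insecure.check_hostname = False       # skips "is this really example.com?"
insecure.verify_mode = ssl.CERT_NONE  # accepts any certificate at all
```

In the snail-mail analogy, verification is checking the signature and the return address before trusting the letter; the insecure context reads whatever the postman hands over.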

How public cloud can change your security stance

The core problem with a public cloud is “untrusted infrastructure”. We could get a VM from Amazon; that’s easy. What now? The hypervisor isn’t trusted (non-company staff access it and could use this to bypass OS controls). The storage isn’t trusted (non-company staff could access it). The network isn’t trusted (non-company…). So could we store Personally Identifiable Information in the cloud? Could a bank store your account data in a public cloud?

The People Problem

“To summarise the summary of the summary: people are a problem” - Douglas Adams, The Restaurant At The End Of The Universe. In a traditional compute environment we may have a lot of controls. There may be a lot of audit regulations. Organisations create a lot of processes and procedures. Want to log in to a Unix machine? Better have an approved account, with the right authorisations. DMZ machines may require 2FA.

Vulnerability, Threat, Risk

From Twitter came this gem of a bear-and-door analogy. It’s a cute way of helping people understand the difference between the three concepts. It also helps start to drive conversation around remediation activities and risk assessment. (Let’s not get too tied down with interpretation; all analogies have holes :-)) What if the door was a bedroom door, rather than a house front door? How does this change the probability of a bear getting in, and thus of getting mauled?
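One common way to formalise the relationship is the toy model risk = likelihood × impact. The 1-5 scales and the numbers below are mine, purely for illustration:

```python
def risk_score(likelihood: int, impact: int) -> int:
    """Toy risk model: risk = likelihood x impact, each on a 1-5 scale.

    The bear analogy maps on naturally: the bear is the threat, the
    open door is the vulnerability, and the score is the risk.
    """
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("scores run from 1 to 5")
    return likelihood * impact

# Front door open in bear country vs. a bedroom door inside the house:
# same vulnerability (an open door), very different likelihood.
front_door = risk_score(likelihood=4, impact=5)
bedroom_door = risk_score(likelihood=1, impact=5)
```

Moving the open door from the front of the house to a bedroom doesn’t change the vulnerability or the threat, only the likelihood, and the risk drops accordingly.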