Shellshocked? My Heartbleeds for you


Author: Marc Wickenden

Date: 3 October 2014

The dust has barely settled on Heartbleed and now we get Shellshock. Let’s be clear: this was a biggie, the holy grail for any hacker, criminal or professional - unauthenticated remote code execution. Even better, it had been there for a long time.
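
If you want to check a box yourself, the widely circulated test is simply to put a crafted function definition into an environment variable and see whether bash executes the command trailing it. Here’s a minimal sketch in Python - the variable name and output strings are just illustrative:

    import os
    import subprocess

    # A vulnerable bash imports function definitions from environment variables
    # and, crucially, keeps executing whatever follows the closing brace.
    probe_env = dict(os.environ)
    probe_env["testvar"] = "() { :;}; echo VULNERABLE"

    result = subprocess.run(
        ["bash", "-c", "echo probe finished"],
        env=probe_env,
        capture_output=True,
        text=True,
    )

    if "VULNERABLE" in result.stdout:
        print("This bash appears vulnerable to CVE-2014-6271")
    else:
        print("This bash appears patched")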

As with everything in hindsight, it seems incredible it wasn’t found before. How many security researchers looked and never found it? How many penetration tests missed it? How could such a relatively simple bug go undiscovered for so long?

The needle in the haystack

The Internet is probably the largest haystack on Planet Earth. How many millions upon millions of lines of code are responsible for the operation of computers and networks around our planet every day? A few erroneous characters here and there are the needle, and that is all it takes to end up with a Heartbleed or a Shellshock, so perhaps it shouldn’t be that surprising.

How many more as-yet-undiscovered critical bugs are lurking in our core software stacks, just waiting to cause security nightmares for companies everywhere? My guess is hundreds, if not thousands. And that doesn’t include the ones which have been discovered, just not by people with the wider world’s best interests at heart. Unless you count the “greater good” (cough GCHQ/NSA).

FUDar ALERT!!!

The point of the above is not to alarm or elicit panic. I could be wrong, I could be right. The point is we don’t know, but there are good ways of dealing with uncertainty and… less good ways.

[Image: man with his head in the sand, ignoring information security problems. Source: http://briansphirstblog.blogspot.co.uk/2012/03/sticking-with-it.html]

When implementing security controls, expect them to fail. Then re-evaluate the risk and consider additional controls. Repeat until you have reduced your risk to a level you are comfortable with.

Do what you can to reduce the impact. This is especially important if you can’t adequately reduce the likelihood of a control failure resulting in a breach.

Consider the following for starters:

  1. Reliable, documented (preferably) backup and restore capability
  2. Some form of basic security logging and alerting
  3. File integrity monitoring (or other appropriate change detection) - see the sketch after this list
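
To illustrate number three: file integrity monitoring can start as simply as hashing the files you care about and comparing them against a known-good baseline. A minimal sketch, assuming a watched directory and baseline path of my own invention - a real deployment would use a mature tool and protect the baseline itself:

    import hashlib
    import json
    import os
    import sys

    WATCH_DIR = "/var/www"                   # directory we care about (assumption)
    BASELINE = "/var/lib/fim/baseline.json"  # known-good hashes (assumption)

    def hash_tree(root):
        """Return {relative_path: sha256_hex} for every file under root."""
        hashes = {}
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                with open(path, "rb") as f:
                    hashes[os.path.relpath(path, root)] = hashlib.sha256(f.read()).hexdigest()
        return hashes

    current = hash_tree(WATCH_DIR)

    if not os.path.exists(BASELINE):
        # First run: record the baseline and stop.
        with open(BASELINE, "w") as f:
            json.dump(current, f, indent=2)
        sys.exit(0)

    with open(BASELINE) as f:
        baseline = json.load(f)

    changed = sorted(p for p, h in current.items() if baseline.get(p) != h)
    removed = sorted(p for p in baseline if p not in current)

    if changed or removed:
        # Wire this into whatever alerting you already have: email, syslog, chat.
        print("ALERT - changed:", changed, "removed:", removed)

Run something like that from cron every few minutes, wire the alert into something a human will actually see, and you have the bones of number three for next to nothing.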

Even these three simple things, all of which can be implemented cheaply if required, can make the difference between significant loss and significant inconvenience. Compare these two statements:

“We lost it all”

and

“We’ll be down for three hours while we restore, we’re going through the logs now to work out what happened”

I know which position I’d rather be in.

Defence in depth

Yeah, it’s that old security chestnut, the layered defence-in-depth approach - but it works. Every business and its systems are different, so giving targeted advice in a blog post just isn’t practical, and I appreciate not everyone has the budget to really go to town on the defensive layers. But just think for a second about what some of the big boys are doing.

Google reportedly runs its infrastructure on a disposable-instance basis (citation required). If a single node steps out of line with its configured security baseline - i.e. a file gets changed - the node is instantly taken out of service, an alert is raised and the box is isolated. Depending on which reports you read, the instance is then wiped and restored to its out-of-the-box state. That would seem to run counter to good forensics to me but hey, either way it makes an attacker’s job pretty hard, Shellshock or no Shellshock.

Others are reportedly siloing all their information by column, so even if you manage to compromise a backend database you can only retrieve, say, the first lines of customers’ addresses. Of course, at some point that information has to come together in the application to make a full postal address, and that point becomes the chink in the armour - but if you know that, and have designed it that way, guess where you focus your monitoring and security assurance efforts?
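
To make that concrete, here’s a deliberately toy sketch of the idea - the store names and customer ID are invented, and in practice each silo would be a separate database with separate credentials:

    # Each silo holds only one column, keyed by an opaque customer ID.
    address_line_silo = {"cust-1001": "1 High Street"}
    postcode_silo = {"cust-1001": "AB1 2CD"}

    def full_address(customer_id):
        # This join point is the chink in the armour, so it is where the
        # monitoring and security assurance effort gets concentrated.
        return address_line_silo[customer_id] + ", " + postcode_silo[customer_id]

    print(full_address("cust-1001"))

Compromise either store on its own and you get half an address; compromise the join point and you get the lot - which is exactly why you watch it so closely.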

Chroot. Remember that?

Getting back to slightly more practical design and deployment choices, something I see so rarely these days is the humble chroot jail (for you *NIX folks). So often our application penetration tests result in command execution in the context of the web server user - and I’m prepared to bet we’ll see a few more of those over the coming months - and that ends up coughing up full access to the rest of the web or application server.

Consider the difference if the web server were chrooted to a single directory containing only the essential binaries, libraries and scripts required to run the service, and file integrity monitoring or other change detection software were configured to watch that directory and alert on anything that changes. It doesn’t stop Shellshock, but it does make it a lot easier to contain.
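
As a rough illustration of the deployment side - the jail path and the unprivileged uid/gid below are assumptions, and many web servers have native chroot or equivalent sandboxing options that will be a better fit than rolling your own:

    import os

    JAIL = "/srv/www-jail"   # holds only the binaries, libraries and scripts the service needs

    # chroot requires root; drop privileges immediately afterwards.
    os.chroot(JAIL)
    os.chdir("/")            # make sure the working directory is inside the new root
    os.setgid(33)            # 33 is www-data on Debian - adjust for your system
    os.setuid(33)

    # From here on, "/" means /srv/www-jail. Point your file integrity
    # monitoring at that directory and alert on anything that changes.
    # ... start serving requests ...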

I could go on but hopefully you get the idea.

Have we learned anything?

I’m willing to bet a lot of companies now have a much better patching regime after Heartbleed and Shellshock, but while we security professionals bang on about patch management until we’re blue in the face, is that really the biggest takeaway from this?

As defenders we are behind the curve and always will be. It’s simple economics. So should we just accept that fact, make sure we have a patch management process in place, keep up to date with the news and send in the patching cavalry when something like Heartbleed or Shellshock drops again?

Let’s build on these foundations

No. Let’s get ahead of the game. Let’s use these incidents, and any management focus they’ve brought to security, to build on these foundations. Let’s assume tomorrow they’ll find a fundamental flaw in the IPv4 RFC and that the entire Internet can be turned into a botnet overnight. Think about where your data is (and don’t assume it’s just in your database - it’s probably on your employees’ personal iPads too; could be worse, it could be on their Android tablets ;-)) and threat model attacks against it.

Engage your penetration testers to do scenario-based, authenticated testing. Run some what-if scenarios, roll the dice and see where it leaves you. Evaluate how your incident response to both Heartbleed and Shellshock went. What worked? What didn’t? Keep up the momentum and chisel out a better process for dealing with the next one. Take control.

[Image: Braveheart declaring Shellshock won’t take our data]


About The Author

Marc Wickenden

Technical Director at 4ARMED - you can blame him for our awesome technical skills and business-led solutions. You can tweet him at @marcwickenden.