Blog: How Tos
Patch management, it’s sexy, here’s why
Today is your lucky day. Yup, that’s right, it’s patch management Tuesday. Except rather than babble on about the merits of doing it, and berate you for not doing it, I’m going to try a different tack, the path less trodden if you like.
Patching is not sexy
The thing with managing patches is it’s just not sexy. Because it’s often seen as an admin task it really doesn’t get the coverage and credit it deserves.
…and because it isn’t sexy it’s a job people want to abdicate responsibility for – by getting a third party to handle it, or, worse, giving it to someone who doesn’t appreciate its worth.
…but it is
In all the recent high-profile breaches, the headline grabbers were the clever malware, or the exciting delivery mechanisms used to get the fudgeware onto the network in the first place. However, chances are that if those systems had been patched properly there would have been fewer opportunities for the attackers to capitalise on their cleverness so easily.
Once inside a network, getting from a compromised host to those prized data assets often isn’t difficult, because people haven’t patched things properly, leaving a trail of fail that’s relatively easy to sniff out and follow.
So, how do you actually make this unsexy admin task resonate with meaning? I’ve got some ideas (well, my colleagues’ ideas, but standing on the shoulders of giants is my “thing”).
Tactical hands-on management
Windows Server Update Services (WSUS) is good as far as it goes, but it depends on having all the machines correctly enrolled, and on the operator correctly releasing patches for installation. (This can be done using Group Policy, but then you occasionally have the situation where machines are not dropped into the correct Organisational Unit, and hence don’t get the correct Group Policy.)
The problems come when machines are not enrolled correctly, or are not rebooted by their users – you can force reboots, but that tends to annoy users and is entirely impractical on servers. Then you can end up with machines which you thought were patched, but are actually quite a bit out of date. Or you may have released Office and OS patches, but not MS SQL Server – and then you might find the SQL Server installations are getting out of date on hosts which you thought were being managed correctly.
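That “machines you thought were patched” failure mode can be caught mechanically. As a rough sketch in Python – the CSV layout, hostnames and 30-day threshold are all my assumptions, not anything WSUS produces natively – you could export each client’s last check-in time and flag the ones that have gone quiet:

```python
import csv
import io
from datetime import datetime, timedelta

# Hypothetical export: hostname plus the last date the client reported in.
SAMPLE_EXPORT = """hostname,last_report
filesrv01,2014-06-01
sql02,2014-03-10
web03,2014-05-28
"""

def stale_clients(csv_text, as_of, max_age_days=30):
    """Return hostnames that haven't reported within max_age_days."""
    stale = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        last = datetime.strptime(row["last_report"], "%Y-%m-%d")
        if as_of - last > timedelta(days=max_age_days):
            stale.append(row["hostname"])
    return stale

print(stale_clients(SAMPLE_EXPORT, datetime(2014, 6, 15)))  # ['sql02']
```

A host that hasn’t reported in a month is a host whose patch state you simply don’t know – which is exactly the gap this section is about.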
If you’re talking non-MS software, then solutions vary quite a lot. Personally, I like Secunia’s personal version https://secunia.com/vulnerability_scanning/personal/ but have not deployed the enterprise version, so I can’t comment on that.
It will go through all installed programs and warn about stuff like the Apache/MySQL/PHP/OpenSSL installs which are often thrown onto Windows boxes and then forgotten about, lacking any of the UNIX patch management infrastructure. By contrast, if you install a LAMP stack using “apt-get” on a Debian-derived distribution, “apt-get update && apt-get upgrade” will do the patch management for it all.
If I were running a significant volume of machines again, I would use WSUS for general updates and have another process to sweep for any boxes which are not enrolled – probably an authenticated Nessus scan on repeat.
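That sweep boils down to a set difference. A minimal sketch, assuming you can export an enrolled-hosts list from WSUS and a discovered-hosts list from your scanner (the addresses below are invented for illustration):

```python
def unenrolled_hosts(discovered, enrolled):
    """Hosts seen on the network but absent from patch-management enrolment."""
    return sorted(set(discovered) - set(enrolled))

# Hypothetical data: scanner output vs. the WSUS client list.
discovered = ["10.0.0.5", "10.0.0.9", "10.0.0.12", "10.0.0.13"]
enrolled = ["10.0.0.5", "10.0.0.12"]

print(unenrolled_hosts(discovered, enrolled))  # ['10.0.0.13', '10.0.0.9']
```

Run it on a schedule and anything in the output is a box that WSUS doesn’t know about – the dark, unaudited machines discussed further down.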
Even if you cover off Windows with Secunia and WSUS for example, you still have loads of embedded kit to worry about, from Wi-Fi access points, routers and switches to things like F5s which all need periodic firmware updates.
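Even a spreadsheet-grade inventory helps with the embedded kit. A toy sketch – device names and version strings are entirely hypothetical – that flags anything whose installed firmware doesn’t match the latest release you’ve recorded:

```python
# Hypothetical inventory: device -> (installed firmware, latest available).
INVENTORY = {
    "wifi-ap-01": ("7.2.1", "7.4.0"),
    "core-sw-01": ("15.2", "15.2"),
    "f5-lb-01": ("11.4.1", "11.6.0"),
}

def out_of_date(inventory):
    """Devices where the installed version differs from the latest known."""
    return sorted(name for name, (have, want) in inventory.items() if have != want)

print(out_of_date(INVENTORY))  # ['f5-lb-01', 'wifi-ap-01']
```

It’s crude – you still have to keep the “latest available” column current by hand – but it turns “periodic firmware updates” from a vague intention into a checkable list.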
I don’t like to rely on a single solution to do anything; it’s always better to run periodic checks to make sure that the process is actually doing what it’s meant to.
So that’s a pragmatic man-on-the-ground view. What about that tricky situation when you’re not 100% sure about the unknowns, the dark, unaudited boxes that have been forgotten or neglected?
Pen testing (yep, there’s that plug, you knew it was coming, right?) can be used as a substantive audit tool to locate areas where patches are missing, especially where hosts are not known – maybe because they were created ad hoc for a long-forgotten reason, or a developer simply forgot to turn the lights off when they left.
Testing provides a tactical fix – you find a missing patch, you deploy it – but by taking a larger sample of the network it becomes an audit. That audit can then detect sysadmin behaviours on the network, e.g. 100% Windows patching, 0% Linux patching, or maybe 40% of devices with weak credentials.
This information helps provide strategic fixes at a process and governance level, whereby we take the issues and work out which processes need fixing. In that 100/0/40% example there’s a process to patch Windows, but Linux likely doesn’t have an automated option so it’s forgotten. As for the weak credentials, the build guidelines and documents evidently don’t tell the builder to change the default password when the system is built.
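Those headline percentages fall straight out of the raw audit data. A small sketch (hosts and fields invented for illustration) of turning per-host pen-test findings into per-platform coverage figures:

```python
# Hypothetical audit results: one record per host from a pen-test sweep.
AUDIT = [
    {"host": "win01", "os": "windows", "patched": True},
    {"host": "win02", "os": "windows", "patched": True},
    {"host": "lin01", "os": "linux", "patched": False},
    {"host": "lin02", "os": "linux", "patched": False},
]

def coverage_by_os(records):
    """Percentage of patched hosts per OS - the '100/0' style figures."""
    totals, patched = {}, {}
    for r in records:
        totals[r["os"]] = totals.get(r["os"], 0) + 1
        if r["patched"]:
            patched[r["os"]] = patched.get(r["os"], 0) + 1
    return {os_name: 100 * patched.get(os_name, 0) // n
            for os_name, n in totals.items()}

print(coverage_by_os(AUDIT))  # {'windows': 100, 'linux': 0}
```

A 0% row is the interesting one: it points at a missing process, not a missing patch.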
These strategic fixes then allow the company to take a proactive security stance, as opposed to a reactive one: “Ah, we need to patch system X because the pen test report says so”.