Blog: Vulnerability Disclosure

When disclosure goes wrong

Ken Munro 12 Jan 2020

My experience of vulnerability disclosure is that it is rarely as easy or simple as it could be. I had hoped that bug bounty programmes and vulnerability disclosure programmes (VDPs) would help matters. Broadly, that doesn’t seem to be the case.

It’s not all terrible though. Bug bounties do help deal with low-hanging vulnerabilities, and they also help seed talent in the cyber security industry.

Perhaps the hardest part is trying to help VDP people effect change in their organisations when their organisation is patently not interested. Many organisations claim to take customer security seriously, but through inaction and abdication of responsibility they clearly do not.

It’s not my job to shoot the messenger. It’s not fair or right for a researcher to hold someone in the vendor’s PSIRT team responsible for all of the vendor’s failings. But, if you’re my only available point of contact what choice do I have?

Instead of going through a VDP, if the vulnerability is serious enough I now make my first contact via LinkedIn, right at the top of the organisation.


The VDP team are almost never truly empowered to make the changes needed.

Sure, they can put in requests for fixes and might even be able to raise their priority, but if the dev team’s priorities lie elsewhere, good luck getting that fixed.

I say ‘almost never’ as there are a few wonderful cases where people have listened, understood the threat to their customers and their brand, and taken action quickly. They’ve been empowered to pull in resource and make change.

It’s interesting to note the lack of correlation between good or bad outcomes and the size of the organisation, or even whether they had a VDP or not. Our experience is just a snapshot of a bigger picture, but it says something of the chaos and inconsistency that plagues disclosure, and why it’s a PITA to do correctly.

What’s going wrong?

Creating a VDP counts for nothing if the PSIRT or VDP team involved aren’t empowered.

If the team don’t have the power to take down live, revenue generating systems, then they aren’t empowered.

An example

My colleague Vangelis (@evstykas) found a vulnerability in the Sonicwall cloud management platform. It allowed unauthenticated remote account compromise, which led on to remote compromise of any of hundreds of thousands of their customers’ networks. If this wasn’t a CVSS 10, I don’t know what would be!

We reported it through the published VDP. It was acknowledged. That was about it. Nearly two weeks later we asked for an update. They deflected and the vulnerability remained present.

So I searched my LinkedIn network and found the Sonicwall CEO. I dropped him a message, which he quickly acknowledged and immediately involved his CTO.

They fixed it in 8 hours.

What baffled me was that they are a security vendor running their own VDP. It shouldn’t have taken the CEO to intervene.

The downside of empowerment

Empowering your people comes with costs and risks though. You’d need to put highly experienced (expensive) security staff in the incident response team if you want to give them the power to turn off production servers, or you risk taking down servers unnecessarily.

Also, would your experienced security people be best utilised triaging beg bounties? No. Maybe it’s better to triage objectively first, rather than relying on the researcher’s subjective and biased perspective, and only then send reports to the expensive staff.


Here’s the problem: triaging the volume of reports that many PSIRT teams receive is difficult. From beg bounty requests to irrelevant scanner output, VDPs are often bombarded with junk.

To make triaging less onerous there are some things a VDP could do.

One example is to ask the researcher a list of ten yes/no questions to score the report. For instance:

  1. Does the vulnerability allow access to other users’ PII?
  2. Does it give shell access to other machines?

Triage like this could be used to automatically push a report through to the CTO / security team as a high priority.
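A scheme like this can be sketched in a few lines. The questions, weights, and escalation threshold below are illustrative assumptions of mine, not anything the post (or any vendor) prescribes:

```python
# Hypothetical sketch of yes/no triage scoring for incoming VDP reports.
# Question weights and the escalation threshold are assumptions for
# illustration, not a published standard.

TRIAGE_QUESTIONS = [
    ("Does the vulnerability allow access to other users' PII?", 5),
    ("Does it give shell access to other machines?", 5),
    ("Does it work without authentication?", 3),
    ("Is a public exploit or PoC available?", 2),
]

ESCALATION_THRESHOLD = 5  # at or above this, route straight to the CTO / security team


def score_report(answers):
    """answers: list of booleans, one per question, as submitted by the researcher."""
    return sum(weight for (_, weight), yes in zip(TRIAGE_QUESTIONS, answers) if yes)


def triage(answers):
    score = score_report(answers)
    if score >= ESCALATION_THRESHOLD:
        return "high-priority: escalate to CTO / security team"
    return "queue for routine PSIRT review"


# Example: unauthenticated access to other users' PII scores 5 + 3 = 8,
# which crosses the threshold and escalates.
print(triage([True, False, True, False]))
```

The point isn’t the exact weights; it’s that a fixed, objective checklist removes the researcher’s (and the triager’s) bias from the initial routing decision.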


You should avoid asking researchers to self-score their findings with CVSS. It’s a pitfall of the highest order. If bounty payments are tied to CVSS, the researcher has latitude to skew the risk in their favour. More importantly, their lack of experience might mean that significant risks go unflagged.

Bug bounty platforms

We’re starting to see bug bounty platforms causing problems for vulnerability disclosure too. Increasingly, organisations are outsourcing their VDP to a bug bounty operation. As part of this, the researcher attempting to disclose an issue is expected to accept the Ts&Cs of the bug bounty platform.

Those Ts&Cs will often permit public disclosure only on the platform’s terms. I understand this if money is changing hands. That’s fair, but it assumes that everyone disclosing security issues actually wants payment for their efforts.

I will not sign up to Ts&Cs which limit our ability to disclose publicly, as that removes one of the few levers in my control with which to push the organisation into doing the ‘right thing’ by their customers.

Actually disclosing publicly is a completely different matter, subject to extensive internal ethical debate first, but I will not give up the option to do so.

Bug bounty platforms that take on outsourced VDP management for organisations must accept that researchers are not always motivated by money. They must offer disclosure paths that do not remove the ability to disclose publicly in future, or apply Ts&Cs that limit that ability.

I’m done with VDPs

So anyway, I’m giving up with VDPs and going straight to the top of organisations.

The types of vulnerability we report are invariably critical account compromise, exploitation of which would hugely impact the brand of a firm. CEOs understand brand and reputational impact better than most.

In the meantime, if CEOs don’t want to deal with ‘pesky’ researchers, they would do well to properly fund their PSIRT teams and actually empower them.