Blog: Vulnerability Disclosure

When disclosure goes wrong

Ken Munro 02 Sep 2022

My experience of vulnerability disclosure is that it is rarely as easy or simple as it could be. I had hoped that bug bounty programmes and vulnerability disclosure programmes (VDPs) would help matters. Broadly that doesn’t seem to be the case, often for unexpected reasons.

It’s not all bad though. Bug bounties incentivise bringing organisations and independent researchers together, with rewards for researchers’ efforts. They’re also quite handy for sifting out the dross and allowing organisations to focus on important vulnerabilities.

Perhaps the hardest part is trying to help VDP people effect change in their organisations when their organisation is patently not interested. Many organisations claim to take customer security seriously, but through inaction and abdication of responsibility they clearly do not. Outsourcing to a bug bounty platform does not relieve said organisation of its responsibility to listen to researchers or its responsibility to fix vulnerabilities.

It’s not my job to shoot the messenger. It’s not fair or right for a researcher to hold someone in the vendor’s Product Security Incident Response Team (PSIRT) responsible for all of the vendor’s failings. But if they’re my only available point of contact, what choice do I have?

If I don’t get a sensible response when going through a VDP and the vulnerability is serious enough, I simply go to the top of the organisation.


The VDP team are rarely truly empowered to make the changes needed.

Sure, they can put requests in for fixes and might even be able to raise their priority, but if the dev team’s priorities lie elsewhere, good luck getting anything fixed.

I say ‘rarely’ as there are a few wonderful cases where people have listened, understood the threat to their customers and their brand, and taken action quickly. They’ve been empowered to pull in resource and make change.

It’s interesting to note the lack of correlation between good or bad outcomes and the size of the organisation, or even whether they had a VDP or not. Our experience is just a snapshot of a bigger picture, but it says something of the inconsistency that plagues disclosure, and why it’s a PITA to do correctly.

What’s going wrong?

Creating a VDP counts for nothing if the PSIRT or VDP team behind it isn’t empowered.

If the team don’t have the power to quickly escalate to people who can make fast decisions, including taking down live, revenue-generating systems, then they aren’t empowered.

An example

We found a vulnerability in the Sonicwall cloud management platform. It allowed unauthenticated remote account compromise, which led to remote compromise of any of hundreds of thousands of their customers’ networks. If this wasn’t a CVSS 10, I don’t know what would be!

We reported it through the published VDP. It was acknowledged. That was about it. Nearly two weeks later we asked for an update. They deflected and the vulnerability remained present.

So I searched my LinkedIn network and found the Sonicwall CEO. I dropped him a message, which he quickly acknowledged and immediately involved his CTO.

They fixed it in 8 hours.

What baffled me was that they are a security vendor running their own VDP. It shouldn’t have taken the CEO to intervene.

This is just one of many similar experiences we have had.

The downside of empowerment

Empowering your people comes with costs and risks though. You’d need to put highly experienced (expensive) security staff in the incident response team if you want to give them the power to turn off production servers, or you risk taking down servers unnecessarily.

Also, would your experienced security people’s time be best spent dealing with beg bounties? No. Maybe it’s better to triage objectively first, rather than from a biased, subjective perspective… and only then send reports to the expensive staff?


Here’s the problem: triaging the volume of reports that many PSIRT teams receive is difficult. From beg bounty requests presenting irrelevant scanner output to people reporting ‘vulnerabilities that aren’t vulnerabilities’ (e.g. this, this, and this), VDPs are often bombarded with junk.

To make triaging less onerous there are some things a VDP could do.

One example is to ask the researcher a list of 10 yes/no questions and use the answers to score the report. For instance:

  1. Does the vulnerability allow access to other users’ PII?
  2. Does it give shell access to other machines?

Triage like this could be used to automatically push a high-scoring report through to the CTO / security team as a high priority.
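As a minimal sketch of that idea (the questions, weights, and thresholds here are illustrative assumptions, not anything a particular VDP actually uses), a trimmed list of yes/no questions could be scored like this:

```python
# Hypothetical triage scoring: each question carries a weight, and some
# answers are critical enough to trigger escalation on their own.
# A real VDP would use its own 10 questions and tuned thresholds.
QUESTIONS = [
    # (question text, weight, escalates on its own?)
    ("Does the vulnerability allow access to other users' PII?", 5, True),
    ("Does it give shell access to other machines?", 5, True),
    ("Does it work without authentication?", 3, False),
    ("Is the affected system exposed to the internet?", 2, False),
]

def triage(answers):
    """answers: list of booleans, one per question, in order.
    Returns (score, priority)."""
    score = 0
    escalate = False
    for (_text, weight, critical), yes in zip(QUESTIONS, answers):
        if yes:
            score += weight
            # A single critical 'yes' escalates regardless of total score.
            escalate = escalate or critical
    priority = "high" if escalate or score >= 8 else "normal"
    return score, priority
```

A report answering ‘yes’ to the PII question alone would be routed as high priority; irrelevant scanner output answering ‘no’ across the board never reaches the expensive staff.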


You should avoid asking researchers to self-score their findings with CVSS. It’s a pitfall of the highest order. If bounty payments are tied to CVSS, the researcher has latitude to skew the risk in their favour.

More importantly, their lack of experience might mean that significant risks are not flagged.

Bug bounty platforms

We’re starting to see bug bounty platforms causing problems for vulnerability disclosure too. Increasingly, organisations are outsourcing their VDP to a bug bounty operation. As part of this, the researcher attempting to disclose an issue is expected to accept the terms and conditions of the bug bounty platform.

These will often include public disclosure only on the platform’s terms, which may mean that the vulnerability is never disclosed to the world. I understand this where money is changing hands; that’s fair. But it assumes that everyone disclosing a security issue wants payment for their efforts.

I will not sign up to Ts&Cs which limit our ability to disclose publicly, as that removes one of the few levers in my control with which to push the organisation to do the ‘right thing’ by their customers.

Actually disclosing publicly is a completely different matter, subject to extensive internal ethical debate first, but I will not give up the option to do so lightly.

Bug bounty platforms that take on outsourced VDP management for organisations must accept that researchers are not always motivated by money. They must offer disclosure paths that do not remove the ability to disclose publicly in future, or impose Ts&Cs that limit that ability.

When VDPs don’t work…

When a disclosure through a VDP stalls, or is simply impossible as a result of restrictive terms, I go straight to the top of the organisation: the CEO.

The types of vulnerability we report are invariably critical account compromises, exploitation of which would hugely impact the brand of a firm.

Because after all, CEOs understand brand and reputational impact better than most.