AI noise and the effect it’s having on vulnerability disclosure programs 

Ken Munro

09 Jan 2026 4 Min Read

Managing vulnerability reports is difficult for an organisation. 

In an ideal world, something like this happens: 

  • A valid report is received from a responsible researcher 
  • A well-resourced team takes the report, acknowledges it and investigates it quickly
  • Any questions are quickly addressed between both parties 
  • A remediation plan and timeline are proposed and agreed upon 
  • The organisation fixes the issue, and the researcher confirms that the vulnerability has been addressed
  • Some agreed communications are published 

Everyone is happy. 

In practice, things are rarely this smooth. 

From the organisation’s perspective:

  • Trivial ‘beg bounty’ reports are sent, containing irrelevant vulnerabilities
  • AI-generated slop reports are sent, with meaningless or hallucinated findings
  • Huge volumes of near-identical issues, often Cross-Site Scripting, are identified by automated tooling and submitted individually, overwhelming PSIRT teams
  • Inexperienced researchers make unreasonable demands or escalate prematurely before sharing full details

From the perspective of the researcher: 

  • Organisations fail to acknowledge reports 
  • VDPs are outsourced to third parties (e.g. Bugcrowd, Zerocopter, HackerOne, etc.) where escalation to the affected organisation can be slow or ineffective
  • There is a skills gap within the organisation when it comes to triaging reports effectively
  • Reports that are acknowledged aren’t dealt with 
  • Attempts are made to silence the researcher, such as NDAs or legal threats 

In these situations, no-one wins.

AI output vs signal 

Perhaps the biggest challenge is resourcing the vulnerability disclosure program team to deal with the volume of reports. We see stories in the media about ‘AI finding more vulnerabilities than researchers’, yet digging deeper we discover that all the AI is actually doing is finding different instances of the same vulnerabilities, e.g. Cross-Site Scripting with different payloads. Finding a thousand instances of the same underlying problem does not materially improve security. It inflates reporting metrics, but it does not help teams understand or address root causes.

A small VDP team will quickly become totally overwhelmed with this style of report. What is actually needed is a single XSS finding, after which the development team checks that input validation is applied consistently across all input points.
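
To make that concrete, here is a minimal sketch of how a triage script might collapse near-identical XSS submissions by grouping on the injection point rather than the payload. The report fields and values are illustrative assumptions, not any particular platform’s schema:

```python
# Minimal sketch: collapse near-identical XSS reports into root-cause groups.
# The report fields (endpoint, parameter, payload) are illustrative assumptions.
from collections import defaultdict

reports = [
    {"endpoint": "/search", "parameter": "q", "payload": "<script>alert(1)</script>"},
    {"endpoint": "/search", "parameter": "q", "payload": '"><img src=x onerror=alert(2)>'},
    {"endpoint": "/login", "parameter": "next", "payload": "<svg onload=alert(3)>"},
    # ...hundreds more, mostly differing only in payload
]

def root_cause_key(report: dict) -> tuple:
    """Group by where the input enters the application, not how the payload is phrased."""
    return (report["endpoint"], report["parameter"])

grouped = defaultdict(list)
for report in reports:
    grouped[root_cause_key(report)].append(report)

for (endpoint, parameter), items in grouped.items():
    print(f"{endpoint} [{parameter}]: {len(items)} submissions, one underlying issue")
```

Grouping on endpoint and parameter turns a flood of submissions into a short list of input points the development team can actually fix.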

As a result, genuine human researchers with valid, important discoveries cannot get the attention of the VDP. Frustration grows on both sides. The longer-term consequence is more serious. Important vulnerabilities are dropped publicly as zero days or sold to access brokers. Either that or the reputations of both the organisation and the researcher are damaged through uncoordinated disclosure. Everyone loses. 

Some solutions 

Running an effective vulnerability disclosure programme requires deliberate design. Getting the scope right requires real effort – it’s very easy to create a tight scope, but then accidentally put key findings out of scope. 

If you then use a bug bounty platform, you risk important findings being ignored because ‘computer says no’. 

You don’t really want to know about trivial issues, but you also want to know what you don’t know. So, take care. 
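
As an illustration of how easily a tight scope goes wrong, here is a minimal sketch of a scope check. The domains and patterns are hypothetical assumptions, not a recommendation for any specific platform:

```python
# Minimal sketch of a scope check; the domains and patterns are hypothetical.
from fnmatch import fnmatch

in_scope = ["www.example.com", "api.example.com"]
out_of_scope = ["*.dev.example.com"]

def is_in_scope(host: str) -> bool:
    """Return True only for hosts the programme explicitly covers."""
    if any(fnmatch(host, pattern) for pattern in out_of_scope):
        return False
    return any(fnmatch(host, pattern) for pattern in in_scope)

# A forgotten legacy host holding production data quietly falls out of scope,
# so a valid report against it may never reach the team.
print(is_in_scope("legacy.example.com"))  # False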

Exclude AI-driven findings. There, I said it. Or at the very least, require that similar findings are consolidated.

“You’ve got a problem with input validation” is way more useful than a thousand XSS-able parameters! 
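
The value of the consolidated finding is that it points at a single choke point. A minimal sketch of what fixing the root cause once can look like, using only the Python standard library (the render_field helper is a hypothetical example, not the author’s code):

```python
# Minimal sketch of fixing the root cause once: encode untrusted values at a
# single rendering choke point. render_field is a hypothetical helper; only
# the Python standard library is used.
from html import escape

def render_field(value: str) -> str:
    """HTML-encode untrusted input, whichever parameter it arrived through."""
    return escape(value, quote=True)

# Every XSS-able parameter from the consolidated finding goes through the same fix.
print(render_field('"><script>alert(1)</script>'))
# &quot;&gt;&lt;script&gt;alert(1)&lt;/script&gt;
```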

Have flexibility in your bounty payments: do you want to burn $500k on a thousand identical XSS reports, or would you rather pay $10k to know you’ve got a widespread input validation problem?
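
The back-of-the-envelope maths, assuming a $500 payout per individual report (a figure chosen purely to match the totals above):

```python
# Back-of-the-envelope bounty maths; the $500-per-report figure is an
# assumption chosen to match the totals above.
per_report_bounty = 500
duplicate_reports = 1_000
consolidated_bounty = 10_000

print(f"Per-duplicate payouts: ${per_report_bounty * duplicate_reports:,}")  # $500,000
print(f"Single root-cause payout: ${consolidated_bounty:,}")                 # $10,000
```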

Allow escalation 

This is a tricky one to manage – the bug bounty platforms want to keep the beg bounties and AI slop away from you, but you want to know about the important stuff.

We often run into challenges when engaging with more junior analysts at bug bounty platforms. Due to their level of experience, the severity of a vulnerability is not always recognised, which makes escalation more difficult. 

One way is to offer a route for direct disclosure to the client organisation, particularly where the researcher doesn’t want a bounty. Don’t bat the researcher back to the bug bounty platform operator if the vulnerability warrants your direct attention.

Effectively, you’re using the bug bounty platform to triage the mundane but still allow the ‘good stuff’ to get directly to you. 

You can outsource vulnerability disclosure to bug bounty platforms, but you can’t outsource your responsibility to run secure systems.