A security researcher has made contact. What do I do?
Businesses say that they take the security of customer data seriously, but when presented with a vulnerability they are often more concerned about their own reputation than the security of their customers.
Handle disclosure correctly and you can do both: protect your customers and protect your reputation. Do it wrong and you damage both.
By far the most painful part of vulnerability research is responsible disclosure. If we find something bad in a smart thing, it would be fairly irresponsible to publish a method to do bad things without giving the manufacturer/vendor/service provider a fair chance to fix it first.
But in order for them to fix it, we need to safely and securely get the information to them first.
Some manufacturers have published the method by which the security team can be contacted. That makes life really straightforward for both us and them.
However, many manufacturers don’t have a clue what a security vulnerability is, let alone a security researcher.
We’ll try email addresses initially: the ‘contact us’ details on your website, maybe security@<yourdomain>
It will be a simple email asking how we disclose a security vulnerability. We won’t disclose the exact vulnerability at this stage, but we usually explain the consequences of it. I’ve also started adding a sentence explaining that the email should be forwarded to the internal security team or to a director/VP of the business. I appreciate that a customer service operator looking after inbound customer enquiries may not be equipped to deal with a security vulnerability report.
Whatever you do, don’t ignore this email. Respond immediately to prevent the issue escalating.
Here’s some content that you might consider replying with:
Thanks for your email. We are investigating this as a matter of urgency.
Would you like our PGP key to encrypt communications, or would you prefer another method of securing our discussion?
Could you be available for a call?
Communicating is critical. If you ignore the report, you will antagonise the researcher. It is often frustration with vendor communications that leads researchers to publish vulnerabilities publicly.
Briefing Your Staff
It’s hard for a customer contact centre operator to handle a security report correctly. You need to ensure that certain key words are flagged so that these communications are intercepted and escalated.
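As a rough illustration of the keyword flagging idea, here is a minimal sketch of a filter a contact-centre tool could run over inbound messages. The keyword list and function names are my own assumptions, not a product recommendation; a real system would also need to handle phrasing variations and false positives.

```python
# Illustrative sketch: flag inbound customer messages that mention
# security-related terms so they can be escalated to the security team.
# The keyword list below is an assumption; tune it for your business.

SECURITY_KEYWORDS = {
    "vulnerability",
    "security researcher",
    "responsible disclosure",
    "exploit",
    "data breach",
    "pgp",
}

def needs_escalation(message: str) -> bool:
    """Return True if the message mentions any security keyword."""
    text = message.lower()
    return any(keyword in text for keyword in SECURITY_KEYWORDS)

# Example: a disclosure attempt is flagged, a routine enquiry is not.
print(needs_escalation("How do I report a security vulnerability?"))  # True
print(needs_escalation("Where is my order? It has not arrived."))     # False
```

Even something this crude catches the most common failure mode: a researcher’s first email sitting unread in a general enquiries queue.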
This is the most common screw-up by far – the researcher tries to make contact, yet no-one listens because they don’t understand the report and don’t know what to do with it.
Out of sheer frustration, the researcher makes public contact on social media. Others may be alert to this, particularly sections of the security media. As a result, all eyes are already on your business and the potential security flaw.
Social media is often used as an initial contact point by researchers. Is your social media agency briefed? So much damage can be done to your brand by a social media agent who doesn’t know how to respond to a researcher.
Some businesses publish details of their press office, often also an outsourced media agency. Researchers may contact these, so ensure that you have an escalation process when a third party agency is involved.
Making Contact Easy
Create an email address of security@<yourdomain> and MONITOR it. Ensure those emails get straight to your security team.
Publish details of how you would like to be contacted at www.yourdomain.com/.well-known/security.txt
Publish a vulnerability disclosure policy on your web site too. There’s some great guidance here: https://www.ntia.doc.gov/files/ntia/publications/ntia_vuln_disclosure_early_stage_template.pdf
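For reference, a minimal security.txt (as standardised in RFC 9116) might look like the following. The addresses and URLs are placeholders; substitute your own:

```text
# Served at https://www.yourdomain.com/.well-known/security.txt
Contact: mailto:security@yourdomain.com
Expires: 2026-12-31T23:00:00Z
Encryption: https://www.yourdomain.com/pgp-key.txt
Policy: https://www.yourdomain.com/vulnerability-disclosure-policy
Preferred-Languages: en
```

The Contact and Expires fields are required by the standard; the Encryption field lets a researcher find your PGP key without having to ask.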
Accepting Constructive Criticism
It can be very difficult for a business to accept an alert that their security isn’t up to standard. Common reactions to perceived criticism are denial, anger and aggression towards the researcher.
THESE ARE NOT HELPFUL and will generally make the situation worse.
Consider for a moment the motivations of the researcher:
- They’ve often done the work in their own time for little to no reward or payment.
- They are generally trying to make security better.
- If they were just trying to embarrass you, hack you or have others hack you, they wouldn’t have contacted you at all.
It’s rare that researchers ‘go too far’ and access data that they shouldn’t have. Threatening a researcher with legal action is therefore likely to result in media attention and cause significant, needless damage to your brand: the classic Streisand effect.
That said, there are times when less ethical researchers do copy excess data, far more than was necessary to prove the vulnerability. In those very rare cases, it’s time to get legal advice.
Careful negotiation can defuse most situations and result in data being deleted/returned and no reputational harm occurring.
We’ve had a crazy range of responses over the years from vendors:
“Are you trying to sell me something?”
No, we’re trying to tell you about a security vulnerability!
“The product is end of life. We don’t care”
“Are you working for our competitor?”
“Cease & Desist”
“We’ve had no reports of our product being hacked before”
“Without a support contract we cannot accept your report”
“We don’t consider that to be a vulnerability”
So, take the report constructively. Acknowledge that someone at your business or in your supply chain may have screwed up. Don’t shoot the messenger.
Fixing The Vulnerability
So far, all we’ve discussed is how to respond. It’s also important to manage the process of actually fixing the bug.
Accepted practice usually results in public disclosure 90 days after first contact, unless the bug is fixed sooner.
Simply stating that you’ll ‘look into it’ or ‘fix it at the next release’ isn’t good enough.
You will need to commit to a timeline to investigate, plan and release a fix.
Researchers understand that vulnerabilities can be complex to fix. If you communicate regularly with the researcher, they can see that you are serious about fixing the issues.
However, simply stating that it’s going to take longer than expected isn’t enough. Why is it taking longer than expected? Can you implement a workaround that solves the problem temporarily?
If the researcher feels that you aren’t taking the vulnerability seriously enough, or committing enough resource, you risk antagonising them. Uncoordinated public disclosure may result.
Remember: they are helping you protect your customers’ data.
Some researchers will ask if you have a bug bounty scheme, where the business offers reward payments to security researchers as a ‘thank you’ for their findings.
If you don’t have a scheme, don’t try to rush one through for this one researcher.
Many researchers will be happy to receive a ‘thanks’, public credit for the finding once fixed and maybe you could send them some ‘swag’.
A suitable ‘swag’ gesture may be related to your business: a t-shirt, a mug, maybe even some free credit for your service.
If you make smart devices, how about sending a couple to the researcher? Having them ‘on side’ for the future may even get some new vulnerabilities found for you.
Keep marketing & PR out of initial discussions
I worked in marketing for a while. One’s first instinct is to protect the brand at all costs. This is unlikely to end well.
You will be publicly judged on your response to the security report. If you handle it well, you will be seen as a cool vendor who really does take security seriously and works to protect customer data.
If you handle it badly (saying ‘we take security seriously’ is a no-no BTW) you will attract the interest of other researchers, less ethical hackers and the media.
You protect your brand better by engaging with the researcher. Take external advice from specialist cyber incident management firms if you feel ill-equipped to deal with this process.
There will be a time for a public statement to be made, at which point you will need your PR and marketing specialists.
My advice for your statement:
- Do NOT attempt to play down the significance of the vulnerability just to protect your brand. This will go wrong
- If you do feel that the researcher hasn’t understood mitigating factors, explain them clearly
- Explain what you’ve done to mitigate this and future issues. Have you re-trained your developers or maybe changed your supplier?
- Detail what you’ve done to protect customers going forward
- Thank the researcher and credit them publicly
We’ve had initial vulnerability reports intercepted and handled by an outsourced PR agency. This went sideways fast. Fortunately, the vendor’s IT security chief realised and stepped in just before matters got out of hand.
Reminder: Keep PR and marketing out of initial discussions.
How bad is it?
Once you’ve established a dialogue with the researcher, it’s time to understand the vulnerability and its significance.
What does it affect? What is the method? What requests / parameters / data are affected?
Is there any evidence of data having been previously compromised? (This is only relevant if there is evidence, and you will need to consider mandatory breach notification / DPA reporting.)
What’s the worst case scenario? What in their opinion is the best case?
What is the advice of the researcher for fixing the vulnerability? Don’t ask for too much advice; it’s not as if you’re paying for it!
Who are you dealing with?
Search online for the researcher. Find out who you are dealing with.
If their name is available, you may be able to find other vulnerability reports they’ve made.
Is this a kid, or is it someone well known in the industry? Does the researcher have a media presence?
If they are well known in the security research community, they are more likely to follow a recognised process with vulnerability reporting and disclosure. They have an ethical reputation to protect too.
If they aren’t well known or you can’t determine who they are, you will have to manage the process more carefully: Ask clearly what they would like to happen and what steps they would like to take.
I strongly caution against requiring a non-disclosure agreement, unless the researcher suggests one, or a significant bug bounty is to be paid.
Attempting to silence a researcher is a very bad idea. They will want to tell the story of the cool vulnerability they found and the cool vendor who fixed it.
Or they will just go public and talk about the uncool vendor who tried to slap them down with an NDA.
A couple of scenarios
Here’s one that went well: https://www.pentestpartners.com/security-blog/pwning-a-siemens-scalance-ics-switch-through-arm-reversing/
Siemens had a vulnerability reporting process, acknowledged the report quickly, kept in touch, asked for a little more time as the bug was larger than we thought, fixed it, informed customers, and credited us.
And here’s one that went badly: https://www.pentestpartners.com/security-blog/flir-fx-lorex-video-stream-hijack-disclosure-train-wreck/
The vendor failed to respond promptly, failed to address the issue quickly, sent out a factually incorrect press statement and ended up damaging their reputation in the media.
Be like Siemens. Be a cool vendor.
In summary:
- Make it easy for security researchers to contact you.
- Accept their reports in good faith and act on them.
- Over-communicate with the researcher.
- Coordinate disclosure with the researcher.
So, that’s vulnerability disclosure covered. However, you may be approached by a security researcher or hacker who claims that they’ve either found your customer data online, or have it in their possession.
In this case it’s possible that you may have already been breached, so a different course of action may be required.
First, call your insurer. If you take action without their input, you may void your cover. Specialist cyber insurers also have access to incident response firms who can help you manage a potential breach. There may also be regulatory reporting requirements, particularly in light of GDPR.