Blog: Opinions

CVSSv3. What’s changed? …or why even bother?

David Lodge 12 Jun 2015


After three years in the making, CVSS version 3 was released to an eager security and incident response audience this week. The trouble is that instead of being pleased I can’t help but feel disappointed; it appears to fix very few of the faults of v2.



For those who aren’t from a risk management background, let me explain: CVSS, the Common Vulnerability Scoring System, is a score from 0 to 10 that describes how risky a vulnerability is and its potential impact on services.

CVSSv2 has been in use for a long time (since 2006) and in my opinion has performed its job moderately badly (see what I did there?) for those long years. It has some major problems with the metrics it uses and has taken ages to be updated to accommodate modern risk profiles.

It does offer one major advantage though. It gives a simple number that can be slotted into a risk-based score sheet, to prioritise how risks should be resolved. The scores are also used in some audit considerations as a cut-off line for an audit “fail”.

To score something, your risk assessor will review the level of impact, the difficulty of exploitation and the physical location of exploitation to create a base metric. This is then given to the organisation, which can rescore it to match their environment. For example, the risk of attack to a network will be reduced if that network is only accessible to five security-checked people, rather than to hordes of users.
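The arithmetic behind that base metric is public. As a rough illustration (not any official implementation), here is a minimal sketch of the CVSSv2 base-score equation, using the metric values published in the v2 specification:

```python
# Minimal sketch of the CVSSv2 base-score equation, using the metric
# values from the v2 specification. Temporal and environmental
# adjustments are omitted for brevity.

# Metric value lookup tables (CVSSv2 specification)
ACCESS_VECTOR = {"L": 0.395, "A": 0.646, "N": 1.0}     # Local/Adjacent/Network
ACCESS_COMPLEXITY = {"H": 0.35, "M": 0.61, "L": 0.71}  # High/Medium/Low
AUTHENTICATION = {"M": 0.45, "S": 0.56, "N": 0.704}    # Multiple/Single/None
CIA = {"N": 0.0, "P": 0.275, "C": 0.660}               # None/Partial/Complete

def cvss2_base(av, ac, au, c, i, a):
    """Return the CVSSv2 base score (0.0-10.0) for the given metric letters."""
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    exploitability = 20 * ACCESS_VECTOR[av] * ACCESS_COMPLEXITY[ac] * AUTHENTICATION[au]
    f_impact = 0 if impact == 0 else 1.176
    score = ((0.6 * impact) + (0.4 * exploitability) - 1.5) * f_impact
    return round(score, 1)

# A remotely exploitable, no-auth, total-compromise bug scores the maximum:
print(cvss2_base("N", "L", "N", "C", "C", "C"))  # 10.0
```

Note that nothing in the formula itself tells you whether a given disclosure is “Partial” or “Complete” confidentiality; that judgement call is exactly where the subjectivity discussed below creeps in.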

Sounds Great. What’s Wrong with It?

A lot of the problems with CVSSv2 can be found in Risk Based Security’s open letter to FIRST (the organisation that owns CVSS).

In essence, although the system is meant to be objective it has several problems with subjectivity. A perfect example is the POODLE vulnerability. This got scores ranging from 2.6 to 5.4 with 3.2 being the median, and in my opinion the appropriate value.

This disparity is due to the woolliness in the definitions of what constitutes complexity and confidentiality, and what makes them “Low” and “High” risks.

Most organisations tend not to apply the environmental metric modifiers to account for their unique situation, meaning that several vulnerabilities will be given a massively inappropriate score. The CVSSv2 framework makes this hard to correct, as only some of the modifiers can be altered for the environment.

There is also no way to indicate that different attacks can be daisy-chained together, increasing the overall risk level.

What does CVSSv3 change and fix?

It adds some useful catch-up stuff, like:

  • New Physical attack vector
  • New field for User Interaction (i.e. do we need the mark to visit a page?)
  • New field for Scope (i.e. changing execution environment)

It also allows the base score to be modified to be more reflective of the environment (which will be useful if the recipients have a full risk management function). For example, changing the complexity of an attack because the environment has extra protective measures.
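To show how the new metrics feed into the number, here is a sketch of the CVSSv3.0 base-score equation, with metric values taken from the v3.0 specification. Note how the Scope metric changes both the impact curve and the weight of Privileges Required:

```python
import math

# Sketch of the CVSSv3.0 base-score equation (specification values).
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}  # note the new Physical vector
AC = {"L": 0.77, "H": 0.44}
UI = {"N": 0.85, "R": 0.62}                        # the new User Interaction field
CIA = {"N": 0.0, "L": 0.22, "H": 0.56}

def privileges_required(pr, scope_changed):
    # Privileges Required is weighted more heavily when the exploit
    # escapes its original scope
    return {"N": 0.85,
            "L": 0.68 if scope_changed else 0.62,
            "H": 0.50 if scope_changed else 0.27}[pr]

def roundup(x):
    # CVSSv3's "round up to one decimal place"
    return math.ceil(x * 10) / 10

def cvss3_base(av, ac, pr, ui, scope, c, i, a):
    """Return the CVSSv3.0 base score for the given metric letters."""
    changed = scope == "C"
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    if changed:
        impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
    else:
        impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * privileges_required(pr, changed) * UI[ui]
    if impact <= 0:
        return 0.0
    total = (1.08 if changed else 1.0) * (impact + exploitability)
    return roundup(min(total, 10))

# Remote, unauthenticated, total compromise, scope unchanged:
print(cvss3_base("N", "L", "N", "N", "U", "H", "H", "H"))  # 9.8
```

The machinery is more elaborate than v2’s, but the inputs still depend on the same subjective High/Low judgements, which is the crux of the complaint below.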

What does CVSSv3 not do?

CVSSv3 doesn’t fix the major disparities with data confidentiality. Instead, the whole flawed section is exactly the same. Here’s an extract from the document that defines the confidentiality impact metric:

  • High (H): There is total loss of confidentiality, resulting in all resources within the impacted component being divulged to the attacker. Alternatively, access to only some restricted information is obtained, but the disclosed information presents a direct, serious impact. For example, an attacker steals the administrator’s password, or private encryption keys of a web server.
  • Low (L): There is some loss of confidentiality. Access to some restricted information is obtained, but the attacker does not have control over what information is obtained, or the amount or kind of loss is constrained. The information disclosure does not cause a direct, serious loss to the impacted component.

So, we can assume that a web server banner is Low and a password is High; but what about a phone number, or an address? What about random data that could be a password, but it’s more likely to be a banner? These all tend to get overloaded on the Low rating.


I’m cheesed off that the Forum of Incident Response and Security Teams (FIRST), the group behind the scoring system, didn’t seem to take on board the years of feedback and apply even a one level increase in the confidentiality impact level that would have resolved many of the problems I’ve mentioned.

CVSSv3 is not a reimagining or even a fix. It fails to address major flaws in a widely used system and appears to be an update designed to placate critics rather than address needs and concerns.

So what’s the answer? I really don’t know.

There is a requirement for an objective, numeric risk score. This means having a system that is truly objective and doesn’t force a consultant, testing a system they will never use, to work out where the line sits between a “High” and a “Low” confidentiality impact.

Will any of this change my behaviour? Probably not. Like most security consultants I’ll end up using CVSSv2 AND CVSSv3 as best I can, because they provide a number score and multiple frameworks use them, and, to be honest, most people really only want a high, medium, low scoring range.