There are many ways to assess vulnerabilities. CERT/CC produces a numeric score ranging from 0 to 180, considering factors such as whether the Internet’s infrastructure is at risk and what sort of preconditions are required to exploit the vulnerability. The SANS vulnerability analysis scale considers whether the weakness is found in default configurations of client or server systems. Microsoft’s proprietary scoring system tries to reflect the difficulty of exploitation and the overall impact of the vulnerability in broad terms. I prefer to use CVSS, the Common Vulnerability Scoring System, because it provides a uniform and reliable assessment framework. CVSS takes into account what I consider the three key information elements in assessing a vulnerability:
- BASE components: Represent the intrinsic and fundamental characteristics of a vulnerability that are constant over time and across user environments. These include Access Vector (how the vulnerability can be reached and exploited), Access Complexity (how difficult it is to exploit), Authentication (what level of authentication is required to exploit it), and the impact of successful exploitation on confidentiality, integrity and availability.
- TEMPORAL components: Represent the characteristics of a vulnerability that change over time but not among user environments. These include Exploitability (how far exploit code development has progressed), Remediation Level (what fixes are available), and Report Confidence (how reliable the sources of information are).
- ENVIRONMENTAL components: Represent the characteristics of a vulnerability that are relevant and unique to a particular user’s environment. These include Collateral Damage Potential (an estimate of potential costs), Target Distribution (an estimate of how prevalent vulnerable systems are in the environment), and the organization’s confidentiality, integrity and availability requirements.
The purpose of the CVSS base group of metrics is to define and communicate the fundamental characteristics of a vulnerability. This objective approach to characterizing vulnerabilities provides users with a clear and intuitive representation of a vulnerability. Analysts can then invoke the temporal and environmental groups to provide contextual information that more accurately reflects the risk to their unique environment. This allows for more informed decisions when trying to mitigate risks posed by the vulnerabilities.
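To make the base metric group concrete, here is a minimal sketch of the CVSS v2 base score equation, using the metric weights published in the CVSS v2 specification (the version whose environmental metrics, Collateral Damage Potential and Target Distribution, are described above). The vector-string parsing and the example CVE are illustrative choices of mine, not taken from the text.

```python
# CVSS v2 metric weights from the v2 specification.
ACCESS_VECTOR = {"L": 0.395, "A": 0.646, "N": 1.0}     # Local / Adjacent / Network
ACCESS_COMPLEXITY = {"H": 0.35, "M": 0.61, "L": 0.71}  # High / Medium / Low
AUTHENTICATION = {"M": 0.45, "S": 0.56, "N": 0.704}    # Multiple / Single / None
CIA_IMPACT = {"N": 0.0, "P": 0.275, "C": 0.660}        # None / Partial / Complete

def cvss2_base_score(vector: str) -> float:
    """Compute a CVSS v2 base score from a vector such as
    'AV:N/AC:L/Au:N/C:N/I:N/A:C'."""
    m = dict(part.split(":") for part in vector.split("/"))
    # Impact combines the three CIA sub-scores.
    impact = 10.41 * (1 - (1 - CIA_IMPACT[m["C"]])
                        * (1 - CIA_IMPACT[m["I"]])
                        * (1 - CIA_IMPACT[m["A"]]))
    # Exploitability combines access vector, complexity and authentication.
    exploitability = (20 * ACCESS_VECTOR[m["AV"]]
                         * ACCESS_COMPLEXITY[m["AC"]]
                         * AUTHENTICATION[m["Au"]])
    f_impact = 0.0 if impact == 0 else 1.176  # zero-impact vulns score 0
    score = ((0.6 * impact) + (0.4 * exploitability) - 1.5) * f_impact
    return round(score, 1)

# Worked example from the CVSS v2 specification (the Apache chunked-encoding
# flaw, CVE-2002-0392): remotely reachable, low complexity, no authentication,
# complete loss of availability only.
print(cvss2_base_score("AV:N/AC:L/Au:N/C:N/I:N/A:C"))  # 7.8
```

Because the base metrics are constant over time and across environments, two analysts scoring the same vulnerability from the same vector should always arrive at the same base score; the temporal and environmental groups then adjust it.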
Questions that should be asked when preparing or performing a vulnerability assessment include:
- What are my most valuable systems and data?
- What are my most vulnerable systems and data?
- What systems and data is the business charged with protecting from a regulatory perspective?
- What services is the business charged with protecting from a regulatory or contractual perspective?
- What are the basic characteristics of the vulnerability?
- How serious is the impact of the vulnerability if exploited?
- How many vulnerable systems are present in the environment?
- Are there any preconditions or requirements for exploitation?
- What is the most likely attack scenario and target base?
- How far has exploit code development progressed?
- Has there been an actual attack in the wild?
- How will I learn of further exploit code developments?
- Are there characteristics that could serve as indicators of attack?
- Are there IDS signatures that may help to detect exploitation?
- Are there anti-virus signatures that may help to prevent exploitation?
- What work-arounds, reconfigurations or other risk mitigation strategies are applicable?
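Several of the questions above — how far exploit code has progressed, what fixes or work-arounds exist, and how reliable the reports are — correspond directly to the CVSS v2 temporal metrics. The sketch below applies the temporal multipliers from the v2 specification to an illustrative base score (the 7.8 value is my assumption, not from the text).

```python
# CVSS v2 temporal multipliers from the v2 specification.
EXPLOITABILITY = {        # How far has exploit code development progressed?
    "U": 0.85,    # Unproven that exploit exists
    "POC": 0.9,   # Proof-of-concept code
    "F": 0.95,    # Functional exploit exists
    "H": 1.0,     # High: widespread, reliable exploitation
    "ND": 1.0,    # Not defined
}
REMEDIATION_LEVEL = {     # What fixes or work-arounds are available?
    "OF": 0.87,   # Official fix
    "TF": 0.90,   # Temporary fix
    "W": 0.95,    # Workaround
    "U": 1.0,     # Unavailable
    "ND": 1.0,    # Not defined
}
REPORT_CONFIDENCE = {     # How reliable are the sources of information?
    "UC": 0.90,   # Unconfirmed
    "UR": 0.95,   # Uncorroborated
    "C": 1.0,     # Confirmed
    "ND": 1.0,    # Not defined
}

def cvss2_temporal_score(base: float, e: str, rl: str, rc: str) -> float:
    """Adjust a CVSS v2 base score with the temporal multipliers."""
    return round(base * EXPLOITABILITY[e] * REMEDIATION_LEVEL[rl]
                      * REPORT_CONFIDENCE[rc], 1)

# A base score of 7.8 with a functional exploit circulating, an official
# fix released, and confirmed reports drops to a temporal score of 6.4.
print(cvss2_temporal_score(7.8, "F", "OF", "C"))  # 6.4
```

Note that the temporal multipliers can only lower (or preserve) the base score, which matches their purpose: as fixes ship and confidence settles, the time-sensitive risk declines even though the underlying flaw is unchanged.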
Every organization is different, and although many use the same tools, they often deploy and configure them in differing ways. The mandates, values and mission statements of organizations all differ, which means that any changes to the environment will probably also be unique to the particular organization. That said, there are some commonalities. Mitigation is the act of lessening, or attempting to lessen, both the likelihood of exploitation and the seriousness or extent of damage that an attack can inflict. Mitigation development can take place before a patch is ready, as a patch is announced, and after the patch has been deployed if significant risk still remains.
Risk can never be totally eliminated, but it can be managed and mitigated to lessen its likelihood and/or impact. Once vulnerabilities have been assessed, those identified as unacceptable to management will require the development of a risk mitigation strategy. A mitigation strategy refers to the additional efforts that must be taken to lower the likelihood of the risk occurring and to minimize the impact if the vulnerability were to be exploited.
Any risk mitigation strategy should include:
- Roles and responsibilities for developing, implementing and monitoring the strategy.
- Timelines for notification, remediation, verification, validation and escalations.
- Conditions that must be present for the risk level to be acceptable.
- Resources required to carry out the planned actions.
The vendor will hopefully cooperate with the vulnerability researcher by acknowledging the vulnerability, examining it closely, identifying its root cause, developing solutions for the underlying problems, and testing that the fix works and doesn’t introduce additional vulnerabilities. Given the complexity of modern software, it is not surprising that patch development takes considerable time, effort and cash to deliver. The researcher will hopefully remain patient and silent about the vulnerability while the vendor works towards a solution.
The vendor may not have enough information from the initial report, and may not know the configuration of the researcher’s vulnerable system. Armed with the information on hand, the triage team kicks into high gear. Analysis becomes a detailed, complicated process, looking at all supported versions of the product to confirm the vulnerability, understand the complete picture, and determine the impact it could have on customers.
In the midst of investigations, the vendor continues to scour the Internet to determine whether public exploits are circulating. The team is now thinking like hackers, looking at the vulnerability from the hackers’ perspective. If it’s a buffer overflow and it’s Internet-facing, the alarms go off, and they have to figure out whether it’s easy to exploit. The product team also investigates whether there are additional vulnerabilities in associated code. It becomes a complete audit of the vulnerable program and all associated programs from the vendor.