Effective Vulnerability Management

In this post I want to focus on responding to new threats and vulnerabilities effectively.  I won’t solve all of the world’s VM problems, but I will highlight a few issues that may not have been considered when building a VM program.  I am not talking about incident response, attack analysis, or forensics here, as those disciplines begin after an event has already occurred.  I am referring to how an organization responds to the discovery of critical vulnerabilities within its environment, especially those with exploit code or attacks taking place “in the wild”.

Software vulnerability scanning remains the primary means of determining and measuring an organization’s security posture against external threat agents. The security group will typically scan the environment against a database of known vulnerabilities, and then task the operations team with resolving the vulnerable conditions.  Many companies are stuck with this never-ending, non-scalable, false-positive-prone, snapshot-in-time approach to improving their security posture: they attempt to measure and understand their security profile at a single frozen point in time, against a fully dynamic threat environment.

Information security must evolve beyond building a catalog of the tens of thousands of vulnerable conditions that may exist and comparing that list against tens of thousands of organizational assets in the environment.  What does a large organization expect to do with a 600-page report of unique, distinct software vulnerabilities for each asset?

  1. Nothing.  They scan periodically, review or distribute the reports, and move on.  Now it is someone else’s problem.
  2. Focus efforts on remediating “critical” vulnerabilities.  Of course, the list of critical vulnerabilities changes regularly, often influenced by politics and the work effort involved in remediation rather than by exploitation likelihood or asset value.
  3. Struggle through the list of vulnerabilities, with the security team pushing operations to fix this item, patch that system, disable this option, and uninstall that application.  Of course the list changes often, the network also changes, and the threats constantly evolve; most organizations are far too dynamic for this to be even remotely effective.
  4. Scan only for a small set of newly announced vulnerabilities, say on a given Tuesday or whenever exploit code appears in the wild, and then attempt to rapidly patch systems.

Knowing which vulnerability is most likely to be exploited in a large, complex, globally distributed environment facing a dynamic and increasingly hostile threat landscape requires a tremendous amount of foresight and research.  This includes cataloging and valuing each asset, and understanding each system’s role both in the business and in an attack situation, whether as attacker foothold, intermediary target, or objective target.
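As a toy illustration of this kind of asset-aware prioritization, the sketch below combines exploit likelihood, asset value, and attack-path role into a single score.  Every asset name, weight, and value here is hypothetical, not a recommended scheme:

```python
# Illustrative only: a toy prioritization score combining exploit likelihood,
# asset value, and the asset's role in an attack path. All names, weights,
# and scores are hypothetical.

ASSETS = [
    # (name, business value 1-10, role in an attack path)
    ("dmz-web-01", 8, "intermediary"),  # exposed, useful as a pivot
    ("hr-db-01",   9, "objective"),     # holds the data an attacker wants
    ("kiosk-17",   2, "attacker"),      # a likely initial foothold
]

ROLE_WEIGHT = {"attacker": 1.2, "intermediary": 1.5, "objective": 2.0}

def priority(exploit_likelihood: float, asset_value: int, role: str) -> float:
    """Rank remediation work by likelihood times value, biased by role."""
    return exploit_likelihood * asset_value * ROLE_WEIGHT[role]

# Example: a vulnerability with exploit code in the wild (likelihood ~0.9)
for name, value, role in ASSETS:
    print(f"{name}: priority {priority(0.9, value, role):.1f}")
```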

Zero Day Threat

When a critical vulnerability is announced and is being actively exploited with no patch available, it forces organizations into firefighting mode. The typical response has been to rapidly patch all potentially affected assets. This presents many challenges: it can be operationally disruptive and logistically difficult, and in some cases it is simply not an available option.

The Equation:

  •    Define configuration management policy
  •  + Define software policy
  •  + Audit against policies
  •  + Monitor for change
  •  + Enforce policies
  •  = Elimination of a significant percentage of vulnerabilities and exposures.

Organizations looking to achieve effective vulnerability management should enhance vulnerability scanning efforts by implementing Security Configuration Management (SCM), allowing vulnerability scanning to focus on those conditions that are outside of SCM’s scope.  This establishes a standard baseline of good practice, greatly reduces the excessive noise that vulnerability scanning creates, and supports a move towards a higher level of operational and security maturity through the following steps (a small sketch of the audit step follows the list):

  • Defining the desired state of assets against a security configuration standard.
  • Periodically auditing the environment to identify non-compliant elements.
  • Constantly monitoring for the addition of new elements and unauthorized changes.
  • Enforcing compliance by remediating non-compliant systems.
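To make the audit step concrete, here is a minimal sketch that compares an asset’s observed state against a defined baseline.  The baseline format, setting names, and collection mechanism are illustrative assumptions, not any particular SCM product:

```python
# A minimal sketch of the SCM audit step: compare an asset's observed state
# against the defined baseline and emit remediation actions rather than raw
# vulnerability findings. Setting names and values below are hypothetical.

BASELINE = {
    "telnet_service":      "disabled",
    "smb_signing":         "required",
    "password_min_length": "12",
}

def audit(observed: dict, baseline: dict = BASELINE) -> list:
    """Return a list of actionable deviations from the baseline."""
    actions = []
    for setting, desired in baseline.items():
        actual = observed.get(setting, "<not set>")
        if actual != desired:
            actions.append(f"set {setting} = {desired} (currently: {actual})")
    return actions

# Example: one non-compliant host
host_state = {"telnet_service": "enabled", "smb_signing": "required"}
for action in audit(host_state):
    print(action)
```

Note that the output is phrased as remediation work for the operations team, not as a list of findings for the security team to re-interpret.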

Any system currently deployed, or that will be deployed in the future, must adhere to a common security configuration baseline. Organizations such as NIST, the NSA, and CIS, and vendors such as Microsoft and Cisco, have already defined templates with settings for common operating environments and network elements. Any software in the environment should be catalogued, assessed, documented, understood, and explicitly white-listed, or else removed and implicitly black-listed by exclusion.
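A minimal sketch of white-listing by exclusion, assuming a software inventory has already been collected (the approved catalog below is hypothetical):

```python
# Illustrative white-listing by exclusion: anything installed but not
# explicitly approved is implicitly black-listed and flagged for removal.
# The approved catalog and the example inventory are hypothetical.

APPROVED_SOFTWARE = {"firefox", "openssh", "7-zip"}

def unauthorized(installed: set, approved: set = APPROVED_SOFTWARE) -> set:
    """Everything not explicitly white-listed is flagged for removal."""
    return installed - approved

print(unauthorized({"firefox", "limewire", "openssh"}))  # -> {'limewire'}
```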

Unlike vulnerability scanning, SCM provides operationally actionable output, since its orientation is towards maintaining system integrity by ensuring compliance with a defined standard rather than chasing a moving target.  The ability to describe deviations from policy in terms of remediation activities provides a level of efficiency that cannot be obtained through vulnerability scanning alone.  For example, if you perform a vulnerability scan against a system running an older version of IE, the result would be hundreds of vulnerabilities.

  • Do these individual vulnerabilities really matter to the business?
  • Is it important to understand each of these conditions?
  • Would remediating one or more actually change the attack surface?
  • What would the operations team be expected to do in response to such a list?
  • Is the response the same regardless of platform (e.g. Server vs Desktop, ’Nix vs Windows, Switch vs Router)?

If the organization has a policy stating that all systems running IE must run version 7 with patches X, Y & Z applied, it is immediately clear what action the operations team must take: none of the questions above need to be answered, testing requirements are simplified, and hundreds of vulnerabilities are resolved in the process.  Extrapolate this out to other system attributes, such as open ports, protocols in use, services enabled, and patches applied, as well as installed applications, and it becomes clear that tens of thousands of vulnerabilities can more easily be measured and expressed as resolutions in the form of security baselines.
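As a rough sketch of how such a policy check might look on a Windows host, one test replaces hundreds of per-vulnerability findings.  The registry path and value name vary between IE releases, so treat both as assumptions rather than a definitive implementation:

```python
# Hypothetical compliance check for the IE example above. Windows-only,
# and the registry key and "Version" value name are assumptions that vary
# across IE releases.

import winreg

IE_KEY = r"SOFTWARE\Microsoft\Internet Explorer"
REQUIRED_MAJOR_VERSION = 7  # per the example policy in the text

def ie_compliant() -> bool:
    """True if the installed IE major version meets the policy minimum."""
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, IE_KEY) as key:
        version, _ = winreg.QueryValueEx(key, "Version")
    return int(version.split(".")[0]) >= REQUIRED_MAJOR_VERSION

if not ie_compliant():
    print("Action: upgrade IE to version 7 and apply the required patches")
```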

Organizations must also incorporate a wide range of mitigating controls to shield the environment from attack prior to removing the root cause of an exploit.  Essentially, the response should be to shield the environment first, then remove the root cause, which in most cases means patching, upgrading, or removing the vulnerable item.  So how does an organization shield itself against attack?  It must incorporate and coordinate all network and host-based technologies as part of its vulnerability and threat management program.  In the case of the 2008–2009 Microsoft DNS vulnerabilities, clear workarounds were provided, including registry changes and firewall blocking rules to prevent exploitation.
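A sketch of the “shield first, then remove the root cause” step, pushing a temporary inbound block rule via the Windows Advanced Firewall CLI (the port and rule name are placeholders, not tied to any specific vulnerability):

```python
# Sketch of a stop-gap mitigating control: add a temporary inbound block
# rule while the patch is tested. Uses netsh advfirewall (Windows Vista
# and later); the port and rule name below are placeholders.

import subprocess

def block_inbound_port(port: int, rule_name: str) -> None:
    """Add a temporary inbound block rule as a mitigating control."""
    subprocess.run(
        ["netsh", "advfirewall", "firewall", "add", "rule",
         f"name={rule_name}", "dir=in", "action=block",
         "protocol=TCP", f"localport={port}"],
        check=True,
    )

# Example: temporarily block an exposed service pending a patch
block_inbound_port(4444, "temp-shield-pending-patch")
```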

The Major Challenges

Security and operations teams face major challenges that they will need to overcome.  These challenges are not simple or minor; however, they are not insurmountable either.  They will take concerted and focused effort to overcome, and will often involve full-on cultural shifts for some departments and organizations.

  • Operations teams are primarily driven and measured by the availability of services that they are responsible for provisioning.
  • Security teams are driven and measured by their ability to maintain confidentiality and integrity, reducing the number of breaches or policy violations.
  • Logistical challenges are present due to most organizations’ inability to distribute patches to mobile, non-managed, and non-Microsoft assets.
  • Technical challenges abound due to the commonly heterogeneous computing environment. Most organizations have difficulty gaining visibility into what actually requires a patch, what can handle a patch, and what has a supported patch management technology or process.
  • Exposed applications, middleware and databases offer enormous complexity.
  • Internally developed applications, especially web-based or externally exposed components, have many hidden dependencies.
  • Companies face testing challenges.  They may be forced to deploy a patch that has unknown adverse effects on the computing environment.
  • Patches are not always available, and the vulnerability may have active exploit code in the wild.  Organizations that have become reliant on patch management technologies and processes are often ill-equipped to implement mitigating prevention mechanisms when confronted with vulnerability conditions that do not have a corresponding patch.
  • The exploit development and patching windows are constantly shrinking.  In 2000–2001, the time available to deploy a patch easily averaged 30–90 days; it often took that long for attackers to reverse engineer a patch, organize distribution mechanisms, and deploy their wares.  By 2009, this window had shrunk to between 24 hours and 10 days of a patch’s release.
  • We also now have “wonderful” tools such as Metasploit to accelerate exploit development and distribution.

I hope this post helps someone out there who is thinking about implementing Vulnerability Management and is uncertain of the challenges they may face.  It is important to approach this tactical process strategically, considering which other tactical processes can mesh with and enhance it.
