Vendor Vulnerability Analysis – How To Part 3


Vendors deliver patches, updates and upgrades in very different ways.  Some require their users to remain informed and patch manually.  Others will notify their constituents by email that a patch or update is available, and provide download links for the patches.  Still others will provide notifications in the software, and allow the user to configure how these updates are handled.

These strategies work well for one-off applications, home users, and the smallest of businesses; however, medium to enterprise-class environments are moving toward automated delivery for several reasons.  They are looking for methods to centralize testing, deployment and inventory so as to maintain a standard build.  This build is kept as consistent as possible in order to control costs, minimize special handling requirements when troubleshooting, simplify system upgrades and replacement, and meet regulatory requirements regarding system hardening and access controls.

Vendors also need to be cognizant of the fact that malicious users will send out authentic-looking emails to take advantage of the vendor’s customers, or will seek ways to attack auto-update mechanisms if the vendor pursues these methods of delivery.


As vendors release patches to fix vulnerabilities, these too need to be assessed, using the same methods used to assess the vulnerability they were meant to address.  Look for answers to the same questions, but this time the focus is on how far exploit code has developed or is expected to develop, and how quickly we need to patch, rather than what damage can be done.  Not that the other questions are irrelevant; rather, there is now immediacy in fixing the root cause of the problem: removing the vulnerability.

  • How serious is the impact of the vulnerability if exploited?
  • How many vulnerable systems are present in the environment?
  • What is the most likely attack scenario and target base?
  • How far has exploit code development progressed?
  • What work-arounds, reconfigurations or other risk mitigation strategies remain applicable, or should be removed?
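To make the questions above concrete, the answers can be turned into a rough patch-priority score.  The following sketch is purely illustrative: the categories, weights, and arithmetic are assumptions for demonstration, not a standard scoring model such as CVSS.

```python
# Hypothetical patch-priority sketch.  The weights and categories below are
# illustrative assumptions, not an industry-standard scoring model.
from dataclasses import dataclass

@dataclass
class PatchAssessment:
    impact: int                 # 1 (low) .. 5 (critical) if exploited
    vulnerable_hosts: int       # count of affected systems in the environment
    exploit_stage: str          # "none", "proof_of_concept", "working", "weaponized"
    mitigations_in_place: bool  # work-arounds already reducing exposure

EXPLOIT_WEIGHT = {"none": 1, "proof_of_concept": 2, "working": 4, "weaponized": 8}

def patch_urgency(a: PatchAssessment) -> int:
    """Higher score -> patch sooner.  Purely illustrative arithmetic."""
    score = a.impact * EXPLOIT_WEIGHT[a.exploit_stage]
    score += min(a.vulnerable_hosts, 100) // 10  # cap the host-count influence
    if a.mitigations_in_place:
        score //= 2  # existing mitigations buy some time
    return score

# Critical flaw, 250 hosts, working exploit code, no mitigations in place:
print(patch_urgency(PatchAssessment(5, 250, "working", False)))  # → 30
```

The point is not the specific numbers, but that each question in the list above feeds a field of the assessment, so the decision to expedite a patch can be made consistently rather than ad hoc.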



Where patch delivery focused on the vendor bringing a patch to the public, patch deployment is internal to an organization, and focuses on getting the patch out to the endpoint.  There are many deployment tools capable of performing the task, some of them free, some of them commercial.  Many do not scale to enterprise class.  Do your homework before committing to one product over another.

Although vendors test their patches thoroughly, and would not intentionally release code that does not function as expected or conflicts with other software, it happens from time to time.  There is no excuse not to test against at least the basic and critical components that make up or enhance your business.  This is your responsibility.

It is also critical to patch as quickly as possible, because the meter is running.  The meter was started back when the researcher started playing with the code, and it has been ticking away relentlessly.  If there wasn’t code out there before the patch was released, it is almost certain that someone, somewhere, is reverse engineering the patch to determine what it fixes, and how.  Once they know this, they can readily produce code that can exploit the vulnerability.

The key message in this section is TEST & DEPLOY SECURITY PATCHES, ASAP.


Throughout the lifecycle of a vulnerability, its exploit code will develop: from proof-of-concept code that simply attempts to show that what is claimed can be achieved, to working code which shows that not only can code be executed, it can carry and deliver a destructive payload, to weaponized attack modules like those prepared for Metasploit or other push-button frameworks.

Although not identified in the vulnerability lifecycle, exploit code development is worth discussing in this context.  As stated above, the meter started running as soon as the researcher started probing the vulnerable code with a “fuzzer”.  Fuzzers are tools that feed unexpected inputs to software programs and log the results.  The intention is to change the output, influence other resources that should not be accessible, overwrite or access segments of memory, or crash the running program, replacing it in memory with another program.
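The idea behind fuzzing can be shown in a few lines.  The sketch below is a toy: `parse_record` stands in for whatever code is under test (it carries a deliberate bug, assuming at least four bytes of input), and real-world fuzzers such as AFL or libFuzzer are vastly more sophisticated about generating inputs and detecting faults.

```python
# Minimal fuzzing sketch.  "parse_record" is a hypothetical target with a
# deliberate bug; real fuzzers are far more sophisticated than this.
import random

def parse_record(data: bytes) -> int:
    # Bug: assumes the input is at least 4 bytes long.
    return data[0] | (data[1] << 8) | (data[2] << 16) | (data[3] << 24)

def fuzz(target, iterations=1000, seed=1):
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        # Feed random-length, random-content input to the target...
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(8)))
        try:
            target(data)
        except Exception as exc:
            # ...and log every input that makes it misbehave.
            crashes.append((data, type(exc).__name__))
    return crashes

crashes = fuzz(parse_record)
print(len(crashes), "crashing inputs found")
```

Every input shorter than four bytes crashes the toy parser; each logged crash is a lead the researcher then investigates to see whether the fault is exploitable.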

Why do people keep finding vulnerabilities?  There are at least as many different motivations as there are ways to fall into the field of vulnerability research and exploit code development.  The going rate for a good security vulnerability can help an undergrad pay their tuition, or a security professional put a down payment on a car, and that’s just if they sell it to a legitimate security vendor, which pays anywhere from $2,000 to $10,000 a pop.

The underground can be even more lucrative.  A black hat researcher can get $20,000 to $30,000 for a “weaponized” exploit.  (See Getting Buggy with the MOBB.)  The two markets share potential impact: The more targets a vulnerability can affect if converted into an exploit, the more it pays.

ImmunitySec, 3Com/TippingPoint, iDefense, and Digital Armaments are among the security firms who do business with vulnerability researchers and exploit writers.   They pay for information about vulnerabilities so that they are able to better protect their clients from exploits, and work with vendors to help them develop fixes.  It’s a controversial practice.  IDefense has been criticized for reselling the vulnerability and exploit information it buys, as well as for its promotions.  It recently held a contest that paid $10,000 for remotely exploitable Windows vulnerabilities, for example.

Professional Researcher:  A professional researcher is often someone that has made a concerted effort to know and understand programming code and the math behind it.  They are curious by nature, and love the thrill of discovery.  They are often well credentialed, well educated in their field of study, and desire recognition of their skills and talents.  They are generally employed by commercial ventures, but may also offer their services for free in order to garner recognition and speaking engagements.

Independent Researcher:  An independent researcher is a professional researcher that has struck out on their own.  They share the curiosity, credentials, and desire for recognition described above, and may offer their analysis services for free in order to garner recognition and paid speaking engagements.

Research Hobbyist:  A hobbyist shares many of the characteristics of the professional researcher, but may have foregone formal training and opted for more hands-on experimentation.  They are usually more driven by notoriety amongst a clique of like-minded individuals than general fame and fortune, although I am sure that most would not pass up a lucrative speaking engagement in front of their peers.

Black Hat Researcher:  A generalization, as any of the above could actually be a black hat.  The term black hat refers to the tendency of the individual to disrupt the operations of businesses in pursuit of personal gain, glory, or a feeling of power or justice.

Deployment Validation & Remediation Verification

These two terms are often used interchangeably; however, in the field of Vulnerability Management, they are distinct activities.

The term validation is defined as:  The process of checking if something satisfies certain criteria, meeting the needs of the intended end-user or customer.

The term verification is defined as:  The act of reviewing, inspecting or testing, in order to establish and document that a product, service or system meets regulatory or technical standards.

Validating that patches have been deployed is the first step in Quality Control regarding vulnerability management.  If standard testing procedures were followed, the patches that were deployed were examined to ensure that they did what they claimed to do, and did not break anything else when applied.  Measuring their presence in the environment is a reasonable indicator of compliance with management’s decision to deploy the patches.
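Deployment validation can be as simple as comparing an inventory feed against the required patch levels.  In the sketch below, the inventory dictionary and package names are hypothetical stand-ins for whatever the organization's deployment tool reports, and the version comparison is deliberately naive.

```python
# Deployment-validation sketch.  The inventory dict is a hypothetical feed
# from a deployment tool; package names and versions are illustrative.
REQUIRED = {"openssl": "3.0.13", "httpd": "2.4.58"}

inventory = {
    "web01": {"openssl": "3.0.13", "httpd": "2.4.58"},
    "web02": {"openssl": "3.0.11", "httpd": "2.4.58"},  # missed the patch
}

def unpatched_hosts(inventory, required):
    """Return hosts where any package is below the required version."""
    bad = []
    for host, pkgs in inventory.items():
        for pkg, want in required.items():
            have = pkgs.get(pkg, "0")
            # Naive numeric tuple compare; real tools implement the
            # platform's actual version-comparison semantics.
            if tuple(map(int, have.split("."))) < tuple(map(int, want.split("."))):
                bad.append(host)
                break
    return bad

print(unpatched_hosts(inventory, REQUIRED))  # → ['web02']
```

The resulting list of non-compliant hosts is exactly the evidence needed to show whether management’s deployment decision was carried out.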

Later, it would be pertinent to test the effectiveness of the patches as deployed through vulnerability scanning or penetration testing, as deemed necessary by the organization.  This is Remediation Verification, demonstrating beyond a doubt that the problem has been solved, and that systems are indeed protected against the vulnerability.
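A scanner-style verification check goes one step further than the inventory: it asks the live system what it is running.  The banner format and version threshold below are illustrative assumptions, and a real scanner would probe the flaw itself rather than trust a reported version string.

```python
# Remediation-verification sketch.  The banner format and the vulnerable
# version threshold are illustrative; a real scanner probes the flaw itself
# rather than relying on the service's self-reported version.
VULNERABLE_BEFORE = (2, 4, 58)  # versions below this are assumed vulnerable

def still_vulnerable(banner: str) -> bool:
    """Parse an 'Apache/2.4.57'-style banner and compare versions."""
    version = tuple(int(p) for p in banner.split("/")[1].split("."))
    return version < VULNERABLE_BEFORE

print(still_vulnerable("Apache/2.4.57"))  # → True  (patch not effective)
print(still_vulnerable("Apache/2.4.58"))  # → False (remediation verified)
```

Where validation confirms the patch was pushed, a check like this confirms the exposure is actually gone, which is the distinction the two definitions above are drawing.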

Both of these items have their place, and deliver specific value to an organization.  Use these tools appropriately and wisely.