Do Not Use Admin Accounts

Anyone who has read this blog for more than a week should be aware of the importance of running as a “normal user” instead of as root (UNIX/Linux) or administrator (Windows).  It is often hard to illustrate just how important this simple precaution is.  To aid in that illustration, a report by BeyondTrust looks at how many security bulletins issued by Microsoft are mitigated by simply not running as administrator.

Despite the advances Microsoft has made to secure Windows by default, the fact remains that the first account created on a new system always has administrator capabilities.  Most Windows users will take that first account rather than think ahead and set up a less powerful account for everyday use, and will end up running as an administrator.  That is convenient, but incredibly insecure.

Microsoft published 190 security vulnerabilities last year, and 121 of them are thwarted by running without administrator rights.  That’s 64% mitigated by removing administrator rights!  Breaking it down per product, the figures become even more interesting. 

  • Microsoft reported 55 Office vulnerabilities in 2009, and all of them are mitigated by removing admin rights.
  • Of the 33 Internet Explorer issues reported, 94% were thwarted by removing admin rights.
  • For Internet Explorer 8, 100% would be thwarted by removing admin rights.
  • If we restrict the vulnerabilities to just Windows, we see that 53% can be mitigated by not running as admin.

The threat posed by the highest-risk vulnerabilities, the ones that would allow arbitrary remote code execution, can be greatly reduced by not running day-to-day operations under an admin account:  87% of these attacks are ineffective when you simply do not run as administrator.  All the more reason for Microsoft to stop making the administrator account available as the first user created.  Force the user to create a normal account after password protecting the admin account.
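On that note, it only takes a few lines to check whether your everyday account is privileged. This is a rough sketch using only Python’s standard library; the Windows branch relies on the shell’s IsUserAnAdmin check:

```python
import os

def running_as_admin():
    """Return True if the current process has admin/root privileges."""
    if os.name == "nt":
        # On Windows, ask the shell whether this token is an administrator.
        import ctypes
        return ctypes.windll.shell32.IsUserAnAdmin() != 0
    # On UNIX/Linux, root has an effective user ID of 0.
    return os.geteuid() == 0

if running_as_admin():
    print("Warning: you are running with administrative privileges.")
else:
    print("Good: you are running as a normal user.")
```

Run it from your daily account; if it warns, consider creating a separate limited account for day-to-day work.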

BeyondTrust

Trojan Imitates Update Utility

Email malware that promises security updates from trusted companies is a frequent ruse used by hackers to fool users into downloading their cruft.  Malware authors have begun creating trojans that imitate and overwrite software update applications from Adobe and other vendors.

Nguyen Minh Duc, director of Bkis Security, writes that the recently detected Fakeupver trojan establishes a backdoor on compromised systems while camouflaging its presence by using the same icons and version number as the official Adobe update packages.  Variants of the malware also pose as updaters for Java and other software applications.

The Register

OSIX Has Had A Breach

Looks like one of the Open Source Institute’s own has recently taken it upon himself to do a little rogue pen-testing.  He managed to grab the unsalted password hashes and brute-force himself a couple of passwords.  He logged into a couple of accounts and alerted the admins to the weakness.  The site admins are taking some heat, and have reported that they have fixed the problem, which was introduced during a hosting change.

If you have an account there, and perhaps have used the same password (first of all, shame on you!), you’d best be gettin’ on over there and changing that one before going through all of your other locales that used that same password and changing those too…

RSA Conference 2010 – FREE Tool

Did I mention that I like FREE TOOLS?  Matasano Security today rolled out a new Web-based open-source tool that scans for firewall rules that are outdated, redundant, or could potentially expose a network to security threats.

Flint “makes sure nothing in your firewall changes and configurations creates a security problem,” according to Matasano, a security consulting and research firm.  PCI and other regulatory compliance requirements, as well as secure software development efforts, are forcing organizations to take a closer look at their firewall configurations.  As applications are retired or revised, part of the assessment process drives organizations back to the firewall rules that let those apps run.

Flint is the second product offering from Matasano.  Its first product, Playbook, is a VMware-based virtual appliance that centralizes and synchronizes the control and management of multiple vendors’ firewalls.  Flint can work with Playbook by ensuring any changes to firewalls are correct and don’t open security holes into the network, according to Matasano.  Flint can also run as a standalone tool for checking firewalls.  I have yet to experiment, but I certainly will as I find the time…
Matasano.com

Patch Management

When I talk about Vulnerability or Configuration Management, most IT people initially imagine a process that involves applying patches to keep software up to date.  If you have read any of my earlier posts to this blog, you will recognize that this is only a small component, common to both processes, but not defined in great detail within either.  

  • Configuration Management is the practice of strategically controlling the amount of risk that an organization will allow a hardware and/or software platform to bring into an organization.  The risk is measured through base-lining, stated through policy creation, monitored, and enforced.
  • Vulnerability Management looks to catalogue and understand the organization’s immediate risk landscape, alert on potential breaches of a defined acceptable risk threshold, take an active role in monitoring events so that action can be taken on the vulnerabilities that are present, and offer guidance in risk reduction and vulnerability remediation.

Patch Management is a critical process, consisting of people, procedures and technology.  Its main objective is to create a consistently configured environment that is secure against known vulnerabilities in operating system and application software.

Unfortunately, managing updates for all of the applications and Operating System versions used in a small company is a fairly complicated undertaking, and the situation only becomes more complex when scaled up to include multiple platforms, availability requirements, remote offices and workers. 

Patch Management is a complex process that generally comes into play after Configuration and Vulnerability Management have run their initial courses, and links these two processes (and several others) together.  Its complete embodiment is typically unique to each organization, based upon the operating systems and software that are in use, and the business model that drives operations.

There are some key issues that should be addressed and included in all patch management efforts.  This post provides a technology-neutral look at these basic items.  The tips and suggestions provided here are rooted in best practice, so use this overview as a means of assessing your current patch management efforts, or as a framework for designing a new program.  

Why Do We Patch

“Install and forget” was a fairly common practice among early IT and network managers.  Once deployed, many systems were rarely or never updated unless they faced a specific technical issue.  Most IT people associate software patches with delivering “bug fixes”.  That perception is incomplete, and the “don’t fix what isn’t broken” approach is no longer an option if you intend to remain in business for any foreseeable period of time.

The rise of fast-spreading worms, malicious code targeting known vulnerabilities on un-patched systems, and the downtime and expense that they bring are probably the biggest reasons organizations are focusing on patch management.  Along with these malware threats, increasing concern around governance and regulatory compliance (e.g., HIPAA, Sarbanes-Oxley) has pushed the enterprise to gain better control of its information assets.  Increasingly interconnected partners and customers, the rise of broadband connections, and remote workers are all contributing to the perfect storm, making patch management a major priority for most businesses and end-users.  Patches are commonly used for the following purposes:

  • Delivery of software “bug fixes”.
  • Delivering new functionality.
  • Enabling support for new hardware components and capabilities.
  • Performance enhancements of older hardware through code optimization.
  • Enhancements to existing tool functions and capabilities.
  • Large-scale changes and optimizations of software components (service packs).
  • Firmware upgrades to enable new or enhanced functionality.

Any change to pre-existing code is typically delivered via patches.  

Acquiring Patch Intelligence

A key component of all patch management strategies is the intake and vetting of information regarding both security and utility patch releases.  You must know which security issues and software updates are relevant to your environment. 

An organization needs a point person or team that is responsible for keeping up to date on newly released patches and security issues that affect the systems and applications deployed.  This team should also take the lead in alerting those responsible for the maintenance of systems to security issues or updates to the applications and systems they support.  Intelligence gathering can be accomplished by subscribing to vendor supplied alerts, free RSS feeds, or paid professional service subscriptions.  Each alerting mechanism will provide specific benefits and have specific shortcomings. 

  • Not all vendors offer alerting mechanisms for their security and utility patch releases.
  • Those that do typically offer this service free of charge.
  • Vendors have historically announced patch releases, not patches in development, although that is starting to change of late.
  • Vendors will typically not monitor or alert users to the various stages of exploit code development.
  • Some security vendors, researchers and popular social networking sites offer breaking vulnerability news feeds.
  • These feeds are generally free, and can offer advice before a patch is released, as well as potential work-arounds.
  • Advice from 3rd party sources ranges from lame to authoritative, and may not be vendor recommended or supported.
  • Paid services have hit the market, offering deep-dive analysis, consultative recommendations, and advance warning.
  • Prices for these services can range from a few hundred dollars a month to hundreds of thousands a year.
  • Public web sites and mailing lists should be regularly monitored at a minimum, including Bugtraq, SecurityFocus lists, and patchmanagement.org. 
  • A comprehensive and accurate Asset Management system can be pivotal in determining whether all existing systems have been considered when researching and processing information regarding vulnerabilities and patches.  
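As a toy illustration of the intake-and-vetting step, matching incoming advisory titles against the asset inventory can start as a simple keyword check. The advisory titles, inventory entries, and the relevant() helper are all fabricated for this example:

```python
# Hypothetical advisory titles, e.g. pulled from vendor RSS feeds.
advisories = [
    "Security update for Internet Explorer 8",
    "Apache HTTP Server 2.2 mod_proxy fix",
    "Adobe Reader arbitrary code execution patch",
]

# Software actually deployed, from the asset management system.
inventory = {"internet explorer", "adobe reader", "microsoft office"}

def relevant(advisory, inventory):
    """An advisory is relevant if it names a deployed product."""
    title = advisory.lower()
    return any(product in title for product in inventory)

for advisory in advisories:
    if relevant(advisory, inventory):
        print("ALERT:", advisory)
```

Real matching needs version awareness and normalized product names, which is exactly why an accurate asset management system is listed as pivotal above.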

Deploying Patches

The recommended method for applying patches is to use some form of patch deployment automation.  Users and administrators should not be permitted to apply patches arbitrarily.  While this should be addressed at a policy and procedural level with acceptable use policies, change management processes, and established maintenance windows, it may also be appropriate to apply additional technical controls to limit when and by whom patches can be applied.  Even for smaller businesses, the savings that can be realized through deployment automation can be significant.  Imagine patching one system to develop an image, testing it in a virtualized environment that mimics production, and then, at the press of a button, consistently upgrading your entire organization to a more secure configuration.

The benefits of using deployment automation include the following: 

  • Reduced time spent patching.
  • Reduced human error factored into each deployment exercise.
  • Significant reductions in overtime and associated costs.
  • Decrease in downtime because patching is done in non-working hours, or often as a background task.
  • Consistent operating system and application image across the environment, reducing service desk calls.
  • Auditing reports, including asset inventory, licensing, and other standard reports.

Change Management is vital to every stage of the Patch Management process.  As with all system modifications, patches and updates must be tracked through the change management process.  Like any environmental change, plans submitted through change management must have associated contingency and back out plans.  Also, information on risk mitigation should be included in the change management solution.  For example: 

  • How are desktop patches going to be scheduled and rolled out to prevent mass outages and service desk overload?  Monitoring and acceptance plans should be included.
  • How will updates be certified as successful?  There should be specific milestones and acceptance criteria to guide the verification of the patches’ success, and to allow for the closure of the update in the change management system.

Applying security and utility patches in a timely manner is critical; however, these updates must be made in a controlled and predictable fashion, properly assessed and prioritized.  Without an organized and controlled patch deployment process, system state will tend to drift from the norm quickly, and compliance with mandated patch levels will diminish.

Patch Management Strategies

The strategies outlined here are to be considered only guidelines.  There are four basic strategies for patch management that I am aware of: 

  1. New system installation.
  2. Reactive patch management.
  3. Proactive patch management.
  4. Security patch management.

Note: From the perspective of accessing patches, all vendors that I am aware of make security patches available free of charge.  In most cases, patches that provide new hardware drivers are also free.  A valid support contract is often required to access most other utility patches and updates.    

Installing a New System

The absolute best time to proactively patch a system is while it is being installed.  This ensures that when the system boots, it has the latest patches installed, avoiding any known issues that may be outstanding.  It also lets you test the configuration in advance, if testing has been scheduled into the provisioning plan, and establishes a baseline for all other installations.  Unfortunately, budgets do not usually allow for frequent system refreshes.

Thin clients allow for a form of refresh, always pulling down a consistent image, and removing any tampering or corruption introduced during daily use.  Thin client adoption does offer some advantages, and some distinct disadvantages as well; however, this post is not about the merits of thin versus thick clients.

Ensure that you follow all Change Management requirements, test and document thoroughly, and that the new image is recorded as part of your Configuration Management process.  

Reactive Patch Management Strategy

The main goal in reactive patch management is to reduce the impact of an outage.  Reactive patching occurs in response to an issue that is currently affecting the running system, and that needs immediate relief. The most common response to such a situation is usually to apply the latest patch or patches, which might be perceived as being capable of fixing the issue.  Unfortunately, if the patch implementation does not work, you are often left worse off than before you applied the patch. 

There are two main reasons why this approach is fundamentally incorrect: 

  • Even if a known problem appears to go away, you don’t know whether the patch or patches actually fixed the underlying problem or simply masked the symptoms.  The patches might have simply changed the system in such a way as to obscure the issue for now. 
  • Applying patches in a reactive patching session introduces a considerable element of risk. When you are in a reactive patching situation, you must try to minimize risk at all costs.  In proactive patching, you can and should have tested the change you are applying.  In a reactive situation, if you apply a large number of changes, you still may not have identified root cause.  Also, there’s a greater chance that the changes you applied will have negative consequences elsewhere on the system, which leads to more reactive patching. 

So, even when you experience an issue that is affecting the system, spend time investigating root cause.  If a fix can be identified from such investigation, and that fix involves applying one or more patches, then at least the change is minimized to just the patch or set of patches required to fix the problem.  Depending on the severity of the problem, the patch or patches that fix the issue will be installed at one of the following times: 

  • Immediately to gain relief.
  • At the next regular maintenance window, if the problem is not critical or a workaround exists.
  • During an emergency maintenance window that is brought forward to facilitate applying the fix.

Identifying Patches for Reactive Patching

Identifying patches that are applicable in a reactive patching scenario can often be complex.  In many cases, depending on support contracts, official vendor channels will be engaged, but as a starting point you should do some analysis of your own.  There is no single standard way of analyzing a technical issue, because each issue involves different choices.  Enabling debug-level logging and examining log files usually provides some troubleshooting guidance.  A proper recording system that tracks changes to the system should also be considered, since recent configuration changes can be investigated as possible root causes.
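On that last point, even a crude change record helps. Diffing two configuration snapshots, represented here as plain dictionaries purely for illustration, quickly narrows the list of candidate root causes:

```python
def config_diff(before, after):
    """Report settings that were added, removed, or changed."""
    changes = {}
    for key in set(before) | set(after):
        if before.get(key) != after.get(key):
            # Record the (old, new) pair; None means the key was absent.
            changes[key] = (before.get(key), after.get(key))
    return changes

snapshot_monday = {"ntp_server": "10.0.0.1", "tls": "enabled"}
snapshot_friday = {"ntp_server": "10.0.0.9", "tls": "enabled", "debug": "on"}

for setting, (old, new) in sorted(config_diff(snapshot_monday, snapshot_friday).items()):
    print(f"{setting}: {old!r} -> {new!r}")
```

The snapshot contents are invented; in practice the same diff applies to exported registry hives, /etc trees, or device configs captured before and after a maintenance window.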

Proactive Patch Management Strategy

The main goal in proactive patch management is to prevent unplanned downtime. The idea behind proactive patching is that in most cases, problems that can occur have already been identified, and patches have already been released.  So, the problem becomes mainly one of identifying the most important patches, and applying them in a safe and reliable manner. 

In all cases of proactive patching, it is assumed that the system is functioning normally. Why patch a system that is functioning normally, since any change implies risk and downtime?  As with any system that is functioning normally, there is always the chance that some underlying, known issue can cause a problem.  Such issues can include the following: 

  • Memory corruption that has not yet caused a problem.
  • Data corruption, which is typically unnoticed until the data is re-read.
  • Latent security issues.

Security issues are a good example of the value of proactive patching.  Most security issues are latent issues, meaning they exist in the system, but are not causing issues yet.  It is important to take proactive action to prevent security vulnerabilities from being exploited. 

In comparison to reactive patching, proactive patching generally implies more change, and additional planning, for regularly scheduled maintenance windows and testing. 

Proactive patching is the strategy of choice.  Proactive patching is recommended mainly for the following reasons: 

  • Proactive patching reduces unplanned downtime.
  • Proactive patching prevents systems from experiencing known issues.
  • Proactive patching provides the ability to plan ahead and do appropriate testing before deployment.
  • Planned downtime for proactive maintenance is usually much less expensive than unplanned downtime for addressing issues reactively.

Security Patch Management

Security patch management requires a separate strategy because it requires you to be proactive, yet with reactive patching’s sense of urgency.  In other words, security fixes deemed relevant to the environment might need to be installed proactively, before the next scheduled maintenance window.  The same general rules apply to security patches as to proactively or reactively applied utility patches: plan, test, and automate.

All security patches should be assessed independently.  Although vendors have begun to standardize on a single patch assessment methodology, they cannot take into account the most important factors, the environmental factors.  They are also reticent about bringing attention to exploit code development, and have been prone to understating the severity and impact of vulnerabilities in their products.  The framework for analysis that vendors are adopting, and that is strongly recommended for all businesses, is the Common Vulnerability Scoring System, or CVSS.

CVSS was developed under the National Infrastructure Advisory Council and is now maintained by FIRST.  It is currently in its second version, and has spawned a series of related scoring systems addressing problems from malware naming conventions to configuration issues.

Security patch planning should be performed based on the factored risk rating of the vulnerability, and a standard sliding patch window, tied directly back to that rating, should be adopted for each of the platforms in use.

  • If the organization is smaller, consider a single monthly window for applying all missing security patches. 
  • Medium-sized organizations might consider having a second maintenance period every month, as they are more likely to have multiple platforms present (e.g., Windows and UNIX). 
  • Larger, Enterprise environments might consider having several maintenance periods as well.

In the case of larger environments, complexity of the platforms in use and their inter-dependencies must be taken into account.  If the front-end systems are going to be down for utility patches and updates, it may be a perfect time to apply security patches for the back-end databases, for instance. 

Make certain that no matter what the size of the organization or the nature of the patch, a back out plan has been developed and tested.  
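The sliding patch window described above can be expressed as a simple mapping from a CVSS base score to a maximum patch window. The thresholds and day counts below are placeholder policy choices, not a standard:

```python
def patch_deadline_days(cvss_score):
    """Map a CVSS base score to a maximum patch window in days.

    The thresholds and windows here are illustrative policy choices
    that each organization would set for itself, not any mandated scale.
    """
    if cvss_score >= 9.0:
        return 3       # emergency maintenance window
    if cvss_score >= 7.0:
        return 14      # next scheduled maintenance window
    if cvss_score >= 4.0:
        return 30      # regular monthly window
    return 90          # quarterly or best effort

print(patch_deadline_days(9.3))  # → 3
```

Factoring in environmental metrics, asset value, and exploit availability would adjust the score before it hits this table, but the principle of a rating-driven deadline stays the same.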

Audit and Assessment

Regular audit and assessment helps gauge the success and extent of patch management efforts.  There are typically two phases in the auditing and assessment portion of the patch management program: verification and validation.  You are essentially trying to answer two very different questions:

  • Verification – What systems need to be patched?
  • Validation – Are the systems that were supposed to be patched actually patched and protected?

The audit and assessment component will help answer these questions, but there are dependencies. The most critical success factor here is accurate and effective asset management information.  The major requirement for any asset management system is the ability to accurately track deployed hardware and software throughout the enterprise, including remote users and office locations.  Ideally, asset management software will allow the administrator to generate reports that will be used to drive the effort toward consistent installation of patches and updates across the organization. 

System discovery is an essential component of the audit and assessment process.  While asset management systems can help administer and report on known systems, there are likely a number of systems that have been unknowingly or intentionally excluded from inventory databases and management infrastructures.  System discovery tools can help uncover these systems and assist in bringing them under the umbrella of formal asset management and patch management compliance.

Regardless of the tools used, the goal is to discover all systems within your environment and assess their compliance with the organization’s patch and configuration policies and standards.  
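The verification and validation questions both reduce to comparing scan results against the mandated baseline. In this sketch, the required patch set and the per-host scan data are invented stand-ins for your asset management and scanning tools:

```python
# Mandated patch baseline (policy) and latest scan results (hypothetical).
required = {"KB001", "KB002", "KB003"}
scanned = {
    "host-a": {"KB001", "KB002", "KB003"},
    "host-b": {"KB001"},
    "host-c": {"KB001", "KB002"},
}

def needs_patching(required, scanned):
    """Verification: which systems are missing mandated patches?"""
    return {host: sorted(required - patches)
            for host, patches in scanned.items()
            if required - patches}

missing = needs_patching(required, scanned)
# Validation: compliant only if nothing is missing anywhere.
print("Compliant" if not missing else f"Non-compliant: {missing}")
```

Systems that never appear in the scan data at all are exactly the gap that the discovery tools mentioned above are meant to close.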

Conclusion

Focusing solely on technology to solve the patch management issue is not the best answer.  Installing patch management software and vulnerability assessment tools without supporting policies, standards, guidelines, and oversight will be a wasted effort.  Instead, solid patch management programs will team technological solutions with policy and operationally-based components that work together to address each organization’s unique needs.  

Effective Vulnerability Management

In this post I want to focus on responding to new threats and vulnerabilities effectively.  I won’t solve all of the world’s VM problems, but I may highlight a few issues that are easy to overlook when designing a VM program.  I am not talking about incident response, attack analysis, or forensics here, as those disciplines start after an event has already occurred.  I mean how an organization responds to the discovery of critical vulnerabilities within its environment, especially those with exploit code or attacks taking place “in the wild”.

Software vulnerability scanning remains the primary means to determine and measure an organization’s security posture against external threat agents. The security group will typically scan the environment against a database of known vulnerabilities, and then task the operations team with resolving the vulnerable conditions.  Many companies are stuck with this never-ending, non-scalable, false-positive prone, snapshot in time approach to improving their security posture.  They attempt to measure and understand what their security profile looks like at a single frozen point in time, against a fully dynamic threat environment.

Information security must evolve beyond just building a catalog of tens of thousands of vulnerable conditions that may exist, and comparing that list against tens of thousands of organizational assets in the environment.  What does a large organization expect to do with a 600 page report of unique, distinct software vulnerabilities for each asset?

  1. Nothing, they scan periodically, review or distribute the reports, and move on.  Now it is someone else’s problem.
  2. Focus efforts on remediating “critical” vulnerabilities.  Of course the list of critical vulnerabilities changes on a regular basis, often influenced by politics and the work effort involved to remediate them rather than exploitation likelihood or asset value.
  3. Struggle through the list of vulnerabilities with the security team pushing operations to fix this item, patch that system, disable this option and uninstall that application.  Of course the list changes often, the network also changes, the threats constantly evolve; most organizations are far too dynamic for this to be even remotely effective.
  4. Scan only for a small set of newly announced vulnerabilities, say on a given Tuesday or whenever exploit code appears in the wild, and then attempt to rapidly patch systems.

Knowing which vulnerability is most likely to be exploited in a large, complex, globally distributed environment, against a dynamic and increasingly hostile threat landscape, requires a tremendous amount of foresight and research.  This includes cataloguing and valuing each asset, and understanding each system’s role in the business and in an attack situation, whether as attacker, intermediary target, or objective target.

Zero Day Threat

When a critical vulnerability is announced and actively exploited with no patch available, it forces organizations into firefighting mode.  The typical response has been to rapidly patch all potentially affected assets.  This approach presents many challenges: it can be operationally disruptive and logistically difficult, and in some cases it is simply not an available option.

The Equation:

  • Define configuration management policy
  • + Define software policy
  • + Audit against policies
  • + Monitor for change
  • + Enforce policies
  • = Elimination of a significant percentage of vulnerabilities and exposures.

Organizations looking to achieve effective vulnerability management should enhance vulnerability scanning efforts by implementing Security Configuration Management (SCM), allowing vulnerability scanning to focus on those conditions that are outside of SCM’s scope.  This establishes a standard baseline of good practice, greatly reduces the excessive noise that vulnerability scanning creates, and supports a move towards a higher level of operational and security maturity by:

  • Defining the desired state of assets against a security configuration standard.
  • Periodically auditing the environment to identify non-compliant elements.
  • Constantly monitoring for the addition of new elements and unauthorized changes.
  • Enforcing compliance by remediating non-compliant systems.
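The four steps above boil down to comparing each asset’s actual state against a desired-state template and emitting remediation actions for deviations. The baseline settings here are invented for the sketch:

```python
# Desired-state baseline for a class of systems (illustrative settings).
baseline = {"telnet": "disabled", "firewall": "on", "autorun": "disabled"}

def audit(host, actual, baseline):
    """Return remediation actions for settings that deviate from baseline."""
    return [f"{host}: set {key} to {want!r} (currently {actual.get(key)!r})"
            for key, want in baseline.items()
            if actual.get(key) != want]

actual_state = {"telnet": "enabled", "firewall": "on"}
for action in audit("web-01", actual_state, baseline):
    print(action)
```

Note that the output is phrased as actions, not findings; that orientation toward remediation is what makes SCM output operationally actionable where a raw vulnerability list is not.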

Any system currently deployed, or that will be deployed in the future, must adhere to a common security configuration baseline. Organizations like NIST, NSA, CIS, and vendors such as Microsoft and Cisco have already defined templates with settings for common operating environments and network elements. Any software in the environment should be catalogued, assessed, documented, understood and explicitly white-listed, or removed and implicitly black-listed by exclusion.

Unlike vulnerability scanning, SCM provides operationally actionable output, since the orientation is towards maintaining system integrity by ensuring compliance with a defined standard rather than chasing a moving target.  This ability to describe deviations from policy in terms of remediation activities provides a level of efficiency that cannot be obtained through vulnerability scanning alone.  For example, if you perform a vulnerability scan against a system running an older version of IE, the result will be hundreds of vulnerabilities.

  • Do these individual vulnerabilities really matter to the business?
  • Is it important to understand each of these conditions?
  • Would remediating one or more actually change the attack surface?
  • What would the operations team be expected to do in response to such a list?
  • Is the response the same regardless of platform (e.g., server vs. desktop, ’Nix vs. Windows, switch vs. router)?

If the organization has a policy stating that all systems running IE must be running version 7 with patches X, Y & Z applied, it is immediately clear what action the operations team must take: none of the questions above needs to be answered, testing requirements are simplified, and hundreds of vulnerabilities are resolved in the process.  Extrapolate this out to other system attributes, such as open ports, protocols in use, services enabled, and patches applied, as well as installed applications, and it becomes clear that tens of thousands of vulnerabilities can more easily be measured and expressed as resolutions in the form of security baselines.

Organizations must also look to incorporate a wide range of mitigating controls to shield the environment from attack before removing the root cause, which in most cases means patching, upgrading, or removing the vulnerable item.  So how does an organization shield itself against attack?  It must incorporate and coordinate all network and host-based technologies as part of its vulnerability and threat management program.  In the case of the 2008–2009 Microsoft DNS vulnerabilities, for instance, there were clear work-arounds provided, including registry changes and firewall blocking rules to prevent exploitation.

The Major Challenges

Security and operations teams face major challenges that they will need to overcome.  These challenges are not simple or minor; however, they are not insurmountable either.  They will take concerted and focused effort to overcome, and will often involve full-on cultural shifts for some departments and organizations.

  • Operations teams are primarily driven and measured by the availability of services that they are responsible for provisioning.
  • Security teams are driven and measured by their ability to maintain confidentiality and integrity, reducing the number of breaches or policy violations.
  • Logistical challenges are present due to most organizations’ inability to distribute patches to mobile, non-managed, and non-Microsoft assets.
  • Technical challenges abound due to the commonly heterogeneous computing environment. Most organizations currently have difficulty gaining visibility into what actually requires a patch, what can handle a patch, or has a supported patch management technology or process.
  • Exposed applications, middleware and databases offer enormous complexity.
  • Internally developed applications, especially web-based or externally exposed components, have many hidden dependencies.
  • Companies face testing challenges.  They may be forced to deploy a patch that has unknown adverse effects on the computing environment.
  • Patches are not always available, and the vulnerability may have active exploit code in the wild.  Organizations that have become reliant on patch management technologies and processes are often ill-equipped to implement mitigating prevention mechanisms when confronted with vulnerability conditions that do not have a corresponding patch.
  • The window between patch release and exploit development is constantly shrinking.  In 2000–2001, the time available to deploy a patch easily averaged 30–90 days; it often took that long for attackers to reverse engineer a patch, organize distribution mechanisms, and deploy their wares.  By 2009, that window had shrunk to between 24 hours and 10 days after a patch’s release.
  • We also now have such “wonderful” tools as Metasploit to accelerate exploit development and distribution.

I hope this post helps someone out there who is thinking about implementing Vulnerability Management and is uncertain of the challenges they may face.  It is important to approach this tactical process strategically, considering which other tactical processes can mesh with and enhance it.