Poor Problem Management

CA’s Rich Graves posts “Problem Management sits alone in the corner and cries and cries. It’s the loneliest ITIL process as it’s always the last one picked to play on the Service Operations team. Poor little Problem Management sits and watches while Incident and Change Management get to play. And Configuration Management gets to play too, even though it is a complete mess and isn’t even wearing shoes.”

So true.  Problem Management is a misunderstood process, even more so than Configuration Management.  Without it, though, many issues will go unresolved or be closed with an inconclusive response.  No lessons will be learned, and the problems won’t just go away.

“And let’s be honest: root cause analysis is boring. Who wants to deal with that all the time? I’d rather just restore service and move on. What’s that you say? Eliminating the root cause could prevent further outages and free IT from dealing with critical incidents? OK then. We need to do Problem Management.”

Check it out and make a resolution to improve your IT and business processes in 2012.  Shoot for the moon; that way, even if you just miss, you still stand a chance to fall among the stars.

The ITIL Service Catalog

The ITIL framework is based on the concepts of Service and Customer Care, and the Service Catalog is at the core of these fundamental concepts.  Having a menu of available services is critical for effective IT service provisioning and management.  Many IT departments have grown without maturing the way that they manage, support and offer services to their constituents, and have ended up in sheer chaos.  Usually, a user will have a need and place a request to IT through the Service Desk.  The Service Desk staff member may not be able to help, or will simply turn down the request since no procedures are in place to handle it.  Worse, users may bypass “the IT run-around” altogether, thanks to the availability of downloadable applications and external services that can add additional, unmeasured, and unrecoverable costs and risks to the organization.  Do you know ALL of the applications in use in your organization and where they came from?

By taking the time to document the services that IT provides currently, the services that IT plans to offer soon, and asking what services the customers would like to consider in the future, IT departments can gain an understanding of their current environment, plan for the future, and engage their customers in developing new services.  The development of a service catalog can also aid in understanding what resources are needed for support, where the budget is being spent, what factors should be measured to gauge efficiency, what services can be automated or optimized, and where costs may be recovered or saved.

This available list of services should include everything that IT does, for instance, requests for a new laptop, new software, account provisioning, access requests, file permissions, or de-provisioning an employee’s account when they leave.  A help desk without a service catalog will not be able to provide its customers with consistent information about the services available and time requirements for delivery.

Service Catalog Contents:
Each service within the catalog typically includes:
  • A description of the service provided.
  • Service level agreement commitments for fulfilling the service.
  • Who is entitled to request or approve the service.
  • Costs and charge backs (if any).
  • How the service is fulfilled.
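The fields above can be sketched as a simple data structure.  This is only an illustration; the field names and the sample “New laptop” entry are invented for the example, not taken from any particular catalog tool.

```python
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    """One service in the catalog, mirroring the fields listed above."""
    name: str
    description: str           # what the service provides
    sla_days: int              # service level commitment for fulfillment
    entitled_roles: list       # who may request or approve the service
    cost: float = 0.0          # charge-back amount, if any
    fulfillment_steps: list = field(default_factory=list)  # how it is fulfilled

laptop = CatalogEntry(
    name="New laptop",
    description="Provision a standard corporate laptop",
    sla_days=5,
    entitled_roles=["manager"],
    cost=1200.00,
    fulfillment_steps=["approve", "order", "image", "deliver"],
)
```

An entry like this gives the Service Desk a consistent answer to “what do you offer, who can ask for it, and how long will it take.”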

ITIL Service Lifecycle Overview

Traditionally, IT has been managed and maintained through fire-fighting efforts, remaining reactive and with a technology focus.  The world view is one of “users”, isolated silos of information and responsibility, ad-hoc problem solving, informal processes, and operational in nature.  The frequently cited objective of “alignment with the Business” characterizes a common problem faced by the leadership of IT organizations.   Those who succeed in meeting this objective are the ones who understand the need to be Business-minded.   When an IT organization has an internal focus on the technology being delivered and supported, they lose sight of the actual purpose and benefits that their efforts deliver to the Business.

ITIL builds upon existing IT practices by providing a process-driven focus and proactive problem prevention; viewing the world through service-colored glasses, with “customers” rather than users; seeking integration and information sharing; and making processes SMART – simple, manageable, achievable, repeatable and timely.  ITIL has a service and service-level orientation, focusing on continuous measurement and improvement.

The objective of the ITIL Service Management practice framework is to provide services to business customers that are fit for purpose, stable, and reliable.  The core disciplines provide structure, stability and strength to service management through durable principles, best practices, formal methods and tools, while protecting investments and providing the necessary basis for measurement, learning and improvement.  The ITIL framework was redesigned in version 3 to make building out an IT service strategy more straightforward, and maintaining or improving services more logical.  The ITIL service life cycle consists of five phases, each containing several processes for managing and developing the services IT provides through to maturity.  The life cycle itself is iterative and multi-dimensional, ensuring that lessons learned in one area can be applied to other areas as well.

It is often helpful to understand the bigger picture when discussing a framework as large and multi-layered as Information Technology and Service Management.  Below is an overview of some of the key terms and ITIL practice areas.  The ITIL core guidance consists of 6 books.  Each volume is consistently structured, making interpretation and cross referencing easier.

  1. Introduction to ITIL Service Management
  2. Service Strategy
  3. Service Design
  4. Service Transition
  5. Service Operation
  6. Continual Service Improvement

In addition to the core guidance there is a large body of officially and unofficially developed complementary guidance available, as well as examples and templates for many tasks.  Additionally, other frameworks, such as CoBIT, Six Sigma, and ISO, are referenced and related to align with ITIL practices.  To me, ITIL is quite simply documented common sense that works.

What Is ITIL?

I met with an acquaintance recently, who was looking for some input into forming a cohesive IT strategy, aligned more closely with business strategy and processes, and supporting the anticipated growth of the company.  I hope that she doesn’t mind my sharing some of our meeting dialogue as a learning experience for others.

The company that she presently works for is well established across Canada, and has started to reach into the States as it steadily grows.  My acquaintance is concerned that growth may soon exceed IT’s ability to keep pace, and that the company will face capacity, capability and security issues in the mid to long term.  Better to identify and plan to address these issues now than to wait for a major flare-up.  I couldn’t agree more.

IT is all about providing, managing, measuring and changing services for the constituents within the organization.  The first question that I asked was, “How is change managed in the organization?”  There was a pause, and I followed up for clarity: “Do you manage change using, say, ITIL practices?”  It turns out that she has had little exposure to ITIL, and asked for a quick explanation of the term.  My short two-sentence expansion was probably far too brief to offer any real guidance or expression of value, so I am following up in more detail here.

ITIL is short for the Information Technology Infrastructure Library.  This library provides an organized set of core IT concepts and a framework of practices and processes for Information Technology Service Management (ITSM), development and operations.  Each of the core concepts and process groups inter-relate, with input and feedback linkages.  Each concept is designed to create order from chaos, improve service delivery and customer service, increase productivity, reduce complexity, and streamline costs.  ITIL gives detailed descriptions of a number of important IT practices and provides comprehensive checklists, tasks and procedures that any IT organization can tailor to its needs.  Read this white paper for more information on ITIL basics.

A 2007 article in IT World Canada reports that an “implementation of ITIL was estimated to save 10 to 20 percent in technology support costs over a five-year period.  Actual returns have been higher, according to Senior Vice President of Enterprise Technology Operations, Robert Turned, but it’s difficult to attribute all of the savings directly to ITIL.”

ITIL does not require adoption of the entire body of its framework in order to bring substantial benefits to the organization.  A company can choose what to adopt, how far to mature the model and framework, if and when to introduce automation, and may adopt only a single module if that is all it requires.  In fact, I have been responsible for introducing select modules at several places of work, and have worked at others that had elected to introduce formal Change Management only, because that was all they needed at the time.  CoBIT has been mapped to ITIL, as have other best practice sets, and Microsoft’s own Operations Framework is based directly on the ITIL model.

ITIL is published in a series of books, each of which covers an IT management topic.  Each topic contains one or more sub-processes.  Version 3 is a significant update to the framework and its processes.  The Version 3 IT Service Management core process group includes:

M&A Security Challenges

Merging IT and security strategies that were developed at different times, under different conditions, and by different management teams is no simple task.  In one organization that I worked for, innovation and growth were handled through merger and acquisition, a trend that is quite common in the current economy as businesses look for opportunities to gain new markets, increase their corporate strengths, and bring in new talent and ideas.

When I arrived, the organization had just completed two substantial acquisitions, extending its reach across Canada, parts of the UK, and two US states.  The IT team and I faced huge challenges in merging technologies, introducing a structured IT strategy, and unifying information security practices.

All three businesses were considerably behind the times in terms of their security programs.  There were no security policies to speak of, and head office relied on contract IT and information security staff, used primarily for after-hours support and fire-fighting missions.  The smaller units had essentially no security considerations beyond the firewall.  We were building the program from the ground up in terms of staffing, training, equipment, policies and procedures.

Disaster Recovery or Business Continuity, Plan, Plan, Plan

In various companies, I have assumed the role of IT Manager in many shapes, forms and job titles.  One of the first things that I have usually done as part of that transition has been to look for Disaster Recovery & Business Continuity plans.  Mostly, they didn’t exist.  Occasionally, they were in various states of readiness.  One firm in particular had an excellent Network Manager who didn’t realize that he had been preparing and updating a pretty good tactical DR plan for several years. 

Every single year without fail, the high-rise office tower that the company was headquartered in would pull the plug on all 40-some-odd floors to make repairs and updates to its electrical, mechanical, HVAC and other life-supporting systems.  In preparation for this big event, every single server, router, switch and even desktop had to be visited in order to prepare and shut down cleanly, so as to protect critical data and resources.  This often involved taking the extra time to patch, test, fail over and repeat before everything went black.  It was a monumental task, and I think I still owe that guy a big thank you and a small beer for maintaining such a good inventory checklist, as well as doing the majority of the heavy lifting during those crazy weekends.  (Cheers Al!)

With this documentation in hand, it was fairly easy to determine what were the “crown jewels” within the organization, what the business could not afford to be without for an extended length of time, and also, what needed to be stood up fast in the event of catastrophe.  The exercise also made clear what needed to be backed up, what needed to be duplicated, and what required full, live replication in order to meet both disaster and continuity goals.

What are those goals?

WikiLeaks – Could It Happen To You?

For enterprise IT managers and security professionals, the on-going WikiLeaks disclosures underscore the information security gaps that exist even when common security controls are in use by large organizations.  It is not necessarily the controls themselves that are flawed, but more often the supporting processes and procedures that were quickly pulled together under pressure, and seldom if ever revisited or audited at a granular level for optimal performance and completeness.

This entire ordeal also serves to highlight the importance of adopting a “trust, but verify” approach to hiring practices and access control.  This means that you need to be just a little bit more paranoid regarding your practices, without distrusting your employees.  Remember that everyone that you hire is human, and that people will make mistakes if mistakes are possible.  They are (hopefully) hired due to their capabilities and experience, but what really separates them from the other candidates that showed up for an interview?  Were you able to validate their claims of reliability and trustworthiness?  Trust that they will exercise good judgement, work towards corporate betterment, but verify that each access to sensitive data or corporate intellectual property is properly justified.   Remove the temptation to go astray, and by all means, let them know that you verify.  Your intentions are to DISCOURAGE criminal or damaging behavior, not ENTRAP those who may err or fall prey to social engineering.

What controls should be in place?  That depends on the type and classification of the information that is at risk.  When it comes to client financial and personal information, it is clear that monitoring, notification and escalation controls are a requirement.  Take a lesson from PCI, even if you don’t adopt it formally.  The PCI DSS is simply basic computer security.  A quick review of the 12 main PCI requirements shows nothing revolutionary, and they offer a solid starting point for virtually any security compliance engagement. 

Capturing Value From Business Process Improvement

“Business processes occupy the middle ground of enterprise architecture:  They are driven by the business model and in turn, drive the technology model.  Although business processes are well positioned to be a source of significant value, you need to take a holistic approach to understand their impact.”

Baseline’s Jeff Bruckner’s Workbook has provided another thought-provoking article that outlines a five-step BPI metrics development and gathering process:

  1. Assessment
  2. Road Map
  3. Analyze & Select Scenarios
  4. Develop & Implement Programs
  5. Track Performance

Although the article is only a single page, there is enough food for thought here to feed and grow a working methodology.  Read the article here:  BaselineMag

NYPD Visit Elderly Couple – Again

Here is a real world example of why you DON’T use real data in testing.  Embarrassed cops on Thursday cited a “computer glitch” as the reason police targeted the home of an elderly, law-abiding couple more than 50 times in futile hunts for bad guys.  Apparently, the address of Walter and Rose Martin’s Brooklyn home was used to test a department-wide computer system in 2002.  What followed was years of cops appearing at the Martins’ door looking for murderers, robbers and rapists – as often as three times a week.

NYdailyNews

Patch Management

When I talk about Vulnerability or Configuration Management, most IT people initially imagine a process that involves applying patches to keep software up to date.  If you have read any of my earlier posts to this blog, you will recognize that this is only a small component, common to both processes, but not defined in great detail within either.  

  • Configuration Management is the practice of strategically controlling the amount of risk that a hardware and/or software platform is allowed to bring into the organization.  That risk is measured through base-lining, stated through policy creation, monitored, and enforced.
  • Vulnerability Management looks to catalogue and understand the organization’s immediate risk landscape, alert on potential breaches of a defined acceptable risk threshold, and take an active role in monitoring events so that action can be taken on the vulnerabilities that are present, while offering guidance in risk reduction and vulnerability remediation.
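As a rough sketch of the base-lining idea described above (the setting names here are invented for illustration), a configuration check amounts to comparing a system’s reported state against an approved baseline and flagging drift:

```python
# Approved baseline: setting -> required value (hypothetical settings).
BASELINE = {
    "firewall_enabled": True,
    "auto_update": True,
    "guest_account": False,
}

def drift(reported: dict) -> dict:
    """Return the settings that deviate from the approved baseline."""
    return {
        key: reported.get(key)
        for key, required in BASELINE.items()
        if reported.get(key) != required
    }

host = {"firewall_enabled": True, "auto_update": False, "guest_account": False}
print(drift(host))  # {'auto_update': False}
```

In a real program the baseline comes from policy, the reported values from an agent or scanner, and any non-empty drift result feeds the monitoring and enforcement steps.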

Patch Management is a critical process, consisting of people, procedures and technology.  Its main objective is to create a consistently configured environment that is secure against known vulnerabilities in operating system and application software.

Unfortunately, managing updates for all of the applications and Operating System versions used in a small company is a fairly complicated undertaking, and the situation only becomes more complex when scaled up to include multiple platforms, availability requirements, remote offices and workers. 

Patch Management is a complex process that generally comes into play after Configuration and Vulnerability Management have run their initial courses, and it links these two processes (and several others) together.  Its complete embodiment is typically unique to each organization, based upon the operating systems and software in use and the business model that drives operations.

There are some key issues that should be addressed and included in all patch management efforts.  This post provides a technology-neutral look at these basic items.  The tips and suggestions provided here are rooted in best practice, so use this overview as a means of assessing your current patch management efforts, or as a framework for designing a new program.  

Why Do We Patch

“Install and forget” was a fairly common practice among early IT and network managers.  Once deployed, many systems were rarely or never updated unless they faced a specific technical issue.  Most IT people associate software patches with delivering “bug fixes”.  This perception is incomplete, and the “don’t fix what isn’t broken” approach is no longer an option if you intend to remain in business for any foreseeable period of time.

The rise of fast-spreading worms, malicious code targeting known vulnerabilities on un-patched systems, and the downtime and expense that they bring are probably the biggest reasons organizations are focusing on patch management.  Along with these malware threats, increasing concern around governance and regulatory compliance (e.g., HIPAA, Sarbanes-Oxley) has pushed the enterprise to gain better control of its information assets.  Increasingly interconnected partners and customers, the rise of broadband connections, and remote workers are all contributing to the perfect storm, making patch management a major priority for most businesses and end-users.  Patches are commonly used for the following purposes:

  • Delivering software “bug fixes”.
  • Delivering new functionality.
  • Enabling support for new hardware components and capabilities.
  • Enhancing performance of older hardware through code optimization.
  • Enhancing existing tool functions and capabilities.
  • Making large-scale changes and optimizations of software components (service packs).
  • Upgrading firmware to enable new or enhanced functionality.

Any change to pre-existing code is typically delivered via patches.  

Acquiring Patch Intelligence

A key component of all patch management strategies is the intake and vetting of information regarding both security and utility patch releases.  You must know which security issues and software updates are relevant to your environment. 

An organization needs a point person or team that is responsible for keeping up to date on newly released patches and security issues that affect the systems and applications deployed.  This team should also take the lead in alerting those responsible for the maintenance of systems to security issues or updates to the applications and systems they support.  Intelligence gathering can be accomplished by subscribing to vendor supplied alerts, free RSS feeds, or paid professional service subscriptions.  Each alerting mechanism will provide specific benefits and have specific shortcomings. 

  • Not all vendors offer alerting mechanisms for their security and utility patch releases.
  • Those that do typically offer this service free of charge.
  • Vendors have historically announced patch release, not development, although that is starting to change of late.
  • Vendors will typically not monitor or alert users to the various stages of exploit code development.
  • Some security vendors, researchers and popular social networking sites offer breaking vulnerability news feeds.
  • These feeds are generally free, and can offer advice before a patch is released, as well as potential work-arounds.
  • Advice from 3rd party sources ranges from lame to authoritative, and may not be vendor recommended or supported.
  • Paid services have hit the market, offering deep-dive analysis, consultative recommendations, and advance warning.
  • Prices for these services can range from a few hundred dollars a month to hundreds of thousands a year.
  • Public web sites and mailing lists should be regularly monitored at a minimum, including Bugtraq, SecurityFocus lists, and patchmanagement.org. 
  • A comprehensive and accurate Asset Management system can be pivotal in determining whether all existing systems have been considered when researching and processing information regarding vulnerabilities and patches.  
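The last point is worth a sketch.  Vetting incoming advisories is essentially a join between the feed and the asset inventory; the advisory and inventory shapes below are invented for illustration, not any vendor’s format:

```python
# Hypothetical advisory feed entries, as they might arrive from a vendor alert
# or RSS subscription, and a set of products from the asset management system.
advisories = [
    {"id": "ADV-001", "product": "Apache HTTP Server", "severity": "high"},
    {"id": "ADV-002", "product": "Oracle Database", "severity": "critical"},
]
inventory = {"Apache HTTP Server", "OpenSSH"}

def relevant(advisories: list, inventory: set) -> list:
    """Keep only advisories for products that actually appear in our inventory."""
    return [a for a in advisories if a["product"] in inventory]

print([a["id"] for a in relevant(advisories, inventory)])  # ['ADV-001']
```

Without an accurate inventory, the filter silently drops advisories for systems you did not know you had, which is exactly the failure the bullet above warns against.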

Deploying Patches

The recommended method for applying patches is to use some form of patch deployment automation.  Users and administrators should not be permitted to apply patches arbitrarily.  While this should be addressed at a policy and procedural level with acceptable use policies, change management processes, and established maintenance windows, it may also be appropriate to apply additional technical controls to limit when and by whom patches can be applied.  Even for smaller businesses, the savings that can be realized through deployment automation can be significant.  Imagine patching one system to develop an image, testing it in a virtualized environment that mimics production, and then, at the press of a button, consistently upgrading your entire organization to a more secure configuration.

The benefits of using deployment automation include the following: 

  • Reduced time spent patching.
  • Reduced human error factored into each deployment exercise.
  • Significant reductions in overtime and associated costs.
  • Decrease in downtime because patching is done in non-working hours, or often as a background task.
  • Consistent operating system and application image across the environment, reducing service desk calls.
  • Auditing reports, including asset inventory, licensing, and other standard reports.
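A minimal sketch of the batched, verify-as-you-go rollout that deployment automation tools implement (the host names and callback functions here are stand-ins, not any vendor’s API):

```python
def rollout(hosts, apply_patch, verify, batch_size=2):
    """Apply a patch to hosts in batches, halting if a batch fails verification."""
    patched = []
    for i in range(0, len(hosts), batch_size):
        batch = hosts[i:i + batch_size]
        for host in batch:
            apply_patch(host)
        if not all(verify(h) for h in batch):
            return patched, False      # stop the rollout; invoke the back-out plan
        patched.extend(batch)
    return patched, True

# Simulated deployment: every host patches and verifies cleanly.
state = {}
done, ok = rollout(
    ["web1", "web2", "db1"],
    apply_patch=lambda h: state.__setitem__(h, "patched"),
    verify=lambda h: state.get(h) == "patched",
)
```

Stopping at the first failed batch is what keeps a bad patch from becoming a mass outage, and the returned list of patched hosts tells the back-out plan exactly what to unwind.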

Change Management is vital to every stage of the Patch Management process.  As with all system modifications, patches and updates must be tracked through the change management process.  Like any environmental change, plans submitted through change management must have associated contingency and back out plans.  Also, information on risk mitigation should be included in the change management solution.  For example: 

  • How are desktop patches going to be scheduled and rolled out to prevent mass outages and service desk overload?  Monitoring and acceptance plans should be included.
  • How will updates be certified as successful?  There should be specific milestones and acceptance criteria to guide the verification of the patches’ success, and to allow for the closure of the update in the change management system.
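As an illustration of the kind of record change management should hold (every field name here is hypothetical), a patch change might carry its back-out plan, risk mitigation, and acceptance criteria together, with closure gated on those criteria:

```python
change_record = {
    "change_id": "CHG-1042",          # hypothetical identifier
    "summary": "Apply monthly OS security patches to the desktop fleet",
    "backout_plan": "Re-image affected machines from the pre-patch snapshot",
    "risk_mitigation": "Pilot group of 25 machines patched 48 hours ahead",
    "acceptance_criteria": [
        "Pilot group logs no new service desk tickets for 48 hours",
        "Compliance scan reports at least 99% patch success",
    ],
    "met": {},                        # criterion -> True once verified
}

def can_close(record) -> bool:
    """A change may be closed only when every acceptance criterion is met."""
    return all(record["met"].get(c, False) for c in record["acceptance_criteria"])

print(can_close(change_record))  # False until both criteria are marked met
```

Tying closure to explicit criteria answers the second question above: the update is certified successful when, and only when, each milestone has been verified.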

Applying security and utility patches in a timely manner is critical; however, these updates must be made in a controlled and predictable fashion, properly assessed and prioritized.  Without an organized and controlled patch deployment process, system state will tend to drift from the norm quickly, and compliance with mandated patch levels will diminish.

Patch Management Strategies

The strategies outlined here are to be considered only guidelines.  There are four basic strategies for patch management that I am aware of: 

  1. New system installation.
  2. Reactive patch management.
  3. Proactive patch management.
  4. Security patch management.

Note: From the perspective of accessing patches, all vendors that I am aware of make security patches available free of charge.  In most cases, patches that provide new hardware drivers are also free.  A valid support contract is often required to access most other utility patches and updates.    

Installing a New System

The absolute best time to proactively patch a system is while it is being installed.  This ensures that when the system boots, it has the latest patches installed, avoiding any known issues that may be outstanding.  It also lets you test the configuration in advance, if testing has been scheduled into the provisioning plan, and it gives you a baseline for all other installations.  Unfortunately, budgets do not usually allow for frequent system refreshes.

Thin clients allow for a form of refresh, always pulling down a consistent image, and removing any tampering or corruption introduced during daily use.  Thin client adoption does offer some advantages, and some distinct disadvantages as well, however this post is not specifically about the merits of thin versus thick client. 

Ensure that you follow all Change Management requirements, test and document thoroughly, and that the new image is recorded as part of your Configuration Management process.  

Reactive Patch Management Strategy

The main goal in reactive patch management is to reduce the impact of an outage.  Reactive patching occurs in response to an issue that is currently affecting the running system, and that needs immediate relief. The most common response to such a situation is usually to apply the latest patch or patches, which might be perceived as being capable of fixing the issue.  Unfortunately, if the patch implementation does not work, you are often left worse off than before you applied the patch. 

There are two main reasons why this approach is fundamentally incorrect: 

  • Even if a known problem appears to go away, you don’t know whether the patch or patches actually fixed the underlying problem or simply masked the symptoms.  The patches might have simply changed the system in such a way as to obscure the issue for now. 
  • Applying patches in a reactive patching session introduces a considerable element of risk. When you are in a reactive patching situation, you must try to minimize risk at all costs.  In proactive patching, you can and should have tested the change you are applying.  In a reactive situation, if you apply a large number of changes, you still may not have identified root cause.  Also, there’s a greater chance that the changes you applied will have negative consequences elsewhere on the system, which leads to more reactive patching. 

So, even when you experience an issue that is affecting the system, spend time investigating root cause.  If a fix can be identified from such investigation, and that fix involves applying one or more patches, then at least the change is minimized to just the patch or set of patches required to fix the problem.  Depending on the severity of the problem, the patch or patches that fix the issue will be installed at one of the following times: 

  • Immediately to gain relief.
  • At the next regular maintenance window, if the problem is not critical or a workaround exists.
  • During an emergency maintenance window that is brought forward to facilitate applying the fix.

Identifying Patches for Reactive Patching

Identifying patches that are applicable in a reactive patching scenario can often be complex.  In many cases, depending on support contracts, official vendor channels will be engaged, but as a starting point you should do some analysis of your own.  There is no single standard way of analyzing a technical issue, because each issue involves different choices.  Using debug-level logging and examining log files usually provides some troubleshooting guidance.  A system that records changes should also be in place; recent configuration changes can then be investigated as possible root causes.

Proactive Patch Management Strategy

 

The main goal in proactive patch management is to prevent unplanned downtime. The idea behind proactive patching is that in most cases, problems that can occur have already been identified, and patches have already been released.  So, the problem becomes mainly one of identifying the most important patches, and applying them in a safe and reliable manner. 

In all cases of proactive patching, it is assumed that the system is functioning normally.  Why patch a system that is functioning normally, since any change implies risk and downtime?  Because even on a healthy system, there is always the chance that some underlying, known issue will cause a problem.  Such issues can include the following:

  • Memory corruption that has not yet caused a problem.
  • Data corruption, which is typically unnoticed until the data is re-read.
  • Latent security issues.

Security issues are a good example of the value of proactive patching.  Most security issues are latent issues, meaning they exist in the system, but are not causing issues yet.  It is important to take proactive action to prevent security vulnerabilities from being exploited. 

In comparison to reactive patching, proactive patching generally implies more change, and additional planning, for regularly scheduled maintenance windows and testing. 

Proactive patching is the strategy of choice.  Proactive patching is recommended mainly for the following reasons: 

  • Proactive patching reduces unplanned downtime.
  • Proactive patching prevents systems from experiencing known issues.
  • Proactive patching provides the ability to plan ahead and do appropriate testing before deployment.
  • Planned downtime for proactive maintenance is usually much less expensive than unplanned downtime for addressing issues reactively.  

 

Security Patch Management

Security patch management requires a separate strategy because it asks you to be proactive, yet carries reactive patching’s sense of urgency.  In other words, security fixes deemed relevant to the environment might need to be installed proactively, before the next scheduled maintenance window.  The same general rules apply to security patches as to proactively or reactively applied utility patches.  Plan, test, and automate.

All security patches should be assessed independently.  Although vendors have begun to standardize on a single patch assessment methodology, they cannot take into account the most important factors: the environmental ones.  Vendors are also reluctant to draw attention to exploit code development, and have been prone to understating the severity and impact of vulnerabilities in their products.  The framework for analysis that vendors are adopting, and that is strongly recommended for all businesses, is the Common Vulnerability Scoring System, or CVSS. 

CVSS, now maintained by the Forum of Incident Response and Security Teams (FIRST), is in its second version and has inspired a series of related assessment frameworks addressing problems ranging from malware naming conventions to configuration issues. 

Security patch planning should be performed based on the factored risk rating of the vulnerability, with a standard sliding patch window adopted for each platform in use, tied directly back to that rating.  

  • If the organization is smaller, consider a single monthly window for applying all missing security patches. 
  • Medium-sized organizations might consider a second maintenance period every month, as they are more likely to have multiple platforms present (e.g., Windows and Unix). 
  • Larger enterprise environments might consider having several maintenance periods as well.
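A sliding patch window tied back to the risk rating might look like the following sketch.  The score bands, day counts, and the internet-facing multiplier are assumptions for illustration; each organization should derive its own from its risk tolerance:

```python
def patch_window_days(cvss_score, internet_facing=False):
    """Map a CVSS base score (0.0-10.0) to the maximum number of days
    before the patch must be applied; halve the window for
    internet-facing systems as a simple environmental factor."""
    if not 0.0 <= cvss_score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if cvss_score >= 9.0:
        days = 7
    elif cvss_score >= 7.0:
        days = 14
    elif cvss_score >= 4.0:
        days = 30
    else:
        days = 90
    return max(1, days // 2) if internet_facing else days
```

For example, a 9.5-rated vulnerability on any system would fall due within a week, while the same rating on an internet-facing host would be due even sooner.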

In the case of larger environments, complexity of the platforms in use and their inter-dependencies must be taken into account.  If the front-end systems are going to be down for utility patches and updates, it may be a perfect time to apply security patches for the back-end databases, for instance. 

Make certain that, no matter the size of the organization or the nature of the patch, a back-out plan has been developed and tested.  

Audit and Assessment

Regular audit and assessment helps gauge the success and extent of patch management efforts.  There are typically two phases in the auditing and assessment portion of the patch management program: verification and validation.  Each answers a very different question: 

  • Verification –  Which systems need to be patched?
  • Validation   –  Are the systems that were supposed to be patched actually patched and protected?

The audit and assessment component will help answer these questions, but there are dependencies. The most critical success factor here is accurate and effective asset management information.  The major requirement for any asset management system is the ability to accurately track deployed hardware and software throughout the enterprise, including remote users and office locations.  Ideally, asset management software will allow the administrator to generate reports that will be used to drive the effort toward consistent installation of patches and updates across the organization. 
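Given accurate asset data, the two audit questions reduce to set arithmetic over required versus installed patches.  A minimal sketch, assuming hypothetical per-host patch sets pulled from the asset management system:

```python
def missing_patches(required, installed):
    """Verification: which systems still need which patches?
    Both arguments map hostname -> set of patch identifiers."""
    gaps = {}
    for host, needed in required.items():
        gap = needed - installed.get(host, set())
        if gap:
            gaps[host] = gap
    return gaps

def validated(targeted, installed):
    """Validation: for each targeted host, were all its intended
    patches actually installed?"""
    return {host: targeted[host] <= installed.get(host, set())
            for host in targeted}
```

In practice the `installed` data would come from agent scans or vulnerability assessment tools rather than from the deployment tool's own success reports, so that validation is independent of the system being validated.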

System discovery is an essential component of the audit and assessment process.  While asset management systems can help administer and report on known systems, there are likely a number of systems that have been unknowingly or intentionally excluded from inventory databases and management infrastructures.  System discovery tools can help uncover these systems and assist in bringing them under the umbrella of formal asset management and patch management compliance. 

Regardless of the tools used, the goal is to discover all systems within your environment and assess their compliance with the organization’s patch and configuration policies and standards.  
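The gap between what discovery finds and what the asset database knows about is itself a simple set difference.  A sketch, assuming hypothetical host lists from a network scan and from the inventory system:

```python
def unmanaged_hosts(discovered, inventory):
    """Hosts seen on the network but absent from the asset database;
    these are candidates for formal asset and patch management."""
    return sorted(set(discovered) - set(inventory))
```

Running this after every discovery sweep gives a standing work queue of systems to bring under management, which is exactly the compliance gap the audit phase is meant to expose.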

Conclusion

Focusing solely on technology to solve the patch management issue is not the best answer.  Installing patch management software and vulnerability assessment tools without supporting policies, standards, guidelines, and oversight will be a wasted effort.  Instead, solid patch management programs will team technological solutions with policy and operationally-based components that work together to address each organization’s unique needs.  

Resources: