Enterprise Information Security, is it BROKEN??

An industry reporter asked me a couple of pointed questions recently as part of an interview for a feature article.  He wanted to know if I felt that Enterprise Information Security was broken, and what could be done to fix it.

“Given the increasing number of denial of service attacks, Java exploits, break-ins, malware delivered by spam, etc., is Enterprise Security broken?”

No, I don’t believe that Enterprise Security is broken.  I do believe that some of the fundamental assumptions that we in the Information Technology industry made early on in IT and communication development were flawed and are now being abused.  Enterprise Information Security, a strategic model whose intent is to formalize and promote security practices in a consistent manner across an organization, remains a fundamentally correct objective.

One of the biggest concerns that I have had over my 30+ year IT career has been that of consistency.  Remember that Information Security as a recognized discipline didn’t exist when Information Technology was born; it came about well after IT and communications technology had started to mature.  We built the communications protocols at the heart of TCP/IP to focus on resilience, continuity, and speed.  The naive belief was that if a set of rules was cast that delivered reliable communication, the job was pretty much done.  The entire concept was based on trust.  What else could you possibly want?

What was missing was consideration of the human factor: an authentication layer, non-repudiation, the guarantee of confidentiality, the assurance of data integrity, and the practices of controlled access and least privilege.

People are creative, curious, and in many cases, selfish creatures.  If they find a weakness in an application, or a way to take advantage of a process that will provide them with notoriety, wealth, or some other desired benefit, I guarantee that it will be exploited.  Look at how games get hacked for online gold, extra advantage, or simply bragging rights, to underline the problem.  The abuser doesn’t consider or perhaps even care that the author views the game as years of work and a revenue stream, and doesn’t gauge the impact that player actions have on the developers’ livelihood.  They just want the desired item.

Until we can replace or rebuild the TCP/IP suite with those missing pieces at its core, we need to put in place a governance and architectural model, policies, processes, standards, controls and guidance that, when taken together, provide a consistent information security architecture.  That architecture should apply evenly across the enterprise, not only to this group or that region, and should be able to manage and adapt to the upcoming disruptive factors that will make up our IT world in the future.

“What are some of these recent disruptive factors?”

  • BYOD – Employees recently fell in love with the idea of using their own smartphones and tablets for work.  Management embraced the concept, since it enhanced the bottom line, eliminating the need to purchase and maintain hardware that tends to become obsolete within a calendar year anyway. 

BYOD introduced consumer tech into the enterprise, and although I, like others, resisted it, we all knew it was inevitably going to happen.  These new consumer devices come with all of the warts that you would expect from a consumer device: no standard image, little focus on security and data protection, few points of control, fewer points of integration, and no separation of personal versus corporate identities.

Employees are just now beginning to question how deep they will let work intrude into their personal lives.  Did IT just turn their beloved smartphone into a tracking device?  Can the company now monitor and examine their personal emails, chats, and browsing habits?   Employees are beginning to resent that personal time is now becoming potentially unpaid work time.  Managing these challenges must be part of the new Information Security Architecture.

  • Malware – Malicious software has evolved from a nuisance to a plague.  It’s been monetized, and has grown into a full-blown industry unto itself.  Malware is now custom developed, the developers are organized, and they coordinate their efforts.  Some of them specialize, and offer their services to one another, mercenary style.  Our vendors need to do the same, and change the model from signature-based detection to signature, characteristic (white-listing), and behavior-based protection.  All of them, not one of them.  (A rough sketch of that layered approach appears at the end of this item.)

Vendors also need to move away from the “backwards compatible with everything” development model.  Bloating code to support multiple operating systems, especially those that are no longer being developed or supported by their creators, perpetuates vulnerabilities on several fronts.  It potentially brings all of the previous versions’ vulnerabilities into the new version, it perpetuates the existence of outdated software amongst businesses and home users, and it complicates business processes like asset and license management.  All of these result in a larger attack surface to be exploited, and liabilities to customer organizations.

Malware distribution is also undergoing a major shift: away from wide distribution designed for maximum effect on a target-rich environment and quick-in, acquire-target, quick-out blitzing strategies, and toward custom-made code with no signature available, targeted at a specific industry, business, or user to limit solution development, and placed where it will be most effectively consumed by the target.  The new malware is tweaked to avoid detection, doing nothing observably destructive and maintaining a discreet profile for as long as possible.  It stays in the environment, collecting information, trickling out intelligence, and potentially offering backdoor access for its author or owner.  These little nasties tend to stay embedded within an organization for years.
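
To make the layered model mentioned above a little more concrete, here is a minimal, hypothetical sketch in Python of signature, whitelist, and behavior checks backing each other up.  The hashes, behavior names, and weights are invented for illustration; real products implement this in kernel drivers and endpoint agents, not a script.

import hashlib

KNOWN_BAD_SHA256 = {"0" * 64}     # placeholder for real threat-feed hashes
KNOWN_GOOD_SHA256 = {"1" * 64}    # placeholder for approved, whitelisted binaries

SUSPICIOUS_BEHAVIORS = {          # example behavior indicators and weights
    "writes_to_startup_folder": 3,
    "disables_security_tools": 5,
    "beacons_to_unknown_host": 4,
}

def file_sha256(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def verdict(path, observed_behaviors):
    digest = file_sha256(path)
    if digest in KNOWN_BAD_SHA256:       # layer 1: signature match
        return "block: known-bad signature"
    if digest in KNOWN_GOOD_SHA256:      # layer 2: whitelisted, approved code
        return "allow: whitelisted binary"
    score = sum(SUSPICIOUS_BEHAVIORS.get(b, 0) for b in observed_behaviors)
    if score >= 5:                       # layer 3: behavior-based scoring
        return "quarantine: behavior score %d" % score
    return "allow with monitoring: unknown binary, low behavior score"

# e.g. verdict("dropper.exe", {"disables_security_tools", "beacons_to_unknown_host"})

The ordering is the point: a hash lookup is cheap, the whitelist vouches for the code you approved, and the behavior score is what stands a chance against the custom, no-signature-available malware described above.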

  • Data Leakage –  I used to worry about the impact malware had, the downtime it incurred, the mess it made, and the time it took to clean up after an infection.  Incident Response, Business Continuity and Disaster Recovery practices have matured, alleviating the bulk of those concerns, and now I don’t have to worry as much about what sort of malware gets into the environment.  Over the years, I have adopted an attitude that concerns itself more and more with egress management.  I now worry more about what data is getting out.  In order to maximize my nightly pillow time, I develop or procure capabilities to monitor traffic flows, and to identify the types of documents, contents of documents, and other materials that should not be leaving the network (a minimal sketch of the idea follows this item).

The challenges here are accounting for every egress method, every potential removal vehicle, every characteristic that makes a document sensitive, and dealing with each one in an appropriate and manageable fashion.  The electronic communications are the low-hanging fruit; they are easily monitored.  It is the physical devices that pose the greatest challenges.
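
As a taste of what the content-inspection piece can look like, here is a minimal sketch, assuming a simple filter that flags outbound payloads containing card-like numbers or document markings.  The patterns and markings are illustrative only; a real data loss prevention deployment covers far more file types, channels, and policies.

import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")    # crude card-number-like pattern
MARKINGS = ("CONFIDENTIAL", "INTERNAL USE ONLY")         # example document markings

def luhn_ok(digits):
    """Luhn checksum, used here to cut down on false positives."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def egress_flags(payload):
    reasons = []
    for match in CARD_PATTERN.findall(payload):
        digits = re.sub(r"\D", "", match)
        if luhn_ok(digits):
            reasons.append("possible payment card number")
            break
    if any(marking in payload.upper() for marking in MARKINGS):
        reasons.append("sensitive document marking found")
    return reasons

print(egress_flags("Invoice attached.  Card: 4111 1111 1111 1111"))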

  • Next Generation Firewalls – The Internet Protocol suite was built to support communication using a set of rules, identifying specific ports and protocols, packet and frame sizes, and expecting specific content to be in each frame.  The developers assumed that applications and people would operate within those rules.  We also assumed that technology would present a perimeter that could be easily controlled and managed.  If the protocol used matched the port designated for it, and that port/protocol set was allowed to pass through the firewall, it was all good.  Unfortunately, attackers do not play by those rules.  They use them against us.

Next Generation Firewalls are emerging that analyze relationships and behaviors.  They inspect traffic to ensure that someone or something is accountable for each packet on the network, that it fits within an expected data request stream, conforms to much more granular rules based on expected and observed behavior, and that it is shaped and formed the way the rules expect it to be.
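
Conceptually, one slice of that inspection fits in a few lines: compare what the payload actually looks like against the protocol the port is supposed to carry.  This toy sketch is nowhere near a real next-generation firewall engine, and the fingerprints are simplified assumptions, but it shows the shift from “port 443 is open, so pass it” to “does this traffic on 443 actually behave like TLS?”

EXPECTED_PROTOCOL = {80: "http", 443: "tls", 22: "ssh"}    # simplified policy

def guess_protocol(first_bytes):
    """Very rough protocol fingerprinting from the first payload bytes."""
    if first_bytes.startswith((b"GET ", b"POST ", b"HEAD ", b"PUT ", b"HTTP/")):
        return "http"
    if len(first_bytes) >= 2 and first_bytes[0] == 0x16 and first_bytes[1] == 0x03:
        return "tls"                      # TLS handshake record header
    if first_bytes.startswith(b"SSH-"):
        return "ssh"
    return "unknown"

def inspect(dst_port, first_bytes):
    expected = EXPECTED_PROTOCOL.get(dst_port)
    observed = guess_protocol(first_bytes)
    if expected is None:
        return "no policy for this port"
    if observed == expected:
        return "allow"
    return "alert: port %d expects %s, payload looks like %s" % (dst_port, expected, observed)

# An SSH tunnel hiding on the HTTPS port gets flagged:
print(inspect(443, b"SSH-2.0-OpenSSH_8.9"))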

  • The Cloud – Every silver lining has a cloud, and every cloud has security implications.  We experimented in the past with out-sourcing our IT worker bees in order to save costs.  In some places that was successful, and not so successful in others.  We are now doing the same thing with applications, services, data, and infrastructure.  The risks to those assets remain the same, but we are now concentrating those assets along with many other assets in one place, and giving up visibility and control, while increasing the value of the hosting target.

The arguments make sense: we are not an IT company, so why do we need to invest in so much hardware, software, and staff to maintain it?  Someone else can do this better, focus entirely on it, and save us money by providing it to the masses as a Service.  The other side of the coin is that the risks don’t go away and the liabilities don’t go away, but the ability to directly control and manage the out-sourced entities becomes more difficult.  Accountability becomes fuzzy, but ultimately lies with the data owner, not the hosting company.  In a cloud-based model, you are trusting someone else to do a better job of managing and protecting your data, you are trusting them not to misuse your data, and you are trusting them to provide access to the right people while blocking access by the wrong folks.  Audit and Compliance issues become evident.

Ultimately, if this new juicy data target is breached by someone attacking you or one of the many other customers that use this service, your data may be exposed, and your business is liable and accountable.  Your data may not even be exposed, but if you use the breached vendor’s services, the perception may be that you were breached.  Your customers won’t care if the breach happened at your data center or your provider’s.  You were trusted with their data, and it was at risk of exposure on your watch.  You may also increase your dependency on the cloud service, and that increases your susceptibility to denial of service attacks.

  • Attacker Motivation & Capability – The enemy has found that those annoying virus and worm characteristics developed in the past for notoriety or destructive power can be used for financial gain and espionage, and they have gotten organized.  The dark side has put significant effort into developing a diverse set of tools, expertise, and strategies.  We need to model our defenses after those of the attackers.  Vendors need to start integrating, working together, and providing the enterprise with consumable, actionable, accurate intelligence about what is going on inside and outside of their networks.  SIEM is a step in the right direction, but let’s not stop walking forward.

 “Do we need a fundamental change in the way enterprises approach/design security?”

Here, I would say yes, and I believe that this change has been cooking along for quite some time in a very slow, “bolt-it-on” fashion.  Technology changes seem to be revolutionary, coming out of nowhere and establishing themselves quickly in response to disruptive factors and needs.  Changes in protection capabilities tend to be evolutionary, taking their own sweet time to develop and mature in reaction to unforeseen circumstances that arise post-implementation of technology.  Physicist Niels Bohr said, “Prediction is very difficult, especially if it’s about the future.”

We in IT as an industry, and businesses in general, need to realize that the perimeter is continuing to melt, to focus on monitoring the network and protecting the data, to insist on integration and increased visibility, and to demand built-in security from our products, vendors, service providers, and business partners.  Enterprise Information Security offers a conduit, through architecture and governance, to provide a well-thought-out strategy that can adapt and react to disruptive advancements in technology.  It lays the groundwork, and operates best by implementing consistent governance over people, processes and technology at the enterprise level for the purpose of supporting the management, operation, and protection of information and assets.

2011 PCI Breach Research

There is a very good article regarding research into 2011 breach statistics by Trustwave over at InfoWorld Security Central.  A great source for much IT & Security information, by the way.  According to the article, hackers infiltrated 312 businesses, making off with customer payment-card information.  Their primary access point was third-party vendor remote-access applications, or VPNs set up for remote systems maintenance.  Seventy-six percent of the breaches came in that way!  These external ingress paths introduced security deficiencies that were exploited by attackers.

The vast majority of the 312 companies were retailers, restaurants or hotels, and they came to Trustwave for incident response help after one of the payment-card organizations traced stolen cards back to their businesses, demanding a forensics investigation within a matter of days.  Only 16% of the 312 companies detected the breach on their own!

The businesses hit claimed to be compliant with Payment Card Industry (PCI) security standards, when in reality there were gaps.  The remote-access provisions were poorly protected by simple, re-used, shared, and seldom changed passwords.

I will leave the scariest statistic, how long the attackers were able to maintain their ownership of the networks in these cases, for you to seek out yourself on the second page of the article.  It is not a happy number!

The lesson to take away from this article is that PCI compliance is the bare minimum an organization should do, and DOES NOT equate to comprehensive security.  A passing PCI-DSS score does not ensure actual compliance either.  It is a good starting point to ensure that the bare-minimum, common-sense security controls are implemented at a single point in time, but good security practices must spread out from the center.  If your security efforts don’t include other servers and the workstations that access them AND the Internet, you are not managing security, you are faking it for compliance’s sake.  Russian roulette with a fully loaded gun.

Adobe Sandboxes Flash in Firefox

I am happy to post that Adobe has released beta code for sandboxing Flash content within Firefox.  Sandboxing is an excellent way to isolate ancillary code from the operating system and other applications.  I have been using it for years to keep my browser and its myriad vulnerabilities isolated after experimenting with it in malware analysis.  It just makes sense to contain the raft of cruft that tends to come in from an uncontrolled, but necessary, network like the Internet.

It is not a foolproof method for containing all malware or avoiding malicious content, but it cuts down significantly on the impact of what mal-content can do by restricting its reach, and it increases the cost, package size, and effort required on the part of the bad guys to get through an additional layer of defense.  Every defensive layer that they have to identify and circumvent presents another opportunity to discover and analyze their attack code…
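
Adobe’s implementation relies on a broker process and operating-system-level restrictions that I won’t pretend to reproduce here.  As a crude, Linux-only illustration of the underlying least-privilege idea, here is a sketch that runs a risky job in a child process with a stripped environment and hard resource limits; the command, file path, and limit values are placeholders, not anything Adobe does.

import resource
import subprocess

def clip_child():
    # Applied in the child just before exec: cap CPU time, memory, and open
    # files so a misbehaving or exploited child mostly hurts only itself.
    resource.setrlimit(resource.RLIMIT_CPU, (5, 5))
    resource.setrlimit(resource.RLIMIT_AS, (256 * 1024 * 1024, 256 * 1024 * 1024))
    resource.setrlimit(resource.RLIMIT_NOFILE, (32, 32))

def run_untrusted(cmd):
    proc = subprocess.run(
        cmd,
        preexec_fn=clip_child,    # install the limits in the child process
        env={},                   # start from an empty environment
        timeout=30,
        check=False,
    )
    return proc.returncode

if __name__ == "__main__":
    # Hypothetical example: examine a suspicious file with an external tool.
    print(run_untrusted(["/usr/bin/file", "/tmp/suspicious.swf"]))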

Adobe used elements of Google’s Chrome sandboxing technology in its Reader code after a flurry of vulnerability announcements and high profile attacks targeting the application.  Adobe says that since its launch in November 2010, they have not seen a single successful exploit in the wild against Adobe Reader X, where they initially offered sandboxing technology.

The new code currently supports Firefox 4.0 or later running on Windows 7 or Vista.  Adobe promises wider browser protection soon.  More details will be given at the CanSecWest security conference in Vancouver, BC next month.  I sure would like to attend this conference.  Maybe I will meet some of you there?!

UPDATE:  ComputerWorld reports that IE is next on Adobe’s list to “sandbox” its popular Flash Player within browsers, Adobe’s head of security said today.

How Was FBI Call Compromised?

I am pretty sure that everybody knows that the FBI and Scotland Yard were embarrassed recently by the notorious hacking group Anonymous, when it spilled the beans that it was now watching the watchers, listening in to a confidential phone call taking place between investigators across the pond.  If you haven’t heard it, find it here.  The New Statesman has an overheated article here that can provide additional details.

So how did this brazen and seemingly high-tech hack take place?  A conference call was arranged two weeks earlier by FBI agent Timothy Lauster, who wanted to discuss ongoing investigations into Anonymous and other hacktivist groups.  In an email to Scotland Yard’s e-crimes unit, the time, date and phone number to call were provided, along with the pass code for entry.

Secure Coding Practices

Here is a list of Secure Coding Standards links from Source Code Auditing, Reversing, Web Security, re-posted here for my own easy reference.  Code review is admittedly not (currently) my strong suit.  I have done some old school reverse engineering in the lab back in the day, and messed around with static and behavioral analysis, even done some 3D game programming, but I am still a n00b.

If you have any more, please add them in the comments.

Metrics. Not Just For Breakfast Anymore

Over the past couple of years, I have found myself being drawn back to my IT roots, looking to solve the same old problems that plagued IT when I was so much younger, had a full head of hair, and had yet to learn that I hadn’t learned it all quite yet.  Back in the day, my boss asked me how the systems were running, and how IT was performing.

I thought a moment, and responded, “All of the systems appear to be running well, we haven’t had any downtime lately, and the server room is humming along nicely.”  He waited.  I broke the silence with “It’s all good.”  My boss, being the patient and well mannered fellow that he was, reiterated, “So the systems are all up, but how is IT doing?  Are we at capacity on any of the systems, and are our processes working like they should?”  I couldn’t respond honestly, so I admitted it.  He had never asked me before how our processes were working, so it must have been all that golf he had been playing lately that had gotten to him.  We were blind to whether we were doing the right things, and doing them well or poorly.  My engineers and I had put together some fantastic systems and processes for the company, reliable, scalable, capable, but had forgotten to consider how we would be able to measure when we needed to scale, improve, support, or replace them.  DOH!  We did have basic system health gauges, but that was just for monitoring CPU and RAM thresholds.  Time to think bigger, and smaller.

Why do we collect metrics?  Metrics are a critical component of management, whether of Information Security, Projects, or Programs.  If you aren’t monitoring your exposures and measuring your results, how will you know whether you have been successful?  IT is all about strategy.  We implement systems in order to meet business objectives.  IT systems support the objectives of the business.  The business could still run without IT; much more slowly, less effectively, and less efficiently, but it could still run.  Without metrics, how do you prove the value that your IT or Security team is bringing to the organization?  How do you justify continued spending on improvements, new tools, new technologies?
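
For anyone starting from zero, here is a minimal sketch of the kind of measures I mean: take raw counts you already have and turn them into a handful of numbers you can track month over month.  The field names and figures are invented; a real program would pull them from your ticketing, patching, and monitoring systems.

from dataclasses import dataclass

@dataclass
class MonthlyNumbers:
    systems_total: int
    systems_fully_patched: int
    incidents_detected: int
    incidents_detected_internally: int
    total_hours_to_contain: float

def report(m):
    return {
        "patch_compliance_pct": 100.0 * m.systems_fully_patched / m.systems_total,
        "internal_detection_pct": (
            100.0 * m.incidents_detected_internally / m.incidents_detected
            if m.incidents_detected else 100.0
        ),
        "mean_hours_to_contain": (
            m.total_hours_to_contain / m.incidents_detected
            if m.incidents_detected else 0.0
        ),
    }

# Made-up month: 412 systems, 377 fully patched, 6 incidents, 2 caught internally.
print(report(MonthlyNumbers(412, 377, 6, 2, 41.5)))

Even three numbers like these, trended over time, answer the boss’s question far better than “It’s all good.”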

14 Patches Coming From Microsoft For February

Microsoft will release 14 bulletins for next Tuesday’s update.

3 items are rated “critical” and 11 are rated as “important”.

  • All three critical items deal with remote code execution vulnerabilities in Windows.
  • The important rated bulletins consist of vulnerabilities in Windows, Office, IE, Media Player and Publisher.
    • Seven remote code execution vulnerabilities
    • Three elevation of privileges issues
    • One information disclosure flaw

Get ready to drop some patches next week.  These remote code execution vulnerabilities will only remain “important” for as long as it takes to reverse engineer the patch code and identify the changes.  After that, they become critical.

Six Major Identity & Privacy Trends To Watch

According to Gartner, six major trends will drive identity and access management (IAM) and privacy in 2012.  Businesses will need to increase their focus on projects in that space that can achieve quick value and deliver real benefits to the business.

Organizational boundaries continue to erode due to M&As, converging environments, and outsourcing complexities, and IT’s control continues to weaken as mobile devices and cloud services proliferate.  Identity management is becoming more important than ever.

Six IAM Trends:

  • Tactical identity: The scope and budgets for identity management projects will remain constrained.  A major cause of failure for these projects has been an overly broad scope combined with a lack of focus on business value.
  • Identity assurance: Demands for stronger authentication and more mature practices will intensify.  Organizations need to know who they are trusting, why, and for what.
  • Authorization: Authorization requirements will grow more complex and urgent in response to regulatory pressure and more complex IT and business environments.  The real magic of IAM lies in authorizing access and in the creation of logs used to hold people accountable for their actions.  Authorization and enforcement of access control policies are less mature than other processes in many organizations.  (See the sketch after this list.)
  • The identity bridge: Identity management must span the chasm between organizations. A new architectural component will be needed to manage identity information flows between cooperating companies.
  • The sea of ID tokens: Identity information frequently has to be adapted by each domain that receives it and then passed along to downstream domains.  Identity information is transmitted via tokens.  These tokens may be carried in protocol headers or in protocol payloads.
  • Policy battles: Concerns over identity theft and privacy are alarming the public, and having a serious impact on operations.  The business community, privacy lobby, law enforcement and national security communities will continue to wrangle over laws and regulations, driving continued changes in the identity infrastructure.
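
On the authorization point above, a minimal sketch of the principle, deciding from an explicit policy and logging every decision so people can be held accountable, might look like the following.  The roles, permissions, and log file name are illustrative assumptions, not anything Gartner prescribes.

import logging

logging.basicConfig(filename="access-audit.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

POLICY = {                               # role -> permitted actions on resources
    "clerk":   {"payments:read"},
    "manager": {"payments:read", "payments:approve"},
}

def authorize(user, role, action):
    allowed = action in POLICY.get(role, set())
    # The audit trail is half the value: every decision is attributable.
    logging.info("user=%s role=%s action=%s decision=%s",
                 user, role, action, "ALLOW" if allowed else "DENY")
    return allowed

print(authorize("alice", "clerk", "payments:approve"))   # False, and logged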

As usual, Gartner is right on the money.  Read the entire article to get the details.

Global Security Defence Agenda Report

McAfee and the Security and Defence Agenda (SDA) have revealed their findings in a report that attempts to paint a global view of the current cyber-threat (*sigh* Cyber?  Really?), defensive measures, and an assessment of the road ahead.  The report was created to identify key areas for discussion, highlight trends, and to help governments and organizations understand how their security defense posture compares to others.

This report involved a survey of roughly 250 leading authorities worldwide and interviews with over 80 security experts in government, international organizations and academia.  It is aimed at the “influential layperson”, and deliberately avoids technical jargon.

Some Key Findings:

  • 57% of global experts believe an arms race is taking place in cyber space.
  • 45% of respondents believe that online security is as important as border security.
  • 43% identified damage or disruption to critical infrastructure as the greatest single threat with wide economic consequences.
  • 36% believe information security is more important than missile defense.
  • US, Australia, UK, China and Germany all ranked behind smaller countries for their state of incident readiness.