Enterprise Information Security, is it BROKEN??

An industry reporter asked me a couple of pointed questions recently as part of an interview for a feature article.  He wanted to know whether I felt that Enterprise Information Security was broken, and what could be done to fix it.

“Given the increasing number of denial of service attacks, Java exploits, break-ins, malware delivered by spam, etc., is Enterprise Security broken?”

No, I don’t believe that Enterprise Security is broken.  I do believe that some of the fundamental assumptions that we in the Information Technology industry made early on in IT and communication development were flawed and are now being abused.  Enterprise Information Security is a strategic model whose intent is to formalize and promote security practices in a consistent manner across an organization, and that remains a fundamentally correct objective.

One of the biggest concerns that I have had over my 30+ year IT career has been that of consistency.  Remember that Information Security as a recognized discipline didn’t exist when Information Technology was born, and came about well after IT and communications technology had started to mature.  We built the protocols at the heart of TCP/IP to focus on resilience, continuity, and speed.  The naive belief was that if a set of rules was cast that delivered reliable communication, the job was pretty much done.  The entire concept was based on trust.  What else could you possibly want?

What was missing was consideration of the human factor: an authentication layer, non-repudiation criteria, the guarantee of confidentiality, the assurance of data integrity, and the practices of controlled access and least privilege.
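Some of those missing pieces can at least be approximated today at higher layers.  As one illustration, a keyed hash supplies the data-integrity and origin assurances the base protocols lack; this sketch assumes a pre-shared key, which is itself a distribution problem TCP/IP never solved:

```python
import hmac
import hashlib

SHARED_KEY = b"example-shared-secret"  # hypothetical pre-shared key

def sign(message: bytes) -> bytes:
    """Attach an HMAC tag so the receiver can verify integrity and origin."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Constant-time comparison avoids timing attacks on the tag."""
    return hmac.compare_digest(sign(message), tag)

msg = b"transfer $100 to account 42"
tag = sign(msg)
print(verify(msg, tag))          # the untampered message verifies
print(verify(msg + b"0", tag))   # any alteration fails verification
```

Nothing here is confidential, of course; that piece would be layered on separately with encryption.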

People are creative, curious, and in many cases, selfish creatures.  If they find a weakness in an application, or a way to take advantage of a process that will provide them with notoriety, wealth, or some other desired benefit, I guarantee that it will be exploited.  Look at how games get hacked for online gold, extra advantage, or simply bragging rights, to underline the problem.  The abuser doesn’t consider, or perhaps even care, that the author views the game as years of work and a revenue stream, and doesn’t gauge the impact that player actions have on the developers’ livelihood.  They just want the desired item.

Until we can replace or rebuild the TCP/IP suite with those missing pieces at its core, we need to put in place a governance and architectural model, policies, processes, standards, controls, and guidance that, taken together, provide a consistent information security architecture.  That architecture should apply evenly across the enterprise, not only to this group or that region, and should be able to manage and adapt to the disruptive factors that will make up our IT world in the future.

“What are some of these recent disruptive factors?”

  • BYOD – Employees recently fell in love with the idea of using their own smartphones and tablets for work.  Management embraced the concept, since it enhanced the bottom line, eliminating the need to purchase and maintain hardware that tends to become obsolete within a calendar year anyway. 

BYOD introduced consumer tech into the enterprise, and although I, like others, resisted it, we all knew it was inevitably going to happen.  These new consumer devices come with all of the warts you would expect from a consumer device: no standard image, little focus on security and data protection, few points of control, fewer points of integration, and no separation of personal versus corporate identities.

Employees are just now beginning to question how deep they will let work intrude into their personal lives.  Did IT just turn their beloved smartphone into a tracking device?  Can the company now monitor and examine their personal emails, chats, and browsing habits?   Employees are beginning to resent that personal time is now becoming potentially unpaid work time.  Managing these challenges must be part of the new Information Security Architecture.

  • Malware – Malicious software has evolved from a nuisance to a plague.  It’s been monetized, and has grown into a full-blown industry unto itself.  Malware is now custom developed, the developers are organized, and they coordinate their efforts.  Some of them specialize, and offer their services to one another, mercenary style.  Our vendors need to do the same, and change the model from signature-based detection to signature, characteristic (white-listing), and behavior-based protection.  All of them, not one of them.

Vendors also need to move away from the “backwards compatible with everything” development model.  Bloating code to support multiple Operating Systems, especially those that are no longer being developed or supported by their creators, perpetuates vulnerabilities on several fronts.  It potentially brings all of the previous versions’ vulnerabilities into the new version, it perpetuates the existence of out dated software amongst businesses and home users, and it complicates business processes like asset and license management.  All of these result in a larger attack surface to be exploited, and liabilities to customer organizations.

Malware distribution is undergoing a major shift: away from wide distribution for maximum effect in a target-rich environment, and away from quick-in, acquire-target, quick-out blitzing strategies, toward custom-made code with no signature available, targeted at a specific industry, business, or user to limit solution development, and placed where it will be most effectively consumed by the target.  The new malware is tweaked to avoid detection, doing nothing observably destructive and maintaining a discreet profile for as long as possible.  It stays in the environment, collecting information, trickling out intelligence, and potentially offering backdoor access for its author or owner.  These little nasties tend to stay embedded within an organization for years.

  • Data Leakage –  I used to worry about the impact malware had: the downtime it incurred, the mess it made, and the time it took to clean up after an infection.  Incident Response, Business Continuity and Disaster Recovery practices have matured, alleviating the bulk of those concerns, and now I don’t have to worry as much about what sort of malware gets into the environment.  Over the years, I have adopted an attitude that concerns itself more and more with egress management.  I now worry more about what data is getting out.  In order to maximize my nightly pillow time, I develop or procure capabilities to monitor traffic flows, and to identify the types of documents, contents of documents, and other materials that should not be leaving the network.

The challenges here are accounting for every egress method, every potential removal vehicle, and every characteristic that makes a document sensitive, and dealing with each one in an appropriate and manageable fashion.  Electronic communications are the low-hanging fruit; they are easily monitored.  It is the physical devices that pose the greatest challenges.
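As a toy illustration of the electronic side of egress monitoring, here is a minimal outbound-content filter.  The patterns and labels are hypothetical stand-ins; real data-loss-prevention tooling relies on document fingerprints, classification labels, and far richer rule sets:

```python
import re

# Hypothetical patterns a DLP filter might flag.  Real deployments use
# fingerprinting and classification, not just regular expressions.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "confidential_marking": re.compile(r"\bcompany confidential\b", re.I),
}

def scan_outbound(payload: str) -> list[str]:
    """Return the names of any sensitive patterns found in outbound text."""
    return [name for name, pat in SENSITIVE_PATTERNS.items()
            if pat.search(payload)]

print(scan_outbound("Quarterly report - Company Confidential"))
print(scan_outbound("Lunch at noon?"))
```

A production filter would sit inline at the gateway (or on the endpoint) and decide whether to block, quarantine, or just log each hit.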

  • Next Generation Firewalls – The Internet Protocol suite was built to support communication using a set of rules, identifying specific ports and protocols, packet and frame sizes, and expecting specific content to be in each frame.  The developers assumed that applications and people would operate within those rules.  We also assumed that technology would present a perimeter that could be easily controlled and managed.  If the protocol used matched the port designated for it, and that port/protocol set was allowed to pass through the firewall, it was all good.  Unfortunately, attackers do not play by those rules.  They use them against us.

Next Generation Firewalls are emerging that analyze relationships and behaviors.  They inspect traffic to ensure that someone or something is accountable for each packet on the network, that it fits within an expected data request stream, conforms to much more granular rules based on expected and observed behavior, and that it is shaped and formed the way the rules expect it to be.
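To make the idea concrete, here is a deliberately naive sketch of one such check: flagging traffic whose content doesn’t match what its port implies.  A real next-generation firewall performs deep, stateful inspection rather than prefix matching, so treat this purely as an illustration:

```python
# Toy check: does the payload look like the protocol the port implies?
HTTP_METHODS = (b"GET ", b"POST ", b"HEAD ", b"PUT ", b"DELETE ", b"OPTIONS ")

def looks_like_http(first_bytes: bytes) -> bool:
    """Crude test: HTTP requests begin with a known method keyword."""
    return first_bytes.startswith(HTTP_METHODS)

def evaluate(port: int, first_bytes: bytes) -> str:
    """Flag flows whose content doesn't match the port's expected protocol."""
    if port == 80 and not looks_like_http(first_bytes):
        return "alert: non-HTTP traffic on port 80"
    return "pass"

print(evaluate(80, b"GET /index.html HTTP/1.1"))  # conforming traffic passes
print(evaluate(80, b"\x16\x03\x01"))              # something tunneled over 80
```

The point is the shift in question: not “is port 80 open?” but “is what’s flowing over port 80 actually behaving like web traffic?”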

  • The Cloud – Every silver lining has a cloud, and every cloud has security implications.  We experimented in the past with out-sourcing our IT worker bees in order to save costs.  In some places that was successful, in others not so much.  We are now doing the same thing with applications, services, data, and infrastructure.  The risks to those assets remain the same, but we are now concentrating them alongside many other organizations’ assets in one place, giving up visibility and control while increasing the value of the hosting target.

The arguments make sense: we are not an IT company, so why do we need to invest in so much hardware, software, and staff to maintain it?  Someone else can do this better, focus entirely on it, and save us money by providing it to the masses as a Service.  The other side of the coin is that the risks don’t go away and the liabilities don’t go away, but the ability to directly control and manage the out-sourced entities becomes more difficult.  Accountability becomes fuzzy, but ultimately lies with the data owner, not the hosting company.  In a cloud-based model, you are trusting someone else to do a better job of managing and protecting your data, trusting them not to misuse it, and trusting them to provide access to the right people while blocking the wrong folks.  Audit and compliance issues become evident.

Ultimately, if this new juicy data target is breached by someone attacking you or one of the many other customers of the service, your data may be exposed, and your business is liable and accountable.  Your data may not even be exposed, but if you use the breached vendor’s services, the perception may be that you were breached.  Your customers won’t care whether the breach happened at your data center or your provider’s.  You were trusted with their data, and it was at risk of exposure on your watch.  You may also increase your dependency on the cloud service, and that increases your susceptibility to denial of service attacks.

  • Attacker Motivation & Capability – The enemy has found that those annoying virus and worm techniques developed in the past for notoriety or destructive power can be used for financial gain and espionage, and they have gotten organized.  The dark side has put forth significant effort into developing a diverse set of tools, expertise, and strategies.  We need to model our defenses after those of the attackers.  Vendors need to start integrating, working together, and providing the enterprise with consumable, actionable, accurate intelligence about what is going on inside and outside of their networks.  SIEM is a step in the right direction, but let’s not stop walking forward.
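As a rough sketch of the kind of correlation a SIEM performs, consider flagging a successful login that follows a burst of failures from the same source.  The event stream and threshold below are invented for illustration; real SIEM rules correlate across many log sources and time windows:

```python
from collections import defaultdict

# Hypothetical authentication events: (source_ip, outcome) in time order.
events = [
    ("10.0.0.5", "fail"), ("10.0.0.5", "fail"), ("10.0.0.5", "fail"),
    ("10.0.0.5", "fail"), ("10.0.0.5", "fail"), ("10.0.0.5", "success"),
    ("10.0.0.9", "fail"), ("10.0.0.9", "success"),
]

THRESHOLD = 5  # failures before a subsequent success becomes suspicious

def correlate(stream):
    """Emit an alert when a source succeeds after THRESHOLD+ failures."""
    failures = defaultdict(int)
    alerts = []
    for src, outcome in stream:
        if outcome == "fail":
            failures[src] += 1
        else:
            if failures[src] >= THRESHOLD:
                alerts.append(f"possible brute force from {src}")
            failures[src] = 0
    return alerts

print(correlate(events))  # flags 10.0.0.5, ignores the single-typo user
```

The value isn’t in any one event; it’s in the relationship between events, which is exactly the visibility the paragraph above is asking vendors to deliver.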

 “Do we need a fundamental change in the way enterprises approach/design security?”

Here, I would say yes, and I believe that this change has been cooking along for quite some time in a very slow, “bolt-it-on” fashion.  Technology changes seem to be revolutionary, coming out of nowhere and establishing themselves quickly in response to disruptive factors and needs.  Changes in protection capabilities tend to be evolutionary, taking their own sweet time to develop and mature in reaction to unforeseen circumstances that arise post-implementation of technology.  Physicist Niels Bohr said, “Prediction is very difficult, especially if it’s about the future.”

We in IT as an industry, and businesses in general, need to realize that the perimeter is continuing to melt.  We need to focus on monitoring the network and protecting the data, to insist on integration and increased visibility, and to demand built-in security from our products, vendors, service providers, and business partners.  Enterprise Information Security offers a conduit, through architecture and governance, to provide a well-thought-out strategy that can adapt and react to disruptive advancements in technology.  It lays the groundwork, and operates best by implementing consistent governance over people, processes, and technology at the enterprise level, for the purpose of supporting the management, operation, and protection of information and assets.

2011 PCI Breach Research

There is a very good article regarding research into 2011 breach statistics by Trustwave over at InfoWorld Security Central.  A great source for much IT & Security information, by the way.  According to the article, hackers infiltrated 312 businesses, making off with customer payment-card information.  In 76 percent of cases, their access point was a third-party vendor remote-access application or a VPN set up for remote systems maintenance.  Seventy-six percent!  These external ingress paths introduced security deficiencies that attackers exploited.

The vast majority of the 312 companies were retailers, restaurants or hotels, and they came to Trustwave for incident response help after one of the payment-card organizations traced stolen cards back to their businesses, demanding a forensics investigation within a matter of days.  Only 16% of the 312 companies detected the breach on their own!

The businesses hit claimed to be compliant with Payment Card Industry (PCI) security standards, when in reality there were gaps.  The remote-access provisions were poorly protected by simple, re-used, shared, and seldom changed passwords.

I will leave the scariest statistic, how long the attackers were able to maintain their ownership of the networks in these cases, for you to seek out yourself on the second page of the article.  It is not a happy number!

The lesson to take away from this article is that PCI compliance is the bare minimum an organization should do, and DOES NOT equate to comprehensive security.  A passing PCI-DSS score does not ensure actual compliance either.  It is a good starting point to ensure that bare-minimum, common-sense security controls are implemented at a single point in time, but good security practices must spread out from the center.  If your security efforts don’t include other servers and the workstations that access them AND the Internet, you are not managing security, you are faking it for compliance’s sake.  Russian roulette with a fully loaded gun.
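Spreading security out from that center starts with knowing where cardholder data actually lives.  Here is a minimal sketch of scanning text for primary account numbers, using the Luhn checksum to weed out random digit strings; real PCI discovery tools are far more thorough, but the principle is the same:

```python
import re

# Candidate: 13-16 digits, optionally separated by spaces or dashes.
CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(number: str) -> bool:
    """Luhn checksum used by payment card numbers."""
    total = 0
    for i, ch in enumerate(number[::-1]):
        d = int(ch)
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_pans(text: str) -> list[str]:
    """Return digit strings that both look like and checksum like a PAN."""
    hits = []
    for m in CANDIDATE.finditer(text):
        digits = re.sub(r"[ -]", "", m.group())
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            hits.append(digits)
    return hits

# 4111... is a well-known Visa test number, so it passes the checksum.
print(find_pans("debug log: card 4111 1111 1111 1111 charged OK"))
```

If a sweep like this turns up card numbers in log files, spreadsheets, or database dumps outside your cardholder data environment, your PCI scope is bigger than your assessment assumed.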

Adobe Sandboxes Flash in Firefox

I am happy to post that Adobe has released beta code for sandboxing Flash content within Firefox.  Sandboxing is an excellent way to isolate ancillary code from the operating system and other applications.  I have been using it for years to keep my browser and its myriad vulnerabilities isolated, after experimenting with it in malware analysis.  It just makes sense to contain the raft of cruft that tends to come in from an uncontrolled, but necessary, network like the Internet.

It is not a foolproof method for containing all malware or avoiding malicious content, but it cuts down significantly on the impact of what mal-content can do by restricting its reach, and it increases the cost, package size, and effort required on the part of the bad guys to get through an additional layer of defense.  Every defensive layer that they have to identify and circumvent presents another opportunity to discover and analyze their attack code…
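In its simplest form, the idea can be demonstrated by running untrusted work in a separate process with hard resource caps.  This Unix-only sketch limits just CPU and memory; a real sandbox like the one in Chrome or Reader X also restricts syscalls, file access, and the network:

```python
import resource
import subprocess
import sys

def run_limited(code: str, cpu_seconds: int = 2,
                mem_bytes: int = 256 * 1024 * 1024):
    """Run untrusted Python in a child process capped on CPU and memory.

    Only a resource cap, not a true sandbox: a full implementation also
    brokers syscalls and file access, as browser sandboxes do.
    """
    def limit():
        # Applied in the child, after fork and before exec.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))

    return subprocess.run(
        [sys.executable, "-c", code],
        preexec_fn=limit,
        capture_output=True,
        timeout=cpu_seconds + 5,   # wall-clock backstop
        text=True,
    )

result = run_limited("print('hello from the sandbox')")
print(result.stdout.strip())
```

Even this crude containment illustrates the payoff described above: a runaway or malicious payload exhausts its own little box instead of the host.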

Adobe used elements of Google’s Chrome sandboxing technology in its Reader code after a flurry of vulnerability announcements and high profile attacks targeting the application.  Adobe says that since its launch in November 2010, they have not seen a single successful exploit in the wild against Adobe Reader X, where they initially offered sandboxing technology.

The new code currently supports Firefox 4.0 or later running on Windows 7 or Vista.  Adobe promises wider browser protection soon.  More details will be given at the CanSecWest security conference in Vancouver, BC next month.  I sure would like to attend this conference.  Maybe I will meet some of you there?!

UPDATE:  ComputerWorld reports that IE is next on Adobe’s list to “sandbox” its popular Flash Player within browsers, Adobe’s head of security said today.

How Was FBI Call Compromised?

I am pretty sure that everybody knows that the FBI and Scotland Yard were embarrassed recently by the notorious hacking group Anonymous, when it spilled the beans that it was now watching the watchers, listening in to a confidential phone call taking place between investigators across the pond.  If you haven’t heard it, find it here.  The New Statesman has an overheated article here that can provide additional details.

So how did this brazen and seemingly high tech hack take place?  A conference call was arranged two weeks earlier by FBI agent Timothy Lauster, who wanted to discuss on-going investigations into Anonymous and other hacktivist groups.  In an email to Scotland Yard’s e-crimes unit, the time, date and phone number to call were provided, along with the pass code for entry.

Secure Coding Practices

Here is a list of Secure Coding Standards links from Source Code Auditing, Reversing, Web Security, re-posted here for my own easy reference.  Code review is admittedly not (currently) my strong suit.  I have done some old-school reverse engineering in the lab back in the day, messed around with static and behavioral analysis, and even done some 3D game programming, but I am still a n00b.

If you have any more, please add them in the comments.

Metrics. Not Just For Breakfast Anymore

Over the past couple of years, I have found myself being drawn back to my IT roots, looking to solve the same old problems that plagued IT when I was so much younger, had a full head of hair, and had yet to learn that I hadn’t learned it all quite yet.  Back in the day, my boss asked me how the systems were running, and how IT was performing.

I thought a moment, and responded, “All of the systems appear to be running well, we haven’t had any downtime lately, and the server room is humming along nicely.”  He waited.  I broke the silence with “It’s all good.”  My boss, being the patient and well-mannered fellow that he was, reiterated, “So the systems are all up, but how is IT doing?  Are we at capacity on any of the systems, and are our processes working like they should?”  I couldn’t respond honestly, so I admitted it.  He had never asked me before how our processes were working, so it must have been all that golf he had been playing lately that had gotten to him.

We were blind to whether we were doing the right things, and doing them well or poorly.  My engineers and I had put together some fantastic systems and processes for the company: reliable, scalable, capable.  But we had forgotten to consider how we would measure when we needed to scale, improve, support, or replace them.  DOH!  We did have basic system health gauges, but those just monitored CPU and RAM thresholds.  Time to think bigger, and smaller.

Why do we collect metrics?  Metrics are a critical component of management, whether of Information Security, Projects, or Programs.  If you aren’t monitoring your exposures and measuring your results, how will you know whether you have been successful?  IT is all about strategy.  We implement systems in order to meet business objectives.  IT systems support the objectives of the business.  The business could still run without IT; much slower, less effectively, and less efficiently, but it could still run.  Without metrics, how do you prove the value that your IT or Security team is bringing to the organization?  How do you justify continued spending on improvements, new tools, new technologies?
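Even a single honest availability number beats “it’s all good.”  A back-of-the-envelope sketch, with invented outage figures, of the simplest metric I should have had ready for my boss:

```python
# Toy uptime metric: availability over a reporting period from outage minutes.
# The 30-day month and the outage list are hypothetical numbers.
period_minutes = 30 * 24 * 60        # one 30-day reporting period
outage_minutes = [12, 45, 3]         # recorded downtime incidents

downtime = sum(outage_minutes)
availability = 100 * (period_minutes - downtime) / period_minutes
print(f"availability: {availability:.3f}%")   # prints availability: 99.861%
```

From there you grow outward (and inward): capacity trends, mean time to detect, patch latency, control coverage, whatever lets you answer “how is IT doing?” with evidence instead of a shrug.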