Automated Google-Hacking

Hackers are conducting reconnaissance efforts on a massive scale.  Attackers are increasingly leveraging the power of search engines such as Google to probe and enumerate vulnerable websites, according to a report by Imperva.  Google and other search engines have put “anti-automation” measures in place to hamper search engine abuse, but staying ahead of a determined opponent is proving to be quite challenging.

In a technique dubbed “Google Hacking,” attackers run specially crafted search queries from their botnet zombies’ browsers, generating tens of thousands of queries a day while bypassing the deterrent measures.  The aim of these queries is to identify potential attack targets and to build an accurate picture of the resources on each server that could be exposed.  By automating the queries and using zombies to distribute the load and parse the results, the attacker can run a very large number of searches, filter the returned results, and end up with a short list of potentially exploitable sites, all in very little time and with minimal effort.  The botnet’s dispersed nature gives the search engine the impression that individual users are performing routine searches.
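To see why the distribution defeats per-source rate limits, consider a rough back-of-the-envelope sketch.  The figures below are illustrative assumptions, not numbers from the report:

```python
# Illustrative arithmetic only: all figures are assumed, not taken from the report.
TOTAL_QUERIES_PER_DAY = 50_000   # campaign-wide dork queries the attacker wants to run
BOTNET_SIZE = 10_000             # compromised hosts available to issue them

queries_per_bot = TOTAL_QUERIES_PER_DAY / BOTNET_SIZE
print(f"Each bot issues ~{queries_per_bot:.0f} queries/day")  # ~5 per day

# A handful of searches per day from a residential IP looks like an ordinary
# user, so per-source anti-automation thresholds are never tripped.
```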

Most search engines can be directed to return results focused on specific potential targets through a set of query operators.  For example, an attacker may focus on all potential victims in a specified geographic location (e.g. per country), in which case the query includes a “location” search operator.  In another scenario, an attacker may want to target all vulnerabilities in a specific web site, and achieves this by issuing different queries containing the “site” search operator.  Only those sites that expose that particular weakness or use that specific code will appear in the results.
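As a concrete illustration, here is how such operator-based queries might be assembled.  The `site:`, `inurl:`, `filetype:` and `intitle:` operators are Google’s documented query syntax; the specific dork strings are made-up examples, not ones cited in the report:

```python
# Hypothetical dork templates combining documented Google query operators.
# The vulnerable-path patterns below are invented for illustration.
dork_templates = [
    'site:{target} inurl:admin',            # exposed admin consoles
    'site:{target} filetype:sql',           # database dumps left in the web root
    'site:{target} intitle:"index of"',     # open directory listings
]

def build_queries(target: str) -> list[str]:
    """Expand each template against a single target domain."""
    return [t.format(target=target) for t in dork_templates]

print(build_queries("example.com"))
```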

From the report, here is the Hacker’s 4 Step Industrialized Attack:

  1. Get a botnet. This is usually done by renting a botnet from a bot farmer who has a global network of compromised computers under his control.
  2. Obtain a tool for coordinated, distributed searching. This tool is deployed to the botnet agents, and it usually contains a database of “dorks”: pre-built search queries designed to locate specific vulnerable pages or exposed files.
  3. Launch a massive search campaign through the botnet. The report’s observations show an automated infrastructure that controls the distribution of dorks and the examination of results across the botnet.
  4. Craft a massive attack campaign based on the search results. With the list of potentially vulnerable resources, the attacker can create a script, or use a ready-made one, to craft targeted attack vectors that attempt to exploit vulnerabilities in pages retrieved by the search campaign. Attacks include infecting web applications, compromising corporate data, or stealing sensitive personal information.

So what does this add to your Monitoring and Incident Response strategy?  Not much, really.  The results at the target site won’t be readily apparent until an attack actually takes place.  The precursors to the attack would most likely look just like any other probing activity: increased traffic, scans and enumeration attempts.  Pretty much the same things that every publicly addressable IP sees on a daily basis.  If you are watching an externally exposed IDS, you might get lucky and notice an uptick, but you won’t necessarily know what the potential vulnerability or exploit payload is going to be.
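If you do want to watch for that uptick, a rough log-analysis sketch along the following lines could help.  The log format and the threshold are assumptions made for illustration, not recommendations from the report:

```python
# Minimal sketch: flag source IPs whose request volume suddenly exceeds a baseline.
# Assumes a combined-format access log; the 3x-baseline factor is arbitrary.
from collections import Counter

def count_requests_by_ip(log_lines):
    """Tally requests per source IP from combined-format access log lines."""
    counts = Counter()
    for line in log_lines:
        ip = line.split(" ", 1)[0]
        counts[ip] += 1
    return counts

def flag_upticks(today_counts, baseline_counts, factor=3):
    """Return IPs whose volume today is at least `factor` times their baseline."""
    return {
        ip: n for ip, n in today_counts.items()
        if n >= factor * max(baseline_counts.get(ip, 0), 1)
    }

# Usage (hypothetical files):
# with open("access.log.yesterday") as f: baseline = count_requests_by_ip(f)
# with open("access.log.today") as f: today = count_requests_by_ip(f)
# print(flag_upticks(today, baseline))
```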

On the search engine’s side, it may be possible to identify repetitive queries and their sources, and to contact the owners of the offending networks to investigate a potential botnet infection.  That is unlikely to happen, however; even ISPs have been reluctant to get involved in an unpleasant discussion armed with little evidence and facing a hostile customer.
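A search provider looking for this pattern might group identical query strings by the number of distinct source networks issuing them, roughly as sketched below.  The log structure here is an assumption; real anti-automation pipelines are far more involved:

```python
# Sketch: surface query strings issued verbatim from many unrelated networks,
# a hint that a coordinated (botnet-driven) campaign is running them.
from collections import defaultdict

def repetitive_queries(query_log, min_networks=100):
    """query_log yields (query_string, source_ip) pairs; group sources by /24."""
    networks_per_query = defaultdict(set)
    for query, ip in query_log:
        network = ".".join(ip.split(".")[:3])   # crude /24 grouping
        networks_per_query[query].add(network)
    return {q: nets for q, nets in networks_per_query.items()
            if len(nets) >= min_networks}
```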

Organizations should protect their applications from being publicly exposed through search engines.  That recommendation comes straight from the report, but it too is unlikely to be followed, as we all want to advertise our services.  A Web Application Firewall should detect and block attempts to exploit application vulnerabilities, and reputation-based controls could block attacks originating from known malicious sources.
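As a sketch of those last two controls, a simple request filter might consult a reputation blocklist and keep sensitive paths out of search indexes via the standard `X-Robots-Tag` header.  Flask is used here only as an example framework, and the blocklist entries and path prefixes are placeholders; a real WAF does far more:

```python
# Minimal sketch, not a substitute for a real WAF or reputation feed.
from flask import Flask, request, abort

app = Flask(__name__)

BLOCKLIST = {"203.0.113.7", "198.51.100.22"}      # would come from a reputation feed
SENSITIVE_PREFIXES = ("/admin", "/backup")        # paths to keep out of search indexes

@app.before_request
def drop_known_bad_sources():
    # Reputation-based control: refuse requests from known malicious sources.
    if request.remote_addr in BLOCKLIST:
        abort(403)

@app.after_request
def keep_sensitive_paths_unindexed(response):
    # Ask well-behaved crawlers not to index sensitive areas of the application.
    if request.path.startswith(SENSITIVE_PREFIXES):
        response.headers["X-Robots-Tag"] = "noindex, nofollow"
    return response
```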
