Search engines such as Google are increasingly being used by hackers against Web applications that hold sensitive data, according to a security expert.
Even with rising awareness about data security, it takes all of a few seconds to pluck Social Security numbers from Web sites using targeted search terms.
The fact that Social Security numbers are even on the Web is a human error; the information should never be published in the first place. But hackers are using Google in more sophisticated ways to automate attacks against Web sites.
Researchers recently discovered a way to execute a SQL injection attack that appears to come from an IP (Internet Protocol) address belonging to Google.
In a SQL injection attack, a malicious database command is entered into a Web-based form and executed by the Web application. Such an attack can often yield sensitive information from a back-end database or be used to plant malicious code on the Web page.
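As a minimal sketch of the technique, consider a hypothetical login-style lookup against an in-memory SQLite table (the table, column names, and payload below are illustrative, not taken from any real attack described in the article). The vulnerable version pastes form input directly into the SQL string; the safe version uses a parameterized query:

```python
import sqlite3

# Throwaway in-memory database with a hypothetical "users" table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, ssn TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '078-05-1120')")

def find_user_vulnerable(name):
    # UNSAFE: the form input is concatenated straight into the SQL text,
    # so input containing quotes can rewrite the query itself.
    query = "SELECT ssn FROM users WHERE username = '%s'" % name
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # SAFE: a parameterized query treats the input as data, never as SQL.
    return conn.execute(
        "SELECT ssn FROM users WHERE username = ?", (name,)
    ).fetchall()

# A classic payload turns the WHERE clause into a tautology
# ("username = '' OR '1'='1'"), dumping every row in the table.
payload = "' OR '1'='1"
print(find_user_vulnerable(payload))  # leaks all rows
print(find_user_safe(payload))        # matches nothing
```

The same tautology trick works against most database back ends; parameterized queries (or an ORM that uses them) are the standard defense.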
Tools such as Goolag and Gooscan can execute broad searches across the Web for specific vulnerabilities and return lists of Web sites that have those problems.
Another attack method is the so-called Google worm, which uses the search engine to find sites with a specific vulnerability; with a small amount of additional code, those vulnerabilities can then be exploited automatically.
Google and other search engines are taking steps to stop the abuse. For example, Google has stopped certain kinds of searches that could yield a trove of Social Security numbers in a single swoop. It also puts limits on the number of search requests sent per minute, which can slow down mass searches for vulnerable Web sites.
In reality, it just forces hackers to be a bit more patient. Putting limits on search also hurts security professionals who want to do automated daily searches of their Web sites for problems.
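For the legitimate use case described above, a daily automated check of one's own site, the per-minute cap simply means the scanner has to pace itself. A minimal sketch of such pacing (the limit value and class name are illustrative assumptions, not a documented Google quota):

```python
import time

class RateLimiter:
    """Spaces out requests so at most `per_minute` are issued per minute."""

    def __init__(self, per_minute):
        self.interval = 60.0 / per_minute  # seconds between requests
        self.next_allowed = 0.0            # earliest time the next call may run

    def wait(self, now=None, sleep=time.sleep):
        # `now` and `sleep` can be injected for testing; by default we use
        # the monotonic clock and really sleep.
        now = time.monotonic() if now is None else now
        if now < self.next_allowed:
            sleep(self.next_allowed - now)
            now = self.next_allowed
        self.next_allowed = now + self.interval
        return now

# Usage: call limiter.wait() before each search request.
# limiter = RateLimiter(per_minute=30)
# for query in queries:
#     limiter.wait()
#     run_search(query)
```

This keeps an automated audit under the cap instead of having requests rejected mid-run.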
There is another kind of attack called “site masking,” which causes a legitimate Web site to simply disappear from search results.
Google’s search engine penalizes sites that have duplicate content and will drop one from its index. Hackers can take advantage of this by creating a Web site that has a link to a competitor’s Web page but is filtered through a proxy server.
Google indexes the content under the proxy’s domain. If this is done enough times with more proxy servers, Google will consider the targeted Web page a duplicate and drop it from its index.
One defense is for Web site administrators to allow indexing only from a search engine's legitimate IP addresses, so that a proxy impersonating a crawler never sees the content.
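Checking a crawler's identity can be sketched with the reverse-then-forward DNS check that Google publicly recommends for verifying Googlebot: resolve the visiting IP to a hostname, confirm the hostname is under a crawler domain, then resolve that hostname back and confirm it matches the original IP. The function below is a sketch; the lookup callables can be injected for testing, and the suffix list is the one Google documents:

```python
import socket

def is_verified_crawler(ip,
                        allowed_suffixes=(".googlebot.com", ".google.com"),
                        reverse_dns=None, forward_dns=None):
    """True only if `ip` reverse-resolves to a crawler hostname AND that
    hostname forward-resolves back to the same IP (blocks spoofed proxies)."""
    reverse_dns = reverse_dns or (lambda addr: socket.gethostbyaddr(addr)[0])
    forward_dns = forward_dns or socket.gethostbyname
    try:
        host = reverse_dns(ip)
        if not host.endswith(allowed_suffixes):
            return False          # PTR record is not under a crawler domain
        return forward_dns(host) == ip  # forward lookup must round-trip
    except OSError:
        return False              # DNS failure: treat as unverified
```

A proxy can claim a Googlebot User-Agent string, but it cannot make Google's DNS records point at its own address, which is why the round-trip check works where header inspection does not.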