Some of you might recall the 2012 story of a man who port-scanned the entire internet using the Nmap Scripting Engine. What started as a joke evolved into a quick, efficient distributed port scanner that logged into devices using default telnet credentials, uploaded a binary, continued scanning across networks, and reported data about the devices it found.
That project yielded a wealth of vulnerability data, useful for studying the overall security of internet-connected devices. Fast-forward to the present day: security researchers at RedHunt Labs have undertaken their own internet-wide scan.
Their findings were staggering: the researchers discovered more than 1.6 million public-facing sensitive data points, secrets leaked by websites. That figure includes upward of 395,000 secrets exposed by the top one million popular domains.
The task was carried out in two mammoth scans. The first targeted the one million websites that attract the most traffic, a sensible approach: analyzing the security of the world's most popular websites amounts to hunting for critical bugs that could affect millions of users.
Two “Mammoth Scans” Revealed Thousands of Alarming Secrets
This scan produced 395,713 points where secrets were exposed, most of them related to Google services such as reCAPTCHA, Google Cloud, and Google OAuth, the last of which can allow a threat actor to bypass authentication.
Nearly 500 million hosts were scanned during the second scan, which exposed 1,280,920 secrets. They were largely associated with Stripe, the financial payments platform, along with Google reCAPTCHA, Google Cloud API, AWS, and Facebook.
Add to the mix today's standard web applications, which often embed API keys, cryptographic keys, and other credentials in client-side source code such as JavaScript files, and the scope of the exposure should cause alarm.
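To make that concrete, here is a minimal sketch in Python of the kind of regex-based secret scanning this sort of research relies on. This is not RedHunt Labs' actual tooling: the URL is a placeholder, and the three patterns are well-known key formats chosen for illustration, whereas real scanners ship hundreds of rules plus entropy checks.

```python
import re
import urllib.request

# Illustrative patterns for a few well-known key formats. Production
# scanners carry far larger rule sets and add entropy-based detection.
SECRET_PATTERNS = {
    "AWS access key ID":  re.compile(r"AKIA[0-9A-Z]{16}"),
    "Google API key":     re.compile(r"AIza[0-9A-Za-z_-]{35}"),
    "Stripe live secret": re.compile(r"sk_live_[0-9a-zA-Z]{24,}"),
}

def scan_js(url: str) -> list[tuple[str, str]]:
    """Fetch a JavaScript file and return (rule name, matched string) pairs."""
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8", errors="replace")
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(body):
            hits.append((name, match))
    return hits

if __name__ == "__main__":
    # Placeholder target: a bundled front-end script served from a CDN.
    for name, secret in scan_js("https://example.com/static/bundle.js"):
        print(f"[!] {name}: {secret}")
```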
Security researcher Pinaki Mondal explained in a blog post that the number of secrets the scans revealed through the front ends of hosts is alarmingly high.
Mondal states:
“Once a valid secret gets leaked, it paves the path for lateral movement amongst attackers, who may decide to abuse the business service account leading to financial losses or total compromise.”
The vulnerable points mostly exist in JavaScript served through content delivery networks (CDNs). What's more, the CDN of the popular website-builder platform Squarespace alone accounted for more than 197,000 exposures.
According to Mondal, the extensive exposure stems from what he describes as a decades-old issue of leaked data rooted in the “complexities of the software development lifecycle.” He added: “As the code base enlarges, developers often fail to redact the sensitive data before deploying it to production.”
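The standard remedy is equally old: keep secrets out of anything that ships to the client. As a minimal illustration (the variable name STRIPE_SECRET_KEY is hypothetical), a server-side module can resolve credentials from the environment at runtime instead of hardcoding them:

```python
import os

# Anti-pattern: a literal key in source code. If this file, or a bundle
# built from it, reaches the browser or a public repo, the key is burned.
# STRIPE_KEY = "sk_live_..."

# Safer: resolve the secret from the server-side environment at runtime,
# so it never appears in front-end JavaScript or version control.
STRIPE_KEY = os.environ["STRIPE_SECRET_KEY"]
```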
Hackers Love Google for Many Reasons
That statement is interesting because 77% of the exposures were related to front-end JavaScript files on webpages served through Google. Hackers and Google have an interesting relationship. What is it that hackers are so often searching for?
Vulnerabilities. Thus, Google can disclose to an inquisitive hacker a wealth of sensitive information.
To a hacker, Google isn't just a supreme search engine. For us, it is a door opening onto a nexus of data intended to be private, from exposed website directories to proprietary company memos, to nanny cameras and a variety of IoT devices.
The alarming amount of data the RedHunt scans found leaking from Google-served pages will certainly not surprise most hackers, who frequently use Google services themselves. Using search-specific syntax, they can winnow vast numbers of search results down to specific vulnerabilities.
For example, years ago I routinely used Google Dorks: advanced search query strings that can surface very specific information not normally exposed by websites. One of my favorite pastimes was enumerating a list of websites with public-facing directories that might allow write privileges.
Google Dorks, a practice also called Google Hacking, allow threat actors to quickly locate sensitive information that is not kept in a secure environment, such as usernames, passwords, unprotected files, and financial data.
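A few widely documented examples give the flavor. The operators (site:, intitle:, inurl:, filetype:, intext:) are real Google search syntax, while the specific targets below are illustrative:

```
intitle:"index of" "parent directory"          # open directory listings
filetype:env intext:DB_PASSWORD                # exposed .env configuration files
inurl:wp-config.php.bak                        # stray backups of WordPress configs
site:example.com filetype:sql intext:"INSERT INTO"   # hypothetical dumped SQL on one domain
```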
It is not uncommon for hackers to enumerate lists of websites and services running on outdated, unpatched servers and operating systems that are unprotected against known exploits.
Often, while researching Google Dorks, I would come across vast numbers of website homepages defaced by Google hackers who had compromised the sites simply because every web server harbored the same vulnerability, and Google had conveniently listed them all in one place.
In the same vein as the man who scanned the entire internet with his distributed port scanner, I used to fantasize about a possible future: one in which an unnamed white hat creates a stealthy, all-intrusive, vaccine-like virus that could “infect” every device on the web, patch every hole, eradicate every threat, and prompt users to change weak or exposed passwords.
It would be a legal fiasco, but it would also be the equivalent of a cyber Good Samaritan helping to protect the world from bad actors. Until a cyber guardian like that arises, or people and companies alike adopt better security policies with competent implementation of today's standard security practices, vulnerabilities will prevail, and so will the bad actors who find them.
An article by Jesse McGraw
Edited by Anne Caminer