Facebook Exploit Is Not Unique

Facebook isn't unique in the ability to be used to attack a third party; it's just more effective

This week's "bad news" with respect to information security centers on Facebook and the exploitation of HTTP caches to effect a DDoS attack. Reported as a 'vulnerability', this exploit takes advantage of the way the application protocol is designed to work. In fact, the same author who reports the Facebook 'vulnerability' has also shown you can use Google to do the same thing. Just about any site that enables you to submit content containing links and then retrieves those links for you (for caching purposes) could be used in this way. It's not unique to Facebook or Google, for that matter; they just have the perfect environment to make such an exploit highly effective.

The exploit works by using a site (in this case Facebook) to load content, taking advantage of the general principle of amplification to effectively DDoS a third-party site. This is a flood-type attack: it attempts to overwhelm a server by flooding it with requests that voraciously consume server-side resources and slow everyone down, to the point of making the site appear "down" to legitimate users.

The requests brokered by Facebook are themselves entirely legitimate. The requests for an image (or PDF or large video file) are well-formed, and nothing about any individual request could be detected as an attack. This is, in part, why the exploit works: each request is wholly legitimate on its own.

How it Works
The trigger for the "attack" is the caching service. Caches are generally excellent at, well, caching static objects with well-defined URIs. A cache doesn't have a problem finding /myimage.png. It's either there, or it's not and the cache has to go to origin to retrieve it. Where things get more difficult is when requests for content are dynamic; that is, they send parameters that the origin server interprets to determine which image to send, e.g. /myimage?id=30. This is much like an old developer trick to force the reload of dynamic content when browser or server caches indicate a match on the URL. By tacking on a random query parameter, you can "trick" the browser and the server into believing it's a brand new object, and it will go to origin to retrieve it - even though the query parameter is never used. That's where the exploit comes in.

HTTP servers accept as part of the definition of a URI any number of variable query parameters. Those parameters can be ignored or used at the discretion of the application. But when the HTTP server is looking to see if that content has been served already, it does look at those parameters. The reference for a given object is its URL, and thus tacking on a query parameter forces (or tricks if you prefer) the HTTP server to believe the object has never been served before and thus can't be retrieved from a cache.
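
As a minimal, hypothetical sketch of that behavior (no particular product's logic, just the principle): if the cache key is simply the full URL, query string and all, every request carrying a novel parameter misses and goes to origin.

# Hypothetical illustration: a naive cache keyed on the full URL,
# query string included. Every novel query parameter is a miss.
cache = {}
origin_requests = 0

def fetch_from_origin(url):
    global origin_requests
    origin_requests += 1
    return b"...object bytes..."  # stand-in for the real response

def cached_get(url):
    if url not in cache:          # '?r=1' and '?r=2' are different keys
        cache[url] = fetch_from_origin(url)
    return cache[url]

for r in range(1, 1001):
    cached_get("http://target.com/file?r=%d" % r)

print(origin_requests)            # 1000 -- every single request hit the origin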

Caches act on the same principles as an HTTP server because, when you get down to brass tacks, a cache is a very specialized HTTP server focused on mirroring content so it's closer to the user. That's why a note stuffed with image references like these sends every single request back to the origin:

<img src=http://target.com/file?r=1>
<img src=http://target.com/file?r=2>
<img src=http://target.com/file?r=3>
...
<img src=http://target.com/file?r=1000>

Many, many, many, many (repeat as necessary) web applications are built using such models. Whether to retrieve text-based content or images is irrelevant to the cache. The cache looks at the request and, if it can't match it somehow, it's going to go to origin.

That is exactly what's possible with Facebook Notes and Google. By taking advantage of (exploiting) this design principle, a note crafted with multiple image objects retrieved via dynamic queries can, when viewed by enough users at the same time, overwhelm the origin or oversubscribe its network.
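
To put rough, purely hypothetical numbers on that amplification: suppose a note embeds 1,000 uniquely parameterized references to a 1 MB object and is viewed (or crawled) 100 times.

# Back-of-the-envelope amplification estimate; all inputs are made up.
image_refs_per_note = 1000   # unique ?r= values embedded in the note
object_size_mb = 1           # hypothetical size of the target object
views = 100                  # users or crawler fetches rendering the note

origin_requests = image_refs_per_note * views
origin_traffic_gb = origin_requests * object_size_mb / 1024.0

print(origin_requests)              # 100000 requests, none served from cache
print(round(origin_traffic_gb, 1))  # ~97.7 GB pulled straight from the origin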

This is what makes it an exploit, not a vulnerability. There's nothing wrong with the behavior of these caches - they are working exactly as they were designed to act with respect to HTTP. The problem is that when the protocol and caching behavior were defined, such abusive behavior was not considered.

In other words, this is a protocol exploit not specific to Facebook (or Google). In fact, similar exploits have been used to launch attacks in the past. For example, consider some noise raised around WordPress in March 2014 that indicated it was being used to attack other sites by bypassing the cache and forcing a full reload from the origin server:

If you notice, all queries had a random value (like "?4137049=643182") that bypassed their cache and force a full page reload every single time. It was killing their server pretty quickly.

 

But the most interesting part is that all the requests were coming from valid and legitimate WordPress sites. Yes, other WordPress sites were sending that random requests at a very large scale and bringing the site down.

The WordPress exploit was taking advantage of the way "pingbacks" work. Attackers were using sites to add pingbacks to amplify an attack on a third party site (also, ironically, a WordPress site).

It's not just Facebook, or Google - it's inherent in the way caching is designed to work.

Not Just HTTP
This isn't just an issue with HTTP. We can see similar behavior in a DNS exploit that renders DNS caching ineffective as protection against certain attack types. In the DNS case, querying a cache with a random host name results in a query to the authoritative (origin) DNS service. If you send enough random host names at the cache, eventually the DNS service is going to feel the impact and possibly choke.
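
The same cache-miss logic can be sketched for DNS; assume a hypothetical resolver that caches answers keyed on the query name, and feed it random labels:

import random
import string

# Hypothetical resolver cache keyed on the query name (qname).
dns_cache = {}
authoritative_queries = 0

def resolve(qname):
    global authoritative_queries
    if qname not in dns_cache:
        authoritative_queries += 1       # miss -> query the authoritative (origin) service
        dns_cache[qname] = "192.0.2.10"  # placeholder answer (TEST-NET address)
    return dns_cache[qname]

for _ in range(10000):
    label = "".join(random.choices(string.ascii_lowercase, k=12))
    resolve(label + ".example.com")      # random host names never match the cache

print(authoritative_queries)             # ~10000 -- the cache provided no protection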

In general, these types of exploits are based on protocol and well-defined system behavior. A cache is, by design, required to either return a matching object if found or go to the origin server if it is not. In both the HTTP and DNS case, the caching services are acting properly and as one would expect.

The problem is that this proper behavior can be exploited to effect a DDoS attack - against third parties in the case of Facebook/Google and against the domain owner in the case of DNS.

These are not vulnerabilities, they are protocol exploits. This same "vulnerability" is probably present in most architectures that include caching. The difference is that Facebook's ginormous base of users allows for what is expected behavior to quickly turn into what looks like an attack.

Mitigating
The general consensus right now is the best way to mitigate this potential "attack" is to identify and either rate limit or disallow requests coming from Facebook's crawlers by IP address. In essence, the suggestion is to blacklist Facebook (and perhaps Google) to keep it from potentially overwhelming your site.

The author noted in his post regarding this exploit that:

Facebook crawler shows itself as facebookexternalhit. Right now it seems there is no other choice than to block it in order to avoid this nuisance.

The post was later updated to note that blocking by agent may not be enough, hence the consensus on IP-based blacklisting.
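
A hedged sketch of what that blacklisting might look like in application code (the user-agent substring comes from the post above; the IP addresses are placeholders, and a real deployment would enforce this at the proxy or CDN tier):

# Illustrative request filter: block Facebook's crawler by User-Agent
# and/or by a maintained list of crawler source IPs (both lists are
# examples only, not authoritative).
BLOCKED_AGENT_SUBSTRINGS = ("facebookexternalhit",)
BLOCKED_SOURCE_IPS = {"203.0.113.10", "203.0.113.11"}  # placeholder addresses

def should_block(client_ip, user_agent):
    if client_ip in BLOCKED_SOURCE_IPS:
        return True
    ua = user_agent.lower()
    return any(s in ua for s in BLOCKED_AGENT_SUBSTRINGS)

print(should_block("198.51.100.7", "facebookexternalhit/1.1"))  # True
print(should_block("198.51.100.7", "Mozilla/5.0"))              # False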

The problem is that attackers could simply find another site with a large user base (there are quite a few of them out there with the users to support a successful attack) and find the right mix of queries to bypass the cache (because caches are a pretty standard part of a web-scale infrastructure) and voila! Instant attack.

Blocking Facebook isn't going to stop other potential attacks, and it might seriously impede revenue-generating strategies that rely on Facebook as a channel. Rate limiting based on inbound query volume for specific content will help mitigate the impact (and ensure legitimate requests continue to be served), but it requires a service to intermediate and monitor inbound requests and, upon seeing behavior indicative of a potential attack, intercede or apply the appropriate rate-limiting policy. Such a policy could go further and blacklist IP addresses showing sudden increases in requests, or simply block requests for the URI in question and return some other content instead.
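
A rough sketch of that rate-limiting idea, keyed on the URI path with the query string stripped so randomized parameters can't dodge the counter (the window and threshold are arbitrary placeholders):

import time
from collections import defaultdict, deque
from urllib.parse import urlsplit

WINDOW_SECONDS = 10    # arbitrary sliding window
MAX_REQUESTS = 500     # arbitrary per-path threshold

recent = defaultdict(deque)  # path -> timestamps of recent requests

def allow_request(url):
    """Return False once a path (query string ignored) exceeds the threshold."""
    now = time.monotonic()
    path = urlsplit(url).path            # '?r=123' variations collapse to one key
    window = recent[path]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS:
        return False                     # over the limit: reject or serve alternate content
    window.append(now)
    return True

print(allow_request("http://target.com/file?r=1"))  # True until the path gets hot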

Another option would be to use a caching solution capable of managing dynamic content. For example, F5 Dynamic Caching includes the ability to designate parameters as either indicative of new content or not. That is, the caching service can be configured to ignore some (or all) parameters and serve content out of cache instead of hammering on the origin server.

Let's say the URI for an image was: /directory/images/dog.gif?ver=1;sz=728X90 where valid query parameters are "ver" (version) and "sz" (size). A policy can be configured to recognize "ver" as indicative of different content while all other query parameters indicate the same content and can be served out of cache. With this kind of policy an attacker could send any combination of the following and the same image would be served from cache, even though "sz" is different and there are random additional query parameters.

/directory/images/dog.gif?ver=1;sz=728X90; id=1234
/directory/images/dog.gif?ver=1;sz=728X900; id=123456
/directory/images/dog.gif?ver=1;sz=728X90; cid=1234 
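
A minimal sketch of that kind of parameter-aware cache key, assuming a policy where only "ver" distinguishes content (an illustration of the concept, not F5's implementation):

from urllib.parse import urlsplit

SIGNIFICANT_PARAMS = {"ver"}  # per policy, only 'ver' indicates different content

def parse_params(query):
    # The example URIs use ';' as the parameter separator.
    pairs = [p for p in query.replace(";", "&").split("&") if "=" in p]
    return dict(p.strip().split("=", 1) for p in pairs)

def cache_key(url):
    parts = urlsplit(url)
    params = parse_params(parts.query)
    kept = tuple(sorted((k, v) for k, v in params.items() if k in SIGNIFICANT_PARAMS))
    return (parts.path, kept)

urls = [
    "/directory/images/dog.gif?ver=1;sz=728X90;id=1234",
    "/directory/images/dog.gif?ver=1;sz=728X900;id=123456",
    "/directory/images/dog.gif?ver=1;sz=728X90;cid=1234",
]
print({cache_key(u) for u in urls})  # one key -- all three serve from the same cache entry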

By placing an application-fluent cache service in front of your origin servers, you're able to handle the load when Facebook (or Google) comes knocking.

Action Items
There have been no reports of an attack stemming from this exploitable condition in Facebook Notes or Google, so blacklisting crawlers from either Facebook or Google seems premature. Given that this condition is based on protocol behavior and system design, and is not a vulnerability unique to Facebook (or Google), it would be a good idea to have a plan in place to address such an attack should one actually occur - from there or from some other site.

You should review your own architecture and evaluate its ability to withstand a sudden influx of dynamic requests for content like this, and put into place an operational plan for dealing with it should such an event occur.

For more information on protecting against all types of DDoS attacks, check out a new infographic we’ve put together here.


