Next-Generation Content Delivery: Cloud Acceleration

Cloud acceleration essentially does the same thing for dynamic content that a CDN does for static content

It would be an understatement to say that this past decade belongs to the Internet. Starting primarily as a research tool, the Internet has now infiltrated every aspect of life - there is very little today that users do not, or cannot, do online. Moreover, new ways to leverage the Internet to personal and professional advantage arise every day.

Inevitably, all this progress has had an insidious side effect: user expectations for website performance have skyrocketed over the years. People expect websites, video and audio to load faster than ever before; otherwise, they lose interest and go elsewhere. Research firms have ample findings to support this correlation. A 2009 ResearchLink survey found that 26 percent of respondents would move to a competitor's website if a vendor's website failed to perform, resulting in an immediate revenue loss of 26 percent and a future loss of 15 percent. Forrester Research also found that 36 percent of unique visitors will leave a website if it fails to load within the first three seconds. Three seconds - that's not a lot of time. Yet user expectations are warranted, given how much progress content delivery technology has made in the past few years. Couple these demands with an architecture that is not fit to deliver the performance users expect, and what we have on our hands is a big problem for companies whose business thrives on web content and e-commerce.

But as always, the IT community of a decade ago conferred and found a solution. Content Delivery Network (CDN) companies invested a great deal of time and money in a technique that is still in use today: storing content as close to the end user as possible, known as edge caching. It lets users access cached copies of web pages and applications for faster, easier access. Some of the more sophisticated CDNs have gone a step further, developing proprietary algorithms and massive distributed networks that proactively identify trouble spots on the public Internet and reroute content around them. While this rerouting allows websites to deliver asymmetrical traffic somewhat more efficiently, applications like streaming video and audio, and even software downloads, are still cached at the edge of the network.
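
To make edge caching concrete, here is a minimal sketch in Python. The `EdgeCache` class and `origin_fetch` callable are illustrative inventions, not any particular CDN's API: a node keeps a local copy of each origin object and serves it until a time-to-live (TTL) expires.

```python
import time

class EdgeCache:
    """Minimal illustration of edge caching (hypothetical API)."""

    def __init__(self, origin_fetch, ttl_seconds=300):
        self.origin_fetch = origin_fetch  # callable that retrieves content from the origin
        self.ttl = ttl_seconds
        self.store = {}                   # url -> (expires_at, content)

    def get(self, url):
        entry = self.store.get(url)
        if entry and entry[0] > time.time():
            return entry[1]               # cache hit: served from the edge, no origin trip
        content = self.origin_fetch(url)  # cache miss: full round trip to the origin
        self.store[url] = (time.time() + self.ttl, content)
        return content

    def invalidate(self, url):
        self.store.pop(url, None)         # scripted update: next request refreshes the copy
```

The scripted updates mentioned above amount to calling something like `invalidate` whenever the origin content changes, so the copies sitting at the edge stay current.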

Clearly, this technique is effective for content up to a certain size, but it is not enough to meet the high-throughput demands created by today's growing business reliance on data and by larger residential Internet connections spanning great distances. Edge caching is best suited to static content, not the dynamic, rich content we see today: static content changes infrequently and can easily be stored on low-cost disks in a multitude of locations around the Internet. Even if it does change fairly often, it is easy to script updates to ensure that the copies sitting at the edge stay current. But the reality is that today, in 2010, static content forms an ever-smaller percentage of all the content that needs to be transferred. The need of the hour - moving dynamic content with the same speed and ease - is as yet unfulfilled. CDNs and their edge-caching capabilities are far less successful with dynamic content, which by its very nature cannot simply be thrown onto the edge of a network, given its size and transience: content that is live at this moment may not exist two seconds from now. What's more, this category includes most of what we use today: VoIP, FTP, live video and so on.

The question then remains: How do content providers ensure that end users (both business and consumers) experience the same ease of access they did a decade ago, but with the dynamic content they want to transfer today?

Enter the CDN's newer, more sophisticated cousin - cloud acceleration - which does what CDNs do, but faster and with far better handling of dynamic content. Cloud acceleration suits dynamic content precisely because it does not rely on edge caching - in fact, it works best without it. It is also more cost-effective, since users are not paying for a decade's worth of infrastructure designed and built out to enhance edge-caching capabilities. And last, but not least, it can fight common Internet problems not only by routing around them, but by actually fixing the core problems of long-distance networks. There is definitely something to be said for a solution that addresses the real issue, performs better, costs less and results in happy website visitors and increased revenue.

But how does cloud acceleration work its magic in the first place?

For starters, as previously mentioned, cloud acceleration doesn't rely on edge caching. Instead, it optimizes the entire delivery path over a network managed by the service provider. Content is delivered directly from origin servers to the end user at the same level of performance as if the two were in the same building. How is that better? For one, the most significant portion of the delivery path is, surprisingly, not the public Internet but a high-performance, private, all-optical network designed for speed. The cloud acceleration provider is in full control of traffic and congestion on that network, and therefore controls Quality of Service (QoS). That control also adds an element of security to the data's entire journey. Most important in this context, cloud acceleration providers no longer have to rely on third-party Internet providers and the algorithmic rerouting calculations common to CDNs to work around problems such as latency, jitter and packet loss. They are in a position to actually fix them.

For more clarity, let's revisit how CDNs function. We can logically break the path between the content origin and the end user into three distinct "miles." The first mile is the connection between the origin server and a backbone - e.g., a T1, DSx, OCx or Ethernet connection to the Internet. The middle mile is the backbone that covers the majority of the distance, traversing one or more interconnected carrier backbones. Finally, the last mile is the end user's connection, such as DSL, cable or wireless.
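
As a rough mental model of this decomposition (the delay figures below are illustrative, not measurements), the end-to-end delay is simply the sum of the three miles, and over long distances the middle mile dominates:

```python
# Illustrative one-way delays for a transcontinental request, in milliseconds.
miles_ms = {
    "first mile (origin server to backbone)": 5,
    "middle mile (carrier backbone transit)": 70,
    "last mile (end user's DSL/cable/wireless)": 15,
}

total = sum(miles_ms.values())
print(f"end-to-end one-way delay: {total} ms")  # 90 ms; the middle mile dominates
```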

Simply put, CDNs that rely on caching frequently requested objects at the edge of the network are designed to avoid all three of these "miles" as much as possible. Because an increase in distance always means increased latency and often greater packet loss, it is best to place as many copies of an object as close to end users as possible. Well-designed caching CDNs do this fairly well by placing object-caching servers inside the last-mile provider's network. Caching CDNs that do not have the luxury of placing servers within the last-mile network place them at key Internet peering points instead. While not as close to the end user, this is still a fairly effective way of avoiding these problems.
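
The claim that distance hurts is easy to quantify. A back-of-the-envelope sketch using the well-known Mathis et al. approximation for a single standard TCP flow (throughput ≲ MSS / (RTT × √loss)) shows why serving from a nearby cache multiplies achievable throughput; the loss rate and RTTs below are illustrative:

```python
from math import sqrt

def tcp_throughput_mbps(mss_bytes, rtt_ms, loss_rate):
    """Mathis et al. approximation: a coarse upper bound on the
    throughput of one standard TCP flow given RTT and packet loss."""
    rtt_s = rtt_ms / 1000.0
    return (mss_bytes * 8) / (rtt_s * sqrt(loss_rate)) / 1e6

# Same 0.1% loss; origin 120 ms away vs. an edge cache 20 ms away.
print(round(tcp_throughput_mbps(1460, 120, 0.001), 1))  # ~3.1 Mbps from the distant origin
print(round(tcp_throughput_mbps(1460, 20, 0.001), 1))   # ~18.5 Mbps from the nearby edge
```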

The more sophisticated CDNs that also attempt to choose alternate Internet paths still suffer because they inherently rely on the Internet to get from point A to point B. Since they don't own the network, and therefore have no ultimate control over any "mile" of the route, they are at the mercy of the Internet. Providers that do own a network attempt to inject QoS by using multiprotocol label switching (MPLS), but are ultimately still at the mercy of the effects of latency, jitter and packet loss over longer distances.

With the CDN model established, how does a cloud acceleration provider do things differently? First, it is important to understand that the overall objective is still similar to a traditional CDN's: minimize the amount of public Internet used to move content from the origin to the end user. The more Internet travel that can be avoided, the better the end user's web performance. The goal of acceleration, however, is to accomplish this without caching at the edge, because the dynamic content of the future will not be cacheable. In fact, much of what we view today as dynamic content requires a persistent connection between the origin and the end user, which is achieved through the following three steps:

Step one involves opening a connection to the origin server over the first mile so the data stream can be brought onto the accelerated network as quickly as possible and the optimization process can begin. Installing an optional appliance at the origin data center starts the optimization even sooner. The cloud acceleration provider should have multiple origin capture nodes around the world, or at least close to its customers' origins; coupled with routing algorithms, these pull content onto the network as close to the origin as possible.
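
One simple way to pull content onto the network close to the origin is to measure round-trip time from the origin to each capture node and pick the lowest; real providers use routing algorithms and anycast, so treat this as a sketch, and note that the node hostnames are hypothetical:

```python
import socket
import time

def rtt_ms(host, port=443, timeout=2.0):
    """Rough RTT estimate: time a TCP handshake to the node."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            pass
    except OSError:
        return float("inf")  # unreachable node is never chosen
    return (time.monotonic() - start) * 1000

# Hypothetical capture-node hostnames; a real provider publishes its own list.
nodes = ["capture-nyc.example.net", "capture-lon.example.net", "capture-tok.example.net"]
print("ingress via", min(nodes, key=rtt_ms))
```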

Step two involves hauling the content over the highly engineered private network. Because this middle mile is the longest portion of the trip, it is where the bulk of the data-stream optimization happens. In addition to the fully meshed MPLS-TE network, infrastructure similar to a WAN optimizer at the origin capture node opens a tunnel across the service provider's entire private backbone to an identical device at the edge node near the end user. These devices constantly talk to each other, optimizing flow to ensure maximum throughput with window scaling, selective acknowledgment, round-trip measurement and congestion control. Packet-level forward error correction reconstitutes lost packets at the edge node, avoiding the delays of multiple round-trip retransmissions. Packets are also resequenced at the edge node using packet order correction, avoiding the retransmissions that occur when packets arrive out of order. Byte-level data deduplication eliminates retransmission of identical bytes that can instead be recreated at the edge, and multiplexing minimizes unnecessary chatter and further compresses data as it traverses the tunnel.
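
Of these optimizations, packet-level forward error correction is the easiest to illustrate. In the toy scheme below (invented for illustration; production schemes pad and interleave), the sender adds one XOR parity packet per group, and the edge node can rebuild any single lost packet in the group without waiting a full round trip for a retransmission:

```python
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(packets):
    """Parity packet = XOR of all data packets (assumes equal lengths)."""
    return reduce(xor_bytes, packets)

def recover(received, parity):
    """Rebuild the single missing packet (marked None) without a retransmission."""
    i = received.index(None)
    present = [p for p in received if p is not None]
    received[i] = reduce(xor_bytes, present, parity)  # XOR cancels the survivors
    return received

group = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]
parity = make_parity(group)
assert recover([b"pkt0", None, b"pkt2", b"pkt3"], parity) == group
```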

Step three involves taking advantage of direct peering with eyeball networks - the last mile - so the content can be dropped back onto the Internet just before it reaches the end user. Because you can't expect users to install software or hardware appliances in their homes or on their devices, placing nodes close to the end user is critical to the success of the process. Generally, if a node sits within 5 to 10 ms of the end user, the experience will still feel like a LAN. Placing content inside the eyeball, or last-mile, network also ensures that delivery will not be affected by congestion at the ISP's Internet drain during peak usage, a common problem.
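
The 5 to 10 ms figure is a practical rule of thumb rather than a hard limit; a placement test can be as simple as checking measured RTTs from a candidate edge node against that budget (the threshold and samples below are illustrative):

```python
LAN_FEEL_BUDGET_MS = 10  # rule of thumb from above: 5-10 ms still feels like a LAN

def feels_like_lan(rtt_samples_ms):
    """Judge a candidate edge node by its worst observed RTT to the user."""
    return max(rtt_samples_ms) <= LAN_FEEL_BUDGET_MS

print(feels_like_lan([4.2, 6.8, 9.1]))  # True: inside the budget
print(feels_like_lan([12.5, 30.0]))     # False: node is too far from this user
```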

Through these three steps, cloud acceleration essentially does the same thing for dynamic content that a CDN does for static content - places it right in the user's lap. With a continuously open data stream equivalent to a super highway, it is now possible to optimize VoIP, live video, interactive e-media, file transfer applications like FTP, CIFS and NFS, and any future technologies and content that rely on rapid Internet performance.

More Stories By Jonathan Hoppe

Jonathan Hoppe is President & CTO of Cloud Leverage. He has 15 years of technology experience in application development, Internet, networks and enterprise management systems. As president & CTO, he sets the long-term technology strategy of the company, acts as the technical liaison to partners, representatives and vendors, oversees large enterprise-level projects and is the chief architect for all e-business solutions, software applications and platforms. Additionally, Jonathan leads the architecture, operation, networking and telecom for each globally positioned data center as well as the Network Operations Center.

Prior to heading Cloud Leverage, Jonathan held various positions including president and CEO, CTO and senior applications developer for various e-business solution providers in Canada and the United States.
