Next-Generation Content Delivery: Cloud Acceleration

Cloud acceleration essentially does the same thing for dynamic content that a CDN does for static content

It would be an understatement to say that this past decade belongs to the Internet. Starting primarily as a research tool, the Internet has now infiltrated every aspect of life - there is very little today that users do not, or cannot, do online. Moreover, new ways to leverage the Internet to personal and professional advantage arise every day.

All this progress has had an insidious side effect: user expectations for website performance have skyrocketed. People expect websites, video and audio to load faster than ever before; otherwise, they lose interest and move on to other sites. Research firms have ample findings to support this. A 2009 ResearchLink survey found that 26 percent of respondents would move to a competitor's website if a vendor's site failed to perform, translating into an immediate revenue loss of 26 percent and a future loss of 15 percent. Forrester Research likewise found that 36 percent of unique visitors will abandon a website if it fails to load within the first three seconds. Three seconds - that's not a lot of time. Yet these expectations are warranted, given how much progress content delivery technology has made in the past few years. Couple these demands with an architecture that is not fit to deliver the performance users expect, and the result is a big problem for companies whose business thrives on web content and e-commerce.

But as always, the IT industry of a decade ago found a solution. Content Delivery Network (CDN) companies invested a great deal of time and money in an approach that is still in use today: storing content as close to the end user as possible, a technique known as edge caching. It lets users access cached copies of web pages and applications for faster, easier access. Beyond edge caching, some of the more sophisticated CDNs went a step further, developing unique algorithms and massive distributed networks to proactively identify trouble spots on the public Internet and reroute content around them. While this additional technique lets websites deliver asymmetrical traffic a little more efficiently, applications such as streaming video and audio, and even software downloads, are still cached at the edge of the network.
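
To make the idea concrete, here is a minimal sketch of what an edge-caching server does, written in Python. The five-minute TTL and the in-memory dictionary are illustrative assumptions; production CDN caches use purpose-built servers, disk-backed stores and full HTTP cache-control semantics.

```python
import time
import urllib.request

CACHE_TTL_SECONDS = 300   # illustrative: treat a cached copy as fresh for five minutes
_cache = {}               # url -> (fetched_at, body)

def edge_get(url):
    """Serve from the edge cache when fresh; otherwise fall back to the origin."""
    entry = _cache.get(url)
    if entry and time.time() - entry[0] < CACHE_TTL_SECONDS:
        return entry[1]                           # cache hit: served near the user
    with urllib.request.urlopen(url) as resp:     # cache miss: full trip to the origin
        body = resp.read()
    _cache[url] = (time.time(), body)
    return body

# e.g. edge_get("https://origin.example.com/css/site.css")
```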

Clearly, this technique is effective for content up to a certain size, but it is not enough to meet the high throughput demands created by today's growing business reliance on data and by larger residential Internet connections over great distances. Edge caching is best suited to static content, not the dynamic, rich content we see today: static content doesn't change very often and can easily be stored on low-cost disks in a multitude of locations around the Internet. Even when it does change fairly frequently, it is easy to script updates to ensure that the copies sitting at the edge stay current. But the reality is that today, in 2010, static content forms an increasingly small percentage of all the content that needs to be transferred. The need of the hour is to move dynamic content with the same speed and ease - and that need is as yet unfulfilled. CDNs and their edge-caching capabilities are nowhere near as successful with dynamic content, which by its very nature cannot simply be thrown onto the edge of a network because of its size and its constantly changing makeup. Dynamic content may be live at this moment and gone two seconds from now. What's more, this category includes most of what we use today: VoIP, FTP, live video and so on.
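
As a rough illustration of how such static-content updates can be scripted, the sketch below posts a purge request to a hypothetical CDN API endpoint (cdn.example.com/purge) so stale edge copies are refetched from the origin. Real providers expose their own purge or invalidation APIs with different URLs, payloads and authentication.

```python
import json
import urllib.request

PURGE_ENDPOINT = "https://cdn.example.com/purge"   # hypothetical purge API for illustration

def purge_edge_copies(paths, api_token):
    """Ask the CDN to drop stale edge copies so the next request refetches from the origin."""
    payload = json.dumps({"paths": paths}).encode("utf-8")
    req = urllib.request.Request(
        PURGE_ENDPOINT,
        data=payload,
        headers={"Authorization": "Bearer " + api_token,
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status == 200

# e.g. after publishing a new stylesheet:
# purge_edge_copies(["/css/site.css"], api_token="...")
```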

The question then remains: How do content providers ensure that end users (both business and consumers) experience the same ease of access they did a decade ago, but with the dynamic content they want to transfer today?

Enter the CDN's newer, more sophisticated cousin - cloud acceleration - which does what a CDN does, but faster and with far better handling of dynamic content. Cloud acceleration is best suited to dynamic content because it does not rely on edge caching - in fact, it works best without it. It is also more cost-effective, since users are not paying for a decade's worth of infrastructure designed and built out to enhance edge-caching capabilities. And last, but not least, it can fight common Internet problems not only by routing around them, but by actually fixing the core problems of long-distance networks altogether. There's definitely something to be said for a solution that addresses the real issue, performs better, costs less and results in happy website visitors and increased revenue.

But how does cloud acceleration work its magic in the first place?

For starters, as previously mentioned, cloud acceleration doesn't rely on edge caching. Instead, it optimizes the entire delivery path over a network managed by the service provider. Content is therefore delivered directly from origin servers to the end user at the same level of performance as if they were in the same building. How is that better? For one, the most significant portion of the delivery path is, perhaps surprisingly, not the public Internet but a high-performance, private, 100 percent optical network designed for speed. The cloud acceleration provider is in full control of traffic and congestion on that network, and therefore controls Quality of Service (QoS). That control also adds an element of security to the data's entire journey. Most important in this context, though, is that cloud acceleration providers no longer have to rely on third-party Internet providers and the algorithmic rerouting calculations common to CDNs to work around problems such as latency, jitter and packet loss. They are now in a position to actually fix them.
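
A back-of-the-envelope calculation shows why controlling latency and packet loss matters so much. The widely cited Mathis approximation bounds steady-state TCP throughput by MSS / (RTT x sqrt(loss)); the figures below are illustrative, not measurements of any particular network.

```python
from math import sqrt

def tcp_throughput_mbps(mss_bytes, rtt_ms, loss_rate):
    """Approximate achievable TCP throughput (Mathis approximation) in megabits per second."""
    rtt_s = rtt_ms / 1000.0
    return (mss_bytes * 8) / (rtt_s * sqrt(loss_rate)) / 1e6

# A long-haul transfer over the congested public Internet (1% packet loss)...
print(tcp_throughput_mbps(1460, rtt_ms=150, loss_rate=0.01))     # roughly 0.8 Mbps
# ...versus the same distance over a managed backbone with minimal loss (0.01%).
print(tcp_throughput_mbps(1460, rtt_ms=150, loss_rate=0.0001))   # roughly 7.8 Mbps
```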

For more clarity, let's revisit how CDNs function. The path between the content origin and the end user requesting it can be broken logically into three distinct "miles." The first mile is the distance between the origin server and a backbone - e.g., a T1, DSx, OCx or Ethernet connection to the Internet. The middle mile is the backbone that covers the majority of the distance over one or more interconnected carrier backbones. Finally, the last mile is the end user's own connection, such as DSL, cable or wireless.

Simply put, CDNs that rely on caching frequently requested objects at the edge of the network are designed to avoid all three of these "miles" as much as possible. Because an increase in distance always means increased latency and often greater packet loss, it's best to place copies of objects as close to end users as possible. Well-designed caching CDNs do this fairly well by placing object-caching servers within the last-mile provider's network. Caching CDNs that do not have the luxury of placing servers within the last-mile provider's network place them at key Internet peering points instead. While not as close to the end user, this is still a fairly effective way of avoiding those problems.
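
The distance-to-latency relationship is easy to quantify: signals in optical fiber propagate at roughly 200,000 km/s, so distance alone sets a floor on round-trip time before any routing or queuing overhead is added. The sketch below, using illustrative distances, shows why a cache inside the last-mile network answers so much faster than a distant origin.

```python
FIBER_KM_PER_MS = 200.0   # light in fiber covers roughly 200 km per millisecond

def min_rtt_ms(distance_km):
    """Lower bound on round-trip time over fiber for a given one-way distance."""
    return 2 * distance_km / FIBER_KM_PER_MS

print(min_rtt_ms(50))      # cache inside the last-mile network: ~0.5 ms
print(min_rtt_ms(4000))    # coast-to-coast origin: ~40 ms before any overhead
print(min_rtt_ms(12000))   # intercontinental origin: ~120 ms
```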

The more sophisticated CDNs that also attempt to choose alternate Internet paths still suffer because they inherently rely on the Internet to get from point A to point B. Since they don't own the network, and therefore have no ultimate control over any "mile" of the route, they are at the mercy of the Internet. Providers that do own a network attempt to inject QoS by using multiprotocol label switching (MPLS), but are ultimately still at the mercy of the effects of latency, jitter and packet loss over longer distances.

With the CDN model established, how does a cloud acceleration service provider do things differently? First, it is important to understand that the overall objective is still similar to a traditional CDN's: minimize the amount of public Internet used to move content from the origin to the end user. The more Internet travel that can be avoided, the better the end user's web performance. The goal of acceleration, however, is to accomplish this without caching at the edge, because ideally dynamic content will never be cached. In fact, much of what we view today as dynamic content requires a persistent connection between the origin and the end user, which is achieved through the following three steps:

Step one involves opening a connection to the origin server over the first mile so the data stream can be brought onto the accelerated network as quickly as possible and the optimization process can begin. Installing an optional appliance at the origin data center starts that optimization even sooner. The cloud acceleration provider should have multiple origin capture nodes around the world, or at least close to its customers' origins. This, coupled with routing algorithms, pulls content onto the network as close to the origin as possible.
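
As a simplified sketch of that first-mile decision, a provider might time a TCP handshake to each candidate capture node and bring traffic onto the accelerated network at whichever node answers fastest. The hostnames below are hypothetical, and real providers draw on routing data well beyond a single handshake measurement.

```python
import socket
import time

def connect_time_ms(host, port=443, timeout=2.0):
    """Rough first-mile RTT estimate: how long a TCP handshake to the node takes."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        return (time.monotonic() - start) * 1000.0

def nearest_capture_node(candidates):
    """Pick the capture node with the lowest measured handshake time."""
    return min(candidates, key=connect_time_ms)

# e.g. nearest_capture_node(["capture-nyc.example.net", "capture-chi.example.net"])
```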

Step two involves hauling the content over the highly engineered private network. Because this middle mile is the longest portion of the trip, it is where the bulk of the data-stream optimization happens. In addition to running a fully meshed MPLS-TE network, the provider deploys infrastructure similar to a WAN optimizer at the origin capture node, which opens a tunnel across the provider's entire private backbone to an identical device at the edge node near the end user. These devices talk to each other constantly, optimizing flow to ensure maximum throughput with window scaling, selective acknowledgement, round-trip measurement and congestion control. Packet-level forward error correction is an important feature that reconstitutes lost packets at the edge node, avoiding the delays of multiple round-trip retransmissions. Packets are also resequenced at the edge node using packet order correction, avoiding the retransmissions that occur when packets arrive out of order. Byte-level data deduplication eliminates retransmission of identical bytes that can instead be reconstructed at the edge, and multiplexing minimizes unnecessary chatter and further compresses data as it traverses the tunnel.
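
To illustrate just one of these techniques, the sketch below shows packet-level forward error correction in its simplest form: an XOR parity packet sent with every group of data packets lets the edge node rebuild a single lost packet without waiting a full round trip for retransmission. Production WAN optimizers use far more sophisticated coding; this is only a toy model.

```python
from functools import reduce

def xor_packets(packets):
    """XOR a group of equal-length packets together byte by byte."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

def recover_missing(survivors, parity):
    """Rebuild the single missing packet from the surviving packets plus the parity."""
    return xor_packets(survivors + [parity])

group = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]    # four data packets in one FEC group
parity = xor_packets(group)                     # one extra parity packet on the wire
survivors = [group[0], group[1], group[3]]      # the third packet is lost in transit
print(recover_missing(survivors, parity) == group[2])   # True: rebuilt without a retransmit
```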

Step three involves taking advantage of direct peering with eyeball networks - the last mile - so the content can be dropped back onto the Internet just before it reaches the end user. Because you can't expect users to install software or hardware appliances in their homes or on their devices, placing nodes close to the end user is critical to the success of the process. Generally, if a node sits within 5 to 10 ms of the end user, the experience will still feel like a LAN. Furthermore, placing content inside the eyeball, or last-mile, network ensures that delivery is not affected by congestion at the ISP's Internet drain during peak usage, which is a common problem.
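
That last-mile placement decision can be pictured as a simple threshold check: given measured round-trip times from candidate edge nodes to the end user's network, prefer a node inside the 5 to 10 ms window described above. The node names and RTT figures here are purely illustrative.

```python
def pick_edge_node(rtts_ms, lan_feel_threshold_ms=10.0):
    """Return the closest node, its RTT, and whether it meets the LAN-like target."""
    node, rtt = min(rtts_ms.items(), key=lambda kv: kv[1])
    return node, rtt, rtt <= lan_feel_threshold_ms

measured = {"edge-sjc": 6.2, "edge-lax": 14.8, "edge-sea": 21.5}   # illustrative RTTs in ms
print(pick_edge_node(measured))   # ('edge-sjc', 6.2, True)
```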

Through these three steps, cloud acceleration essentially does the same thing for dynamic content that a CDN does for static content - places it right in the user's lap. With a continuously open data stream equivalent to a superhighway, it is now possible to optimize VoIP, live video, interactive e-media, file-transfer applications such as FTP, CIFS and NFS, and any new technologies and content that come to rely on rapid Internet performance in the future.

More Stories By Jonathan Hoppe

Jonathan Hoppe is President & CTO of Cloud Leverage. He has 15 years of technology experience in application development, Internet, networks and enterprise management systems. As president & CTO, he sets the long-term technology strategy of the company, acts as the technical liaison to partners, representatives and vendors, oversees large enterprise-level projects and is the chief architect for all e-business solutions, software applications and platforms. Additionally, Jonathan leads the architecture, operation, networking and telecom for each globally positioned data center as well as the Network Operations Center.

Prior to heading Cloud Leverage, Jonathan held various positions including president and CEO, CTO and senior applications developer for various e-business solution providers in Canada and the United States.
