Next-Generation Content Delivery: Cloud Acceleration

Cloud acceleration essentially does the same thing for dynamic content that a CDN does for static content

It would be an understatement to say that this past decade belongs to the Internet. Starting primarily as a research tool, the Internet has now infiltrated every aspect of life - there is very little today that users do not, or cannot, do online. Moreover, new ways to leverage the Internet to personal and professional advantage arise every day.

Inevitably, all this progress has had an insidious side effect: user expectations of website performance have skyrocketed over the years. People expect websites, video and audio to load faster than ever before; otherwise they lose interest and go elsewhere. Research firms have ample findings to support this. A 2009 ResearchLink survey found that 26 percent of respondents would move to a competitor's website if a vendor's website failed to perform, resulting in an immediate revenue loss of 26 percent and a future loss of 15 percent. Forrester Research also found that 36 percent of unique visitors will leave a website if it fails to load within the first three seconds. Three seconds - that's not a lot of time. Yet these expectations are warranted, given how much progress content delivery technology has made in the past few years. Couple these demands with an architecture that is not fit to deliver that kind of performance, and the result is a big problem for companies whose business thrives on web content and e-commerce.

But, as always, the IT people of a decade ago conferred and found a solution. Content Delivery Network (CDN) companies invested a lot of time and money into an approach that is still in use today: storing content as close to the end user as possible, a technique known as edge caching. It serves users cached copies of web pages and applications for faster, easier access. Some of the more sophisticated CDNs have gone a step further, developing proprietary algorithms and massive distributed networks that proactively identify trouble spots on the public Internet and reroute content around them. While this rerouting lets websites deliver asymmetrical traffic a little more efficiently, applications like streaming video and audio, and even software downloads, are still cached at the edge of the network.
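To make the edge-caching idea concrete, here is a minimal sketch in Python. It is purely illustrative - the cache, TTL and origin fetch are hypothetical stand-ins, not any particular CDN's API - but it shows the basic bargain: a cache hit avoids the long trip back to the origin, a miss pays for it.

```python
import time

# Hypothetical in-memory edge cache: {url: (content, expiry_timestamp)}
edge_cache = {}

def fetch_from_origin(url):
    # Placeholder for a real HTTP request back to the origin server.
    return f"<content of {url}>"

def serve(url, ttl_seconds=300):
    """Serve from the edge cache when fresh; otherwise pull from origin and cache it."""
    now = time.time()
    cached = edge_cache.get(url)
    if cached and cached[1] > now:
        return cached[0]                      # cache hit: no trip back to the origin
    content = fetch_from_origin(url)          # cache miss: full first/middle/last-mile trip
    edge_cache[url] = (content, now + ttl_seconds)
    return content
```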

Clearly, this technique is effective for content up to a certain size, but it is not enough to meet the high-throughput demands created by today's growing business reliance on data and larger residential Internet connections over great distances. Edge caching is best suited for static content, not the dynamic, rich content we see today, since static content doesn't change very often and can easily be stored on low-cost disks in a multitude of locations around the Internet. Even if it does change fairly frequently, it is easy to script these updates to ensure that copies sitting at the edge stay up to date. But the reality is that today, in 2010, static content makes up an ever-smaller percentage of all the content that needs to be transferred. The need of the hour - transferring dynamic content with the same speed and ease - is as yet unfulfilled. CDNs and their edge-caching capabilities are not nearly as successful with dynamic content because, by its very nature, it cannot simply be thrown onto the edge of a network; its size and constantly changing composition get in the way. Dynamic content that is live at this moment may not exist two seconds from now. What's more, this category includes most of what we use today: VoIP, FTP, live video and so on.
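"Scripting these updates" usually amounts to telling every edge location to drop its stale copy. The sketch below assumes a hypothetical list of edge hosts and an HTTP PURGE-style invalidation, which many caches support in some form; the exact mechanism varies by provider, so treat this only as a shape of the automation, not a real API.

```python
import requests

# Hypothetical edge node hosts; real CDNs expose their own purge/invalidation APIs.
EDGE_NODES = ["edge-us-east.example.net", "edge-eu-west.example.net"]

def purge_static_object(path):
    """Ask each edge location to drop its cached copy so the next request refetches it."""
    for node in EDGE_NODES:
        try:
            resp = requests.request("PURGE", f"https://{node}{path}", timeout=5)
            print(node, resp.status_code)
        except requests.RequestException as exc:
            print(node, "purge failed:", exc)

# Example: run after publishing a new version of a static asset.
# purge_static_object("/images/logo.png")
```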

The question then remains: How do content providers ensure that end users (both business and consumers) experience the same ease of access they did a decade ago, but with the dynamic content they want to transfer today?

Enter the CDN's newer, more sophisticated cousin - cloud acceleration - which does what CDNs do, but faster and with better handling of dynamic content. Cloud acceleration is best suited for dynamic content because it does not rely on edge caching - in fact, it works best without edge caching. In addition, it is more cost-effective, as users are not paying for a decade's worth of infrastructure designed and built out to enhance edge-caching capabilities. And last, but not least, it can fight common Internet problems not only by routing around them, but by actually fixing the core problems of long-distance networks altogether. There's something to be said for a solution that addresses the real issue, performs better, costs less and results in happy website visitors and increased revenue.

But how does cloud acceleration work its magic in the first place?

For starters, as previously mentioned, cloud acceleration doesn't rely on edge caching. Instead, it optimizes the entire delivery path over a network managed by the service provider. Content is delivered directly from origin servers to the end user at the same level of performance as if they were in the same building. How is that better? For one, the most significant portion of the delivery path is, surprisingly, not the public Internet but a high-performance, private, 100 percent optical network designed for speed. The cloud acceleration service provider is in full control of traffic and congestion on that network, and therefore controls Quality of Service (QoS). That also adds an element of security to the entire journey the data takes. Most important in this context, cloud acceleration providers no longer have to rely on third-party Internet providers and work around common Internet problems such as latency, jitter and packet loss with the algorithmic rerouting calculations common to CDNs. They are now in a position to actually fix them.

For more clarity, let's revisit how CDNs function. The path between the content origin and the end user requesting it can be broken into three distinct "miles." The first mile is the distance between the origin server and a backbone - e.g., a T1, DSx, OCx or Ethernet connection to the Internet. The middle mile is the backbone itself, which covers the majority of the distance over one or more interconnected carrier backbones. Finally, the last mile is the end user's connection, such as DSL, cable or wireless.
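A toy model makes the three miles easier to picture. The numbers below are illustrative only, but they show why the middle mile - the long-haul portion - dominates the round trip and is therefore the main target for optimization.

```python
# Illustrative one-way latency figures in milliseconds; real values vary widely.
first_mile_ms = 5     # origin server to its Internet backbone connection
middle_mile_ms = 70   # across one or more carrier backbones (e.g., cross-continent)
last_mile_ms = 15     # ISP access network (DSL, cable, wireless) to the end user

one_way_ms = first_mile_ms + middle_mile_ms + last_mile_ms
round_trip_ms = 2 * one_way_ms
print(f"Round trip: {round_trip_ms} ms")  # 180 ms for this example route
```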

Simply put, CDNs that rely on caching frequently requested objects at the edge of the network are designed to avoid all three of these "miles" as much as possible. Because an increase in distance always means increased latency and often greater packet loss, it's best to place copies of objects as close to end users as possible. Well-designed caching CDNs do this fairly well by placing object caching servers within the last-mile provider's network. Caching CDNs that do not have the luxury of placing servers inside the last-mile network place them at key Internet peering points instead. While not as close to the end user, this is still a fairly effective approach to avoiding problems altogether.
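At its simplest, a caching CDN's request routing just steers each user to the nearest copy. The sketch below uses a hypothetical set of edge locations and made-up RTT measurements to show that decision; real CDNs use DNS, anycast and far richer telemetry, so this is only the core idea.

```python
# Hypothetical RTT measurements (ms) from a user's resolver to candidate edge locations.
measured_rtt_ms = {
    "edge-newyork": 12,
    "edge-chicago": 28,
    "edge-frankfurt": 95,
}

def pick_edge(rtts):
    """Return the edge location with the lowest measured RTT to the user."""
    return min(rtts, key=rtts.get)

print(pick_edge(measured_rtt_ms))  # -> "edge-newyork"
```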

The more sophisticated CDNs that also attempt to choose alternate Internet paths still suffer, because they inherently rely on the Internet to get from point A to point B. Since they don't own the network, and therefore have no ultimate control over any "mile" of the route, they are at the mercy of the Internet. Providers that do own a network attempt to inject QoS by using Multiprotocol Label Switching (MPLS), but ultimately still suffer the effects of latency, jitter and packet loss over longer distances.
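The penalty described above can be approximated with the well-known Mathis model for steady-state TCP throughput, roughly MSS / (RTT x sqrt(loss)). The sketch below plugs in illustrative numbers to show how quickly distance-driven RTT, combined with even modest loss, erodes throughput no matter how much bandwidth is provisioned.

```python
import math

def tcp_throughput_mbps(mss_bytes=1460, rtt_ms=100, loss_rate=0.001):
    """Approximate steady-state TCP throughput using the Mathis model:
    throughput ~ MSS / (RTT * sqrt(loss))."""
    rtt_s = rtt_ms / 1000.0
    bytes_per_sec = mss_bytes / (rtt_s * math.sqrt(loss_rate))
    return bytes_per_sec * 8 / 1_000_000

# Same 0.1% loss rate, increasing distance: throughput collapses as RTT grows.
for rtt in (20, 100, 250):
    print(f"RTT {rtt:3d} ms -> ~{tcp_throughput_mbps(rtt_ms=rtt):.1f} Mbps")
```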

With the CDN model established, how does a cloud acceleration service provider do things differently? First, it is important to understand that the overall objective is the same as a traditional CDN's: minimize the amount of public Internet used to move content from the origin to the end user. The more Internet travel that can be avoided, the better the end-user web performance. The goal of acceleration, however, is to accomplish this without caching at the edge, because, ideally, future dynamic content will not be cached at all. In fact, much of what we view today as dynamic content requires a persistent connection between the origin and the end user, which is achieved through the following three steps:

Step one involves opening a connection to the origin server over the first mile so the data stream can be brought onto the accelerated network as quickly as possible and optimization can begin. An optional origin appliance starts that optimization right in the origin data center, even sooner. The cloud acceleration service provider should have multiple origin capture nodes around the world, or at least close to its customers' origins. These nodes, coupled with routing algorithms, pull content onto the network as close to the origin as possible.
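One crude way to think about this step: measure which capture node is nearest the origin and bring the stream onto the private network there. The sketch below uses hypothetical node addresses and a TCP handshake timing as a rough proxy for distance; the provider's actual routing algorithms are far more involved.

```python
import socket
import time

# Hypothetical origin-capture nodes operated by the acceleration provider.
CAPTURE_NODES = [("capture-nyc.example.net", 443), ("capture-lon.example.net", 443)]

def connect_time_ms(host, port, timeout=2.0):
    """Time a TCP handshake as a rough proxy for network distance."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - start) * 1000
    except OSError:
        return float("inf")   # unreachable candidates are never chosen

def nearest_capture_node(nodes):
    return min(nodes, key=lambda n: connect_time_ms(*n))

# print(nearest_capture_node(CAPTURE_NODES))
```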

Step two involves hauling the content over the highly engineered private network. Because this middle mile is the longest portion of the trip, it is where the bulk of the data-stream optimization happens. In addition to running a fully meshed MPLS-TE network, the provider places infrastructure similar to a WAN optimizer at the origin capture node, which opens a tunnel across the entire private backbone to an identical device at the edge node near the end user. These devices constantly talk to each other, optimizing the flow for maximum throughput with window scaling, selective acknowledgement, round-trip measurement and congestion control. Packet-level forward error correction reconstitutes lost packets at the edge node, avoiding the delays of multi-round-trip retransmission. Packets are also resequenced at the edge node using packet order correction, avoiding the retransmissions that occur when packets arrive out of order. Byte-level data deduplication skips retransmission of identical bytes that can instead be reproduced at the edge, and multiplexing minimizes unnecessary chatter and further compresses data as it traverses the tunnel.
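Packet-level forward error correction can be illustrated with the simplest possible scheme: one XOR parity packet per block of equal-length packets, which lets the receiver rebuild any single lost packet without waiting a full round trip for a retransmission. The toy below shows that mechanism only - it is not the provider's actual algorithm, and production FEC schemes handle variable sizes and multiple losses.

```python
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(packets):
    """Build one parity packet by XOR-ing a block of equal-length packets."""
    return reduce(xor_bytes, packets)

def recover(received, parity):
    """Rebuild the single missing packet (marked None) from the others plus parity."""
    missing_index = received.index(None)
    present = [p for p in received if p is not None]
    rebuilt = reduce(xor_bytes, present, parity)
    return received[:missing_index] + [rebuilt] + received[missing_index + 1:]

block = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
parity = make_parity(block)
damaged = [b"AAAA", None, b"CCCC", b"DDDD"]   # one packet lost in transit
print(recover(damaged, parity)[1])            # -> b'BBBB', no retransmission needed
```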

Step three involves taking advantage of direct peering to eyeball networks, or the last mile, so the content can be dropped back onto the Internet just before it reaches the end user. Because you can't expect users to install software or hardware appliances in their homes or on their devices, placing nodes close to the end user is critical to the success of the process. Generally, if a node sits within 5 to 10 ms of the end user, the experience will still feel like a LAN. Furthermore, placing content inside the eyeball, or last-mile, network ensures that delivery will not be affected by congestion at the ISP's Internet drain during peak usage, a common problem.
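That 5-to-10-ms rule of thumb can be written down as a simple selection policy. The latency figures and node names below are hypothetical; the point is the budget check, which prefers an edge node inside the eyeball network and flags the case where nothing is close enough to feel LAN-like.

```python
# Hypothetical one-way latencies (ms) from candidate edge nodes to a given end user.
edge_latencies_ms = {
    "edge-inside-isp": 4,
    "edge-peering-point": 9,
    "edge-regional": 35,
}

LAN_LIKE_BUDGET_MS = 10   # the article's rule of thumb: 5-10 ms still feels like a LAN

def choose_edge(latencies, budget=LAN_LIKE_BUDGET_MS):
    within_budget = {name: ms for name, ms in latencies.items() if ms <= budget}
    if within_budget:
        return min(within_budget, key=within_budget.get)
    return None   # no LAN-like option; delivery falls back to the nearest node available

print(choose_edge(edge_latencies_ms))   # -> "edge-inside-isp"
```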

Through these three steps, cloud acceleration essentially does the same thing for dynamic content that a CDN does for static content - places it right in the user's lap. With a continuously open data stream equivalent to a superhighway, it is now possible to optimize VoIP, live video, interactive e-media, file-transfer applications like FTP, CIFS and NFS, and whatever new technologies and content come to rely on rapid Internet performance in the future.

About the Author: Jonathan Hoppe

Jonathan Hoppe is President & CTO of Cloud Leverage. He has 15 years of technology experience in application development, Internet, networks and enterprise management systems. As president & CTO, he sets the long-term technology strategy of the company, acts as the technical liaison to partners, representatives and vendors, oversees large enterprise-level projects and is the chief architect for all e-business solutions, software applications and platforms. Additionally, Jonathan leads the architecture, operation, networking and telecom for each globally positioned data center as well as the Network Operations Center.

Prior to heading Cloud Leverage, Jonathan held positions including president and CEO, CTO, and senior applications developer at various e-business solution providers in Canada and the United States.
