
Next-Generation Content Delivery: Cloud Acceleration

Cloud acceleration essentially does the same thing for dynamic content that a CDN does for static content

It would be an understatement to say that this past decade belongs to the Internet. Starting primarily as a research tool, the Internet has now infiltrated every aspect of life - there is very little today that users do not, or cannot, do online. Moreover, new ways to leverage the Internet to personal and professional advantage arise every day.

Inevitably, all this progress has had a side effect: user expectations for website performance have skyrocketed. People expect websites, video and audio to load faster than ever before; otherwise, they lose interest and move on to other sites. Research firms have ample findings to support this. A 2009 ResearchLink survey found that 26 percent of respondents would move to a competitor's website if a vendor's site failed to perform, resulting in an immediate revenue loss of 26 percent and a future loss of 15 percent. Forrester Research found that 36 percent of unique visitors will leave a website if it fails to load within the first three seconds. Three seconds - that's not a lot of time. Yet these expectations are warranted, given how much progress content delivery technology has made in the past few years. Couple these demands with an architecture that is not fit to deliver the performance users expect, and we have a big problem for companies whose business thrives on web content and e-commerce.

As always, though, the IT industry found a solution. A decade ago, Content Delivery Network (CDN) companies invested heavily in an approach that is still in use today: store content as close to the end user as possible, a technique known as edge caching. Users are served cached copies of web pages and applications, making access faster and easier. Some of the more sophisticated CDNs have gone a step further, developing proprietary algorithms and massive distributed networks that proactively identify trouble spots on the public Internet and reroute content around them. While this rerouting lets websites deliver asymmetrical traffic a little more efficiently, applications like streaming video and audio, and even software downloads, are still cached at the edge of the network.
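To make the edge-caching idea concrete, here is a minimal sketch, in Python, of the cache-aside pattern an edge node might follow; the function names and the in-memory store are illustrative assumptions, not any particular CDN's implementation.

import time

# Hypothetical in-memory edge cache: URL -> (content, expiry timestamp).
# Real edge servers use disk-backed stores and HTTP cache headers, but the
# lookup logic follows the same cache-aside pattern.
_edge_cache = {}

def fetch_from_origin(url):
    """Placeholder for the expensive trip back to the origin server."""
    return f"<content of {url}>"

def get(url, ttl_seconds=300):
    """Serve from the edge if a fresh copy exists; otherwise fetch and cache."""
    entry = _edge_cache.get(url)
    now = time.time()
    if entry and entry[1] > now:            # cache hit, still fresh
        return entry[0]
    content = fetch_from_origin(url)        # cache miss or stale copy
    _edge_cache[url] = (content, now + ttl_seconds)
    return content

print(get("https://example.com/logo.png"))  # first call goes to the origin
print(get("https://example.com/logo.png"))  # second call is served from the edge

The same pattern extends naturally to disk-backed stores and to invalidation driven by HTTP cache headers, which is why infrequently changing static objects are such a good fit for it.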

This technique is effective for content up to a certain size, but it cannot meet the high-throughput demands created by businesses' growing reliance on data and by larger residential Internet connections spanning great distances. Edge caching is best suited to static content, not the dynamic, rich content we see today: static content doesn't change very often and can easily be stored on low-cost disks in many locations around the Internet. Even when it does change fairly frequently, it is easy to script updates so the copies sitting at the edge stay current. The reality, however, is that today, in 2010, static content makes up an ever-smaller share of all the content that needs to be transferred. The need of the hour is to move dynamic content with the same speed and ease - and that need is still unfulfilled. CDNs and their edge-caching capabilities are far less successful with dynamic content because, by its very nature, it cannot simply be placed at the edge of a network; it is often too large and too short-lived. Dynamic content that is live at this moment may not exist two seconds from now. What's more, this category includes most of what we use today: VoIP, FTP, live video and so on.

The question then remains: How do content providers ensure that end users (both business and consumers) experience the same ease of access they did a decade ago, but with the dynamic content they want to transfer today?

Enter the CDN's newer, more sophisticated cousin - cloud acceleration - which does what CDNs do, but faster and with far better handling of dynamic content. Cloud acceleration suits dynamic content precisely because it does not rely on edge caching; in fact, it works best without it. It is also more cost-effective, since users are not paying for a decade's worth of infrastructure built out to enhance edge-caching capabilities. And last, but not least, it can fight common Internet problems not only by routing around them, but by actually fixing the core problems of long-distance networks. There is something to be said for a solution that addresses the real issue, performs better, costs less and results in happy website visitors and increased revenues.

But how does cloud acceleration work its magic in the first place?

For starters, as previously mentioned, cloud acceleration doesn't rely on edge caching. Instead, it optimizes the entire delivery path over a network managed by the service provider. Content is delivered directly from origin servers to the end user at the same level of performance as if the two were in the same building. How is that better? For one, the most significant portion of the delivery path is not the public Internet but a high-performance, private, 100 percent optical network designed for speed. The cloud acceleration provider is in full control of traffic and congestion on that network, and therefore controls Quality of Service (QoS). That also adds an element of security to the data's entire journey. Most important in this context, cloud acceleration providers no longer have to rely on third-party Internet providers, or on the algorithmic rerouting calculations common to CDNs, to work around problems such as latency, jitter and packet loss. They are now in a position to actually fix them.

For more clarity, let's revisit how CDNs function. The path between the content origin and the end user requesting it can be broken into three distinct "miles." The first mile is the distance between the origin server and a backbone - e.g., a T1, DSx, OCx or Ethernet connection to the Internet. The middle mile covers the majority of the distance over one or more interconnected carrier backbones. Finally, the last mile is the end user's connection, such as DSL, cable or wireless.
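As a rough illustration of why the middle mile matters most, the following sketch sums a one-way latency budget across the three miles; the figures are made up for illustration, not measurements.

# Illustrative (made-up) one-way latency budget for a long-haul request,
# broken into the three "miles" described above. The point is only that the
# middle mile typically dominates the total.
latency_ms = {
    "first mile (origin to backbone)": 2,
    "middle mile (carrier backbones)": 70,
    "last mile (ISP to end user)": 15,
}

total = sum(latency_ms.values())
for segment, ms in latency_ms.items():
    print(f"{segment}: {ms} ms ({ms / total:.0%} of the path)")
print(f"total one-way latency: {total} ms")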

Simply put, CDNs that rely on caching frequently requested objects at the edge of the network are designed to avoid all three of these "miles" as much as possible. Because greater distance always means more latency and often more packet loss, it's best to place as many copies of an object as close to end users as possible. Well-designed caching CDNs do this fairly well by placing object-caching servers within the last-mile provider's network. Caching CDNs that don't have that luxury place their servers at key Internet peering points instead. While not as close to the end user, this is still a fairly effective way to avoid problems.

The more sophisticated CDNs that also attempt to choose alternate Internet paths still suffer, because they inherently rely on the Internet to get from point A to point B. Since they don't own the network, and therefore have no real control over any "mile" of the route, they are at the mercy of the Internet. Providers that do own a network try to inject QoS using Multiprotocol Label Switching (MPLS), but over longer distances they are still at the mercy of latency, jitter and packet loss.

With the CDN model established, how does a cloud acceleration service provider do things differently? The overall objective is still similar to a traditional CDN's: minimize the amount of public Internet used to move content from the origin to the end user. The more Internet travel that can be avoided, the better the end user's web performance. The goal of acceleration, however, is to accomplish this without caching at the edge, because dynamic content ideally should never be cached. In fact, much of what we consider dynamic content today requires a persistent connection between the origin and the end user, which is achieved through the following three steps:

Step one involves opening a connection to the origin server over the first mile so the data stream can be brought onto the accelerated network as quickly as possible and optimization can begin. Installing an optional origin appliance starts the optimization even sooner, right in the origin data center. The cloud acceleration provider should have multiple origin capture nodes around the world, or at least close to its customers' origins; these, coupled with routing algorithms, pull content onto the network as close to the origin as possible.
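A hypothetical sketch of this kind of capture-node selection might look like the following; the node list and the RTT probe are illustrative stand-ins for a provider's real measurement and routing systems.

import random

# Hypothetical capture node locations; a real provider maintains its own list.
CAPTURE_NODES = ["us-east", "us-west", "eu-west", "ap-southeast"]

def measure_rtt_ms(node, origin_host):
    """Stand-in for an RTT probe (e.g., timing a TCP handshake) to the origin."""
    return random.uniform(5, 200)  # simulated measurement

def pick_capture_node(origin_host):
    """Return the capture node with the lowest measured RTT to the origin."""
    rtts = {node: measure_rtt_ms(node, origin_host) for node in CAPTURE_NODES}
    return min(rtts, key=rtts.get)

print(pick_capture_node("origin.example.com"))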

Step two involves hauling the content over the highly engineered private network. Because this middle mile is the longest portion of the trip, it is where the bulk of the data-stream optimization happens. In addition to running a fully meshed MPLS-TE network, the provider places infrastructure similar to a WAN optimizer at the origin capture node, which opens a tunnel across the entire private backbone to an identical device at the edge node near the end user. These devices constantly talk to each other, optimizing flow for maximum throughput with window scaling, selective acknowledgement, round-trip measurement and congestion control. Packet-level forward error correction reconstitutes lost packets at the edge node, avoiding the delays of multi-round-trip retransmission. Packets are also resequenced at the edge node using packet order correction, avoiding the retransmissions that occur when packets arrive out of order. Byte-level data deduplication avoids resending bytes that can be recreated at the edge from data already seen, and multiplexing minimizes unnecessary chatter and further compresses data as it traverses the tunnel.
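To illustrate the packet-level forward error correction idea, here is a minimal sketch using a single XOR parity packet per group of equal-length packets, so one lost packet can be rebuilt at the edge without a retransmission round trip; production systems use more capable codes, and the names here are purely illustrative.

from functools import reduce

def xor_parity(packets):
    """Compute a parity packet as the byte-wise XOR of equal-length packets."""
    return bytes(reduce(lambda a, b: a ^ b, group) for group in zip(*packets))

def recover(received, parity):
    """Rebuild the single missing packet (marked None) from the parity packet."""
    missing_index = received.index(None)
    present = [p for p in received if p is not None]
    rebuilt = xor_parity(present + [parity])
    return received[:missing_index] + [rebuilt] + received[missing_index + 1:]

packets = [b"aaaa", b"bbbb", b"cccc", b"dddd"]
parity = xor_parity(packets)

# Simulate losing the third packet somewhere in the middle mile.
received = [packets[0], packets[1], None, packets[3]]
print(recover(received, parity))  # [b'aaaa', b'bbbb', b'cccc', b'dddd']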

Step three involves taking advantage of direct peering with eyeball networks - the last mile - so the content can be dropped back onto the Internet just before it reaches the end user. Because users can't be expected to install software or hardware appliances in their homes or on their devices, placing nodes close to the end user is critical to the success of the process. Generally, if the node is within 5 to 10 ms of the end user, the experience will still feel like a LAN. Placing content inside the eyeball, or last-mile, network also ensures that delivery will not be affected by congestion at the ISP's Internet drain during peak usage, a common problem.

Through these three steps, cloud acceleration essentially does the same thing for dynamic content that a CDN does for static content - places it right in the user's lap. With a continuously open data stream equivalent to a superhighway, it is now possible to optimize VoIP, live video, interactive e-media, file transfer applications like FTP, CIFS and NFS, and any future technologies and content that rely on rapid Internet performance.

More Stories By Jonathan Hoppe

Jonathan Hoppe is President & CTO of Cloud Leverage. He has 15 years of technology experience in application development, Internet, networks and enterprise management systems. As president & CTO, he sets the long-term technology strategy of the company, acts as the technical liaison to partners, representatives and vendors, oversees large enterprise-level projects and is the chief architect for all e-business solutions, software applications and platforms. Additionally, Jonathan leads the architecture, operation, networking and telecom for each globally positioned data center as well as the Network Operations Center.

Prior to heading Cloud Leverage, Jonathan held various positions including president and CEO, CTO and senior applications developer for various e-business solution providers in Canada and the United States.
