
Cloud Expo: Article

Next-Generation Content Delivery: Cloud Acceleration

Cloud acceleration essentially does the same thing for dynamic content that a CDN does for static content

It would be an understatement to say that this past decade belongs to the Internet. Starting primarily as a research tool, the Internet has now infiltrated every aspect of life - there is very little today that users do not, or cannot, do online. Moreover, new ways to leverage the Internet to personal and professional advantage arise every day.

However, all this progress has had an insidious side effect: user expectations for website performance have skyrocketed over the years. People expect websites, video and audio to load faster than ever before; otherwise, they lose interest and go elsewhere. Research firms have ample findings to support this. A 2009 ResearchLink survey found that 26 percent of respondents would move to a competitor's website if a vendor's website failed to perform, resulting in an immediate revenue loss of 26 percent and a future loss of 15 percent. Forrester Research also found that 36 percent of unique visitors will leave a website if it fails to load within the first three seconds. Three seconds - that's not a lot of time. Yet user expectations are warranted, given how much progress content delivery technology has made in the past few years. Couple these demands with an architecture unfit to deliver the performance users expect, and the result is a big problem for companies whose business thrives on web content and e-commerce.

But as always, the industry conferred and found a solution. Content Delivery Network (CDN) companies invested a great deal of time and money into a technique that is still in use today: storing content as close to the end user as possible, known as edge caching. It lets users access cached copies of web pages or applications for faster, easier access. Some of the more sophisticated CDNs have gone a step further, developing unique algorithms and massive distributed networks that proactively identify trouble spots on the public Internet and reroute content around them. While this additional technique lets websites deliver asymmetrical traffic a little more efficiently, applications like streaming video and audio, and even software downloads, are still cached at the edge of the network despite the smarter routing.

Clearly, this technique is effective for content up to a certain size, but it cannot meet the high throughput demands created by today's growing business reliance on data and by larger residential Internet connections spanning great distances. Edge caching is best suited to static content, not the dynamic, rich content we see today: static content changes infrequently and can easily be stored on low-cost disks in a multitude of locations around the Internet. Even when it changes fairly often, it is easy to script updates so the copies sitting at the edge stay current. But the reality is that today, in 2010, static content forms an ever smaller percentage of all the content that needs to be transferred. The need of the hour - moving dynamic content with the same speed and ease - is as yet unfulfilled. CDNs and their edge-caching capabilities are far less successful with dynamic content because, by its very nature, it cannot simply be placed at the edge of a network: it is often large, and it changes constantly. Content that is live at this moment may not exist two seconds from now. What's more, this category includes most of what we use today: VoIP, FTP, live video and so on.

The question then remains: How do content providers ensure that end users (both business and consumers) experience the same ease of access they did a decade ago, but with the dynamic content they want to transfer today?

Enter the CDN's newer, more sophisticated cousin - cloud acceleration - which does what CDNs do, but faster and with far better handling of dynamic content. Cloud acceleration suits dynamic content precisely because it does not rely on edge caching - in fact, it works best without it. It is also more cost-effective, since users are not paying for a decade's worth of infrastructure designed and built out to enhance edge-caching capabilities. And last, but not least, it can fight common Internet problems not only by routing around them, but by actually fixing the core problems associated with long-distance networks. There is definitely something to be said for a solution that addresses the real issue, performs better, costs less and results in happy website visitors and increased revenue.

But how does cloud acceleration work its magic in the first place?

For starters, as previously mentioned, cloud acceleration doesn't rely on edge caching. Instead, it optimizes the entire delivery path over a network managed by the service provider. Content is delivered directly from origin servers to the end user at the same level of performance as if both were in the same building. How is that better? For one, the most significant portion of the delivery path is surprisingly not the public Internet, but a high-performance, private, 100 percent optical network designed for speed. The cloud acceleration provider is in full control of traffic and congestion on that network, and therefore controls Quality of Service (QoS). That control also adds an element of security to the data's entire journey. But most important in this context, cloud acceleration providers no longer have to rely on third-party Internet providers, or on the algorithmic rerouting calculations common to CDNs, to work around problems such as latency, jitter and packet loss. They are now in a position to actually fix them.

For more clarity, let's revisit how CDNs function. The path between the content origin and the end user requesting it can logically be broken into three distinct "miles." The first mile is the connection between the origin server and a backbone - e.g., a T1, DSx, OCx or Ethernet connection to the Internet. The middle mile is the backbone itself, which covers the majority of the distance over one or more interconnected carrier backbones. Finally, the last mile is the end user's own connection: DSL, cable, wireless or otherwise.

Simply put, CDNs cache frequently requested objects at the edge of the network and are designed to avoid all three of these "miles" as much as possible. Because an increase in distance always means increased latency, and often greater packet loss, it is best to place copies of objects as close to end users as possible. Well-designed caching CDNs do this fairly well by placing object-caching servers within the last-mile provider's network. Caching CDNs that do not have that luxury place their servers at key Internet peering points instead. While not as close to the end user, this is still a fairly effective way of avoiding problems altogether.
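The distance-latency relationship above is simple physics: light in fiber travels at roughly two-thirds the speed of light in vacuum, so every extra mile between origin and user adds unavoidable round-trip delay. A back-of-the-envelope sketch (the distance figure and fiber factor are illustrative, not from this article):

```python
# Best-case propagation delay over fiber; figures are illustrative.
SPEED_OF_LIGHT_KM_S = 299_792   # speed of light in vacuum, km/s
FIBER_FACTOR = 0.67             # light in fiber travels at roughly 2/3 c

def min_rtt_ms(distance_km: float) -> float:
    """Lower bound on round-trip time over fiber, in milliseconds,
    before any queuing, routing or retransmission delay is added."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR)
    return 2 * one_way_s * 1000

# A cross-continent path of ~4,000 km costs ~40 ms of RTT
# even on a perfect network - which is why proximity matters.
print(round(min_rtt_ms(4000)))  # → 40
```

No amount of clever software removes this floor; it can only be avoided by shortening the path, which is exactly what edge placement does.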

The more sophisticated CDNs that also attempt to choose alternate Internet paths still suffer because they inherently rely on the Internet to get from point A to point B. Since they don't own the network, and therefore have no ultimate control over any "mile" of the route, they are at the mercy of the Internet. Providers that do own a network attempt to inject QoS by using multiprotocol label switching (MPLS), but are ultimately still at the mercy of the effects of latency, jitter and packet loss over longer distances.

With the CDN model established, how does a cloud acceleration service provider do things differently? First, it is important to understand that the overall objective is still similar to a traditional CDN's: minimize the amount of public Internet used to move content from the origin to the end user. The more Internet travel that can be avoided, the better the end user's web performance. The goal of acceleration, however, is to accomplish this without caching at the edge, because dynamic content cannot usefully be cached. In fact, much of what we view today as dynamic content requires a persistent connection between the origin and the end user, which is achieved through the following three steps:

Step one involves opening a connection to the origin server over the first mile so the data stream can be brought onto the accelerated network, and the optimization process begun, as quickly as possible. Installing an optional appliance at the origin datacenter starts optimization even sooner, right at the source. The cloud acceleration provider should have multiple origin capture nodes around the world, or at least close to its customers' origins. This, coupled with routing algorithms, pulls content onto the network as close to the origin as possible.
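One way this "capture as close to the origin as possible" selection could work is plain latency-based probing: measure the round trip to each capture node and pick the nearest. This is a hypothetical sketch - the node hostnames and the handshake-timing probe are illustrative, not a real provider's API:

```python
# Hypothetical sketch: choose the origin capture node with the lowest
# measured RTT to us. Node list and probe method are illustrative only.
import socket
import time

CAPTURE_NODES = [
    "nyc.capture.example.net",   # hypothetical node names
    "lon.capture.example.net",
    "sin.capture.example.net",
]

def probe_rtt(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Rough RTT estimate: time a single TCP handshake to the node."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            pass
    except OSError:
        return float("inf")   # unreachable nodes sort last
    return time.monotonic() - start

def nearest_node(nodes: list[str]) -> str:
    """Pick the node that answered the handshake fastest."""
    return min(nodes, key=probe_rtt)
```

Real systems would likely combine such measurements with BGP/anycast routing rather than probing alone, but the principle - get traffic onto the private network at the closest entry point - is the same.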

Step two involves hauling the content over the highly engineered private network. Because this middle mile is the longest portion of the trip, it is where the bulk of the data-stream optimization happens. In addition to running a fully meshed MPLS-TE network, the provider places WAN-optimizer-like infrastructure at the origin capture node, which opens a tunnel across the entire private backbone to an identical device at the edge node near the end user. These devices constantly talk to each other, optimizing flow to ensure maximum throughput with window scaling, selective acknowledgement, round-trip measurement and congestion control. Packet-level forward error correction reconstitutes lost packets at the edge node, avoiding the delays that come with multiple round-trip retransmissions. Packets are also resequenced at the edge node using packet order correction, avoiding the retransmissions that occur when packets arrive out of order. Byte-level data deduplication eliminates retransmission of identical bytes that can instead be recreated at the edge, and multiplexing minimizes unnecessary chatter and further compresses data as it traverses the tunnel.
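Packet-level forward error correction can be illustrated with a simple XOR parity scheme: for every group of data packets, the sender adds one parity packet, so the edge node can rebuild any single lost packet in the group without waiting a full round trip for a retransmission. This is a toy illustration of the idea, not the provider's actual algorithm:

```python
# Toy XOR-parity FEC: one parity packet per group lets the edge node
# rebuild a single lost packet without a retransmission round trip.
from functools import reduce

def make_parity(group: list[bytes]) -> bytes:
    """XOR all packets together (shorter packets padded with zeros)."""
    size = max(len(p) for p in group)
    padded = [p.ljust(size, b"\x00") for p in group]
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), padded)

def recover(received: list, parity: bytes) -> list:
    """Rebuild one missing packet (marked None) from the survivors."""
    missing = [i for i, p in enumerate(received) if p is None]
    assert len(missing) == 1, "XOR parity repairs exactly one loss per group"
    survivors = [p for p in received if p is not None]
    # XOR of survivors and parity cancels everything but the lost packet.
    received[missing[0]] = make_parity(survivors + [parity])
    return received
```

Real FEC codes (e.g., Reed-Solomon) tolerate multiple losses per group, but the trade-off is the same: a little extra bandwidth up front buys freedom from latency-doubling retransmissions.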

Step three involves taking advantage of direct peering with eyeball networks - the last mile - so the content can be dropped back onto the Internet just before it reaches the end user. Because users can't be expected to install software applications or hardware appliances in their homes or on their devices, placing nodes close to the end user is critical to the success of the process. Generally, if a node sits within 5 to 10 ms of the end user, the experience will still feel like a LAN. Furthermore, placing content inside the eyeball, or last-mile, network ensures that delivery will not be affected by congestion at the ISP's Internet drain during peak usage - a common problem.

Through these three steps, cloud acceleration essentially does the same thing for dynamic content that a CDN does for static content - places it right in the user's lap. With a continuously open data stream equivalent to a super highway, it is now possible to optimize VoIP, live video, interactive e-media, file-transfer applications like FTP, CIFS and NFS, and any future technologies and content that rely on rapid Internet performance.

More Stories By Jonathan Hoppe

Jonathan Hoppe is President & CTO of Cloud Leverage. He has 15 years of technology experience in application development, Internet, networks and enterprise management systems. As president & CTO, he sets the long-term technology strategy of the company, acts as the technical liaison to partners, representatives and vendors, oversees large enterprise-level projects and is the chief architect for all e-business solutions, software applications and platforms. Additionally, Jonathan leads the architecture, operation, networking and telecom for each globally positioned data center as well as the Network Operations Center.

Prior to heading Cloud Leverage, Jonathan held various positions including president and CEO, CTO and senior applications developer for various e-business solution providers in Canada and the United States.


