
Bandwidth, Bandwidth, Bandwidth!

To really provision bandwidth efficiently you have to get inside the application

One of the most commonly cited use cases for SDN (the classical, architectural definition) centers on ensuring quality of service for applications, usually by adjusting bandwidth constraints and prioritization, sometimes dynamically based on the operating conditions present on the network.

In such a scenario the application magically informs the SDN controller of its bandwidth and service-level requirements and the controller adjusts the network and distributes the appropriate flow tables to the network fabric to support the application.

This is a great vision, but it is not without challenges.

The most significant obstacle is actually not getting the application to talk to the SDN controller. Northbound APIs could be used for this purpose, or some other API-based mechanism that instructs the controller on application-specific requirements. Let's not rat hole on that; assume it is easily enough accomplished.
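
For illustration only, the exchange might look something like this. The endpoint, payload schema, and field names are all hypothetical - northbound APIs aren't standardized - but the shape of the conversation is the point:

    import json
    import urllib.request

    # The application describes itself and its needs to the controller.
    requirements = {
        "app": "video-portal",                      # hypothetical application
        "match": {"ip": "10.0.0.5", "port": 443},   # how the network sees it
        "bandwidth_mbps": 50,                       # requested guarantee
        "priority": "high",
    }

    request = urllib.request.Request(
        "http://sdn-controller.example.com/northbound/v1/app-requirements",
        data=json.dumps(requirements).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(request)  # hand the requirements to the controller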

At this point the SDN controller has some requirements dictated by an application. Given the way in which an SDN controller distributes forwarding information to the network fabric, one has to ask how the SDN controller will represent the requirements of the application and, more importantly, how it will distribute those requirements.

Assuming a classical SDN architecture and the use of OpenFlow or a protocol similar in capability, the flow table in the network fabric will only be able to distinguish packets by IP / port combination. Let's assume that's an accurate representation of the overall topology; that is, every application has a distinct IP / port combination. That means the SDN controller can, in fact, push flow table rules that provision the appropriate bandwidth for those application flows as well as enforce prioritization (if that's needed, too).
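
For the sake of illustration, here's what that push might look like from the controller side. This is a minimal sketch assuming the open source Ryu framework and OpenFlow 1.3 - any capable controller would do - and the address, port, and queue ID are made up:

    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class AppQosProvisioner(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def switch_features_handler(self, ev):
            dp = ev.msg.datapath
            parser = dp.ofproto_parser
            # Match "the application" solely by its IP / port combination -
            # the finest granularity a classical L2-4 flow table offers.
            match = parser.OFPMatch(eth_type=0x0800, ip_proto=6,
                                    ipv4_dst='10.0.0.5', tcp_dst=443)
            # Steer matching flows into a pre-configured QoS queue
            # (queue 1 is assumed to carry the bandwidth guarantee).
            actions = [parser.OFPActionSetQueue(1),
                       parser.OFPActionOutput(dp.ofproto.OFPP_NORMAL)]
            inst = [parser.OFPInstructionActions(
                dp.ofproto.OFPIT_APPLY_ACTIONS, actions)]
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                          match=match, instructions=inst))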

So far so good. You're thinking I'm barking up a pedantic tree or something, aren't you? Nope. Here comes a significant problem, starting with the question: how does the application define its need for bandwidth?

"Applications" today are comprised of a variety of functions and capabilities ranging from the delivery of simple text to dozens of images to embedded multi-media to video (and probably a few others I'm missing). The bandwidth needs of video is different from text is different from images is different for real-time messaging applications. Sensitivity to latency, throughput, bandwidth - these characteristics are peculiar to content-types, not the application itself (capabilities of the client-side network and device not withstanding, either). Given an application will varying - sometimes wildly - content types and requirements, should it simply request from the network the highest throughput and lowest latency required of all content being delivered? That's terribly inefficient.

HTTP is the new TCP
At the root of the problem is the reality that HTTP is the new TCP, with a significant percentage (62% in our research) of applications using HTTP. A smaller percentage of those applications use port 8080 or port 443, but they are still HTTP. In an increasingly API-enabled application world, the best chance we have to profile bandwidth needs for an "application" is at the URI level.
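
To see why the IP / port tuple runs out of steam, consider that all of these "applications" may arrive on port 443 of the same virtual server; the URI is the only discriminator left. A toy illustration (the prefixes and profiles are invented for the example):

    # Once everything is HTTP on 80/443, the URI, not the IP / port tuple,
    # identifies the application function and its delivery needs.
    URI_PROFILES = [
        ("/video/",  {"min_mbps": 25, "latency_sensitive": False}),
        ("/chat/",   {"min_mbps": 1,  "latency_sensitive": True}),
        ("/static/", {"min_mbps": 5,  "latency_sensitive": False}),
    ]

    def profile_for(uri):
        for prefix, profile in URI_PROFILES:
            if uri.startswith(prefix):
                return profile
        return {"min_mbps": 1, "latency_sensitive": False}  # best effort

    print(profile_for("/video/launch.mp4"))  # {'min_mbps': 25, ...}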

All the interesting application-layer stuff is going on above layer 7 (HTTP) or more precisely within layer 7, in the payload (and across multiple packets and flows, but that's a different discussion). To really define the specific bandwidth needs of an application you have to look at the content being delivered. In many cases that content type can be deduced from clues in the URI (file extensions like JPG, PNG, CSS, etc.) or extracted from the HTTP header Content-Type, which spells it out. In either case, you must be able to inspect and evaluate data in the HTTP payload, not merely IP and TCP parameters.
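
Here's how that deduction might look in practice - a toy classifier, not any particular product's API, and the mapping from content class to bandwidth policy is invented:

    import mimetypes

    BANDWIDTH_CLASSES = {          # hypothetical per-content-class policies
        "video": "high-throughput",
        "image": "bulk",
        "text":  "low-latency",
    }

    def classify(uri, headers):
        # Prefer the explicit Content-Type header, which spells it out...
        ctype = headers.get("Content-Type")
        if ctype is None:
            # ...otherwise fall back to clues in the URI (file extension).
            ctype, _ = mimetypes.guess_type(uri)
        major = (ctype or "text/plain").split("/")[0]
        return BANDWIDTH_CLASSES.get(major, "best-effort")

    print(classify("/assets/intro.mp4", {}))                      # high-throughput
    print(classify("/api/users", {"Content-Type": "text/html"}))  # low-latency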

The biggest problem is that the current SDN architectural model, which focuses heavily on packet- and flow-based processing, does not have the depth of visibility necessary to distinguish content types within an application and thus apply routing and forwarding policies based on each content type's unique requirements. An application delivering both video (a plurality of video is delivered via HTTP today, and it's increasing rapidly) and text will need to be optimized for one or the other, but not both. The same is true for images, and even for different delivery models (push, pull, real-time, static) of text-based information.

To do that you need visibility into the application, down to the payload in some cases. That's just not a capability the classical SDN architecture is able to provide today, for a variety of reasons. Current SDN architectures assume visibility and action on L2-4 only. Unfortunately, the data necessary is at and above L7.

Ultimately the answer to this conundrum is to include L7-capable data path elements in the SDN architecture. The standard L2-3 SDN fabric can then optimally route packets through the network based on general, application-oriented network requirements while allowing the L7-aware data path elements to do what they do best: inspect, analyze, evaluate and even modify (optimize) application messages in order to optimally deliver data to the end user.
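
To make that division of labor concrete, here's a deliberately tiny sketch of such an L7-aware data path element: a reverse proxy that inspects each HTTP response's Content-Type and marks the flow (DSCP, via the socket's TOS bits) so the L2-3 fabric downstream can apply per-content-type policy. The origin address and DSCP values are assumptions for illustration:

    import socket
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    DSCP_BY_TYPE = {"video": 0x88, "image": 0x28, "text": 0xB8}  # hypothetical marks

    class InspectingProxy(BaseHTTPRequestHandler):
        ORIGIN = "http://origin.example.com"  # assumed backend

        def do_GET(self):
            with urllib.request.urlopen(self.ORIGIN + self.path) as upstream:
                body = upstream.read()
                ctype = upstream.headers.get("Content-Type", "text/plain")
            # The L7 step: only here, after reading the HTTP response itself,
            # do we know what kind of content this flow is carrying.
            tos = DSCP_BY_TYPE.get(ctype.split("/")[0], 0x00)
            self.connection.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
            self.send_response(200)
            self.send_header("Content-Type", ctype)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("", 8080), InspectingProxy).serve_forever()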

Application awareness, as it's often referred to, is not enough. Really ensuring that the network - and thus SDN - can offer application-specific services requires application fluency. And application fluency isn't something you find by peeking at packets at layers 2-4. You've got to go deeper - to layer 7 and beyond.


About the Author

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
