Amazon Elastic Load Balancing Only Simple On the Outside

Amazon’s ELB is an exciting mix of well-executed Infrastructure 2.0 and the proper application of SOA, but it takes a lot more under the covers than its simple exterior suggests

The notion of Elastic Load Balancing, recently brought to public attention by Amazon’s offering of the capability, is nothing new. The basic concept is pure Infrastructure 2.0, and the functionality offered via the API has been available on several application delivery controllers for many years. In fact, looking through the options for Amazon’s offering leaves me feeling a bit, oh, 1999, as if load balancing hadn’t evolved far beyond the very limited subset of capabilities exposed by Amazon’s API.

That said, that’s just the view from the outside.

Though Amazon’s ELB might be rudimentary in what it exposes to the public, it is certainly anything but primitive in its use of SOA, and it stands as a prime example of the power of Infrastructure 2.0. In fact, with the exception of GoGrid’s integrated load balancing capabilities, provisioned and managed via a web-based interface, there aren’t many good, public examples of Infrastructure 2.0 in action. Not only has Amazon leveraged Infrastructure 2.0 concepts with its implementation, it has also taken advantage of SOA in the way it was meant to be used.

NOTE: What follows is just my personal analysis; I don’t have any special knowledge of what really lies beneath Amazon’s external interfaces. The diagram is a visual interpretation of what I’ve deduced seems likely in terms of the interactions with ELB, given my experience with application delivery and the information available from Amazon, and should be read with that in mind.

[Diagram: deduced interactions between the ELB API, a management/orchestration layer, and the underlying load balancing infrastructure]

WHAT DOES THAT MEAN?


When I say Amazon has utilized SOA in the way it was meant to be used, I mean that their ELB “API” isn’t just a collection of plain old web services (POWS) wrapped around some other API. It’s actually a well-thought-out set of interfaces that describe tasks associated with load balancing, not individual product calls. For example, if you take a look at the ELB WSDL you can see a set of operations that describe tasks rather than management or configuration options, such as the four below (a short sketch of these calls follows the list):

 

  • CreateLoadBalancer
  • DeleteLoadBalancer
  • RegisterInstancesWithLoadBalancer
  • DeregisterInstancesFromLoadBalancer
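
To make the task-orientation concrete, here’s what that lifecycle looks like from the consumer’s side. This is a minimal sketch using the modern boto3 SDK, which postdates this post and isn’t how the WSDL operations were originally invoked; the load balancer name and instance id are hypothetical.

```python
# Minimal sketch of the task-oriented ELB lifecycle via boto3 (modern SDK);
# the operation names mirror the WSDL operations listed above.
import boto3

elb = boto3.client("elb")  # the "classic" ELB API discussed in this post

# CreateLoadBalancer: one call stands in for the virtual server, listener,
# and pool configuration a traditional device would require.
elb.create_load_balancer(
    LoadBalancerName="my-elb",                      # hypothetical name
    Listeners=[{"Protocol": "TCP",
                "LoadBalancerPort": 80,
                "InstancePort": 80}],
    AvailabilityZones=["us-east-1a"],
)

# RegisterInstancesWithLoadBalancer: just instance ids and a name.
elb.register_instances_with_load_balancer(
    LoadBalancerName="my-elb",
    Instances=[{"InstanceId": "i-0123456789abcdef0"}],  # hypothetical id
)

# DeregisterInstancesFromLoadBalancer and DeleteLoadBalancer round out
# the task set in the same name-only style.
elb.deregister_instances_from_load_balancer(
    LoadBalancerName="my-elb",
    Instances=[{"InstanceId": "i-0123456789abcdef0"}],
)
elb.delete_load_balancer(LoadBalancerName="my-elb")
```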

To understand why these are so significant, and most certainly represent tasks rather than individual operations, you have to understand how a load balancer is typically configured and how the individual configuration components fit together. Saying “DeleteLoadBalancer” is a lot easier than what really has to occur under the covers. Believe me, it’s not as easy as a single API call on any load balancing solution. There are a lot of relationships inherent in a load balancing configuration between the virtual server/IP address, the (pools|farms|clusters), and the individual nodes, a.k.a. instances in Amazon-speak. Yet if you take a look at the parameters required to “register instances” with the load balancer, you’ll see only a list of instance ids and a load balancer name. All of those relationships must still be configured, but the API makes the process appear almost magical.
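
For contrast, consider the kind of device-level work a single RegisterInstancesWithLoadBalancer call has to hide. The sketch below is entirely hypothetical; the adc object and its methods are invented for illustration and don’t correspond to any particular vendor’s API.

```python
# Hypothetical device-level steps hidden behind one ELB task. None of these
# names come from a real product: they stand in for a typical ADC workflow.
def register_instances(adc, lb_name, instance_ids):
    pool = adc.get_pool(lb_name)              # look up the pool/farm/cluster
    for instance_id in instance_ids:
        address = adc.resolve_address(instance_id)  # map instance id to IP
        member = pool.add_member(address, port=80)  # create the node/member
        member.attach_health_monitor("tcp")         # monitor before traffic
        member.enable()                             # only now receives load
    adc.sync_config()  # persist and push the config to the device(s)
```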

The terminology used here indicates (to me, at least) an abstraction, which means these operations are not communicating directly with a physical (or even virtual) device but rather are being sent to a management or orchestration system that in turn relays the appropriate API calls to the underlying load balancing infrastructure.

The abstraction here appears to be pure SOA and it is, if you don’t mind my saying, a beautiful thing. Amazon has abstracted the actual physical implementation of not only the management or orchestration system but also, as is proper, decoupled the physical infrastructure implementation from the services being provided. There is a clear separation of service from implementation, which allows Amazon to use product X or Y, hardware or software, virtual or concrete, and even one or more vendor solutions at the same time, without the service consumer being aware of what that implementation may be.
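
Here’s a rough sketch of that separation, with all names invented: a task-level facade accepts consumer requests, and any number of concrete backends can satisfy them without the consumer ever knowing which one is in play.

```python
# Sketch of the service/implementation split described above (names invented).
from abc import ABC, abstractmethod

class LoadBalancerBackend(ABC):
    """What the orchestration layer needs from any concrete implementation."""
    @abstractmethod
    def provision(self, name: str) -> None: ...
    @abstractmethod
    def add_node(self, name: str, address: str) -> None: ...

class VendorXAppliance(LoadBalancerBackend):
    def provision(self, name): ...     # vendor-specific API calls go here
    def add_node(self, name, address): ...

class SoftwareProxyFarm(LoadBalancerBackend):
    def provision(self, name): ...     # a completely different implementation
    def add_node(self, name, address): ...

class ElasticLoadBalancingService:
    """Task-level facade: consumers never see which backend is in play."""
    def __init__(self, backend: LoadBalancerBackend):
        self._backend = backend  # swappable without changing the interface

    def create_load_balancer(self, name: str) -> None:
        self._backend.provision(name)

    def register_instances(self, name: str, addresses: list[str]) -> None:
        for address in addresses:
            self._backend.add_node(name, address)
```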

The current offering appears to be pure layer 4 load balancing. That’s a good place to start, but it lacks the robustness of a fully layer 7 capable solution, and eventually Amazon will need to address some of the challenges associated with load balancing stateful applications for its customers; challenges typically addressed through persistence, cookies, and URI rewriting. Some of this functionality appears to be built in, but it is not well documented by Amazon.
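
For readers unfamiliar with the persistence problem: a layer 7 capable device can pin a client to the instance holding its session state, most commonly via a cookie. The toy sketch below illustrates the general technique only; it is not a description of what ELB actually does internally.

```python
# Toy illustration of cookie-based session persistence at layer 7. This is
# the generic technique, not ELB's (undocumented) internal behavior.
import random

INSTANCES = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # hypothetical pool
COOKIE = "LB-PERSIST"                             # hypothetical cookie name

def pick_instance(request_cookies: dict) -> tuple[str, dict]:
    """Return (instance, cookies_to_set) for one HTTP request."""
    pinned = request_cookies.get(COOKIE)
    if pinned in INSTANCES:            # returning client: honor the pin
        return pinned, {}
    choice = random.choice(INSTANCES)  # new client: pick one and remember it
    return choice, {COOKIE: choice}
```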

For example, forwarding the client IP address is a common challenge with load-balanced applications, one often solved with the custom HTTP header X-Forwarded-For. Ken Weiner addresses this in a blog post, indicating Amazon is indeed using common conventions to retain the client IP address and forward it to the instances being load balanced. It may be that more layer 7 functionality is exposed than it appears, and is simply not as well documented. If the underlying implementation is capable (and it appears to be, given the way ELB handles client IP preservation), it’s a pretty good bet that Amazon will be able to address other challenges with relative ease given the foundation it has already built.
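
On the application side, recovering the original client address from that header is straightforward. Here’s a minimal WSGI sketch, assuming only the X-Forwarded-For convention Weiner describes:

```python
# Minimal WSGI app showing how an instance behind a load balancer recovers
# the original client IP from the X-Forwarded-For header.
def app(environ, start_response):
    # X-Forwarded-For may hold a chain of proxies; the client is leftmost.
    forwarded = environ.get("HTTP_X_FORWARDED_FOR", "")
    client_ip = forwarded.split(",")[0].strip() or environ.get("REMOTE_ADDR")
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [f"client ip: {client_ip}\n".encode()]

if __name__ == "__main__":
    from wsgiref.simple_server import make_server
    make_server("", 8080, app).serve_forever()  # local test only
```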

That’s agility; that’s Infrastructure 2.0 and SOA. Can you tell I’m excited about this? I thought you might.

This gives Amazon some pretty powerful options: it can switch out physical implementations with relative ease, as it desires or needs, with virtually (sorry) no interruption to consumer services. Coupling this nearly perfect application of SOA with Infrastructure 2.0 results in an agility that is often cited as a benefit but rarely actually seen in the wild.

 


THIS IS INFRASTRUCTURE 2.0 IN ACTION


This is a great example of the power of Infrastructure 2.0. Not only is the infrastructure automated and remotely configured by the consumer, but it is integrated with other Amazon services such as CloudWatch (monitoring/management) and Auto Scaling. The level of sophistication under the hood of this architecture is cleverly hidden by the simplicity and elegance of the overlying SOA-based control plane which encompasses all aspects of the infrastructure necessary to deliver the application and ensure availability.
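
Here’s what that integration looks like from the consumer’s side, again sketched with the modern boto3 SDK rather than the original APIs; the group and load balancer names are hypothetical.

```python
# Sketch of the ELB / Auto Scaling / CloudWatch integration via boto3
# (modern SDK, used here for illustration); resource names are hypothetical.
import boto3
from datetime import datetime, timedelta

# Attach the ELB to an Auto Scaling group so newly launched instances
# register themselves with the load balancer automatically.
autoscaling = boto3.client("autoscaling")
autoscaling.attach_load_balancers(
    AutoScalingGroupName="my-asg",
    LoadBalancerNames=["my-elb"],
)

# CloudWatch exposes per-ELB metrics such as request count and latency.
cloudwatch = boto3.client("cloudwatch")
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/ELB",
    MetricName="Latency",
    Dimensions=[{"Name": "LoadBalancerName", "Value": "my-elb"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)
```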

 

Several people have been trying to figure out what, exactly, is providing the load balancing under the covers for Amazon. Is it a virtual appliance version of an existing application delivery controller? Is it a hardware implementation? Is it a proprietary, custom-built solution from Amazon’s own developers? The reality is that you could insert just about any Infrastructure 2.0 capable application delivery controller or load balancer into the “?” spot on the diagram above and achieve the same results as Amazon. Provided, of course, you were willing to put the same amount of effort into the design and integration as has obviously been put into ELB.

While it would certainly be interesting to know for sure, that question is overshadowed in my mind by a bigger one: what other capabilities does the physical implementation have, and will they, too, surface in yet another service offering from Amazon? If the solution has other features and functionality, might they be exposed over time in what slowly becomes the Cloud Menu from which customers can build a robust infrastructure comprising more than just simple application delivery? Might it grow to provide security, acceleration, and other application delivery-related services, too?

If the underlying solution is Infrastructure 2.0 capable (and it certainly appears to be), then such service offerings are more likely than not.


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
