Amazon Elastic Load Balancing Only Simple On the Outside

Amazon’s ELB is an exciting mix of well-executed Infrastructure 2.0 and the proper application of SOA, but there’s a lot more going on under the covers than the simple interface suggests

The notion of Elastic Load Balancing, recently brought to public attention by Amazon’s offering of the capability, is nothing new. The basic concept is pure Infrastructure 2.0, and the functionality offered via the API has long been available on several application delivery controllers. In fact, looking through the options for Amazon’s offering leaves me feeling a bit, oh, 1999, as if load balancing hadn’t evolved far beyond the very limited subset of capabilities exposed by Amazon’s API.

That said, that’s just the view from the outside.

Though Amazon’s ELB might be rudimentary in what it exposes to the public, it is certainly anything but primitive in its use of SOA, and it stands as a prime example of the power of Infrastructure 2.0. In fact, with the exception of GoGrid’s integrated load balancing capabilities, provisioned and managed via a web-based interface, there aren’t many good, public examples of Infrastructure 2.0 in action. Not only has Amazon leveraged Infrastructure 2.0 concepts with its implementation, it has further taken advantage of SOA in the way it was meant to be used.

NOTE: What follows is just my personal analysis; I don’t have any special knowledge about what really lies beneath Amazon’s external interfaces. The diagram is a visual interpretation of what seems likely in terms of the interactions with ELB, given my experience with application delivery and the information available from Amazon, and should be read with that in mind.

[Diagram: inferred interaction between the ELB API, a management/orchestration layer, and the underlying load balancing infrastructure]

WHAT DOES THAT MEAN?


When I say Amazon has utilized SOA in a way that it was meant to be used, I mean that their ELB “API” isn’t just a collection of plain old web services (POWS) wrapped around some other API. It’s actually a well-thought-out, well-designed set of interfaces that describe tasks associated with load balancing rather than individual product calls. For example, if you take a look at the ELB WSDL you can see a set of operations that describe tasks, not management or configuration options, such as:

 

  • CreateLoadBalancer
  • DeleteLoadBalancer
  • RegisterInstancesWithLoadBalancer
  • DeregisterInstancesFromLoadBalancer

To understand why these are so significant and most certainly represent tasks and not individual operations, you have to understand how a load balancer is typically configured, and how the individual configuration components fit together. Saying “DeleteLoadBalancer” is a lot easier than what really has to occur under the covers. Believe me, it’s not as easy as a single API call on any load balancing solution. There are a lot of relationships inherent in a load balancing configuration between the virtual server/IP address, the pools (a.k.a. farms or clusters), and the individual nodes, a.k.a. instances in Amazon-speak. Yet if you take a look at the parameters required to “register instances” with the load balancer, you’ll see only a list of instance ids and a load balancer name. All of those relationships must still be configured, but the API makes the process appear almost magical.
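As a sketch of what a task-style call like RegisterInstancesWithLoadBalancer might hide, consider the following. All of the class and method names here are hypothetical; Amazon has not published what actually sits behind the API. The point is only that one task-level call can fan out into several device-level configuration steps:

```python
# Hypothetical sketch: a task-oriented facade over a device-level
# load balancer API. None of these names are Amazon's; they illustrate
# how one "task" call fans out into several configuration operations.

class DeviceAPI:
    """Stand-in for a vendor's low-level configuration interface."""
    def __init__(self):
        self.pools = {}            # pool name -> list of member ids
        self.virtual_servers = {}  # virtual server name -> pool name

    def create_pool(self, name):
        self.pools.setdefault(name, [])

    def add_member(self, pool, member):
        self.pools[pool].append(member)

    def bind_virtual_server(self, vip, pool):
        self.virtual_servers[vip] = pool


class ElasticLBFacade:
    """Task-level interface: one call per task, not per config object."""
    def __init__(self, device):
        self.device = device

    def create_load_balancer(self, name):
        # One "task" is several device operations kept consistent together.
        self.device.create_pool(name + "-pool")
        self.device.bind_virtual_server(name, name + "-pool")

    def register_instances(self, name, instance_ids):
        # The caller supplies only instance ids and a name; the facade
        # resolves the pool relationship the caller never sees.
        for iid in instance_ids:
            self.device.add_member(name + "-pool", iid)


device = DeviceAPI()
elb = ElasticLBFacade(device)
elb.create_load_balancer("my-lb")
elb.register_instances("my-lb", ["i-123", "i-456"])
print(device.pools["my-lb-pool"])   # ['i-123', 'i-456']
```

The consumer never touches pools or virtual servers directly; that relationship management is exactly the complexity the task-level operations hide.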

The terminology used here indicates (to me at least) an abstraction which means these operations are not communicating directly with a physical (or even virtual) device but rather are being sent to a management or orchestration system that in turn relays the appropriate API calls to the underlying load balancing infrastructure.

The abstraction here appears to be pure SOA and it is, if you don’t mind my saying, a beautiful thing. Amazon has abstracted the actual physical implementation of not only the management or orchestration system, but also decoupled (as is proper) the physical infrastructure implementation from the services being provided. There is a clear separation of service from implementation, which allows for Amazon to be using product X or Y, hardware or software, virtual or concrete, and even one or more vendor solutions at the same time without the service consumer being aware of what that implementation may be.
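That separation of service from implementation is easy to sketch in code. The classes below are purely illustrative (“VendorX” and “VendorY” are invented stand-ins), but they show why swapping the underlying product out from under the service leaves the consumer untouched:

```python
# Illustrative sketch of decoupling the service from its implementation.
# The consumer codes against LoadBalancerService only; the provider
# behind it (hardware, software, vendor X or Y) can be swapped freely.

from abc import ABC, abstractmethod

class LBProvider(ABC):
    @abstractmethod
    def provision(self, name): ...

class VendorX(LBProvider):
    def provision(self, name):
        return f"vendor-x appliance serving {name}"

class VendorY(LBProvider):
    def provision(self, name):
        return f"vendor-y virtual LB serving {name}"

class LoadBalancerService:
    """What the consumer sees; it never learns which provider ran."""
    def __init__(self, provider):
        self._provider = provider

    def create(self, name):
        self._provider.provision(name)
        return name  # the consumer gets back only an opaque handle

svc_x = LoadBalancerService(VendorX())
svc_y = LoadBalancerService(VendorY())
print(svc_x.create("my-lb") == svc_y.create("my-lb"))  # True
```

From the consumer’s side the two services are indistinguishable, which is precisely the agility the decoupling buys.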

The current offering appears to be pure layer 4 load balancing, which is a good place to start, but it lacks the robustness of a fully layer 7 capable solution. Eventually Amazon will need to address some of the challenges associated with load balancing stateful applications for its customers; challenges that are typically addressed through persistence, cookies, and URI-rewriting functionality. Some of this functionality appears to be built in, but it is not well documented by Amazon.
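Cookie-based persistence, one of the layer 7 techniques just mentioned, is worth a quick sketch. The cookie name and routing logic below are invented for illustration and are not ELB’s documented behavior; the idea is simply that a layer 7 proxy records which backend it chose and honors that choice on later requests from the same client:

```python
# Sketch of cookie-based persistence (sticky sessions). A layer 7
# balancer inserts a cookie naming the backend it picked, so a
# stateful application keeps seeing the same instance. The cookie
# name and policy here are illustrative, not ELB's documented behavior.

import itertools

class StickyBalancer:
    COOKIE = "lb-backend"  # hypothetical cookie name

    def __init__(self, backends):
        self._rr = itertools.cycle(backends)  # round-robin fallback
        self._valid = set(backends)

    def route(self, cookies):
        """Return (backend, cookies-to-set) for one request."""
        pinned = cookies.get(self.COOKIE)
        if pinned in self._valid:
            return pinned, {}            # honor existing persistence
        chosen = next(self._rr)          # new client: pick and pin
        return chosen, {self.COOKIE: chosen}

lb = StickyBalancer(["i-aaa", "i-bbb"])
backend, set_cookies = lb.route({})      # first request: no cookie yet
again, _ = lb.route(set_cookies)         # later request: pinned
print(backend == again)  # True
```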

For example, the forwarding of client IP addresses is a common challenge with load-balanced applications, and is often solved by using the custom HTTP header X-Forwarded-For. Ken Weiner addresses this in a blog post, indicating Amazon is indeed using common conventions to retain the client IP address and forward it to the instances being load balanced. It may be that more layer 7 functionality is exposed than it appears, and is simply not as well documented. If the underlying implementation is capable – and it appears to be, given the way ELB handles client IP preservation – it is a pretty good bet that Amazon will be able to address other challenges with relative ease given the foundation they’ve already built.
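On the instance side, recovering the client address from X-Forwarded-For is a few lines of code. The header is a comma-separated list to which each proxy appends the peer address it saw, so the left-most entry is the original client (assuming you trust the proxy chain). This helper is a minimal sketch, not application-specific:

```python
# Minimal sketch: recover the client IP from X-Forwarded-For on an
# instance sitting behind a load balancer. Each proxy appends the
# address it saw, so the left-most entry is the original client
# (assuming the proxy chain is trusted).

def client_ip(headers, fallback_peer):
    xff = headers.get("X-Forwarded-For", "")
    first = xff.split(",")[0].strip()
    return first or fallback_peer  # no header means a direct connection

print(client_ip({"X-Forwarded-For": "203.0.113.7, 10.0.0.2"}, "10.0.0.2"))
# -> 203.0.113.7
```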

That’s agility; that’s Infrastructure 2.0 and SOA. Can you tell I’m excited about this? I thought you might.

This gives Amazon some pretty powerful options as it could switch out physical implementations with relative ease, as it so desires/needs, with virtually (sorry) no interruption to consumer services. Coupling this nearly perfect application of SOA with Infrastructure 2.0 results in an agility that is often mentioned as a benefit but rarely actually seen in the wild.

 


THIS IS INFRASTRUCTURE 2.0 IN ACTION


This is a great example of the power of Infrastructure 2.0. Not only is the infrastructure automated and remotely configured by the consumer, but it is integrated with other Amazon services such as CloudWatch (monitoring/management) and Auto Scaling. The level of sophistication under the hood of this architecture is cleverly hidden by the simplicity and elegance of the overlying SOA-based control plane which encompasses all aspects of the infrastructure necessary to deliver the application and ensure availability.
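The integration with monitoring and auto scaling amounts to a control loop: a metric drives a scaling decision, and the load balancer configuration follows automatically. The thresholds and helper names below are invented for the sketch; the real CloudWatch/Auto Scaling wiring is not public in this detail:

```python
# Illustrative control loop tying the three services together:
# a monitoring metric (a la CloudWatch) drives a scaling decision
# (a la Auto Scaling), and the new instance is registered with the
# load balancer with no human in the loop. Thresholds and names
# are invented for this sketch.

def autoscale_step(avg_cpu, instances, register, launch):
    """One reconciliation pass: scale out when load is high."""
    if avg_cpu > 80 and len(instances) < 10:   # hypothetical policy
        new_id = launch()
        instances.append(new_id)
        register(new_id)   # Infrastructure 2.0: LB config follows the fleet
    return instances

registered = []
instances = autoscale_step(
    avg_cpu=92,
    instances=["i-0"],
    register=registered.append,
    launch=lambda: "i-1",
)
print(instances, registered)  # ['i-0', 'i-1'] ['i-1']
```

That the consumer can wire all of this up through remote APIs, without ever touching a device, is the “in action” part.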

 

Several people have been trying to figure out what, exactly, is providing the load balancing under the covers for Amazon. Is it a virtual appliance version of an existing application delivery controller? Is it a hardware implementation? Is it a proprietary, custom-built solution from Amazon’s own developers? The reality is that you could insert just about any Infrastructure 2.0 capable application delivery controller or load balancer into the “?” spot on the diagram above and achieve the same results as Amazon. Provided, of course, you were willing to put the same amount of effort into the design and integration as has obviously been put into ELB.

While it would certainly be interesting to know for sure, the answer to that question is overridden in my mind by a bigger one: what other capabilities does the physical implementation have and will they, too, surface in yet another service offering from Amazon? If the solution has other features and functionality, might they, too, be exposed over time in what will slowly become the Cloud Menu from which customers can build a robust infrastructure comprising more than just simple application delivery? Might it grow to provide security, acceleration, and other application delivery-related services, too?

If the underlying solution is Infrastructure 2.0 capable – and it certainly appears to be - then the feasibility of such service offerings is more likely than not.


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
