
Load Balancing in a Cloud

How should auto-scaling work, and why doesn’t it?

Although “rapid elasticity” is part of NIST’s definition of cloud computing, it is interesting to note that many cloud computing environments don’t include this capability at all – or charge you extra for it. Many providers offer the means by which you can configure a load balancing service and manually add or remove instances, but there may not be a way to automate that process. If it’s manual, it’s certainly “rapid” in the sense that it’s probably faster than doing it yourself (because you’d have to acquire hardware and deploy the application, as opposed to simply hitting a button and “cloning” the environment ‘out there’), but it’s not necessarily as fast as it could be (because it’s manual), nor is it automated. There are a number of reasons for that – but we’ll get to those later. First, let’s look at how auto-scaling is supposed to work, at least theoretically.


1. Some external entity that is monitoring capacity for “Application A” triggers an event that indicates a new instance is required to maintain availability. This external entity could be an APM solution, the cloud management console, or a custom developed application. How it determines capacity limitations is variable and might be based solely on VM status (via VMware APIs), data received from the load balancer, or a combination thereof.

2. A new instance is launched. This is accomplished via the cloud management console or an external API as part of a larger workflow/orchestration.

3. The external entity grabs the IP address of the newly launched instance and instructs the load balancer to add it to the pool of resources for “Application A.” This is accomplished via the standards-based API which presents the configuration and management control plane of the load balancer to external consumers as services.

4. The load balancer adds the new application instance to the appropriate pool and as soon as it has confirmation that the instance is available and responding to requests, begins to direct traffic to the instance.

This process is easily reversed upon termination of an instance. Note: there are other infrastructure components that are involved in this process that must also be notified on launch and decommission, but for this discussion we’re just looking at the load balancing piece as it’s critical to the concept of auto-scaling.
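To make the workflow concrete, here is a minimal sketch of steps 1 through 4 in Python. The load balancer and cloud management endpoints (lb.example.com, cloud.example.com), the pool name, and the JSON payloads are all hypothetical placeholders invented for illustration, not the API of any particular product:

```python
# Minimal sketch of the auto-scaling workflow described above.
# The load balancer API and the cloud "launch instance" call are hypothetical
# placeholders -- real products and clouds expose their own, different APIs.
import requests

LB_API = "https://lb.example.com/api"        # hypothetical control plane
CLOUD_API = "https://cloud.example.com/api"  # hypothetical cloud management API
POOL = "application-a-pool"

def launch_instance() -> str:
    """Step 2: ask the cloud management API for a new instance (placeholder)."""
    resp = requests.post(f"{CLOUD_API}/instances", json={"image": "application-a"})
    resp.raise_for_status()
    return resp.json()["ip_address"]

def add_to_pool(ip: str) -> None:
    """Step 3: tell the load balancer about the new pool member via its API."""
    resp = requests.post(f"{LB_API}/pools/{POOL}/members",
                         json={"address": ip, "port": 80})
    resp.raise_for_status()

def scale_out_if_needed(current_capacity: float, threshold: float = 0.8) -> None:
    """Step 1: the 'external entity' decides a new instance is required."""
    if current_capacity >= threshold:
        ip = launch_instance()  # step 2
        add_to_pool(ip)         # step 3; step 4 (health check, then traffic)
                                # is handled by the load balancer itself
```

The reverse path on termination is the mirror image: a DELETE against the same member resource before the instance is decommissioned.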

The important thing to note here is that the process for adding – and subsequently using – the new application instance to the load balancer should be automatic. Once the instance is launched, all that remains is to inform the load balancer that a new instance is available. The load balancer should take care of the rest, and it should do so transparently. That means no interruption of service, no reboot, no reloading of the configuration, nothing. Similarly, the termination and removal of the application instance from the pool of resources should be just as seamless and non-disruptive. It does not matter whether the load balancing solution is hardware or a VNA (Virtual Network Appliance), as long as it has an API through which available resources can be added and removed on demand.

This process is relatively straightforward and can even be accomplished without the “external entity” by directly integrating the application with the load balancing service. By hooking the appropriate setup and tear-down routines in the application, the load balancer can be automatically instructed to perform the appropriate actions whenever an instance of that application is launched or terminated.

1. Application is initialized and instructs the load balancer to add it to the appropriate resource pool.

2. Load balancer adds the application instance to the right pool of resources and begins directing traffic to it as configured.

3. Application is terminating and instructs the load balancer to remove it from the available resource pool.

4. Load balancer removes the application instance from the pool of resources.

This requires tighter coupling of the load balancer with the application, however, which may be less than ideal unless you are very tied to your load balancing solution. The advantage of this solution is that regardless of whether the application is “physical” or “virtual”, the same process occurs, which lets you mix and match and be a lot more flexible in your architecture.
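As a rough illustration of that tighter coupling, here is a minimal Python sketch in which the application registers itself with the load balancer on startup and deregisters on shutdown. The control-plane URL, pool name, and member endpoints are assumptions made up for this example, not any specific load balancer’s API:

```python
# Sketch of an application registering and deregistering itself with the
# load balancer's API on startup and shutdown. Endpoints and pool name are
# hypothetical; error handling is kept minimal for clarity.
import atexit
import socket
import requests

LB_API = "https://lb.example.com/api"   # hypothetical control-plane endpoint
POOL = "application-a-pool"
MY_IP = socket.gethostbyname(socket.gethostname())

def register() -> None:
    # Step 1: on initialization, ask to be added to the resource pool.
    requests.post(f"{LB_API}/pools/{POOL}/members",
                  json={"address": MY_IP, "port": 80}).raise_for_status()

def deregister() -> None:
    # Step 3: on termination, ask to be removed from the pool.
    requests.delete(f"{LB_API}/pools/{POOL}/members/{MY_IP}").raise_for_status()

if __name__ == "__main__":
    register()
    atexit.register(deregister)  # runs on clean shutdown
    # ... start serving application traffic here ...
```

Because the registration logic lives in the application itself, the same code path runs whether the instance is “physical” or “virtual” – which is exactly the mix-and-match flexibility described above.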


WHY IS THIS SO HARD, THEN?

It’s not, if the load balancer has an API. Herein lies the problem: building out an automated, dynamic infrastructure requires the ability to communicate, to collaborate, to integrate network and application network infrastructure with the data center management system. That means some sort of API is required. Oh, you could do it with a little SSH and a local script, but that’s not nearly as efficient or flexible as using an API that’s designed to be leveraged for integration and automation in the first place.

While just about every hardware load balancer (and its virtualized equivalents) is enabled with a standards-based control plane, most web-servers-turned-rudimentary-load-balancers are not enabled with an API that allows this level of integration. That means hacking the system with scripts that inject or remove configurations, forcing a reload of the configuration (or worse, stopping and starting the system), and most likely interrupting service. Not only does it mean hacking the system, but the scripts must somehow pull customer-specific metadata in order to properly configure the load balancing solution.
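For contrast, here is roughly what that script-based workaround looks like when the “load balancer” is a web server. This sketch assumes an nginx-style upstream block, and the config path and reload command are illustrative assumptions about one particular deployment; the point is that adding a single instance means regenerating configuration and forcing a reload rather than calling an API:

```python
# Illustration only: the script-hacking approach described above, using an
# nginx-style upstream block as the example. Paths and commands are assumptions
# about a specific deployment, not a recommended or universal method.
import subprocess

UPSTREAM_CONF = "/etc/nginx/conf.d/application-a-upstream.conf"  # assumed path

def rewrite_upstream(servers: list[str]) -> None:
    # There is no API to add one member, so the whole block is regenerated.
    lines = ["upstream application_a {"]
    lines += [f"    server {ip}:80;" for ip in servers]
    lines.append("}")
    with open(UPSTREAM_CONF, "w") as f:
        f.write("\n".join(lines) + "\n")

def reload_config() -> None:
    # Forces a configuration reload; a mistake here can interrupt service.
    subprocess.run(["nginx", "-s", "reload"], check=True)

# Adding one instance means rewriting the file and reloading everything:
# rewrite_upstream(["10.0.0.11", "10.0.0.12", "10.0.0.21"]); reload_config()
```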

So achieving this in a way that maintains availability, is easy to use, and is essentially multi-tenant requires the services of an API. When it comes to integration there are few options, and the most flexible method is to use a standards-based API to manage a solution that supports the notion of multi-tenant configuration natively. The problem is, of course, that most of the API-enabled (and therefore easy to integrate and automate) load balancers cost money. They require an investment up front, and many folks just aren’t willing to make that investment. Except they are making it; they’re just trading capital expenses for the longer-term development of scripts and research that approximate (but never quite reach) the reliability of an API that’s been specifically developed, tested, and deployed for the purpose of integrating with external management systems and scripts. What’s worse is that eventually most of these solutions built on sweat and scripts will end up turning to a more sophisticated, proven API-based solution anyway because of the inherent challenges associated with scaling the solution itself. That is a financially inefficient way to go about building a dynamic infrastructure that supports rapid, elastic scalability.

So many organizations, faced with the choice of investing in a more robust, flexible solution when they’ve already sunk a lot of time, effort, and money into their existing “free” solution, simply continue to hack and paste and shove a square peg into a round hole, which means auto-scaling never works quite as expected and advanced features come at a snail’s pace because of the difficulty of scripting and integration.

It doesn’t have to be so hard, but you do have to view load balancing for what it is – a key component of the cloud computing strategy – and give it the focus it deserves, even if that means it’ll cost you more up front, because in the end it’ll cost you far less than the alternative.


