
What Does Elastic Really Mean?

Defining elastic application environments

In cloud computing, elasticity is perhaps one of the more alluring and potentially beneficial aspects of this new delivery model for application environments. I'm sure that to many of those responsible for the operational and administrative aspects of these environments, the idea that applications and their associated infrastructure grow and shrink based purely on demand, without human intervention mind you, sounds close to utopia. While I would never dispute that such a capability can make life easier and your environments much more responsive and efficient, it's important to define what elasticity means for you before you embark down this path. That way, you can balance your expectations against any proposed solutions.

For me, the idea of elastic application environments starts with the implication that there is some sort of policy or service level agreement (SLA) that determines when to grow and when to shrink. However, just having the capability to govern your runtime with SLAs isn't enough. The SLAs should apply to performance metrics directly related to your applications. For example, it may be nice to be able to make operational decisions in your application environment based on the CPU usage of the physical machines supporting that environment; however, it is much nicer to make those same decisions based on the average response time for requests sent to your application instances, or perhaps the average time a particular message waits in your application's queue. When you can define SLAs based on these kinds of application performance metrics, you remove a lot of the ambiguity that otherwise creeps into expansion/contraction decisions.
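To make the idea concrete, here is a minimal sketch of that kind of policy: a grow/shrink decision driven by an application-level metric (average response time) rather than host CPU. All names and thresholds here are hypothetical illustrations, not any particular product's API.

```python
# Hypothetical sketch: an SLA evaluated against an application-level
# metric (average response time) to decide one scaling action per cycle.
from dataclasses import dataclass


@dataclass
class Sla:
    max_avg_response_ms: float  # above this, the SLA is violated -> grow
    min_avg_response_ms: float  # comfortably below this -> safe to shrink


def scaling_decision(sla: Sla, avg_response_ms: float) -> str:
    """Return 'grow', 'shrink', or 'hold' for one evaluation cycle."""
    if avg_response_ms > sla.max_avg_response_ms:
        return "grow"
    if avg_response_ms < sla.min_avg_response_ms:
        return "shrink"
    return "hold"
```

For example, with `Sla(max_avg_response_ms=500, min_avg_response_ms=100)`, an observed average of 750 ms yields `"grow"` while 50 ms yields `"shrink"`. The same shape works for a queue-wait-time metric; only the measured value changes.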

What's obvious is that there's no reason to have SLAs that cannot be enforced. When I think about SLA enforcement there are a couple of things that come to mind. The first is that the party responsible for enforcement should be configurable. In many cases you may want your application environment to grow and shrink based on the system's autonomic enforcement of SLAs, but I doubt this will always be the case. For example, if you are running in a pay-for-use public cloud environment, you may, in an attempt to keep costs under control, want to insert a manual approval process before the system grows. As another example, you may insert manual approval processes for contracting application environments in a production setting where demand fluctuates wildly. In any case, the ability to configure who is responsible for SLA enforcement is useful.
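The configurable-enforcer idea above can be sketched as a small gate between the decision and its execution. This is an illustrative assumption of how such a hook might look, with hypothetical names throughout; real platforms expose this differently.

```python
# Hypothetical sketch: the party responsible for SLA enforcement is
# configurable per action. Actions marked as requiring approval are
# routed through a human gate (e.g. "grow" in a pay-for-use cloud,
# or "shrink" in a production setting with wildly fluctuating demand).
from typing import Callable, Dict


def enforce(action: str,
            approval_required: Dict[str, bool],
            approve: Callable[[str], bool],
            execute: Callable[[str], None]) -> bool:
    """Carry out a scaling action; return False if it is deferred."""
    if approval_required.get(action, False) and not approve(action):
        return False  # deferred pending manual approval
    execute(action)   # autonomic path, or approval was granted
    return True
```

With `approval_required={"grow": True}`, a `"grow"` action waits on the `approve` callback while a `"shrink"` action executes autonomically.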

The second thing that comes to mind with respect to SLA enforcement is that you should be able to prioritize such enforcement. Prioritization means you can ensure that conditions in some applications warrant a faster response than conditions in others. This is just an acknowledgment that not all applications are created equal. Obviously, if a user-facing, revenue-generating application starts to violate its SLA, you want that condition addressed before you address any SLA slippage in an internal environment.
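Mechanically, prioritized enforcement can be as simple as ordering outstanding violations before acting on them. The app names and priority scheme below are hypothetical placeholders, assuming a lower number means more urgent.

```python
# Hypothetical sketch: outstanding SLA violations are handled in
# priority order, so a revenue-generating app is addressed before
# an internal one. Lower priority number = more urgent.
from typing import List, Tuple


def enforcement_order(violations: List[Tuple[int, str]]) -> List[str]:
    """Return app names in the order their violations should be handled."""
    return [app for _, app in sorted(violations)]
```

For instance, given violations in `checkout` (priority 1) and `internal-reporting` (priority 2), `checkout` is handled first regardless of which violation was detected first.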

Besides the ability to define and enforce SLAs, there are certainly other capabilities that contribute to the robustness of a truly elastic application environment. One area that warrants attention is the degree to which application health monitoring and maintenance can be automated. For instance, when an application begins to leak memory and response times slow to the point that SLAs are violated, it may be more efficient to address the leak automatically by, say, restarting the application rather than adding more application instances.
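That remediation choice can be sketched as a simple decision: when an SLA is violated, look at instance health first and only scale out if the instance appears healthy. The heap-usage signal and the 90% threshold below are illustrative assumptions, not a prescribed detection method.

```python
# Hypothetical sketch: pick a remediation for an SLA violation based
# on instance health. A near-full heap suggests a memory leak, where
# a restart is cheaper than adding capacity; otherwise treat the
# slowdown as genuine load and scale out.
def remediation(avg_response_ms: float,
                sla_ms: float,
                heap_used_pct: float,
                leak_threshold_pct: float = 90.0) -> str:
    """Return 'none', 'restart', or 'scale_out'."""
    if avg_response_ms <= sla_ms:
        return "none"       # SLA is being met; nothing to do
    if heap_used_pct >= leak_threshold_pct:
        return "restart"    # likely leak: recycle the instance
    return "scale_out"      # healthy but overloaded: add an instance
```

The point is not the specific heuristic but that the automated response is conditioned on health, so the system doesn't paper over a leak by endlessly adding instances.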

These are just a few of what I'm sure are many facets that contribute to elasticity in an application environment. They happen to be the ones that bubble to the top for me, but I have no doubt there are others that may be more important for you. If you have your own ideas for elastic application environments I'd like to hear them. Drop me a line on Twitter @WebSphereClouds.

More Stories By Dustin Amrhein

Dustin Amrhein joined IBM as a member of the development team for WebSphere Application Server. While in that position, he worked on the development of Web services infrastructure and Web services programming models. In his current role, Dustin is a technical specialist for cloud, mobile, and data grid technology in IBM's WebSphere portfolio. He blogs at http://dustinamrhein.ulitzer.com. You can follow him on Twitter at http://twitter.com/damrhein.

