Private Cloud: Elastic or a Hair Ball of Rubber?

Private cloud elasticity

Is your private cloud elastic, or just a tightly wrapped ball of rubber bands? One of the essential characteristics NIST defines for cloud computing is “Rapid Elasticity.” I blogged about Workload Elasticity Profiles in a previous post, but in the war between private and public cloud, people question whether you can actually unleash the elasticity from that ball of rubber bands.

“Can a private cloud be elastic?”

To answer that question, we first need to understand elasticity in the cloud. A simple definition is the ability to “scale up” and “scale down” capacity automatically based on demand, paying only for what is provisioned.
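
As a rough illustration, a minimal scaling rule makes that definition concrete. Nothing below comes from the post: the numbers, the 100-unit capacity block, and the 20% headroom are all assumptions, just a sketch of provisioned capacity following demand so that cost tracks what is provisioned rather than the peak.

```python
import math

# A minimal sketch of "rapid elasticity" (hypothetical policy and numbers):
# provision only what the current demand needs, plus a little headroom.
def desired_units(demand, unit_capacity=100, headroom=0.2, min_units=1):
    """How many capacity units to provision for the current demand level."""
    return max(min_units, math.ceil(demand * (1 + headroom) / unit_capacity))

demand_per_hour = [120, 450, 900, 300, 80]            # arbitrary example demand
elastic = [desired_units(d) for d in demand_per_hour]
static = [max(elastic)] * len(demand_per_hour)        # traditional sizing: pay for peak all day

print("units per hour (elastic): ", elastic)          # [2, 6, 11, 4, 1]
print("unit-hours paid (elastic):", sum(elastic))     # 24
print("unit-hours paid (static): ", sum(static))      # 55
```

The elastic policy pays for roughly half the capacity-hours of peak sizing in this toy example; the rest of the post is about whether a private cloud can actually realize that kind of saving.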

The issue is not the technology you implement. The technology that powers the public cloud is the same technology available to power the private cloud, at every level of the stack. You can build a private cloud from all the same building blocks. The biggest issue is: can you actually take advantage of them?

Let’s think about the problem from two perspectives:

  • Tactical - where time is measured in minutes and hours
  • Strategic - where time is measured in years and multiples of years

Tactical Elasticity
This is the most common problem space when people design private clouds. Creating a platform that auto-scales can be a technical challenge, but if that is where you start, you're going about it the wrong way. The first problem you should be solving is:

“Can your application scale?”

If you throw more infrastructure underneath it, will it support more workload at the same level of performance? Will it do this automagically, or will it require modification (reconfiguration, recoding) at the application level to allow this to happen? At the utopian end of the scale you have a nice, horizontally scalable application designed around infrastructure building blocks, providing “infinite scalability” (cough). It doesn't have to be utopian, though.
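
A tiny sketch of what that application-level modification usually means in practice. The functions and the SHARED_STORE stand-in are hypothetical, not from the post: state held inside one process is the classic reason that adding instances doesn't automatically add capacity.

```python
# Hypothetical illustration: why an app may need recoding before more
# infrastructure helps. State kept in one process is invisible to a second
# instance behind a load balancer; moving it to a shared store (a cache or
# database in real life, a plain dict standing in for one here) is the kind
# of change the question above is really asking about.

LOCAL_SESSIONS = {}   # lives and dies with this one instance
SHARED_STORE = {}     # stand-in for a cache/DB every instance can reach

def handle_request_local(user, data):
    LOCAL_SESSIONS[user] = data   # correct only while exactly one instance runs
    return f"{user}: stored locally"

def handle_request_shared(user, data):
    SHARED_STORE[user] = data     # any instance can serve this user's next request
    return f"{user}: stored in shared state"
```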

Mainframes & Grids

Older-school technologies from the mainframe (PR/SM, LPARs) and mid-range vendors (Domains, vPars, nPars, Containers) have done ostensibly the same thing, with little modification to the application, by dynamically reallocating resources from one domain to another. The limitation of this method is that it usually only works within a smaller pool of resources contained in a single frame or a small number of racks/frames.

Grids are another architectural solution to this problem. They are essentially designed to provide horizontal scalability at the operating system level, whereas a classic cloud requires that capability at the application layer.

Horizontal, in-box vertical, and grid scaling all have limitations. How sensitive you are to them depends on the type of processing or I/O requirements, but they will always be evident at massive scale.
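
To make “evident at massive scale” concrete, here is an illustrative Amdahl's-law-style model. The 5% serialized fraction is an assumption, not a figure from the post: once any share of the work cannot be spread out, adding nodes yields sharply diminishing returns.

```python
# Illustrative model only: if a fraction of the work is serialized
# (coordination, shared I/O, a single writer), horizontal scaling flattens out.
def effective_speedup(nodes, serial_fraction=0.05):
    """Amdahl-style speedup over a single node."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / nodes)

for n in (1, 2, 8, 32, 128, 1024):
    print(f"{n:>5} nodes -> {effective_speedup(n):5.1f}x capacity")
# Even with only 5% non-scalable work, 1024 nodes deliver under 20x.
```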

The next challenge is: “Do you have multiple complementary workloads?”

Many people have it wrong when they talk about the cloud's dependency on multi-tenancy. Multi-tenancy (MT) is a proxy concept for complementary workloads. If you have multiple tenants with similar workloads, then you have correlated peaks, which result in low capacity efficiency. Whether the cloud is public or private, you need complementary workloads for tactical elasticity to work.

A complementary workload is one where peaks in one workload correspond to troughs in others. At a minimum, alternating peaks are critical to resource efficiency. The less correlated the workloads, the more benefit you get from elasticity.
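
A small worked example (synthetic numbers, not from the post) shows why this matters: two complementary workloads sharing an elastic pool need far less capacity than the sum of their peaks, while two tenants with the same profile gain nothing.

```python
# Synthetic workloads over six periods of the day (arbitrary units).
web   = [90, 80, 40, 20, 30, 85]   # daytime-heavy
batch = [10, 20, 60, 85, 80, 15]   # overnight-heavy (complementary)
clone = list(web)                  # a second tenant with the same profile

def pool_peak(*workloads):
    """Capacity a shared pool must hold when these workloads ride together."""
    return max(sum(point) for point in zip(*workloads))

print("separate silos, web + batch:", max(web) + max(batch))  # 175
print("shared pool,    web + batch:", pool_peak(web, batch))  # 110
print("shared pool,    web + clone:", pool_peak(web, clone))  # 180 - correlated peaks, no saving
```

The further the peaks alternate, the further the shared peak falls below the sum of the individual peaks, and the more the elasticity is worth.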

Strategic Elasticity

When the time frame expands to between one and five years, matching supply and demand depends on several different factors. The first is:

“What’s your gross average workload profile?”

The profile of your business volume is a key indicator of how much benefit you will get from elasticity. Obviously, aggressively growing companies get the most benefit from elasticity, tactically but also strategically. Growing workloads allow for the constant consumption of capacity, giving the greatest flexibility. As the growth curve flattens toward a straight line, the ability to provide elasticity shrinks and the efficiency of the infrastructure decreases. Gross workload therefore explains the demand side of capacity and its benefit from elasticity.
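
A hypothetical sketch (invented figures, with capacity bought only in fixed increments) makes the point: a growing workload keeps filling each new increment of capacity, while a flat workload leaves whatever buffer was bought sitting idle indefinitely.

```python
import math

# Hypothetical: capacity can only be added in whole 100-unit increments.
def avg_utilization(demand, unit=100):
    """Average utilization when capacity is always the next whole increment above demand."""
    capacity = [max(1, math.ceil(d / unit)) * unit for d in demand]
    return sum(d / c for d, c in zip(demand, capacity)) / len(demand)

growing = [150 + 60 * month for month in range(24)]   # demand climbing every month
flat    = [150] * 24                                  # demand that has plateaued

print(f"growing workload: {avg_utilization(growing):.0%} average utilization")  # ~92%
print(f"flat workload:    {avg_utilization(flat):.0%} average utilization")     # 75%
```

For strategic elasticity, the supply side depends on the question: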

“How dynamic is your asset lifecycle?”

As the normal volume of business ebbs and flows, the amount of physical infrastructure required to process the workload changes. This creates a complex supply/demand curve that depends heavily on the way assets are acquired and, more importantly, disposed of. If your business runs on a 1-2 year cycle, then your asset lifecycle needs to approximate the same timeframe. The financials of the asset lifecycle should no longer be based solely on a depreciation schedule. Years of TCO calculations have already told us that the sweet spot for asset disposal lies at the convergence of residual value, efficiency (power, CPU, space), vendor maintenance, support/management, and licensing.
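
Here is a deliberately simplified TCO sketch in which every figure is invented for illustration: average annual cost falls and then rises again as residual value decays while maintenance and power costs climb, and the minimum of that curve, not the depreciation schedule, marks the refresh point.

```python
# Invented figures: a $10,000 asset, resale value decaying 40% a year,
# maintenance climbing 60% a year, power/efficiency penalty climbing 10% a year.
PURCHASE = 10_000

def running_cost(year):
    maintenance = 500 * (1.6 ** (year - 1))
    power = 1_200 * (1.1 ** (year - 1))
    return maintenance + power

def avg_annual_tco(keep_years):
    residual = PURCHASE * (0.6 ** keep_years)                        # resale value at disposal
    running = sum(running_cost(y) for y in range(1, keep_years + 1))
    return (PURCHASE - residual + running) / keep_years

for years in range(1, 7):
    print(f"keep {years} year(s): ~${avg_annual_tco(years):,.0f} per year")
# With these numbers the sweet spot lands around year 3-4, well before the
# asset is fully depreciated.
```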

Elasticity Quotient
How much tactical or strategic elasticity is enough for a private cloud to be effective? The Armada Group uses an ‘elasticity quotient’ as part of the Workload Analysis in its Cloud Evaluation Framework to determine the ROI of a cloud deployment. It calculates the efficiency of matching supply and demand against a baseline of traditional capacity management. A positive number represents a benefit over traditional methods; a negative number shows that the elasticity profile does not work.
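
The post doesn't publish the Armada Group's formula, so the quotient below is purely hypothetical: it simply scores how much less capacity-time an elastic supply wastes than traditional sized-for-peak capacity planning, so that a positive value means elasticity pays off and a negative one means it doesn't.

```python
# Hypothetical stand-in for an elasticity quotient (not the Armada Group's
# actual calculation): compare wasted capacity-time under an elastic supply
# curve with a traditional sized-for-peak baseline.
def elasticity_quotient(demand, elastic_capacity):
    baseline = [max(demand)] * len(demand)                 # traditional: size for the peak
    waste_static  = sum(c - d for c, d in zip(baseline, demand))
    waste_elastic = sum(c - d for c, d in zip(elastic_capacity, demand))
    return (waste_static - waste_elastic) / waste_static

demand  = [30, 60, 90, 50, 20, 70]
elastic = [40, 70, 100, 60, 30, 80]                        # tracks demand with headroom
print(f"elasticity quotient: {elasticity_quotient(demand, elastic):+.2f}")   # +0.73
```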

Summary
To have rapid elasticity (a large positive elasticity quotient) in a private cloud you need:

  • The ability to scale in the application
  • Complementary workloads
  • A faster asset lifecycle
  • Positive gross average workload growth

(For the private cloud haters: public clouds have the same problems, but at larger scale they can be less sensitive to some of the variables. Should another bubble burst, supply and demand could very well be out of sync for them as well, with even more serious consequences.)

More Stories By Brad Vaughan

Brad Vaughan is a twenty year veteran consultant working with companies around the globe to transform technology infrastructure to deliver enhanced business services.
