Resource Pools versus Virtual Machine Pools

On the importance of recognizing that what we have today is not a pool of resources, but a pool of virtual machines

This post follows up on last week's post, "The Inevitability of Idle Resources," in which I mentioned the importance of ensuring not only that resources are available when you need to provision a new service to scale, but also that the available resources match the needs of the service.

So let's continue that by assuming you can find some idle (available) resources to use when you need them. Two questions need to be asked before you hit the easy button:

  1. Are there enough resources to support this service? After all, different services require different resource profiles. Some need more storage, others more RAM, others many CPUs.
  2. What's the possible impact of provisioning this service on this resource? In other words, what else is currently using these resources? How many other virtual instances are on the same hardware and what are their resource consumption profiles? What will happen if I provision two network-hungry services on the same hardware (especially in cloud environments that share a physical network interface)?
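Those two questions can be sketched as code. Below is a minimal, hypothetical illustration (the `ResourceProfile` type, field names, and thresholds are my own assumptions, not any real scheduler's API): question one is a simple fit check, question two checks whether co-locating a network-hungry service would oversubscribe a shared physical NIC.

```python
from dataclasses import dataclass

@dataclass
class ResourceProfile:
    """A hypothetical resource profile for a service or a host's free capacity."""
    cpu_cores: float
    ram_gb: float
    network_mbps: float

def fits(host_free: ResourceProfile, service: ResourceProfile) -> bool:
    """Question 1: are there enough resources on this host for this service?"""
    return (host_free.cpu_cores >= service.cpu_cores
            and host_free.ram_gb >= service.ram_gb
            and host_free.network_mbps >= service.network_mbps)

def network_contention(tenants: list[ResourceProfile],
                       service: ResourceProfile,
                       nic_capacity_mbps: float) -> bool:
    """Question 2: would adding this service oversubscribe the shared NIC
    already carrying the other tenants' traffic?"""
    demand = sum(t.network_mbps for t in tenants) + service.network_mbps
    return demand > nic_capacity_mbps
```

Note that `fits` can return True while `network_contention` also returns True: a host can have "enough" resources on paper while the aggregate demand of its tenants still saturates a shared interface.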

Marketing makes it sound so simple: just grab some resources and provision away. But the reality IT professionals know is that different services require different resources, and contention for those resources is a primary cause of poor application performance and network congestion. That's one of the arguments for specialized resources, but let's not muddy the waters with that discussion today.

What happens if you don't have the appropriate resources? Well, it's going to throw your math off and that will throw your capacity planning off.

Let's say you know you need X RAM, Y CPU, and Z network capacity to support 1000 CPS (connections per second) for your load balancing service. Your capacity planning exercises, then, are based on this assumption. If you set up your systems to auto-scale based on that assumption and then, for some reason, scale your load balancing service by provisioning it with a resource profile capable of supporting only 600 or 700 CPS without significant degradation of performance, well... you can imagine what happens. Users become frustrated, your phone starts ringing, and there goes your quarterly bonus along with big chunks of your hair, because the system won't kick off another instance of the service until you near that 1000 CPS threshold. (This is a good time to point out that a more adaptive load balancing algorithm might help, though it is not a panacea.)
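The failure mode above is just arithmetic, so here's a small sketch of it (the function name, the 80% headroom factor, and the specific numbers are illustrative assumptions, not from the original post): the auto-scaler triggers off the *assumed* per-instance capacity, so an undersized instance saturates long before the trigger fires.

```python
def scale_out_needed(current_cps: float, assumed_capacity_cps: float,
                     headroom: float = 0.8) -> bool:
    """Trigger a new instance when load nears the *assumed* capacity."""
    return current_cps >= headroom * assumed_capacity_cps

# The planner assumes each instance handles 1000 CPS, but the resource
# profile it actually landed on only supports 600 CPS.
assumed_cps = 1000.0
actual_cps = 600.0
load = 700.0  # current connections per second on the undersized instance

scale_out_needed(load, assumed_cps)  # False: 700 < 0.8 * 1000, no new instance
load > actual_cps                    # True: the instance is already saturated
```

The gap between those two checks is exactly the window in which users get degraded performance and the phone starts ringing.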

This is true for just about everything you want to run: applications, network services, application services. It doesn't matter. Each one is going to have its own resource requirement profile, and if you start ignoring that, your systems are going to start acting wonky.

Until we reach real data center nirvana, where hardware resources truly are a single pool from which the appropriate combination of memory, CPU, and network capacity can be provisioned for a specific application or service, we have virtualization and pools of pre-determined resource sets. We don't have a pool of resources; we have a pool of virtual machines. While advances have been made in growing and shrinking virtual machine resource allocations, it's far from nirvana and far from perfect (and rarely on-demand).
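One way to see the difference between a pool of resources and a pool of virtual machines is the waste that pre-sized VM bundles force on you. This is a hypothetical sketch (the flavor names and sizes are invented for illustration, loosely modeled on typical cloud instance types): a RAM-hungry service must take the smallest fixed bundle that covers its largest requirement, stranding everything else in the bundle.

```python
# Hypothetical VM "flavors": fixed (cpu_cores, ram_gb) bundles, as in most clouds.
FLAVORS = {"small": (2, 4), "medium": (4, 8), "large": (8, 16)}

def smallest_fitting_flavor(cpu_needed: int, ram_needed: int):
    """Pick the smallest pre-sized VM that covers both requirements,
    or None if no flavor fits."""
    for name, (cpu, ram) in sorted(FLAVORS.items(), key=lambda kv: kv[1]):
        if cpu >= cpu_needed and ram >= ram_needed:
            return name
    return None

# A RAM-hungry service (2 cores, 16 GB) is forced onto "large" (8 cores, 16 GB),
# stranding 6 cores: a pool of virtual machines, not a pool of resources.
smallest_fitting_flavor(2, 16)  # returns "large"
```

With a true pool of resources, that service could take exactly 2 cores and 16 GB, and the remaining 6 cores would stay available for something else.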

That means when you're chopping up resources into virtual machines you're still going to have to indulge in sizing exercises and capacity planning. At least until we have true pools of resources and not pools of virtual machines.


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
