
The Myth of 100% IT Efficiency

Idle resources will always need to exist, especially in a cloud architecture

With IT focused on efficiency – for reduction in operating expenses and in the interest of creating a greener computing center – there’s a devilish danger that we’ll attempt to achieve 100% efficiency. You know, the data center in which no compute resources are wasted; all are applied toward performing some task – whether administrative, revenue generating, development cycles, or business-related – and no machine is allowed to sit idle.

Because, after all, idleness is the devil’s playground, isn’t it?

But before we decide to technically exorcise our data centers of any idleness that might be lying around, consider the ramifications: if there are no idle resources, how are you going to scale out?

Yeah, exactly. Without idle resources you can’t automatically scale out. In fact, a cloud computing model doesn’t work unless there are resources available (idle) that can be applied at any moment to an application in need of scaling out.


What’s necessary in a cloud computing model is to allocate idle resources in the most efficient way possible. That sounds paradoxical, but it actually isn’t. Let’s say that you’re running near 100% efficiency now but recognize that, just in case, you’d better have some idle resources available.

You can:

  1. Take advantage of cloud bursting or cloud balancing, using external cloud providers as “overdraft protection.” 

    Cloud bursting enables an organization to scale out on-demand into the cloud, only when necessary. This keeps the organization’s own efficiency as high as possible but provides for availability just in case. Cloud balancing assumes the resources are always available and uses advanced global application delivery (load balancing) techniques to balance the distribution of requests between all available data centers, whether local or in the cloud.
  2. Invest in additional servers to ensure you have extra resources available in case you need them.

    There’s something to be said for “being prepared,” and the easiest way to do that is to slap a few more servers (or blades, whatever) into the data center just in case. This may decrease your “efficiency rating,” if you have one, but it’s easy enough to counter by asking whether that service level agreement should actually be considered important or whether it can be used to wrap up your leftover lunch tomorrow. Mentioning SLAs generally gets attention, especially if you imply you might not meet one unless something is done.
  3. Invest in an application delivery solution – or take advantage of one you already have – to improve the efficiency of the servers and applications such that you free up idle resources on existing servers.

    This is the one everyone overlooks, but it’s probably the best solution in terms of long-term ability to balance efficiency with scalability. It allows you to improve the efficiency of servers and applications, which means you can achieve higher VM densities or simply have “idle” resources available for use in the event you need to scale out. It’s the best long-term solution because, if you’re scaling applications, you likely already have a load balancing solution (and need it), and if you’re lucky it’ll be one that can be extended through modules without disruption to existing infrastructure and applications.
  4. Move everything to a cloud provider

    I know, it’s probably not feasible, but for those organizations for which it is, this actually makes a lot of sense. Moving all applications and IT operations to a cloud provider would certainly ensure scalability and idle resources because, well, that’s what they promise. The thing to consider here is cost, and where the line between your budget and scale falls. Infinite scalability is a fallacy – you can only scale as far as your budget will allow.
  5. Change the math

    Seriously, find a mathematician and have them derive a formula that incorporates the need for idle resources into the overall efficiency equation. If folks can prove 1=0 then you can find someone to come up with a formula that balances the two and still comes up with a 100% efficiency rating.
  6. Ignore the problem and hope that “just in case” doesn’t happen. Make a note to sacrifice a live chicken next week as insurance.

    Doing nothing is always a solution. It may not be a good one, or the right one, or the one that ensures you have a job next week, but it is a solution.
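The cloud-bursting “overdraft protection” idea in option 1 boils down to a simple threshold check: serve requests locally until utilization crosses a line, then spill overflow into the external provider. Here is a minimal sketch in Python; the function names and the 80% threshold are illustrative assumptions, not any particular product’s API or recommended setting.

```python
# Minimal sketch of a cloud-bursting decision: stay in the data center
# while there is headroom, burst to an external provider when local
# utilization crosses a threshold. All names here are hypothetical.

BURST_THRESHOLD = 0.80  # burst before hitting 100% so headroom remains


def choose_pool(local_busy: int, local_capacity: int) -> str:
    """Return which resource pool should serve the next request."""
    utilization = local_busy / local_capacity
    if utilization >= BURST_THRESHOLD:
        return "cloud"   # scale out on-demand into the provider
    return "local"       # keep internal efficiency as high as possible


# 70 of 100 servers busy -> stay local; 85 of 100 -> burst to the cloud
print(choose_pool(70, 100))  # local
print(choose_pool(85, 100))  # cloud
```

Cloud balancing, by contrast, would distribute requests across both pools all the time rather than waiting for the threshold to trip.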


If you aren’t running at or near 100% efficiency now, keep on top of it. Consider how you’re going to balance the need for idle resources as efficiency (i.e. utilization) continues to increase, and do something about it sooner rather than later. Plan a strategy that will maintain performance and capacity while ensuring you have the resources available – regardless of where they might be located – to scale out.
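One way to “change the math,” as option 5 suggests, is to count deliberately reserved scale-out headroom as allocated capacity rather than waste. A hedged sketch of such a formula, purely as an illustration and not any industry-standard metric:

```python
def effective_efficiency(busy: int, total: int, reserved: int) -> float:
    """Efficiency that treats deliberately reserved scale-out headroom
    as capacity doing a job (guaranteeing scalability), not as waste.
    This formula is a hypothetical illustration, not a standard metric."""
    if total <= 0 or reserved < 0 or reserved >= total:
        raise ValueError("need total > reserved >= 0")
    # Naive efficiency would be busy / total; here the reserved headroom
    # is excluded from the denominator because it is already "applied"
    # toward the task of ensuring the ability to scale out.
    return busy / (total - reserved)


# 80 busy servers out of 100 total, with 20 reserved for scale-out:
# naive efficiency = 0.80, effective efficiency = 1.00
print(effective_efficiency(80, 100, 20))  # 1.0
```

The point of the exercise is that “100% efficiency” is only meaningful relative to what you count as useful work; standing ready to scale can reasonably be counted.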

If you haven’t started a virtualization and/or cloud computing initiative, then you should carefully consider which technology will not only enable you to move toward that goal but will also assist in providing the best balance of idle resources with high efficiency as you move forward.




More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
