Three Ways to Keep Internal Cloud Computing Infrastructure Costs in Check

Achieving a high level of efficiency at a low level of risk

Many organizations are building internal clouds to capitalize on the agility cloud models offer and avoid the risk of putting mission-critical infrastructure outside of the enterprise's firewalls. The goal is to reap the benefits offered by public clouds, but with much higher levels of control, security, and availability. Building a private internal cloud, however, can come with a hefty price tag.

The Private Cloud's Critical Cost Drivers
The promise of increased agility and standardization in the supply of IT capacity is extremely compelling. Unfortunately, these benefits come at a cost, not only in terms of the technical challenges, but also in terms of the behavioral changes that arise when users are given "self-service" access to capacity. The low perceived cost of cloud capacity, lowered barriers to access, and a lack of visibility into new application requirements (which leads users to err on the side of caution) combine to create a situation where too much capacity is deployed. Like an all-you-can-eat buffet, plates are piled a lot higher when you help yourself than when portions are decided by the chef. In addition, unlike virtual environments that allow for customized sizing and growth, capacity requirements in the internal cloud are always rounded up to the next available standardized container. As a result, organizations deploy more hardware than workloads actually demand in order to avoid exceeding over-commit policies.
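
As a rough illustration of that rounding effect (the catalog sizes and workload figures below are invented for this sketch, not taken from the article), mapping each workload to the next-larger standardized instance can inflate allocated capacity well beyond what was actually requested:

    # Illustrative only: hypothetical catalog sizes and workload demands.

    # Standardized "containers" in the cloud catalog, smallest first (vCPU, GB RAM)
    CATALOG = [(2, 4), (4, 8), (8, 16), (16, 32)]

    # Measured requirements of a few workloads (vCPU, GB RAM)
    workloads = [(1, 3), (3, 6), (5, 9), (9, 20)]

    def smallest_fit(cpu, mem):
        """Pick the smallest catalog size that covers both CPU and memory."""
        for c, m in CATALOG:
            if c >= cpu and m >= mem:
                return (c, m)
        return CATALOG[-1]  # nothing fits: fall back to the largest size

    allocated = [smallest_fit(cpu, mem) for cpu, mem in workloads]

    requested_cpu = sum(cpu for cpu, _ in workloads)
    allocated_cpu = sum(cpu for cpu, _ in allocated)
    print(f"Requested vCPUs: {requested_cpu}, allocated vCPUs: {allocated_cpu}")
    print(f"Allocated capacity is {allocated_cpu / requested_cpu:.0%} of what was requested")

The same rounding applies to memory, and over-commit policies then determine how much of that allocated capacity must be backed by physical hardware.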

Costs can also climb when designing the private cloud. Although standardization will save money in the long run, building a cloud catalog that matches the precise capabilities of previous environments can be cumbersome, often requiring retesting and rebuilding to suit application owners. Cloud designers must also account for the nature of self-service capacity models, which require that a demand buffer be maintained to fulfill new planned and unplanned requests and to accommodate organic growth, future capacity requirements, failures, operational policies and more. Much like airline overbooking, this "whitespace management" often requires deep analysis, as organizations can easily be left with too much or too little capacity.
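
As a back-of-the-envelope sketch of whitespace management (every input below is an assumed figure, not data from the article), a demand buffer might be estimated from planned requests, expected unplanned demand, organic growth and a failure reserve:

    # Hypothetical whitespace-management estimate; all inputs are assumptions.

    deployed_hosts = 100       # hosts currently running workloads
    planned_requests = 8       # hosts' worth of new, already-planned demand
    unplanned_rate = 0.05      # expected unplanned demand (fraction of deployed)
    organic_growth = 0.10      # expected growth of existing workloads
    failure_reserve = 0.08     # capacity held back to absorb host failures

    buffer_hosts = (
        planned_requests
        + deployed_hosts * unplanned_rate
        + deployed_hosts * organic_growth
        + deployed_hosts * failure_reserve
    )

    total_required = deployed_hosts + buffer_hosts
    print(f"Demand buffer: ~{buffer_hosts:.0f} hosts")
    print(f"Total capacity to provision: ~{total_required:.0f} hosts")

Size the buffer too generously and capital sits idle; size it too tightly and every new request triggers a procurement cycle - the overbooking trade-off described above.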

The Path to the Efficient Cloud
To keep costs in check while achieving the agility promised by internal clouds, organizations should follow three key principles:

  • Define policies governing how cloud capacity will be allocated and used. Considerations such as target density, business criticality, data sensitivity, service-level agreement (SLA) requirements, regulatory compliance, and security standards all factor into cloud usage policies. Defining these policies and leveraging them to qualify and route workloads enables automated decision support, allowing the right workloads to be put in the right place without creating a small project every time (a minimal routing sketch follows this list). Policies should also be defined governing when and how resources will be given to and taken away from applications that are under- or over-provisioned. Not only does this save time, it also ensures that workloads are placed on appropriate infrastructure.
  • Consider a "soaking pool" of capacity for profiling and housing new workloads. Create a dedicated pool of infrastructure resources to serve as an incubation center for new workloads. Because the behavior of new workloads is largely unknown, a soaking pool allows them to be profiled and analyzed before they are released into the broader environment. This lets infrastructure and operations teams confidently size and place workloads without undue excess capacity: application owners get the fast response they want, while the risk involved in assigning capacity and configuring virtual machines is reduced.
  • Measure efficiency for the cloud, not for legacy environments. Avoid the temptation to use CPU utilization, memory utilization or other legacy efficiency measures in cloud environments. The only true way to measure efficiency is to determine how much infrastructure is actually needed and compare that to how much is deployed. Factoring in resource utilization and allocation, over-commit policies, density targets, adherence to business policies, security constraints, and HA and DR strategies yields a "fully loaded" utilization metric that tells organizations exactly how much infrastructure is required to safely service workloads, allowing excess capacity to be accurately measured and reclaimed for other purposes (a sample calculation follows this list).
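
Here is a minimal sketch of the policy-driven routing described in the first principle, assuming hypothetical pool names, workload attributes and policy values (none of these identifiers come from the article or any particular product):

    # Hypothetical policy-routing sketch; pools, attributes and rules are
    # illustrative assumptions, not a real product API.
    from dataclasses import dataclass

    @dataclass
    class Workload:
        name: str
        criticality: str       # "high" or "standard"
        data_sensitivity: str  # "regulated" or "internal"
        required_sla: float    # uptime the application needs, e.g. 99.9

    # Each pool declares the policies it can satisfy (illustrative values).
    POOLS = {
        "tier1-secure": {"criticality": {"high"},
                         "data_sensitivity": {"regulated", "internal"},
                         "offered_sla": 99.95},
        "general":      {"criticality": {"high", "standard"},
                         "data_sensitivity": {"internal"},
                         "offered_sla": 99.5},
    }

    def route(w: Workload) -> str:
        """Return the first pool whose policies satisfy the workload, else flag it."""
        for pool, policy in POOLS.items():
            if (w.criticality in policy["criticality"]
                    and w.data_sensitivity in policy["data_sensitivity"]
                    and w.required_sla <= policy["offered_sla"]):
                return pool
        return "needs-review"  # no automated match: escalate to a person

    print(route(Workload("billing-db", "high", "regulated", 99.95)))  # -> tier1-secure
    print(route(Workload("team-wiki", "standard", "internal", 99.5)))  # -> general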
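
And a sketch of the "fully loaded" utilization measure from the third principle, again using assumed policy figures (over-commit ratio, HA and DR reserves) rather than real data:

    # Hypothetical "fully loaded" utilization calculation; all policy figures
    # are illustrative assumptions.

    deployed_vcpus = 1200          # total vCPU capacity currently deployed
    workload_demand_vcpus = 640    # allocated vCPUs summed across all workloads

    cpu_overcommit = 2.0           # policy: allow 2 virtual CPUs per physical CPU
    ha_reserve = 0.10              # policy: reserve 10% of capacity for failover
    dr_reserve = 0.05              # policy: reserve 5% for disaster recovery

    # Physical capacity needed to back the workloads under the over-commit policy...
    base_required = workload_demand_vcpus / cpu_overcommit
    # ...grossed up for the HA and DR reserves that must also be kept available.
    fully_loaded_required = base_required / (1 - ha_reserve - dr_reserve)

    utilization = fully_loaded_required / deployed_vcpus
    reclaimable = deployed_vcpus - fully_loaded_required

    print(f"Fully loaded utilization: {utilization:.0%}")
    print(f"Capacity that could be reclaimed: ~{reclaimable:.0f} vCPUs")

Unlike raw CPU or memory utilization, this measure reflects the policies that determine how much infrastructure is truly required, so the reclaimable figure can be acted on with confidence.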

According to James Staten at Forrester: "Success with your internal cloud won't come simply because you build it." Only by methodically analyzing the workload demands against the resource supply, and meticulously managing the placements of cloud instances and the resources allocated to them, can internal cloud environments achieve a high level of efficiency at a low level of risk, and ultimately provide a level of agility that will truly transform the way IT resources are managed - without breaking the bank.

More Stories By Andrew Hillier

Andrew Hillier is CTO and co-founder of CiRBA, Inc., a data center intelligence analytics software provider that determines optimal workload placements and resource allocations required to safely maximize the efficiency of Cloud, virtual and physical infrastructure. Reach Andrew at [email protected]

