Cloud Brokers Make the Cloud Fit for Enterprise Requirements

Sometimes it’s fun to look back at your predictions to remember what you were thinking at the time

Sometimes it’s fun to look back at your predictions to remember what you were thinking at the time and see how accurate you turned out to be. Based on some recent conversations, I decided to revisit one of our early blog posts from 2009, where we were envisioning the direction this industry would take and the role our technology would play in it.

That post, Dynamic Cloud Fitting — the Future in Automated Cloud Management, described a world where workloads would move automatically to the right environment to meet a customer’s business and technical requirements. It also explored how an entity called a “cloud broker” (initially defined by Gartner in 2009) would provide the technology and expertise to achieve that goal.

Fast forward to 2011. The idea of workloads being redirected on the fly across different clouds for real-time cost/performance optimization is still a vision for the distant future – both because the technology is not yet available and because customers have shown little interest to date. However, the main concept of a cloud broker “fitting” a workload to a cloud environment based on technical and business requirements turns out to be very important to customers, and is already possible today. By providing an intermediation layer spanning multiple clouds, cloud broker software from companies like CloudSwitch can provide a range of capabilities beyond the scope of any individual cloud provider or service.

In terms of cloud “fitting,” what customers want is the ability to set parameters along a number of dimensions in order to control usage and optimize workload performance. These parameters include (but are certainly not limited to) the following, with a rough sketch of how they might be captured shown after the list:

  • Which cloud services they want to make available (and for which users)
  • Which geographic locations (regions, zones, data centers)
  • Cost limitations – per hour, based on quotas, etc.
  • Maximum latency that can be tolerated
  • Virtual machine requirements for CPU, memory, etc.
  • Maximum provisioning time that is acceptable
  • Minimum SLA required for reasonable availability
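
To make this concrete, here is a minimal, purely illustrative sketch of how such a parameter set might be captured. The field names and values are hypothetical, not taken from CloudSwitch’s product:

```python
# Hypothetical sketch of a workload's "fit" requirements.
# Every field name and value here is illustrative, not from any real product.
from dataclasses import dataclass

@dataclass
class WorkloadRequirements:
    allowed_clouds: set            # cloud services enabled for this user or group
    allowed_regions: set           # geographic constraints (regions, zones, data centers)
    max_hourly_cost: float         # cost ceiling, USD per hour
    max_latency_ms: int            # maximum latency that can be tolerated
    min_vcpus: int                 # virtual machine CPU requirement
    min_memory_gb: int             # virtual machine memory requirement
    max_provisioning_minutes: int  # longest acceptable provisioning time
    min_sla_percent: float         # minimum SLA for reasonable availability

requirements = WorkloadRequirements(
    allowed_clouds={"aws", "terremark", "internal"},
    allowed_regions={"us-east", "eu-west"},
    max_hourly_cost=1.50,
    max_latency_ms=50,
    min_vcpus=4,
    min_memory_gb=8,
    max_provisioning_minutes=15,
    min_sla_percent=99.9,
)
```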

What’s clear is that cloud broker software must incorporate an algorithmic approach to mapping the requirements of the enterprise, user groups, individual users, and specific workloads against the cloud services that have been enabled. This is a non-trivial process, based on capturing and tracking a mix of inputs from administrators and users, as well as real-time data from the virtual machines, networks, and cloud providers. Think of this as somewhat similar to recommendation engines on websites like kayak.com that help “fit” travelers’ requirements and preferences against available flights. Only in this case, the “flights” are instances that can be provided by one or more cloud providers or by internal virtualized resources.
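
Continuing the hypothetical sketch above, the “fitting” step can be pictured as a filter-and-rank pass over a catalog of enabled offerings: discard anything that violates a hard constraint, then rank what remains (here simply by cost). A real broker would fold in far more inputs, including real-time telemetry, quotas, and per-user policy.

```python
# Hypothetical fitting pass over a catalog of enabled cloud offerings:
# drop anything that violates a hard constraint, rank the rest by cost.
def fit(reqs, catalog):
    acceptable = [
        o for o in catalog
        if o["cloud"] in reqs.allowed_clouds
        and o["region"] in reqs.allowed_regions
        and o["hourly_cost"] <= reqs.max_hourly_cost
        and o["latency_ms"] <= reqs.max_latency_ms
        and o["vcpus"] >= reqs.min_vcpus
        and o["memory_gb"] >= reqs.min_memory_gb
        and o["sla_percent"] >= reqs.min_sla_percent
    ]
    # Like a travel site sorting acceptable flights by price.
    return sorted(acceptable, key=lambda o: o["hourly_cost"])

catalog = [
    {"cloud": "aws", "region": "us-east", "hourly_cost": 0.68, "latency_ms": 20,
     "vcpus": 4, "memory_gb": 15, "sla_percent": 99.95},
    {"cloud": "internal", "region": "us-east", "hourly_cost": 0.40, "latency_ms": 5,
     "vcpus": 2, "memory_gb": 8, "sla_percent": 99.9},
]

ranked = fit(requirements, catalog)   # the internal VM is excluded (too few vCPUs)
```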

Another important aspect of cloud broker software is implementing the “fitting” algorithm in the context of a role-based access control (RBAC) system. Think of this in terms of layers of enterprise controls and permissions that guide users’ options for self-service access to cloud resources. For example, a global administrator may set up the initial constraints based on which cloud services are available for the entire enterprise, while a business unit administrator may set narrower limits for her users based on quotas, geographic constraints for certain teams, etc. – and the end user just wants to get his work done quickly and cost-effectively without worrying about any of this.
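
A toy illustration of that layering (names and limits invented for the example): each administrative level can only narrow, never widen, what the level above it allows, and the end user’s self-service request is validated against the merged result before any fitting happens.

```python
# Hypothetical layered RBAC-style policy: each level can only narrow the
# options the level above allows. All names and limits are invented.
ENTERPRISE_POLICY = {                      # set by the global administrator
    "clouds": {"aws", "terremark", "internal"},
    "regions": {"us-east", "us-west", "eu-west"},
    "max_hourly_cost": 5.00,
}

FINANCE_BU_POLICY = {                      # set by a business-unit administrator
    "clouds": {"aws", "internal"},
    "regions": {"us-east"},                # this team's data must stay in-region
    "max_hourly_cost": 1.00,               # tighter quota than the enterprise ceiling
}

def effective_policy(parent, child):
    """Intersect the categorical options and take the stricter numeric limit."""
    return {
        "clouds": parent["clouds"] & child["clouds"],
        "regions": parent["regions"] & child["regions"],
        "max_hourly_cost": min(parent["max_hourly_cost"], child["max_hourly_cost"]),
    }

# The end user never sees any of this; their request is simply checked
# against the merged policy before the fitting step runs.
user_policy = effective_policy(ENTERPRISE_POLICY, FINANCE_BU_POLICY)
```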

One point we didn’t foresee in our original blog post on cloud fitting was how the cloud broker’s role would expand. In addition to on-boarding applications into the cloud, customers now look to cloud brokers to fill important gaps: capabilities that cloud providers either don’t want to deliver (such as multi-cloud capability) or find hard to deliver because of architectural limitations.

Security is a good example of the latter: the shared environment of the cloud makes it hard to give individual customers control over encryption and key management, something enterprises frequently require to get CSO sign-off. Extending network configurations into the cloud with full configurability is also challenging for most cloud providers, since their network architectures are by definition fairly “flattened” and limited in options like multiple subnets (unless the customer is willing to pay for a dedicated network setup). This is another area where cloud brokers can help bridge the gap between what enterprise users need and what multiple cloud providers can deliver.
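
To illustrate the encryption point only (this is a generic pattern, not CloudSwitch’s actual implementation), the idea is that data is encrypted with keys the enterprise generates and holds, so the provider’s shared storage only ever sees ciphertext. The sketch below assumes the third-party Python cryptography package is installed:

```python
# Generic illustration of customer-controlled encryption, not any vendor's
# actual design. Assumes: pip install cryptography
from cryptography.fernet import Fernet

customer_key = Fernet.generate_key()        # generated and kept inside the enterprise
cipher = Fernet(customer_key)

volume_bytes = b"sensitive workload data headed for cloud storage"
ciphertext = cipher.encrypt(volume_bytes)   # only this ever leaves the data center

# ... upload `ciphertext` to whichever cloud provider the broker selects ...

restored = Fernet(customer_key).decrypt(ciphertext)  # the key never left the enterprise
assert restored == volume_bytes
```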

So the role of a cloud broker turns out to be evolving and growing broader over time, and no doubt will continue to do so. There’s a broad consensus that the broker’s role is important — not just here at CloudSwitch, but also among industry analysts like Gartner, other technology vendors, and our enterprise customers. The key insight is that cloud brokers allow enterprises to extend their control over their applications and data into the cloud. This ability to put control in the hands of the customer is what matters the most.


More Stories By Ellen Rubin

Ellen Rubin is the CEO and co-founder of ClearSky Data, an enterprise storage company that recently raised $27 million in a Series B investment round. She is an experienced entrepreneur with a record of leading strategy, market positioning and go-to-market efforts for fast-growing companies. Most recently, she was co-founder of CloudSwitch, a cloud enablement software company acquired by Verizon in 2011. Prior to founding CloudSwitch, Ellen was the vice president of marketing at Netezza, where as a member of the early management team she helped grow the company to more than $130 million in revenues and a successful IPO in 2007. Ellen holds an MBA from Harvard Business School and an undergraduate degree magna cum laude from Harvard University.
