Unwrapping the Self-Service Cloud

The challenges of self-service delivery

If you get a chance to talk with directors or C-level executives about the benefits they expect to derive from adopting cloud computing techniques, expect to hear the terms 'flexibility,' 'agility,' and 'cost reduction' come up quite frequently. While those are valid expectations, those of us a little closer to the trenches know that it takes a number of different technical capabilities to actually deliver those benefits. Oft-discussed capabilities such as elasticity, rapid provisioning, and configuration automation all come to mind. However, there is one more capability that we tend to talk about a little less but that is every bit as important. That capability is self-service access.

Lately, more and more of the clients I visit with are coming to terms with what self-service means to them and what it will take to enable it. Here is the unvarnished truth about effective self-service in the enterprise: it is far from easy! Numerous obstacles stand between the way traditional IT works and a truly self-service organization, and they are not the kind of obstacles one can blithely ignore.

A single post could never explore every challenge, but I would be remiss not to mention the big ones, starting with the collective fear of losing control. For years, IT organizations have arranged themselves into teams that are more or less domain specific. To put it a little more bluntly, look into any typical shop and you will probably find an infrastructure team, a systems software team, a middleware team, an application team, and more. In most cases, well-defined processes (e.g., ticket requests) specify how these teams interact. The interaction is typically limited to the boundary of those interfaces, and each team more or less independently handles the domain over which it presides. Can you see how self-service might be an affront to such a structure?

Remember, when we talk about the type of self-service often bandied about in cloud computing, we are not talking about garden-variety self-service access. Self-service in the cloud normally means that a single user provisions (in a loose sense, at least) everything they need to run a particular workload, from the hardware up to the application. To say this causes some organizational strife is a bit of an understatement.
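
To make that "hardware to application" span a bit more concrete, here is a minimal sketch of what a single catalog offering might have to capture before a deployer can request it in one step. Everything in it (the field names, the layers, the sample values) is hypothetical and meant only to illustrate the idea, not to reflect any particular product's catalog format.

```python
from dataclasses import dataclass, field

# Hypothetical description of one self-service catalog offering.
# A single deployer request pulls in every layer of the stack,
# from infrastructure through systems software to the application.
@dataclass
class CatalogOffering:
    name: str
    vcpus: int                                      # infrastructure
    memory_gb: int
    storage_gb: int
    os_image: str                                   # systems software
    middleware: list = field(default_factory=list)  # middleware
    application: str = ""                           # application

# An invented offering a developer or tester might request in one click.
dev_test_env = CatalogOffering(
    name="java-dev-test",
    vcpus=2,
    memory_gb=8,
    storage_gb=100,
    os_image="enterprise-linux-base",
    middleware=["websphere-liberty"],
    application="orders-service-snapshot",
)
```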

Of course, traditional inter-team relationships are not the only barriers standing in the way of self-service models. On a recent trip, I listened to a client explain their intentions to move their development and test operations to a cloud-based environment. In this environment, developers and testers who needed application environments would provision them directly from a standard offering catalog. It soon emerged in the discussion that there were many different sub-teams within the development and test organizations, and that made for special considerations in the sharing of resources. For instance, some teams required more resources than others. Some teams required access to systems outside of the cloud. Still others needed to dynamically expand their consumption of resources, even at the expense of other teams' ability to consume those same resources. In short, there was a complex web of resource consumption needs among the teams.

Now, you may ask, 'What does this have to do with self-service access?' Well, there is absolutely no way you can expose these complex resource relationships to end users (the developers and testers). How far do you think this company would get with self-service deployments if the deployer had to figure out which resources (hardware, storage, networking, and software) they could safely use before doing anything? Right, not very far! So the trick is that by the time a developer or tester logs in, the decision about the resources to which they are entitled must already have been made. This implies a system of rules that considers the entitlements of the current user in relation to every other team in the organization. And don't forget, those entitlements could very well change over time.
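
As a rough illustration of what "decided before the user logs in" can look like, here is a minimal sketch of an entitlement check, assuming hypothetical per-team quotas and a shared capacity figure invented for the example. The point is simply that the portal resolves what a team may still provision from the rules and from every team's current consumption, so the developer never has to reason about it.

```python
# Hypothetical entitlement rules: resolve what a user may provision
# *before* they see the catalog, based on their team's quota and on
# what every other team is currently consuming from the shared pool.

TEAM_QUOTAS = {            # guaranteed vCPUs per team (illustrative values)
    "ui-dev": 32,
    "api-dev": 64,
    "perf-test": 128,
}
POOL_CAPACITY = 256        # total vCPUs in the shared pool

def entitled_vcpus(team: str, usage_by_team: dict) -> int:
    """Return the vCPUs this team may still provision right now."""
    quota = TEAM_QUOTAS.get(team, 0)
    used = usage_by_team.get(team, 0)
    # Headroom left in the pool after everyone's current consumption.
    pool_free = POOL_CAPACITY - sum(usage_by_team.values())
    # A team may use what remains of its quota, but never more than
    # what is physically left in the pool.
    return max(0, min(quota - used, pool_free))

# By the time a developer on "api-dev" logs in, the portal already knows:
current_usage = {"ui-dev": 20, "api-dev": 40, "perf-test": 100}
print(entitled_vcpus("api-dev", current_usage))   # -> 24
```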

This may sound like I am constructing a false barrier to self-service, but I can assure you these sorts of resource-sharing requirements are not at all unique. The basic problem is not an easy one to solve, and it is even harder to solve in a way that remains consumable for the administrator in charge of the whole thing. Yet without reasonably evolved resource-sharing capabilities, there is really no way to enable self-service access for multiple classes of users across a shared resource pool.

Cultural churn and effective resource sharing are two significant (though hardly the only) obstacles to self-service adoption, so what are cloud providers to do? First, I believe it is important for cloud providers to acknowledge the typical division of responsibilities in an organization. When designing a solution that harnesses resources spanning traditionally isolated domains, the design must accommodate different types of users. Accommodation means that different users act on different resources (and those resources only), and it means that users are presented with a familiar context.
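
One simple way to read "different users act on different resources only" is as a scoping rule evaluated before anything is shown in a console. The sketch below is hypothetical (the roles and resource types are invented for illustration), but it shows the shape of the accommodation: each class of user keeps a view that matches their traditional domain.

```python
# Hypothetical role-to-resource scoping: each class of user sees and
# acts on only the resource types in their traditional domain.

ROLE_SCOPES = {
    "infrastructure-admin": {"hypervisor", "storage", "network"},
    "middleware-admin":     {"app-server", "message-queue"},
    "developer":            {"application", "test-data"},
}

def can_act_on(role: str, resource_type: str) -> bool:
    """True if this role's console should expose the resource at all."""
    return resource_type in ROLE_SCOPES.get(role, set())

print(can_act_on("developer", "application"))  # True
print(can_act_on("developer", "hypervisor"))   # False
```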

On the resource-sharing side, every resource that makes up the cloud must have associated access rights tied directly to users or groups. This is certainly not a novel concept, but you may be surprised at how often a particular solution overlooks or under-delivers on this point. It is not enough to simply say that a user has access to a particular resource. One must be able to partition a resource and assign those logical 'slices' to different users or groups. To evolve the concept further, those slices should be able to grow and shrink dynamically based on defined conditions or rules.
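
A minimal sketch of that slicing idea follows, assuming an invented pool with per-group floors and a hard capacity. It is only meant to show the two rules that matter here: a slice may grow as long as the pool is not oversubscribed, and it may never shrink below its guaranteed minimum.

```python
# Hypothetical "slice" model: a shared pool is partitioned into slices
# tied to groups, and a slice can grow or shrink within defined bounds
# as long as the pool itself is not oversubscribed.

class SlicedPool:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.slices = {}          # group -> [guaranteed minimum, current size]

    def assign(self, group: str, minimum: int, initial: int) -> None:
        self.slices[group] = [minimum, initial]

    def allocated(self) -> int:
        return sum(current for _, current in self.slices.values())

    def resize(self, group: str, new_size: int) -> bool:
        """Grow or shrink a group's slice, honoring its floor and the pool cap."""
        minimum, current = self.slices[group]
        if new_size < minimum:
            return False          # never shrink below the guaranteed floor
        if self.allocated() - current + new_size > self.capacity:
            return False          # never oversubscribe the pool
        self.slices[group][1] = new_size
        return True

pool = SlicedPool(capacity=256)
pool.assign("perf-test", minimum=64, initial=64)
pool.assign("api-dev", minimum=32, initial=32)
print(pool.resize("perf-test", 160))   # True: grows within the pool cap
print(pool.resize("api-dev", 16))      # False: below its guaranteed floor
```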

There is little doubt that self-service is a critical aspect of the cloud; in fact, it is a key capability in delivering on the promise of cloud. Having said that, I believe there is a lot of room for maturity in this specific area, and providers will have to address the challenges I mentioned above along with a host of others. Taking all of this into consideration, I can confidently say that we will see quite a bit of focus on self-service as cloud computing moves forward. What do you think?

More Stories By Dustin Amrhein

Dustin Amrhein joined IBM as a member of the development team for WebSphere Application Server. While in that position, he worked on the development of Web services infrastructure and Web services programming models. In his current role, Dustin is a technical specialist for cloud, mobile, and data grid technology in IBM's WebSphere portfolio. He blogs at http://dustinamrhein.ulitzer.com. You can follow him on Twitter at http://twitter.com/damrhein.
