
Adopting Enterprise Cloud Computing: A Graduated Transformation

Enterprise IT organizations are in the midst of deep and lasting change

IT used to be your cable company: It held the local monopoly for IT services. The wait you experienced was a frustrating but necessary part of your relationship with enterprise IT. You had no choice.

But with the rise of public cloud services like Amazon EC2, business lines now have a choice.

Why wait months when minutes will do?

That question will ultimately cause application owners to vote with their feet, as demand follows the path of least resistance to the public cloud. That is, unless IT leaders can transform service delivery models to match the flexibility and performance public cloud services provide.

"If the rate of change on the outside exceeds the rate of change on the inside, the end is near."

That's Jack Welch offering what may be the most urgent words for today's IT leaders who are staring down unprecedented external change. IT leaders ought to ask themselves where they stand, on the inside, relative to the changes occurring on the outside.

Of course, for most IT organizations, external change vastly outstrips internal change. That's not because IT leadership doesn't want to change; it's because cultures and bureaucracies don't readily allow it.

Traditionally, CIOs were expected to improve performance incrementally year over year; last year's metrics were next year's benchmarks. Your goal was to ensure the curve was moving in the right direction. Today, the expectation is a dramatic improvement in agility and responsiveness-from months to minutes-at a dramatically lower cost to the business.

To achieve this end, enterprise IT organizations must transform their delivery models to look a lot like a public cloud service in its own right. But how do you get from here to there?

The answer is similar to the way you eat an elephant: One bite at a time.

The challenge is that the path isn't particularly clear, and transformation can't happen overnight. This sort of change requires an incremental, step-by-step progression that yields benefits along the way. Without this, fatigue sets in, enthusiasm wanes and cloud projects die on the vine like so many architectural transformations before. When merchandised effectively, these incremental wins become the kindling that stokes the fire, building the shared vision, conviction and confidence required to transform.

Making the transformation to enterprise cloud is, thus, a graduated journey.

It's a journey that begins with experimentation and moves into production phases from the bottom up, tackling successive layers of the stack. The common thread running through the journey is ongoing optimization between public and private options based on specific requirements, policies and price. The goal isn't to place a bet, public or private. It's to turn that gamble into a managed investment that pays ever-increasing dividends by managing a dynamic portfolio of public and private resources.

Before we lay out the guideposts for the journey, let's discuss the building blocks for the enterprise cloud.

They are:

  • Virtualization: An abstraction layer that presents compute, network and storage resources as sharable utilities.
  • Elasticity: Dynamic provisioning, scaling and rebalancing resources with changing demand.
  • Automation: "Pushbutton" workload construction, deployment and change.
  • Self-service: A storefront for advertising, requesting and provisioning standardized services.
  • Chargeback: A tracking and billing mechanism to manage the supply/demand equilibrium.
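The chargeback building block is the easiest of the five to make concrete. The following is a minimal sketch of a usage-metering ledger; the class name, flat-rate pricing model and all figures are illustrative assumptions, not a reference implementation.

```python
from collections import defaultdict

class ChargebackLedger:
    """Minimal chargeback sketch: meter resource-hours per team and
    bill them at a flat hourly rate (names and rates are illustrative)."""

    def __init__(self, rate_per_hour):
        self.rate_per_hour = rate_per_hour
        self.usage_hours = defaultdict(float)

    def record(self, team, hours):
        # Accumulate metered usage so supply and demand can be priced.
        self.usage_hours[team] += hours

    def invoice(self, team):
        # Bill = metered hours x flat hourly rate.
        return round(self.usage_hours[team] * self.rate_per_hour, 2)

ledger = ChargebackLedger(rate_per_hour=0.15)
ledger.record("marketing", 120)
ledger.record("marketing", 30)
print(ledger.invoice("marketing"))  # 150 hours x $0.15 = 22.5
```

Even a ledger this simple changes behavior: once consumption is visible and billed, internal customers start sizing their requests to what they actually use.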

These are the foundations for an enterprise cloud. The journey to building one begins with:

1. Public Cloud Experimentation
According to Sam Boonin at Good Data, "Think about it - what used to cost $20k (a Sun server) you can now get for 15 cents an hour ... if you are not experimenting with those economics, you're crazy."

Sam's right: Amazon EC2 has removed the cost barrier to enterprise-grade compute capacity.

It has also proven the model for self-service, automated, elastic computing. If you think about it, Amazon is like having a privileged view into your future; ultimately, you'll have to deliver IT services to your internal customers in a similar way. You should get to know your future in its present-day incarnation by experimenting with the public cloud. Get your hands dirty, get your feet wet ... now.

Who should experiment?

If you haven't already, you should assemble a small cross-functional team to investigate cloud options and help define the reference model for the future. As the driver for this project, you should take a close look at strategic priorities, key IT-enabled processes, an inventory of your current infrastructure, and an honest assessment of your historical service level performance.

You should also audit your application portfolio, segmenting your applications based on their unique requirements, policies and the most appropriate use of cloud. You may determine that certain non-differentiating applications should be retired in favor of a software-as-a-service solution. Maybe you'll conclude that certain applications should never run on public cloud infrastructure. And maybe you'll find that certain newer applications are well suited to a public platform-as-a-service offering.

The goal-and the point of the exercise-is to logically consider your cloud options as an optimized portfolio that maximizes the return and minimizes the risk to your business.
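The segmentation described above can be expressed as a simple decision rule. This is a toy sketch: the attributes, thresholds and category names are illustrative assumptions, and a real portfolio review would weigh many more factors.

```python
def classify_app(differentiating, data_sensitivity, workload):
    """Toy cloud-portfolio segmentation rule (illustrative, not prescriptive)."""
    if not differentiating:
        return "retire-to-saas"   # commodity function: buy, don't build
    if data_sensitivity == "high":
        return "private-cloud"    # policy keeps it on infrastructure you control
    if workload == "new":
        return "public-paas"      # greenfield apps fit public platforms well
    return "evaluate-iaas"        # case-by-case public/private infrastructure call

# Hypothetical application inventory.
portfolio = [
    ("payroll", False, "low",  "legacy"),
    ("trading", True,  "high", "legacy"),
    ("mobile",  True,  "low",  "new"),
]
for name, diff, sens, wl in portfolio:
    print(name, "->", classify_app(diff, sens, wl))
```

The value of writing the rule down, even crudely, is that it forces the team to agree on which attributes actually drive placement decisions.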

In conjunction with this analysis, application and cloud architects should dive into the public cloud.

They should experiment with and experience the full breadth of the offering, peeling back the layers to understand the architecture and implementation and exercising the full feature set to cherry-pick the best attributes as the basis for designing a reference architecture for your enterprise cloud.

At the same time, dev and test engineers should take advantage of on-demand, elastic compute capacity to serve practical process needs. The agility and cycle time benefits of public cloud are appreciable, particularly in organizations whose central IT organizations are saddled by server provisioning backlog.

You should also use this time to build an automated deployment model for getting workloads provisioned to the public cloud in a way that is fast, controlled and compliant. Measure the throughput of your current release and deployment processes and then multiply demand by an order of magnitude. Does it scale?

The point is that you should select deployment tools that will scale with your cloud initiative; this is a foundational architectural capability that will be difficult to back into later.
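The "multiply demand by an order of magnitude" test can be run as back-of-the-envelope arithmetic. All figures below are illustrative assumptions for the exercise.

```python
def process_scales(per_deploy_hours, weekly_capacity_hours,
                   current_weekly_demand, scale=10):
    """Does the release process survive an order-of-magnitude jump in demand?
    Compares engineer-hours needed at scaled demand against available capacity."""
    needed = per_deploy_hours * current_weekly_demand * scale
    return needed <= weekly_capacity_hours

# A largely manual process: 4 engineer-hours per deployment, 200 hours of
# release capacity a week, 10 deployments a week today. At 10x demand it
# needs 400 hours -> it does not scale.
print(process_scales(4, 200, 10))    # False

# Automate the per-deployment effort down to 0.5 hours and the same team
# absorbs 10x demand with room to spare.
print(process_scales(0.5, 200, 10))  # True
```

The arithmetic makes the article's point plainly: cloud-scale demand is won or lost in per-deployment effort, which is why deployment automation is foundational rather than optional.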

While the IT team gains familiarity with the cloud environment, take the time to socialize the direction with lines of business, identifying targeted applications that may benefit from this deployment model and serve as an internal case study for building momentum and garnering broader support and sponsorship.

This is also a good time to start gathering some baseline metrics.

Measure cycle time and effective cost at key stages of the application lifecycle. How long does it take for IT to provision a server? What is the duration between unit test complete and production deployment? Where are the typical bottlenecks in the process and what are they costing in terms of dollars and productivity? These basic metrics should be understood and quantified at the outset to measure the impact of a cloud initiative.

What is the internal cost for server capacity? Ask the same question of your public cloud service and look at the differences over time. You may notice that, for all its pennies-per-hour goodness, public cloud economics are not yet suited for long-running compute intensive workloads.

Expect this to change as public cloud competition intensifies.
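The long-running-workload caveat is easy to quantify with a break-even calculation. The prices below are illustrative assumptions (a $20k server, a $2.40/hour compute-heavy instance), not quotes from any provider.

```python
def monthly_cloud_cost(rate_per_hour, hours=730):
    """Cost of one always-on cloud instance for a month (~730 hours)."""
    return rate_per_hour * hours

def breakeven_months(server_capex, monthly_opex, rate_per_hour):
    """Months of 24x7 public-cloud usage after which an owned server
    (capex plus monthly run cost) becomes the cheaper option.
    Returns None when the cloud rate never catches up to the owned
    server's run rate. All prices are illustrative."""
    delta = monthly_cloud_cost(rate_per_hour) - monthly_opex
    if delta <= 0:
        return None
    return server_capex / delta

# $20k server with $300/month run cost vs a $2.40/hour compute-heavy
# instance: the owned box pays for itself in roughly 14 months.
print(round(breakeven_months(20000, 300, 2.40), 1))

# At $0.15/hour, the cloud instance is cheaper than the owned server's
# run rate alone, so ownership never breaks even.
print(breakeven_months(20000, 300, 0.15))  # None
```

The two results bracket the article's claim: pennies-per-hour pricing dominates for small, intermittent workloads, while always-on compute-intensive workloads can still favor owned capacity.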

2. Private Cloud Experimentation
For the purpose of this exercise, let's assume you've already had some experience with server virtualization. If you haven't, you should; virtualization is a foundation for a private cloud and it offers an elegantly simple ROI based on dramatic improvements in hardware utilization, with benefits that range from lower hardware spending to reduced energy consumption and facility expenses.

As you scale your virtualization initiatives and move into a private cloud modality, be certain that your deployment and change processes are fit for the task. Too often, IT organizations ignore the fact that poorly automated deployment processes that may have been suitable in the past will break down in a large-scale computing environment, particularly when you add the dimension of speed.

Stand up the key cloud technologies in a lab environment and build the working prototype of the reference architecture that you defined in the first phase. Leverage open source and commercial tools-not just what you find lying around, but the ones that you think will best serve you at scale.

Remember: This is the dress rehearsal for production usage; don't go through the motions in vain by making this a throwaway experiment.

3. Infrastructure-as-a-Service (IaaS)
IT evolutions happen from the bottom up, which makes sense if you think about the IT stack as a hierarchy of layered foundations. As you roll out a cloud, do so from the bottom up, focusing on compute capacity as the first service that you make available as an elastic utility.

This is often called infrastructure-as-a-service, or the generally unpronounceable IaaS.

Today, development, test and production teams wait too long for server capacity. Your first true step into the enterprise cloud should focus on making this capacity available on-demand on a self-service basis. Most organizations will focus on development and test organizations first, which makes a great deal of sense because it affords you the safety of working out the gremlins with non-production workloads.

The next logical question is: Public or private? The answer is the consultant's classic: "It depends."

It depends on a number of things: How ready is your infrastructure? Has it been virtualized? How automated is your provisioning process? Don't present a menu of services without cooks in the kitchen.

As importantly, consider your demand profile. Are you typically seeing frequent requests for short-running capacity or less frequent requests for long-running capacity? Public cloud economics are typically better suited to short-running workloads. Ultimately, you'll want to create an environment that provides portability between public and private clouds. This will allow you to manage IT as an optimized portfolio of options that dynamically balances across business requirements, policy and price.
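The demand-profile reasoning above reduces to a placement rule that can run per workload. This sketch is a toy: the 50% utilization threshold, the sensitivity flag and the example workloads are all illustrative assumptions.

```python
def place_workload(avg_run_hours, requests_per_month, sensitive=False):
    """Toy public/private placement rule: frequent, short-running bursts
    favor public pay-per-hour pricing; steady long-running load favors
    owned private capacity. The 50% threshold is illustrative."""
    if sensitive:
        return "private"   # policy overrides economics
    # Fraction of a ~730-hour month the capacity would actually be busy.
    utilization = (avg_run_hours * requests_per_month) / 730.0
    return "public" if utilization < 0.5 else "private"

print(place_workload(2, 40))   # bursty test jobs, ~11% utilized -> public
print(place_workload(730, 1))  # always-on database, 100% utilized -> private
```

A rule like this is also what makes "both" operational: the same function, applied across the portfolio, yields the dynamic public/private balance rather than a one-time bet.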

With that said, the wishy-washy "it depends" resolves to a more decisive, "both."

4. Platform-as-a-Service (PaaS)
The next logical layer of the stack is enabling software: operating system, middleware, frameworks and other platform services that your internal customers require to run their applications.

Public PaaS services like Google App Engine, Force.com, Heroku and Microsoft Azure provide a complete abstraction layer for rapidly deploying simple applications built with modern frameworks like Ruby on Rails, PHP and .NET. But not every application can or should run in a public PaaS environment.

One of the most common fears is that PaaS is a walled garden-it locks you into a service that you don't control. Once the application is written to a specific PaaS layer, it can't be moved into another deployment environment. Without the ability to retarget an application, organizations sacrifice considerable pricing leverage. This fear is most pronounced with applications that are strategic and differentiating for the business-and the resistance increases as you move up the stack.

But, for some types of applications, public PaaS makes a whole lot of sense and it should be factored into your portfolio of cloud options and utilized for specific classes of applications.

For other applications, IT will remain the platform provider. But, in the face of such simple public PaaS alternatives, they'll have to dramatically improve the speed and flexibility of the platform delivery model.

Historically, IT has faced two opposing choices: (1) Standardize on one plain-vanilla platform and force application owners to conform to these limited specifications; or, (2) allow the platform to splinter into dozens of variants to serve diverse application dependencies. Neither is a great choice: The former constrains application innovation and the latter saddles IT with management burden.

Today, application dependencies are growing with the fragmentation of programming languages, the use of open source components and a trend toward avant-garde developer preferences. This means that you probably need to address both requirements-flexibility and control-by deeply automating platform provisioning and lifecycle management and presenting internal customers a rich set of platform options to address their diverse application requirements.
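One way to hold both flexibility and control is a versioned platform catalog: a managed set of templates that automation can stamp out on demand. The sketch below is illustrative; the catalog entries, spec fields and function names are assumptions, not any particular product's API.

```python
# Illustrative platform catalog: several supported stacks (flexibility),
# each a managed template rather than a hand-built variant (control).
PLATFORM_CATALOG = {
    "java-web":   {"os": "linux",   "runtime": "jdk-17",   "middleware": "tomcat"},
    "rails-web":  {"os": "linux",   "runtime": "ruby-3.2", "middleware": "puma"},
    "dotnet-web": {"os": "windows", "runtime": ".net-8",   "middleware": "iis"},
}

def render_platform(name, overrides=None):
    """Resolve a requested platform to a concrete, provisionable spec.
    Unknown platforms are rejected rather than hand-built, which is
    what keeps the variant count from splintering."""
    if name not in PLATFORM_CATALOG:
        raise ValueError(f"unsupported platform: {name}")
    spec = dict(PLATFORM_CATALOG[name])
    spec.update(overrides or {})  # controlled per-application flexibility
    return spec

print(render_platform("java-web", {"runtime": "jdk-21"}))
```

The design point is that overrides are applied to a known template, so every running variant remains traceable to a catalog entry IT actually supports.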

5. Software-as-a-Service (SaaS)
The final step on the journey to enterprise cloud can also be seen as the first: If you think about it, you're probably already using more than one public SaaS application within your company-perhaps for sales or marketing automation, HR, financials or accounting processes. Software-as-a-service has become the mainstream preference for many non-differentiating applications in the enterprise today.

But SaaS isn't only something you consume. In the future, it will almost certainly be the delivery model for many internally built, differentiating enterprise applications. As IT organizations look to more effectively monetize the value of the services they deliver, they'll want to make applications available on demand to internal customers by creating an enterprise app store of sorts. This spreads the cost basis across a broader set of users as applications find affinity in new, previously undiscovered ways; and it delivers more direct business value to internal customers, making IT much more than "ping, power and pipe."

There is no single path for all organizations-and, admittedly, enterprise cloud is far too embryonic to have its own road-tested, timeworn "best practices." But it's always useful to begin with a logical plan that follows a stepwise progression-the guideposts for the journey.

More Stories By Jake Sorofman

Jake Sorofman is chief marketing officer of rPath, an innovator in system automation software for physical, virtual and cloud environments. Contact Jake at [email protected]

