Adopting Enterprise Cloud Computing: A Graduated Transformation

Enterprise IT organizations are in the midst of deep and lasting change

IT used to be your cable company: it held the local monopoly for IT services. The wait you experienced was a frustrating but necessary part of your relationship with enterprise IT. You had no choice.

But with the rise of public cloud services like Amazon EC2, business lines now have a choice.

Why wait months when minutes will do?

That question will ultimately cause application owners to vote with their feet, as demand follows the path of least resistance to the public cloud. That is, unless IT leaders can transform service delivery models to match the flexibility and performance public cloud services provide.

"If the rate of change on the outside exceeds the rate of change on the inside, the end is near."

That's Jack Welch offering what may be the most urgent words for today's IT leaders who are staring down unprecedented external change. IT leaders ought to ask themselves where they stand, on the inside, relative to the changes occurring on the outside.

Of course, for most IT organizations, external change vastly outstrips internal change. That's not because IT leadership doesn't want to change; it's because cultures and bureaucracies don't readily allow it.

Traditionally, CIOs were expected to improve performance incrementally year over year; last year's metrics were next year's benchmarks. Your goal was to ensure the curve was moving in the right direction. Today, the expectation is a dramatic improvement in agility and responsiveness, from months to minutes, at a dramatically lower cost to the business.

To achieve this end, enterprise IT organizations must transform their delivery models to look a lot like a public cloud service in its own right. But how do you get from here to there?

The answer is similar to the way you eat an elephant: One bite at a time.

The challenge is that the path isn't particularly clear, and transformation can't happen overnight. This sort of change requires an incremental, step-by-step progression that yields benefits along the way. Without this, fatigue sets in, enthusiasm wanes and cloud projects die on the vine like so many architectural transformations before them. When merchandised effectively, these incremental wins become the kindling that stokes the fire, building the shared vision, conviction and confidence required to transform.

Making the transformation to enterprise cloud is, thus, a graduated journey.

It's a journey that begins with experimentation and moves into production phases from the bottom up, tackling successive layers of the stack. The common thread running through the journey is ongoing optimization between public and private options based on specific requirements, policies and price. The goal isn't to place a bet, public or private; it's to turn that gamble into a managed investment that pays ever-increasing dividends from a dynamic portfolio of public and private resources.

Before we lay out the guideposts for the journey, let's discuss the building blocks for the enterprise cloud.

They are:

  • Virtualization: An abstraction layer that presents compute, network and storage resources as sharable utilities.
  • Elasticity: Dynamic provisioning, scaling and rebalancing of resources as demand changes.
  • Automation: "Pushbutton" workload construction, deployment and change.
  • Self-service: A storefront for advertising, requesting and provisioning standardized services.
  • Chargeback: A tracking and billing mechanism to manage the supply/demand equilibrium.
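
To make one of these building blocks concrete, here is a minimal sketch of the elasticity idea: compute the capacity needed to hold utilization near a target as demand changes. The 60 percent target and the server counts are illustrative assumptions, not a recommendation.

```python
# A toy illustration of the "elasticity" building block: scale the pool
# so utilization returns to a target level. All figures are placeholders.

def rebalance(active_servers, utilization, target=0.60):
    """Return the server count that brings utilization back to the target."""
    return max(1, round(active_servers * utilization / target))

# Demand spikes: 10 servers at 90% utilization -> scale out to 15.
print(rebalance(10, 0.90))
# Demand falls: 10 servers at 24% utilization -> scale in to 4.
print(rebalance(10, 0.24))
```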

These are the foundations for an enterprise cloud. The journey to building one begins with public cloud experimentation.

1. Public Cloud Experimentation
According to Sam Boonin at Good Data, "Think about it - what used to cost $20k (a Sun server) you can now get for 15 cents an hour ... if you are not experimenting with those economics, you're crazy."

Sam's right: Amazon EC2 has removed the cost barrier to enterprise-grade compute capacity.

It has also proven the model for self-service, automated, elastic computing. In a sense, Amazon gives you a privileged view into your future: ultimately, you'll have to deliver IT services to your internal customers in a similar way. You should get to know your future in its present-day incarnation by experimenting with the public cloud. Get your hands dirty, get your feet wet ... now.
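
A first hands-on experiment might look like the following sketch, written here with AWS's boto3 Python SDK as one present-day example. The AMI ID, region and instance type are placeholders, and credentials are assumed to come from your environment.

```python
# A minimal public cloud experiment: launch one on-demand instance,
# confirm it's running, then terminate it.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

instance = ec2.create_instances(
    ImageId="ami-xxxxxxxx",   # placeholder image ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)[0]

instance.wait_until_running()
instance.reload()             # refresh attributes such as the public DNS name
print(f"Running {instance.id} at {instance.public_dns_name}")

instance.terminate()          # minutes, not months -- and clean up after
```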

Who should experiment?

If you haven't already, you should assemble a small cross-functional team to investigate cloud options and help define the reference model for the future. As the driver for this project, you should take a close look at strategic priorities, key IT-enabled processes, an inventory of your current infrastructure, and an honest assessment of your historical service level performance.

You should also audit your application portfolio, segmenting your applications based on their unique requirements, policies and the most appropriate use of cloud. You may determine that certain non-differentiating applications should be retired in favor of a software-as-a-service solution. Maybe you'll conclude that certain applications should never run on public cloud infrastructure. And maybe you'll find that certain newer applications are well suited to a public platform-as-a-service offering.

The goal, and the point of the exercise, is to logically consider your cloud options as an optimized portfolio that maximizes the return and minimizes the risk to your business.
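
As a sketch of what this exercise might produce, consider a simple segmentation pass over an application inventory. The application names, attributes and rules below are hypothetical stand-ins for your own requirements and policies.

```python
# Illustrative portfolio segmentation; fields and rules are assumptions.

apps = [
    {"name": "crm",      "differentiating": False, "saas_available": True},
    {"name": "trading",  "differentiating": True,  "regulated": True},
    {"name": "campaign", "differentiating": True,  "new_build": True},
]

def segment(app):
    if not app["differentiating"] and app.get("saas_available"):
        return "retire in favor of SaaS"
    if app.get("regulated"):
        return "keep off public infrastructure"
    if app.get("new_build"):
        return "candidate for public PaaS"
    return "reassess next cycle"

for app in apps:
    print(f"{app['name']}: {segment(app)}")
```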

In conjunction with this analysis, application and cloud architects should dive into the public cloud.

They should experiment with the full breadth of the offering, peeling back the layers to understand the architecture and implementation. By exercising the full feature set, they can cherry-pick the best attributes as the basis for a reference architecture for your enterprise cloud.

At the same time, dev and test engineers should take advantage of on-demand, elastic compute capacity to serve practical process needs. The agility and cycle time benefits of public cloud are appreciable, particularly where central IT is saddled with a server provisioning backlog.

You should also use this time to build an automated deployment model for getting workloads provisioned to the public cloud in a way that is fast, controlled and compliant. Measure the throughput of your current release and deployment processes and then multiply demand by an order of magnitude. Does it scale?
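
A back-of-the-envelope version of that "does it scale?" test might look like the following; every figure is a placeholder for your own measurements.

```python
# Does the current release process survive a 10x increase in demand?
# All numbers are hypothetical; substitute your measured values.

deploys_per_week = 12        # measured throughput today
hours_per_deploy = 6.0       # hands-on effort per deployment
team_hours_per_week = 200    # release-team capacity (5 engineers x 40 hours)

projected_deploys = deploys_per_week * 10   # demand up an order of magnitude
projected_hours = projected_deploys * hours_per_deploy

print(f"Projected load: {projected_hours:.0f} hours/week "
      f"vs. {team_hours_per_week} available")
if projected_hours > team_hours_per_week:
    print("The manual process breaks down; automate before you scale.")
```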

The point is that you should select deployment tools that will scale with your cloud initiative; this is a foundational architectural capability that will be difficult to back into later.

While the IT team gains familiarity with the cloud environment, take the time to socialize the direction with lines of business, identifying targeted applications that may benefit from this deployment model and serve as an internal case study for building momentum and garnering broader support and sponsorship.

This is also a good time to start gathering some baseline metrics.

Measure cycle time and effective cost at key stages of the application lifecycle. How long does it take for IT to provision a server? What is the duration between unit test complete and production deployment? Where are the typical bottlenecks in the process and what are they costing in terms of dollars and productivity? These basic metrics should be understood and quantified at the outset to measure the impact of a cloud initiative.
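
One simple way to capture those baselines is to compute the gaps between lifecycle timestamps. The stage names and dates below are hypothetical examples.

```python
# Baseline cycle time between lifecycle stages from ordered timestamps.
from datetime import datetime

events = [
    ("server requested",      datetime(2011, 1, 3)),
    ("server provisioned",    datetime(2011, 2, 18)),
    ("unit test complete",    datetime(2011, 3, 1)),
    ("production deployment", datetime(2011, 3, 29)),
]

# Report the elapsed days between each consecutive pair of stages.
for (stage_a, t_a), (stage_b, t_b) in zip(events, events[1:]):
    print(f"{stage_a} -> {stage_b}: {(t_b - t_a).days} days")
```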

What is the internal cost for server capacity? Ask the same question of your public cloud service and look at the differences over time. You may notice that, for all its pennies-per-hour goodness, public cloud economics are not yet suited to long-running, compute-intensive workloads.
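
Here's one hypothetical way to run that comparison, assuming purely for illustration that one $20k server delivers the capacity of eight 15-cents-per-hour instances, amortized over three years.

```python
# Capacity-normalized cost comparison: always-on public cloud vs. an
# internal server. All figures are illustrative assumptions.

cloud_rate = 0.15           # $/hour per cloud instance
instances_per_server = 8    # assumed capacity of one internal server
server_cost = 20000.0       # internal server hardware cost
hours = 3 * 365 * 24        # three-year amortization window

cloud_total = cloud_rate * instances_per_server * hours
print(f"Always-on cloud: ${cloud_total:,.0f} "
      f"vs. internal hardware: ${server_cost:,.0f}")
# ~$31,536 vs. $20,000: steady, long-running workloads can still favor
# internal capacity; bursty, short-lived workloads favor the public cloud.
```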

Expect this to change as public cloud competition intensifies.

2. Private Cloud Experimentation
For the purpose of this exercise, let's assume you've already had some experience with server virtualization. If you haven't, you should; virtualization is a foundation for a private cloud, and it offers an elegantly simple ROI based on dramatic improvements in hardware capacity utilization, with benefits ranging from reduced hardware spending to lower energy consumption and facility expenses.

As you scale your virtualization initiatives and move into a private cloud modality, be certain that your deployment and change processes are fit for the task. Too often, IT organizations ignore the fact that poorly automated deployment processes that may have been suitable in the past will break down in a large-scale computing environment, particularly when you add the dimension of speed.

Stand up the key cloud technologies in a lab environment and build the working prototype of the reference architecture that you defined in the first phase. Leverage open source and commercial tools: not just what you find lying around, but the ones that you think will best serve you at scale.

Remember: This is the dress rehearsal for production usage; don't go through the motions in vain by making this a throwaway experiment.

3. Infrastructure-as-a-Service (IaaS)
IT evolutions happen from the bottom up, which makes sense if you think about the IT stack as a hierarchy of layered foundations. As you roll out a cloud, do so from the bottom up, focusing on compute capacity as the first service that you make available as an elastic utility.

This is often called infrastructure-as-a-service, or the generally unpronounceable IaaS.

Today, development, test and production teams wait too long for server capacity. Your first true step into the enterprise cloud should focus on making this capacity available on-demand on a self-service basis. Most organizations will focus on development and test organizations first, which makes a great deal of sense because it affords you the safety of working out the gremlins with non-production workloads.

The next logical question is: Public or private? The answer is the consultant's classic: "It depends."

It depends on a number of things: How ready is your infrastructure? Has it been virtualized? How automated is your provisioning process? Don't present a menu of services without cooks in the kitchen.

As importantly, consider your demand profile. Are you typically seeing frequent requests for short-running capacity or less frequent requests for long-running capacity? Public cloud economics are typically better suited to short-running workloads. Ultimately, you'll want to create an environment that provides portability between public and private clouds. This will allow you to manage IT as an optimized portfolio of options that dynamically balances across business requirements, policy and price.

With that said, the wishy-washy "it depends" resolves to a more decisive "both."
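
A toy placement heuristic shows how the demand profile and policy questions above resolve workload by workload. The 720-hour threshold (roughly one month) and the attribute names are illustrative assumptions, not a prescription.

```python
# Toy public-vs-private placement based on demand profile and policy.

def place_workload(duration_hours, sensitive_data=False, bursty=False):
    if sensitive_data:
        return "private"    # policy trumps economics
    if bursty or duration_hours < 720:
        return "public"     # pay-per-hour wins for short or spiky demand
    return "private"        # long-running steady state is cheaper in-house

print(place_workload(48, bursty=True))               # public
print(place_workload(8760))                          # private: year-long job
print(place_workload(8760, sensitive_data=True))     # private: policy-bound
```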

4. Platform-as-a-Service (PaaS)
The next logical layer of the stack is the enabling software: operating system, middleware, frameworks and other platform services that your internal customers require to run their applications.

Public PaaS services like Google App Engine, Force.com, Heroku and Microsoft Azure provide a complete abstraction layer for rapidly deploying simple applications built with modern languages and frameworks like Ruby on Rails, PHP and .NET. But not every application can or should run in a public PaaS environment.

One of the most common fears is that PaaS is a walled garden: it locks you into a service that you don't control. Once the application is written to a specific PaaS layer, it can't be moved into another deployment environment. Without the ability to retarget an application, organizations sacrifice considerable pricing leverage. This fear is most pronounced with applications that are strategic and differentiating for the business, and the resistance increases as you move up the stack.

But, for some types of applications, public PaaS makes a whole lot of sense, and it should be factored into your portfolio of cloud options and used for those specific classes of applications.

For other applications, IT will remain the platform provider. But, in the face of such simple public PaaS alternatives, they'll have to dramatically improve the speed and flexibility of the platform delivery model.

Historically, IT has faced two opposing choices: (1) standardize on one plain-vanilla platform and force application owners to conform to these limited specifications; or (2) allow the platform to splinter into dozens of variants to serve diverse application dependencies. Neither is a great choice: the former constrains application innovation and the latter saddles IT with management burden.

Today, application dependencies are growing with the fragmentation of programming languages, the use of open source components and a trend toward avant-garde developer preferences. This means that you probably need to address both requirements, flexibility and control, by deeply automating platform provisioning and lifecycle management and presenting internal customers with a rich set of platform options to address their diverse application requirements.
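
One sketch of that middle path is a curated, automated platform catalog: a controlled set of supported variants rather than one vanilla standard or unmanaged sprawl. The entry names, OS and package versions below are hypothetical.

```python
# Hypothetical curated platform catalog: flexibility through multiple
# supported variants, control through automation and a hard boundary.

PLATFORM_CATALOG = {
    "java-web": {"os": "rhel6", "runtime": "openjdk-6", "middleware": "tomcat6"},
    "ruby-web": {"os": "rhel6", "runtime": "ruby-1.9",  "middleware": "passenger"},
    "php-lamp": {"os": "rhel6", "runtime": "php-5.3",   "middleware": "apache2"},
}

def resolve_platform(name):
    """Look up a supported variant; provisioning would be automated downstream."""
    spec = PLATFORM_CATALOG.get(name)
    if spec is None:
        # Control: off-catalog requests become new catalog entries, not hand-builds.
        raise ValueError(f"'{name}' is not a supported platform; request a new variant")
    return spec

print(resolve_platform("java-web"))
```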

5. Software-as-a-Service (SaaS)
The final step on the journey to enterprise cloud can also be seen as the first: If you think about it, you're probably already using more than one public SaaS application within your company, perhaps for sales or marketing automation, HR, financials or accounting processes. Software-as-a-service has become the mainstream preference for many non-differentiating applications in the enterprise today.

But SaaS isn't only something you consume. In the future, it will almost certainly be the delivery model for many internally built, differentiating enterprise applications. As IT organizations look to more effectively monetize the value of the services they deliver, they'll want to make applications available on demand to internal customers by creating an enterprise app store of sorts. This spreads the cost basis across a broader set of users as applications find affinity in new, previously undiscovered ways, and it delivers more direct business value to internal customers, making IT much more than "ping, power and pipe."

There is no single path for all organizations, and, admittedly, enterprise cloud is far too embryonic to have its own road-tested, timeworn "best practices." But it's always useful to begin with a logical plan that follows a stepwise progression: the guideposts for the journey.

More Stories By Jake Sorofman

Jake Sorofman is chief marketing officer of rPath, an innovator in system automation software for physical, virtual and cloud environments. Contact Jake at [email protected]
