Cloud Computing: A Transition Methodology

Roadmap to the Cloud

Cloud computing refers to the practice of leveraging third-party computing resources, such as network grids and server farms, to extend IT capabilities and reduce the cost of ownership. This practice offers numerous potential benefits to organizations that want to centralize software and data storage management while eliminating the costly overhead of in-house hardware and software maintenance and the personnel required to build, support, and maintain enterprise computing solutions.

Cloud computing has emerged as a new computing paradigm that gathers massive numbers of computers in centralized data centers to deliver Web-based applications, application platforms, and services via a utility model. The primary difference between cloud computing and previous service models (e.g., outsourcing or data center consolidation) is scale. The premise is that as the scale of the cloud infrastructure increases, the incremental time and cost of application delivery trends toward zero.

Cloud computing allows users to dynamically and remotely control processing, memory, data storage, network bandwidth, and specialized business services from pools of resources, providing the ability to specify and deploy computing capacity on-demand. If there's a need to scale up to accommodate sudden demand, users can add the necessary resources using a Web browser. The large data center can provide similar services to multiple external customers (multi-tenancy), leveraging its shared resources to increase economies of scale and reducing service costs.
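The on-demand scaling described above reduces, at its simplest, to a capacity rule: measure demand, divide by per-instance capacity, and adjust the pool. The sketch below is illustrative only; the function name and the per-instance capacity figure are assumptions, and real providers expose this logic through auto-scaling APIs rather than hand-rolled code.

```python
# Toy capacity rule for on-demand scaling. All names and numbers here are
# illustrative assumptions, not any provider's actual API or limits.
import math

def instances_needed(requests_per_sec: float,
                     capacity_per_instance: float = 100.0,
                     min_instances: int = 1) -> int:
    """Return the number of instances required to absorb the load,
    never dropping below a configured floor."""
    needed = math.ceil(requests_per_sec / capacity_per_instance)
    return max(min_instances, needed)

print(instances_needed(250))  # demand spike: scale up to 3 instances
print(instances_needed(10))   # quiet period: scale down to the floor of 1
```

The same rule run in reverse (releasing instances as demand falls) is what turns capacity from a fixed cost into a metered one.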

Although cloud computing is in its early stages and definitions vary greatly, the underlying technologies today are consistent. These technologies include the following:

  • Grid computing: A form of distributed parallel computing whereby processes are split up to leverage the available computing power of multiple CPUs acting in concert.
  • Utility computing: A model of purchasing computing capacity, such as CPU, storage, and bandwidth, from an IT service provider, billed based on consumption.
  • Virtualization technologies: Virtual servers and virtual private networks provide the ability to quickly reconfigure available resources on-demand and provide the necessary security assurance.
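The grid-computing idea in the list above, splitting one large job across many workers and combining the partial results, can be sketched in a few lines. This is a toy illustration: threads in a single process stand in for the separate CPUs or machines of a real grid, and the function names are my own.

```python
# Toy illustration of grid-style parallelism: split one large job into
# chunks, run the chunks concurrently, and combine the partial results.
# Threads here stand in for the separate machines/CPUs of a real grid.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    """Work done by a single 'grid node': sum of squares over its chunk."""
    return sum(x * x for x in chunk)

def grid_sum_of_squares(data, workers=4):
    """Split `data` into roughly equal chunks and reduce the results."""
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

print(grid_sum_of_squares(list(range(1000))))
```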

There are a number of service offerings and implementation models under the cloud computing umbrella, each with associated pros and cons. These models can be grouped into the following three categories: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS). These models target varying levels of services, ranging from general infrastructure services, such as operating systems or database services provided by IaaS vendors to targeted functional services provided by SaaS vendors (e.g., customer relationship management from Salesforce.com).

The various players in the current market can be differentiated into the following two categories:

  • Cloud Providers: Offer one or more of the cloud models (i.e., IaaS, PaaS, or SaaS) as a service. Examples include Amazon and Google.
  • Cloud Enablers: Provide technology or have adapted existing technology to run on or support cloud computing. A recent example is Oracle's partnership with Amazon to add Oracle 11g database support (technology and licensing) to Amazon's existing EC2 services offering.

We recognize that the transition to a cloud computing paradigm presents a number of challenges. Issues associated with information security, reliability, and service level agreements challenge mission-critical systems. Furthermore, we've identified what we consider the key characteristics of a cloud computing environment:

  • Minimized capital expenditure - infrastructure is provider-owned
  • Device and location independence
  • Multi-tenancy - enables resource and cost sharing among a large pool of users
  • Monitored and consistent performance - can be affected by high network load
  • Reliability via redundant sites - allows for business continuity and disaster recovery
  • Scalability to ever-changing user demands - results in lower costs
  • Improved security from centralized data and increased security-focused resources

My experience has emphasized the importance of "architecting for the cloud" versus simply deploying system components to the cloud to ensure that business requirements are met. Software and systems that are not designed to take advantage of the scalability and parallelism of the cloud will likely not achieve the full benefit of a cloud computing environment. My experience has also highlighted the need to transition the role of IT managers to brokers and negotiators of IT services rather than day-to-day managers of the operating platform.

My analysis of the benefits and challenges presented by the cloud computing paradigm has resulted in the identification of the following three cloud variations:

  • Commercial Cloud: Deployment to one or more of the commercial cloud providers (e.g., Amazon or Google). It could be a simple integration with an existing SaaS service to support a subset of application functionality or could consist of a complete migration to the cloud. This may be appropriate for non-mission-critical systems (e.g., < 99.99% availability) that do not process sensitive data or where sensitive data won't traverse system boundaries to the cloud.
  • On-Premises (Private) Cloud: An on-premises cloud could be created to provide some of the benefits of cloud computing. Booz Allen selected a similar option in our implementation for the FBI to address the security concerns associated with a classified environment; however, the multi-tenancy aspect is then limited to a single agency. Consequently, this option doesn't provide the massive scalability that's characteristic of a true cloud.
  • Government Cloud: The creation of one or more government cloud computing environments. These environments would be designed specifically to address the concerns that are unique to the government. For civilian agencies, this cloud could be an extension of the current eGovernment lines of business (LoBs).

Though many cloud providers proclaim that moving existing applications to the cloud is seamless and doesn't require code changes, my experience has shown that greater analysis and re-engineering are required to achieve the full benefits of a cloud computing environment. Complexities remain that organizations must consider when moving to the cloud, and careful planning is essential.

Based on lessons learned from previous efforts, I developed a phased Cloud Computing Transition Methodology designed to address the issues and risks associated with migrating an existing system to the cloud. Figure 1 provides an overview of this approach.

The Cloud Strategy and Planning phase (Phase 1) consists of three steps designed to ensure that all aspects of moving to a cloud environment have been appropriately evaluated and agreed upon. The three steps are:

1. Conduct a Strategic Diagnostic
The objective of the strategic diagnostic is to identify the major factors influencing the decision to move to the cloud environment and determine the best approach. During the diagnostic step, we will validate the key objectives of moving to the cloud and the "pain points" that the organization wants to address. The drivers during this step include reducing day-to-day risk, reducing the level of involvement in day-to-day IT management, eliminating overhead, achieving better productivity, reducing or eliminating the cost of adding additional users, and protecting the information system from misuse and unauthorized disclosure.

The primary areas to be addressed during the diagnostic step are security and privacy, technical, business and customer impact, economics, and governance and policy. We will evaluate the implications of moving to the cloud environment in each of these categories and document the key issues and considerations revealed during the diagnostic step. The outcome of this diagnostic will be sufficient analysis to support a "go/no go" decision to move to a cloud computing environment and the development of an agreed-on cloud strategy.

2. Define a Cloud Strategy
To define a cloud strategy, the organization should document a complete understanding of each component of its existing architecture. The analysis examines the required user services, processing services, information security, application software standards, and integrated software down to each component. This can be achieved by leveraging existing architectural documents and ensuring the appropriate level of detail is documented, including system-to-system interfaces, data storage, forms processing and reporting, distributed architecture, access control (authentication and authorization), and security and user provisioning.

3. Create an Implementation Plan
The implementation plan identifies the roles and responsibilities, operating model, major milestones, Work Breakdown Structure (WBS), risk plan, dependencies, and quality control mechanisms to implement the cloud strategy successfully.

After completing Phase 1, the organization will have fully analyzed its options, identified all requirements, thoroughly assessed short-term and long-term costs and benefits, gained executive governance approval, and socialized the solution with stakeholders (including oversight entities). This phase ensures that the organization will have a high degree of confidence in successfully moving to the cloud environment, reap the expected benefits, not constrain future functionality, and avoid hidden future costs.

The Cloud Deployment phase (Phase 2) focuses on implementing the strategy developed in the planning phase. Leveraging the various cloud models helps identify the most effective solution(s) based on the existing organization architecture. Some of the criteria used in recommending a vendor are the vendor's primary service model (i.e., infrastructure, platform, or software), business model, how much existing technology it can leverage, end-user experience, and the risks involved in porting to the cloud. Deploying to the cloud involves taking the decision analysis from Phase 1 as input and proceeding with the following four steps:

1.  Assess/Select the Cloud Provider(s)
The assessment step deals with analyzing the components of the architecture and identifying the optimal vendor offerings. One of the main criteria in selecting a provider is its ability to leverage existing technologies. For example, current Oracle customers can use their software licenses on Amazon's EC2 cloud, which allows them to reuse existing technologies and simply move current databases to the cloud. In addition, Oracle lets customers deploy on the Amazon cloud through Amazon Machine Images (AMI). This way, a new virtual machine is ready for use with the Oracle database loaded in a matter of minutes.
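To make the AMI-based deployment above concrete, the sketch below builds the parameter set for an EC2 launch request. The AMI ID is a placeholder, the helper function is my own, and no call is actually made; with the boto3 library, a dict of this shape could be passed to `ec2_client.run_instances(**params)`.

```python
# Illustrative only: the shape of an EC2 launch request for a prebuilt
# Oracle Database AMI. The AMI ID is a placeholder, not a real image,
# and the helper function is a hypothetical convenience wrapper.
def build_launch_request(ami_id: str, instance_type: str = "m1.large") -> dict:
    """Assemble the keyword arguments for an EC2 run-instances call."""
    return {
        "ImageId": ami_id,            # the prebuilt (e.g., Oracle) image
        "InstanceType": instance_type,
        "MinCount": 1,                # launch exactly one instance
        "MaxCount": 1,
    }

params = build_launch_request("ami-0123456789abcdef0")
print(params["InstanceType"])
```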

The assessment step captures three important inputs: the current organization technical architecture, objectives, and the vendor selection criteria that are tailored to meet the organizational objectives. The vendor assessment results in recommendations on the most appropriate cloud vendor, assists in selecting the most effective cloud model, develops deployment strategies, highlights reusable components, and identifies the security options for the cloud architecture.

2. Establish Service Level Agreements (SLAs)
Unlike traditional computing models, where most, if not all, of the components are on-premises and there's direct control over services, cloud computing involves handing off the system to a third-party vendor or vendors. In this case, SLAs address concerns like performance, downtime, provisioning, security, backup, and recovery, and ensure that objectives and established benchmarks are being met. SLAs formalize the contractual agreement between the organization and the selected vendor(s) and will highlight the offerings of the vendor(s), so the expectations on both sides are clear.

A typical SLA will identify service levels for the following:

  • Retention Time: During an emergency or outage, how long the organization can sustain its operations
  • Uptime: The percentage of time that the system will be available (e.g., 99.9%) and the period over which the measurement is taken
  • Performance and throughput
  • Security and data protection: Where is the data stored? What precautions does the vendor take to ensure the data isn't tampered with?
  • The level of support offered (e.g., 24/7)
  • Service credits if the SLA isn't met
  • Specific concerns, such as guarantees of data protection and privacy when foreign entities are co-hosted
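An uptime percentage in an SLA translates directly into a downtime budget, which is worth computing before signing. A quick sketch (the function name and the 30-day measurement period are my own assumptions):

```python
# Convert an SLA uptime guarantee into an allowed-downtime budget.
# The 30-day default measurement period is an illustrative assumption;
# check how the vendor's SLA actually defines the period.
def allowed_downtime_minutes(uptime_percent: float,
                             period_hours: float = 30 * 24) -> float:
    """Downtime budget, in minutes, for a given uptime guarantee
    over a measurement period (default: a 30-day month)."""
    return period_hours * 60 * (1 - uptime_percent / 100)

# A 99.9% guarantee over a 30-day month leaves about 43.2 minutes of
# permitted downtime; 99.99% leaves only about 4.3 minutes.
print(round(allowed_downtime_minutes(99.9), 1))
```

The difference between "three nines" and "four nines" is roughly a factor of ten in permitted downtime, which is why the availability target drives both vendor selection and price.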

3. Execute Transition
The execution step involves the actual transition of components identified in earlier steps. Based on the number and type of components that are being ported to the cloud, execution can be an iterative process. One of the primary steps in execution is to establish multiple environments, such as development, testing, production, and training. The preliminary questionnaire to set up an environment can include items like the number of instances required, memory, storage space, and basic software that needs to be installed.
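The environment questionnaire described above can be captured as a simple sizing sheet. The fragment below is a hypothetical sketch; the environment names, counts, and software list are illustrative placeholders, not a vendor format.

```yaml
# Hypothetical environment sizing sheet (all names and values illustrative)
environments:
  development:
    instances: 2
    memory_gb: 4
    storage_gb: 100
    software: [linux, java, oracle-11g]
  testing:
    instances: 2
    memory_gb: 8
    storage_gb: 250
    software: [linux, java, oracle-11g]
  production:
    instances: 8
    memory_gb: 16
    storage_gb: 1000
    software: [linux, java, oracle-11g]
```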

4. O&M and Help Desk
The level of O&M and help desk support provided by the cloud vendor may be driven by the selected cloud model(s) and will be determined by the SLAs previously established. If an IaaS model is chosen, where the vendor provides only the hardware resources and the organization installs the software components and deploys the applications, the maintenance provided by the vendor will be limited. Different vendors provide different support models at different levels. It's important to identify which essential support functions vendors will provide for successful continuity of operations (COOP) and to understand how operations will be restored in a timely manner at the backup site.

Deploying cloud computing solutions requires both a short-term and a long-term strategy. For example, besides the improved scalability and reliability provided by the cloud, which organizations may achieve through the initial transition, re-engineering some components to take advantage of the parallelism provided by the cloud could improve system performance and overall scalability further. Transitioning an existing system to the cloud requires an approach that addresses not only the technical aspects of cloud computing but also considers the objectives of the organization, the constraints imposed by the existing system, and the impact to its existing customers. As an experienced cloud computing strategist, I believe that corporations and even the government sector are prepared to consider the complexities of cloud computing technologies.

More Stories By Rod Fontecilla

Dr. Rod Fontecilla is a Principal at Booz Allen Hamilton. He has over 25 years of professional experience in the design, development, implementation, and management of large information management systems. He is an expert in the field of computer science within the federal government and private sectors. Dr. Fontecilla holds a Ph.D. in Applied Mathematics and served as a professor in the University of Maryland Computer Science Department before entering the consulting industry.
