From Legacy to the Cloud: Next Generation Datacenters

Part 1 of 5 – The problem defined

This is the introduction to a five-part series focused on identifying the problem and sharing proven strategies for transforming legacy datacenters and infrastructure into next generation datacenters that deliver enterprise IT in a cloud computing-like operating model.

The article builds on concepts I introduced in my book, “Next Generation Datacenters in Financial Services: Driving Extreme Efficiency and Effective Cost Savings” (Elsevier, June 2009). It continues thoughts I have shared in the past, shaped by recent dialogues with technology executives and the problems they continue to face.


The Problem

One of the fundamental barriers to business today is the intersection of the “day in the life” of the business and how that work executes across the IT supply chain. The barriers caused by this Business-IT chasm include limits on customer interaction, ineffective and inconsistent decision-making, slow reaction to market changes, and the cost of doing business.

This chasm typically emerges across organizations because of disconnected decisions and actions made by both business and IT.

The business can be faulted for not providing insight into how it operates in a day in the life on an end-to-end process and execution basis. Instead, each business function drives its application and IT teams to build systems focused only on its own area of responsibility, creating an incomplete view of the end-to-end business execution that optimal system development requires.

The IT organizations take this siloed input from the business and expand on it in terms of applications, data repositories and infrastructure systems. Conflict occurs both top down and bottom up: application and data teams build their own vertically oriented capabilities, while infrastructure teams define and provision infrastructure based on their datacenter strategy of compute, network, storage, placement, power, cooling and physical characteristics. This amplifies the silo disconnect, resulting in waste, barriers to quality of delivery, cost limitations and operational risk.

Take the scenario described above and multiply it across the various business functions, business processes, business channels, products and services, and you get an input factor of many. Feed that input into the business-to-IT factory conversion process and the output is a set of business-limiting systems, the product of an ineffective and poorly aligned system creation process.

Additional factors facing organizations include legacy technology with limitations in manageability, integration, efficiency, cost and more. On top of this, most organizations have not instituted the discipline to document and maintain an accurate understanding of their systems and how they execute on a daily basis. Then there is the “pain avoidance” strategy, in which firms have pushed off upgrades and enhancements, resulting in unsupported systems that create even greater risk and limit the change strategies firms may wish to implement.
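
What might that documentation discipline look like in practice? The article does not prescribe a format, so the following is only a minimal sketch: a machine-readable record per system capturing the business function it serves, its dependencies and its support status. The field names and the example system are illustrative assumptions, not details drawn from the article.

    # Minimal, hypothetical sketch of a per-system record an organization
    # could maintain to keep a "day in the life" view of its estate.
    # Field names and the example values are assumptions for illustration.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SystemRecord:
        name: str                       # application or infrastructure system
        business_function: str          # which daily business activity it supports
        depends_on: List[str] = field(default_factory=list)  # upstream systems
        vendor_supported: bool = True   # flags the "pain avoidance" risk
        last_reviewed: str = ""         # when this record was last verified

    # Example entry for a hypothetical trade-settlement system.
    settlement = SystemRecord(
        name="trade-settlement-engine",
        business_function="post-trade settlement",
        depends_on=["market-data-feed", "core-ledger"],
        vendor_supported=False,         # unsupported: a candidate for early attention
        last_reviewed="2009-06-01",
    )
    print(settlement)

Even a record this simple, reviewed regularly, gives both business and IT an objective picture of what actually runs today and where the unsupported-system risk sits.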

There is no silver bullet or single strategy that solves everything. There are, however, proven methods and successful strategies that multiple firms have employed to attack the legacy problems in their existing datacenter infrastructures and iterate their way toward next generation datacenters and IT delivery along the lines of cloud computing models.

The Strategy

In Parts 2 through 5 of this series, a four-step strategy for attacking legacy IT will be described. These steps can be summarized as:

  • Insight – a day-in-the-life understanding of both business and IT in terms of execution, historical decisions and actions, and an objective understanding of the actual situation as it exists today.
  • Alignment – creating a common language, taxonomy and system creation model to produce repeatable results (a brief sketch of such a taxonomy follows this list).
  • Control – specific actions, tooling, processes and approaches to ensure the appropriate change occurs and re-occurs successfully.
  • Sustainment – mechanisms and processes instituted as a repeatable discipline to ensure consistent results and avoid falling back into the legacy traps.
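
To make the Alignment step more concrete, here is a minimal sketch of what a shared business-to-IT taxonomy and a repeatable placement rule might look like. The activity names, workload attributes and tiers are hypothetical illustrations assumed for this example, not terms defined in the article.

    # Hypothetical sketch of a shared taxonomy: each business activity is
    # described in terms both business and IT understand, and a single
    # repeatable rule translates that profile into an infrastructure decision.
    alignment_taxonomy = {
        "customer-onboarding": {
            "workload_type": "transactional",
            "latency_target_ms": 500,
            "availability": "business-hours",
        },
        "intraday-risk": {
            "workload_type": "compute-intensive",
            "latency_target_ms": 50,
            "availability": "continuous",
        },
    }

    def placement_tier(profile: dict) -> str:
        """Map a business activity profile to an infrastructure tier,
        giving business and IT one repeatable decision rule."""
        if profile["availability"] == "continuous" or profile["latency_target_ms"] < 100:
            return "premium"
        return "standard"

    for activity, profile in alignment_taxonomy.items():
        print(activity, "->", placement_tier(profile))

The point of the sketch is not the specific attributes but the discipline: once business demand is expressed in a common language, infrastructure decisions stop being siloed judgment calls and become repeatable outputs of the system creation model.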

More Stories By Tony Bishop

Blueprint4IT is authored by a longtime IT and Datacenter Technologist. Author of Next Generation Datacenters in Financial Services – Driving Extreme Efficiency and Effective Cost Savings. A former technology executive for both Morgan Stanley and Wachovia Securities.


