
From Legacy to the Cloud: Next Generation Datacenters

Part 1 of 5 – The problem defined

This is the introduction to a five-part series focused on identifying the problem and sharing proven strategies for transforming legacy datacenters and infrastructure into next generation datacenters that operate in a cloud-computing-like model for enterprise IT.

The series builds on concepts I introduced in my June 2009 book, "Next Generation Datacenters in Financial Services: Driving Extreme Efficiency and Effective Cost Savings" (Elsevier). It continues thoughts I have shared in the past, shaped by recent dialogues with technology executives and the problems they continue to face.


The Problem

One of the fundamental barriers to business today is the intersection of the "day in the life" of the business and how that day executes across the IT supply chain. The barriers caused by this Business-IT Chasm include limits on customer interaction, ineffective and inconsistent decision-making, slow reaction to market changes, and a higher cost of doing business.

This chasm typically arises across organizations because of disconnected decisions and actions made by both business and IT.

The business can be faulted for not providing insight into how it operates day to day on an end-to-end process and execution basis. Instead, each business function drives its application and IT teams to build systems focused only on its own area of responsibility, creating an incomplete view of the business execution necessary for optimal system development.

IT organizations take this siloed input from the business and expand on it in terms of applications, data repositories and infrastructure systems. A conflict occurs both top down and bottom up: application and data teams build their own vertically oriented capabilities, while infrastructure teams define and provision infrastructure based on their datacenter strategy of compute, network, storage, placement, power, cooling and physical characteristics. This amplifies the silo disconnect, resulting in waste, barriers to quality of delivery, cost limitations and operational risk.

Multiply the scenario described above across the various business functions, business processes, business channels, products and services, and the input factor becomes very large; for example, ten business functions, five channels and twenty products already yield a thousand distinct contexts. Feed that input into the business-to-IT factory conversion process, and the output is a business-limiting system, the product of an ineffective and poorly aligned system creation process.

Organizations also face a legacy-technology problem, with limitations ranging from manageability and integration to efficiency and cost. On top of this, most organizations have not implemented the discipline to document and maintain an accurate understanding of their systems and how they execute on a daily basis. Then there is the "pain avoidance" strategy, in which firms have pushed off upgrades and enhancements, resulting in unsupported systems that create even greater risk and limit the change strategies firms may wish to implement.
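That documentation discipline can start small and be automated. Below is a minimal sketch, in Python and not from the article, of the kind of repeatable system record-keeping the author argues most firms lack: a script that captures a point-in-time record of a host's basic facts and appends it to a log so undocumented drift becomes visible over time. The log file name and the set of fields are illustrative assumptions.

import json
import platform
import socket
from datetime import datetime, timezone

def snapshot_host():
    """Capture a point-in-time record of one host's basic facts."""
    return {
        "hostname": socket.gethostname(),
        "os": platform.platform(),
        "architecture": platform.machine(),
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    # Append each snapshot as line-delimited JSON; diffing successive
    # entries reveals change that nobody wrote down.
    with open("host_inventory.jsonl", "a") as log:
        log.write(json.dumps(snapshot_host()) + "\n")

Run on a schedule across an estate, even a record this small provides an objective baseline that documentation-by-memory never will.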

There is no silver bullet or single strategy that solves everything. There are, however, proven methods and successful strategies that multiple firms have employed to attack the legacy problems in their existing datacenter infrastructure and iterate their way toward next generation datacenters and IT delivery in cloud computing models.

The Strategy

In Parts 2 through 5 of this series, a four-step strategy for attacking legacy IT will be described. These steps can be summarized as:

  • Insight – a day-in-the-life understanding of both business and IT in terms of execution, historical decisions and actions, and an objective view of the actual situation as it exists today (see the measurement sketch after this list).
  • Alignment – creating a common language, taxonomy and system creation model to produce repeatable results.
  • Control – specific actions, tooling, processes and approaches to ensure the appropriate change occurs and recurs successfully.
  • Sustainment – mechanisms and processes instituted as a repeatable discipline to ensure consistent results and avoid falling back into the legacy traps.
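The Insight step, in particular, rests on measurement rather than recollection. As one illustration, here is a minimal sketch, assuming the third-party psutil package and sampling parameters chosen purely for demonstration, of capturing how a host actually behaves across a business day:

import time
import psutil  # third-party package: pip install psutil

def sample_day_in_the_life(samples, interval_seconds):
    """Record CPU and memory utilization at fixed intervals."""
    readings = []
    for _ in range(samples):
        readings.append({
            "timestamp": time.time(),
            "cpu_percent": psutil.cpu_percent(interval=1),  # 1-second average
            "memory_percent": psutil.virtual_memory().percent,
        })
        time.sleep(interval_seconds)
    return readings

if __name__ == "__main__":
    # Short values for demonstration; a real baseline would sample
    # every few minutes across full business days.
    for reading in sample_day_in_the_life(samples=3, interval_seconds=5):
        print(reading)

Aggregated across hosts and correlated with the business calendar (market open, batch windows, month-end), readings like these replace "we think this system is busy" with the objective, shared picture that Alignment, Control and Sustainment depend on.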

More Stories By Tony Bishop

Blueprint4IT is authored by a longtime IT and datacenter technologist, the author of Next Generation Datacenters in Financial Services: Driving Extreme Efficiency and Effective Cost Savings, and a former technology executive at both Morgan Stanley and Wachovia Securities.


