IT Change Management: The Foundation to the Cloud

Getting a better handle on inventory management and resource relationships

IT shops continually struggle to keep resource documentation up to date. Too often, IT departments seem resigned to accepting mediocre results, as if the world conspires against them. Resource tracking is a prime example. Fundamentally, the various parts of the organization refuse to comply with the manual processes required to keep a configuration management database (CMDB) current, and that failure is the number one barrier to moving to any form of cloud-like delivery model.

Furthermore, advancements in virtualization have made the situation worse: software resources have been severed from physical resources, and traditional tracking methods are further confounded as virtual resources migrate between physical hosts over time.

This reinforces the protest that an application's dependency profile is too complicated ever to be 100 percent accurate. Over time, IT has simply accepted that it is impractical to keep the physical architecture records of systems up to date manually. Instead of maintaining the documentation, teams regenerate it when needed through a time-consuming trial-and-error process, and by the time it's done, it's hopelessly out of date.

Not only is this an inefficient use of people's time, it's no way to manage some of the most important resources in an organization. These IT resources have become a digital backbone on which most organizations depend for their very survival. A cavalier attitude toward managing IT resource inventory is dangerous at best and can prove catastrophic.

For instance, it's only a matter of time until the next large outage occurs. Despite the best-laid disaster recovery plans, technology has an uncanny ability to fail in unexpected ways; the combinatorial possibilities are immense. Having accurate application maps can make the difference between being the hero and being the goat. If individual self-preservation isn't motivation enough, there are other practical organizational reasons for keeping this information available: datacenter moves, chargeback models, and consumption analysis, to name a few.

Generally, this lack of inventory control has been accepted as the status quo despite the clear risks. In no other industry would such poor controls be acceptable. Imagine Wal-Mart's CEO explaining to Wall Street analysts that the company doesn't know what inventory its stores hold; if such a story broke, he'd be out of a job before the ink dried. IT management shouldn't be allowed to get away with it either.

This is exceedingly frustrating, because application-dependency mapping tools have become mature offerings. An application-dependency mapping tool uncovers the otherwise hidden relationships between applications and the infrastructure resources they run on. In our experience as former IT practitioners, we have implemented such capabilities using tools from CA, BMC, and IBM.
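
To make this concrete, below is a minimal sketch of what dependency discovery boils down to: enumerating a host's live TCP connections and recording which local process talks to which remote endpoint. The tools named above do far more (agentless scanning, protocol fingerprinting, cross-host correlation), and this example is not based on any of their APIs; it is a from-scratch illustration using the open source psutil library.

    # Minimal dependency-discovery sketch: map local processes to the
    # remote endpoints they talk to. Requires the third-party psutil
    # package; may need elevated privileges to see other users' PIDs.
    import psutil

    def discover_dependencies():
        """Return a set of (process_name, remote_ip, remote_port) edges."""
        edges = set()
        for conn in psutil.net_connections(kind="tcp"):
            if conn.status != psutil.CONN_ESTABLISHED or not conn.raddr:
                continue  # skip listeners and half-open sockets
            try:
                name = psutil.Process(conn.pid).name() if conn.pid else "unknown"
            except psutil.NoSuchProcess:
                continue  # process exited between snapshot and lookup
            edges.add((name, conn.raddr.ip, conn.raddr.port))
        return edges

    if __name__ == "__main__":
        for proc, ip, port in sorted(discover_dependencies()):
            print(f"{proc} -> {ip}:{port}")

Run something like this periodically across a fleet, feed the edges into a graph, and you have the skeleton of an application map; the hard part the commercial products solve is doing this continuously, agentlessly, and at scale.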

Our main goal was to get a better handle on inventory management and resource relationships for a datacenter move, but we quickly realized that there were many additional benefits we could take advantage of:

  • Identification of idle servers for reclamation (we were shocked at how many; see the sketch following this list)
  • Identification of legacy infrastructure that poses a security threat (such as Windows 95/98)
  • Identification of production servers using development or test resources
  • Identification of changes to an environment over time, especially useful for figuring out "what changed" for debugging purposes
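
As an illustration of the idle-server point, once scan results land in a repository, reclamation candidates become a simple query. The record layout and the zero-connections test below are assumptions invented for this sketch, not any vendor's actual schema:

    # Hypothetical sketch: flag servers whose every scan since a cutoff
    # date showed zero active connections. Records are (hostname,
    # scan_time, active_connection_count); the layout is illustrative only.
    from datetime import datetime

    scans = [
        ("app-01", datetime(2010, 5, 1), 42),
        ("app-01", datetime(2010, 6, 1), 38),
        ("old-db-07", datetime(2010, 5, 1), 0),
        ("old-db-07", datetime(2010, 6, 1), 0),
    ]

    def idle_servers(scans, since):
        """Hosts whose every scan at or after `since` showed zero connections."""
        seen, busy = set(), set()
        for host, when, conns in scans:
            if when < since:
                continue
            seen.add(host)
            if conns > 0:
                busy.add(host)
        return seen - busy

    print(idle_servers(scans, datetime(2010, 4, 1)))  # {'old-db-07'}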

In addition, these tools provide a repository that supports a cradle-to-grave resource management policy.

What we found especially helpful was that these tools log all changes that occur to a server over time, not just the results of the last scanning sweep, even when a resource is unavailable for an extended period.
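
Here is a hedged sketch of how that change history pays off when debugging. If each scanning sweep stores a snapshot of a server's attributes, "what changed" reduces to a diff between any two sweeps; the attribute names are invented for illustration:

    # Diff two scan snapshots of the same server to answer "what changed?"
    def diff_snapshots(before, after):
        """Return attributes added, removed, and changed between two scans."""
        added = {k: after[k] for k in after.keys() - before.keys()}
        removed = {k: before[k] for k in before.keys() - after.keys()}
        changed = {k: (before[k], after[k])
                   for k in before.keys() & after.keys()
                   if before[k] != after[k]}
        return added, removed, changed

    monday = {"os": "RHEL 5.4", "java": "1.5.0_22", "app": "2.3.1"}
    friday = {"os": "RHEL 5.5", "java": "1.5.0_22", "app": "2.3.2"}

    _, _, changed = diff_snapshots(monday, friday)
    print(changed)  # {'os': ('RHEL 5.4', 'RHEL 5.5'), 'app': ('2.3.1', '2.3.2')}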

Bottom line: to institute change that exploits cloud-like delivery models in the enterprise, firms must take firm hold of their IT change management.

About the Author

Tony Bishop writes Blueprint4IT and is a longtime IT and datacenter technologist. He is the author of Next Generation Datacenters in Financial Services: Driving Extreme Efficiency and Effective Cost Savings, and a former technology executive at both Morgan Stanley and Wachovia Securities.
