IT Change Management: The Foundation to the Cloud

Getting a better handle on inventory management and resource relationships

IT shops continually struggle to keep resource documentation up to date. Too often, IT departments seem resigned to accepting mediocre results, as if the world conspires against them. Resource tracking is a prime example: fundamentally, the various parts of the organization refuse to follow the manual processes required to keep a CMDB (configuration management database) updated. This is the number one barrier to moving to any form of cloud-like delivery model.

Furthermore, advancements in virtualization have made the situation worse by severing software resources from physical resources. Traditional resource-tracking methods become even less reliable as virtual resources change their physical location over time.

This reinforces the protest that an application's dependency profile is too complicated to document with 100 percent accuracy. Over time, IT has simply accepted that it is impractical to keep the physical architecture records of systems up to date manually. Instead of maintaining the documentation, teams regenerate it when needed through a time-consuming, trial-and-error process, and by the time it's done, it's hopelessly out of date.

Not only is this an inefficient use of people's time, it's also no way to manage some of the most important resources in an organization. These IT resources have become the digital backbone on which most organizations depend for their very survival. A cavalier attitude toward managing IT resource inventory is dangerous at best and potentially catastrophic at worst.

For instance, it's just a matter of time until the next large outage occurs. Despite the best-laid disaster recovery plans, technology has an uncanny ability to fail in unexpected ways; the combinatorial possibilities are immense. Having accurate application maps can make the difference between being the hero and the goat. If individual self-preservation isn't motivating enough, there are other practical organizational reasons for keeping this information available: datacenter moves, chargeback models, and consumption analysis, to name a few.

Generally, this lack of inventory control has been accepted as the status quo despite the clear risks. In no other industry would such poor controls be acceptable. Imagine Wal-Mart's CEO explaining to Wall Street analysts that the company has no clear picture of its store inventory. If such a story broke, he'd be out of a job before the ink dried. IT management shouldn't be allowed to get away with it either.

This is exceedingly frustrating because application-dependency mapping tools have become mature offerings. An application-dependency mapping tool uncovers the otherwise hidden relationships between applications and infrastructure resources. As former IT practitioners, we have implemented such capabilities using tools from CA, BMC, and IBM.

Our main goal was to get a better handle on inventory management and resource relationships for a datacenter move, but we quickly realized that there were many additional benefits we could take advantage of (sketched in code after the list below):

  • Identification of idle servers for reclamation (we were shocked at how many)
  • Identification of legacy infrastructure that poses a security threat (such as Windows 95/98)
  • Identification of production servers using development or test resources
  • Identification of changes to an environment over time, especially useful for figuring out "what changed" for debugging purposes
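
To make the idea concrete, here is a minimal sketch in Python of the kind of correlation these checks boil down to. It is not how any vendor's product works internally; the host names, roles, OS strings, and connection data are all hypothetical:

    # Minimal sketch of application-dependency mapping: correlate observed
    # network connections with a host inventory to build a dependency map,
    # then flag idle servers, legacy operating systems, and production
    # servers that depend on development resources.
    # All hosts, roles, OS strings, and connections are hypothetical.
    from collections import defaultdict

    # Inventory as a discovery sweep might record it: host -> attributes.
    inventory = {
        "app01":  {"os": "Windows Server 2008", "role": "production"},
        "db01":   {"os": "Windows Server 2008", "role": "production"},
        "dev07":  {"os": "Windows Server 2008", "role": "development"},
        "old42":  {"os": "Windows 98",          "role": "unknown"},
        "spare3": {"os": "Windows Server 2003", "role": "unknown"},
    }

    # Observed connections (source host, destination host), e.g., collected
    # by an agent from periodic netstat or flow-data sweeps.
    connections = [
        ("app01", "db01"),
        ("app01", "dev07"),  # production talking to a development resource
    ]

    # Build the dependency map.
    depends_on = defaultdict(set)
    for src, dst in connections:
        depends_on[src].add(dst)

    # Idle servers: in inventory but never seen on the wire.
    active = {host for pair in connections for host in pair}
    idle = set(inventory) - active
    print("Idle candidates for reclamation:", sorted(idle))

    # Legacy infrastructure that poses a security threat.
    legacy = [h for h, a in inventory.items()
              if a["os"].startswith(("Windows 95", "Windows 98"))]
    print("Legacy hosts:", legacy)

    # Production servers using development or test resources.
    for src, dsts in depends_on.items():
        if inventory[src]["role"] == "production":
            bad = [d for d in dsts if inventory.get(d, {}).get("role") == "development"]
            if bad:
                print(f"{src} (production) depends on development resources: {bad}")

A real discovery tool gathers this data continuously from network flows and agents rather than from hard-coded lists, but the bookkeeping behind the benefits listed above is no more exotic than this.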

In addition, these tools provide a repository that supports a cradle-to-grave resource management policy.

What we found especially helpful is that these tools log every change that occurs to a server over time, not just the results of the last scanning sweep, even if a resource is unavailable for an extended period.
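
Conceptually, answering "what changed" reduces to diffing timestamped snapshots of a server's recorded configuration. A minimal sketch of that idea, with hypothetical attribute names and dates, might look like this:

    # Minimal sketch of change tracking: keep timestamped snapshots of a
    # server's recorded configuration and diff any two to answer "what changed".
    # Attribute names, values, and dates are hypothetical examples.
    from datetime import date

    # Snapshots as successive discovery sweeps might record them, keyed by scan date.
    snapshots = {
        date(2010, 6, 1): {"os_patch": "SP1", "memory_gb": 16, "app_version": "2.3"},
        date(2010, 7, 1): {"os_patch": "SP2", "memory_gb": 32, "app_version": "2.3"},
    }

    def diff(old: dict, new: dict) -> dict:
        """Return {attribute: (old_value, new_value)} for every attribute that differs."""
        keys = old.keys() | new.keys()
        return {k: (old.get(k), new.get(k)) for k in keys if old.get(k) != new.get(k)}

    earlier, later = sorted(snapshots)
    for attr, (before, after) in diff(snapshots[earlier], snapshots[later]).items():
        print(f"{attr}: {before} -> {after}")

Keeping every historical snapshot, rather than overwriting the record on each sweep, is what lets the repository answer that question even for resources that have been offline for months.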

Bottom line: to institute change that exploits cloud-like delivery models in the enterprise, firms must get a firm grip on their IT change management.

More Stories By Tony Bishop

Blueprint4IT is authored by a longtime IT and Datacenter Technologist. Author of Next Generation Datacenters in Financial Services – Driving Extreme Efficiency and Effective Cost Savings. A former technology executive for both Morgan Stanley and Wachovia Securities.
