
@DevOpsSummit: Blog Feed Post

Quantifying the Value of DevOps

DevOps consists of tools, processes, and the cultural change needed to apply both across an organization

In my experience, when you work in IT the executive team rarely focuses on your team until you experience a catastrophic failure; once you do, you are the center of attention until services are back to normal. It is easy to ignore the background work IT teams spend most of their days on just to keep everything running smoothly. In this post I will discuss how to quantify the value of DevOps to organizations. The notion of DevOps is simple: developers working together with operations to get things done faster, in an automated and repeatable way. When the process is working, the cycle looks like this:

[Figure: the DevOps cycle]

DevOps consists of tools, processes, and the cultural change needed to apply both across an organization. In my experience, in large companies this is usually driven from the top down, while in smaller companies it emerges organically from the bottom up.

When I started in IT, I worked as a NOC engineer for a datacenter. Most of my days were spent helping colocation customers install or upgrade their servers. If one of our managed servers failed, it was my responsibility to fix it as fast as possible. Other days were spent as a consultant helping companies manage their applications. This was when most web applications were simple, with only two servers, a database and an app server:

[Figure: a simple monolithic application with a database server and an app server]

As I grew in my career I moved to the engineering side and worked on developing very large web applications. The applications I worked on were much more complex than what I was used to in my datacenter days. It is not just the architecture and code that are more complex; the operational overhead of managing such large infrastructure requires an evolved attitude and better tools.

[Figure: a large distributed application architecture]

When I built and deployed applications, we had to build our servers from the ground up. In the age of the cloud, you get to choose which problems you want to spend time solving. If you choose an Infrastructure as a Service provider, you own not only your application and data, but the middleware and operating system as well. If you pick a Platform as a Service, you only have to support your application and data. The traditional on-premises option, while giving you the most freedom, also carries the responsibility for managing the hardware, network, and power. Pick your battles wisely:

[Figure: the management responsibilities you own with on-premises, IaaS, and PaaS]

As an application owner on a large team, you find out quickly how well a team works together. In the pre-DevOps days, the typical process to resolve an operational issue looked like this:

[Figure: the pre-DevOps ticket workflow]

  1. Support creates a ticket and assigns a relative priority
  2. Operations begins to investigate and blames the developers
  3. Developers say it isn't possible since it works in development, and bounce the ticket back to operations
  4. Operations escalates the issue to management until operations and developers are working side by side to find the root cause
  5. Both argue that the issue isn't as severe as stated, so they reprioritize it
  6. Management hears about the ticket and assigns it Severity or Priority 1
  7. Operations and Developers find the root cause together and fix the issue
  8. Support closes the ticket

Many times we wasted hours investigating support tickets that weren't actually issues. We investigated them because we couldn't rely on our health checks and monitoring tools to tell us whether an issue was valid. Either the ticket couldn't be reproduced or the problem was with a third party; either way, we had to invest the time required to figure it out. Not once did we calculate how much the false positives cost the company in man-hours.
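That cost is easy to estimate after the fact. A minimal sketch, using entirely hypothetical ticket volumes and rates (substitute your own numbers):

```python
# Back-of-the-envelope estimate of what false-positive tickets cost.
# All figures below are illustrative assumptions, not real data.

def false_positive_cost(tickets_per_month: int,
                        false_positive_rate: float,
                        hours_per_investigation: float,
                        loaded_hourly_rate: float) -> float:
    """Monthly dollar cost of investigating tickets that turn out
    not to be real issues."""
    false_positives = tickets_per_month * false_positive_rate
    return false_positives * hours_per_investigation * loaded_hourly_rate

# 200 tickets/month, 40% false positives, 2 hours each, $75/hour loaded cost
monthly = false_positive_cost(200, 0.40, 2.0, 75.0)
print(f"False positives cost roughly ${monthly:,.0f}/month")
```

Even with modest inputs like these, the number is large enough to justify investing in better monitoring.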

[Figure: the ticket workflow with modern monitoring tools]

With better application monitoring tools we were able to reduce the number of false positives and the money the company wasted on them.

How much revenue did the business lose?

Not once was I able to articulate how much money our team saved the company by adding tools and improving processes. In the age of DevOps, there are plenty of tools in the toolchain that make those savings measurable.

By adopting infrastructure automation with tools like Chef, Puppet, and Ansible, you can treat your infrastructure as code so that it is automated, versioned, testable, and, most importantly, repeatable. The next time a server goes down, it takes seconds to spin up an identical instance. How much time have you saved the company by having a consistent way to manage configuration changes?

By adopting deployment automation with tools like Jenkins, Fabric, and Capistrano, you can confidently and consistently deploy applications across your environments. How much time have you saved the company by reducing build and deployment issues?

By adopting log automation with tools such as Logstash, Splunk, SumoLogic, and Loggly, you can aggregate and index all of your logs across every service. How much time have you saved the company by retrieving the relevant logs in a single click instead of manually hunting down the machine causing the problem?

By adopting application performance management tools like AppDynamics, you can easily get code-level visibility into production problems and understand exactly which nodes are causing them. How much time have you saved the company by adopting APM to decrease the mean time to resolution?

By adopting runbook automation through tools like AppDynamics, you can automate responses to common application problems and auto-scale up and down in the cloud. How much time have you saved the company by automatically fixing common application failures without even clicking a button?

Understanding the value these tools and processes bring to your organization is straightforward:

[Figure: DevOps tasks and the time they save]

DevOps = Automation & Collaboration = Time = Money
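The equation above can be turned into a concrete annual figure by tallying the hours each practice saves and pricing them. The categories mirror the toolchain discussed above; every number here is a placeholder assumption:

```python
# Rough annual value of DevOps automation: hours saved per month by each
# practice, converted to dollars. All figures are illustrative assumptions.

LOADED_HOURLY_RATE = 75.0  # assumed fully loaded engineer cost per hour

hours_saved_per_month = {
    "infrastructure automation": 40,  # rebuilding servers from code
    "deployment automation": 30,      # fewer failed or manual releases
    "log aggregation": 20,            # no more ssh-and-grep log hunts
    "APM": 25,                        # faster root-cause analysis
    "runbook automation": 15,         # self-healing common failures
}

annual_hours = 12 * sum(hours_saved_per_month.values())
annual_value = annual_hours * LOADED_HOURLY_RATE
print(f"{annual_hours} engineer-hours/year, worth about ${annual_value:,.0f}")
```

Plug in your own measurements per category and you finally have the number the executive team never asks for until the outage happens.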

When applying DevOps across your organization, the most valuable advice I can give is to automate everything and always plan for failure. A survey from RebelLabs/ZeroTurnaround shows that:

  1. DevOps teams spend more time improving things and less time fixing things
  2. DevOps teams recover from failures faster
  3. DevOps teams release apps more than twice as fast

How much does an outage cost in your company?
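One way to answer that question is to model an outage as lost revenue plus firefighting labor, and compare recovery times before and after adopting DevOps practices. The revenue, staffing, and MTTR figures below are made up for illustration:

```python
# Estimate what a single outage costs, and what faster recovery is worth.
# Revenue, staffing, and duration numbers are hypothetical.

def outage_cost(minutes_down: float,
                revenue_per_minute: float,
                engineers_responding: int,
                loaded_hourly_rate: float) -> float:
    """Lost revenue plus the labor spent firefighting the outage."""
    labor = engineers_responding * minutes_down * loaded_hourly_rate / 60
    return minutes_down * revenue_per_minute + labor

# Pre-DevOps: 90-minute recovery, six engineers on the bridge call
before = outage_cost(minutes_down=90, revenue_per_minute=500,
                     engineers_responding=6, loaded_hourly_rate=75)
# Post-DevOps: 20-minute recovery, two engineers
after = outage_cost(minutes_down=20, revenue_per_minute=500,
                    engineers_responding=2, loaded_hourly_rate=75)
print(f"Before: ${before:,.0f}  After: ${after:,.0f}")
```

The labor term is usually noise next to the lost revenue, which is exactly why shrinking mean time to resolution dominates the math.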

This post was inspired by a tech talk I have given in the past: https://speakerdeck.com/dustinwhittle/devops-pay-raise-devnexus

The post Quantifying the value of DevOps written by Dustin.Whittle appeared first on Application Performance Monitoring Blog from AppDynamics.
