DevOps Intelligence Changes the Game
By Lisa Wells

One of my favorite parts of the novel The Phoenix Project is when Bill Palmer, DevOps hero and VP of IT Operations for the fictional company “Parts Unlimited,” has a light-bulb moment about the central importance of IT to the business.

The moment comes as the company’s CFO lays out for Bill how he strives to align the goals of his department with the goals of the business. It’s here Bill starts to realize he must take a similar approach with IT. He ultimately turns to data about his delivery process to improve IT’s effectiveness and save his team from outsourcing—and a DevOps team is born.

Okay, so real-world situations might not be as dire as the fictional drama at Parts Unlimited. Still, many IT teams that are transforming to DevOps have yet to take the next step—using “DevOps Intelligence” to make data-driven decisions that help them improve software delivery.

What Is DevOps Intelligence?
DevOps intelligence is all about providing the insight companies need to deliver software more efficiently, with less risk, and with better results. Making it part of your process is becoming crucial as both the demand for better software delivered faster and the complexity of application development keep increasing. As incentive for getting started, below are seven benefits of making DevOps intelligence a top priority in 2017 and beyond.

1. Faster Release Cycles
End-to-end intelligence about your delivery pipeline lets you optimize your processes and accelerate release cycles. With the real-time, actionable information that DevOps intelligence provides, you can identify waste, such as bottlenecks in the pipeline. You can quickly find out how systems are performing with new changes, monitor the success rate of deployments, get insight into the cycle times for each team, and see which processes are working well and which are negatively impacting time to delivery.
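
To make that concrete, here is a minimal sketch in Python of the kind of analysis involved. It assumes pipeline events have already been exported from your CI/CD tooling as simple records with a stage, timestamps, and a status (the field names and data are hypothetical); it computes the deployment success rate and flags the slowest stage as a candidate bottleneck.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical pipeline events; in practice these would come from your CI/CD tool.
events = [
    {"stage": "build",  "started": "2017-03-01T09:00", "finished": "2017-03-01T09:12", "status": "success"},
    {"stage": "test",   "started": "2017-03-01T09:12", "finished": "2017-03-01T10:40", "status": "success"},
    {"stage": "deploy", "started": "2017-03-01T10:40", "finished": "2017-03-01T10:55", "status": "failed"},
    {"stage": "deploy", "started": "2017-03-02T11:00", "finished": "2017-03-02T11:10", "status": "success"},
]

FMT = "%Y-%m-%dT%H:%M"

def duration_minutes(event):
    start = datetime.strptime(event["started"], FMT)
    end = datetime.strptime(event["finished"], FMT)
    return (end - start).total_seconds() / 60

# Success rate across all deployments.
deploys = [e for e in events if e["stage"] == "deploy"]
success_rate = sum(e["status"] == "success" for e in deploys) / len(deploys)

# Average duration per stage; the slowest stage is a candidate bottleneck.
stage_minutes = defaultdict(list)
for e in events:
    stage_minutes[e["stage"]].append(duration_minutes(e))
bottleneck = max(stage_minutes, key=lambda s: sum(stage_minutes[s]) / len(stage_minutes[s]))

print(f"deploy success rate: {success_rate:.0%}; slowest stage: {bottleneck}")
```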

2. Higher Quality Software
DevOps intelligence enables feedback loops, which are the foundation of iterative development. Feedback loops allow for creativity and are extremely valuable for experiments like trying out a new feature or a change to an interface to make sure you’re building more of what customers want. They can become an integral part of the software development and delivery process because failures are fast, cheap, and small.
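
As an illustration, here is a toy feedback-loop experiment in Python (the cohort split, the engagement metric, and the numbers are all hypothetical): a new feature is exposed to a small cohort behind a flag, the metric is compared across cohorts, and a bad outcome costs nothing more than flipping the flag off.

```python
import hashlib
import random

def variant_for(user_id: int, rollout_pct: int = 10) -> str:
    # Deterministic bucketing: a user always lands in the same cohort.
    bucket = int(hashlib.md5(str(user_id).encode()).hexdigest(), 16) % 100
    return "new" if bucket < rollout_pct else "control"

# Simulated engagement telemetry; a stand-in for real product metrics.
random.seed(42)
observations = [
    (uid, random.random() < (0.35 if variant_for(uid) == "new" else 0.30))
    for uid in range(10_000)
]

def engagement(variant: str) -> float:
    hits = [engaged for uid, engaged in observations if variant_for(uid) == variant]
    return sum(hits) / len(hits)

print(f"control: {engagement('control'):.1%}  new: {engagement('new'):.1%}")
# If the new variant underperforms, the failure is fast, cheap, and small:
# turn the flag off and iterate.
```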

3. Increased Business Value of Software
DevOps intelligence allows you to quickly get actionable information about things like which features customers are using, which processes they’re abandoning, or whether they’re changing their behavior. DevOps intelligence can also be mined after a release to support impact analysis so you can find out whether what you’re delivering is actually of value to your customers and make smarter decisions about future offerings.
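
A simple post-release impact analysis could look like the following Python sketch, which compares per-feature usage counts before and after a release; the events and feature names are made up for illustration, and real data would come from your product analytics pipeline.

```python
from collections import Counter

# Hypothetical usage events (feature, phase) gathered before and after a release.
events = [
    ("one_click_checkout", "before"), ("one_click_checkout", "after"),
    ("one_click_checkout", "after"), ("one_click_checkout", "after"),
    ("saved_carts", "before"), ("saved_carts", "before"), ("saved_carts", "after"),
]

before = Counter(f for f, phase in events if phase == "before")
after = Counter(f for f, phase in events if phase == "after")

# Report the usage change per feature to see what customers adopted or abandoned.
for feature in sorted(set(before) | set(after)):
    b, a = before[feature], after[feature]
    change = (a - b) / b if b else float("inf")
    print(f"{feature}: {b} before -> {a} after ({change:+.0%})")
```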

4. Greater Transparency
Insight into the entire pipeline provides end-to-end transparency. Clear, real-time visibility into the process makes it easier for you to understand why you are (or are not) hitting your goals, justify requests for additional time and resources, and make the case that readiness rather than calendar dates should drive releases. Transparency also means that non-IT stakeholders can easily track progress at any given point in the process and feel empowered to make business decisions based on real-time data without having to go through IT.

5. Addition of Proactive and Predictive Management to the Delivery Process
DevOps intelligence gives you access to both real-time and historical information about your applications, people, environments, and more. Real-time, actionable insight delivers advantages such as early warning of what might fail so you can prevent it rather than wasting time firefighting. Historical data lets you analyze trends and predict behavior based on past results.
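
As a sketch of what predicting behavior from past results can mean in practice, the Python snippet below fits a least-squares slope to a hypothetical weekly series of deployment failure counts and raises an early warning when the trend points upward (the data and the threshold are illustrative only).

```python
def slope(ys):
    # Least-squares slope of a series against its index (weeks 0, 1, 2, ...).
    n = len(ys)
    mean_x, mean_y = (n - 1) / 2, sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(ys))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

weekly_failures = [2, 3, 2, 4, 5, 6, 7]  # hypothetical history

# A positive slope means failures are trending up; act before the next release.
if slope(weekly_failures) > 0.5:
    print("warning: deployment failures are trending up -- investigate now")
```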

6. Better Support of Compliance Requirements
Data collected about your processes shows not just how those processes can be optimized, but what happened when, in an auditable fashion. Were processes followed? Who did what and when? What failed? What steps were taken, by whom, when, and were they correct? DevOps intelligence helps you stay on top of your compliance requirements and fix problems that might threaten your ability to meet them.
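
Here is a minimal illustration in Python, assuming your tooling records structured audit events with an actor, an action, a timestamp, and a release (all field names are hypothetical): the check below answers “who did what and when” and flags any deploy that lacks an approval record.

```python
# Hypothetical audit events; a real trail would be captured automatically
# by your release orchestration tooling.
audit_log = [
    {"when": "2017-03-01T10:05", "actor": "alice", "action": "approve", "release": "v1.4"},
    {"when": "2017-03-01T10:30", "actor": "bob",   "action": "deploy",  "release": "v1.4"},
    {"when": "2017-03-02T09:00", "actor": "carol", "action": "deploy",  "release": "v1.5"},
]

approved = {e["release"] for e in audit_log if e["action"] == "approve"}

for e in audit_log:
    # Who did what, and when -- the core of an auditable trail.
    print(f"{e['when']}: {e['actor']} {e['action']} {e['release']}")
    if e["action"] == "deploy" and e["release"] not in approved:
        print(f"  compliance gap: {e['release']} deployed without an approval record")
```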

7. Stronger DevOps Culture
Intelligence about your delivery process helps strengthen your DevOps culture by empowering people, both inside and outside of IT, to effect change and be part of efforts to improve processes and products. DevOps intelligence provides insight that shines a light on their accomplishments so they can be celebrated. The ability to share data with people across the business reinforces the fact that they have an important role in making impactful decisions that help the company.

As companies improve their DevOps maturity and implement release orchestration, they’re building the infrastructure they need to automatically capture and analyze DevOps data and turn it into actionable information. Armed with this intelligence, IT will be well-positioned to fully support the goals of the business.
