A Developer's Perspective
By Eric Sigler

"Walking over to the Ops room - I don't feel like I ever need to do that anymore."

In the run-up to our latest release of capabilities for developers, I sat down with David Yang, a senior engineer here at PagerDuty who's seen our internal architecture evolve from a single monolithic codebase to dozens of microservices. He's the technical lead for our Incident Management - People team, which owns the services that deliver alert notifications to all 8,000+ PagerDuty customers. We talked about life after switching to teams owning the operations of their services. Here are some observations about the benefits and drawbacks we've seen:

On life now that teams own their services:
Since moving to a model where developers own their services, there's a lot more developer independence. A side effect is that we've minimized the friction in provisioning and managing infrastructure. Each team now wants to optimize for the fewest obstacles and roadblocks, and the supporting infrastructure teams are geared toward providing better self-service tools that minimize the need for human intervention.
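
To make "self-service" concrete, here is a minimal sketch of what such a flow can look like: a small wrapper that lets a product team request hosts through an internal API instead of filing a ticket with ops. The endpoint, payload fields, and team/service names are hypothetical illustrations, not a description of PagerDuty's actual tooling.

    # Hypothetical self-service provisioning wrapper. The internal endpoint,
    # payload fields, and team/service names below are illustrative
    # assumptions, not PagerDuty's real infrastructure.
    import json
    import urllib.request

    PROVISION_URL = "https://infra.internal.example/api/v1/hosts"  # hypothetical

    def request_hosts(team: str, service: str, count: int, instance_type: str) -> dict:
        """Submit a host request directly, with no human in the loop."""
        payload = json.dumps({
            "team": team,
            "service": service,
            "count": count,
            "instance_type": instance_type,
        }).encode("utf-8")
        req = urllib.request.Request(
            PROVISION_URL,
            data=payload,
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    # Example: a notifications team provisions two workers without a ticket.
    # print(request_hosts("im-people", "notification-worker", 2, "m5.large"))

The point is the shape of the interaction: the request is validated and fulfilled by tooling, so the only wait is machine time, not another team's queue.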

The shift to having developers own their code reduces cycle time from when someone says, "this is a problem," to when they can actually fix the problem, which has been invaluable.

On cultural change:
By having people own more of the code, and have more responsibility in general for the systems they operate, you essentially push for a culture that's driven toward getting roadblocks out of the way - each team optimizes toward "how can I make sure I'm never blocked again?" It's a lot more apparent when we are blocked. Before, I had to ask ops every time we wanted to provision hosts, and I just accepted it. Now my team can see its own roadblocks better because they aren't hidden behind other teams' roadblocks.

We have teams that are focused a lot more on owning the whole process of delivering customer value from end to end, which is invaluable.

On how this can help with the incident response process:
There are clearer boundaries of service ownership. It's easier to figure out which specific teams are impacted when there's an operability issue. And the fact that I know the exact procedure to follow - and it's more of an objective procedure of, "this is the checklist" - that is great. It enables me to focus 100% on solving the problem and not on the communication around the incident.
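
As an illustration of what an "objective procedure" can look like in code (the steps below are generic assumptions, not PagerDuty's actual runbook), a checklist reduces the responder's job to executing known steps:

    # Sketch of a checklist-driven incident procedure. The steps are generic
    # assumptions for illustration; the point is that the responder works
    # through a fixed, objective sequence instead of improvising.
    from dataclasses import dataclass, field

    @dataclass
    class IncidentChecklist:
        service: str
        steps: tuple = (
            "Acknowledge the alert and declare severity",
            "Identify the owning team from the service catalog",
            "Open a dedicated incident channel and post initial status",
            "Mitigate first (roll back, fail over, or scale), root-cause later",
            "Hand off or resolve, then schedule the postmortem",
        )
        done: set = field(default_factory=set)

        def complete(self, index: int) -> None:
            self.done.add(index)

        def remaining(self) -> list:
            return [s for i, s in enumerate(self.steps) if i not in self.done]

    checklist = IncidentChecklist(service="notification-pipeline")
    checklist.complete(0)
    print(checklist.remaining())  # the next objective steps, no guesswork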

On what didn't work so well:
That's not to say owning a service doesn't come with its own set of problems. It requires dedicated time to tend to the operational maintenance of our services. This takes up more of the team's time, which is especially an issue with legacy services where there may be knowledge gaps. In the beginning, we didn't put strong enough guardrails in place to protect operability work in our sprints. We're improving that by leveraging KPIs [such as specific scaling goals and operational load levels] to make objective decisions.
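
One way to picture that kind of KPI guardrail - with entirely made-up thresholds, since the post doesn't specify PagerDuty's - is a sprint-planning gate that turns "are we spending too much time on operations?" into an objective signal:

    # Hypothetical KPI guardrail for sprint planning. The thresholds are
    # made up for illustration; the idea is an objective signal for when
    # operational load crosses an agreed level.
    OPS_LOAD_CEILING = 0.25      # max fraction of capacity spent on interrupts
    OPERABILITY_RESERVE = 0.15   # capacity reserved for planned operability work

    def plan_sprint(capacity_pts: float, interrupt_pts: float) -> dict:
        """Return point budgets plus a flag when ops load exceeds the ceiling."""
        ops_load = interrupt_pts / capacity_pts
        reserve = capacity_pts * OPERABILITY_RESERVE
        return {
            "ops_load": round(ops_load, 2),
            "over_budget": ops_load > OPS_LOAD_CEILING,  # objective rebalance signal
            "operability_points": reserve,
            "feature_points": capacity_pts - interrupt_pts - reserve,
        }

    print(plan_sprint(capacity_pts=40, interrupt_pts=12))
    # ops_load is 0.3 > 0.25, so the team rebalances before taking new features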

On the future:
[Of balancing operations-related work vs. feature development work] teams are asking: "How do I leverage all of this day-to-day? How do I make even more objective decisions?" - and then driving toward those decisions with metrics.

Everything in our product development is defined in terms of "what is the customer value?" and "what are the success criteria?" I think framing operational work in the same terms makes it easier to prioritize effectively. We're all on the same team, aligned to the same goal of delivering value to our customers, and you have to resolve the competing priorities at some point.

Trying to enact change within an organization around operations requires a lot of collaboration. It also takes figuring out what the right metrics are and having a discussion about those metrics.



