A Practical Guide to SLAs
By Mehdi Daoudi
January 17, 2017
While visiting a customer last week (a large SaaS platform company), we had an interesting discussion about Service Level Agreements (SLAs), during which he encouraged me to write this post.
When I was tasked with setting up the QoS team at DoubleClick in 1999, the primary mission was to set up a monitoring team (internal, external, etc.); the secondary mission was actually an accidental one: to manage our SLAs.
This became a hot priority in 2001, when we paid out over $1 million (in real money) in SLA penalties to a single customer. Basically, someone on our side had signed a contract with insane service commitments like 100% uptime.
When we started to tackle this problem we were novices and had to ramp up quickly to stop those kinds of penalty payments; we were in crisis mode. My CFO was breathing down my neck every day with the same question, over and over again: “Are we going to breach anything today?”
Today, I am sharing the journey of how I took control of our SLAs as an ASP (back when SaaS was still called an Application Service Provider), and everything you need to know to do the same.
Part 1: What are SLAs?
An SLA is a contractual agreement between a vendor and a customer stating that the vendor will deliver on agreed-upon terms. The SLA is the legal umbrella, under which you have one or more Service Level Objectives (SLOs) that describe the service, the measurement methodology, and the objective (uptime, speed, transactions per second, etc.). The points below describe how an SLA covers the customer and the vendor; a minimal code sketch of the structure follows the list.
- Customer: An SLA benefits the customer by providing objective grading criteria and protection from poor service.
- Vendor: An SLA benefits the supplier by ensuring that the correct expectations are set, that performance will be judged fairly, and that the supplier is incentivized to improve the quality of its services.
For SLAs to be implemented, we also agreed on the following key principles:
- Mutually acceptable
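To make the structure concrete, here is a minimal sketch in Python of an SLA as a legal umbrella over one or more SLOs. The field names are my own, chosen for illustration, not a standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class SLO:
    """One Service Level Objective: what is measured, how, and the target."""
    service: str       # e.g. "ad serving"
    metric: str        # e.g. "uptime" or "response time"
    methodology: str   # how it is measured (tooling, vantage points, frequency)
    target: float      # the objective, e.g. 99.9 (percent) or 0.5 (seconds)

@dataclass
class SLA:
    """The contractual umbrella: a customer, penalties, and one or more SLOs."""
    customer: str
    penalty_per_breach: float               # dollars owed per breached period
    slos: list = field(default_factory=list)

# A hypothetical ad-serving contract with two objectives.
contract = SLA(customer="Example Co", penalty_per_breach=10_000.0)
contract.slos.append(SLO("ad serving", "uptime", "external synthetic checks", 99.9))
contract.slos.append(SLO("ad serving", "response time", "external synthetic checks", 0.5))
print(contract)
```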
Part 2: Ground zero, discovery
The first few weeks were spent compiling a list of all the contracts, extracting the SLAs, SLOs, and penalties, and putting it all in a database.
Then I started the education process with the business stakeholders, legal, and our leadership.
Even at that early stage, I focused on SLAs based on end-user experience.
Part 3: Establishing an SLA is more than just putting a few sentences in a contract
The reason we paid $1 million is that there was no SLA Management System in place.
We started by building a Service Level Management (SLM) practice that relied on four pillars: Administration, Monitoring, Reporting, and Compliance (AMRC).
Then we sat down with the business partners, customers, legal, and the finance team and created a process, an SLA lifecycle, to avoid costly mistakes in the future. We reviewed the SLM practice quarterly.
We then had our in-house data scientists run simulations on the risk of breaching SLAs, based on the historical data from our monitoring tools. You do not want to set an SLA that will be breached every day.
We also ran multiple “what-if” scenarios on availability vs. revenue and the impact at various hours of the day and days of the week.
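I cannot share the exact models we used, but here is a hedged sketch of the kind of breach-risk simulation a data scientist might run: given historical incident frequency and duration (the numbers below are invented), estimate the probability of breaching a 99.9% monthly uptime SLO with a simple Monte Carlo:

```python
import math
import random

def poisson(lam: float) -> int:
    """Sample from a Poisson distribution (Knuth's method, fine for small lam)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        p *= random.random()
        k += 1
    return k - 1

def monthly_breach_probability(incidents_per_month: float,
                               mean_incident_minutes: float,
                               slo_target: float = 99.9,
                               trials: int = 50_000) -> float:
    """Monte Carlo estimate of the chance that one month breaches an uptime SLO.

    Assumes Poisson incident arrivals and exponential durations; a real model
    should be fit to your own monitoring history.
    """
    month_minutes = 30 * 24 * 60
    budget = month_minutes * (1 - slo_target / 100)  # allowed downtime per month
    breaches = 0
    for _ in range(trials):
        n = poisson(incidents_per_month)
        downtime = sum(random.expovariate(1 / mean_incident_minutes)
                       for _ in range(n))
        if downtime > budget:
            breaches += 1
    return breaches / trials

# Example: ~2 incidents/month averaging 20 minutes vs. a 99.9% SLO (43.2 min budget).
print(f"P(monthly breach) ~ {monthly_breach_probability(2.0, 20.0):.1%}")
```

Runs like this tell you whether a proposed SLO is comfortably achievable or a penalty machine, before it goes into a contract.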
We even created an online tool (in 2001!) for our sales team to request an SLA portfolio for a customer, which would then be reviewed and approved by our QoS team; think of this as an “SLA desk.”
Part 4: External SLAs and internal SLAs
In the early stages of this project, we quickly discovered a major glitch: the external SLAs did not match the internal ones. That, in my opinion, was a huge mistake. How could we ever achieve such goals, or speak the same language across tech, business, and customers, when everything was so different? Customers would ask for ad-serving uptime while our tech group measured server availability!
So we aligned our external and internal SLOs and set the internal objectives (the targets) much higher. This was a huge victory: it allowed us to rely on one set of metrics, on a daily basis, to understand our SLA risk position, and it also drove operational excellence. Our tech group (Ops, Engineering, etc.) became sensitive to the notion of a business SLA and started to care very much about not breaching one.
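One way to picture the alignment (the thresholds below are hypothetical): keep the metric identical inside and outside, and simply make the internal target stricter than the contractual one, so internal alarms fire long before a contract is at risk:

```python
# Same metric on both sides, two thresholds: the contract's and a stricter internal one.
EXTERNAL_UPTIME_TARGET = 99.9    # what the contracts promise (hypothetical)
INTERNAL_UPTIME_TARGET = 99.99   # what Ops and Engineering aim for (hypothetical)

def assess(measured_uptime: float) -> str:
    """Classify a measured uptime against internal and external objectives."""
    if measured_uptime < EXTERNAL_UPTIME_TARGET:
        return "contract breached"
    if measured_uptime < INTERNAL_UPTIME_TARGET:
        return "internal objective missed: fix before a contract is at risk"
    return "healthy"

for uptime in (99.995, 99.95, 99.85):
    print(f"{uptime}% -> {assess(uptime)}")
```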
Part 5: Monitoring
For availability and performance, we relied on three synthetic monitoring products: SiteScope running internally in 17 datacenters, plus two external synthetic products. We wanted as many data points as possible from as many tools as possible; the stakes were just too high not to invest in multiple tools. The entire SLM project was not cheap to implement and run on an annual basis, but I had already learned the cost of not doing it right the hard way.
For monitoring it became clear to us that we needed to test as frequently as possible from as many vantage points as possible:
- When you test your SLO endpoints only once an hour, an outage can begin right after a check and go undetected for 59 minutes, and a single failed check is indistinguishable from a full hour of downtime, so you can inflict false downtime on yourself (see the sketch after this list).
- You want many data points to ensure statistical significance. You could use small datasets, but they tend to have lower power and precision; bigger datasets are also better for dealing with false positives and false negatives.
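Here is a back-of-the-envelope sketch of the check-frequency point, using assumed intervals and a 99.9% monthly target: with coarse polling, a single failed check can consume, or even exceed, the entire monthly downtime budget by itself:

```python
def downtime_resolution(check_interval_min: float, slo_target: float = 99.9) -> None:
    """Compare polling granularity to a monthly downtime budget."""
    month_min = 30 * 24 * 60
    budget = month_min * (1 - slo_target / 100)   # 43.2 minutes for 99.9%
    # A single failed check is indistinguishable from a full interval of downtime.
    worst_overcount = check_interval_min
    print(f"check every {check_interval_min:4.0f} min | "
          f"monthly budget {budget:5.1f} min | "
          f"one bad check can cost {worst_overcount:4.0f} min "
          f"({worst_overcount / budget:.0%} of the budget)")

for interval in (60, 15, 5, 1):
    downtime_resolution(interval)
```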
Part 6: Performance (speed) monitoring
One of our biggest difficulties was finding an effective way to measure the performance of third-party providers – and implementing that technique in SLAs.
The challenge was that clients would look at their site performance, notice spikes, and attribute them to our system, while our own performance charts showed no problems. We couldn't correlate the two charts, and therefore we couldn't agree on whether it was our problem or someone else's.
We created a methodology we called Differential Performance Measurement (DPM).
The philosophy behind DPM was to measure the performance and availability of DoubleClick's services, and their impact on our customers' pages, as accurately as possible, making sure we were responsible and accountable for the things we had control over, to eliminate the finger-pointing.
The methodology added context to the measurements. DPM introduced clarity and comparison, removing absolute performance numbers from the SLAs.
Recipe for Differential Performance Measurement (example with an ad):
1- Take two pages, one without ads and one with one ad call.
- Page A = without ad
- Page B = with ad
2- Make sure the pages do not contain any other third-party references (CDNs, etc.).
3- Make sure the page sizes (in KB) are the same.
4- “Bake”: measure response times for both pages over a period of time, and you get the following metrics:
- Differential Response (DR) = (response time of Page B) minus (response time of Page A)
- Differential Response Percentage (DRP) = DR / (response time of Page A). (e.g., if Page A is 2 seconds and Page B is 2.1 seconds, DR is 0.1 seconds and DRP is 0.1/2 = 0.05, or 5%)
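A small sketch of the DPM arithmetic, using the sample numbers from the recipe above:

```python
def differential_metrics(resp_a: float, resp_b: float) -> tuple:
    """Compute Differential Response (DR) and Differential Response Percentage (DRP).

    resp_a: response time of the page without the ad, in seconds
    resp_b: response time of the same-size page with one ad call, in seconds
    """
    dr = resp_b - resp_a   # absolute cost of the ad call
    drp = dr / resp_a      # cost relative to the baseline page
    return dr, drp

# The example from the recipe: Page A at 2.0 s, Page B at 2.1 s.
dr, drp = differential_metrics(2.0, 2.1)
print(f"DR = {dr:.2f} s, DRP = {drp:.0%}")  # DR = 0.10 s, DRP = 5%

# Over many paired samples the DRP trend isolates the vendor's impact,
# even when both pages slow down together for unrelated reasons.
samples = [(2.0, 2.1), (3.5, 3.6), (1.8, 2.4)]  # (A, B) pairs in seconds
for a, b in samples:
    print(f"A={a:.1f}s  B={b:.1f}s  ->  DRP={differential_metrics(a, b)[1]:.0%}")
```

Because DRP is a ratio against the baseline page, it stays meaningful even when both pages speed up or slow down together due to network conditions neither party controls.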
With this methodology, we were able to eliminate noise introduced by:
- Internet-related issues that were out of our control (fiber cuts, etc.)
- Monitoring agent issues (which raises the separate topic of monitoring your monitoring services)
- Other third parties
- Scenario 1: The ad serving company is having performance problems and negatively impacting the customer's site performance; in the example, the vendor breached the SLA threshold from Time 4 to Time 8.
- Scenario 2: The website is having performance problems that are not caused by the ad serving company.
Part 7: Reporting
Because of that huge payout, the entire SLM project had a lot of visibility, extending all the way to the CEO. We reported monthly on our compliance and our internal SLAs, and breaches were detected in real time (thanks to a clever tool, DigitalFuel, that sadly no longer exists).
The external SLOs were just a subset of the internal SLOs; by the time we were done in late 2001, we were tracking over 100 OLAs (Operational Level Agreements).
I cannot stress this enough: if you are going to embark on this journey, you had better report a lot! But also make sure the reports are actionable (Green, Yellow, Red).
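As a minimal sketch of what “actionable” can mean in practice (the targets and thresholds below are hypothetical), map each SLO's month-to-date compliance to a Green/Yellow/Red status so the reader knows whether to act:

```python
def status(compliance: float, target: float, warn_margin: float = 0.05) -> str:
    """Traffic-light status for one SLO: Red (breached), Yellow (close), Green."""
    if compliance < target:
        return "Red"
    if compliance < target + warn_margin:
        return "Yellow"
    return "Green"

# Hypothetical month-to-date compliance against a 99.9% uptime target.
report = {"ad serving": 99.97, "reporting UI": 99.91, "API": 99.82}
for service, compliance in report.items():
    print(f"{service:<14} {compliance:6.2f}%  {status(compliance, 99.9)}")
```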
[Figure: example of a monthly report]
[Figure: example of our daily reporting during our Daily Health Check meeting]
A culture of quality emerged at DoubleClick because everyone was aligned around these business service metrics. No one wanted to breach SLAs; no one wanted to let down the customer.
Part 8: Conclusion
After this exercise of implementing what I would consider, for the early 2000s, a comprehensive SLM process, we were able to:
- Manage hundreds of contracts, each with up to five SLOs.
- Offer a wide range of SLAs that were scalable (adding new products).
- Reduce the financial risks to our company.
- Manage our reputation and credibility by giving meaningful SLAs and reporting on them with accuracy and integrity.
- Get real-time alerts of compliance breaches. Knowing ahead of time that an SLA would be breached was incredible: we would know that adding four minutes of downtime would breach 12 contracts and result in $X in penalties, so Ops took steps to stop releases or anything else that could impact uptime (see the sketch after this list).
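A hedged sketch of that breach-projection idea, with invented contract data: given each contract's uptime target and the downtime already consumed this month, compute which contracts a few more minutes of downtime would push over the line:

```python
MONTH_MINUTES = 30 * 24 * 60

# Hypothetical contracts: (customer, uptime target in %, downtime used this month in minutes).
contracts = [
    ("alpha", 99.9, 41.0),
    ("beta",  99.9, 12.0),
    ("gamma", 99.5, 210.0),
]

def breaches_after(extra_downtime_min: float) -> list:
    """Contracts that would be in breach after extra_downtime_min more minutes of downtime."""
    breached = []
    for customer, target, used in contracts:
        budget = MONTH_MINUTES * (1 - target / 100)  # allowed downtime per month
        if used + extra_downtime_min > budget:
            breached.append(customer)
    return breached

# The question Ops asked before every risky change:
print(breaches_after(0.0))  # [] : nothing breached yet
print(breaches_after(4.0))  # ['alpha'] : four more minutes would breach one contract
```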
Some people do not believe in SLAs. Bad SLAs are the ones that some companies put in contracts without real penalties or real measurements. I still see SLAs that guarantee 0% packet loss, but if you ask how it is measured, you quickly realize the guarantee is useless; this is exactly what gives SLAs a bad reputation.
I believe that SLAs are a great way to align customers and vendors, reduce friction, and stop the finger-pointing. But customers also need to do their job and ask for useful SLAs. The point is to hold vendors accountable, not to drive them out of business. You want them to feel the pain of not doing a good job when they do not deliver the service for which they are being paid.
As the “cloudification” of our world expands, SLAs are here to stay, and they are going to become even more prevalent. Providers of cloud servers, cloud applications (Office 365, Salesforce, etc.), and services (DNS, CDN, security, etc.) will see customers demanding more meaningful SLAs, stricter enforcement, and bigger penalties. This is the only way forward.