
User-based service level enforcement for Web Applications

Monitoring response times for web applications is one way to enforce SLAs (Service Level Agreements). Typically you define different SLAs for different pages of your application. Why? Because certain features of the web application are more critical than others, and you therefore want to ensure that the critical pages respond fast enough to meet end-user expectations.

User-based service levels

Depending on the type of application, you may need to enforce different SLAs for different users or groups of users. Why? If your application offers different membership levels, you want to make sure that the more a user pays, the more satisfied they are with the application's end-user experience and performance.

To do that, it is no longer sufficient to enforce SLAs on URLs or pages. You need to assign individual web requests to the actual authenticated user and monitor response times per user name. Having this contextual information about the user or group for individual transactions/web requests enables you to enforce user-based service levels.

With dynaTrace we get a PurePath for every single transaction that gets executed. The PurePath not only contains execution times, SQL statements and log messages - it also contains additional context information such as HTTP parameters and method arguments. There are easy ways to capture the user name for Java or .NET based web applications. You can, for instance, capture it from the session context or from an argument value that is passed to the method that performs user authentication. As a sample, I implemented an ASP.NET HttpHandler that takes the authenticated user name and puts it into the web request context, where it is then captured by dynaTrace. With this information I can monitor individual users and their response times. The following illustration shows the response times split up by individual users using Business Transactions.
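The handler's idea - copy the authenticated user name from the session into a per-request context that the monitoring tool can then capture - can be sketched in Java as well. This is a minimal sketch, not the actual dynaTrace API: plain `Map`s stand in for the servlet session and request context, and the names `USER_KEY` and `tagRequestWithUser` are illustrative assumptions.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the handler described above, using plain Maps as stand-ins
// for the servlet session and the per-request context. USER_KEY and
// tagRequestWithUser are illustrative names, not a real dynaTrace API.
public class UserContextTagger {
    static final String USER_KEY = "authenticatedUser";

    // Copy the authenticated user name from the session into the
    // per-request context so a monitoring tool can pick it up there.
    public static void tagRequestWithUser(Map<String, Object> session,
                                          Map<String, Object> requestContext) {
        Object user = session.get(USER_KEY);
        // Fall back to "anonymous" for unauthenticated requests
        requestContext.put("user", user != null ? user : "anonymous");
    }

    public static void main(String[] args) {
        Map<String, Object> session = new HashMap<>();
        session.put(USER_KEY, "testuser5");
        Map<String, Object> request = new HashMap<>();
        tagRequestWithUser(session, request);
        System.out.println(request.get("user")); // prints testuser5
    }
}
```

In a real servlet application the same copy would live in a `Filter` that reads from `HttpSession` and writes a request attribute; the point is only that the user name ends up on each request where the monitoring tool can see it.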

Response times by individual Users

I can now go ahead and define SLAs for each individual user or even for groups of users.
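The SLA lookup behind this could work as in the following sketch: a user-specific threshold wins over the user's group threshold, which wins over a default. All threshold values, user names, and group names here are made up for illustration; in practice these SLAs are configured in the monitoring tool, not in application code.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical per-user SLA check with group fallback. The thresholds,
// users, and groups are illustrative assumptions, not real configuration.
public class UserSla {
    static final Map<String, Long> userSlaMs = new HashMap<>();
    static final Map<String, Long> groupSlaMs = new HashMap<>();
    static final Map<String, String> userGroup = new HashMap<>();
    static final long DEFAULT_SLA_MS = 3000;

    static {
        groupSlaMs.put("premium", 500L);   // paying users get a tight SLA
        groupSlaMs.put("standard", 2000L);
        userGroup.put("testuser5", "premium");
        userSlaMs.put("vipuser", 250L);    // a user-specific override
    }

    // A request violates the SLA if its response time exceeds the most
    // specific threshold available: user, then group, then default.
    public static boolean violatesSla(String user, long responseTimeMs) {
        Long threshold = userSlaMs.get(user);
        if (threshold == null) {
            threshold = groupSlaMs.get(userGroup.get(user));
        }
        if (threshold == null) {
            threshold = DEFAULT_SLA_MS;
        }
        return responseTimeMs > threshold;
    }
}
```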

User Activity Monitoring

As a nice side effect, I can also monitor the activity of individual users by looking at the request count instead of the response time.
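The activity metric itself is simple: count captured requests per user name. A minimal sketch (the class and method names are illustrative; a real tool does this aggregation internally):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the activity metric: requests counted per user name
// instead of measuring response times. Names here are illustrative.
public class UserActivityCounter {
    private final Map<String, Integer> requestCounts = new HashMap<>();

    // Called once per captured web request, keyed by the authenticated user.
    public void recordRequest(String user) {
        requestCounts.merge(user, 1, Integer::sum);
    }

    public int countFor(String user) {
        return requestCounts.getOrDefault(user, 0);
    }
}
```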

Activity by User

The above image shows that testuser5 has the most requests among the non-anonymous users. With a single click I can now drill down to analyze the actual URLs that have been requested by a single user, identifying the slowest or most frequently requested pages.

WebRequests for the most active user

The option to also analyze the individual page requests of a particular user allows us to set SLAs for the combination of user and web page.


There are more options than just enforcing SLAs on individual pages or URLs. It can be done at a much more granular level, such as user, group, or even the combination of user and web page. This enables you to keep your most critical users happy by reacting to performance issues that they experience.


More Stories By Andreas Grabner

Andreas Grabner has been helping companies improve their application performance for 15+ years. He is a regular contributor within Web Performance and DevOps communities and a prolific speaker at user groups and conferences around the world. Reach him at @grabnerandi
