
Cloud Computing Journal: Compute in the Cloud, Not the Fog

How to measure application performance as experienced by the end user

Cloud computing utilizes computing resources, network bandwidth, storage, applications, and services available in the Internet "cloud" to deliver scalable Web functionalities to end users anywhere in the world. Drawing on the cloud for computing resources is similar to tapping into the electric grid for electricity - cost is incurred only as resources or computing cycles are consumed.

Application owners can theoretically take advantage of the highly scalable infrastructure available from vendors like Amazon, Google, and IBM and services available from application vendors like Google, Microsoft, and Salesforce to deliver application functionalities without incurring capital expenditure, the headache and expense of operating a data center, or the cost of developing common application functions like billing, shopping carts, and CRM. In effect, application owners can focus on delivering their unique value and rich user experience and leave the mundane development and management tasks to the domain experts.

But is it so simple?

Not really.

Cloud computing depends on a loosely coupled amalgamation of hundreds of hardware and software modules or services from multiple third-party vendors. As a result, IT has no direct control over this infrastructure. Nevertheless, IT is still responsible for application availability and response time.

As the old adage goes, you can only manage what you can measure. It follows that IT must have tools that accurately measure application performance from the perspective of the end user, to ensure that application response times meet end-user requirements. For applications delivered via the cloud, the point at which you measure performance dramatically alters the data: where performance is measured determines which performance data are available, and that in turn determines how valid the measurement is.
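To see why the measurement point matters, consider a minimal, self-contained sketch (illustrative only, with made-up timing distributions): the server-side view captures only the time the web server spends generating a response, while the end-user view also includes network transit through the cloud, third-party services, and browser rendering.

```python
import random

random.seed(42)  # reproducible illustration

def sample_transaction():
    """Return (server-side time, user-perceived time) in milliseconds.

    The figures are hypothetical, chosen only to illustrate the gap
    between what the data center sees and what the user experiences.
    """
    server = random.uniform(50, 150)     # time the web server spends on the request
    network = random.uniform(100, 800)   # DNS, TCP/TLS, CDN hops through the "cloud"
    client = random.uniform(200, 2000)   # third-party widgets and browser rendering
    return server, server + network + client

samples = [sample_transaction() for _ in range(1000)]
avg_server = sum(s for s, _ in samples) / len(samples)
avg_user = sum(u for _, u in samples) / len(samples)

print(f"average server-side time:   {avg_server:.0f} ms")
print(f"average user-perceived time: {avg_user:.0f} ms")
```

A monitor placed in the data center reports only the first number; the customer waits for the second.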

Where and What
What is an application performance problem?

Simplistically, you have a problem when an end user isn't getting the transaction response time he or she expects or needs to complete the job at hand. For an e-commerce site, that might be the time required to search for and display the image of the ideal little black dress. With consumers being more demanding and with increasingly intense competition among e-commerce sites, many studies indicate that users who wait more than about four seconds tend to click away to a competitor's site.
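The roughly four-second patience limit cited above translates directly into an operational check. A short sketch, using invented session timings, of how one might flag the sessions at risk of abandonment:

```python
# Hypothetical page-load times (seconds) for ten user sessions.
load_times = [1.2, 3.8, 0.9, 5.6, 2.4, 4.1, 7.3, 1.8, 3.2, 4.9]

THRESHOLD_S = 4.0  # the roughly four-second limit the studies suggest

# Sessions slower than the threshold are candidates to click away.
at_risk = [t for t in load_times if t > THRESHOLD_S]
pct = 100 * len(at_risk) / len(load_times)

print(f"{len(at_risk)} of {len(load_times)} sessions "
      f"({pct:.0f}%) exceeded {THRESHOLD_S}s")
```

With these sample values, four of ten sessions cross the threshold — a sizable slice of potential revenue walking to a competitor.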

In the recent past, when all hardware and software components existed in a single data center under IT's direct control, it was possible to assume a strong correlation between the performance of network and servers and user experience. This made the where and what question simple: Where? In the data center. What? Server performance.

Cloud computing fundamentally changes this equation. While it's nice to measure CPU and memory consumption for the Web server that delivers the image of the little black dress, these metrics have little bearing on the time the customer has to wait before the page components wind through the cloud and load in the browser. For that matter, these metrics won't tell you whether the image was ever delivered, or whether any of the other objects on that page served properly. They simply can't account for the hundreds of hardware and software components and services involved in serving up the final page. Any of these components or services might degrade page load times or, worse, leave gaping holes in functionality. Imagine the impact on sales when the "Buy Now" button or the link to the shopping cart never makes it to the screen.
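Catching a missing "Buy Now" button requires checking the page from the browser's side: enumerate the objects the page asks for, then verify each one actually arrived. A self-contained sketch using Python's standard-library HTML parser (the page markup and the per-object results are hypothetical, standing in for what a real-user monitor would record):

```python
from html.parser import HTMLParser

class ResourceCollector(HTMLParser):
    """Collects the URLs of objects a browser would fetch or follow."""
    def __init__(self):
        super().__init__()
        self.resources = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "src" in attrs:
            self.resources.append(attrs["src"])
        elif tag == "a" and "href" in attrs:
            self.resources.append(attrs["href"])

# Hypothetical product-page markup.
page = """
<img src="/images/black-dress.jpg">
<a href="/cart">Shopping cart</a>
<img src="/images/buy-now-button.png">
"""

# Hypothetical per-object delivery results from a real-user monitor.
# Note the buy-now image has no entry: it was never delivered at all.
observed_status = {
    "/images/black-dress.jpg": 200,
    "/cart": 200,
}

collector = ResourceCollector()
collector.feed(page)

missing = [r for r in collector.resources if observed_status.get(r) != 200]
print("objects that never made it to the screen:", missing)
```

Server-side CPU and memory graphs would look perfectly healthy throughout; only a page-level, user-side check surfaces the missing object.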

More Stories By Hon Wong

Hon has served as CEO of Symphoniq Corporation since its inception. Prior to joining Symphoniq, Hon co-founded NetIQ, where he served on the board of directors until 2003. Hon has also co-founded and served on the board of several other companies, including Centrify, Ecosystems (acquired by Compuware), Digital Market (acquired by Oracle), and a number of other technology companies. Hon is also a General Partner of Wongfratris Investment Company, a venture investment firm. Hon holds dual BS degrees in electrical engineering and industrial engineering from Northwestern University and an MBA from the Wharton School at the University of Pennsylvania.

