Why Performance Management Is Easier in Public than On-Premise Clouds

Performance Management in public and in private clouds

Performance is one of the major concerns in the cloud. But the question should not really be whether the cloud performs; it is whether the application in question can and does perform in the cloud. The main problem is that application performance is either not managed at all or managed incorrectly, so this question often remains unanswered. Granted, performance management in cloud environments is harder than in physical ones, but it can be argued that it is easier in public clouds than in on-premise clouds or even a large virtualized environment. How do I come to that conclusion? Before answering that, let's look at the unique challenges that virtualization in general – and clouds in particular – pose to the realm of APM.

Time is relative
The problem with timekeeping is well known in the VMware community. There is a very good VMware whitepaper that explains this in quite some detail. It doesn't tell the whole story, however, because there are of course other virtualization solutions like Xen, KVM, Hyper-V and more, and all of them solve this problem differently. On top of that, the various guest operating systems behave very differently as well. In fact I might write a whole article just about that, but the net result is that time measurement inside a guest is not accurate unless you know what you are doing. It might lag behind real time one moment and speed up to catch up the next. If your monitoring tool is aware of that and supports native timing calls, it can work around this and give you real response times. Unfortunately that leads to yet another problem. Your VM is not running all the time: like a process, it will get de-scheduled from time to time; unlike a process, however, it will not be aware of that. While real time is important for response time, this will skew your performance analysis on a deeper level.

The Effects of timekeeping on Response and Execution Time

If you measure real time, then Method B looks more expensive than it actually is. This might lead you down the wrong track when you look for a performance problem. If you measure apparent time, you don't have this problem, but your response times do not reflect the real user experience. There are generally two ways of handling this. Your monitoring solution can capture these de-schedule times and account for them all the way down to your execution times; the more granular your measurement, the more overhead this produces. The more pragmatic approach is to account for this once per transaction and thus capture the "impact" that the de-schedules have on your response time. Yet another approach is to periodically read the CPU steal time (either from vSphere or via mpstat on Xen) and correlate it with your transaction data. This gives you a better grasp on things. It still adds a level of uncertainty to your performance diagnostics, but at least you know the real response time and how fast your transactions really are. Bottom line: those two are no longer the same thing.
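
To make the steal-time approach concrete, here is a minimal sketch, assuming a Linux guest that exposes steal time in /proc/stat and Python 3; do_transaction() is just a stand-in for an instrumented transaction. It samples the steal counter before and after a transaction so slow transactions can later be correlated with periods where the hypervisor de-scheduled the VM.

    import time

    def read_cpu_times():
        # First line of /proc/stat: "cpu  user nice system idle iowait irq softirq steal ..."
        with open("/proc/stat") as f:
            return [int(x) for x in f.readline().split()[1:]]

    def steal_percent(before, after):
        # Steal time as a share of all CPU time elapsed between the two samples.
        deltas = [a - b for b, a in zip(before, after)]
        total = sum(deltas)
        steal = deltas[7] if len(deltas) > 7 else 0   # field 8 of /proc/stat is "steal"
        return 100.0 * steal / total if total else 0.0

    def do_transaction():
        # Placeholder for the real, instrumented transaction.
        time.sleep(0.05)

    before = read_cpu_times()
    start = time.monotonic()
    do_transaction()
    elapsed = time.monotonic() - start
    after = read_cpu_times()
    print(f"response time {elapsed * 1000:.1f} ms, steal {steal_percent(before, after):.1f}%")

A high steal percentage alongside a slow transaction tells you the VM was contending for CPU during that transaction, which is exactly the correlation described above.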

The impact of shared environments
The sharing of resources is what makes virtualization and cloud environments compelling from a cost perspective. Most normal data centers have an average CPU utilization far below 20%. The reason is twofold: on the one hand they isolate the different applications by running them on different hardware; on the other hand they have to provision for peak load. By using virtualization you can put multiple "isolated" applications on the same hardware. Resource utilization is higher, but even then it does not go beyond 30-40 percent most of the time, as you still need to take peak load into account. But the peak loads of the different applications might occur at different times! The first order of business here is to find the optimal balance.

The first thing to realize is that your VM is treated like a process by the virtualization infrastructure. It gets a share of resources – how much can be configured. If it reaches the configured limit, it has to wait. The same is true if the physical resources are exhausted. To drive utilization higher, virtualization and cloud environments overcommit: they allow, say, ten 2 GHz VMs on a 16 GHz physical machine. Most of the time this is perfectly fine, as not all VMs will demand 100 percent CPU at the same time. If there is not enough CPU to go around, some will be de-scheduled and given a greater share the next time around. Most importantly, this is not only true for CPU but also for memory, disk and network I/O.

What does this mean for performance management? It means that increasing load on one application, or a bug in it, can negatively impact another without you being aware of it. Unless you have a virtualization-aware monitoring solution that also monitors the other application, you will not see this. All you see is that application performance goes down!

When the load increases on one Application it affects the other

With proper tools this is relatively easy to catch for CPU-related problems, but a lot harder for I/O-related issues. So you need to monitor both applications, their VMs and the underlying virtualization infrastructure, and correlate the information. That adds a lot of complexity. The virtualization vendors try to solve this by looking purely at VM- and host-level system metrics. What they forget is that high utilization of a resource does not mean the application is slow! And it is the application we care about.

OS metrics are worse than useless
Now for the good stuff. Forget your guest operating system utilization metrics; they are not showing you what is really going on. There are several reasons for that. One is the timekeeping problem. Even if you and your monitoring tool use the right timer and measure time correctly, your operating system might not. In fact most systems will not read the timer device all the time, but rely on the CPU frequency and counters to estimate time, as that is faster than reading the timer device. Since utilization metrics are always based on a total number of possible requests or instructions per time slice, they get skewed by this. This is true for every metric, not just CPU. The second problem is that the guest does not really know the upper limit of a resource, as the virtualization environment might overcommit. That means you may never be able to get 100%, or you can get it at one time but not another. A good example is the Amazon EC2 cloud. Although I cannot be sure, I suspect that the guest CPU metrics are actually correct: they report the CPU utilization of the underlying hardware, only you will never get 100% of the underlying hardware. So without knowing how big a share you get, they are useless.

What does this mean? You can rely on absolute numbers like the number of I/O requests, the number of SQL statements and the amount of data sent over the wire for a specific application or transaction. But you do not know whether over-utilization of the physical hardware presents a bottleneck. There are two ways to solve this problem.

The first involves correlating resource and throughput metrics of your application with the reported utilization and throughput measures on the virtualization layer. In the case of VMware that means correlating detailed application- and transaction-level metrics with metrics provided by vSphere. On EC2 you can do the same with metrics provided by CloudWatch.

EC2 Cloud Monitoring Dashboard showing 3 instances
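
As a rough illustration of the CloudWatch half of this correlation, here is a minimal sketch, assuming boto3 is installed and credentials are configured; the region and instance id are placeholders. It pulls the average CPUUtilization of one instance over the last hour so it can be lined up against application-level response times collected for the same window.

    from datetime import datetime, timedelta, timezone
    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # placeholder region
    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=1)

    response = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder id
        StartTime=start,
        EndTime=end,
        Period=300,                 # 5-minute buckets
        Statistics=["Average"],
    )

    # These datapoints would then be joined, by timestamp, with the transaction
    # response times reported by the application monitoring solution.
    for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], f"{point['Average']:.1f}%")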

This is the approach recommended by some virtualization vendors. It is possible, but because of the complexity it requires a lot of expertise. You do, however, know which VM consumes how much of your resources. With a little calculation magic you can break this down to application and transaction level, at least on average. You need this for resource optimization and to decide which VMs should be moved to different physical hardware. It does not do you a lot of good in case of acute performance problems or troubleshooting, as you don't know the actual impact of the resource shortage, or whether it has an impact at all. You might move a VM and not actually speed things up. The real crux is that just because something is heavily used does not mean that it is the source of your performance problem! And of course this approach only works if you are in charge of the hardware, meaning it does not work with public clouds!

The second option is one that is, among others, proposed by Bernd Harzog, a well-known expert in the virtualization space. It is also the one that I would recommend.

Response time, response time, latency and more response time
On the Virtualization Practice blog Bernd explains in detail why resource utilization does not help you with either performance management or capacity planning. Instead he points out that what really matters is the response time or throughput of your application. If your physical hardware or virtualization infrastructure runs into utilization problems, the easiest way to spot this is that things slow down. In effect that means that I/O requests done by your application slow down, and you can measure that. What's more important is that you can turn this around! If your application performs fine then, whatever the virtualization or cloud infrastructure reports, there is no performance problem. To be more accurate: you only need to analyze the virtualization layer if your application performance monitoring shows that a high portion of your response time is down to CPU shortage, memory shortage or I/O latency. If that is not the case, then nothing is gained by optimizing the virtualization layer from a performance perspective.

Network Impact on Transaction is minimal, even though network utilization is high
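
A minimal sketch of what "measuring infrastructure response time inside your application" can look like, with hypothetical stand-ins for the database call and the business logic: each outbound I/O call is timed and attributed to the current transaction, so a slowdown in virtualized storage or network shows up directly as a growing share of the response time.

    import time
    from collections import defaultdict
    from contextlib import contextmanager

    breakdown = defaultdict(float)   # time per category for the current transaction

    @contextmanager
    def timed(category):
        start = time.monotonic()
        try:
            yield
        finally:
            breakdown[category] += time.monotonic() - start

    def handle_request():
        with timed("total"):
            with timed("db"):                          # infrastructure-facing I/O
                time.sleep(0.02)                       # stands in for a real database query
            with timed("cpu"):
                sum(i * i for i in range(100_000))     # stands in for business logic

    handle_request()
    io_share = 100.0 * breakdown["db"] / breakdown["total"]
    print(f"I/O accounts for {io_share:.0f}% of the response time")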

Diagnosing the virtualization layer
Of course, in the case of virtualization and private clouds you still need to diagnose an infrastructure response time problem once it has been identified. You measure the infrastructure response time inside your application. If you have identified a bottleneck, meaning it slows down or makes up a big portion of your response time, you need to relate that infrastructure response time back to your virtualized infrastructure: which resource slows down? From there you can use the metrics provided by VMware (or whichever virtualization vendor you use) to diagnose the root cause of the bottleneck. The key is that you identify the problem based on actual impact and then use the infrastructure metrics to diagnose its cause.

Layers add complexity
What this of course means is that you now have to manage performance on even more levels than before. It also means that you have to somehow manage which VMs run on the same physical host. We have already seen that the nature of the shared environment means that applications can impact each other. So a big part of managing the performance in a virtualized environment is to detect that impact and “tune” your environment in a way that both minimizes that impact and maximizes your resource usage and utilization. These are diametrically opposed goals!

Now what about clouds?
A cloud is by nature more dynamic than a "simple" virtualized environment. A cloud enables you to provision new environments on the fly and also to dispose of them again. This leads to spikes in utilization and thus to performance impact on existing applications. So in the cloud the "minimize impact vs. maximize resource usage" goal becomes even harder to achieve. Cloud vendors usually provide you with management software to manage the placement of your VMs. They will move them around based on complex algorithms to try to achieve the impossible goal of high performance and high utilization. The success is limited, because most of these management solutions ignore the application and only look at the virtualization layer to make these decisions. It's a vicious cycle and the price you pay for better utilizing your data center and faster provisioning of new environments.

Maybe a bigger issue is capacity management. The shared nature of the environment prevents you from making straightforward predictions about capacity usage on a hardware level. You get a long way by relating the requests done by your application at a transactional level with the capacity usage on the virtualization layer, but that is cumbersome and does not lead to accurate results. Then of course a cloud is dynamic and your application is distributed, so without a solution that measures all your transactions and auto-detects changes in the cloud environment you can easily make this a full-time job.
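
For what it's worth, that cumbersome correlation boils down to a back-of-the-envelope calculation; the numbers below are invented purely for illustration. You derive the CPU cost per transaction from the virtualization layer's counters and the application's transaction counts, then extrapolate to a forecast load.

    # Hypothetical figures: CPU seconds consumed by the application tier (from the
    # virtualization layer) and transaction count (from application monitoring),
    # both measured over the same one-hour window.
    cpu_seconds_used = 5400.0
    transactions = 120_000
    cpu_per_txn = cpu_seconds_used / transactions          # ~0.045 s of CPU per transaction

    forecast_txn_per_hour = 200_000
    needed_cpu_seconds = forecast_txn_per_hour * cpu_per_txn
    needed_vcpus = needed_cpu_seconds / 3600.0              # fully utilized vCPUs needed
    print(f"{cpu_per_txn * 1000:.1f} ms CPU per transaction, ~{needed_vcpus:.1f} vCPUs at forecast load")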

Another problem is that the only way to notice a real capacity problem is to determine whether infrastructure response time degrades and negatively impacts your application. Remember, utilization does not equal performance, and you want high utilization anyway! But once you notice capacity problems this way, it is too late to order new hardware.

That means that you not only need to provision for peak loads, effectively over-provisioning again, but you also need to take all those temporary and newly provisioned environments into account. A match made in planning hell.

Performance Management in a public cloud
First let me clarify the term public cloud here. While a public cloud has many characteristics, the most important ones for this article are that you don’t own the hardware, have limited control over it and can provision new instances on the fly.

If you think about this carefully you will notice immediately that you have fewer problems. You only care about the performance of your application and not at all about the utilization of the hardware – it's not your hardware, after all. That means there are no competing goals! Depending on your application, you add a new instance if response time degrades on a specific tier or if you need more throughput than you currently achieve. You provision on the fly, meaning your capacity management is done on the fly as well. Another problem solved. You still run in a shared environment and this will impact you, but your options are limited as you cannot monitor or fix this directly. What you can do is measure the latency of the infrastructure. If you notice a slowdown you can talk to your vendor. Most of the time you will not care and will simply terminate the old instance and start a new one if infrastructure response time degrades. Chances are the new instance is started on a less utilized server, and that's that. I won't say that this is easy. I also do not say that this is better, but I do say that performance management is easier than in private clouds.
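
A minimal sketch of that "replace the slow instance" tactic, assuming boto3 with configured credentials; the region and AMI id are placeholders, and measured_response_time() is a hypothetical hook into whatever APM tool reports per-instance response time. If an instance breaches a response time threshold it is terminated and a replacement is launched, which will likely land on a less utilized host.

    import boto3

    THRESHOLD_SECONDS = 1.0
    ec2 = boto3.client("ec2", region_name="us-east-1")   # placeholder region

    def replace_if_slow(instance_id, measured_response_time):
        """Terminate and replace an instance whose recent response time is too high."""
        if measured_response_time(instance_id) <= THRESHOLD_SECONDS:
            return None                                   # instance is fine, keep it
        ec2.terminate_instances(InstanceIds=[instance_id])
        launched = ec2.run_instances(
            ImageId="ami-0123456789abcdef0",              # placeholder AMI
            InstanceType="m5.large",
            MinCount=1,
            MaxCount=1,
        )
        return launched["Instances"][0]["InstanceId"]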

Conclusion
Private and public cloud strategies are based on similar underlying technologies, but that doesn't mean they are similar in terms of actual usage. In the private cloud, the goal is to become more efficient by dynamically and automatically allocating resources in order to drive up utilization, while also lowering the management cost of all those instances. The problem with this is that driving up utilization and having high performance are competing goals: the higher the utilization, the more the applications will impact one another. Reaching a balance is highly complex, and is made more complex by the dynamic nature of the private cloud.

In the public cloud, these competing goals are split between the cloud provider, who cares about utilization, and the application owner, who cares about performance. In the public cloud the application owner has limited options: he can measure application performance; he can measure the impact of infrastructure degradation on the performance of his business transactions; but he cannot resolve the actual degradation. All he can do is terminate slow instances and/or add new ones in the hope that they will perform better. In this way, performance in the public cloud is in fact easier to manage.

But whether it be public or private, you must actively manage performance in a cloud production environment. In the private cloud you need to maintain a balance between high utilization and application performance, which requires you to know what is going on under the hood. And without application performance management in the public cloud, application owners are at the mercy of cloud providers, whose goals are not necessarily aligned with their own.


More Stories By Michael Kopp

Michael Kopp has over 12 years of experience as an architect and developer in the Enterprise Java space. Before coming to Compuware APM dynaTrace he was the Chief Architect at GoldenSource, a major player in the EDM space. In 2009 he joined dynaTrace as a technology strategist in the center of excellence. He specializes in application performance management in large-scale production environments, with a special focus on virtualized and cloud environments. His current focus is how to effectively leverage Big Data solutions and how these technologies impact and change the application landscape.
