Integrated Cloud-based Load Testing and Performance Management

New Integration from Keynote and dynaTrace

Load testing has traditionally been done in-house with load-testing tools, using machines in your own test center to generate HTTP traffic against the application that needs to be tested for high transaction volumes. With agile development practices, shorter release cycles and an ever-growing number of users accessing web applications from more places around the world, in-house testing has reached its limits. Maintaining a load-testing infrastructure that supports tens or hundreds of thousands of users becomes costly. With rapidly changing applications, keeping test scripts up to date is also a growing challenge, binding many test resources to the sole task of script maintenance. And when tests run more frequently, analyzing the results keeps performance architects and engineers busy poring over graphs and log files to figure out which problems the most recent test uncovered.

Let’s summarize these problems/requirements:

  • It is important to run bigger loads than ever before, as our applications are accessed by more users around the globe
  • Besides just generating the load, we want to know how end users perceive performance from different locations around the globe
  • It is costly to own and maintain a test environment large enough to support these loads
  • It is time-consuming to constantly adapt test scripts to reflect the changes of every product iteration
  • It takes even experienced performance engineers or architects too long to analyze test results and identify the root cause of problems

Cloud-Based Load Testing with Integrated Application Performance Management
Cloud-based load testing solves many of these problems by providing high-volume tests from around the globe, at specific times, at a manageable cost. It does, however, place some requirements on the tested application, and a service like this must itself meet certain requirements in order to solve the problems discussed above:

  • The Application Under Test (AUT) must be accessible from the internet, as the transactions are generated from machines around the globe and not from within the local test environment. Companies usually use part of their production system during off-hours to host the version of the application to be tested. This allows running large-scale tests without having to maintain a replica of the production environment just for testing
  • The load-testing service must provide an easy way to create and update scripts to adapt to the changes of every product iteration. Otherwise too much time and effort goes into setting up tests.
  • The service must integrate with the performance management software that runs on the tested application. This allows correlating the data shown in load-testing reports (Response Times, Transaction Rates, Bandwidth Usage, …) with the data captured in the application infrastructure (Transaction Times, CPU, Memory, Exceptions, …); a minimal sketch of this kind of correlation follows below
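
To make that last requirement concrete, here is a minimal sketch of such a correlation. All class names, tags and timings are invented for illustration and do not reflect Keynote’s or dynaTrace’s actual data model:

```java
// Join client-side (load generator) timings with server-side (APM) timings
// via a shared request tag. All names and numbers here are hypothetical.
import java.util.HashMap;
import java.util.Map;

public class CorrelationSketch {
    record ClientSample(String tag, long totalMillis) {}   // measured at the load generator
    record ServerSample(String tag, long serverMillis) {}  // measured by the APM agent

    public static void main(String[] args) {
        ClientSample[] client = {
            new ClientSample("req-1", 1250), new ClientSample("req-2", 4100)
        };
        ServerSample[] server = {
            new ServerSample("req-1", 180), new ServerSample("req-2", 3900)
        };

        Map<String, Long> serverByTag = new HashMap<>();
        for (ServerSample s : server) serverByTag.put(s.tag(), s.serverMillis());

        // Client time minus server time approximates network and queueing overhead.
        // A large server-side share (req-2) points at the application, not the infrastructure.
        for (ClientSample c : client) {
            long appMillis = serverByTag.getOrDefault(c.tag(), 0L);
            System.out.printf("%s: total=%dms app=%dms network/queue=%dms%n",
                    c.tag(), c.totalMillis(), appMillis, c.totalMillis() - appMillis);
        }
    }
}
```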

Proof of Concept: Load Testing with Keynote integrated with Application Performance Management from dynaTrace
Together with Keynote’s Load Testing Consultants, we set up the following environment to showcase the benefits of an integrated solution of Cloud-Based Load Testing and Application Performance Management.

Step 1: Deploying the application
We deployed a 4-tier (2 Java and 2 .NET runtimes) eCommerce travel portal on a hosted virtual infrastructure so that it is accessible to the cloud-based load-testing service. We also installed and configured dynaTrace to manage this multi-tier, heterogeneous application in order to identify problems once we put load on the system.

Application Dependency between the 4-tier heterogeneous application

Step 2: Test Scripts and Keynote/dynaTrace Integration
Keynote modeled several use-case scenarios based on the testing requirements we had for our application. We ended up with use cases such as executing a specific search, accessing the last-minute offers page or purchasing a trip. dynaTrace provides an integration interface for load-testing and monitoring services that allows us to link every executed synthetic request with the transaction that dynaTrace traces on the application server when that request is handled by the application.
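
Integrations like this typically work by tagging each synthetic HTTP request with a header that the server-side agent recognizes, so the resulting server transaction can be linked back to the virtual user and test run that issued it. Here is a minimal sketch of the idea in plain Java; the header name “x-dynaTrace” and its field syntax are assumptions for illustration, not taken from this article:

```java
// Tag a synthetic request so a server-side agent could correlate it with the
// server transaction it triggers. Header name and field syntax are assumptions.
import java.net.HttpURLConnection;
import java.net.URL;

public class TaggedRequestSketch {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://aut.example.com/lastminute");  // hypothetical AUT endpoint
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();

        // Illustrative fields: VU = virtual user id, SI = source of the load,
        // TN = test name, PC = page context of this request.
        conn.setRequestProperty("x-dynaTrace",
                "VU=17;SI=Keynote;TN=TravelPortal_Ramp_Test;PC=LastMinuteOffers");

        long start = System.nanoTime();
        int status = conn.getResponseCode();  // sends the request
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("HTTP " + status + " in " + elapsedMs + " ms");
    }
}
```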

Step 3: Running a test
We decided to run a test with step-wise increasing load to figure out where the breaking point of our application is. We started with a load of 3000 sessions per hour running for 15 minutes and increased this load every 15 minutes to 6k, 9k and 12k sessions/hour. It turned out our application broke much faster than we anticipated :-)
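
To make the ramp concrete, here is a small back-of-the-envelope sketch that translates those phases into the session start rate the load generator has to sustain (the phase values are taken from the test described above):

```java
// Translate the ramp phases (sessions per hour) into per-second start rates
// and the approximate number of sessions each 15-minute phase contributes.
public class RampSchedule {
    public static void main(String[] args) {
        int[] sessionsPerHour = {3000, 6000, 9000, 12000};
        int phaseMinutes = 15;
        for (int phase = 0; phase < sessionsPerHour.length; phase++) {
            double perSecond = sessionsPerHour[phase] / 3600.0;
            int sessionsInPhase = sessionsPerHour[phase] * phaseMinutes / 60;
            System.out.printf("Phase %d: %5d sessions/h = %.2f sessions/s, ~%d sessions in %d min%n",
                    phase + 1, sessionsPerHour[phase], perSecond, sessionsInPhase, phaseMinutes);
        }
    }
}
```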

Step 4: Analyzing the Load Testing Report
When I log into the Keynote Load Testing Portal I start by looking at the load testing report that shows me the executed sessions, response times, page views and errors:

Keynote Load Testing Report showing an application problem when increasing the load to 6k sessions per hour

It is easy to see that – once we went from Phase 1 (3k sessions) to Phase 2 (6k sessions) – our application’s response times went through the roof, causing most of the simulated users to experience timeouts. A click on the Page Error graph shows that these errors are mainly timeouts or connection errors. The question now is: is this an application problem, or is it related to the infrastructure? Without insight into the application, these results could be interpreted in many different ways, e.g., that our hosting company doesn’t provide enough bandwidth. This is where Application Performance Management helps resolve these uncertainties.

Step 5: Looking at application performance data
I’ve created two dashboards that I use to analyze application performance while or after running a load test. The first one is an Infrastructure Dashboard where I display CPU and Memory Utilization of all 4 Application Runtimes that are involved:

The Dashboard shows us high memory and GC activity on our GoSpaceBackend which also leads to very high CPU Utilization

The red measure in the JVM Memory Usage graph indicates garbage collection time. The red in the CPU Usage graph indicates the maximum CPU usage of that JVM. The conclusion is therefore easy: high memory usage leads to high GC activity, which maxes out our CPU.
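
This cause-and-effect chain can be sanity-checked on any JVM using only the standard JDK management beans, independent of any APM product. A minimal probe might look like this:

```java
// Read heap usage and accumulated garbage-collection time via plain JDK APIs.
// While the collector runs, application threads make little or no progress,
// which is exactly the suspension the dashboards visualize.
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class GcPressureProbe {
    public static void main(String[] args) {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        System.out.printf("Heap: %d of %d MB used%n",
                heap.getUsed() >> 20, heap.getMax() >> 20);

        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: %d collections, %d ms accumulated collection time%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```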

The next Dashboard gives me insight into the application itself – with all the involved application layers and the individual transactions that dynaTrace analyzed coming from the Keynote Load Test:

Transaction Response Time on the Application Server and Breakdown into Application Layers proves that the slow response times are application-related

On the left of the Dashboard I placed a transaction overview of the individual use cases Keynote executed during the load test. It is easy to spot that once the load was ramped up to 6k sessions we saw a dramatic increase in response time on our application server. That answers our first question: it is not an infrastructure problem with our web hosting but an application-specific problem. Based on the memory and CPU measures we looked at earlier, we can already guess that garbage collection is the main contributor. The performance breakdown on the bottom right also highlights which application layers contributed the most to the transaction response time. A double-click on that graph gives us a close-up of this data:

Our Persistence Layer, EJBs and JMS are the main contributors to the application’s response time

Step 6: Drilling deeper into the problem
dynaTrace captured every single request that was executed while running the load test. Its PurePath technology is the enabler of the dashboards we looked at earlier. The next step is to identify what is really going on in the application and where the main impact of the increased load lies. The next dashboard I created gives me a better overview of the application architecture, showing me which methods are called most often and how well they execute. I am also interested in database activity as well as the individual web requests that were slow:

Detailed Overview of where my application hot spots are including application layers, methods, web requests and database statements

The dashboard again shows us that the primary application layer impacted is our persistence layer. It is also very interesting that the slowest URL is a web service hosted by our back-end application server and that we have a very high number of database statements coming from only a few web requests. This information is really valuable for the application architects who need insight into application dynamics under heavy load.
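
A typical cause of such statement counts is the classic N+1 query pattern. The following JDBC sketch is purely hypothetical – the schema is invented and an in-memory H2 database stands in for the portal’s real data source – but it shows both the pattern and the single-join fix:

```java
// N+1 pattern: one query for the parent rows, then one extra query per row.
// Hypothetical schema; an in-memory H2 database stands in for the real one.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class NPlusOneSketch {
    public static void main(String[] args) throws Exception {
        Connection con = DriverManager.getConnection("jdbc:h2:mem:demo");
        try (Statement ddl = con.createStatement()) {
            ddl.execute("CREATE TABLE trips(id BIGINT PRIMARY KEY)");
            ddl.execute("CREATE TABLE bookings(id BIGINT, trip_id BIGINT)");
            ddl.execute("INSERT INTO trips VALUES (1), (2), (3)");
        }

        // N+1: with hundreds of trips and thousands of concurrent users this
        // multiplies into the statement counts the dashboard shows.
        try (Statement st = con.createStatement();
             ResultSet trips = st.executeQuery("SELECT id FROM trips")) {
            while (trips.next()) {
                try (PreparedStatement ps =
                         con.prepareStatement("SELECT * FROM bookings WHERE trip_id = ?")) {
                    ps.setLong(1, trips.getLong("id"));
                    ps.executeQuery().close();  // one extra round trip per trip
                }
            }
        }

        // The fix: a single join fetches the same data in one statement.
        try (Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT t.id, b.id AS booking_id FROM trips t " +
                 "LEFT JOIN bookings b ON b.trip_id = t.id")) {
            while (rs.next()) { /* map rows to domain objects */ }
        }
    }
}
```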

Step 7: Show me the root cause of these slow-running transactions
Not only can we get an overview of which requests were slow and how many methods or database statements were executed; we can also look into individual transactions and compare them to see where a slow-running and a fast-running transaction differ. dynaTrace allows me to drill down to those 718 transactions that executed the slow-running web service, and I can inspect each one individually:

All transactions (PurePaths) available for analysis. Selecting the slowest shows me where this web service got called and where time was spent

Looking at the duration, CPU duration and suspension duration (garbage collection) really highlights the problem we have: suspension time is very high for these transactions, significantly impacting the overall execution time.
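
Without an APM tool, the suspension share of a single transaction can be roughly approximated with plain JDK APIs by sampling accumulated GC time around the work. This is a coarse sketch of what such a suspension-duration measure captures, not of how dynaTrace computes it:

```java
// Approximate the GC suspension share of a piece of work by sampling the
// accumulated collection time before and after it. Coarse: it attributes all
// JVM-wide GC time in that window to this one "transaction".
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.ArrayList;
import java.util.List;

public class SuspensionEstimate {
    static long totalGcMillis() {
        long total = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            total += Math.max(gc.getCollectionTime(), 0);  // -1 means unsupported
        }
        return total;
    }

    public static void main(String[] args) {
        long gcBefore = totalGcMillis();
        long start = System.nanoTime();

        // Stand-in for the real transaction: allocate heavily to provoke GC.
        List<byte[]> buffers = new ArrayList<>();
        for (int i = 0; i < 50_000; i++) {
            buffers.add(new byte[4096]);
            if (buffers.size() > 10_000) buffers.clear();
        }

        long durationMs = (System.nanoTime() - start) / 1_000_000;
        long suspensionMs = totalGcMillis() - gcBefore;
        System.out.printf("duration=%d ms, ~suspension=%d ms (%.0f%%)%n",
                durationMs, suspensionMs, 100.0 * suspensionMs / Math.max(durationMs, 1));
    }
}
```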

I can also pick one transaction that ran very slowly and one that ran fast, and let dynaTrace compare the two and highlight the differences:

Comparison shows the structural and timing differences between two transactions, making it easy to spot the actual differences

Not only do I see how garbage collection impacts the execution time of individual methods and the overall transaction; it also shows me how differently the same transaction executes in case of an error (such as a thrown abort exception) – which brings me to one additional dashboard I like to look at. This one includes exceptions, logging messages and an overview of the garbage-collection runs affecting individual methods:

Exceptions including full stack traces, log messages with the context of where they were logged, and an overview of suspended methods

Step 8: Hand off the data
Looking at this data was easy, as I simply review these dashboards after the load test is finished. The dashboards already helped identify several hot spots, e.g., high memory consumption by the back-end web services causing high GC activity, too many SQL statements per request, many hidden exceptions that never made it to a proper log message, …
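
Those hidden exceptions typically come from catch blocks that swallow the error entirely, as in the hypothetical sketch below. Instrumentation that records every thrown exception surfaces them even though nothing ever reaches a log file:

```java
// A swallowed exception: the failure is caught, never logged and never
// rethrown, so only instrumentation that records thrown exceptions sees it.
// The method and input are invented for illustration.
public class SwallowedExceptionSketch {
    public static void main(String[] args) {
        System.out.println("Offer price: " + lookupPrice("last-minute-rome"));
    }

    static double lookupPrice(String offerId) {
        try {
            return Double.parseDouble("n/a");  // fails for this input
        } catch (NumberFormatException e) {
            return 0.0;  // silently wrong fallback; no log entry, no rethrow
        }
    }
}
```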

dynaTrace makes this captured data available to the engineering team so they can resolve these problems. They can access the data directly in the dynaTrace environment used to capture it, or export individual PurePaths (or all of them) into a dynaTrace Session file that can be exchanged via email or instant messenger, or attached to a bug ticket.

Concept Proven: Cloud-Based Load Testing with APM Is Ready for Agile Development

The problems/requirements listed at the beginning of this blog are solved/met by the integrated solution from Keynote and dynaTrace:

  • Keynote runs large-scale load tests by driving load from many different locations around the globe
  • The globally distributed load generation allows us to identify local content-delivery problems (slow network connections, misconfigured CDNs, …)
  • The costs stay under control, as you pay only for the load test and not for maintaining your own load-testing infrastructure that would sit idle most of the time
  • Keynote makes it easy to create scripts and offers services to do the scripting for you
  • dynaTrace automatically highlights the problems identified during the load test. High-level analysis through dashboards doesn’t require highly skilled performance architects. The fine-grained data captured, however, gives performance engineers and software architects actionable data without digging through log files or manually correlating a multitude of different performance metrics
  • No change to your application is required to use this integration


More Stories By Andreas Grabner

Andreas Grabner has more than a decade of experience as an architect and developer in the Java and .NET space. In his current role, Andi works as a Technology Strategist for Compuware and leads the Compuware APM Center of Excellence team. In his role he influences the Compuware APM product strategy and works closely with customers in implementing performance management solutions across the entire application lifecycle. He is a frequent speaker at technology conferences on performance and architecture-related topics, and regularly authors articles offering business and technology advice for Compuware’s About:Performance blog.
