Considerations for Choosing a Cloud Load Testing Solution

How to pick a tool to load test in the cloud

In a previous post, I discussed how load testing with the cloud can enable your team to conduct more efficient and more realistic large-scale tests while saving you (time and) money. With all this in mind, how do we go about choosing the correct cloud load testing solution?

There are many cloud load testing solutions that will enable you to make use of the cloud. However, very few enable you to capitalize on the opportunities of load testing in the cloud. As I've discussed, load testing with the cloud offers clear advantages over traditional load testing in certain circumstances, but the tools you use are even more important to the quality of your tests.

When considering a cloud testing solution, ask the following questions:

  • To what extent does the solution integrate with the cloud?
  • Will the solution enable us to conduct realistic tests?
  • Does the solution support unified tests inside and outside the firewall?
  • Is the solution easy to use, or will we spend weeks learning and configuring it?
  • Does the solution include full-featured reporting and decision-making modules to help our team make the most of the results?
  • Does the solution support the technologies we used to build the application?

Much of this is outlined in the Neotys Cloud Whitepaper, but for the purposes of this blog, let's briefly talk about each of these:

Integration with the Cloud Platform
If you choose a solution that is not integrated with one or more cloud platforms, there are several manual steps you'll need to perform on your own. First, you'll need to learn how each platform you'll be using works, including its limitations and constraints. Second, you'll need to build, maintain and access the machines you wish to use.

Load testing solutions that offer integration with the cloud simplify and accelerate the steps needed to use the cloud infrastructure. With the proper solution, connecting to and utilizing the cloud should be possible with just a few mouse clicks.
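
To make the contrast concrete, here is a minimal sketch (assuming AWS and the boto3 library; the AMI ID, instance type and key pair name are placeholders) of the manual provisioning work that an integrated solution hides behind those few clicks:

```python
# A sketch of manually provisioning cloud load generators: start N instances,
# wait for them, and collect their public IPs so a controller can reach them.
# All identifiers below are placeholders, not real resources.
import boto3

def provision_load_generators(count=5):
    ec2 = boto3.resource("ec2", region_name="us-east-1")
    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder: image with your load-generator agent
        InstanceType="c5.large",           # placeholder sizing
        MinCount=count,
        MaxCount=count,
        KeyName="load-test-key",           # placeholder key pair
    )
    for inst in instances:
        inst.wait_until_running()          # block until the VM is up
        inst.reload()                      # refresh to pick up the public IP
    return [inst.public_ip_address for inst in instances]

# ...and you still have to install and configure the agent, open firewall
# ports, and remember to terminate everything so you stop paying for it.
```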

Realistic Load Tests
Simply moving to the cloud is not always enough to ensure the most realistic tests. Real users often have access to less bandwidth than a load generator in a cloud data center. With a slower connection, the real user will have to wait longer than the load generator to download all the data needed for a web page or application. This has major implications: your performance results won't fully reflect what your real users will experience (here we go again with inaccurate load testing).

When choosing a load testing solution, look for one that provides a bandwidth simulation feature that limits bandwidth to ensure that the virtual users download the content of the web application at a realistic rate. This capability is particularly important when testing mobile applications, because mobile devices typically operate with less bandwidth than laptops and desktops.
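
As a back-of-the-envelope illustration (the page size and connection speeds below are invented, not measurements), here is how much the same page transfer stretches out at different bandwidths, which is exactly the gap a bandwidth simulation feature is meant to close:

```python
# The same 2 MB page "downloads" very differently over a data-center link,
# home broadband, and a 3G mobile connection. Figures are illustrative only.
PAGE_SIZE_BITS = 2 * 1024 * 1024 * 8  # 2 MB page

profiles_mbps = {
    "cloud load generator (unthrottled)": 1000,
    "home broadband": 20,
    "3G mobile": 2,
}

for name, mbps in profiles_mbps.items():
    seconds = PAGE_SIZE_BITS / (mbps * 1_000_000)
    print(f"{name:>36}: ~{seconds:.2f} s transfer time")
```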

Unified Lab Testing and Cloud Testing
While including the cloud in our load testing is obviously a huge advantage, it does not mean that we should exclude testing from within our own firewall. Both types of testing should be included - if we are only doing one or the other, we are again running into a disadvantage. The same disadvantage applies if we use different solutions for these different tests.

When choosing a load testing solution, you should make sure that it supports lab testing so you can confirm the performance of your application(s) internally, before you introduce additional variables by testing over the internet. This allows you to shake out any early performance issues with smaller loads before you execute tests from the cloud. Finding these issues earlier in the application's lifecycle will lower costs associated with not only the cloud but also those attributed to "late stage bugs".

Making sure that you use a single solution for both "internal" (within the lab and/or firewall) and "external" (via The Cloud) testing allows your engineers to reuse scripts across both types of tests. If you choose different solutions, you may have to create a new set of scripts for your external testing, costing you both time and money.
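
As a hypothetical sketch of what that reuse looks like (the Scenario structure and field names are illustrative, not any particular product's API), the scenario definition stays the same and only the load-generator deployment details change between an internal run and a cloud run:

```python
# One scenario definition, two environments: the test logic is reused and
# only the target URL, scale, and generator locations differ.
from dataclasses import dataclass, field

@dataclass
class Scenario:
    name: str
    base_url: str
    virtual_users: int
    generators: list = field(default_factory=list)  # where the load is produced

lab_run = Scenario(
    name="checkout-flow",
    base_url="https://internal-staging.example.test",
    virtual_users=200,
    generators=["lab-lg-01", "lab-lg-02"],           # inside the firewall
)

cloud_run = Scenario(
    name=lab_run.name,                               # same script, bigger stage
    base_url="https://www.example.test",
    virtual_users=5000,
    generators=["aws:us-east-1", "aws:eu-west-1"],   # cloud load generators
)
```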

Ease of Use
As every test engineer knows (not just the performance testers, the functional testers can relate too!), testing is almost always performed under tight time constraints. Delays in development or other phases earlier in the lifecycle result in less time for the test engineers to do their jobs. The pressure is on to deliver results as quickly as possible. With this kind of stress, there is no place for a solution that is hard to use. Finding a tool that is easy to use is imperative. Not only does it save time and money, but it will allow you to get more testing done in the time you have - ensuring a higher-quality application. While an exhaustive list of "ease of use" features might be best suited for another blog article, some key things you should look for are:

  • Easy recording of a virtual user profile (preferably in one click).
  • The ability to easily define advanced behaviors (with structures such as conditions and loops) via a graphical interface, without a need for writing code or test scripts.
  • Automatic handling of session parameters. This is probably the most challenging and time-consuming part of developing load testing user profiles. Find a tool that does this for you automatically (a rough sketch of the manual work it saves appears after this list).
  • Easy results comparisons. Generating graphs and charts from large amounts of raw data can be excruciating - a tool that automates this is a must.
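
To show why automatic session handling matters, here is a rough sketch (using Python's requests library, with a placeholder URL and form field names) of the manual correlation work you would otherwise repeat for every dynamic value in the application:

```python
# Capture a dynamic token from one response and replay it on the next request.
# URLs, field names, and credentials below are placeholders.
import re
import requests

session = requests.Session()

# Step 1: the login page embeds a token that changes on every visit,
# so a recorded value is useless on playback.
login_page = session.get("https://app.example.test/login")
match = re.search(r'name="csrf_token" value="([^"]+)"', login_page.text)
csrf_token = match.group(1) if match else ""

# Step 2: the live token must be re-injected into the next request.
session.post(
    "https://app.example.test/login",
    data={"user": "vu_001", "password": "secret", "csrf_token": csrf_token},
)
# Multiply this by every dynamic ID in the application and you see why
# automatic handling saves so much scripting time.
```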

Analysis, Monitoring, and Reporting
Creating realistic scenarios and running comprehensive load tests is only part of the story. The final step, of course, is to understand how your application behaved while it was subjected to the test load. Did it meet performance requirements? If not, why not? To answer these questions, you need a powerful set of tools for analyzing the data generated during the test.

Make sure you choose a solution that allows you to easily analyze the collected data, creating actionable reports that describe the performance of the tested application. I've seen too many test engineers spend hours, if not days, taking raw data from a load testing tool and pushing it through Excel or a statistical package to create graphs, charts and reports. A tool that allows for easy analysis not only makes this process pain-free but can also increase collaboration among the different stakeholders in the application and its infrastructure.

Making sure your load testing solution includes a comprehensive monitoring system is also essential when you need to find the root causes of a problem. If you are only looking at performance measurements collected from the end user's point of view, you're missing half the picture. A monitoring system that shows you what was happening within the infrastructure of your web application is critical for identifying root issues. Moreover, the selected load testing solution should allow for easy correlation between data collected from the end user's point of view and data collected from the back-end servers. Without this ability, results analysis becomes extremely cumbersome as well as potentially inaccurate. Too many times in my load testing career I've tried to correlate performance data collected from a load testing tool with data collected from separate application server and database server monitoring tools - an almost impossible task.
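
As a small illustration of the correlation I'm describing (using pandas, with invented sample numbers), lining up end-user response times and back-end metrics on a shared timeline is what makes the root cause jump out:

```python
# Join end-user response times with back-end CPU samples by timestamp so a
# response-time spike can be matched to what the servers were doing.
import pandas as pd

client = pd.DataFrame({
    "timestamp": pd.to_datetime(["12:00:00", "12:00:30", "12:01:00", "12:01:30"]),
    "response_time_s": [0.8, 0.9, 4.2, 4.5],        # the end-user view
})
server = pd.DataFrame({
    "timestamp": pd.to_datetime(["12:00:00", "12:00:30", "12:01:00", "12:01:30"]),
    "db_cpu_pct": [35, 40, 96, 98],                  # the back-end view
})

merged = pd.merge_asof(client.sort_values("timestamp"),
                       server.sort_values("timestamp"),
                       on="timestamp")
print(merged)  # the slow pages line up with the database CPU saturation
```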

Support for Web Technologies
This one is pretty simple. If your application has been developed using more advanced web technologies, make sure the solution you choose has the proper support. With applications built with Adobe Flex, Microsoft Silverlight, Real-Time Messaging Protocol (RTMP), AJAX push technologies, etc. becoming more and more prevalent, the proper technology support is critical. Without this support, effectively testing the performance of your application is next to impossible. I'm sure that any of you that have tried to load test a Siebel application are nodding your heads as you read this.

While weighing the topics above in your search for a cloud load testing solution, you may also want to consider using MULTIPLE cloud computing providers. There are several advantages to this approach.

First, multiple providers may allow you to test from more geographical regions. Earlier, we talked about how testing from multiple regions can provide a more realistic test scenario. With this in mind, combining the regions available from multiple providers can lead to an even MORE realistic scenario. In other words, if a single provider does not give you the geographical coverage you need to emulate (based on where the real-world users of your application "live"), aggregating regions from multiple providers can be very useful.

Second, if you are executing exceptionally large scale tests, engaging multiple providers simultaneously allows you to bypass any limitations that a single provider may place on bandwidth or the number of machines in use.

Finally, using multiple cloud providers enables you to detect potential network issues at the cloud provider level. Let's say that during the results analysis of a particular test run you notice significantly worse performance measurements from one provider's load generators while all the other load generators show acceptable performance. With this data, you can safely conclude that there is a problem (temporary or not) with that provider and not with your application. If you had locked yourself into a single cloud provider, you might be limited in your ability to conduct accurate large-scale tests.
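
A simple sketch of that outlier check (the numbers are invented): compare each provider's aggregate response times against the group and flag the one that stands apart.

```python
# Flag a provider whose load generators report far worse response times than
# the rest; that points at the provider's network rather than the application.
from statistics import median

avg_response_by_provider = {
    "provider_a": 1.1,   # seconds, averaged across that provider's generators
    "provider_b": 1.2,
    "provider_c": 4.8,   # suspiciously slow
}

baseline = median(avg_response_by_provider.values())
for provider, avg in avg_response_by_provider.items():
    if avg > 2 * baseline:
        print(f"{provider}: {avg:.1f}s vs median {baseline:.1f}s -> likely provider-side issue")
```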

More Stories By Steve Weisfeldt

Steve Weisfeldt is a Senior Performance Engineer at Neotys, a provider of load testing software for Web applications. Previously, he has worked as the President of Engine 1 Consulting, a services firm specializing in all facets of test automation. Prior to his involvement at Engine 1 Consulting, he was a Senior Systems Engineer at Aternity. Prior to that, Steve spent seven years at automated testing vendor Segue Software (acquired by Borland). While spending most of his time at Segue delivering professional services and training, he was also involved in pre-sales and product marketing efforts.

Having worked in the load and performance testing space since 1999, Steve has been involved in load and performance testing projects of all sizes, in industries that span the retail, financial services, insurance and manufacturing sectors. His expertise lies in enabling organizations to optimize their ability to develop, test and launch high-quality applications efficiently, on-time and on-budget. Steve graduated from the University of Massachusetts-Lowell with a BS in Electrical Engineering and an MS in Computer Engineering.
