Performance Test Automation for GWT and SmartGWT

The next “evolutionary” step is to monitor performance for every end user

This article is based on the experience of Jan Swaelens, Software Architect at Sofico, who is responsible for automated performance testing of the company's new web platform based on GWT and SmartGWT. Sofico specializes in software solutions for automotive finance, leasing, fleet and mobility management companies.

Choosing GWT and SmartGWT over Other Technologies
About two years ago Sofico started a project to replace its rich desktop application (built with PowerBuilder) with a browser-based rich Internet application. The developers selected GWT and SmartGWT as core technologies to leverage their in-house Java expertise, and because they believed in the potential of these (fairly) new technologies. Their goal was to replace the existing desktop client with a new one that runs in a browser. Their eyes were set on a better user experience and a high degree of customization, giving their customers the flexibility and adaptability they need to run their businesses.
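
To illustrate why GWT appeals to a Java shop: the client UI is written entirely in Java and compiled to JavaScript by the GWT compiler. The following minimal sketch (a hypothetical module, not Sofico's actual code) shows a GWT entry point, the browser-side equivalent of a main method:

```java
import com.google.gwt.core.client.EntryPoint;
import com.google.gwt.event.dom.client.ClickEvent;
import com.google.gwt.event.dom.client.ClickHandler;
import com.google.gwt.user.client.Window;
import com.google.gwt.user.client.ui.Button;
import com.google.gwt.user.client.ui.RootPanel;

// Hypothetical entry point: GWT compiles this Java class to JavaScript
// that runs in the browser, so the team can stay in one language.
public class HelloModule implements EntryPoint {
  @Override
  public void onModuleLoad() {
    Button button = new Button("Say hello");
    button.addClickHandler(new ClickHandler() {
      @Override
      public void onClick(ClickEvent event) {
        Window.alert("Hello from Java, running as JavaScript!");
      }
    });
    RootPanel.get().add(button); // attach the widget to the host page
  }
}
```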

Need End-to-End Visibility into GWT Black Box
GWT proved a good choice, as the team could soon deliver a first basic version. The problems started when they tried to figure out what was actually going on inside these frameworks in order to analyze the performance problems reported by the first testers.

Developers started off with the "usual suspects": the browser-specific dev tools for Chrome, Firefox and IE. Back then, these built-in tools lacked first-class JavaScript performance analysis capabilities, which made it difficult to analyze a complex browser application. There were also no integration capabilities with server-side performance analysis tools such as JProfiler that would have allowed them to analyze the impact of, and correlation between, server-side and client-side GWT code. Taking performance seriously, the performance automation team came up with some key requirements for additional tooling and process support.

Requirement #1: Browser-to-Database Visibility to Understand What's Going On
Do you know what really happens when a page of a GWT application is loaded? No? Neither did the developers at Sofico. Getting insight into this "black box" was therefore the first requirement, because they wanted to understand: what really happens in the browser, how many resources are downloaded from the web server, which transactions make it to the app server, which requests are cached and where, and how the business logic and data access layer implementations impact end-user experience.
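
To make "which transactions make it to the app server" concrete, here is a minimal sketch of a GWT RPC round trip. Each asynchronous call like this turns into an HTTP POST that an end-to-end tracing tool can follow from the browser into the business logic. The QuoteService names and the contract ID are hypothetical, not Sofico's actual services:

```java
import com.google.gwt.core.client.GWT;
import com.google.gwt.user.client.Window;
import com.google.gwt.user.client.rpc.AsyncCallback;
import com.google.gwt.user.client.rpc.RemoteService;
import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;

// Hypothetical RPC interface: the client-side stub serializes the call
// into an HTTP POST that lands on a servlet on the application server.
@RemoteServiceRelativePath("quotes")
interface QuoteService extends RemoteService {
  String getQuote(String contractId);
}

// GWT requires a matching asynchronous counterpart for the client side.
interface QuoteServiceAsync {
  void getQuote(String contractId, AsyncCallback<String> callback);
}

public class QuoteClient {
  public void loadQuote() {
    QuoteServiceAsync service = GWT.create(QuoteService.class);
    service.getQuote("C-4711", new AsyncCallback<String>() {
      @Override
      public void onSuccess(String quote) {
        Window.alert("Quote: " + quote); // rendered back in the browser
      }
      @Override
      public void onFailure(Throwable caught) {
        // This client-to-server hop is what a tracing tool correlates end to end.
        GWT.log("RPC failed", caught);
      }
    });
  }
}
```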

The following screenshots show the current implementation using dynaTrace (sign up for the free trial), which gives the developers full visibility from the browser to the web, application and database servers. The Transaction Flow visualizes how individual requests or page loads are serviced by the different application tiers.

End-to-End Visibility gave the developers more insight into how their GWT Application really works and what happens when pages are loaded or users interact with certain features.

A great view for front-end developers is the timeline view, which shows what happens in the browser when a page gets loaded, when a user clicks a button that executes AJAX requests, or when background JavaScript continuously updates the page. It gives insight into performance problems in JavaScript code and inefficient use of resources (JS, CSS, images...), and it highlights whether certain requests simply take a very long time in the server-side implementation:

Developers love the timeline view: it makes it easy to see what work the browser is doing and where the performance hotspots are, and it even provides screenshots at certain events
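
The timeline correlates user actions with the AJAX requests they trigger. As a rough illustration of the client-side timing involved, the sketch below measures a single AJAX call with GWT's own RequestBuilder and Duration classes (the /data/fleet.json URL is made up); a tool like dynaTrace captures such timings automatically, without this kind of manual instrumentation:

```java
import com.google.gwt.core.client.Duration;
import com.google.gwt.core.client.GWT;
import com.google.gwt.http.client.Request;
import com.google.gwt.http.client.RequestBuilder;
import com.google.gwt.http.client.RequestCallback;
import com.google.gwt.http.client.RequestException;
import com.google.gwt.http.client.Response;
import com.google.gwt.http.client.URL;

public class TimedAjaxCall {
  public void fetchFleetData() {
    final Duration timer = new Duration(); // starts counting on creation
    RequestBuilder builder =
        new RequestBuilder(RequestBuilder.GET, URL.encode("/data/fleet.json"));
    try {
      builder.sendRequest(null, new RequestCallback() {
        @Override
        public void onResponseReceived(Request request, Response response) {
          // Elapsed time covers network plus server processing for this call.
          GWT.log("GET /data/fleet.json: status " + response.getStatusCode()
              + " in " + timer.elapsedMillis() + " ms");
        }
        @Override
        public void onError(Request request, Throwable exception) {
          GWT.log("AJAX call failed after " + timer.elapsedMillis() + " ms", exception);
        }
      });
    } catch (RequestException e) {
      GWT.log("Could not send request", e);
    }
  }
}
```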

The remaining requirements are only listed below; to read more about them, please see the full article.

Requirement #2: JavaScript Performance Data to Optimize Framework Usage

Requirement #3: Correlated Server-Side Performance Data

Requirement #4: Automation, Automation, Automation

Next Step: Real User Monitoring
Giving developers the tools they need to build fast, optimized websites is great. Having a test framework that automatically verifies that performance metrics are met is even better. Ultimately, you also want to monitor the performance of your real end users. The next "evolutionary" step is therefore to monitor performance for every end user, across all geographical regions and all the browsers they use. The following dashboard provides a high-level analytics view of actual users. If problems show up for specific regions, browser types, or particular web site features, you can drill down to the JavaScript error, long-running method, problematic SQL statement or thrown exception.
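
Agent-based real user monitoring captures errors and timings from every visitor automatically. To give a feel for what client-side error capture involves, here is a minimal, hypothetical sketch using GWT's built-in hook for uncaught exceptions; a production RUM solution would report far richer context (region, browser, user action) without any hand-written code:

```java
import com.google.gwt.core.client.EntryPoint;
import com.google.gwt.core.client.GWT;

public class MonitoredModule implements EntryPoint {
  @Override
  public void onModuleLoad() {
    // Catch any exception that escapes event handlers after module load.
    GWT.setUncaughtExceptionHandler(new GWT.UncaughtExceptionHandler() {
      @Override
      public void onUncaughtException(Throwable e) {
        // In a real setup you would POST this to a monitoring endpoint
        // (e.g. via RequestBuilder) instead of just logging it.
        GWT.log("Uncaught client-side exception", e);
      }
    });
    // ... rest of the application bootstrap ...
  }
}
```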

After test automation comes production: you also want to monitor your real users and catch problems that were not found in testing

Read more and test it yourself

If you want to analyze your own web site - whether it is implemented in GWT or any other Java, .NET or PHP framework - sign up for the dynaTrace Free Trial (click on "try dynaTrace for free") and get 15 days of full-featured access to the product.

Also, here are some additional resources you might be interested in:

If you happen to be a Compuware APM/dynaTrace customer, also check out the test automation features of dynaTrace on our APM Community Portal: Test Automation Video

About the Author

Andreas Grabner has been helping companies improve their application performance for 15+ years. He is a regular contributor within the Web Performance and DevOps communities and a prolific speaker at user groups and conferences around the world. Reach him at @grabnerandi.


