Integrated Load Test Analysis: Using Compuware APM Web Load Test and PureStack Technology

Andreas Grabner described how he used the Compuware APM PureStack technology to identify server-side performance issues during a recent load test run against the Compuware APM Community Portal, a production application used by our customers. He was able to quickly pinpoint the CPU bottleneck that caused the performance degradation in the server environment, leading to an almost immediate resolution of the issue.

Bridging the Gap between Ops and Apps Data by adding Context: One picture that shows the Hotspots of the “Horizontal” Transaction as well as the “Vertical” Stack.

But what about the external performance recorded during this load test? What would a customer have experienced if they had tried to access the site during this time? Well, at the peak of the test, I used WebPageTest to capture a video of the APM Community Homepage loading (NOTE: The video has been advanced to 50 seconds already).

So, the external performance degraded badly at the peak of the test – this isn’t a surprise given what Andreas already pointed out. But how can the person running the external load – in this case, me, using the Compuware APM Web Load Testing service – make use of the data captured from outside the firewall and the rich data set covering system/infrastructure health and its effect on user experience and application performance available from the Compuware APM PureStack Technology? This post will show how I used a subset of the PureStack data to build charts that helped correlate key events on the server side to performance events in the Web Load Test (WLT) data.
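
As a rough sketch of the kind of correlation work described in the rest of this post, the snippet below shows how two hypothetical per-sample exports (one from the external Web Load Test, one from the server-side PureStack metrics) could be aligned on a shared per-minute timeline. The file names, column names, and the pandas-based approach are illustrative assumptions, not the actual Compuware tooling.

    import pandas as pd

    # Hypothetical raw exports: one row per sample, each with a timestamp column.
    wlt = pd.read_csv("wlt_samples.csv", parse_dates=["timestamp"]).set_index("timestamp")
    stack = pd.read_csv("purestack_samples.csv", parse_dates=["timestamp"]).set_index("timestamp")

    # Bucket both data sets into one-minute intervals so that external and internal
    # measurements can be compared side by side on the same timeline.
    wlt_per_min = (wlt.resample("1min")
                      .agg({"response_time_s": "mean", "active_vus": "max",
                            "transaction_id": "count"})
                      .rename(columns={"response_time_s": "avg_wlt_response_s",
                                       "transaction_id": "transactions_per_min"}))
    stack_per_min = (stack.resample("1min")
                          .agg({"purepath_time_s": "mean", "web_cpu_pct": "mean",
                                "confluence_requests": "sum", "db_queries": "sum"})
                          .rename(columns={"purepath_time_s": "avg_purepath_s",
                                           "db_queries": "db_queries_per_min"}))

    # One table, one timeline: the basis for the combined views discussed below.
    combined = pd.concat([wlt_per_min, stack_per_min], axis=1)
    combined.to_csv("combined_per_minute.csv")
    print(combined.loc["2013-04-14 08:10":"2013-04-14 09:49"].head())

The later sketches in this post all read from this single, assumed combined_per_minute.csv table.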

I always like to start with the “Why?” of a load test. The goal of this load test was to determine if the APM Community Portal could handle a substantial increase in traffic, as it had just been designated as the central hub for product documentation and customer discussions. To be absolutely sure, the APM Community Portal team wanted to determine if the application could support up to 200 concurrent visitors, an increase of nearly 10X over its current peak traffic.

Achieving this load volume with WLT is easy. Doing it in a controlled way meant coming up with a plan that effectively tested the application while providing critical information at all stages of the test. The Portal team wanted a load test that ramped up to a maximum of 200 virtual users (VUs) over the course of 2 hours, with load distributed around the globe. This slow ramping of the load would help diagnose critical performance issues in a controlled fashion, as performance events can be directly tied to the amount of load and the activities occurring on the server at that time.
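
As a small illustration of what that ramp plan looks like numerically (the 5-minute step size is an assumption, not the actual WLT configuration), a linear ramp from 0 to 200 VUs over 120 minutes can be laid out like this:

    # Linear ramp: 0 to 200 virtual users over 2 hours.
    MAX_VUS = 200
    RAMP_MINUTES = 120
    STEP_MINUTES = 5  # hypothetical step size, for illustration only

    schedule = [(minute, round(MAX_VUS * minute / RAMP_MINUTES))
                for minute in range(0, RAMP_MINUTES + 1, STEP_MINUTES)]
    for minute, vus in schedule:
        print(f"minute {minute:3d}: {vus:3d} VUs")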

Test Ramping to 200 VUs used in the April 14 2013 APM Community Portal Load Test

In addition to ramping the load, the global distribution of load generation and the traffic types had to be determined. Not all of the virtual users would be executing the same test script: four test scripts were created, each testing a core part of the infrastructure. The Portal team decided on the load and test script distribution, which required only some very small adjustments before the configuration was finalized.
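
A hypothetical sketch of what such a distribution plan might look like as data; the script names, shares, and regions below are invented for illustration and are not the Portal team's actual configuration:

    # Invented example shares: four scripts, each exercising one part of the stack,
    # spread across three load-generation regions.
    SCRIPT_SHARE = {"browse_docs": 0.40, "search": 0.25,
                    "discussions": 0.20, "login_profile": 0.15}
    REGION_SHARE = {"us_east": 0.35, "eu_west": 0.35, "apac": 0.30}
    MAX_VUS = 200

    for script, script_share in SCRIPT_SHARE.items():
        for region, region_share in REGION_SHARE.items():
            vus = round(MAX_VUS * script_share * region_share)
            print(f"{script:>14} from {region:>8}: {vus:3d} VUs")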

Compuware APM Web Load Test Global Traffic and Script Distribution for APM Community Test Execution – April 14 2013

The test was run on a Sunday morning when traffic and customer impact would be low, which turned out to be a good thing. As load increased, performance degraded dramatically before the halfway point of the test, and transaction response times began to skyrocket.

Response Times Increasing as Load Increases until the site becomes so slow that it appears unresponsive to visitors

Transaction response times degraded right up until 09:49 EDT, when the system began reporting a nearly 100% error rate. Most of the performance analysis here focuses on the time between 08:10 and 09:49 EDT.

Using the PureStack Technology, Andreas detailed the process he went through to diagnose the server-side performance effects. The data captured inside the firewall aligns perfectly with the external data. By comparing the external response time of the transactions to the time required for the server PurePaths, a very clear and direct correlation can be drawn between the amount of load on the system, the effect on server processing times, and the degradation in performance experienced by customers.
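
One way to quantify that correlation, using the hypothetical per-minute table built in the first sketch (column names remain assumptions):

    import pandas as pd

    # Per-minute table with external and server-side timings plus the applied load.
    df = pd.read_csv("combined_per_minute.csv",
                     parse_dates=["timestamp"]).set_index("timestamp")

    # Pearson correlation over the analysis window (08:10 to 09:49 EDT); values close
    # to 1.0 mean the external slowdown tracks the server-side PurePath times and load.
    window = df.loc["2013-04-14 08:10":"2013-04-14 09:49"]
    print(window[["avg_wlt_response_s", "avg_purepath_s", "active_vus"]].corr())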

A comparative chart that shows Web Load Test Average Transaction Response Time v. VUs v. Average Server PurePath Time

One item not discussed in the previous assessment was the performance event detected between 08:50 and 08:55 EDT. During that period, both external transaction and server PurePath response times increased noticeably. Because this event stands alone in the load test, it was clear that its causes were different from those that eventually caused the overall failure of the system.

By aligning the total transactions per minute being executed by the load test system to the percentage of CPU being consumed at the web server layer, the cause of the 08:50-08:55 EDT spike becomes clear: something at the web server was suddenly consuming 100% of the total available CPU. This had the effect of decreasing the number of transactions that were processed, and caused the WLT response times and PurePath times to increase.
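
A short sketch of how that alignment can be scanned automatically, again against the hypothetical per-minute table: list the minutes where the web server CPU is effectively saturated, with the transaction rate shown alongside.

    import pandas as pd

    df = pd.read_csv("combined_per_minute.csv",
                     parse_dates=["timestamp"]).set_index("timestamp")

    # Minutes where the web server CPU is effectively pinned; the transactions-per-minute
    # column printed next to it shows the corresponding drop in throughput.
    saturated = df[df["web_cpu_pct"] >= 95]
    print(saturated[["web_cpu_pct", "transactions_per_min", "active_vus"]])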

A comparative graph showing Transactions per Minute v. VUs v. CPU Percentage for Web Server – April 14 2013 Load Test

While the eventual failure of the test can also be related to CPU exhaustion, this anomalous event seems completely unrelated to the volume of traffic occurring at that time. The timing indicated that a scheduled job, running either daily or hourly, was the cause of the spike. Finding scheduled jobs that have gone undetected or been forgotten by system administrators is not unusual during load tests. Digging deeper into the system revealed that CPU usage in the Atlassian/Confluence application layer, the software that controls much of the core functionality of APM Community, spiked almost exactly in the middle of the recorded issue, indicating that the job was related to something in this layer.

Atlassian Execution CPU Time during the April 14 2013 Load Test

What makes the integrated approach to load testing critical to those of us who have only had access to the external Web Load Test data in the past is that we can immediately draw correlations between events inside the datacenter and the performance effects we are capturing outside the firewall. By integrating a few key Web Load Test metrics (Average Response Time, Transactions per Minute, and Total VUs) with select PureStack metrics (the number of Confluence requests in the last 10 seconds and CPU percentages), the team quickly had in-depth information available to them throughout the load test.

Finding this high-load job was a bonus of the load test, which clearly pointed out that the system was undersized for the load the Portal team was expecting. But this conclusion could only be reached by correlating multiple layers of data into a coherent whole that gave the team the information they needed to identify critical issues.

The chart below shows how this would appear to someone monitoring the load test.

Comparative Web Load Test and PureStack Metrics – April 14 2013

In one chart, multiple critical metrics are available to identify potential problem hotspots. For example, while the ultimate application bottleneck is a critical issue to resolve, without the correlating data the event between 08:50 and 08:55 EDT may have been overlooked, leaving the Portal team with a potential user experience problem that could surface at a later date.

With all of this data available to teams running load tests, care should be taken not to drown them in a flood of data. Here, we took six key metrics and were easily able to show that the issue was a bottleneck at the web server CPU as traffic increased. These metrics, pulled together in the example after the list, were:

  1. WLT Response Time
  2. WLT Transactions per Minute
  3. Server side PurePath time
  4. CPU percentage on the web server
  5. Number of requests to the Confluence application layer
  6. The number of VUs deployed at each minute
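
A rough sketch of how those six metrics could be pulled into one chart from the hypothetical per-minute table used in the earlier sketches (the plotting approach and column names are assumptions, not the actual dashboard):

    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.read_csv("combined_per_minute.csv",
                     parse_dates=["timestamp"]).set_index("timestamp")

    fig, ax_seconds = plt.subplots(figsize=(11, 4))
    ax_seconds.plot(df.index, df["avg_wlt_response_s"], label="WLT response time (s)")
    ax_seconds.plot(df.index, df["avg_purepath_s"], label="Server PurePath time (s)")
    ax_seconds.set_ylabel("Seconds")

    # Second y-axis for the counts and percentages so everything shares one timeline.
    ax_counts = ax_seconds.twinx()
    ax_counts.plot(df.index, df["transactions_per_min"], linestyle="--", label="Transactions/min")
    ax_counts.plot(df.index, df["confluence_requests"], linestyle="--", label="Confluence requests")
    ax_counts.plot(df.index, df["web_cpu_pct"], linestyle=":", label="Web server CPU %")
    ax_counts.plot(df.index, df["active_vus"], linestyle=":", label="Active VUs")
    ax_counts.set_ylabel("Count / percent")

    fig.legend(loc="upper left")
    fig.suptitle("Web Load Test and PureStack metrics on one timeline")
    plt.show()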

Choosing 5-6 key metrics is the most critical element in this process. These metrics should be able to directly indicate problem areas or point the load test team in the right direction to begin resolving the issue. For example, the sudden decrease in requests to Confluence during the 08:50-08:55 EDT period did not give us the root cause on its own, but it immediately posed the question: “Why is this component suddenly showing signs of degradation?”
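
One way such a question could be made to surface automatically (still against the hypothetical per-minute table) is to compare a component's request rate with its own recent baseline:

    import pandas as pd

    df = pd.read_csv("combined_per_minute.csv",
                     parse_dates=["timestamp"]).set_index("timestamp")

    # Flag minutes where Confluence request counts fall to less than half of their
    # rolling 15-minute median; a drop like this is worth a question even before
    # the root cause is known.
    baseline = df["confluence_requests"].rolling("15min").median()
    drops = df[df["confluence_requests"] < 0.5 * baseline]
    print(drops[["confluence_requests", "web_cpu_pct"]])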

Another perspective would be to add in database statistics, as the database layer is often the cause of performance issues under heavy load. What is interesting in this case is that an amalgamated view of the load test data shows exactly the opposite – when response times and CPU % begin to spike, the number of database queries and the total time spent at the database layer decrease dramatically.

Another integrated view that includes database metrics, showing that the database is likely not an issue in this test.

This last chart provides the team with a key data point: at 09:05 EDT and 90 VUs, the application layer became so congested that it effectively stopped passing requests through to the database. At the same time, WLT response times crossed 20 seconds and the CPU percentage crossed 90%. With this integrated view, the Portal team now has a very clear picture of the end-to-end application and its effect on customers.
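
A brief sketch of how that congestion point could be pinpointed from the same hypothetical per-minute table: find the first minute where external response time and web server CPU cross the thresholds above, and report the load and database activity at that moment.

    import pandas as pd

    df = pd.read_csv("combined_per_minute.csv",
                     parse_dates=["timestamp"]).set_index("timestamp")

    # First minute where the site is effectively unusable: response time above 20 s
    # and web server CPU above 90%.
    congested = df[(df["avg_wlt_response_s"] > 20) & (df["web_cpu_pct"] > 90)]
    if not congested.empty:
        first = congested.index[0]
        print(f"Congestion point: {first} at {df.loc[first, 'active_vus']:.0f} VUs, "
              f"{df.loc[first, 'db_queries_per_min']:.0f} DB queries/min")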

We have shown two potential ways to combine PureStack and Web Load Test metrics into a complete picture of a load test. Your key metrics may not be the same as ours, and may include bytes in and out, disk I/O, memory usage, total web requests, third-party performance, or other metrics that are meaningful to your application. But with the PureStack Technology, integrating any of these data points directly with the Compuware APM Web Load Testing service becomes easy. PureStack allows you to link the external performance of the application under load to the server-side effects on key components, building a complete end-to-end model of performance for your application during load testing events.
