About the Author

Andreas Grabner has been helping companies improve their application performance for 15+ years. He is a regular contributor within the Web Performance and DevOps communities and a prolific speaker at user groups and conferences around the world. Reach him at @grabnerandi.

Premium AJAX Edition 3 Extensions – Next Generation Web Performance Optimization – Part II


If you are serious about Web Development, chances are you are working on Web 2.0 applications that leverage several JavaScript frameworks, make XHR calls to the server to retrieve dynamic content, and include 3rd-party content such as ads or Social Network plugins. You probably also have Selenium, WebDriver or other functional tests in place that get executed in your Continuous Integration environment. If any of this is true, you likely want to automate your web performance optimization efforts, as it is too complex and inefficient to verify performance manually on all your pages across all your supported browsers. dynaTrace offers premium extensions to the free dynaTrace AJAX Edition that can accelerate these tasks through enterprise-class automation. Let’s look at the main capabilities this upgrade gives you.

Web 2.0 Agnostic

“Traditional” Web Pages were page-based: every click on a link usually caused a full page reload with a new URL. An example is a traditional eCommerce site where you start on the Home Page and then click through the individual product categories. Every click results in a new page request showing the products of that particular category, with the URL reflecting the category you just clicked. When optimizing page load times for such page-based applications, tools like YSlow, PageSpeed and dynaTrace AJAX Edition are a perfect fit as they analyze activities per visited URL.

Page Based Web Applications load a new page for almost every user interaction


In modern “Web 2.0” applications it is more common to leverage JavaScript, XHR and DOM Manipulations whenever the user interacts with a page. Instead of loading a new page for every user interaction JavaScript loads additional information from the Web Server and merges this into the current page. A good example is a Google Search. Let’s open the Google Start Page. Now start entering your keyword. The new “Instant Search” feature of Google not only gives you suggested keywords in a drop down box but also shows you the instant search result of the partial keyword you entered so far. This all happens without forcing the browser to load a different page – we are still on the main Google URL.

Modern Web 2.0 Application staying on the same URL while users execute actions


Google Search is just one example. Another one would be Google Mail or your Online Banking Service. In these Web Applications you usually remain on a single URL. When executing actions (through mouse or keyboard) JavaScript handlers take care of executing these actions without loading a new page.

Staying on a single URL for a sequence of actions makes it impossible to verify common Best Practices such as the number of downloaded resources or the size of the downloaded content on a per-page basis. For example: downloading no more than 5 JavaScript files for the initial page load adheres to round-trip-related Best Practices. But downloading one additional JavaScript file for every keystroke in order to populate “Suggested Keywords” would formally violate that Best Practice – even though one file per keystroke may be perfectly acceptable for this interaction. I talked about this problem in detail on my blog Why Best Practices alone don’t work when optimizing Web 2.0 Applications.
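
The per-action idea can be sketched as a simple rule check: instead of one global limit per page, each user action gets its own budget. The action names and limits below are illustrative, not dynaTrace’s actual rule engine:

```python
# Sketch: evaluating a "number of JavaScript files" Best Practice per user
# action instead of per page. Action names and limits are illustrative.

MAX_JS_FILES_PER_ACTION = {
    "Initial Page Load": 5,   # the classic round-trip Best Practice
    "Enter Keyword": 1,       # one suggest request per keystroke is acceptable
}

def check_js_downloads(action, js_files_downloaded):
    """Return (passed, message) for a single user action."""
    limit = MAX_JS_FILES_PER_ACTION.get(action, 5)
    passed = js_files_downloaded <= limit
    return passed, f"{action}: {js_files_downloaded} JS file(s), limit {limit}"

print(check_js_downloads("Initial Page Load", 4))
print(check_js_downloads("Enter Keyword", 1))
```

With per-action budgets, an action that legitimately downloads a file per keystroke no longer produces a false alarm, while a bloated initial page load still gets flagged.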

How can this problem be solved in an automated way? dynaTrace allows us to look not only at performance metrics per web page but also at individual user interactions, e.g.: what exactly happened while entering the keyword into the search field? How many JavaScript, CSS and image files were downloaded? The following screenshot shows the dynaTrace Browser Summary Dashlet that allows us to view browser activities grouped by what dynaTrace calls Timer Names (a term commonly used in testing tools):

We get details on the overall tested scenario and also individual results for each user action


These Timer Names (or User Action Names) can be specified either by adding a specific parameter to the URL or through a JavaScript method that dynaTrace exposes once it is active. This is especially useful when integrating dynaTrace with your functional tests, where you can set the timer names from your test scripts. If you want to know more about it, read Integrating dynaTrace with Selenium.
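
As a sketch of the URL-parameter approach, the helper below appends a timer-name parameter to the tested URL from a test script. The parameter name `dynaTrace` and the `NA=` prefix are assumptions for illustration; check the dynaTrace documentation for the exact syntax your version expects:

```python
from urllib.parse import urlencode, urlparse, parse_qsl, urlunparse

def with_timer_name(url, timer_name):
    """Append an illustrative timer-name parameter to a URL.

    NOTE: the "dynaTrace"/"NA=" parameter format is an assumption for this
    sketch, not the documented dynaTrace syntax.
    """
    parts = urlparse(url)
    query = parse_qsl(parts.query)
    query.append(("dynaTrace", f"NA={timer_name}"))
    return urlunparse(parts._replace(query=urlencode(query)))

print(with_timer_name("http://www.example.com/search?q=shoes", "Enter Keyword"))
```

A Selenium test would then navigate to the tagged URL so that all browser activity triggered by the action gets grouped under that Timer Name.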

More Key Performance Indicators

Number of Requests, Content Size or Download Time are common performance metrics that you get from tools such as YSlow, PageSpeed or dynaTrace AJAX Edition. But there is much more that is really interesting when analyzing web pages. dynaTrace gives us metrics on cached vs. un-cached objects, the number of actual JavaScript executions, the number of HTTP 200/300/400/500 responses, JavaScript errors or the number of resource domains. Depending on the web site you are testing, different metrics matter, so dynaTrace allows you to configure which metrics to track. For a site with mainly static content it is important to keep the ratio of cached vs. un-cached objects high, as you want to speed up the site for revisiting users. For sites that make heavy use of external content you want to verify that content delivered by 3rd-party services doesn’t impact your overall page load time.

The measures are calculated by Timer Name (as discussed in the previous section). This allows you to track how certain features of your web application deal with things like the number of downloaded resources over time. The following screenshot shows us several key performance indicators that dynaTrace tracks across test runs:

Tracking custom performance metrics such as number of certain resource types, number of resource domains, execution time of JavaScript, ...


It is also interesting that dynaTrace automatically calculates how volatile certain performance metrics are. A volatile measure can indicate that the tested application changes frequently without adhering to common best practices (such as keeping the number of CSS files low). It can also indicate that the test script does not produce consistent results. In the latter case we have to stabilize the tests first, because only stable tests allow us to automate performance analysis.
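
Volatility can be thought of as the spread of a measure across recent runs. dynaTrace computes this internally; as a rough stand-in, the sketch below flags a measure whose coefficient of variation (standard deviation divided by mean) exceeds 10%:

```python
import statistics

# Sketch: flagging a volatile measure across recent test runs. The
# coefficient-of-variation approach and the 10% threshold are illustrative
# stand-ins for dynaTrace's internal volatility calculation.

def is_volatile(values, threshold=0.10):
    mean = statistics.mean(values)
    if mean == 0:
        return False
    return statistics.pstdev(values) / mean > threshold

print(is_volatile([4, 4, 4, 4, 5]))   # stable: CSS file count barely changes
print(is_volatile([4, 9, 2, 12, 4]))  # volatile: unstable test or app churn
```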

Automated Regression Analysis

Getting more performance indicators as described in the previous section is great – but nobody wants to manually look at the metrics of hundreds of tests to figure out whether any of them indicate a regression. dynaTrace automates this task for us.

For every test run dynaTrace analyzes every captured metric (number of resources, number of cached objects, number of un-cached objects, number of external domains, …) and compares it with the results of previous test runs. When a measure falls outside the expected value range dynaTrace automatically triggers an incident. The expected value range is automatically calculated by looking at the recent test results. An incident can send out an email notification to the assigned developer or notify the test manager in a dashboard about all tests that show a regression. The following screenshot shows how dynaTrace verifies every single measure against the calculated expected value range:

Tracking a regression on the number of CSS files loaded by an action

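
The baselining idea can be sketched as follows: derive an expected corridor from recent runs and raise an incident when a new value falls outside it. The mean ± 2 standard deviations corridor below is an illustrative choice, not dynaTrace’s exact algorithm:

```python
import statistics

# Sketch: deriving an expected value range from recent test runs and
# flagging a regression, in the spirit of automatic baselining.

def expected_range(history, k=2.0):
    mean = statistics.mean(history)
    spread = k * statistics.pstdev(history)
    return mean - spread, mean + spread

def check_for_regression(history, new_value):
    low, high = expected_range(history)
    if not (low <= new_value <= high):
        return f"INCIDENT: value {new_value} outside expected range [{low:.1f}, {high:.1f}]"
    return "OK"

css_files_per_run = [3, 3, 3, 4, 3]               # CSS files in recent builds
print(check_for_regression(css_files_per_run, 3))  # within the corridor
print(check_for_regression(css_files_per_run, 8))  # incident -> notify developer
```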

Knowing that we have a regression is good. But what is the exact difference here? A double click on one of the measured values compares the captured Network and JavaScript Traces with the previous build that didn’t show this regression. The following screenshot shows a Comparison Dashboard that gets automatically opened when analyzing a regression:

Compare the differences between two test sessions to identify regressions


From the regression dashboard we can then drill into more details such as the Browser Summary or the individual JavaScript traces of every JavaScript handler that got executed during the test run.

To get a better overview of which tests seem to have a problem we can also access this data through a REST interface that dynaTrace provides. The most interesting information is whether there was a change in the last test run compared to the previous one. This information can be queried as XML, CSV, PDF or HTML. The following shows the HTML version of this report, which is available at all times:

Overview Report highlighting the problematic tests of the last test iteration. Includes Browser and Unit Test executions

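
Conceptually, consuming such a REST report boils down to fetching a document and filtering for the tests that changed. The endpoint URL and the XML layout below are made up for illustration; the real dynaTrace REST schema differs:

```python
import xml.etree.ElementTree as ET
from urllib.request import urlopen  # would be used against a live server

# Hypothetical endpoint - not the real dynaTrace REST URL scheme.
REPORT_URL = "http://dynatrace-server:8020/rest/testruns/latest?format=xml"

# Illustrative payload standing in for a real server response.
SAMPLE_XML = """
<testruns>
  <test name="Enter Keyword" status="changed"/>
  <test name="Initial Page Load" status="unchanged"/>
</testruns>
"""

def changed_tests(xml_text):
    root = ET.fromstring(xml_text)
    return [t.get("name") for t in root.findall("test")
            if t.get("status") == "changed"]

# Against a live server this would be: changed_tests(urlopen(REPORT_URL).read())
print(changed_tests(SAMPLE_XML))
```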

You may notice that this report not only includes our Browser Tests on the Google Search Page. It also includes results of Unit Tests: dynaTrace supports analyzing Java and .NET Unit Tests. Instead of looking at the number of resource downloads we look at the number of database statements, the number of exceptions or the execution time of certain methods. Tracking these metrics per unit test also allows us to identify regressions early on. A good example would be the number of SQL statements executed for a particular feature. If that changes significantly from one build to the next, the developer has probably accidentally introduced a regression that should be fixed right away.
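
The SQL-statement example can be sketched as a simple diff between two builds’ per-test metrics; the test names, counts and the 50% tolerance are illustrative:

```python
# Sketch: comparing per-unit-test metrics between two builds to spot a
# regression such as a jump in executed SQL statements.

def metric_regressions(previous, current, tolerance=0.5):
    """Report tests whose metric grew by more than `tolerance` (50%)."""
    findings = []
    for test, old in previous.items():
        new = current.get(test, old)
        if old > 0 and (new - old) / old > tolerance:
            findings.append(f"{test}: SQL statements {old} -> {new}")
    return findings

build_41 = {"testSearchProducts": 4, "testCheckout": 12}
build_42 = {"testSearchProducts": 4, "testCheckout": 60}  # accidental N+1 query?
print(metric_regressions(build_41, build_42))
```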

End-to-End Performance Analysis

Even though more logic in modern web applications gets moved to the browser leveraging JavaScript, DOM Manipulations and CSS, critical business logic is still executed on the server. When searching for a keyword in Google, an XHR Request is sent to the Google servers, which respond with the search result. The fastest JavaScript won’t speed up the end-user’s experience if the result from the Web Server takes too long. Therefore it is important to look not only at the browser but also at what the server is doing when the user interacts with the web application. dynaTrace provides full End-to-End Performance Analysis by analyzing both the Browser and the Server-Side and tying these two sides together. The following screenshot shows the Browser Timeline – a view that many of you are familiar with from dynaTrace AJAX Edition. The difference here is that we also get to see how much time is actually spent on the Application Servers when processing Server-Side requests:

Browser Timeline showing both Browser and Server Side activities when users interact with the Web Application

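
The point about server-side time can be made with simple arithmetic: for one XHR-driven action, the perceived response time splits into JavaScript, network and server shares, and tuning the smallest share cannot fix the largest. The numbers below are illustrative:

```python
# Sketch: breaking one user action's perceived response time into its
# client, network and server shares. Sample numbers are illustrative.

def end_to_end_breakdown(js_ms, network_ms, server_ms):
    total = js_ms + network_ms + server_ms
    return {part: f"{ms} ms ({ms / total:.0%})"
            for part, ms in [("JavaScript", js_ms),
                             ("Network", network_ms),
                             ("Server", server_ms)]}

# Even perfectly tuned JavaScript cannot rescue a slow server-side call:
print(end_to_end_breakdown(js_ms=40, network_ms=60, server_ms=900))
```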

The Timeline gives us information on which XHR Requests were executed by which JavaScript handlers. We also get to see the network request behind each XHR call and which server(s) handled it. The tooltip on the Server-Side execution block gives us additional information on Database or Remoting calls (EJB, RMI, Web Services, WCF, …). But it doesn’t stop here. We can dig into the full End-to-End Execution Trace that was captured by dynaTrace. The following screenshot shows the part of the Trace (dynaTrace PurePath) where we see the XHR Request sent by JavaScript, handled by the Frontend Server and passed on to the Backend Server via RMI. We also see the JavaScript that got executed when the response came back and how the result was applied to the current page:

End-to-End PurePath showing where time is spent in both JavaScript and Java/.NET when the user clicks on a button in the Web Application


Seeing the full End-to-End Trace including method arguments, return values, SQL statements, exceptions, log messages, etc. allows us to better understand where time is spent when users interact with the web site. In the end it is about optimizing the performance of the web site as users execute their actions on it. Whether you have your own web framework or use frameworks such as GWT, JSF, ASP.NET or Spring, you need to understand what is really going on when pages are rendered or when user actions get executed.

dynaTrace not only allows us to see the full End-to-End Trace which is great for diagnostics. dynaTrace also calculates performance metrics such as number of database statements, exceptions or how long certain remoting calls took. These metrics are calculated in the same way as explained in the sections above. This allows you to keep track of your performance metrics in an automated test environment with dynaTrace automatically telling you if there are any regressions on either the browser or server-side.

Want to know more about the Premium Features of dynaTrace?

If you are serious about automating performance analysis on both the Browser and Server, or if you struggle with the limitations of tools such as YSlow, PageSpeed and dynaTrace AJAX Edition because of their Page-Based Analysis approach, then check out the Premium Extensions of dynaTrace.
