About the Author

Andreas Grabner has been helping companies improve their application performance for more than 15 years. He is a regular contributor to the Web Performance and DevOps communities and a prolific speaker at user groups and conferences around the world. Reach him at @grabnerandi.

How to Speed Up Sites Like vancouver2010.com by More Than 50% in 5 Minutes

Many web sites that use JavaScript frameworks to make their pages more interactive and appealing to the end user suffer from poor performance. Over the past couple of months I’ve been contacted by users of our free dynaTrace AJAX Edition asking me to help them analyze their performance problems. In doing so, I’ve developed a standard approach that gets me to a high-level analysis result in 5 minutes.

As the Winter Olympics are a hot topic right now, I checked out vancouver2010.com to see whether there was any potential to improve the site’s performance. It turns out I found a perfect candidate for this 5-minute guide :-)

Minute 1: Record your dynaTrace AJAX Session

Before I start recording a session, I always turn on argument capturing via the Preferences dialog:

Turn on Argument Capturing in the Preferences dialog

The reason I do that is that I want to see the CSS selectors passed to the $ or $$ lookup functions of JavaScript frameworks like jQuery or Prototype. The main problem I’ve identified in my work is selector lookups by className, which cause huge overhead on pages with many DOM elements. I wrote two blog posts about the performance impact of CSS selectors in jQuery and Prototype.
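
To make this concrete, here is a small sketch of the kind of lookup calls that argument capturing makes visible – the class names are made up for illustration, they are not taken from any real site:

    // jQuery: find every element carrying the class "result-row".
    // IE6/7 have no native getElementsByClassName, so jQuery has to
    // walk the entire DOM tree to satisfy this selector.
    var rows = $(".result-row");

    // Prototype: the $$ function accepts the same kind of CSS selector
    // and suffers from the same problem for class-only lookups.
    var cells = $$(".result-cell");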

Now it’s time to start tracing. I executed the following scenario:
1. went to http://vancouver2010.com
2. clicked on Alpine skiing
3. clicked on Schedules & Results
4. clicked on the results of the February 17th race (that’s where we Austrians actually made it onto the podium)

Minute 2: Identify poorly performing pages

After closing the browser, I return to dynaTrace AJAX Edition and look at the Summary view to analyze the individual page load times and identify whether a lot of JavaScript, rendering or network time is involved. Let’s see what we got here:

Identifying HotSpots on every page

Here is what we can see:
1. Across the board we have high JavaScript execution times. The last page (Schedules & Results) tops the list with almost 7 seconds of pure JavaScript.
2. The first page has a large amount of rendering time – that is, time spent in the browser’s rendering engine.
3. Pages 2 and 4 have page load times (the time until the onLoad event was triggered) of more than 5 seconds!
4. Page 3 has a very high network time although its page load time is not that bad. This means content was loaded after the onLoad event – the sketch below shows how that happens.
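
To illustrate point 4, here is a minimal sketch – not the site’s actual code, and the /latest-results.json endpoint and "results" element are made up – of how work kicked off in an onLoad handler produces network time without adding to the measured page load time:

    window.onload = function () {
        // This request starts only after onLoad has fired, so its
        // duration counts as Network Time but not as page load time.
        var xhr = new XMLHttpRequest();
        xhr.open("GET", "/latest-results.json", true); // hypothetical URL
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4 && xhr.status === 200) {
                // "results" is a hypothetical placeholder element
                document.getElementById("results").innerHTML = xhr.responseText;
            }
        };
        xhr.send();
    };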

Minute 3: Analyze Timeline of slowest Page

I pick page 4 as it shows both a very high page load time and a very high JavaScript time. I drill down to the Timeline view and analyze the page characteristics:

Where is the time spent on this page?

Here is what I can read from this timeline graph (moving the mouse over the blocks gives me a tooltip with timing and context information):
1. The readystatechange handler takes 5.6 seconds in JavaScript. This handler is used by jQuery to call all registered load handlers – a simplified sketch of that mechanism follows this list.
2. The FB.share script takes 792ms when it gets loaded.
3. An XHR request at the very beginning takes 820ms.
4. We have about 80 images all coming from the same domain – this could be improved by using multiple domains.
5. We have calls to external apps like Facebook, Google Ads and Google Analytics.
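
To illustrate point 1: this is a much simplified sketch of the mechanism, not jQuery’s actual source. All callbacks registered via $(document).ready() are queued, and a single readystatechange handler fires them back to back – which is why one block in the timeline can account for 5.6 seconds of JavaScript:

    var readyList = [];

    // $(document).ready(fn) essentially boils down to queueing the callback:
    function ready(fn) {
        readyList.push(fn);
    }

    // A single readystatechange handler drains the whole queue in one
    // go, so every registered load handler runs inside this one block.
    document.onreadystatechange = function () {
        if (document.readyState === "complete") {
            for (var i = 0; i < readyList.length; i++) {
                readyList[i]();
            }
        }
    };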

Minute 4: Identify poorly performing CSS Selectors

The biggest block is the JavaScript executed in the readystatechange handler. I double-click on it and end up in the PurePath view, which shows me the JavaScript trace of this event handler. I navigate to the actual handler implementation that gets called by jQuery and expand it to see the methods it calls and which ones consume the most time. It is not surprising to see a lot of jQuery selector methods in there that use a CSS className to identify elements:

PurePath View showing HotSpots in the onLoad event handlers

I highlighted the calls that have a major impact on the performance of this event handler. You can see that most of the time is actually spent in the $ method that is used to look up elements. Another thing I can see is that they change the class name of the body to “en”, which takes 550ms to execute.
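
For context, that highlighted call probably boils down to something like the single line below. Changing the body’s class forces the browser to re-match its CSS rules against every element in the document, which explains why it costs 550ms on a DOM this large:

    // One line of JavaScript, but it invalidates the computed styles
    // of the entire page and triggers a full style recalculation.
    document.body.className = "en";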

As I am sure there are tons of calls to jQuery selector lookups in this JavaScript handler, as well as in all the other JavaScript handlers on the vancouver2010.com website, I open up the HotSpot view, which shows me the JavaScript, DOM access and rendering hotspots across all pages. I am interested in the $ method only, so I filter for “$(” and also filter to show only the DOM API (we account the $ method to the DOM API and not to jQuery). Here is what I get after sorting the table by the Total Sum column:

HotSpot View showing all jQuery CSS Selectors and their performance overhead

The problem here is easy to explain. The site makes heavy use of CSS selectors that look up elements by class name. This type of lookup is not natively supported by Internet Explorer, so jQuery has to iterate through the whole DOM to find the matching elements. A better solution would be to use unique IDs – or at least to add the tag name to the selector string. That also helps jQuery, as it first finds all elements by tag name (which is natively implemented and therefore rather fast) and then only has to iterate through those elements. Instead of an average lookup time of between 50ms and 368ms, this can be brought down to 5-10ms – a nice performance boost, eh? :-)
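
In jQuery terms, the fix looks roughly like this (the selector names are made up for illustration):

    // Slow on IE7: class-only selector - jQuery walks the whole DOM.
    var slow = $(".schedule-row");

    // Better: the tag name lets jQuery use the native
    // getElementsByTagName("tr") first and scan only that result list.
    var faster = $("tr.schedule-row");

    // Best: a unique ID maps straight to document.getElementById.
    var fastest = $("#schedule");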

Minute 5: Identify network bottlenecks

In the timeline I saw many image requests coming from the same domain. As most browsers limit the number of physical network connections per domain (e.g. IE7 uses 2), the browser can only download a few images in parallel; all other images have to wait for a physical connection to become available. Drilling into the Network view for page 4, I can see all these 70+ images and how they have to wait before being downloaded. Once the images are cached this problem is no longer such a big deal – but for first-time visitors it definitely slows down the page:

Network View showing waiting times for Images

The solution to this problem is the concept of domain sharding. Using two domains to host the images allows the browser to open twice as many physical connections and download more images in parallel, which can speed up the download of those images by up to 50%.
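
Here is a minimal sketch of the idea, assuming two hypothetical image hosts that serve identical content:

    var shards = ["img1.vancouver2010.com", "img2.vancouver2010.com"];

    // Hash the image path so the same image always maps to the same
    // host - this keeps the browser cache effective across page views.
    function shardedUrl(path) {
        var hash = 0;
        for (var i = 0; i < path.length; i++) {
            hash = (hash * 31 + path.charCodeAt(i)) % shards.length;
        }
        return "http://" + shards[hash] + path;
    }

    // shardedUrl("/images/podium.jpg") deterministically picks one of
    // the two hosts for that path.

Note that adding more than a handful of shard hosts backfires because of the extra DNS lookups – two to four hosts is the usual recommendation.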

Conclusion

It is easy to analyze the performance hotspots of any web site out there, and this is my approach to identifying the most common problems I’ve seen in my work. Besides problems with CSS selectors and network requests, we see poorly performing JavaScript routines (very often from 3rd-party libraries), too many JavaScript files on a page, too many XHR (XmlHttpRequest) requests to the server, and slow server responses to those XHR requests. Especially for that last piece we use our end-to-end monitoring solution, integrating the data captured with dynaTrace AJAX Edition with the server-side PurePath data captured with dynaTrace CAPM. Also, check out my blog post about why end-to-end performance analysis is important and how to do it.

Feedback on this is always welcome. I am sure you have your own little tricks and processes to identify performance problems in your web sites. Feel free to share them with us.


Comments

  1. It’s a very interesting article. Thank you for the information.

  2. You might want to mention that domain sharding is the right fix in this particular situation – using sprites is another approach that works well when a site uses lots of small images (navigation elements etc.).

    Thanks for the post, it shows the power of dynaTrace end-to-end analysis and its capabilities really well. (They should have hired you for consulting for a site of this scale anyway :) )

    The article points out that knowledge of the end user’s environment is indeed still a requirement, for example what IE supports natively and what jQuery has to emulate. Really well done!

  3. @Thomas: you are totally right – it’s important to show that certain browsers really have certain limitations causing major performance problems. I put in a link to domain sharding :-)

  4. This is very interesting. There are so many articles on how to speed up a website, but this one stands out from the rest with its comprehensive analysis.

  5. @ESN: Thanks for the flowers :-)
    The interesting thing I found is that many websites have this very same issue – using class names in their selectors. All the analyses I’ve done for major sites showed that this is the #1 problem in JavaScript execution. With the dynaTrace AJAX Edition it is now possible to get the list of problematic selectors, and it should be “fairly” easy to change those selectors to something much faster, e.g. lookups by ID.

  6. Could registration at dynatrace.com be any more complicated? Methinks it’s busted.

  7. @unk: I am sorry to hear that you are having problems. We had some issues in the registration process that should now be resolved. Please let us know if you still have problems and we will take care of it.

  8. Great article, Andreas. @Thomas is right – using sprites would eliminate about 30 image requests. These step-by-step case studies are a great way to show how to analyze web site performance.

  9. Looking at the JavaScript of the vancouver2010.com website, I noticed that there is ample room for more caching of jQuery selection results. It’s common to find multiple identical calls within the same scope instance.

    That said, I must say, I’ve been very happy with the site overall.

  10. @Steven: you are correct – caching some of these queries would help as well. And I am totally with you: I also like the site :-)

  11. Good post! COME ON

  12. I am working with an ASP.NET MVC application and had problems with some of my calls to the HtmlHelper extension functions…

  13. However, how would you argue this same point when dealing with an ecommerce site that requires unique information about the user and deals with a more personal issue: finances?

  14. @Phanie: Slow JavaScript or too many network requests are problems that are independent of the type of website. Sites with more personalized content will probably spend more time on the app server generating that user-specific content. To optimize the server side, you should look into a performance management solution that can analyze your server-side performance. dynaTrace offers one – check out our website: http://www.dynatrace.com

  15. There is no doubt that this is a good way to speed up sites.

  16. Anonymous says:

    From the first point, I want to see the CSS selectors passed to the $ or $$ lookup functions of various JavaScript frameworks like jQuery or Prototype.

  17. Anonymous says:

    Those are some neat little tricks; thanks for sharing!
