About the Author: Andreas Grabner

Andreas Grabner has been helping companies improve their application performance for 15+ years. He is a regular contributor to the Web Performance and DevOps communities and a prolific speaker at user groups and conferences around the world. Reach him at @grabnerandi.

Top 10 Client-Side Performance Problems in Web 2.0

Inspired by the Top 10 Performance Problems post, which focuses on Server-Side performance problems seen at companies such as Zappos, Thomson, Monster and Novell, I came up with the Top 10 Client-Side performance problems in Web 2.0 applications that I have seen when working with our dynaTrace AJAX Edition users.

Symptom: JavaScript blocking resource downloads and slowing page load time

JavaScript opened the door for Web 2.0 applications. Since JavaScript is used on almost every web site, traditional methods to analyze web site performance don’t always give you all the answers for a slow-running page. Looking at a network waterfall diagram is the typical way to start analyzing a page and all its requests. Most browser performance tools support a network waterfall diagram. It shows which resources were downloaded and how long the browser had to wait for each of these resources.

In Web 2.0 applications it is no longer just the network downloads that contribute to the page load time. We start seeing blank spots between downloads where the browser is busy with other work. These blanks are usually explained by JavaScript execution, during which the browser stops all network activity:

Network Waterfall Diagram with blank spots that can be explained with JavaScript execution

Tools such as dynaTrace AJAX Edition or Google Speed Tracer allow analyzing these blank spots that block further network downloads, as they trace JavaScript execution, DOM access and rendering activity in the browser while loading a page:

dynaTrace Timeline view shows what happens in these blind network spots

Long-running JavaScript that executes while JavaScript files are loaded causes the browser to suspend the download of the remaining network resources and therefore slows down the overall page load time.
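
One common mitigation, described in many web performance best practices, is to load non-critical scripts dynamically so that their download and execution do not block other resources. A minimal sketch (the function name and the script URL are placeholders, not part of any specific framework):

function loadScriptAsync(url) {
  // Create a script element and append it to the head; scripts added
  // this way do not block the download of other resources
  var script = document.createElement("script");
  script.src = url;
  document.getElementsByTagName("head")[0].appendChild(script);
}

// "/js/utils.js" is just a placeholder for a non-critical script
loadScriptAsync("/js/utils.js");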

Further Reading: Best Practices on Blocking and long running script tags

Top 10 Problems that explain these symptoms

The following is a list of problems that explain most of these symptoms of long-running and blocking JavaScript.

#1: Slow CSS Selectors on Internet Explorer

The #1 performance problem causing slow-running or blocking JavaScript is slow-running CSS selectors in Internet Explorer. Web developers make use of lookup methods via CSS selectors provided by JavaScript frameworks such as jQuery or Prototype. A common way to look up elements is by their CSS class name.

var element = $(".shoppingcart")

Internet Explorer 6 and 7 do not provide a native implementation for this lookup. jQuery/Prototype therefore need to iterate through the whole DOM tree to perform the lookup purely in JavaScript. This iteration takes much longer than the native implementation provided in other browsers and is also heavily impacted by the DOM size. The following image shows CSS lookups executed on a single page with the Total Execution Time (in ms) of these individual lookups on Internet Explorer. Many calls like this can add up to several seconds on an individual page.

dynaTrace AJAX Edition analysis highlighting slow CSS Selectors

Internet Explorer 8 provides better support for CSS lookups. In order to take advantage of this it is important to upgrade to the latest version of your JavaScript framework, e.g., jQuery/Prototype. Many websites are still stuck with older versions that do not leverage these new features, making these pages slow even in the latest version of Internet Explorer.
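
Independent of the framework version, lookups can also be made cheaper on older Internet Explorer versions by starting from an element with an ID (which uses a fast native lookup) and restricting the class-based search to that subtree. A sketch with jQuery; the IDs and class names are made up for illustration:

// Slow on IE6/7: jQuery walks the entire DOM to find the class
var items = $(".shoppingcart");

// Cheaper: resolve the container via its ID (native lookup) and
// search for the class only within that subtree
var scopedItems = $("#cartContainer").find(".shoppingcart");

// Cheapest: if the element is unique, give it an ID and look it up directly
var cart = $("#shoppingcart");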

Further Reading: Best Practices on Slow CSS Selectors with jQuery/Prototype

#2: Multiple CSS Lookups for same object

Individual CSS lookups can be expensive. Executing the same lookup multiple times on the same page adds more execution time than necessary. Instead of executing the same lookup multiple times, it is recommended to store the result of a lookup in a variable and reuse it on that same page. The following image shows the number of CSS lookups for individual CSS selectors on a single page. Some of them are called up to 10 times.

dynaTrace analysis of how often a CSS Selector was executed on a single page

The 8 invocations of .ztBucket take a total of 660ms. Calling it only once and reusing the lookup result can reduce this total time to ~80ms.
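
A minimal sketch of this caching pattern; the .ztBucket selector is taken from the example above, the method calls are just for illustration:

// Repeating the same lookup pays the full lookup cost every time
$(".ztBucket").addClass("highlight");
$(".ztBucket").show();

// Looking it up once and reusing the result pays the cost only once
var bucket = $(".ztBucket");
bucket.addClass("highlight");
bucket.show();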

#3: Too many XHR Calls

JavaScript and XmlHttpRequests are the basis for what is generally called AJAX. Frameworks like jQuery make it very easy to make AJAX calls in order to retrieve additional content from the server. An example would be the implementation of a paging mechanism. Instead of downloading all pages at once, only the first page is downloaded. When the user navigates to the next page, we request it via an AJAX call and refresh the DOM. This avoids a full page roundtrip and keeps the browser from reloading the whole page.

A mistake that is often made is that too much information is fetched dynamically with too many calls. One example is a product page with 10 products. The developer may decide to use AJAX to load detailed product information for every product individually. This means 10 XHR calls for the 10 products that are displayed. This will of course work, but it means 10 roundtrips to the server that make the user wait for the final result, and the server needs to handle 10 additional requests, which puts additional pressure on the server infrastructure.

Instead of making 10 individual requests it is recommended to combine these calls into a single batch call requesting the product details for all 10 products on the page.
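
What such batching could look like with jQuery; the URL, the parameter names and the productIds/renderProductDetail names are assumptions for illustration, not an actual API:

// 10 roundtrips: one XHR per product
$.each(productIds, function (index, id) {
  $.getJSON("/productDetails", { id: id }, renderProductDetail);
});

// 1 roundtrip: request the details for all products in a single batch call
$.getJSON("/productDetails", { ids: productIds.join(",") }, function (details) {
  $.each(details, function (index, detail) {
    renderProductDetail(detail);
  });
});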

Further Readings: AJAX Best Practices to reduce and aggregate XHR Calls, Best Practices on too many XHR Calls

#4: Expensive DOM Manipulations

Manipulating the DOM is necessary in highly interactive web sites. Dynamically loaded content needs to be added to the site or user preference changes need to be applied in order to change the look and feel of the website.

There are multiple ways to add new DOM elements – each with a different performance impact depending on the browser and the number of elements that are added. It is important to analyze different approaches (e.g., adding elements as HTML or creating individual DOM elements) and apply the approach that works best in each use case.
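
As an illustration, here are two of these approaches side by side; the rows array and the list element are made up, and which variant is faster depends on the browser and the number of elements, so both should be measured:

// Approach 1: create and append each element individually;
// the live DOM is touched on every iteration
for (var i = 0; i < rows.length; i++) {
  var li = document.createElement("li");
  li.appendChild(document.createTextNode(rows[i]));
  list.appendChild(li);
}

// Approach 2: build one HTML string and insert it in a single operation
var html = "";
for (var j = 0; j < rows.length; j++) {
  html += "<li>" + rows[j] + "</li>";
}
list.innerHTML = html;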

#5: Too many JavaScript files

The more JavaScript files there are on the page, the more often the browser needs to switch context with the JavaScript engine when loading these files. It is not uncommon to see web sites with 40 or more individual JavaScript files. Besides the additional context switches with the JavaScript engine, the additional network roundtrips to download these JavaScript files have a significant impact on overall page load time.

A solution to this problem is merging individual JavaScript files into fewer files. This saves roundtrips as well as context switches for the JavaScript engine.

#6: Large DOM

The size of the DOM plays an important role in page performance (a quick way to gauge the DOM size is sketched after the list below). The larger the DOM:

  • the more memory is required by the browser
  • the longer manipulations take as style changes on top nodes need to be applied to more child nodes
  • especially on Internet Explorer, larger DOMs are at a disadvantage when performing certain CSS lookups, e.g., by class name
  • any custom JavaScript that iterates through the DOM will become slower
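
A quick, framework-independent way to gauge the current DOM size from a script console or bookmarklet (a rough indicator only, not a full analysis):

// Counts all elements currently in the document
var domElementCount = document.getElementsByTagName("*").length;
alert("Number of DOM elements: " + domElementCount);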

Further Reading: Optimizing Data Intensive Webpages by Example

#7: Excessive Event Handler Bindings

Frameworks like jQuery, Prototype or YUI make it easy to bind event handlers to certain types of DOM elements, e.g.: all Hyperlinks. Binding event handlers to DOM elements impacts performance in 3 ways:

  1. the binding itself takes time, as elements need to be looked up and either registered with a central event manager or modified by assigning the handler method to them
  2. whenever an event is triggered, the event manager needs to look up the elements that have registered for that event and then call the correct event handlers (only true when using event managers)
  3. event handlers need to be unbound when moving to a different page in order to avoid DOM-related memory leaks

The following image shows the internals of an event manager that needs to look up all elements to identify which element should handle the actual event:

Event Manager that needs to perform expensive CSS Lookups to resolve objects
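
One way to reduce both the binding effort and the number of registered handlers is event delegation: binding a single handler to a common parent element and letting the events bubble up to it. A sketch with jQuery; the selectors and the showProductDetails function are made up for illustration:

// Instead of binding a handler to every single hyperlink ...
$("a.productLink").click(showProductDetails);

// ... bind one handler to the common parent and check the event target
$("#productList").click(function (event) {
  var target = $(event.target);
  if (target.is("a.productLink")) {
    showProductDetails(event);
  }
});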

#8: Slow executing external services

Many web pages embed external content (Ad Banners, Facebook Connect, …) or call external services (End User Monitoring, Review Information, …). This content is usually embedded by including a JavaScript file from the 3rd party provider. Very often these JavaScript files exhibit common performance problems such as expensive CSS lookups or DOM manipulations. It is important to analyze 3rd party content and to talk with 3rd party providers to fix problems in their code in case it impacts web site performance. Often an upgrade to a newer version solves the problem. The following screenshot shows JavaScript execution that collects information for end-user monitoring purposes. It adds several hundred milliseconds on each visited page and therefore impacts end-user performance:

Long running JavaScript code from a 3rd party library

#9: Excessive Visual Effects

Many JavaScript libraries provide nice visual effects, e.g., dynamic popup menus, accordion effects, etc. While most of these frameworks do a good job on sample web sites, some of them do not perform well on real-life pages with large DOMs. It is important to analyze the impact of visual effects on the browser’s CPU, the rendering engine and the overall web site performance.

Further Reading: Performance Analysis of dynamic JavaScript menus

#10: Too fine-grained logging and monitoring

Custom or 3rd party logging and monitoring frameworks allow the collection of very detailed information about user interaction. A common problem is that too much information is collected, which results in additional overhead in the JavaScript engine as well as on the network, as the collected data needs to be sent to a monitoring or logging service. Logging every mouse move, for instance, might seem like a good idea but can easily end up blocking browsers (too much JavaScript execution) or congesting services (too many calls to the monitoring/logging service).
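
A sketch of how mouse-move logging can be throttled so it neither blocks the browser nor floods the logging service; the /log endpoint and the 500ms interval are made-up examples:

var lastSample = 0;

document.onmousemove = function (event) {
  event = event || window.event; // older IE versions use window.event
  var now = new Date().getTime();
  // Send at most one sample every 500ms instead of one per event
  if (now - lastSample > 500) {
    lastSample = now;
    var beacon = new Image();
    beacon.src = "/log?x=" + event.clientX + "&y=" + event.clientY;
  }
};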

Further Readings: How to Automate Google Analytics Analysis, Combining Analytics with Performance Management Data

Yes – there is more to Web 2.0 Performance Analysis

This list is by no means complete. For more information check out the blogs we wrote on JavaScript/AJAX and the Best Practices from Google and Yahoo. Read what people like Steve Souders or John Resig have to say about web performance. There is also a great collection of web performance related blogs on Planet Performance.

Let me know what your top problems are and whether you would add or remove anything from my list.

Comments

  1. “#5: Too many JavaScript files

    The more JavaScript files there are on the page – the more often the browser needs to switch context with the JavaScript engine when loading these files.”

    Can you explain the JS engine context switching? I’m not sure I’ve heard of this before.

  2. @cancel bubble
    I don't have exact numbers, but based on the work we have done, especially on older versions of IE, we see a certain amount of overhead when loading new JavaScript files. When you think about it: the browser needs to hold execution of other tasks such as downloads, it then needs to kick off the parser, certain callback interfaces for IE Add-Ons must be called, and then it needs to execute the script.
    When reducing the number of JS files you avoid some of these “context switches” and therefore help overall page performance. I am sure more modern browsers do a better job with what is going on internally, but as we still have a large number of (mainly corporate) users on browsers like IE6, this is something to keep in mind.
    Makes sense?

  3. I have done some experiments with inline JavaScript blocks in IE7 and each block was adding around 10ms to the rendering time compared to merging the JavaScript into one block. I can only assume this is due to context switching and should apply to external JavaScript files as well.

  4. Still, security is on the top list of Web 2.0 application problems.

    • Agreed. Definitely high on the list. My blog however focuses on performance problems, and besides the “overhead” of SSL I haven't come across other security-related performance problems. Let me know if you have any examples.

  5. Very interesting article Andreas, we have very similar issues with our site and are actually partially through a project to correct many of the same things you have identified. This is very helpful! ;)

  6. Thanks for the nice information. This will help a lot of users.

  7. Thanks for publishing and I’m looking forward to your new posts.
