Andreas Grabner About the Author

Andreas Grabner has been helping companies improve their application performance for 15+ years. He is a regular contributor within the Web Performance and DevOps communities and a prolific speaker at user groups and conferences around the world. Reach him at @grabnerandi.

Garbage Collection in IE7 heavily impacted by number of JavaScript objects and string sizes

After my recent presentation at TSSJS – Performance Anti-Patterns in AJAX Applications – I got interesting feedback from one of the attendees: “The presentation was good, but I thought you would be talking more about actual problems with XHR/AJAX requests.” I have to admit that I focused on all the common problems of Web 2.0 applications – including network roundtrips, JavaScript and rendering – and not just on those related to asynchronous communication. The comment prompted me to build a sample application to test the different approaches to asynchronous data processing that I’ve seen when working with our user base. In doing so I ran into a very interesting performance problem that I didn’t anticipate. So – before I blog about the different approaches themselves (XML vs. JSON, XHR vs. script tags, …) I want to share a performance problem in IE 7’s JavaScript engine that everybody should be aware of.

The more “active” JavaScript objects, the slower your string manipulations

In my sample I process 1000 contact objects from two different data sources: XML and JSON. In the case of XML I read element by element, create a JavaScript object for each, and pass it to a method that appends the object’s content to a string buffer, which is later displayed on the page. In the JSON case I load all 1000 objects at once into an array and then hand each object to the same method by simply iterating over the array elements. Here is the pseudo code for my XML use case:

foreach(xmlElement in allXmlElements) {
  var jsonObject = {};
  jsonObject["firstName"] = xmlElement.getValueOf("firstName");
  jsonObject["lastName"] = xmlElement.getValueOf("lastName");
  // ... remaining properties ...
  processObject(jsonObject);
}

And the pseudo code for JSON:

var allJsonObjects = eval(jsonObjectsAsString);
foreach(jsonObject in allJsonObjects) {
  processObject(jsonObject);
}

processObject itself is simply concatenating the individual properties of the JSON Object to a global string variable:

globalString += jsonObject.firstName + " " + jsonObject.lastName + ...
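For readers who want to run the JSON path end to end, here is a minimal self-contained sketch. The two-contact payload is made up for illustration, and `JSON.parse` stands in for the `eval` call above as the safer modern equivalent:

```javascript
// Minimal end-to-end version of the JSON path described above.
// JSON.parse stands in for eval(); the two-contact payload is made up.
var globalString = "";

function processObject(contact) {
  // Concatenate onto one growing global string, as in the article.
  globalString += contact.firstName + " " + contact.lastName + "\n";
}

var jsonObjectsAsString =
  '[{"firstName":"Ada","lastName":"Lovelace"},' +
  '{"firstName":"Alan","lastName":"Turing"}]';

var allJsonObjects = JSON.parse(jsonObjectsAsString);
for (var i = 0; i < allJsonObjects.length; i++) {
  processObject(allJsonObjects[i]);
}
```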

Running this example on Internet Explorer 7 is SLOW

I assumed that the 2nd use case (JSON) would be much faster than the first, because I assumed that iterating through the XML DOM is slower than parsing a string of JSON. Well, that assumption was correct (parsing the XML is slower than parsing JSON), but the string concatenation in processObject turned out to be much slower in the JSON use case than in the XML use case.
Analyzing the JavaScript execution with the FREE dynaTrace AJAX Edition shows me all 1000 invocations of processObject in both use case scenarios. In both cases the execution time of the method increases over time – explained by the growing globalString object, which gets more expensive to manipulate the larger it grows. By the way, I know there are better ways to concatenate strings, and I should try out some of those recommendations. The blog posts from the guys at SitePen are an excellent read here, e.g. String Performance: An Analysis and String Performance: Getting Good Performance from Internet Explorer.
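One of the recommendations from those posts – buffering the pieces in an array and joining once at the end – can be sketched like this (the function names are mine, for illustration):

```javascript
// Two ways to build one large string from many small pieces.

// Naive: repeated += copies the growing string on every step,
// which is what makes processObject slower as globalString grows.
function concatNaive(parts) {
  var result = "";
  for (var i = 0; i < parts.length; i++) {
    result += parts[i];
  }
  return result;
}

// Buffered: collect the pieces in an array and join once at the end,
// the approach recommended for old IE in the SitePen posts.
function concatJoin(parts) {
  var buffer = [];
  for (var i = 0; i < parts.length; i++) {
    buffer.push(parts[i]);
  }
  return buffer.join("");
}
```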

Let’s look at my results side-by-side showing the average execution time of processObject as well as the first couple of calls and the last couple of calls of the overall 1000 invocations of that method:

Execution Time Comparison between my two scenarios

The left execution trace shows the JSON example where I constantly have 1000 JavaScript objects in memory (300ms). The right shows the XML example where I only have 1 JavaScript object in memory (150ms). The processObject method is called 1000 times in both scenarios and is doing exactly the same thing – concatenating the individual properties of the passed JavaScript object to a global string. The difference in execution time of these string concatenations, however, changes dramatically the longer the string gets. The only explanation I have for this “phenomenon” is that IE’s Memory Management is impacted greatly by having the additional 1000 JavaScript objects in memory (see my detailed analysis below the next image).
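The measurement itself is easy to reproduce with a tiny stand-alone loop (the contact values are made up; absolute timings will differ per browser, the shape of the curve is what matters):

```javascript
// Stand-alone reproduction of the measurement: time each
// processObject-style append while the global string grows.
var globalString = "";
var timings = [];

function processObject(obj) {
  globalString += obj.firstName + " " + obj.lastName + "; ";
}

for (var i = 0; i < 1000; i++) {
  var start = Date.now();
  processObject({ firstName: "First" + i, lastName: "Last" + i });
  timings.push(Date.now() - start);
}
// On IE7 the later entries of `timings` grow markedly;
// modern engines keep all of them near zero.
```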

I copied the 1000 execution times of processObject for both use cases into an Excel spreadsheet and graphed the timings. It’s interesting to see how the execution times change over time, with a noticeable spike around the 800th invocation of the method:

JavaScript execution time over time

The blue line shows the execution times of the scenario with the 1000 additional JavaScript objects in memory. The pink line shows the times with only 1 custom JavaScript object in memory. There are two interesting observations here – and I hope some of you (my fellow readers) can share your thoughts and correct me if my conclusions are wrong: we see increased execution times at the very beginning, and a jump in execution time after about 800 executions of the method. I assume this comes down purely to the Garbage Collection and object allocation strategy of IE. Daniel Pupius’ blog about Garbage Collection in IE6 gives some answers on when IE kicks off the GC (after 256 variable allocations, 4096 array slot allocations or 64 KB of string allocations). I am sure Microsoft has changed the GC strategy for IE7, but Daniel’s blog definitely explains the spike in my graph: my string ends up being 79 KB, and after about 800 executions it crosses the 64 KB limit. Interestingly, from that point on execution times stay consistently high – so it seems the GC always runs once your strings are larger than 64 KB!! I explain the individual smaller spikes with GC runs triggered by the 256 variable allocations. What I can’t explain is the much higher execution times at the very beginning – any thoughts?
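As a quick sanity check on that reading, here is the back-of-the-envelope arithmetic, assuming all 1000 appends are roughly the same size:

```javascript
// Back-of-the-envelope check: if 1000 appends produce a 79 KB string,
// each append adds about 81 bytes, so the 64 KB GC threshold is
// crossed around invocation 811, right where the graph spikes.
var finalSizeBytes = 79 * 1024;    // final length of globalString
var invocations = 1000;            // calls to processObject
var bytesPerAppend = finalSizeBytes / invocations;

var thresholdBytes = 64 * 1024;    // IE6 string-allocation GC threshold
var crossingInvocation = Math.ceil(thresholdBytes / bytesPerAppend);
```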

Key takeaway: Be careful with string allocations and objects in memory -> GC in IE7 performs really badly

Running this example on IE8, Firefox and Chrome is FAST

I ran the same example on Internet Explorer 8, Firefox 3.6 and the latest version of Chrome. Not surprisingly, the execution times on all of these latest browser versions were between 2 and 15ms. These timings were the same for both use case scenarios (1 and 1000 objects in memory). Chrome and Firefox were in fact the fastest with only 2ms of execution time; IE8 took 15ms. The following graph shows the comparison between the tested browsers:

String Concatenation Times across Browsers


First of all – here is the actual sample I used. Click on 1000 Objects and 1000 Objects (wArray). Test it on your machine and let me know if you see different behaviour. I am also very happy to take recommendations on how to speed up this sample – I am sure there are many ways to do so.

Second – it is important to understand the browser’s internal memory management and GC strategies. IE 7 will be around for a long, long time, so be careful with your object allocations. This also shows how important it is to test and analyze performance across browsers – especially on Internet Explorer. Most developers I meet don’t use IE at all and focus purely on Firefox, Chrome or Safari. This often means that problems like this one don’t show up until late in the project (when testers verify the software across browsers and platforms) or until the page hits “The Real User”, who happens to use IE quite a lot.

Third and final conclusion – I hope this was helpful. As always, if you are interested in more information, check out some of the other AJAX-related blogs and watch some of the webinars we recorded with our clients, e.g. the one I did with



  1. The fixed GC thresholds you describe apply to versions of JScript below 5.7. Microsoft shipped a critical update for IE6 which fixed this issue.

    IE7 has the new version of JScript by default. As I understand it, it uses a similar GC model to older versions of JScript but the thresholds are variable. So as the working set of your application increases, so does the threshold at which the GC will run.

    I’m not sure what the GC strategies are for IE8 and IE9. We mostly found GC thrashing to be a non-issue after the JScript patch.

    BTW : The Closure Library has code for detecting JScript:

    var BAD_GC = goog.userAgent.jscript.HAS_JSCRIPT &&

  2. @Daniel: I am running IE7 with the latest patches installed.
    So – the “self adjusting” thresholds basically explain the initial peaks I see in my graph: first flattening out and then starting again after a while.
    Thanks for your feedback – I will look into the Closure Library.

  3. Sebastian says:

    Without the ability to measure this in detail, we (the qooxdoo team) once found a similar issue with IE6. As Daniel wrote, it’s a lot better since IE7 and the garbage collector fix, but it’s still pretty “awesome”, especially for big object-oriented web applications.

    A workaround which dramatically improved the situation for us was an optimization in our build system. The idea was to lay out all immutable strings (strings used in comparisons or as parts of new strings) in a global data structure. This dramatically reduces the pressure on the garbage collector because there are far fewer string instances. It also dramatically improved switch statements with string compares, because comparing against a string literal is slow in this case: the literal has to be created first on every comparison.
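    Sketched by hand, the idea looks roughly like this (the table and function names are illustrative, not qooxdoo’s actual output):

```javascript
// Hand-rolled version of the build-time optimization described above:
// hoist immutable strings into one shared table so each use references
// a single instance instead of a fresh literal.
var STRINGS = {
  SPACE: " ",
  TYPE_PERSON: "person"
};

function describe(contact) {
  // Comparing against the shared instance avoids allocating the
  // comparison literal on every call.
  if (contact.type === STRINGS.TYPE_PERSON) {
    return contact.firstName + STRINGS.SPACE + contact.lastName;
  }
  return "";
}
```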

    It’s interesting that this is still not part of YUICompressor and similar tools.

    See also:

  4. @Sebastian: good suggestions and thanks for the link. You are right – it is interesting that these things are still valid for IE7. IE8 does a better job but is still behind its competition.

  5. Could it be that you were doing this:

    while (…) {
    str += other;
    }

    This is a very bad way to concatenate because it is O(n²) in the number of characters (in any language).

    What you should do is:

    var a = [];
    while (…) {
    a[a.length] = other;
    }
    str = a.join("");

    That will give much better performance (probably on any browser)!

  6. @am: you are correct – I used a very inefficient method to concatenate strings; I mentioned that in the blog. The interesting fact still is that IE’s JavaScript engine shows extremely bad performance with a growing number of JS objects and string allocations. For string concatenation I posted two links to blog posts from the guys at SitePen.



  9. Hi,
    I am running a piece of JS code to integrate a third-party tool. The function below causes IE8 to stop the script with the “stop running this script” pop-up. I debugged it and found the cause: once data grows to more than 1500 entries, the for loop runs that many times. Can you please suggest how to overcome this problem? I am new to JS.

    function index(data, key) {
    var i, len;
    data = data || [];
    for (i = 0, len = data.length; i < len; i += 1) {
    if (data[i][0] === key) {
    return i;
    }
    }
    return -1;
    }


