About the Author

Andreas Grabner has been helping companies improve their application performance for 15+ years. He is a regular contributor within the Web Performance and DevOps communities and a prolific speaker at user groups and conferences around the world. Reach him at @grabnerandi.

Performance as Key to Success! How Online News Portals Could Do Better

This is the English translation of my German article published on Create or Die today.

What factors make you think a web page is good or not? What keeps you on that page longer than on others? On the one hand it is the content on the page and whether this content is of interest to you. On the other it is the speed with which you can navigate through the individual pages. High-speed internet and performance-optimized pages make our day-to-day browsing easier when accessing our emails, tweets, or the latest updates on sports and news. With all the changes of recent years in Web Performance Optimization (WPO) we have been spoiled by those sites that follow all these Best Practices and boosted their web-site experience. No wonder we start losing our patience with a site that doesn’t respond as fast as we’ve come to expect.

For this particular reason Google modified their ranking algorithm to also include web performance as one of their metrics. The speed of a web page – the time from entering the URL in the browser until the page has been fully loaded – has now become one factor that defines whether your site shows up at the top or at the bottom of a search result. A good reason to pick “fast by default” as the main theme for this year’s Velocity 2010 Conference. Steve Souders – the driving force behind Web Performance Optimization – hosted this conference, and it made very clear the significant impact web site performance has on end-user behavior and thus on the success of a web site.

At last year’s Velocity the first results on the business impact of performance were presented to the public. Microsoft, Yahoo and Google presented results of internal tests showing that slower pages have a direct impact on banner clicks – and therefore on generated revenue. Conversely, Shopzilla presented the results of a web-site overhaul, showing how a faster web site positively impacts the number of users, the time spent on the site, the number of clicks and the generated revenue.

Who is faster? Who is better? Bild.de or Spiegel.de?

Similar to my analysis of the FIFA World Cup website, the Golf Masters or the website for the Winter Olympic Games in Vancouver, I wanted to take a closer look at the two biggest German news portals: Bild.de and Spiegel.de.

Based on Alexa, these are the top two news portals in Germany. The two portals probably have very different reader communities, as the publishers have different ways to “present” news. My analysis focuses on how well these pages follow the Best Practices on Web Performance Optimization – the actual content therefore doesn’t matter, as it is all about how the content is delivered. Both pages deliver high-volume, multimedia content. Both pages make money with online ads, and therefore it is in their interest that many users visit their site, stay as long as possible and click as many ads as possible. We’ve learned from Google, Bing, etc. that speed is a critical factor in keeping users on the page – now let’s see how these two sites do:

There are 3 great and free tools available that we can use for an analysis like this: dynaTrace AJAX Edition, Yahoo YSlow and Google PageSpeed.

All 3 tools analyze individual web pages based on the Best Practices from dynaTrace, Yahoo and Google and provide a nice overview of whether the rules discussed in these documents are met or not.

The Test scenario

On both sites – www.bild.de and www.spiegel.de – I looked at two individual pages: the start page and the overview page for politics. It is important for my testing to start with a cleared browser cache in order to analyze the pages as a first-time visitor sees them. It is recommended to run the same test again – this time with a primed browser cache – and then compare the timings.

The Result

Using any of the 3 mentioned tools is rather simple. dynaTrace AJAX allows me to record a complete browser session, which includes all 4 pages I am testing. The following screenshot shows the Performance Report that opens when double-clicking on the recorded session, listing the 4 tested pages:

dynaTrace Performance Report shows all 4 pages in performance comparison

We can make the following interesting observations:

  1. Both sites have a First Impression Time of less than 2 seconds. This means that it takes less than 2 seconds until the user gets a first visual impression of the website – which is considered acceptable.
  2. The Fully Loaded Time of both start pages is very high – both take about 14 seconds to fully load. The main reason for this is the amount of multimedia content (mainly images).
  3. Bild.de requires 289 HTTP Requests to fully load – that’s 113 more than Spiegel.de.
  4. Bild.de uses JavaScript and XHR (XmlHttpRequests) to dynamically load content (a minimal sketch of this pattern follows after this list).
  5. The full page size of bild.de is 4.6MB. That is roughly 2.5 times the size of Spiegel.de, which “only” has 1.8MB.
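
To illustrate observation 4: the following is a minimal sketch of how content can be loaded dynamically via XmlHttpRequest after the initial page load. The endpoint URL and the element id are hypothetical placeholders, not actual bild.de resources.

```javascript
// Minimal sketch of dynamically loading content via XmlHttpRequest (XHR).
// '/ticker/latest.html' and the element id 'ticker' are hypothetical examples.
function loadTicker() {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/ticker/latest.html', true); // asynchronous request
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      // Inject the returned HTML fragment into a placeholder element
      document.getElementById('ticker').innerHTML = xhr.responseText;
    }
  };
  xhr.send();
}

// Defer the extra roundtrip until the main page has finished loading
window.onload = loadTicker;
```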

Lifetime of a Web Page

One of the nicest features of dynaTrace AJAX is the Timeline View. This view shows all activities (HTTP requests, JavaScript execution, rendering, XHR requests and page events) that happen on a single page in chronological order. We can also see all requests split up by domain, which makes it easy to spot which domains serve a lot of content or serve their content very slowly. Especially on pages that include external 3rd-party content, e.g. ad banners, it is interesting to see how this type of content slows down the page. Google’s PageSpeed has a new feature in its latest version that lets you focus on either your own or external content when analyzing page performance.

The following two screenshots show the dynaTrace Timeline for the start page of Bild.de and Spiegel.de:

Bild.de loads most of its images from a dedicated image domain. Rendering activity also seems to be very high

Spiegel.de serves all images from its primary domain. It also shows hardly any JavaScript execution or excessive rendering

There are some fundamental differences between the two start pages:

  • Bild.de uses multiple domains to deliver multimedia content, e.g. bilder.bild.de or newscase.bild.de. Splitting content across multiple domains is a Best Practice called Domain Sharding (see the sketch after this list). It has the advantage of letting the browser use more physical network connections and therefore download more content in parallel
  • Spiegel.de delivers most of its multimedia content from the primary www.spiegel.de domain, which is the bottleneck of their deployment. A browser has a limited number of physical network connections to a single domain, e.g. IE 7 uses 2 connections. If more than 2 resources have to be downloaded from one domain, they get queued up and have to wait for a connection to become available. Domain Sharding solves this problem by splitting content across multiple domains
  • Both pages have individual resources that take extraordinarily long to load, e.g. the initial HTML page and CSS files on spiegel.de or 2 Flash components on bild.de
  • Bild.de shows constantly high rendering activity – caused by animated images as well as the large number of images on that page
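
As a side note to the Domain Sharding point above, here is a minimal sketch of the idea: image URLs are distributed deterministically across several hostnames so the browser can open more parallel connections. The shard hostnames and the image path are hypothetical examples, not how either site actually generates its URLs.

```javascript
// Sketch of Domain Sharding: spread image URLs over multiple hostnames so the
// browser can download more resources in parallel. Hostnames are hypothetical.
var shards = ['img1.example.com', 'img2.example.com'];

function shardedUrl(path) {
  // Map the same path to the same shard every time, otherwise the browser
  // cache would be split across hostnames and images re-downloaded.
  var hash = 0;
  for (var i = 0; i < path.length; i++) {
    hash = (hash + path.charCodeAt(i)) % shards.length;
  }
  return 'http://' + shards[hash] + path;
}

console.log(shardedUrl('/fotos/topstory.jpg')); // e.g. http://img2.example.com/fotos/topstory.jpg
```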

Key Performance Indicators

In addition to the performance metrics shown in the dynaTrace AJAX Performance Report Overview, the Key Performance Indicator (KPI) tab shows a set of additional important KPIs:

Key Performance Indicators for Bild.de – showing high load time, server-time, wait time and number of requests

As mentioned earlier, bild.de requires many HTTP roundtrips (289) in order to load the full page. The number one rule in Web Performance Optimization is to minimize HTTP roundtrips, and 289 roundtrips from browser to server is definitely too much. Every roundtrip has to wait for a free physical network connection (read more on The Two HTTP Connection Limit Issue). Every roundtrip also includes the overhead of network latency between browser and server as well as the overhead of the HTTP protocol itself (HTTP headers very often contribute a large percentage of the total roundtrip size).
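
If you want a rough feeling for the request count of your own page without a dedicated tool, modern browsers expose this through the Resource Timing API (which did not yet exist when this analysis was done; dynaTrace AJAX captures the requests at the network level instead). A minimal sketch:

```javascript
// Rough request count for the current page using the Resource Timing API.
// Note: the browser's resource timing buffer is limited by default, so a page
// with hundreds of requests may not report all of them.
var resources = performance.getEntriesByType('resource');
console.log('HTTP requests captured:', resources.length + 1); // +1 for the HTML document itself

// Group the requests by hostname to see which domains serve the most content
var byHost = {};
resources.forEach(function (r) {
  var host = new URL(r.name).hostname;
  byHost[host] = (byHost[host] || 0) + 1;
});
console.table(byHost);
```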

The KPI Report also shows how many resources (images, CSS, …) are being cached or have to be retrieved from the server every time. The first table shows that 80 resources do not use any browser cache headers. That means that these resources have to be re-downloaded on every subsequent visit of the same user.

Usage of the Browser Cache

The Browser Caching tab on the dynaTrace AJAX Performance Report performs a detailed analysis of the HTTP cache headers on every downloaded resource. It is recommended to read the Best Practices on Browser Caching, which explain the available browser caching options.

Bild.de and Spiegel.de don’t make optimal use of the browser cache. The following illustration shows a list of all resources that have no cache setting, a cache setting with a past date or a very short cache expiration date:

Many resources on the page will not be cached at all or have a very short expiration date

Obviously it doesn’t make sense to cache every image on a page. Images that have a short lifetime should not fill up the local browser cache unnecessarily. Looking more closely at some of the images on these pages, however, it seems that at least some of them could be cached for longer, as they won’t change that frequently. The following section of the website shows images that are cached only briefly (< 48 hours):

Some examples of images that could be cached longer as they won’t change that frequently

These logos will probably not change every 2 days – therefore it makes sense to specify a Far-Future Expiration Header. This reduces the number of roundtrips for revisiting users.
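
What such a far-future header looks like can be sketched with a small Node.js handler. This is only an illustration of the response headers involved – in practice this is usually configured in the web server itself (e.g. via Apache’s mod_expires), and the file path below is a hypothetical example.

```javascript
// Minimal Node.js sketch: serve a logo with far-future cache headers so that
// revisiting browsers reuse the local copy instead of re-downloading it.
var http = require('http');
var fs = require('fs');

http.createServer(function (req, res) {
  if (req.url === '/images/logo.png') {            // hypothetical resource
    var oneYearInSeconds = 365 * 24 * 60 * 60;
    res.writeHead(200, {
      'Content-Type': 'image/png',
      'Cache-Control': 'public, max-age=' + oneYearInSeconds,
      'Expires': new Date(Date.now() + oneYearInSeconds * 1000).toUTCString()
    });
    fs.createReadStream(__dirname + '/logo.png').pipe(res);
  } else {
    res.writeHead(404);
    res.end('Not found');
  }
}).listen(8080);
```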

Reducing HTTP Roundtrips

I have mentioned this several times now – the top rule is to reduce network roundtrips. Besides making use of the browser’s cache, there are other ways to reduce the roundtrips for every user (not just revisiting ones). dynaTrace AJAX has a Network tab that shows those roundtrips that are considered “unnecessary”:

7 unnecessary HTTP Redirects, CSS, JavaScript and Images on this page

HTTP redirects allow implementing some important use cases, e.g. authentication, short memorable URLs or end-user monitoring. Too often, though, HTTP redirects are caused by wrong configuration settings on the web server. Avoiding unnecessary redirects is a great way to improve web site performance. On the start page of spiegel.de we have 5 redirects – bild.de has 7. A redirect means an additional HTTP roundtrip, as the browser needs to follow the redirect and request another URL in order to get to the originally requested resource.
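
If you want to check the redirect cost of your own page, modern browsers expose it through the Navigation Timing API – again just an illustration; the numbers above were measured with dynaTrace AJAX.

```javascript
// Check how many redirects the browser followed before the page loaded and how
// much time they cost. Cross-origin redirect timings may be hidden unless the
// servers send a Timing-Allow-Origin header.
var nav = performance.getEntriesByType('navigation')[0];
if (nav) {
  console.log('Redirects before this page:', nav.redirectCount);
  console.log('Time spent in redirects (ms):', nav.redirectEnd - nav.redirectStart);
}
```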

Great savings can also be achieved on CSS, JavaScript and image resources. All 3 types can potentially be merged into fewer resources. The Best Practice on Network Requests and Roundtrips talks about CSS and JavaScript merging and compression. It also covers CSS Sprites, a technique that combines multiple images into a single resource and uses CSS styles to show the individual images at the correct location on the page. Check out the Best Practices on CSS Sprites.

Reducing the number of resources not only brings the advantage of fewer roundtrips. It also means that fewer resources need to be downloaded from the same domain, which reduces waiting time (remember the limit of 2 connections per domain on IE 7 mentioned earlier in this blog?).

On all pages of the two sites (Bild and Spiegel) we can observe too many JavaScript, CSS and image resources. Merging all of these files will not be possible, as it is not always technically feasible, but it looks like some of them could be merged, which would greatly improve load time.

Dynamic content

Even though the majority of the content on a news site is static for at least a while (news doesn’t arrive every second), there is enough content that needs to be generated dynamically for every user. Examples are any type of ticker information, weather data or personalized ads.

dynaTrace identifies dynamic content based on the rules defined in the Best Practice on Server-Side Performance. All these requests are listed on the Server-Side tab in the Performance Report. The Server-Time is what is also known as Time-To-First-Byte: the time from the last byte of the HTTP request being sent until the first byte of the response being received. This also includes network latency between browser and server – it is, however, the closest you can get to the server time without actually measuring on the server. The following illustration shows the dynamic resources of bild.de:

dynaTrace shows slow running server-side requests – both on bild.de as well as Ad-Service domains

The slowest requests are the stock information requests, the initial page request itself, and the weather information. We also see some requests to an external ad service and requests to a web-tracking service (not visible in the screenshot above). Server-side performance mainly becomes an issue for dynamically generated content – and usually only in a scenario where many users want to access data from the server. In this case it becomes an even bigger problem because it prevents a larger number of users from actually clicking on those ads that generate revenue.
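
For readers who want to approximate this Server-Time / Time-To-First-Byte on their own pages without a dedicated tool, the Navigation and Resource Timing APIs in modern browsers offer a rough equivalent:

```javascript
// Approximate Time-To-First-Byte for the main document.
var nav = performance.getEntriesByType('navigation')[0];
if (nav) {
  console.log('Document TTFB (ms):', nav.responseStart - nav.requestStart);
}

// Per-resource TTFB, e.g. for ad or ticker requests; responseStart is 0 for
// cross-origin resources that do not send a Timing-Allow-Origin header.
performance.getEntriesByType('resource').forEach(function (r) {
  if (r.responseStart > 0) {
    console.log(r.name, 'TTFB (ms):', Math.round(r.responseStart - r.requestStart));
  }
});
```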

The blog Top 10 Performance Problems taken from Zappos, Monster, Thomson and Co lists the typical server-side performance problems and shows how to prevent them.

Conclusion: Who delivers the fastest News?

In our tests, neither site does particularly well. Both sites deliver lots of content without following the Best Practices on Browser Caching or Network Roundtrips. The fully-loaded time of both start pages is very high at about 14 seconds each (obviously this also depends on your connection speed). Both sites deliver an acceptable first visual impression (< 2 seconds).

dynaTrace, PageSpeed and YSlow allow uploading performance results to ShowSlow. ShowSlow is an Open Source platform that can be used as a performance repository for web metrics. ShowSlow.com hosts a public instance of the ShowSlow server and allows you to upload and compare results:

Performance comparison between the sites using all available Web Performance Tools

The differences in ranking and grading come from every tool putting its focus on different rules. dynaTrace puts its focus on load time first and then on rules such as browser caching or network roundtrips. YSlow and PageSpeed focus more on the Best Practice rules. A good mix of tools is therefore recommended – especially because no single tool supports every browser anyway.

Now – who is the winner of this analysis? Let’s hope it is us – the readers of these online news portals. With the results of this analysis, with the help of the tools, and with the help of people like Steve Souders, web site performance is put in the spotlight, which will ultimately lead to better and faster websites:

Performance == More Users == More Revenue


Comments

  1. Mirko Novakovic says:

    Hi Andreas,

    it is always nice to read your well-written articles in which you analyze websites, and I think they are very useful for understanding the basics of WPO.

    In this case I think you forgot about two very important performance tuning rules:

    1. Only tune your application if needed.

    2. Metrics are only metrics and not problem indicators.

    So what do I mean with the first point? OK. I open a browser… type in http://bild.de… 1-2 seconds later the website is loaded and I can see the nice naked girls on the screen… they are so nice that I don’t care that the Firefox status bar shows the site is not completely loaded, and without turning an analysis tool on I cannot even see what the browser is still loading, as nothing happens on the screen.

    As a user I would say: big website (> 4MB), a lot of pictures, and the website runs very, very quickly! So why should I really want Bild.de to spend money on optimizing this website (and in the end it is our money as readers that is spent)?

    That leads me to point 2. Do I want to optimize a website because some metrics (e.g. number of YSlow issues) are not good?

    No. Metrics are good, but you should always validate them against the requirements. In this case I would say that the response time is so good that they do not matter. Especially if you take into account that the whole content is really dynamically edited and that the CMS seems to be very well optimized (caching etc.).

    So my conclusion: very nice article, good analysis, BUT from my point of view a wrong conclusion: “In our tests, neither site does particularly well.”

    Mirko

  2. Good points, Mirko. I should have been more specific when I said “Both sites deliver an acceptable first visual impression (<2 seconds)”. This is a good value, as people get to see something (whether it is a naked girl or some actual news) very quickly.

    The main problem I still see on both pages is the lack of browser caching and the lack of CDNs (or multiple host domains). The first is easy to address and not only helps the end user when re-visiting but also takes a lot of pressure off the server. The second definitely helps the user, as the browser can download more resources in parallel.

    Overall I have to agree with the point you are making on “it’s not always the metrics”. This heavily depends on the type of website we are analyzing. Thanks for your input – I will keep that in mind for future analyses.

    Any other thoughts and comments are highly appreciated.

  3. Mirko,

    you made some good points. Whether you want to optimize or not depends on the impact on the end-user experience or on other factors you want to optimize for.

    As you said, one might be perfectly happy with the performance one gets. This does not mean, however, that everything is perfect. At the same time, optimizing just because you can makes no sense.

    This is one of the reasons why we include impact calculations. This enables performance engineers to decide whether they want to optimize or not.

  4. Mirko Novakovic says:

    Alois,

    I can only agree with you. Performance is not about tuning because you can, but because you need to. Impact calculation is very useful and much better than just “metric counting”.

    So tools and metrics are really good at supporting engineers if they need to optimize their website, but we have to take care that people are not focusing too much on tools. I’ve seen developers spending days using profilers etc. because it is fun. And they optimized code that was OK, only because the methods were described as a “bottleneck” – after some days they recognized that there will always be a No. 1 bottleneck – even if you optimize forever :-)

    Mirko

  5. Thank you very much for the tutorial, very interesting. – nico neugeboren
