Welcome to the Show of CDN Monitoring: Act 3 – Things Going Wrong

In my first two posts, Act 1 – The What and Why and Act 2 – The How and How Not, I covered the main benefits of CDNs and what type of tools are needed to monitor them. Today I want to go into some detail on why you need to consider monitoring them in the first place. Let’s face it: if things work well enough, why worry about monitoring them? Of course it is great to be able to see all the details, but do I really need all that information? Aren’t the CDNs doing a good enough job?

The answer to that last one: Yes, CDNs are doing a very good job … most of the time.

How can you know the CDN delivers what it promised?

In the end you will have made your decision to go for a CDN based on criteria like “if we invest X in this solution, the improvement needs to be at least Y”. Typically X and the expected improvement are fairly easy to define – but when it comes to determining the Y actually delivered, traditional testing or monitoring approaches are usually applied, and as I explained in Act 2 – The How and How Not, they fail to deliver the right answers.

Back in my theatre days (see Act 1 – The What and Why) we took the risk and chose our marketing flyer distribution agency purely based on budget and gut feeling. Soon our flyers could be found in relevant tourist and cultural locations across Berlin and even all the way in Hamburg. They even gave us the option to replace the material with new versions on short notice (e.g. updating the flyers with a note that the first two shows were already sold out). And believe me, we certainly didn’t have the bandwidth to pull such a stunt on our own just a few days before opening night.

Our theatre “CDN” journey ended there – but think of bigger players like Disneyland or Phantom of the Opera, who spend a fortune on making sure I see their flyers in all the hotels I visit – they sure seem to follow me wherever I travel.

We were not “professional” enough to actually validate that our investment was a good one – or even to systematically check whether they did everything they had told us they would. Again: we were quite busy running the show. But overall we were quite happy.

And now think of your CDN investment.

What does the CDN promise?

Instead of just copying what different vendors publish on their websites or repeating the core benefits explained in Act 1, let me list some basic technical aspects most people think of:

  • Get the content closer to the end user
  • Cache the content and lower the traffic on my data center
  • Balance the load to deliver a good performance even in peak times
  • Always be available

What could possibly go wrong?

Looking at this simplified view, everything sounds fine, and with such basic items the risk of failure should be relatively low.

However, we very often see issues that go undetected by traditional testing/monitoring approaches, caused by CDN customer misconfigurations, CDN outages or other irregularities.

And again thinking back – all of these issues exist in the real brochure world as well.

While not complete, the list includes:

  • Wrong routes sending the request halfway across the globe instead of to the closest PoP.
    Once I saw a whole stack of Phantom of the Opera flyers in a nice little hotel in Germany – alas, advertising the great show in Singapore!
  • Content not cached or compressed the way it should be.
    Brochures folded in the wrong format and thus not fitting into the stands are quite damaging. And sending out the wrong caching headers or breaking the nice content compression is also not something you would get a lot of applause for.
  • All requests hitting the same PoP instead of spreading the load.
    One time all of our brochures were placed in only one of the hotel lobbies instead of all the lobbies we had paid for in our package. Luckily, a friend pointed it out to us.
  • DNS mishaps.
    Misprinting the contact information on where to find the show or how to book tickets resembles a case in which a customer found out that in some key markets the DNS entries were wrong, resulting in the site not being available in a number of countries.

Example: CDN requests misrouted across the globe

Surprisingly, our data shows something that happens quite often and across a number of CDN vendors. Instead of routing the request to the closest PoP and thus optimizing latency due to RTT, the request is often routed to some server far away from the actual user. If it were just a single request, the impact might be negligible, but when most of your resources are served with a latency of, say, 500ms, it quickly adds up and results in a very high total response time.

In the following examples, synthetic tests were executed using the Compuware APMaaS Last Mile agents from a large number of different locations within Australia, Germany and Italy.

Results show that in many cases PoPs in the USA were hit from machines located in Australia, and while some of these still offered a very low connection latency, quite a number of them caused a dramatic slowdown in overall end user performance. The table shows the average connect time from the Last Mile agents to the PoP as a measure of latency.

Avg connection time of the 20 most hit CDN PoPs during synthetic Last Mile tests in Australia
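
If you want to do a rough spot check of this yourself, one approach is to time the TCP handshake to whatever edge IP the CDN hostname resolves to from a given location – a connect time far above the RTT you would expect regionally hints that you are being routed to a distant PoP. Here is a minimal Python sketch; the hostname is a placeholder, not one of the hosts from these tests.

```python
# Minimal sketch: resolve a CDN-fronted hostname and time the TCP handshake
# to the edge IP it maps to. A connect time of several hundred ms from, say,
# Australia suggests the request is being routed to a faraway PoP.
# The hostname below is a placeholder, not taken from the tests in this post.
import socket
import time

CDN_HOST = "static.example-cdn.com"  # hypothetical CDN-fronted hostname
PORT = 443

def connect_time_ms(host: str, port: int):
    """Return (resolved edge IP, TCP connect time in milliseconds)."""
    ip = socket.gethostbyname(host)               # which edge does DNS hand us?
    start = time.perf_counter()
    with socket.create_connection((ip, port), timeout=5):
        pass                                      # handshake only, no HTTP request
    return ip, (time.perf_counter() - start) * 1000

if __name__ == "__main__":
    ip, ms = connect_time_ms(CDN_HOST, PORT)
    print(f"{CDN_HOST} -> {ip}: connect {ms:.0f} ms")
```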

Another example shows that it’s not only Australia having such issues. The following map shows results from a test conducted in Germany over 24h. Green dots represent the locations of end user machines used for the synthetic Last Mile test and the red dots show the locations of the hit CDN PoPs.

Requests from within Germany are routed to CDN PoPs across the globe

Yet another case, found with one of our Italian customers, showed that the response times of end users connected via Telecom Italia as their local ISP were 33% above the average of all others. Looking at the network components, the biggest difference was the average connection time to the CDN PoPs being used. The end user machines using Telecom Italia had an average connection time of 298ms while all others connected within 181ms. Drilling into the details, we found that most of the CDN PoPs hit by these end user machines were actually located in the US, which of course explains the drop in performance.

Comparison of average CDN PoP connection time – Telecom Italia vs. other ISPs

CDN PoPs hit from Italy – minimum 100 total connections
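
The per-ISP slicing that surfaced this gap is simple aggregation once you can export the raw measurements. A minimal sketch of that grouping, with illustrative sample values rather than the actual data from this case:

```python
# Minimal sketch: group per-request connect times by the end user's ISP and
# compare each ISP's average against the overall average. The records below
# are illustrative only, not the measurements from this case.
from collections import defaultdict
from statistics import mean

measurements = [
    # (isp, connect_time_ms) -- in practice exported from your monitoring tool
    ("Telecom Italia", 310), ("Telecom Italia", 285),
    ("Fastweb", 175), ("Vodafone IT", 188), ("Wind", 182),
]

by_isp = defaultdict(list)
for isp, ms in measurements:
    by_isp[isp].append(ms)

overall = mean(ms for _, ms in measurements)
for isp, values in sorted(by_isp.items()):
    avg = mean(values)
    print(f"{isp:16s} avg {avg:6.1f} ms ({avg / overall - 1:+.0%} vs overall)")
```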

Lesson learned: Make sure you know where the content is delivered from and correlate end user performance with increased latency due to misrouting.

Example: Cache hit ratio too low

In most cases you would want to follow web performance best practice and serve static content from a CDN with a long cache expiry. CDNs honor the HTTP headers you serve and thus pull a static resource from your origin the first time it is requested and after that cache it on various tiers within their own network. Especially on high traffic sites you would very often see a cache hit ratio well above 99% for static content. However, every now and then something goes wrong – and more often than not it’s a misconfiguration introduced by your own team not sending out the correct headers.
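
A quick way to sanity-check what is actually being sent out is to fetch a static asset and inspect the caching-related response headers. Here is a minimal sketch using Python’s standard library; the URL is a placeholder, and the exact hit/miss debug header (X-Cache and friends) varies per CDN vendor.

```python
# Minimal sketch: fetch one static asset and print the headers that matter
# for CDN caching and compression. The URL is a placeholder, and the hit/miss
# header name differs between CDN vendors (some expose it only via debug headers).
import urllib.request

URL = "https://static.example.com/img/logo.png"  # hypothetical static asset

req = urllib.request.Request(URL, headers={"Accept-Encoding": "gzip"})
with urllib.request.urlopen(req, timeout=10) as resp:
    h = resp.headers
    print("Cache-Control   :", h.get("Cache-Control"))    # long max-age expected for static content
    print("Expires         :", h.get("Expires"))
    print("Age             :", h.get("Age"))              # > 0 usually means it came from a cache
    print("X-Cache         :", h.get("X-Cache"))          # HIT/MISS on some CDNs
    print("Content-Encoding:", h.get("Content-Encoding")) # gzip/br if compression is working
```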

Sometimes, however, you look at the data and start to wonder what is going on. In this particular example a homepage was monitored in Australia, and the CDN hit/miss ratio of 97% was not good enough.

Overall hit/miss ratio of static content delivered by the CDN

A quick look at the distribution of misses showed that one Australian ISP (Telstra) had much worse results than the others. And indeed, splitting the data into segments showed that while most synthetic end user machines, spread across 17 different local ISPs, had a very good hit ratio of over 99%, the main ISP only delivered 95%. The root cause for this would not lie within your team but in different configurations either within the CDN or the local ISP. In any case, getting in touch with them and presenting detailed proof of your findings will typically help you get the issue resolved.

Hit/miss ratio of static content requested from end user machines connected via Telstra

Lesson learned: Make sure you know the hit/miss ratio and are able to split it up into segments like region or ISP.

Example: Oversubscription of a CDN PoP

Typically you would expect a CDN to distribute the load somewhat evenly across its network, and that is often the case. However, as you can see in the chart below, it may also happen that during a certain time all the traffic is served from a single PoP. In this particular case there was no drastic performance degradation since the amount of traffic and the number of requests were very low – but such incidents could also happen at larger scale, causing a traffic surge on the oversubscribed PoP and resulting in performance issues for the end user.

Traffic served from one PoP and not distributed evenly
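
Detecting this pattern does not require anything fancy: count how many requests each edge IP (PoP) served within a time window and look at the skew. A minimal sketch, with documentation-range IPs standing in for real PoPs:

```python
# Minimal sketch: count requests per serving edge IP in a time window.
# A single IP taking nearly all of the traffic is the pattern shown above.
# The IPs are documentation examples, not real CDN PoPs.
from collections import Counter

served_by = [
    # one entry per monitored request, e.g. exported from your synthetic tests
    "203.0.113.10", "203.0.113.10", "203.0.113.10",
    "203.0.113.10", "198.51.100.7",
]

counts = Counter(served_by)
total = sum(counts.values())
for ip, n in counts.most_common():
    print(f"{ip:15s} {n:4d} requests ({n / total:.0%})")
```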

Example: DNS misconfiguration

A major hotel chain was gearing up for a global relaunch of its website. Obviously not wanting to spin up numerous data centers across the globe, it paid for CDN services to decrease latency and guarantee good performance. Launch day was getting closer and the team had everything prepared for a big SaaS load test hitting the core application from all over the world. Results were coming in, and very quickly a slight panic entered the room.

A number of key regions had 0% availability! The requests didn’t even make it to the data center or the CDN. Looking at the detailed results returned by the synthetic Last Mile agents and running a few ad-hoc DNS lookups from these end user machines across the problematic regions was enough to understand the root cause: quite simply, a number of DNS entries which had been uploaded to the CDN configuration system contained wrong IP addresses.
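
An ad-hoc check like the one described above can be as simple as comparing what a hostname actually resolves to against the addresses the CDN configuration is supposed to publish – ideally run from the affected regions. A minimal sketch; hostname and expected IPs are placeholders.

```python
# Minimal sketch: an ad-hoc DNS sanity check that compares the resolved
# addresses of a hostname against the set you expect the CDN to announce.
# Hostname and expected IPs are placeholders; run this from the regions
# where your end users actually are to catch region-specific mistakes.
import socket

HOSTNAME = "www.example-hotels.com"           # hypothetical launch hostname
EXPECTED_IPS = {"192.0.2.10", "192.0.2.11"}   # what the CDN config should publish

resolved = {info[4][0] for info in socket.getaddrinfo(HOSTNAME, 443, proto=socket.IPPROTO_TCP)}
unexpected = resolved - EXPECTED_IPS

print("resolved  :", sorted(resolved))
print("unexpected:", sorted(unexpected) if unexpected else "none – looks consistent")
```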

This was a typical case in which pre-production testing from all the regions in which end users would be accessing the application helped to prevent a disaster.

Lesson Learned: Make sure you can validate any configuration changes of your CDN from where your end users are – before they get access.

Example: Real User Monitoring detects CDN performance peak

This case has already been described in detail in Why Bon Ton needs real-time visibility into 85% of its content delivered by Akamai, but it is such a classic when it comes to the value a RUM solution brings to the table regarding CDN or 3rd party monitoring. Bon Ton had rolled out the Compuware RUM solution and was able to see a drastic performance peak. Drilling into the data, it turned out that this was caused by the slow delivery of three particular images served by the CDN.

The benefit of the RUM solution here lies in getting visibility across all the resources served in your application. While synthetic tests offer much deeper technical insight, they are not able to scan your complete application 24/7 and hit every aspect of your site.

That’s why a broadly distributed synthetic monitoring network combined with a real user monitoring tool that has visibility into CDN and 3rd party resources is the perfect combination.

Don’t live with the risk of not knowing what is going on with your application!
