About the Author: Gary Kaiser

Gary is a Subject Matter Expert in Network Performance Analysis at Compuware APM. He has global field enablement responsibilities for performance monitoring and analysis solutions spanning emerging and strategic technologies, including WAN optimization, thin client infrastructures, network forensics, and a unique performance management maturity methodology. He is also a co-inventor of multiple analysis features, and continues to champion the value of software-enabled expert network analysis.

Understanding Application Performance on the Network – Part IV: Packet Loss

We know that losing packets is not a good thing; retransmissions cause delays. We also know that TCP ensures reliable data delivery, masking the impact of packet loss. So why do some applications seem unaffected by the same packet loss rate that cripples others? From a performance analysis perspective, how do you understand the relevance of packet loss and avoid chasing red herrings?

In Part II, we examined two closely related constraints – bandwidth and congestion. In Part III, we discussed TCP slow-start and introduced the Congestion Window (CWD). In Part IV, we’ll focus on packet loss, building on the concepts from those two entries.

TCP Reliability

TCP ensures reliable delivery of data through its sliding window approach to managing byte sequences and acknowledgements; among other things, this sequencing allows a receiver to inform the sender of missing data caused by packet loss in multi-packet flows. Independently, a sender may detect packet loss through the expiration of its retransmission timer. We will look at the behavior and performance penalty associated with each of these cases; generally, the impact of packet loss will depend on both the characteristics of the flow and the position of the dropped packet within the flow.

The Retransmission Timer

Each packet a node sends is associated with a retransmission timer; if the timer expires before the sent data has been acknowledged, the data is considered lost and is retransmitted. There are two important characteristics of the retransmission timer that relate to performance. First, the default value for the initial retransmission timeout (RTO) is almost always 3000 milliseconds; this is adjusted to a more reasonable value as TCP observes actual path round-trip times. Second, the timeout value doubles with each subsequent retransmission of a packet.
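To make the penalty concrete, here is a minimal Python sketch of the backoff pattern, assuming the classic 3-second initial RTO (real stacks recalculate the RTO from measured round-trip times):

    # Sketch: cumulative delay from RTO exponential backoff.
    # The 3-second initial value is the classic default; actual stacks
    # adjust the RTO from observed round-trip times.

    def retransmission_delays(initial_rto=3.0, attempts=4):
        """Yield (attempt, wait, total elapsed) for successive timeouts."""
        rto, elapsed = initial_rto, 0.0
        for attempt in range(1, attempts + 1):
            elapsed += rto
            yield attempt, rto, elapsed
            rto *= 2  # the timeout doubles on each retransmission

    for attempt, wait, total in retransmission_delays():
        print(f"retry {attempt}: waited {wait:4.1f}s, {total:5.1f}s elapsed")
    # retry 1: waited  3.0s,   3.0s elapsed
    # retry 2: waited  6.0s,   9.0s elapsed
    # retry 3: waited 12.0s,  21.0s elapsed
    # retry 4: waited 24.0s,  45.0s elapsed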

In small flows (a common characteristic of chatty operations, like web pages), the retransmission timer is the method used to detect packet loss. Consider a request or reply message of just 1000 bytes, sent in a single packet; if this packet is dropped, there will of course be no acknowledgement, as the receiver has no idea the packet was sent. If the packet is dropped early in the life of a TCP connection – perhaps one of the SYN packets during the TCP 3-way handshake, an initial GET request, or a 304 Not Modified response – it will be retransmitted only after 3 seconds have elapsed.

Triple Duplicate ACK

Within larger flows, a dropped packet may be detected before the retransmission timer expires if the sender receives three duplicate ACKs; this is generally more efficient (faster) than waiting for the retransmission timer to expire. As the receiving node receives packets that are out of sequence (i.e., after the missing packet data should have been seen), it sends duplicate ACKs, with the acknowledgement number repeatedly referencing the expected (missing) packet data. When the sending node receives the third duplicate ACK, it assumes the packet was in fact lost (not just delayed) and retransmits it. This event also causes the sender to assume network congestion, reducing its congestion window by 50% to allow the congestion to subside. Slow-start then increases the CWD from that new value, using a relatively conservative congestion avoidance ramp.

As an example, consider a server sending a large file to a client; the sending node is ramping up through slow-start. As the CWD reaches 24, the loss of an earlier packet is detected via a triple duplicate ACK; the lost data is retransmitted, and the CWD is reduced to 12. Slow-start resumes from this point in its congestion avoidance mode.
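To illustrate the mechanics, here is a simplified, Reno-style Python sketch of a sender reacting to a triple duplicate ACK; window sizes are counted in packets rather than bytes, and the growth rules are deliberately simplified:

    # Sketch: fast retransmit and window halving on a triple duplicate ACK.
    # Simplified Reno-style behavior; windows are counted in packets.

    class Sender:
        def __init__(self):
            self.cwd = 3        # congestion window, in packets
            self.ssthresh = 64  # slow-start threshold
            self.dup_acks = 0

        def on_ack(self, duplicate=False):
            if duplicate:
                self.dup_acks += 1
                if self.dup_acks == 3:             # triple duplicate ACK:
                    self.dup_acks = 0
                    self.ssthresh = self.cwd // 2  # assume congestion,
                    self.cwd = self.ssthresh       # halve the window,
                    # ... and retransmit the missing packet here
                    print(f"fast retransmit, CWD halved to {self.cwd}")
                return
            self.dup_acks = 0
            if self.cwd < self.ssthresh:
                self.cwd *= 2   # slow-start: double per round trip
            else:
                self.cwd += 1   # congestion avoidance: grow linearly

    s = Sender()
    for _ in range(3):            # ramp up: 3 -> 6 -> 12 -> 24
        s.on_ack()
    for _ in range(3):            # three duplicate ACKs arrive
        s.on_ack(duplicate=True)  # prints: fast retransmit, CWD halved to 12

Starting from three packets, the window doubles through slow-start to 24; the third duplicate ACK then halves it to 12, matching the example above.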

While arguments abound about the inefficiency of existing congestion avoidance approaches, especially on high-speed networks, you can expect to see this behavior in today’s networks.

Transaction Trace Illustration

Identifying retransmission timeouts using merged trace files is generally quite straightforward; we have proof the packet has been lost (because we see it on the sending side and not on the receiving side), and we know the delay between the dropped and retransmitted packets at the sending node. The Delta column in the Error Table indicates the retransmission delay.

 

Error Table entry showing a 3-second retransmission delay caused by a retransmission timeout (RTO)

For larger flows, you can illustrate the effect of dropped packets on the sender’s Congestion Window by using the Time Plot view. For Series 1, graph the sender’s Frames in Transit; this is essentially the CWD. For Series 2, graph the Cumulative Error Count in both directions. As errors (retransmitted packets or out-of-sequence packets) occur, the CWD will be reduced by about 50%.

Time Plot view showing the impact of packet loss (blue plot) on the Congestion Window (brown plot)
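Transaction Trace derives Frames in Transit for you; if all you have is a raw packet capture from the sending side, a rough bytes-in-flight approximation can be scripted. Here is a minimal sketch using scapy (the server address and pcap file name are hypothetical placeholders, and sequence number wraparound is ignored):

    # Sketch: approximate the sender's bytes in flight over time from a
    # sender-side capture. Requires scapy; address and file name are
    # placeholders.
    from scapy.all import rdpcap, IP, TCP

    SERVER = "10.1.1.10"          # the sending node in this transfer

    highest_seq = 0               # highest sequence byte sent so far
    highest_ack = 0               # highest ACK seen from the receiver

    for pkt in rdpcap("transfer.pcap"):
        if IP not in pkt or TCP not in pkt:
            continue
        tcp = pkt[TCP]
        payload = len(tcp.payload)
        if pkt[IP].src == SERVER and payload:
            highest_seq = max(highest_seq, tcp.seq + payload)
        elif pkt[IP].dst == SERVER and tcp.flags & 0x10:   # ACK flag set
            highest_ack = max(highest_ack, tcp.ack)
        if highest_seq and highest_ack:
            print(f"{float(pkt.time):.6f}s  in flight: "
                  f"{highest_seq - highest_ack} bytes")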

Using Single Trace Files

If you are not using Transaction Trace and merging client and server trace files, but instead relying on a single trace from either the client or server location, detecting a dropped packet may require some inference. Consider that from the sender’s perspective, you will always see both the initial (dropped) packet and the corresponding retransmission; Transaction Trace will mark these as packet retransmissions and calculate the delta time. However, from the receiver’s perspective, you will of course not see the dropped packet – only the (successful) retransmission.

In the case of multi-packet flows, the retransmitted packet will arrive at the receiver out of sequence – that is, after one or more packets later in the TCP stream have arrived. Transaction Trace will mark these in the Error Table as out-of-sequence packets; the delta time will be (at a minimum) slightly greater than the link round-trip time. (Sometimes, load balancers or other devices may also cause packets to arrive out of sequence, although no packets were lost. Under these conditions, the delta time value will be very small. This characteristic should help you to differentiate between these two cases.) Therefore, you should treat out-of-sequence packets (with larger delta times) as retransmissions.
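This rule of thumb can be expressed as a simple classifier; the following sketch uses illustrative thresholds, not values from any particular tool:

    # Sketch: classify an out-of-sequence packet seen in a receiver-side
    # trace. delta and rtt are in seconds; thresholds are illustrative.

    def classify(delta, rtt):
        if delta < 0.1 * rtt:
            # tiny gap: likely simple reordering (e.g., a load balancer)
            return "reordered, no loss"
        if delta >= rtt:
            # a round trip or more has elapsed: likely a retransmission
            return "retransmission (lost packet)"
        return "indeterminate"

    print(classify(delta=0.0004, rtt=0.040))  # reordered, no loss
    print(classify(delta=0.0520, rtt=0.040))  # retransmission (lost packet)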

In the case of a smaller flow, you may see similar evidence of packet loss (out-of-sequence packets on the receiving side, retransmissions on the sending side), although the delta time may be significantly greater than the round-trip time. If there are not enough packets in the flow to cause a triple duplicate ACK to trigger a retransmission, the sender’s TCP retransmission timer will be used.

Finally, in single-packet flows, there will be no evidence of a dropped packet in the receiver trace file; instead, there will just be some delay (equal to the retransmission timeout plus any node processing time) before the retransmitted packet is seen. Transaction Trace will assign these delays to remote node processing – as Client Before Thread if you are capturing on the server side, or as Server Processing if you are capturing on the client side. For persistent, long-lived TCP connections, the RTO will have been adjusted to the link and will be relatively small; this “invisible” retransmission delay will not be very long, perhaps a few times the link RTT, and in these environments an occasional dropped packet may not have a significant impact on operation time. However, for operations that open and close TCP connections frequently – like many web pages – a single dropped packet can often add 3 seconds to overall operation time. If you notice unexpected remote-node delays of about 3 seconds, and the delay occurs early in the life of a new TCP connection, you should suspect dropped packets.

In this server-side trace, the three-second delay between the TCP 3-way handshake and the client’s GET request was very likely caused by the loss of the initial GET request packet
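One way to automate this suspicion is to scan the gaps between trace events for delays close to the 3-second default initial RTO, as in the trace above. A sketch with hypothetical event timings:

    # Sketch: flag gaps that match the default initial RTO of 3 seconds.
    # Events are (timestamp, description) pairs from a hypothetical
    # parsed trace; the tolerance is illustrative.

    DEFAULT_RTO = 3.0
    TOLERANCE = 0.25   # seconds

    events = [
        (0.000, "SYN"),
        (0.042, "SYN-ACK"),
        (0.043, "ACK"),
        (3.061, "GET /index.html"),   # ~3s gap: suspect a dropped packet
    ]

    for (t0, _), (t1, label) in zip(events, events[1:]):
        gap = t1 - t0
        if abs(gap - DEFAULT_RTO) < TOLERANCE:
            print(f"{gap:.3f}s gap before '{label}': "
                  "possible packet lost and retransmitted after the RTO")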

For this type of problem in particular, the caveat we mentioned in Part I (look for consistency) is important; if one in 1000 operations incurs a 3-second delay due to a dropped packet, and you happen to capture that one in your trace, you probably don’t want to spend too much time chasing that problem. Finally, for the greatest confidence and accuracy, you should consider the more comprehensive insight derived from merging client and server trace files.

Corrective Actions

When packet loss is caused by congestion, improving performance usually means mitigating the congestion caused by other traffic. This might be accomplished in a number of ways, including QoS policies for priority queuing, reducing background traffic, or increasing bandwidth.

If packet loss is due to other conditions – a faulty network interface, a misconfigured queue, or a poor cable connection – then corrective actions will require additional sleuthing, looking for a specific interface or network segment where the drops are introduced.

To minimize the impact of packet loss, ensure your TCP connections are not closed unnecessarily, and that TCP sessions do not time out too quickly for your users’ behavior patterns. You may also consider reducing the value of the initial retransmission timeout; 3 seconds is an eternity in most enterprise networks.

How do you report packet loss? Can you correlate the impact of packet loss to application performance?

In Part V we will look at four types of client and server processing delays, examining how these appear on the network. Stay tuned and feel free to comment below.

Comments

  1. Dana J. Dawson says:

    Nice series, Gary! I just have a quick comment. Technically “slow-start” and “congestion avoidance” are two different modes for TCP, so it might be useful to note that for any readers who may choose to dig deeper into the workings of TCP, such as by reading the relevant RFCs where such a distinction is important.

  2. Gary Kaiser says:

    Hi Dana,
    I have been guilty of conflating the two into a single topic for some time. I may in fact change my approach, using congestion management as the general topic, to include such algorithms or phases as slow-start and congestion avoidance; in any case, I will pay closer attention to the distinction between the two. Thanks very much for the comment!
