TCP Tahoe and Reno (and hence also NewReno) employ a reactive congestion control strategy: TCP must experience loss in order to regulate its throughput, which can have a negative impact on the performance of the internet as a whole. In summary, TCP Tahoe and Reno repeatedly increase the load in an effort to find the point at which congestion occurs, and then back off.
TCP Vegas employs an alternative strategy: it tries to predict when congestion is about to happen and adapts its window to compensate. This is a proactive approach, as it attempts to reduce its sending rate before packets start being discarded by the network.
We can call this congestion avoidance, as opposed to congestion control.
It was first proposed by Brakmo and Peterson [vegas94] as a set of modifications to the source behaviour of Reno. The new algorithm is able to inter-operate with any other valid implementation of TCP, and was reported to achieve between 37% and 71% better throughput than Reno, to retransmit between one fifth and one half as much data as Reno, and to use the available bandwidth more efficiently.
The algorithm is based on the principle that there are signs in the network prior to congestion. The source watches for evidence that a router's queue is building up and congestion is imminent; e.g. an increase in RTT (Wang and Crowcroft's DUAL algorithm) or a flattening of the sending rate (Wang and Crowcroft's Tri-S algorithm). Both of these principles rest on the fact that as buffer space in a router fills, the RTT increases. Vegas also extends the ideas of Jain's CARD algorithm, in which the decision to increase or decrease the window size is based on both the RTT and the window size; as a result, the window oscillates around its optimal value.
Vegas's approach is to compare two throughput rates: the rate actually achieved for a certain number of packets, and the expected throughput obtained by calculation. It uses the fact that the "number of bytes in transit is directly proportional to the expected throughput, and therefore, as the window size increases - the throughput of the connection should also increase".
Using this information, Vegas measures and controls the amount of extra data the connection keeps in transit. If Vegas sends too much data it will cause congestion, increasing the RTT of its packets. However, the protocol's design also means it is important for Vegas to keep enough data in transit that it can still adjust its throughput to the available bandwidth.
The Vegas algorithm differs from standard Reno and Tahoe in three areas:
- a modified congestion avoidance mechanism, based on comparing expected and actual throughput;
- a new packet loss detection (retransmission) mechanism, based on a fine-grained timer;
- a modified slow start.
More detail on each of these is given below.
Congestion avoidance
As Vegas tries not to send at a rate that causes router buffers to fill, we need a reference RTT against which to judge whether this is happening. Let BaseRTT be the minimum of all RTTs measured during the connection. If we are not overflowing the connection, then
ExpectedRate = CongestionWindow / BaseRTT
Here, we assume that the CongestionWindow (cwnd) is equal to the number of bytes in transit.
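The calculation above can be sketched as follows (a minimal illustration; the function name and units are assumptions, not taken from any real TCP stack):

```python
def expected_rate(congestion_window_bytes: float, base_rtt: float) -> float:
    """ExpectedRate = CongestionWindow / BaseRTT, in bytes per second.

    congestion_window_bytes: cwnd, assumed equal to the bytes in transit.
    base_rtt: minimum RTT measured over the connection, in seconds.
    """
    return congestion_window_bytes / base_rtt

# A 32 KB window over a 100 ms BaseRTT gives 327680.0 bytes/s.
print(expected_rate(32 * 1024, 0.1))
```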
We also need to measure the current sending rate, ActualRate, once per RTT. This is done by picking one 'distinguished' packet per RTT, timestamping it when it is sent (T1), and recording when the ack for that packet is received (T2). Dividing the number of bytes in transit over that interval by the elapsed time gives a rate:
ActualRate = ( BytesOut between T1 and T2 ) / ( T2 - T1 )
There is, of course, an overhead associated with the timestamping, and it grows as the RTT shrinks. Initial studies [vegas94] reported only a 5% overhead relative to Reno on SPARC machines in 1994.
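The measurement can be sketched like this (illustrative names only; a real stack would hook T1 and T2 into its send and ack paths):

```python
def actual_rate(bytes_out: int, t1: float, t2: float) -> float:
    """ActualRate = bytes sent between T1 and T2, divided by (T2 - T1).

    t1: timestamp (seconds) when the distinguished packet was sent.
    t2: timestamp (seconds) when its ack was received.
    """
    return bytes_out / (t2 - t1)

# 128 KB acked over a 250 ms round trip -> 524288.0 bytes/s.
print(actual_rate(128 * 1024, 0.0, 0.25))
```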
The source then compares the two rates:
Diff = ExpectedRate - ActualRate
if ( Diff < a ), increase the CongestionWindow linearly (by 1 segment)
else if ( Diff > b ), decrease the CongestionWindow linearly (by 1 segment)
else ( a <= Diff <= b ), leave the CongestionWindow unchanged
While a and b are defined in terms of a rate (e.g. KB/s), the authors of [vegas94] argue that these values can also be thought of as the number of 'extra buffers' the connection occupies in the network.
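Putting the pieces together, one per-RTT congestion-avoidance step might look like this (a sketch under assumed units: rates and the a/b thresholds in bytes per second, cwnd in segments; all names are illustrative, not Vegas source code):

```python
def vegas_update(cwnd_segments, mss, base_rtt, actual_rate, a, b):
    """One Vegas congestion-avoidance step, run once per RTT.

    Returns the new congestion window, in segments.
    """
    expected_rate = cwnd_segments * mss / base_rtt  # bytes per second
    diff = expected_rate - actual_rate
    if diff < a:
        return cwnd_segments + 1   # too little extra data: linear increase
    elif diff > b:
        return cwnd_segments - 1   # too much extra data: linear decrease
    else:
        return cwnd_segments       # a <= diff <= b: leave cwnd unchanged

# With cwnd = 10 segments of 1000 bytes and BaseRTT = 100 ms,
# ExpectedRate is about 100000 bytes/s.  Thresholds a = 2000, b = 4000:
print(vegas_update(10, 1000, 0.1, 99000, 2000, 4000))  # increases to 11
print(vegas_update(10, 1000, 0.1, 94000, 2000, 4000))  # decreases to 9
print(vegas_update(10, 1000, 0.1, 97000, 2000, 4000))  # unchanged at 10
```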
An important note is that the linear decrease in the Vegas algorithm does not violate AIMD, since it happens before packet loss occurs. The increase of one segment per RTT also conforms to RFC 1122.
Packet loss detection
The basic premise of the packet loss detection algorithm in Vegas is that it compares the elapsed time for a segment, as signalled by a received duplicate ack, against a fine-grained RTO. If the elapsed time exceeds this RTO, Vegas retransmits the segment immediately, without waiting for three dupacks as Tahoe and Reno do.
Vegas reads and records the system clock each time a packet is sent. When an ack arrives, Vegas reads the clock again, computes the round-trip time, and updates a fine-grained timeout value. This timeout is computed in the same way as Reno's, but is not rounded to a multiple of 500 ms. Recall that the coarse-grained timeout used by Reno still operates under Vegas, in case the Vegas mechanism fails to catch a loss. Vegas decides to retransmit in the following situations:
- when a duplicate ack is received and the fine-grained timer for the corresponding segment has expired;
- when the first or second ack after a retransmission is received, and the fine-grained timer for the next unacknowledged segment has expired.
This technique enables Vegas to detect losses sooner and to recover from multiple drops without re-entering the slow-start phase, provided the coarse retransmission timer does not expire first. It therefore addresses a problem from which Reno suffers badly, namely multiple drops in the same window of data.
To translate a loss into a change in the cwnd, [vegas94] states that the cwnd "should only be reduced due to losses that happened at the current sending rate, and not due to losses that happened at an earlier, higher rate. In Reno, it is possible to decrease the congestion window more than once for losses that occurred during one RTT interval. In contrast, Vegas only decreases the congestion window if the retransmitted segment was previously sent after the last decrease. Any losses that happened before the last window decrease do not imply that the network is congested for the current congestion window size, and therefore, do not imply that it should be decreased again."
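The two checks described above can be sketched as follows (hypothetical helper functions for illustration, not an actual stack's API):

```python
def retransmit_on_dupack(sent_at, now, fine_rto):
    """On a duplicate ack, retransmit immediately if the segment's
    elapsed time already exceeds the fine-grained timeout, rather
    than waiting for three dupacks as Reno and Tahoe do."""
    return (now - sent_at) > fine_rto

def decrease_cwnd_for_loss(segment_sent_at, last_decrease_at):
    """Only cut the window if the retransmitted segment was sent
    after the last decrease, i.e. the loss occurred at the current
    sending rate rather than at an earlier, higher one."""
    return segment_sent_at > last_decrease_at

print(retransmit_on_dupack(10.0, 10.5, 0.2))  # True: 0.5 s elapsed > 0.2 s RTO
print(decrease_cwnd_for_loss(3.0, 4.0))       # False: sent before the last cut
```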
Modified slow start
The principle is similar to Reno's slow start, and Vegas still maintains the rate-comparison technique discussed in the Congestion Avoidance section. However, Vegas doubles its congestion window only every other RTT; in between, the cwnd remains constant so as not to invalidate the comparison of the expected and actual rates.
Slow start ends when the sender detects queue build-up, which is the case when the actual rate falls below the expected rate by a certain amount, defined as the γ (gamma) threshold. Once slow start ends, the connection falls into the linear increase/decrease of congestion avoidance.
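One way to sketch this per-RTT slow-start logic (gamma in the same rate units as above; a hypothetical structure, not Vegas source code):

```python
def slow_start_step(cwnd, grow_this_rtt, expected_rate, actual_rate, gamma):
    """One per-RTT step of Vegas's modified slow start.

    The window doubles only on alternate RTTs; in between it stays
    fixed so the expected and actual rates can be compared.  Returns
    the new cwnd and whether the connection is still in slow start.
    """
    if expected_rate - actual_rate > gamma:
        return cwnd, False    # queue build-up detected: leave slow start
    if grow_this_rtt:
        cwnd *= 2             # exponential growth on this RTT
    return cwnd, True         # caller toggles grow_this_rtt each RTT

print(slow_start_step(4, True, 100000, 99000, 5000))   # (8, True)
print(slow_start_step(8, False, 100000, 99000, 5000))  # (8, True)
print(slow_start_step(8, True, 100000, 90000, 5000))   # (8, False)
```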
© 2001-2003, Yee-Ting Li, email: firstname.lastname@example.org,
Tel: +44 (0) 20 7679 1376, Fax: +44 (0) 20 7679 7145
Room D14, High Energy Particle Physics, Dept. of Physics & Astronomy, UCL, Gower St, London, WC1E 6BT