# Self Similar Background with DRS TCP

This page compares the performance of two new TCP-based protocols against the standard stack found in linux-2.4.20. The tests were run on the MB-NG network with a latency of 6 ms and a bottleneck line rate of 1 Gbit/s.

We can see from the graphs below that we achieve very close to line rate for all stacks. This is due to the low latency of the link, which allows AIMD to recover very quickly.

The following graph shows the Coefficient of Variation (CoV): the stdev of the throughput divided by the average throughput. It gives an indication of the performance of each stack. Note that we would like low variation and high throughput, and hence the lower the CoV the better (a small computational sketch is given at the end of this section).

We see that even though the new stacks are meant to be 'better', in this particular network configuration and self-similar traffic pattern (Hurst parameter 1, maximum of 120 packets for line rate, time period of 1476 usec, and variance of 1 packet), we get similar results for the region below about 425 Mbit/s background rate. This is most likely due to the fact that the streams are capped to the receive window as a result of DRS. But for the region between there and about 775 Mbit/s background, the CoV for the new stacks is much higher than that of the standard Vanilla stack. As we achieve pretty much the same throughput, this implies that we get higher variance (and hence stdev) on the throughput of the streams. This greater variance is bad and makes the streams less 'predictable' in real-life environments.

The above graph shows that we are indeed performing worse (in terms of CoV) due to the high stdev of the new stacks. In order to investigate this fully, one needs to test this also on long-distance networks.

Looking at the AveCwnd values of the stacks, we see that HSTCP actually manages to maintain a higher value than Vanilla. This is not surprising, as it alters AIMD in favour of high throughput. Notice the capped cwnd for background rates below 425 Mbit/s; one should repeat the test with a higher socket buffer size. However, in certain cases, especially in real-life environments, it is actually preferable to be able to cap the throughput to maintain a steady operating region (as the cap on the cwnd will also cap the throughput, and hence make the stream less variable).

Surprisingly, the AveCwnd of Scalable is actually lower for the region from 425 to about 750 Mbit/s. This suggests that the MIMD of Scalable is actually working against it in this operating region.

We can see that we get a lot more congestion signals with the newer stacks; this is most likely due to the fact that the more aggressive nature of the newer stacks causes more feedback. In the case of HSTCP, the extra feedback from the congestion signals is actually beneficial, as it manages to get a higher average cwnd value. However, for Scalable it is not.

Plotting the achieved average throughput against the average cwnd size shows that we get a slightly higher throughput with Scalable. The region of the vertical line is where we are limited by the socket buffer size.

The graph above shows the number of packets that had to be retransmitted (either through real loss or through dupacks). If we use this as an indication of how 'efficient' the stacks are (even though they all use the same retransmission algorithms, so this solely measures how 'bad' the extra AI/MI is to the stack's own traffic), we see that the HSTCP and Scalable stacks are highly bad to themselves!
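As promised above, here is a minimal sketch of the CoV calculation, assuming a list of per-run throughput samples per stack. The sample figures are invented purely for illustration and are not measurements from these tests.

```python
# Coefficient of Variation: stdev of the throughput over the average
# throughput. Lower is better (steady throughput at a high rate).
from statistics import mean, stdev

def cov(throughputs):
    """CoV of a list of throughput samples (Mbit/s)."""
    return stdev(throughputs) / mean(throughputs)

vanilla  = [941, 936, 944, 939]   # invented per-run throughputs
scalable = [902, 958, 871, 949]   # similar mean, much larger spread

print(f"Vanilla  CoV: {cov(vanilla):.4f}")
print(f"Scalable CoV: {cov(scalable):.4f}")   # higher CoV despite a similar mean
```

The AIMD/MIMD behaviour discussed above can be summarised as per-ACK and per-loss window update rules. The sketch below uses the standard AIMD rules and the constants from the published Scalable TCP proposal (a = 0.01, b = 0.125); it is a simplification, not code from the patched 2.4.20 kernels tested here, and HSTCP is only described in a comment because its a(w)/b(w) response function is a table rather than a single rule.

```python
# Per-ACK / per-loss cwnd updates (cwnd in segments), simplified.

def vanilla_ack(cwnd):
    return cwnd + 1.0 / cwnd      # AIMD: roughly +1 segment per RTT

def vanilla_loss(cwnd):
    return cwnd / 2.0             # halve on congestion

def scalable_ack(cwnd, a=0.01):
    return cwnd + a               # MIMD: ~1% growth per RTT, independent of cwnd

def scalable_loss(cwnd, b=0.125):
    return cwnd * (1.0 - b)       # back off by only 1/8, so it stays aggressive

# HSTCP generalises AIMD: its increase a(w) grows and its decrease b(w)
# shrinks as cwnd grows, reverting to standard AIMD for small windows.
# This is consistent with HSTCP sustaining a higher AveCwnd here, while
# Scalable's fixed MIMD rule can work against it when cwnd is modest.
```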
Looking at the number (fraction) of dupacks we get for the various background loads, we see that we get a lot more dupacks for the newer stacks. Correlating the dupacks and the packets that were retransmitted shows that the three stacks do not follow the same pattern in dupacks and retransmitted packets (they would all follow the same line if they did). We see that Scalable copes with (or should that be causes?) more dupacks. However, the dupacks do not give an indication of the actual loss, as a packet is only retransmitted after three consecutive dupacks. When this happens, we get a fast retransmit.

Plotting the fast retransmits against the packets retransmitted shows a relatively well correlated graph; however, HSTCP seems to fall just off the gradient, and Scalable has scattered points almost everywhere! The above graph shows that we get a lot of fast retransmits without actually receiving any more dupacks. This suggests that we are retransmitting due to ???. It can't be losses due to timeouts, as a timeout would not trigger a fast retransmit.

We see from the above graph that we actually get more dupacks for the newer stacks at the same throughput than with Vanilla; more so with Scalable than HSTCP. As the background traffic pattern is the same in all cases, the cause of the dupacks must be the protocols themselves.

As expected, the above plot is similar to dupacks vs fast retransmits, as there is almost a 15:2 relation between fast retransmits and retransmitted packets. A one-to-one relation between DSACKs and DupAcks would show that we are getting reordering on the network.
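Because the analysis above leans on the distinction between dupacks and actual retransmissions, here is a minimal sketch of the standard three-dupack fast-retransmit rule. The class and counters are hypothetical bookkeeping for analysing a trace of ACK numbers; the real logic lives inside the kernel (tcp_input.c in linux-2.4.20).

```python
DUPACK_THRESHOLD = 3   # standard fast-retransmit trigger

class FastRetransmitCounter:
    """Counts dupacks and fast retransmits from a stream of ACK numbers."""

    def __init__(self):
        self.last_ack = None
        self.dupacks = 0
        self.fast_retransmits = 0

    def on_ack(self, ack_no):
        if ack_no == self.last_ack:
            self.dupacks += 1
            # only the third consecutive dupack triggers a retransmission,
            # which is why raw dupack counts overstate the real loss
            if self.dupacks == DUPACK_THRESHOLD:
                self.fast_retransmits += 1
        else:
            self.last_ack = ack_no
            self.dupacks = 0   # new data acked: reset the counter

# e.g. four ACKs of 1000 = three dupacks = one fast retransmit
c = FastRetransmitCounter()
for ack in [1000, 1000, 1000, 1000, 2000]:
    c.on_ack(ack)
print(c.fast_retransmits)   # -> 1
```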
© 2001-2003, Yee-Ting Li, email: ytl@hep.ucl.ac.uk, Tel: +44 (0) 20 7679 1376, Fax: +44 (0) 20 7679 7145, Room D14, High Energy Particle Physics, Dept. of Physics & Astronomy, UCL, Gower St, London, WC1E 6BT