
Vanilla TCP Comparison

We see that the performance of DRS is actually better than Web100tune (non-DRS) for high loss rates. This is not surprising: looking at the cwnd values shows that DRS maintains a higher average cwnd, as it remains capped by the receive rate.
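
To make the capping effect concrete, here is a toy AIMD model (a sketch, not the code used for these tests; the random per-packet loss model, the cap value and the round counts are all illustrative assumptions) for comparing the mean and stdev of cwnd with and without a DRS-style advertised-window cap:

```python
import random
from statistics import mean, pstdev

def aimd_trace(per_pkt_loss, cap=None, rtts=50000, seed=1):
    """Toy AIMD in packets per RTT; cap models a DRS-style
    advertised-window limit (None = large static buffers)."""
    random.seed(seed)
    cwnd, trace = 2.0, []
    for _ in range(rtts):
        # P(at least one drop among the ~cwnd packets sent this RTT)
        if random.random() < 1.0 - (1.0 - per_pkt_loss) ** cwnd:
            cwnd = max(cwnd / 2.0, 2.0)   # multiplicative decrease
        else:
            cwnd += 1.0                   # additive increase
        if cap is not None:
            cwnd = min(cwnd, cap)         # receive-rate cap
        trace.append(cwnd)
    return trace

for cap in (None, 100):
    t = aimd_trace(1e-3, cap=cap)
    print(cap, mean(t), pstdev(t))
```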

The throughput is also more variable with DRS in the same region, as a result of AIMD. What is surprising is that for low loss rates DRS is still more variable than Web100tune, right up until we get no losses whatsoever (>2e7).

WHY?

Looking at the above graph of the stdev of the AIMD windows, we see that cwnd changes a lot more with Web100tune. This should result in greater variability in the number of packets leaving the interface.

This doesn't appear to be the case: in the above graph, the stdev of packets out is actually greater for DRS (which matches what the throughput shows).

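For reference, the per-interval stdev being plotted can be computed along these lines, assuming a log of (time, cwnd, packets-out) samples; the tuple layout is an assumption, not the actual trace format used here:

```python
from statistics import pstdev

def interval_stdevs(samples, interval=1.0):
    """Bucket (time_s, cwnd, pkts_out) samples into fixed intervals and
    return (interval_start, stdev of cwnd, stdev of pkts) per bucket."""
    buckets = {}
    for t, cwnd, pkts in samples:
        buckets.setdefault(int(t // interval), []).append((cwnd, pkts))
    return [(b * interval,
             pstdev([c for c, _ in buckets[b]]),
             pstdev([p for _, p in buckets[b]]))
            for b in sorted(buckets)]
```
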
We see that the ratio of acks to data packets differs between the DRS and Web100tune implementations for high loss rates. Once we reach the line rate, they are very similar. As acks provide the feedback TCP needs in order to function well, DRS TCP should perform better as a result.

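The ack-to-data ratio itself is just a quotient of packet counters; a minimal sketch, assuming Web100-style snapshots with counters named DataPktsOut and AckPktsIn (the snapshot dict format here is hypothetical):

```python
def ack_data_ratio(snap_start, snap_end):
    """Acks received per data packet sent over an interval, from two
    Web100-style counter snapshots (dicts of variable -> value)."""
    data = snap_end["DataPktsOut"] - snap_start["DataPktsOut"]
    acks = snap_end["AckPktsIn"] - snap_start["AckPktsIn"]
    return acks / data if data else float("nan")
```
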
We see the effects of the kernel reductions on cwnd above. For CMs, the two are very similar. However, we get many more CWV (congestion window validation) reductions for DRS than with Web100tune. SHOULDN'T THIS BE THE OTHER WAY AROUND? Although the capping may be causing cwnd to be considered invalid...

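If CWV is indeed the culprit, the mechanism would be RFC 2861: while the sender uses less than the whole cwnd (exactly what a receive-window cap causes), cwnd is decayed toward the amount actually used. A schematic of that reduction (not the actual kernel code):

```python
def cwv_reduce(cwnd, ssthresh, w_used):
    """RFC 2861-style reduction, applied roughly once per RTT while the
    sender is application- or window-limited (w_used < cwnd): remember
    the recent cwnd in ssthresh, then decay cwnd toward w_used."""
    if w_used < cwnd:
        ssthresh = max(ssthresh, (3 * cwnd) // 4)
        cwnd = (cwnd + w_used) // 2
    return cwnd, ssthresh
```

Under a persistent cap this fires repeatedly, which would account for the extra reductions seen with DRS.
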
Tue, 29 July, 2003 16:30
© 2001-2003, Yee-Ting Li, email: ytl@hep.ucl.ac.uk, Tel: +44 (0) 20 7679 1376, Fax: +44 (0) 20 7679 7145
Room D14, High Energy Particle Physics, Dept. of Physics & Astronomy, UCL, Gower St, London, WC1E 6BT