
HSTCP DRS

We see that HSTCP on the MBNG link achieves similar results to the Vanilla graph. However, as HSTCP alters its AIMD values according to the window size experienced, the incline at high loss rates is not so straight. This can also be seen in the stdev of the throughput, which matches closely the highs and lows of the throughput.
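To make "alters its AIMD values" concrete, here is a rough sketch of the HighSpeed TCP increase/decrease parameters a(w) and b(w), using the default constants from RFC 3649 (this is the published formula, not anything fitted to the MBNG data):

```python
import math

# HSTCP replaces vanilla TCP's fixed AIMD (a=1 segment, b=0.5) with
# window-dependent a(w) and b(w). Constants below are RFC 3649 defaults.
LOW_WINDOW, HIGH_WINDOW = 38.0, 83000.0   # segments
LOW_P, HIGH_P = 1e-3, 1e-7                # loss rates at those windows
HIGH_DECREASE = 0.1                       # b(w) at HIGH_WINDOW

# Slope of the log-log response function W(p).
S = math.log(HIGH_WINDOW / LOW_WINDOW) / math.log(HIGH_P / LOW_P)

def b(w):
    """Multiplicative decrease: 0.5 at w=38, falling to 0.1 at w=83000."""
    if w <= LOW_WINDOW:
        return 0.5
    return ((HIGH_DECREASE - 0.5)
            * (math.log(w) - math.log(LOW_WINDOW))
            / (math.log(HIGH_WINDOW) - math.log(LOW_WINDOW)) + 0.5)

def a(w):
    """Additive increase in segments/RTT: ~1 at w=38, ~72 at w=83000."""
    if w <= LOW_WINDOW:
        return 1.0
    p = LOW_P * (w / LOW_WINDOW) ** (1.0 / S)  # invert the response function
    return w * w * p * 2.0 * b(w) / (2.0 - b(w))
```

The key point for the graphs above is that a(w) grows with the window, which is why the HSTCP cwnd recovers faster than vanilla at large windows and why the incline is not a straight line.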

The tailing off of the stdev can be explained in exactly the same way as in the vanilla DRS graph.

Here, we see the effects of the HSTCP protocol more clearly; whilst the standard vanilla DRS cwnd is rather more linear, the HSTCP cwnd grows more aggressively under high loss, although it is the same under very high loss and when we reach the DRS limit (the receive window).

We see a similar stdev of the cwnd to the vanilla plot; however, there is more variance near the peak of the stdev, and the peak also occurs sooner (on the x axis), implying that HSTCP is functioning better by being able to achieve line rate under higher loss conditions than Vanilla TCP.

Here we see that the linear relation between the cwnd and throughput still holds. However, for fairly high cwnd values, the stdev of the throughput is slightly less than anticipated, implying that this is the region where HSTCP performs best on this network (as the stdev of the throughput is least, and hence the throughput most stable).

The graph above shows the total number of packets put onto the network by both the sender and the receiver for each test. Matching the throughput graphs, we do not get a straight-line region before the plateau.

Looking into the rates more closely, we see that the stdev of the packet counts, especially for the data packets (sender out and receiver in), is huge, especially at a packet drop frequency of about 50000 packets.

The graph above shows that the cwnd is indeed limiting the number of packets that get out of the sender.
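The bound here is simply cwnd over RTT: the sender can have at most one cwnd of unacknowledged data in flight per round trip. A quick sketch (the cwnd and RTT values are made up for illustration, not the MBNG measurements):

```python
# Throughput is bounded by cwnd / RTT, since at most one cwnd of
# unacknowledged data can be in flight per round trip.
def max_rate_mbps(cwnd_bytes, rtt_ms):
    return cwnd_bytes * 8 / (rtt_ms / 1000.0) / 1e6

# e.g. a hypothetical 4 MB cwnd on a 6 ms path caps out around 5.6 Gbit/s
rate = max_rate_mbps(4 * 1024 * 1024, 6.0)
```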

The above graph shows the number of fast retransmit attempts and the number of packets which were retransmitted. For high loss they are roughly equivalent (as we would expect), but for lower loss (right of the x axis) there is a discrepancy. WHY? Is this caused by line loss?

We can see this better on the graph above: for low numbers of packets retransmitted we get a gradient higher than 1, and there are also two stray points around (1200, 1500). WHY?

Here, we also see that the ratio of dupacks to packets is not 3 (or 6 in this case).

Again, we see that the packet ratio appears to drop to about 1/3 (i.e. one ack for every third data packet). WHY?
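One thing worth remembering when reading these ratios (a toy model, not the Web100 counters themselves): with cumulative acks, a single lost segment elicits one dupack for every later segment in the flight, so the number of dupacks per retransmit is tied to the flight size, not pinned at the 3 needed to trigger fast retransmit:

```python
def receiver_acks(arrived):
    """Cumulative-ack receiver: returns the ack number emitted for each
    arriving segment (one ack per segment, no delayed acks)."""
    acks, expected, buffered = [], 0, set()
    for seg in arrived:
        buffered.add(seg)
        while expected in buffered:
            expected += 1
        acks.append(expected)  # next in-order segment expected
    return acks

# One flight of 20 segments with segment 5 lost:
arrived = [s for s in range(20) if s != 5]
acks = receiver_acks(arrived)
dupacks = sum(1 for i in range(1, len(acks)) if acks[i] == acks[i - 1])
# segments 6..19 each repeat ack 5, giving 14 dupacks for one loss --
# comfortably past the 3-dupack fast-retransmit threshold.
```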

The next graph shows that there is a weak relation between the number of DupAcksIn and the number of PktsRetrans. It starts off linear, but then plateaus for high numbers of retransmitted packets.

THINK I'M ON TO SOMETHING HERE.

Mon, 4 August, 2003 15:40
© 2001-2003, Yee-Ting Li, email: ytl@hep.ucl.ac.uk, Tel: +44 (0) 20 7679 1376, Fax: +44 (0) 20 7679 7145
Room D14, High Energy Particle Physics, Dept. of Physics & Astronomy, UCL, Gower St, London, WC1E 6BT