
Transfer Tests

From: HEP@UCL, London, UK pc43@hep.ucl.ac.uk

To:

HEP@MAN, Manchester, UK gig3.hep.man.ac.uk
Date: 28th June 2001 13:00 - 15:00

Transfer tests using iperf with web100 'readvars' monitoring. Tests were conducted as 2-minute transfers for a range of buffer sizes.


Settings

Manchester acted as the iperf 'server' with its receive buffer size set to 128k. Although iperf reported the buffer size as 256k rather than the requested 128k, the web100 software confirmed that the receive buffer was indeed set to 128k.
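The doubled figure iperf reports is consistent with documented Linux behaviour: the kernel doubles the value passed to setsockopt(SO_RCVBUF) to allow for bookkeeping overhead, so getsockopt returns twice the requested size even though the usable buffer matches the request. A minimal sketch of this (Linux-specific; the 128k figure is taken from the test setup above):

```python
import socket

# Request a 128k receive buffer, as in the Manchester iperf server setup.
requested = 128 * 1024
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, requested)

# On Linux the kernel doubles the requested value for bookkeeping overhead,
# so a 128k request is typically reported back as 256k.
reported = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(requested, reported)
s.close()
```

This would explain why iperf (which reads the value back with getsockopt) prints 256k while web100, looking at the actual TCP state, sees the 128k that was asked for.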

UCL acted as the 'client', with buffer sizes set through the kernel. The buffer sizes ranged from 4k to 256k.

<buffer size scripts>


Results

Results were logged and compiled into Excel for analysis.

These measurements were taken directly from the results output by iperf. The graph shows a roughly linear regime for window sizes below about 32k. Above this threshold the transfer rate plateaus, so for larger window sizes the throughput gained is negligible compared with the extra memory overhead.
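The shape of this curve follows from the bandwidth-delay product: below saturation, steady-state throughput is roughly window/RTT, and once that exceeds the path capacity the link itself becomes the limit. A rough illustration with assumed figures (the 2.6 ms RTT and 100 Mbit/s capacity are invented to match a ~32k knee, not measured values from these tests):

```python
# Illustrative throughput model: rate = min(window / RTT, link capacity).
# RTT and capacity are assumptions chosen so the knee falls near 32k.
RTT = 0.0026           # round-trip time in seconds (assumed)
CAPACITY = 100e6 / 8   # link capacity in bytes/s (assumed 100 Mbit/s)

def throughput(window_bytes):
    """Steady-state throughput limited by window size or link capacity."""
    return min(window_bytes / RTT, CAPACITY)

for kb in (4, 8, 16, 32, 64, 128, 256):
    rate_mbit = throughput(kb * 1024) * 8 / 1e6
    print(f"{kb:4d}k window -> {rate_mbit:6.1f} Mbit/s")
```

With these numbers the linear regime ends near window = capacity x RTT, i.e. about 32 kB, after which every larger window gives the same capped rate, matching the plateau seen in the measurements.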

Using the web100 software (with readvars), we can see the exact progress of the transfer. This graph shows the bytes transferred during each second of the iperf transfer. Dips in the graph indicate that some kind of error occurred while the data was being transferred: either a timeout through a lost packet, or some transfer renegotiation due to a fast retransmit. This is just one of the many variables available through the TCP Kernel Instrument Set.

An important variable in TCP transfers is the congestion window (cwnd). It represents the size of a 'virtual buffer' hosted by the network and maintained by the sender (in this case UCL). The sender is not allowed to have more data in flight on the network than the value of the congestion window. Its relation to the buffer size of our machine is that data held at our end (in the socket buffer) is released onto the network (governed by the congestion window) and then received by the server (Manchester). Any of these three can have a drastic effect on the transfer of data.

When an error or problem occurs during the transfer, the value of the congestion window is halved for every packet found to be 'problematic'. As can be seen, for most of the small buffer sizes no errors are encountered, resulting in a gradual increase in cwnd. However, we must note that the sender can only send up to the minimum of this value and the receive buffer size (in this case 128k), which for this example is well below most of the cwnd values.
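The halving on loss and the receiver-window cap described above can be sketched as a toy loop (the segment size, starting window, and loss pattern are invented for illustration):

```python
# Toy model: the sender may have at most min(cwnd, receiver window)
# bytes outstanding, and cwnd is halved when a loss is detected.
MSS = 1460            # assumed segment size in bytes
RWND = 128 * 1024     # receiver buffer, 128k as in these tests

def effective_window(cwnd):
    """Bytes the sender may actually have in flight."""
    return min(cwnd, RWND)

cwnd = 10 * MSS
history = []
for loss in (False, False, False, True, False, False):
    history.append(effective_window(cwnd))
    if loss:
        cwnd = max(cwnd // 2, MSS)   # halve cwnd on a detected loss
    else:
        cwnd += MSS                  # linear growth between losses
print(history)                       # window drops sharply after the loss
```

Here the single simulated loss in round four cuts the usable window in half, which is exactly the kind of dip the web100 cwnd traces show around retransmission events.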

Another very important variable is ssthresh, which determines (when compared with the value of cwnd) whether the connection should be in slow start or congestion avoidance. A continual updating (i.e. smooth evolution) of ssthresh suggests that the network is being well 'probed' by the TCP connection. As can be seen, this is the case for the larger buffer sizes, and it is confirmed by the higher transfer rates achieved.
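The role of ssthresh can be shown with a small sketch: while cwnd is below ssthresh the window grows exponentially (slow start, roughly doubling per round trip); once cwnd crosses ssthresh, growth becomes linear (congestion avoidance). The ssthresh value here is illustrative, not one observed in these tests:

```python
# Sketch of ssthresh switching a connection between slow start
# (exponential growth) and congestion avoidance (linear growth).
MSS = 1460
ssthresh = 16 * MSS   # assumed threshold for illustration

def grow(cwnd):
    """One round trip of window growth under standard TCP rules."""
    if cwnd < ssthresh:
        return cwnd * 2      # slow start: double each round trip
    return cwnd + MSS        # congestion avoidance: one MSS per round trip

cwnd = MSS
trace = []
for _ in range(10):
    trace.append(cwnd)
    cwnd = grow(cwnd)
print([c // MSS for c in trace])  # cwnd in segments per round trip
```

The kink in the trace at ssthresh mirrors the change of slope visible in the cwnd graphs: steep growth while probing, then the gentler linear climb once the connection settles into congestion avoidance.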

 

Wed, 23 July, 2003 13:07
© 2001-2003, Yee-Ting Li, email: ytl@hep.ucl.ac.uk, Tel: +44 (0) 20 7679 1376, Fax: +44 (0) 20 7679 7145
Room D14, High Energy Particle Physics, Dept. of Physics & Astronomy, UCL, Gower St, London, WC1E 6BT