8h Scalable TCP Transfer - 25th February 2003

Purpose

The purpose of this test is to see how well Scalable TCP competes against the other TCP variants on an empty, over-provisioned network.
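For context, the variants compared here differ only in their congestion-window update rules. A minimal sketch follows (illustrative Python, not the patched kernel code); the constants a = 0.01 and b = 0.125 are the values from Tom Kelly's Scalable TCP proposal, and HSTCP's table-driven response function is omitted for brevity:

```python
# Congestion-window update rules of the variants under test (sketch).
# Windows are in packets; HSTCP (table-driven) is not shown.

def vanilla_ack(cwnd):
    # standard TCP: +1 packet per RTT, i.e. +1/cwnd per ACK
    return cwnd + 1.0 / cwnd

def vanilla_loss(cwnd):
    # standard TCP: halve the window on loss
    return cwnd / 2.0

def scalable_ack(cwnd, a=0.01):
    # Scalable TCP: fixed additive step per ACK
    # (multiplicative increase per RTT)
    return cwnd + a

def scalable_loss(cwnd, b=0.125):
    # Scalable TCP: back off by 1/8 instead of 1/2
    return cwnd * (1.0 - b)

# Recovery after a single loss at cwnd = 1000 packets:
cwnd = scalable_loss(1000.0)   # drops to 875 packets
acks = 0
while cwnd < 1000.0:
    cwnd = scalable_ack(cwnd)
    acks += 1
# For Scalable TCP the recovery time in RTTs is independent of the
# window size, which is what makes it attractive at gigabit rates.
```

The point of the sketch is the scaling behaviour: after a loss, standard TCP needs a number of RTTs proportional to the window to recover, whereas Scalable TCP's recovery time is constant.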

Method

A Scalable TCP patch from Tom Kelly was merged with Web100 (2.1a); the resultant patch can be found here. Note that this patch does NOT include the modifications to no_cong etc. in dev.c. It does, however, implement Tom's SACK fast-path algorithm.

Tests were conducted from lon02 to man03 on the MBNG network using the standard txqueuelen on the sender.
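The test itself amounts to a single long memory-to-memory iperf stream between the two hosts. A sketch of the sort of commands involved is below; the interface name, socket buffer size, and report interval are illustrative assumptions, not values taken from the actual test logs:

```shell
# Sender (lon02): check the interface is at the stock Linux
# txqueuelen of 100 (interface name eth0 is an assumption)
ip link show eth0 | grep qlen

# Receiver (man03): listen with a large socket buffer
iperf -s -w 2M &

# Sender: one 8-hour (28800 s) TCP stream, reporting every 10 s
iperf -c man03 -w 2M -t 28800 -i 10
```

Running Web100 alongside gives the per-connection kernel statistics (packets out, sendstalls, etc.) used in the graphs below.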

Results

sender log+web100 zip

recv log+web100 zip

These graphs show the throughput as reported by Web100 and by iperf (top). Whilst the variation in throughput is not as large as it is with Vanilla TCP, the throughput 'band' is wider than it is with HSTCP.

 

 

These graphs show the number of pkts per second reported by Web100 leaving the interface. As you can see, there is a relatively narrow peak of pkts around the 73,000 mark. This is much lower than the 83,000 pkts/sec expected at GigE rates, which correlates quite well with the fact that the average rate at which Scalable TCP sent was only about 840 Mbit/sec.
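The correlation can be checked with a little arithmetic, assuming full-sized 1500-byte Ethernet frames:

```python
# Convert line rate to packets per second, assuming every packet
# is a full-sized 1500-byte (12,000-bit) Ethernet frame.

MTU_BITS = 1500 * 8  # 12,000 bits per full-sized packet

gige_pps     = 1e9   / MTU_BITS   # packet rate at full gigabit
scalable_pps = 840e6 / MTU_BITS   # packet rate at the observed 840 Mbit/s

print(round(gige_pps))      # 83333
print(round(scalable_pps))  # 70000
```

So 840 Mbit/sec works out at roughly 70,000 pkts/sec, in reasonable agreement with the observed ~73,000 peak (the gap is plausibly down to sub-MTU packets and retransmissions), while a full gigabit gives the ~83,000 figure quoted above.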

 

This graph shows the cumulative number of sendstalls recorded by Web100. Note that this implementation of Scalable TCP did not incorporate the txq-moderation code. While HSTCP reported only about 30,000 sendstalls for the entire 8 hours and Vanilla TCP only 6,000, Scalable TCP reports a whopping 1.2 million! [WHY?]
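To put those totals in perspective, the averages over the 8-hour run (using the counts quoted above) are:

```python
# Average sendstall rate per variant over the 8-hour transfer.

DURATION_S = 8 * 3600  # 28,800 seconds

rates = {name: stalls / DURATION_S
         for name, stalls in [("Scalable", 1_200_000),
                              ("HSTCP",       30_000),
                              ("Vanilla",      6_000)]}

for name, r in rates.items():
    print(f"{name}: {r:.1f} sendstalls/sec")
```

That is, Scalable TCP stalled the sender over 40 times a second on average, versus roughly once a second for HSTCP and once every five seconds for Vanilla.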

Wed, 23 July, 2003 13:07

 
 
© 2001-2003, Yee-Ting Li, email: ytl@hep.ucl.ac.uk, Tel: +44 (0) 20 7679 1376, Fax: +44 (0) 20 7679 7145
Room D14, High Energy Particle Physics, Dept. of Physics & Astronomy, UCL, Gower St, London, WC1E 6BT