Variation in RTT for various packet sizes and interval times

Aim

To determine the performance of the 3COM GigE cards in the Dells for various packet sizes and inter-packet sending times.

Method

By varying the packet size while sending a fixed number of packets, we can obtain a latency profile of the system under test. By also varying the inter-packet interval, we can see how responsive the cards are.

Scripts

An adaptation of the UDP packet-size vs. interval script was used. It was adapted to handle the differences in the ping command (most notably, it wraps around the ping version of do_packetsize.pl).

do_packetsize.pl - a wrapper script to vary the packet size of ICMP ping packets.
ping-cook.pl - a required script that handles tabulating and cross functions.
do_packetsize-vs-interval.pl - a wrapper script around do_packetsize.pl that repeats the run for various interval times.
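
As a rough sketch of what the wrappers boil down to (illustrative only, not the actual scripts; the host name, packet count, sizes and intervals below are made-up values), the sweep is essentially:

    #!/usr/bin/perl
    # Illustrative sweep only - not do_packetsize-vs-interval.pl itself.
    use strict;
    use warnings;

    my $host      = "remote-host";            # assumed destination
    my $count     = 100;                      # fixed number of packets per run
    my @intervals = (0.01, 0.1, 1);           # seconds between packets
    my @sizes     = map { $_ * 100 } 1 .. 20; # ICMP payload sizes in bytes

    foreach my $interval (@intervals) {
        foreach my $size (@sizes) {
            # standard Linux ping flags: -c count, -i interval, -s payload size
            my $out = `ping -c $count -i $interval -s $size $host`;
            # ping's summary line: rtt min/avg/max/mdev = a/b/c/d ms
            if ($out =~ m{=\s*([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+)\s*ms}) {
                print join(" ", $interval, $size, $1, $2, $3, $4), "\n";
            }
        }
    }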

 

Expectations

There should be a linear increase in the RTT for larger packets. There should also be a fixed offset caused by the processing time on both hosts and the time the NIC takes to stamp each header. Beyond the 1500-byte Ethernet MTU (which already includes the 20-byte IP and 8-byte ICMP headers, leaving roughly 1472 bytes of payload), packets have to be fragmented and hence a 'hitch' should occur.
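
As a back-of-the-envelope check on that expectation (a sketch only; the 50 microsecond host/NIC overhead below is a placeholder, not a measured figure):

    # Naive model: fixed per-packet overhead (both hosts + NIC header stamping)
    # plus the serialisation time of the frame in each direction at GigE rate.
    sub expected_rtt_us {
        my ($payload_bytes) = @_;
        my $overhead_us = 50;                           # assumed fixed offset
        my $frame_bytes = $payload_bytes + 8 + 20 + 18; # ICMP + IP + Ethernet framing
        my $wire_us     = $frame_bytes * 8 / 1000;      # bits at 1000 Mbit/s -> usec
        return $overhead_us + 2 * $wire_us;             # out and back
    }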

I don't expect a difference caused by the interval times as the path is clean (empty).

 

Results

Results file: excel, results directory.

Interesting. As expected, there is a linear increase in the RTT as a function of packet size... and then a slight jump in RTT due to IP fragmentation (more study on the results of this later). However, there is also a slight increase in RTT when we increase the inter-packet spacing (interval)?!

The only thing I can think of at present is that there is an associated time penalty in software to actually package the data before sending it out.

Unlike the minimum RTT, the maximum is pretty boring: it has the same trend with packet size, but shows no effect from the interval time.

Not much I can say about the average RTT, except that the anomaly found earlier in the minimum RTT is apparent here for the smaller packet sizes, and tails off towards the end of the sample at larger packet sizes.

The standard deviation from the mean is pretty uninteresting. There is a slight trend that for very long intervals (1 sec) the stdev is smaller. However, as shown by the statistical multiplexing experiment here, it wouldn't help much to increase the number of packets sent in the sample.

So, issues:

  1. Why is there an increase in minimum RTT with increasing inter-packet time?
  2. Is there actually less variation in RTT if we increase the inter-packet time?
  3. Need to investigate the behaviour of IP fragmentation upon RTT.
  4. ...?

 

 

Wed, 23 July, 2003 13:07