
Number of Packets on Precision

Aim

To determine the optimal number of packets to be sent with udpmon in order to obtain reasonable and consistent results.

Method

Using the script do_number.pl, a server was set up on pc56 with a -s option of 65555. We are basically looking for a plateauing of the results, similar to the socket buffer size measurements. The value at which the plateau starts should indicate (at least for the test in question) the optimal number of packets at which tests should be performed, where optimal means a balance between accuracy and speed.
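(For reference, here's the kind of plateau check I have in mind, as a little Python sketch. The function name and the 1% threshold are my own choices, nothing to do with udpmon or do_number.pl; the input is just a list of (number of packets, recv_wire_rate) pairs.)

    def find_plateau(results, rel_tol=0.01):
        """results: list of (n_packets, rate) pairs sorted by n_packets.
        Returns the first n_packets at which the rate differs from the
        previous point by less than rel_tol, i.e. where the curve flattens."""
        for (n_prev, r_prev), (n_curr, r_curr) in zip(results, results[1:]):
            if abs(r_curr - r_prev) / r_prev < rel_tol:
                return n_curr
        return None  # no plateau in the range tested

    # Input numbers below are invented purely to show the data format.
    rates = [(100, 420.0), (500, 610.0), (1000, 655.0), (2000, 660.0), (3000, 661.0)]
    print(find_plateau(rates))  # -> 2000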

Run 1

This will act as an initial sweep to determine the values that I should be investigating. The range of values is between 100 and 3000 packets. The packet size was set to 1400 bytes and an interpacket time of 50 µs was used.

Hmm... let's see:

  1. Wild oscillations - why is this happening? Maybe it's to do with flow control...?
  2. The recv_wire_rate is very low - I was getting 940 Mbit/s with TCP!

Let's deal with the low throughput first. I used a 50 µs interval time, so let's do a run of interval time against throughput. Why? Because the data rate out of pc55 seems to be the limiting factor in all of this.

Run 2

So we are going to run a set of tests to see the effect of the interpacket time on the data rates. We will use 1400-byte packets, of which we will send 1000 at intervals of 5 to 100 µs.

So we have nice graphs here showing the achieved sending intervals (interval) as a function of the requested intervals (req interval). We can see that the minimal interval achievable is about 10 µs. There is also a nice graph (the last one) showing how the variation in received packet rate results in an exponentially decreasing throughput. The bottom line of this is that the interpacket time should be set to <10 µs for 1000 Mbit/s rates.
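Quick sanity check on that number: the user-data rate is just the packet size divided by the packet spacing, so for 1400-byte packets you need a spacing of about 11.2 µs or less to reach 1 Gbit/s (ignoring header overhead). A throwaway Python sketch:

    def max_interval_us(pkt_bytes, target_mbit_per_s):
        """Largest interpacket gap (microseconds) that still sustains the
        target rate, ignoring Ethernet/IP/UDP header overhead."""
        return pkt_bytes * 8.0 / target_mbit_per_s  # bits / (Mbit/s) comes out in microseconds

    print(max_interval_us(1400, 1000))  # ~11.2 us for 1 Gbit/s
    print(max_interval_us(1400, 100))   # 112 us for 100 Mbit/s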

Run 3

Okay, so with that information, we run run 1 again, but this time with the packet interval set to 5 µs. We also include some more packet counts to cover the low end of the test run.

Okay, so we're not getting much more out of it for anything greater than about 500 packets.

Let's zoom in on this range, shall we?

Run 4

We're going to be a bit extreme here: we'll go from 2 to 500 packets in steps of roughly 5 packets.

The two graphs are the same, just with different scales on the x axis. The rate for anything under about 20 packets is certainly wildly off!

Run 5

Hmm... interesting, when you look closer, how it oscillates quite periodically as it converges to a point. I think I need to go out a bit further to more packets, get rid of the stuff below about 200, and repeat 3 times.

 

Run 6

That's strange how they all have slightly different deviations... not a good thing at all... it's also a bad thing that the trend is still going down. Looking at run 3, it appears that the values do not start levelling off until about 1300 packets.

Run 7

The same thing goes here for larger packet sizes, unfortunately. There is an obvious trend in each test, but the variations aren't really all that predictable...

Gawd, I'm so stupid - I forgot that GridNM was running in the background, hence putting extra load on the CPU. I must run the tests without that CPU load to ensure smooth and repeatable results. Still, it does go to show that even a stateless protocol such as UDP requires that the CPU load be minimal for optimal performance...

 

30 July 2002

Okay, so I've finally got around to doing this again... same setup, except this time I've turned off GridNM, so the CPU load should be minimal, and this time with 0 for the interval.

I'll get a proper graph up later... this was more so that I could see the individual bits rather than the whole thing. Never mind! Err... the thing is still tending downwards :( And it still reports that the wire rate is higher than it should be - although this may be possible; I'm no Ethernet expert. One annoying thing is that even though there is little CPU utilisation, there is still some variation in the results. The pattern seen in the range 2000 to 6000 packets is interesting; maybe it's to do with the way that udpmon calculates the bandwidth? It's certainly periodic.
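While I'm speculating: one reason a "wire" rate can legitimately come out above the user-data rate is that every datagram also carries UDP/IP/Ethernet headers, preamble and inter-frame gap. This is only my guess at the kind of sum involved, not udpmon's actual code:

    # Assumed per-packet overhead on a GigE link: UDP (8) + IPv4 (20)
    # + Ethernet preamble/header/FCS (26) + inter-frame gap (12) bytes.
    OVERHEAD_BYTES = 8 + 20 + 26 + 12

    def rates_mbit(n_pkts, pkt_bytes, elapsed_s):
        """Return (user-data rate, wire rate) in Mbit/s - a sketch, not udpmon's code."""
        user = n_pkts * pkt_bytes * 8 / elapsed_s / 1e6
        wire = n_pkts * (pkt_bytes + OVERHEAD_BYTES) * 8 / elapsed_s / 1e6
        return user, wire

    # Illustrative numbers only: 10,000 x 1400-byte packets in 0.12 s.
    print(rates_mbit(10000, 1400, 0.12))  # ~(933, 977) Mbit/s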

31st July 2002

So today I'm going to run through the entire thing. Same parameters as yesterday (0 wait, 1400-byte packets), back to back between pc55 and pc56 on the GigE card. I'm going to try a slightly higher upper limit this time - the number of packets going from 100 to 20,000. If it doesn't flatten out now, I have no hope in this software at all!

Run 1

Still decreasing... still has that funny periodic pattern... Let's try to go higher in the number of packets sent...

 

Run 2

As the last run showed that it was still tending downwards, I've increased the test up to 40,000 packets. At 1,400 bytes each, we're sending about 5.6×10^7 bytes, which is roughly 56 MB of data.
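Just to sanity-check that volume (it's easy to slip a power of ten):

    total_bytes = 40_000 * 1_400
    print(total_bytes)               # 56,000,000 bytes = 5.6e7
    print(total_bytes / 1e6, "MB")   # 56.0 MB - megabytes, not half a gigabyte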

The results seem very similar to run 1 - with the exception of a few more sudden drops in recv_wire_rate than in run 1. However, there were no reported lost packets in this test either. And then all of a sudden, at 30k packets, the whole thing goes very funny... let's run again to confirm...

Another thing is that the line does not seem to tend towards 1000 Mbit/s...!!!?!?

Run 3

So it happens again....!!! Grrr.....

 
