Across the MB-NG network

The same tests as defined here for the back-to-back ping tests will be repeated, but over the MB-NG WAN.

 

Results

Experiment 0: number of pings

[root@pc58 ping]# ./do_number.pl -d 195.194.15.10 -s 16 -c 2 -i 1 -n 5..250:5 -o summary.log -f number-mbng1-gig1

log file tsv zip

 

Experiment 1a: Packet size vs. latency mbng1->gig1 - 11th Feb 2003 @ 15:50

[root@pc58 ping]# ./do_packetsize.pl -d 195.194.15.10 -c 2 -i 1 -n 100 -s 16..3500:10 -f packetsize -o summary.log

log file tsv zip

There is a nice linear increase for packet sizes between 0 and 1500 bytes of IP. There is also the kink seen on the back-to-back (b2b) tests, and a subsequent step as the Ethernet preamble and the minimum Ethernet frame size introduce a constant extra latency once the ping has to be fragmented into two frames. Similarly, a third Ethernet frame is needed once the packet size goes above 3000 bytes of IP (2960 bytes of IP payload).
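As a rough check on where these steps should sit, the frame count can be worked out from the 1500-byte MTU (1480 bytes of IP payload per fragment once the 20-byte IP header is taken off). A minimal Python sketch, assuming the -s argument of do_packetsize.pl is the ICMP data size, as with the standard ping -s:

import math

MTU = 1500                      # bytes of IP packet per Ethernet frame
IP_HDR = 20
ICMP_HDR = 8
FRAG_PAYLOAD = MTU - IP_HDR     # 1480 bytes of IP payload per fragment

def frames_needed(icmp_data):
    # ICMP header + data together form the IP payload that gets fragmented
    return math.ceil((ICMP_HDR + icmp_data) / FRAG_PAYLOAD)

for size in (1472, 1473, 2952, 2953):
    print(size, "bytes of ICMP data ->", frames_needed(size), "frame(s)")

This reproduces the observed breakpoints: up to 2960 bytes of IP payload fits in two frames, and anything beyond needs a third.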

However, there are subtle differences between this and the b2b tests:

  • The minimum latency is about 5.92 ms. From the b2b tests, the latency imposed by the end systems (kernel, driver, card; the wire time is assumed to be negligible) is about 0.03 ms. This leaves about 5.89 ms attributable to the network itself.
  • The initial slope for packet sizes < 1500 bytes on the WAN is about (6.12 - 5.92) / 1500 = 1.33e-4 ms/byte; on the b2b tests it is (0.06 - 0.03) / 1500 ~ 2e-5 ms/byte. What does this correspond to - a time-of-flight measurement? (See the sketch after this list.)
  • The slope of the second linear increase, for packet sizes > ~2000 bytes and < 3000 bytes, is about
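For reference, the slope arithmetic from the points above, together with a back-of-the-envelope comparison against the nominal serialisation cost of gigabit Ethernet (a hedged sanity check, not a measurement):

# Slope arithmetic from the points above (values read off the graphs, in ms and bytes).
wan_slope = (6.12 - 5.92) / 1500       # ~1.33e-4 ms/byte over the MB-NG WAN
b2b_slope = (0.06 - 0.03) / 1500       # ~2e-5 ms/byte back-to-back

# For comparison, serialising one byte onto gigabit Ethernet takes 8e-6 ms,
# so roughly 1.6e-5 ms/byte over a round trip - close to the b2b slope.
gige_rtt_serialisation = 2 * 8 / 1e9 * 1e3

print("WAN slope:  %.2e ms/byte" % wan_slope)
print("b2b slope:  %.2e ms/byte" % b2b_slope)
print("GigE (rtt): %.2e ms/byte" % gige_rtt_serialisation)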

 

Experiment 1b: Packet size vs. latency mbng3->gig6 - 17th Feb 2003 @ 18:45

[root@mbng2 ping]# ./do_packetsize.pl -d 195.194.15.18 -c 2 -i 1 -n 100 -s 16..3500:10 -f packetsize -o summary.log

log file: tsv zip

 

Experiment 1c: Packet size vs. latency mbng3->gig6 - 17th Feb 2003 @ 11:30

[root@mbng3 ping]# ./do_packetsize.pl -d 195.194.15.34 -c 2 -i 1 -n 100 -s 16..3500:10 -f packetsize -o summary.log

log file: tsv zip

Hmm... what is this? There appears to be a periodic structure in the average (and maximum) latency for packet sizes < 1500 bytes. For packet sizes > 1500 bytes, there is a problem with the variation of the minimum latency.
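One way to make the periodic structure easier to see would be to plot the spread between the average and the minimum RTT against packet size. A sketch, assuming the summary can be exported as a tsv with columns packet size, min, avg, max (the actual column layout may well differ):

import matplotlib.pyplot as plt

sizes, spreads = [], []
with open("summary.tsv") as f:                 # hypothetical filename
    for line in f:
        cols = line.split()
        if len(cols) < 4:
            continue
        size, rtt_min, rtt_avg = float(cols[0]), float(cols[1]), float(cols[2])
        sizes.append(size)
        spreads.append(rtt_avg - rtt_min)      # spread of the average above the floor

plt.plot(sizes, spreads, ".")
plt.xlabel("packet size (bytes)")
plt.ylabel("avg - min rtt (ms)")
plt.savefig("spread.png")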

Experiment 2: Conducted 19th Feb 2003 @ ~14:00

log: tsv, zip

log: tsv, zip

log: tsv, zip

It seems like the structure problem is still there... and there seems to be an increase in latency of about 1 ms.

Systematic analysis of the route - 19th Feb 2003, from ~14:30

In order to discover where along the path the problem arises, I will ping each hop (both ingress and egress interfaces) from mbng1, working along the route to the end of the link.
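Each sweep below was started by hand with do_packetsize.pl; a loop of roughly the following shape (hop addresses taken from the experiments below, flags as used above, output file names purely illustrative) would do the same job:

import subprocess

# Ingress/egress addresses along the route from mbng1 (146.97.43.98 is left out
# as it turned out not to be pingable - see below).
hops = [
    ("hop1-ingress", "195.194.10.14"),
    ("hop1-egress",  "146.97.43.66"),
    ("hop2-ingress", "146.97.43.65"),
    ("hop3-egress",  "146.97.43.97"),
    ("hop4-egress",  "195.194.15.9"),
]

for name, addr in hops:
    subprocess.run(["./do_packetsize.pl", "-d", addr, "-c", "2", "-i", "1",
                    "-n", "100", "-s", "16..3500:10",
                    "-f", "packetsize-" + name, "-o", "summary.log"],
                   check=True)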

Experiment 1: From mbng1 to 195.194.10.14 (first hop - ingress) @ 14:23

logs zip

This graph actually makes sense! Note that the x axis indicates the IP payload size (+20 bytes for the IP header).

  1. From 0 to 1500: the standard case of one packet in one Ethernet frame, in which the rtt increases linearly with the requested packet size.
  2. From 1500 to 3000: the same thing, but this time the packet has to be fragmented into two frames, as the MTU of the network is 1500 bytes (1518-byte Ethernet frames). The extra delay introduced after 1500 is due to the overhead of the Ethernet preamble etc. Notice that the line is not as steep as the one from 0 to 1500; this can be accounted for by the kernel having to fragment and reassemble the packet in the host and the router.
  3. Greater than 3000: well, more than two frames. The slope of this line looks similar to the previous one. There is also less delay introduced for the third frame than for the second (the step is smaller) - WHY?!?!!?

Experiment 2: From mbng1 to 146.97.43.66 (first hop - egress) @ 14:33

logs zip

Notice that the rtt is actually quite high for large packet sizes (larger than that of the end-to-end path). This is most likely because the router is configured to give pings lower priority than normal traffic. [CAN SOMEONE TELL ME HOW TO CHECK]

Experiment 3: From mbng1 to 146.97.43.65 (second hop - ingress) @ 14:44

log zip

Same again, this time going into the GSR at ULCC.

Experiment 4: From mbng1 to 146.97.43.97 (third hop - egress) @ 14:55

log zip

There is a very strange structure in the graph; it appears to start shortly after 1000 bytes. The maximum remains relatively constant (remember, 100 packets per point!), while the minima seem okay.

There is the usual cut-off at 1500 - but why, given this hop is POS? And there doesn't seem to be any difference above this at 3000 for the minima. Is the latter due to encapsulation at the POS level?

Experiment 5: From mbng1 to 146.97.43.98 (fourth hop - ingress) @ 15:00

Not pingable

Experiment 6: From mbng1 to 195.194.15.9 (fourth hop - egress) @ 15:00

log zip

There is a little structure for small packet sizes, of the same nature as that seen in the end-to-end tests.

 

Router Policies - no Policies

The routers were checked to ensure that no QoS policies were installed (there was one, at the UCL boundary, and it was removed). An end-to-end test was then run again....

log zip

There's proof for you: when the policies are removed from the UCL boundary router, everything is fine! In fact, the graph is almost perfect!

Summary

The funny wavy averages are due to the presence of a QoS policy on the UCL boundary router. With it turned off, the pings become almost perfect. QoS clearly affects the ping times on Cisco routers! Just how much, we need to quantify later. For now, it seems that while the minimum isn't too bad, the average can fluctuate quite a lot, and with packets greater than the MTU the effect is even worse.
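As a first stab at that quantification, the avg - min spread of two summary files could be compared directly, one taken with the policy in place and one without. Same caveat as before about the assumed tsv column order (size, min, avg, max); the file names below are only placeholders:

def mean_spread(path):
    # average of (avg - min) rtt over all packet sizes in a summary tsv
    spreads = []
    with open(path) as f:
        for line in f:
            cols = line.split()
            if len(cols) < 4:
                continue
            spreads.append(float(cols[2]) - float(cols[1]))
    return sum(spreads) / len(spreads)

for label, path in [("with policy", "summary-policy.tsv"),
                    ("no policy",   "summary-nopolicy.tsv")]:
    print("%-12s %.3f ms" % (label, mean_spread(path)))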

 

Repeating Tests - 20th Feb 2003 ~Midnight

Experiment 1: mbng1->gig1

log tsv zip

 

Experiment 2: mbng2->gig5

log tsv zip

log tsv zip

Funny dip just before 2000 bytes...

 

Experiment 3: mbng3->gig6

log tsv zip

 

 

 

 

 

 
