Udpmon Back-2-Back Results

Purpose

The purpose of this experiment is to quantify the UDP capability of the MB-NG server machines in order to generate traffic (background and foreground) and to enable direct analysis of the Smartbits and Adtech testing modules for WAN and LAN network tests.

Assumptions

It is assumed that the servers are identical, and hence that the results should be symmetric.

Results

The Excel file of results can be found here.

The graph on the left does NOT show the inter-packet interval on the y axis, but the send_duration for the 1000 packets specified for each test. It gives an indication of the requested inter-packet interval versus the actual interval of the packets sent out by udpmon, since the average inter-packet time is simply the send_duration divided by the number of packets.
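
As a quick illustration, the relation can be written out directly; this is just a sketch with made-up numbers (the send_duration below is hypothetical, not a measured value):

```python
# Sketch: recover the average inter-packet time from udpmon's reported send_duration.
# The numbers below are illustrative, not measured values.

n_packets = 1000            # packets per test, as specified above
send_duration_us = 25000.0  # example send_duration in microseconds (hypothetical)

avg_interpkt_us = send_duration_us / n_packets
print(f"average inter-packet time: {avg_interpkt_us:.1f} us")  # -> 25.0 us
```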

The 'upper' part of the graph, which shows a linear relation between the packet size and the send_duration, arises because the ethernet card and system cannot service each packet quickly enough to meet the requested inter-packet time. In that case the system must wait for each packet to finish sending before the next can go out (back to back).

The other relation on the same graph, of the inter-packet time against the send_duration, shows a 'wedge' which reflects the fact that, with an increased requested inter-packet time, the transfer of n packets takes longer. This relation is, as expected, linear.
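
A minimal model of both behaviours (my own sketch, not udpmon's actual code) is that each packet occupies the larger of the requested inter-packet time and its back-to-back wire time, so the send_duration for n packets is roughly n times that maximum:

```python
# Sketch of the expected send_duration for n packets on a 100 Mbit/s link.
# Overheads follow this page's accounting: +20 bytes for IP, +18 bytes for the Ethernet frame.

LINK_BITS_PER_US = 100.0  # 100 Mbit/s = 100 bits per microsecond

def wire_time_us(udp_bytes: int) -> float:
    """Minimum (back-to-back) time to serialise one packet onto the wire."""
    frame_bits = (udp_bytes + 20 + 18) * 8
    return frame_bits / LINK_BITS_PER_US

def send_duration_us(n: int, udp_bytes: int, req_interpkt_us: float) -> float:
    """Each packet takes the larger of the requested wait and its wire time."""
    return n * max(req_interpkt_us, wire_time_us(udp_bytes))

# Small requested interval: the wire time dominates (the 'upper', packet-size-limited line).
print(send_duration_us(1000, 1472, 10))   # ~120,800 us, set by the wire time
# Large requested interval: the 'wedge', linear in the requested inter-packet time.
print(send_duration_us(1000, 1472, 500))  # 500,000 us, set by the requested wait
```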

However, there is a region for small packet sizes (<600 bytes) and small inter-packet times (less than 40 or so) where these rules do not hold: we actually end up with a very small send_duration, even though it should not be physically possible to send at that rate. This is shown on the graph at the top right, where the send_data_rate is much higher than the physical limit of 100 Mbit/s for the same region. In effect, udpmon is reporting that we are sending out at a gigabit rate on a 100 Mbit card!
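
A quick sanity check (my own arithmetic, reusing the same +20/+18 byte overheads as above) shows why such a rate cannot be genuine: even sending back to back, a 100 Mbit/s card caps the achievable UDP data rate well below a gigabit:

```python
# Sanity check: best-case UDP data rate for a small packet on a 100 Mbit/s NIC.
# Uses this page's overheads of +20 bytes (IP) and +18 bytes (Ethernet frame).

LINK_MBIT = 100.0

def max_udp_data_rate_mbit(udp_bytes: int) -> float:
    """Upper bound on the UDP data rate when packets are sent back to back."""
    wire_bits = (udp_bytes + 20 + 18) * 8
    share = (udp_bytes * 8) / wire_bits  # fraction of wire bits that are UDP data
    return LINK_MBIT * share

print(f"{max_udp_data_rate_mbit(600):.1f} Mbit/s")  # ~94.0 Mbit/s, nowhere near 1000
```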

For more information on this 'bug' see here.

The effect of this increased rate is seen at the receiver, as shown below: for the same region we get a major increase in the number of lost/dropped packets (about 80%). Outside this region the loss is negligible.

 

A timeout in udpmon is defined as:

???

The effect on the rates is shown below:

On the left we have the rcv_data_rate as a function of the UDP packet size (so plus 20 bytes per packet for the IP header), and on the right we have the ethernet rate. Looking at the ethernet graph, we can see that there are only certain regions where we can reach 100 Mbit/s. After about 1500 bytes at the IP layer there is a decrease in the throughput, corresponding to fragmentation.
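
For reference, the fragmentation point can be sketched as follows, assuming the standard 1500-byte Ethernet MTU and 20-byte IP headers (standard values, not figures quoted on this page); once the IP packet exceeds the MTU it is split into fragments, each carrying its own IP header:

```python
# Sketch: estimate IP fragmentation for a 1500-byte Ethernet MTU.
# Each fragment carries its own 20-byte IP header, so oversized packets lose efficiency.
import math

MTU = 1500
IP_HDR = 20

def fragments(ip_packet_bytes: int) -> int:
    """Rough number of IP fragments needed to carry one IP packet of the given size."""
    if ip_packet_bytes <= MTU:
        return 1
    payload = ip_packet_bytes - IP_HDR   # data carried by the original packet
    per_fragment = MTU - IP_HDR          # data per fragment after its own header
    return math.ceil(payload / per_fragment)

for size in (1400, 1500, 1600, 3000):
    print(size, "->", fragments(size), "fragment(s)")
# 1400 -> 1, 1500 -> 1, 1600 -> 2, 3000 -> 3
```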

However, the 'bug' reported above does not appear to affect the throughput graphs here.

At the ethernet level (which includes the preamble etc.), we reach the peak of 100 Mbit/s quite quickly; however, this rate depends on two factors: the rate at which we send out the frames (the inter-packet time requested in udpmon) and the size of the UDP packet (plus 20 bytes for IP and a further 18 bytes for the ethernet frame, per packet sent).
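
Putting the two factors together, a rough model (my own sketch, reusing this page's +20/+18 byte accounting and ignoring the preamble and inter-frame gap) is that the ethernet rate equals the per-packet wire bits divided by whichever is larger, the requested inter-packet time or the frame's own wire time:

```python
# Sketch: ethernet-level rate as a function of UDP packet size and inter-packet time.
LINK_BITS_PER_US = 100.0  # 100 Mbit/s = 100 bits per microsecond

def ethernet_rate_mbit(udp_bytes: int, interpkt_us: float) -> float:
    frame_bits = (udp_bytes + 20 + 18) * 8           # page's accounting of IP + Ethernet overhead
    wire_time = frame_bits / LINK_BITS_PER_US        # time the frame occupies the wire
    return frame_bits / max(interpkt_us, wire_time)  # bits per microsecond == Mbit/s

print(f"{ethernet_rate_mbit(0, 3.04):.0f}")    # empty packet back to back: 100 Mbit/s at the 3.04 us floor
print(f"{ethernet_rate_mbit(1000, 100):.1f}")  # 1000-byte packets every 100 us: ~83 Mbit/s
print(f"{ethernet_rate_mbit(1000, 200):.1f}")  # doubling the spacing roughly halves the rate
```

The last two lines also illustrate the inverse relation discussed below: once the requested spacing dominates the wire time, doubling the inter-packet time roughly halves the rate.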

 

The graphs above are from the same data set that produced the 3D graph above right. We can see a general trend of increasing ethernet rate as we increase the packet size, up to the maximum physical limit of the NIC (100 Mbit/s). Decreasing the inter-packet time results in a higher throughput for the same packet size.

The relation of the ethernet rate to the inter-packet time is not as straightforward. In order to transfer an empty packet at the IP level, we would need 20 bytes for the IP header plus 18 bytes for the ethernet frame (header and CRC). At 100 Mbit/s these 38 bytes = 304 bits would take a minimum of 3.04e-6 s, or 3.04 usec, to send out. We would also need some time for the system to process and form the packet, and for the ethernet card to gather the data from the system to put onto the wire.

The curved nature of the graph on the right can be attributed to the fact that throughput is inversely proportional to time: as the inter-packet time increases, the denominator increases, giving a 1/T relationship. The curves move upwards as the packet size increases because we send more data in each packet, increasing the numerator.

 

 

Wed, 23 July, 2003 13:07
© 2001-2003, Yee-Ting Li, email: ytl@hep.ucl.ac.uk, Tel: +44 (0) 20 7679 1376, Fax: +44 (0) 20 7679 7145
Room D14, High Energy Particle Physics, Dept. of Physics & Astronomy, UCL, Gower St, London, WC1E 6BT