
Back 2 Back Vanilla TCP (cont).

Test B is a repeat of Test A, but with a feature called Dynamic Right Sizing (DRS) turned off for the tests. The result is that the cwnd values are allowed to grow a lot larger, even though they are no longer indicative of what the receiver is able to receive.

Web100 provides the functionality to turn this feature off by turning web100_{s|r}bufmode on.
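As a sketch of how this might look on a Web100-patched kernel (the exact /proc paths are an assumption based on the variable names above; the Web100 documentation for the installed patch version is authoritative):

```shell
# Illustrative only: force manual buffer mode so that DRS no longer
# clamps cwnd to the receiver's advertised capability. The sysctl paths
# below are assumed from the web100_{s|r}bufmode names and may differ
# between Web100 patch versions.
echo 1 > /proc/sys/net/ipv4/web100_sbufmode
echo 1 > /proc/sys/net/ipv4/web100_rbufmode
```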


Test B: No Dynamic Right Sizing (web100 tuning)

Results xls tsv

This shows that by disabling DRS we actually get a lot more sendstalls; so much so that in order to reduce the number of sendstalls to zero, we must set a txqueuelen of greater than 1200. The initial kink in the graph at about txq 14 is in the same place as before.
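For reference, the transmit queue length is an interface-level setting; on Linux it can be changed with either of the following (eth0 here is an assumption about the interface name used in these tests):

```shell
# Raise the transmit queue to 1200 packets, the point at which
# the number of sendstalls drops to zero in these tests
ifconfig eth0 txqueuelen 1200
# equivalent iproute2 form
ip link set dev eth0 txqueuelen 1200
```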

It is possible that this kink is associated with the way in which the kernel deals with incoming packets. The open question is how to verify this.

Comparing this to the previous graph for the DRS-enabled tests, we see similar results for small txqueuelens (<200). For larger txqueuelens, however, we get a much higher value of cwnd, with about a factor of two greater variation (shown with the standard deviation).
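To be clear about what "variation" means here: it is just the standard deviation over the cwnd samples logged for a given txqueuelen setting. A minimal sketch with made-up sample values:

```python
import statistics

# Hypothetical cwnd samples (bytes) for one txqueuelen setting;
# the values are illustrative, not from the actual test logs
cwnd_samples = [480_000, 520_000, 610_000, 450_000, 590_000]

mean = statistics.mean(cwnd_samples)
stdev = statistics.stdev(cwnd_samples)  # sample standard deviation
print(f"mean cwnd = {mean:.0f} bytes, stdev = {stdev:.0f} bytes")
```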

However, looking at the throughputs, we get exactly the same results.

The number of packets in and out is, unsurprisingly, identical, as we are limited by the interface, not by TCP.

We also get the same increase in the number of dupacks; however, we now note that the start of the plateau is at ~1200, the same value at which the number of sendstalls reaches zero.

This is puzzling, as it implies that we need sendstalls in order to prevent dupacks. On reflection this makes sense: if we reduce the flux of packets leaving our device, then the receiver is put under less pressure, and is more likely to be able to process each packet in time before the next one arrives. If, on the other hand, the receiver is not functioning at the same rate as the sender and we never throttle the sender, then we are bound to get some losses as a result.
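The throttling argument can be illustrated with a toy fluid model (all rates and the queue capacity are made-up parameters): a sender that persistently outruns the receiver eventually overflows any finite queue, while a sender throttled to the receiver's rate loses nothing.

```python
def drops(send_rate, recv_rate, queue_cap, ticks=1000):
    """Toy model: packets lost when a sender outruns a finite receive queue."""
    backlog = 0.0
    lost = 0.0
    for _ in range(ticks):
        backlog += send_rate                 # packets arriving this tick
        backlog -= min(backlog, recv_rate)   # receiver drains what it can
        if backlog > queue_cap:              # overflow is lost
            lost += backlog - queue_cap
            backlog = queue_cap
    return lost

# Unthrottled sender loses steadily; matched to the receiver's rate it does not
print(drops(send_rate=12, recv_rate=10, queue_cap=100))
print(drops(send_rate=10, recv_rate=10, queue_cap=100))
```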

However, even though fewer packets are retransmitted, we still see a constant number of fast retransmits causing those retransmissions. This still points to the idea that there is actually reordering on the network. Strange, because it is literally a piece of wire connecting the two machines. This implies that there must be something in the driver or the kernel which is selectively delaying packets before the ack is sent back.

Of course, if the queues involved are not FIFO, then this will happen.
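The link between reordering and fast retransmit can be shown with a toy cumulative-ACK receiver model: every out-of-order arrival provokes a duplicate ACK, and three dupacks trigger a fast retransmit at the sender even though nothing was actually lost. A minimal sketch:

```python
def count_dupacks(arrivals):
    """Count duplicate ACKs a cumulative-ACK receiver would emit.

    arrivals: segment sequence numbers in arrival order, starting at 0.
    Any arrival that is not the next expected segment causes the
    receiver to re-ACK the last in-order segment, i.e. a dupack.
    """
    expected = 0
    buffered = set()
    dupacks = 0
    for seq in arrivals:
        if seq == expected:
            expected += 1
            while expected in buffered:      # drain segments now in order
                buffered.remove(expected)
                expected += 1
        else:
            buffered.add(seq)
            dupacks += 1
    return dupacks

# One segment delayed by three places: 3 dupacks -> fast retransmit fires
print(count_dupacks([0, 2, 3, 4, 1]))  # 3
```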

Again, we see similar results for the effect of the txqueuelen on the number of instances of slowstart and congAvoid. The trends change at the point where the number of sendstalls reaches zero.

Similarly for the number of CongestionSignals and OtherReductions experienced. While the number of CongestionSignals for small txqueuelens (<200) is about the same as for the DRS-enabled TCP, the curve stretches out a lot longer. Also, the number of OtherReductions is much higher than in the DRS case for large txqueuelens.

Looking lower down in the network stack, the above graph shows the receiver's statistics at the interface level; more specifically, the numbers given when you enter the ifconfig command. As you can see, there are no reported drops at the receiver's interface, implying that the flux of packets from the source is absorbed by the receiver without problems.
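For reference, the same counters that ifconfig reports can be read from /proc/net/dev. A small parser sketch over a sample line (the counter values below are made up):

```python
# /proc/net/dev layout per interface: 8 RX fields (bytes packets errs drop
# fifo frame compressed multicast) followed by 8 TX fields (bytes packets
# errs drop fifo colls carrier compressed)
SAMPLE = "  eth0: 123456789 987654 0 0 0 0 0 0  23456789 456789 12 0 0 0 0 0"

def parse_dev_line(line):
    iface, fields = line.split(":")
    values = [int(v) for v in fields.split()]
    rx = dict(zip(["bytes", "packets", "errs", "drop"], values[0:4]))
    tx = dict(zip(["bytes", "packets", "errs", "drop"], values[8:12]))
    return iface.strip(), rx, tx

iface, rx, tx = parse_dev_line(SAMPLE)
print(iface, "rx drops:", rx["drop"], "tx errs:", tx["errs"])
```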

However, as TCP is a two-way protocol (we also get ACKs coming back to the sender), we need to look at that direction too. One vital difference between the ACKs and the data packets is that the ACKs are relatively small, usually just over the minimum Ethernet frame size (64 bytes). This means that the interface card actually has to work harder, per byte transferred, to process the packets.
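The extra per-packet work is easy to quantify: a minimum-size frame occupies far less time on the wire than a full-size one, so at a given line rate the card must handle many more frames per second. A rough sketch using standard Ethernet overheads (preamble and inter-frame gap):

```python
LINE_RATE_BPS = 1_000_000_000   # gigabit Ethernet
PREAMBLE_IFG = 8 + 12           # preamble + inter-frame gap, bytes per frame

def frames_per_sec(frame_bytes):
    """Maximum frame rate at line rate for a given on-wire frame size."""
    wire = frame_bytes + PREAMBLE_IFG
    return LINE_RATE_BPS / (wire * 8)

full = frames_per_sec(1518)     # 1500-byte MTU + 18-byte Ethernet header/CRC
tiny = frames_per_sec(64)       # minimum Ethernet frame, e.g. a bare ACK
print(f"full-size: {full:,.0f}/s  minimum-size: {tiny:,.0f}/s  "
      f"ratio: {tiny / full:.1f}x")
```

So a stream of minimum-size frames at line rate means roughly 18 times more packets per second for the card and driver to handle than full-size data frames.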

The above graph shows that, in fact, we do get losses on the sender side. Strangely enough, all the errors are on the receiving queue on the sender (the ACKs), and the drops are caused by errors. This could imply the following possibilities:

  1. The network is corrupting small packets. Considering these are back-to-back tests over CAT 5e, unless there is a lot of EM interference, this is unlikely.
  2. The receiver is not able to process and send out the small packets quickly enough. However, the previous graph shows that there were no problems, at least according to ifconfig, with the driver putting the packets onto the ethernet card, and that there are no errors.
  3. The sender is not capable of processing the small packets fast enough.

More investigation is required to conclude on the reason for this loss.

The other outstanding question is where the dupacks come from. We can see from the number of packets errored (on the sender's receive queue) that there appears to be a trend for fewer packets to be errored with a larger txqueuelen. This is strange, as the receiver's txqueuelen remains constant. The dupack graph implies that we would actually see more packets being errored, and hence dropped and therefore not processed by TCP.


Mon, 7 April, 2003 16:59
2001-2003, Yee-Ting Li, email:, Tel: +44 (0) 20 7679 1376, Fax: +44 (0) 20 7679 7145
Room D14, High Energy Particle Physics, Dept. of Physics & Astronomy, UCL, Gower St, London, WC1E 6BT