With the advent of fibre optics and satellite communication, link speeds between hosts have grown from 9.6 kbit/s to over 1 Gbit/s in the last couple of decades. Yet TCP - the transport protocol that describes how computers (and even satellites) should transfer data over a network - has changed only to accommodate new ways of acknowledging data. The question is whether the algorithms and the software that drive the internet are capable of utilising this mammoth advance in hardware.
The performance of network connections can be characterised by two main metrics: bandwidth and delay. The more bandwidth we have, the higher the end-to-end throughput and the better the quality of service for applications. Low delay is important in order to guarantee fast response from the network, and on today's higher-speed networks, extra latency can in fact reduce the utilisation of a TCP stream.
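The effect of latency on utilisation can be made concrete: a single TCP stream can carry at most one window of data per round trip, so its throughput is bounded by window size divided by RTT. The following sketch (function names and figures are ours, purely illustrative) shows how a default-sized window starves a fast, long-delay path.

```python
# Illustrative sketch: a TCP stream's throughput is bounded by
# window_size / RTT, so added latency directly reduces how much of a
# fast link one stream can fill.

def max_tcp_throughput_bps(window_bytes, rtt_seconds):
    """Upper bound on single-stream TCP throughput in bits per second."""
    return window_bytes * 8 / rtt_seconds

def utilisation(window_bytes, rtt_seconds, link_bps):
    """Fraction of the link one stream can fill, capped at 1.0."""
    return min(1.0, max_tcp_throughput_bps(window_bytes, rtt_seconds) / link_bps)

# A traditional 64 KiB window over a 100 ms path fills only a tiny
# fraction of a 1 Gbit/s link.
u = utilisation(64 * 1024, 0.1, 1e9)
print(f"utilisation: {u:.1%}")  # prints "utilisation: 0.5%"
```

The same arithmetic, run in reverse, gives the window needed to fill the pipe (the bandwidth-delay product): roughly 12.5 MB for 1 Gbit/s at 100 ms RTT.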
With the distributed nature of the internet, many users share network resources; as a result, the state of the internet can never be predicted, and an end user can never expect predictable performance. It has therefore become vital to monitor the state of the network in order to estimate the need for extra provisioning and to aid network planning.
Recently, the issue of bandwidth monitoring has become of major importance. Users need to check whether they get the throughput that they expect or pay for, and whether the network 'clouds' that they use are sufficiently provisioned. Network managers also need bandwidth monitoring tools in order to plan their capacity upgrades, and to detect congested or underutilized links.
Two bandwidth metrics that are commonly associated with a path are the capacity and the available bandwidth. The capacity is the maximum throughput that the path can provide to an application when there is no competing traffic load (cross traffic). The available bandwidth, on the other hand, is the maximum throughput that the path can provide to an application, given the path's current cross traffic load. Measuring the capacity is crucial for 'debugging', calibrating, and managing a path. Measuring the available bandwidth, on the other hand, is of great importance for predicting the end-to-end performance of applications, for dynamic path selection and traffic engineering, and for selecting between a number of differentiated classes of service.
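The relation between the two metrics can be sketched directly from the definitions above: available bandwidth is what remains of a link's capacity after cross traffic, and a path is limited by its tightest link. This small example (names are illustrative, not from any measurement tool) shows the arithmetic.

```python
# Hedged sketch of the two path metrics: capacity (no cross traffic)
# and available bandwidth (capacity minus current cross-traffic load).

def available_bandwidth(capacity_bps, cross_traffic_bps):
    """Available bandwidth on one link, never below zero."""
    return max(0.0, capacity_bps - cross_traffic_bps)

def path_available_bandwidth(links):
    """links: list of (capacity_bps, cross_traffic_bps) per hop.
    The path value is set by the tightest (least available) link."""
    return min(available_bandwidth(c, x) for c, x in links)

# A 1 Gbit/s hop carrying 600 Mbit/s of cross traffic leaves 400 Mbit/s;
# the faster 10 Gbit/s hop does not help the end-to-end figure.
path = [(1e9, 600e6), (10e9, 2e9)]
print(path_available_bandwidth(path) / 1e6, "Mbit/s")  # prints "400.0 Mbit/s"
```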
However, TCP has been around since the 1980s, when access speeds were a measly 9.6 kbit/s. We are now looking at 1 Gbit/s as standard as the technology becomes cheaper and, more importantly, more widespread; to date, 10 Gbit/s ethernet has been standardised. Unlike transport protocols such as UDP, where raw throughput is limited only by how many packets we can shove out of our interface card (which can still be difficult sometimes!), TCP is based on a complex interaction of algorithms that both guarantee delivery of all data and enable fair sharing of the network. This sharing is possible through the use of congestion control, which under certain circumstances takes on a rather conservative fallback mechanism to prevent congestion collapse.
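The core of this sharing behaviour is additive-increase/multiplicative-decrease (AIMD): grow the congestion window by one segment per round trip, and halve it when loss signals congestion. The toy below is a deliberate simplification (it ignores slow start, timeouts, and real segment accounting), but it shows why the conservative halving hurts at high speeds: recovering a large window one segment per RTT takes a long time.

```python
# Toy sketch of TCP's AIMD congestion control, in units of segments.
# Real TCP adds slow start, fast retransmit, etc.; this is the skeleton only.

def aimd_step(cwnd, loss):
    """One round trip: halve the window on loss, else add one segment."""
    if loss:
        return max(1.0, cwnd / 2)  # multiplicative decrease on congestion
    return cwnd + 1.0              # additive increase otherwise

cwnd = 10.0
for _ in range(5):                 # five loss-free round trips
    cwnd = aimd_step(cwnd, loss=False)
print(cwnd)                        # prints 15.0
cwnd = aimd_step(cwnd, loss=True)  # a single loss halves the window
print(cwnd)                        # prints 7.5
```

On a gigabit path the window may be thousands of segments, so one loss event costs thousands of round trips to recover - the motivation for the high-speed TCP variants this section goes on to discuss.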
As these congestion control mechanisms have been around since the early paper by Van Jacobson, little has changed in the way of improving TCP throughput, especially at high speeds. Much of this section focuses on new ideas and implementations for enabling TCP communication at very high speeds.
Congestion Avoidance and Control (1988) Van Jacobson, Michael J. Karels
Rate-Halving Algorithm for TCP Congestion Control
Official Networking Documents (RFC) also here
© 2001-2003, Yee-Ting Li, email: email@example.com,
Tel: +44 (0) 20 7679 1376, Fax: +44 (0) 20 7679 7145
Room D14, High Energy Particle Physics, Dept. of Physics & Astronomy, UCL, Gower St, London, WC1E 6BT