Network Monitoring

This page will contain information on experiments and procedures to determine network state. Initial results will focus on back-to-back tests to understand network dynamics. This will then move on to controlled lab tests (MB-NG) and finally full-scale WAN tests between selected sites throughout the world.

I may also look at host factors that can affect the rate of data transport, particularly hard disk performance, CPU load and requirements, and operating systems.

As the number of internet users rises and Grid Computing is successfully deployed, internet traffic volume is likely to grow exponentially. It is therefore essential that a set of network performance metrics and measurements be introduced, for both users and network service providers at all levels, to provide an accurate and common understanding of the performance and reliability of the connections between nodes and of how each component of an internet path affects that performance.

Without this understanding, it would be difficult to define how QoS policies should be implemented - and certainly impossible to police and provision them. By standardising and implementing quantitative network performance metrics, intelligent networking decisions and ‘smart’ Grid applications can be developed. These could include:

  • Benchmarking: A user can determine whether network equipment or a service is really delivering the claimed bandwidth and/or latency. This is especially important in QoS-provisioned networks.
  • Policing: With a quantitative definition of performance, and of the bounds allowed by the service provider, one may check whether traffic from a domain is within the bounds of the SLA.
  • Cache selection: Given several sites with replicated data, a client could use bandwidth measurements to select the cache that would give it the best performance when retrieving the data.
  • Protocol selection: By providing a measure of how well different transport protocols perform under certain conditions, a user (either an end user or middleware) could select the one that best suits their needs.
  • Protocol tuning: By adapting a transport protocol within its design parameters to provide performance suited to a particular application, good performance can be obtained from the network quickly. Different degrees of tuning can easily be compared and developed.
  • Protocol development: By providing a framework in which metrics are universal and well defined, one may quantify how much better or worse a new transport protocol performs compared to existing ones.
  • Application-level network adaptation: An application could transform its data based on current network conditions; for example, real-time video applications reduce their frame rates when bandwidth drops. Understanding how the network performs is paramount in this case.
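The cache-selection idea above can be sketched in a few lines: combine a measured round-trip time and bandwidth estimate for each replica into a predicted transfer time and pick the minimum. The transfer-time model, site names, and figures below are illustrative assumptions, not measurements from any real deployment:

```python
def estimated_fetch_time(size_bytes, rtt_s, bandwidth_bps):
    """Crude model: one round trip to issue the request,
    then the payload streams at the measured bandwidth."""
    return rtt_s + (size_bytes * 8) / bandwidth_bps

def select_cache(size_bytes, caches):
    """caches maps a site name to (rtt_s, bandwidth_bps).
    Returns the site with the lowest predicted fetch time."""
    return min(caches, key=lambda c: estimated_fetch_time(size_bytes, *caches[c]))

# Hypothetical measurements: nearby slow cache vs distant fast cache.
caches = {"site_a": (0.01, 100e6),   # 10 ms RTT, 100 Mbit/s
          "site_b": (0.02, 1e9)}     # 20 ms RTT, 1 Gbit/s
```

For a 1 GB file the bandwidth term dominates and the fast distant cache wins; for a tiny file the latency term dominates and the nearby cache wins, which is why both measurements matter.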

There have been several research efforts on network performance measurement and analysis, of which the IETF IPPM working group is a major contributor. In this section, we outline some of the formal metrics that have been devised or are thought to be useful in determining network performance.

 

In order to obtain information about the network, certain tests may need to be performed. These tests can be broadly classified into two categories:

  • Active: Where test data is sent through the network in order to discover the properties of the end-to-end connection.
  • Passive: Where useful data, such as a required file, is transferred across the network and the resultant transfer properties are used as a benchmark.

Most monitoring tools are active; that is, we inject data to test out the network. This could be simple pings, or a transfer of data that isn't of any real use to anyone except the person conducting the tests (and their audience).
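A minimal active probe can be written in a few lines: timing TCP connection establishment gives a rough RTT estimate, much like a ping but without needing the raw sockets that ICMP requires. This is an illustrative sketch, not one of the tools surveyed here:

```python
import socket
import time

def tcp_connect_rtt(host, port, samples=5, timeout=2.0):
    """Active probe: time TCP three-way handshakes to a host.
    Returns a list of per-sample RTT estimates in seconds."""
    rtts = []
    for _ in range(samples):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        t0 = time.perf_counter()
        try:
            s.connect((host, port))
            rtts.append(time.perf_counter() - t0)
        finally:
            s.close()
    return rtts
```

Note that handshake time includes kernel scheduling and listen-queue effects at the far end, so it slightly overestimates the pure network RTT; repeated samples and taking the minimum reduces the noise.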

 

Contents

 
Item Description
GridPP Testing A series of tests using some of the tools mentioned below across GridPP sites. Document outlining techniques and goals here. Webpage outlining activity and results here.
Tools Useful tools and analysis techniques to determine link state, functionality and capacity. Here.


Documents

"Measuring Bottleneck Link Speed in Packet-Switched Networks", Robert L. Carter and Mark E. Crovella, TR-96-006, Boston University Computer Science Department, March 15, 1996.
"Modelling TCP Throughput: A Simple Model and its Empirical Validation", J. Padhye, V. Firoiu, D. Towsley, J. Kurose, Proceedings of SIGCOMM '98.
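The simplified steady-state result from the Padhye et al. paper above is easy to apply directly. A sketch of that well-known approximation, where throughput is limited by the receiver window or by loss rate p, round-trip time, and retransmission timeout (the default parameter values here are illustrative assumptions):

```python
from math import sqrt

def padhye_throughput(mss, rtt, p, t0=1.0, b=2, wmax=float("inf")):
    """Approximate steady-state TCP throughput in bytes/s, after
    Padhye et al. (SIGCOMM '98): mss in bytes, rtt and retransmission
    timeout t0 in seconds, loss rate p, b packets acked per ACK,
    receiver window cap wmax in packets."""
    if p <= 0:
        return wmax * mss / rtt  # lossless: purely window-limited
    denom = (rtt * sqrt(2 * b * p / 3)
             + t0 * min(1, 3 * sqrt(3 * b * p / 8)) * p * (1 + 32 * p ** 2))
    return min(wmax / rtt, 1 / denom) * mss
```

Plugging in a measured loss rate and RTT gives a quick sanity check on whether a transfer is TCP-limited or host-limited, which ties in with the host-performance questions raised earlier on this page.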


Links

Enabling High Performance Data Transfers on Hosts
Structure of Management Information for Version 2 of the Simple Network Management Protocol (SNMPv2), RFC 1902
Von Welch's site - links of monitoring tools


Research Groups

Optimizing bulk transfers over high latency/bandwidth nets @ ORNL.

 

 

Wed, 23 July, 2003 13:07
2001-2003, Yee-Ting Li, email: ytl@hep.ucl.ac.uk, Tel: +44 (0) 20 7679 1376, Fax: +44 (0) 20 7679 7145
Room D14, High Energy Particle Physics, Dept. of Physics & Astronomy, UCL, Gower St, London, WC1E 6BT