
Introduction

Quality of Service is the term for giving data streams (more specifically, IP packets) priority over a network (WAN, LAN, etc.). The driving technology behind Quality of Service is the introduction of Label Switching, which allows the use of high-speed ATM switches over normal IP networks.

Not long ago, IP was used primarily in Unix environments or for connecting to the Internet; other protocols, such as SNA and IPX, were used for other purposes. Now, however, many companies have begun using IP for everything, from sharing information within the company to running voice and other real-time applications across their global enterprise networks.

The rise of IP as a foundation for a universal network raises several issues for both enterprise IT departments and ISPs, not the least of which is how to guarantee that applications will receive the service levels they require to perform adequately across the network. For example, network managers might need a way to define a low level of latency and packet loss to ensure that a large file or a business-critical traffic flow gets to its destination on time. Or, they may need to ensure that a real-time session such as voice or video over IP doesn't look choppy or arrive out of sequence.

The problem with IP is that, like Ethernet, it is a connectionless technology and does not guarantee bandwidth. Specifically, the protocol will not, in itself, differentiate network traffic based on the type of flow to ensure that the proper amount of bandwidth and prioritization level are defined for a particular type of application. By contrast, the cell-based ATM standard incorporates such service requirements in its specifications.

Because IP does not inherently support the preferential treatment of data traffic, it's up to network managers and service providers to make their network components aware of applications and their various performance requirements.

Standard Internet Protocol (IP)-based networks provide "best effort" data delivery by default. Best-effort IP allows the complexity to stay in the end-hosts, so the network can remain relatively simple [e2e]. This scales well, as evidenced by the ability of the Internet to support its phenomenal growth. As more hosts are connected, network service demands eventually exceed capacity, but service is not denied; instead it degrades gracefully. Although the resulting variability in delivery delays (jitter) and packet loss do not adversely affect typical Internet applications such as email, file transfer and Web browsing, other applications cannot adapt to inconsistent service levels. Delivery delays cause problems for applications with real-time requirements, such as those that deliver multimedia, the most demanding of which are two-way applications like telephony.
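For a concrete sense of how jitter is measured, the sketch below follows the interarrival-jitter estimator that RTP defines in RFC 3550 (the function and variable names are our own illustration): a smoothed running average of how much the spacing between consecutive packets changes between sender and receiver.

    # Interarrival jitter estimator in the style of RTP (RFC 3550).
    # Timestamps are in seconds; names are illustrative.
    def update_jitter(jitter, prev_send_t, prev_recv_t, send_t, recv_t):
        # D: how much the spacing between two packets changed in transit.
        d = (recv_t - prev_recv_t) - (send_t - prev_send_t)
        # Smooth with gain 1/16, as RFC 3550 specifies, to damp out noise.
        return jitter + (abs(d) - jitter) / 16.0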

Increasing bandwidth is a necessary first step for accommodating these real-time applications, but it is still not enough to avoid jitter during traffic bursts. Even on a relatively unloaded IP network, delivery delays can vary enough to adversely affect real-time applications. To provide adequate service (some level of quantitative or qualitative determinism), IP services must be supplemented. This requires adding some "smarts" to the net to distinguish traffic with strict timing requirements from traffic that can tolerate delay, jitter and loss. That is what Quality of Service (QoS) protocols are designed to do. QoS does not create bandwidth, but manages it so it is used more effectively to meet the wide range of application requirements. The goal of QoS is to provide some level of predictability and control beyond the current IP "best-effort" service.

A number of QoS protocols have evolved to satisfy the variety of application needs. We describe these protocols individually, then show how they fit together in various architectures with the end-to-end principle in mind. The challenge for these IP QoS technologies is to provide differentiated delivery services for individual flows or aggregates without breaking the Net in the process. Adding "smarts" to the Net and improving on "best effort" service represents a fundamental change to the design that made the Internet such a success. The prospect of such a potentially drastic change makes many of the Internet's architects very nervous.

To avoid these potential problems as QoS protocols are applied to the Net, the end-to-end principle remains the primary focus of QoS architects. As a result, the fundamental principle of "leave complexity at the 'edges' and keep the network 'core' simple" is a central theme among QoS architecture designs. This is not so much a focus for individual QoS protocols as for how they are used together to enable end-to-end QoS. We explore these architectures later in this paper, after a brief overview of each of the key QoS protocols.

 

QoS protocols

There is more than one way to characterize Quality of Service (QoS). Generally speaking, QoS is the ability of a network element (e.g. an application, a host or a router) to provide some level of assurance for consistent network data delivery. Some applications are more stringent about their QoS requirements than others, and for this reason (among others) we have two basic types of QoS available:

  • Resource reservation (integrated services): network resources are apportioned according to an application's QoS request, subject to bandwidth management policy.
  • Prioritization (differentiated services): network traffic is classified and apportioned network resources according to bandwidth management policy criteria. To enable QoS, network elements give preferential treatment to classifications identified as having more demanding requirements (a minimal scheduler sketch follows this list).
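As a rough illustration of the prioritization model (the traffic classes and their ordering below are invented for the example), a network element can be pictured as keeping one queue per classification and always serving the most demanding non-empty class first:

    from collections import deque

    # One queue per traffic classification; the ordering is an assumed policy.
    PRIORITY = ["voice", "business_critical", "best_effort"]
    queues = {klass: deque() for klass in PRIORITY}

    def enqueue(packet, klass):
        queues[klass].append(packet)

    def dequeue():
        """Serve the highest-priority class that has traffic waiting."""
        for klass in PRIORITY:
            if queues[klass]:
                return queues[klass].popleft()
        return None  # all queues are empty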

These types of QoS can be applied to individual application "flows" or to flow aggregates, hence there are two other ways to characterize types of QoS:

  • Per Flow: A "flow" is defined as an individual, uni-directional data stream between two applications (sender and receiver), uniquely identified by a 5-tuple (transport protocol, source address, source port number, destination address, and destination port number).
  • Per Aggregate: An aggregate is simply two or more flows. Typically the flows will have something in common (e.g. any one or more of the 5-tuple parameters, a label or a priority number, or perhaps some authentication information). Both notions are sketched in code below.
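The following minimal sketch (the field and function names are our own, purely illustrative) shows a flow's 5-tuple identity and one way flows might be grouped into an aggregate:

    from collections import namedtuple

    # The 5-tuple that uniquely identifies a uni-directional flow.
    Flow = namedtuple("Flow", ["protocol", "src_addr", "src_port", "dst_addr", "dst_port"])

    def aggregate_by(flows, key):
        """Group flows into aggregates by any shared property."""
        groups = {}
        for f in flows:
            groups.setdefault(key(f), []).append(f)
        return groups

    # Example: two voice flows towards the same gateway form one aggregate.
    flows = [
        Flow("UDP", "10.0.0.1", 16384, "192.0.2.10", 5004),
        Flow("UDP", "10.0.0.2", 16386, "192.0.2.10", 5004),
    ]
    aggregates = aggregate_by(flows, key=lambda f: f.dst_addr)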

Applications, network topology and policy dictate which type of QoS is most appropriate for individual flows or aggregates. To accommodate the need for these different types of QoS, there are a number of different QoS protocols and algorithms:

  • ReSerVation Protocol (RSVP): Provides the signaling to enable network resource reservation (otherwise known as Integrated Services). Although typically used on a per-flow basis, RSVP is also used to reserve resources for aggregates (as we describe in our examination of QoS architectures).
  • Differentiated Services (DiffServ): Provides a coarse and simple way to categorize and prioritize network traffic (flow) aggregates (a minimal marking sketch follows this list).
  • Multi-Protocol Label Switching (MPLS): Provides bandwidth management for aggregates via network routing control according to labels in (encapsulating) packet headers.
  • Subnet Bandwidth Management (SBM): Enables categorization and prioritization at Layer 2 (the data-link layer in the OSI model) on shared and switched IEEE 802 networks.
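As a taste of what DiffServ classification looks like from the sending host, the sketch below marks a socket's traffic by setting the DS field, which occupies the upper six bits of the old IP ToS byte, via the standard IP_TOS socket option on Unix-like systems. The DSCP value and destination are illustrative, and in practice the network may re-mark packets at its ingress.

    import socket

    EF_DSCP = 46  # Expedited Forwarding, commonly used for voice (illustrative choice)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # The DSCP sits in the upper 6 bits of the former ToS byte, so shift left by 2.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_DSCP << 2)
    sock.sendto(b"voice payload", ("192.0.2.10", 5004))  # hypothetical receiver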
QoS   | Net | App | Description
------|-----|-----|------------
most  |  X  |     | Provisioned resources end-to-end (e.g. private, low-traffic network)
      |  X  |  X  | RSVP (Resource reSerVation Protocol) [IntServ Guaranteed] Service (provides feedback to application)
      |  X  |  X  | RSVP [IntServ Controlled Load] Service (provides feedback to application)
      |  X  |     | Multi-Protocol Label Switching [MPLS]
      |  X  |  X  | Differentiated Services [DiffServ] applied at network core ingress, appropriate to the RSVP reservation service level for that flow; prioritization using the Subnet Bandwidth Manager [SBM] on the LAN also fits this category
      |  X  |  X  | DiffServ or SBM applied on a per-flow basis by the source application
      |  X  |     | DiffServ applied at network core ingress
      |  X  |     | Fair queuing applied by network elements (e.g. CFQ, WFQ, RED)
least |     |     | Best effort service

Table 1. Bandwidth management algorithms and protocols, their relative QoS levels, and whether they are activated by network elements (Net), applications (App), or both.

Table 1 compares the QoS protocols in terms of the level of QoS they provide and where the service and control are implemented: in the Application (App) or in the Network (Net). Notice that this table also refers to router queue management algorithms such as Fair Queuing (FQ) and Random Early Detection (RED). Queue management (including the number of queues and their depth, as well as the algorithms used to manage them) is very important to QoS implementations. We refer to them here only to illustrate the full spectrum of QoS capabilities; as they are largely transparent to applications and not explicitly QoS protocols, we will not examine them in detail. For more information see [Queuing].
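Before setting queue management aside, a brief sketch of the RED idea may still be useful (the thresholds and weight below are invented for illustration): as the average queue depth grows between a minimum and a maximum threshold, arriving packets are dropped with increasing probability, nudging senders to back off before the queue overflows.

    import random

    MIN_TH, MAX_TH = 5, 15   # illustrative average-queue thresholds (packets)
    MAX_P = 0.1              # maximum drop probability between the thresholds
    WEIGHT = 0.002           # EWMA weight for the average queue size
    avg_queue = 0.0

    def red_should_drop(current_queue_len):
        """Return True if RED decides to drop the arriving packet."""
        global avg_queue
        # An exponentially weighted moving average smooths out short bursts.
        avg_queue = (1 - WEIGHT) * avg_queue + WEIGHT * current_queue_len
        if avg_queue < MIN_TH:
            return False     # queue is short: always enqueue
        if avg_queue >= MAX_TH:
            return True      # queue is persistently long: always drop
        # In between, drop probability rises linearly from 0 to MAX_P.
        p = MAX_P * (avg_queue - MIN_TH) / (MAX_TH - MIN_TH)
        return random.random() < p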

The QoS protocols we focus on in this paper vary in approach, but they are not mutually exclusive; on the contrary, they complement each other nicely. There is a variety of architectures in which these protocols work together to provide end-to-end QoS across multiple service providers. We will now describe each of these protocols in more detail, covering their essential mechanics and functionality, and follow that with a description of the various architectures in which they can be used together to provide end-to-end QoS.


The QoS Dilemma

Quality of Service (QoS) generally encompasses bandwidth allocation, prioritization, and control over network latency for network applications. There are several ways to ensure QoS, no matter what type of network you're talking about - Ethernet or ATM, IP or IPX. The easiest one is simply to throw bandwidth at the problem until service quality becomes acceptable. This approach might involve upgrading the backbone to a high-speed technology such as Gigabit Ethernet or 622Mbit/sec ATM. If you have fairly light traffic in general, more bandwidth may be all you need to ensure that applications receive the high priority and low latency they require.

However, this simplistic strategy collapses if a network is even moderately busy. In a complex environment - one that has a lot of data packets moving in many paths throughout the network, or that has a mixture of data and real-time applications - you could run into bottlenecks and congestion.

Also, simply adding bandwidth doesn't address the need to distinguish high-priority traffic flows from lower-priority ones. In other words, all traffic is treated the same. In the network realm, such egalitarianism is not good, because network traffic is, by its nature, unpredictable. For instance, on some days, you'll see traffic bursts occurring at 8 a.m., while on other days you'll see them at noon or at the end of the day. These traffic bursts can move around too. One day, your Internet gateway or one of your switches is the bottleneck; another day, it's your intra-campus video conferences or heavy voice traffic causing the congestion.

As you can see, additional bandwidth can solve some of your short-term problems, but it's not a viable long-term solution, particularly if you already have enough bandwidth to accommodate all but the most highly sensitive network applications.

So how can you flag special traffic as high priority on an IP network? Options like Resource Reservation Protocol (RSVP), multiple flows, and tagging fields can help you give sensitive applications the resources they need.
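To make the RSVP option concrete, here is a deliberately toy sketch (the router names, capacities and flow rate are all invented): in RSVP the sender's PATH message records the route downstream, and the receiver's RESV message retraces it hop by hop, asking each router to admit and set aside resources for the flow.

    class Router:
        """One hop on the path recorded by the PATH message."""
        def __init__(self, name, capacity_kbps):
            self.name = name
            self.available = capacity_kbps

        def reserve(self, kbps):
            # Admission control: refuse if the reservation would over-commit the link.
            if self.available < kbps:
                return False
            self.available -= kbps
            return True

    def rsvp_reserve(path, kbps):
        """RESV travels upstream (receiver towards sender), reserving at each hop."""
        for router in reversed(path):
            if not router.reserve(kbps):
                return False  # the end-to-end reservation fails if any hop refuses
        return True

    path = [Router("R1", 1000), Router("R2", 500)]  # as recorded by the PATH message
    ok = rsvp_reserve(path, kbps=64)                # e.g. a 64 kbit/s voice flow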

 


Not Out of Reach

Although many of the QoS standards and protocols mentioned here are still in their infancy and not yet widely deployed, the growing need for QoS in IP-based networks will soon drive them to the top of any IT manager's or service provider's to-do list.

In corporate intranets or extranets, where all routers are typically part of a particular enterprise and therefore subject to the same policies, methods such as RSVP and policy-based networking shouldn't be difficult to use. The public Internet is another story, because you have no idea which routers are out there and whether or not they will honor QoS requests.

The IETF hopes to change that with IPv6, the next version of IP, which includes inherent provisions for QoS in the form of the Traffic Class and Flow Label header fields. For instance, IPv6 will allow applications to request different levels of service and to carry those requests with the traffic even when it crosses the WAN.

This inherent QoS will be a big boost for real-time applications such as voice and video. Currently, the only viable way you can ensure a specific QoS without adding support for other protocols is through ATM's User-Network Interface (UNI) signaling and Private Network-to-Network Interface (PNNI) routing mechanisms. UNI allows the sending and receiving stations and the network to work together to ensure that a particular traffic flow gets the QoS it needs. PNNI selects the most appropriate route through the network for the traffic.


Contents

Background  Some background, RFCs etc.
Label Switching  Label switching is the broad term for forwarding packets across a network according to labels. Instead of simply being rerouted by switches and/or routers, each packet can be analysed and given a priority based on its source and destination.
DiffServ
MPLS
RSVP
QoS and Multicast
Policies

 

 
