
SBM - Subnet Bandwidth Management

QoS assurances are only as good as their weakest link. The QoS "chain" is end-to-end between sender and receiver, which means every router along the route must have support for the QoS technology in use, as we have described with the previous QoS protocols. The QoS "chain" from top-to-bottom is also an important consideration, however, in two aspects:

  • Sender and receiver hosts must enable QoS so that applications can enable it explicitly, or the system can enable it implicitly on their behalf. Each OSI layer from the application down must also support QoS to assure that high-priority send and receive requests receive high-priority treatment from the host's network system.
  • The Local Area Network (LAN) must enable QoS so high-priority frames receive high-priority treatment as they traverse the network media (e.g., host-to-host, host-to-router, and router-to-router). LANs operate at OSI Layer 2, the data-link layer, whereas the QoS technologies described previously have been Layer 3 (DiffServ) and above (RSVP & MPLS).

Some Layer 2 technologies have always been QoS-enabled, such as Asynchronous Transfer Mode (ATM). However, other more common LAN technologies such as Ethernet were not originally designed to be QoS-capable. As a shared broadcast medium or even in its switched form, Ethernet provides a service analogous to standard "best effort" IP Service, in which variable delays can affect real-time applications. However, the [IEEE] has "retro-fitted" Ethernet and other Layer 2 technologies to allow for QoS support by providing protocol mechanisms for traffic differentiation.

The IEEE 802.1p, 802.1Q and 802.1D standards define how Ethernet switches can classify frames in order to expedite delivery of time-critical traffic. The Internet Engineering Task Force [IETF] Integrated Services over Specific Link Layers [ISSLL] Working Group is chartered to define the mapping between upper-layer QoS protocols and services with those of Layer 2 technologies, like Ethernet. Among other things, this has resulted in the development of the "Subnet Bandwidth Manager" (SBM) for shared or switched 802 LANs such as Ethernet (also FDDI, Token Ring, etc.). SBM is a signaling protocol [SBM] that allows communication and coordination between network nodes and switches in the [SBM Framework] and enables mapping to higher-layer QoS protocols [SBM Mapping].

A fundamental requirement in the SBM framework is that all traffic must pass through at least one SBM-enabled switch. As shown in Figure 5, aside from the QoS-enabled application and Layer 2 (e.g., Ethernet), the primary (logical) components of the SBM system are:

  • Bandwidth Allocator (BA): Maintains state about allocation of resources on the subnet and performs admission control according to the resources available and other administrator-defined policy criteria.
  • Requestor Module (RM): Resides in every end-station and not in any switches. The RM maps between Layer 2 priority levels and the higher-layer QoS protocol parameters according to the administrator-defined policy. For example, if used with RSVP it could map based on the type of QoS (Guaranteed or Controlled Load) or specific Tspec, Rspec or Filter-spec values.
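The RM's mapping role can be sketched in a few lines of Python. This is an illustrative toy, not code from the SBM specification: the policy table, function name, and chosen priority values are all assumptions standing in for an administrator-defined policy.

```python
# Hypothetical sketch of a Requestor Module (RM) mapping table: RSVP
# service types are mapped to 802.1p priority values according to an
# administrator-defined policy. Names and values are illustrative only.

RSVP_GUARANTEED = "Guaranteed"
RSVP_CONTROLLED_LOAD = "Controlled Load"

# Admin-defined policy: RSVP service type -> 802.1p priority (0-7)
POLICY = {
    RSVP_GUARANTEED: 6,       # delay sensitive, tight bound
    RSVP_CONTROLLED_LOAD: 4,  # delay sensitive, no bound
}

def map_to_8021p(service_type, default=0):
    """Return the Layer 2 priority for a higher-layer QoS request."""
    return POLICY.get(service_type, default)

print(map_to_8021p(RSVP_GUARANTEED))  # 6
print(map_to_8021p("Best Effort"))    # 0
```

A real RM could key the table on Tspec, Rspec, or Filter-spec values instead of just the service type, as the text notes.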

As illustrated in Figure 5, the location of the BA determines the type of SBM architecture in use: Centralized or Distributed. Whether there is one BA or several per network segment, only one acts as the "Designated SBM" (DSBM) (note that there can be more than one segment per subnet). The DSBM may be statically configured or "elected" from among the other BAs [SBM].

Figure 5: There are two forms of the Subnet Bandwidth Manager (SBM) architecture, in which the Bandwidth Allocator is either centralized or distributed [SBM Framework]

The SBM protocol provides an RM-to-BA or BA-to-BA signaling mechanism for initiating reservations, querying a BA about available resources, and changing or deleting reservations. The SBM protocol is also used between the QoS-enabled application (or its third-party agent) and the RM, but this involves a programming interface (API) rather than the protocol itself, so it simply shares the functional primitives. Although the SBM protocol is designed to be QoS protocol-independent, so that it can work with other QoS protocols such as ST-II, the specifications use RSVP in their examples, as will we. Here is a simple summary of the admission control procedure of the SBM protocol:

  1. DSBM initializes: gets resource limits (statically configured for now)
  2. DSBM Client (any RSVP-capable host or router) looks for the DSBM on the segment attached to each interface (done by monitoring the "AllSBMAddress," the reserved IP Multicast address 224.0.0.17).
  3. When sending a PATH message, a DSBM client sends it to the "DSBMLogicalAddress" (reserved IP Multicast address 224.0.0.16) rather than to the destination RSVP address.
  4. Upon receiving a PATH message, a DSBM establishes PATH state in the switch, stores the Layer2 and Layer3 (L2/L3) addresses from which it came, and puts its own L2/L3 addresses in the message. DSBM then forwards the PATH message to next hop (which may be another DSBM on the next network segment).
  5. When sending an RSVP RESV message, a host sends it to the first hop (as always), which would be the DSBM(s) in this case (taken from the PATH message).
  6. DSBM evaluates the request and if sufficient resources are available, forwards to the next hop (else returns an error).
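The steps above can be sketched as a toy DSBM. The class name, message fields, and the bandwidth figure are illustrative assumptions; only the two reserved multicast addresses come from the text.

```python
# Toy sketch of the DSBM admission-control steps above. Message fields,
# class names and the bandwidth figure are illustrative, not from [SBM].

ALL_SBM_ADDRESS = "224.0.0.17"       # clients discover the DSBM here
DSBM_LOGICAL_ADDRESS = "224.0.0.16"  # PATH messages are sent here

class DSBM:
    def __init__(self, available_bandwidth):
        self.available = available_bandwidth  # step 1: static limit
        self.path_state = {}                  # per-session PATH state

    def on_path(self, session, prev_l2, prev_l3, my_l2, my_l3):
        # Step 4: record the previous hop's L2/L3 addresses, substitute
        # our own, then forward the PATH message toward the next hop.
        self.path_state[session] = (prev_l2, prev_l3)
        return {"session": session, "l2": my_l2, "l3": my_l3}

    def on_resv(self, session, requested_bw):
        # Step 6: admit only if the segment has sufficient resources.
        if session not in self.path_state:
            return "error: no PATH state"
        if requested_bw > self.available:
            return "error: insufficient resources"
        self.available -= requested_bw
        return "forwarded to next hop"

dsbm = DSBM(available_bandwidth=10_000_000)  # 10 Mb/s, illustrative
dsbm.on_path("sess1", "aa:bb", "10.0.0.5", "cc:dd", "10.0.0.1")
print(dsbm.on_resv("sess1", 2_000_000))  # forwarded to next hop
print(dsbm.on_resv("sess2", 1_000_000))  # error: no PATH state
```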

This sketch looks very much like standard RSVP processing in a router; however, we have omitted some significant details for the sake of simplicity. We will not attempt more detail here, but we do want to mention the TCLASS object that either a sender or any DSBM can add to an RSVP PATH or RESV message. It contains a preferred 802.1p priority setting and allows overriding a default setting, although any DSBM may change the value after receiving it. Routers must save the TCLASS in the PATH or RESV state and remove it from the message to avoid forwarding it on the outgoing interface, but they must then put it back into incoming messages.
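The strip-and-restore rule for TCLASS can be illustrated as follows. The dict-based message representation and function names are assumptions for the sketch, not the actual RSVP object encoding.

```python
# Hedged sketch of the TCLASS handling rule described above: a router
# saves the TCLASS object in its PATH/RESV state, strips it before
# forwarding, and re-inserts it into messages arriving later. The
# message representation (a plain dict) is illustrative only.

def forward_message(message, state):
    """Strip TCLASS before forwarding, remembering it in local state."""
    tclass = message.pop("TCLASS", None)
    if tclass is not None:
        state[message["session"]] = tclass
    return message

def receive_message(message, state):
    """Restore a previously saved TCLASS into an incoming message."""
    tclass = state.get(message["session"])
    if tclass is not None and "TCLASS" not in message:
        message["TCLASS"] = tclass
    return message

state = {}
out = forward_message({"session": "s1", "TCLASS": 5}, state)
print("TCLASS" in out)  # False: stripped on the outgoing interface
back = receive_message({"session": "s1"}, state)
print(back["TCLASS"])   # 5: restored into the incoming message
```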

IEEE 802.1p uses a 3-bit value (carried in the 802.1Q header), which can represent eight priority levels. The mappings are changeable and the specified bounds are only targets, but the default service-to-value mappings defined in [SBM Mapping] are:

  • Priority 0: Default, assumed to be best-effort service
  • Priority 1: Reserved, "less-than" best-effort service
  • Priority 2-3: Reserved
  • Priority 4: Delay Sensitive, no bound
  • Priority 5: Delay Sensitive, 100 ms bound
  • Priority 6: Delay Sensitive, 10 ms bound
  • Priority 7: Network Control
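The default mappings above can be written out as a simple lookup over the 3-bit field. The function name and label strings are our own; the priority-to-service assignments follow the defaults just listed.

```python
# The default 802.1p service mappings above as a lookup table. The
# 3-bit priority is carried in the 802.1Q tag; labels paraphrase the
# defaults in [SBM Mapping].

DEFAULT_8021P = {
    0: "Default (best effort)",
    1: "Reserved, less-than best effort",
    2: "Reserved",
    3: "Reserved",
    4: "Delay sensitive, no bound",
    5: "Delay sensitive, 100 ms bound",
    6: "Delay sensitive, 10 ms bound",
    7: "Network control",
}

def user_priority(pcp_bits):
    """Decode a 3-bit priority value from an 802.1Q tag."""
    if not 0 <= pcp_bits <= 7:
        raise ValueError("802.1p priority is a 3-bit field")
    return DEFAULT_8021P[pcp_bits]

print(user_priority(0b101))  # Delay sensitive, 100 ms bound
```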

As with DiffServ, the simplicity of prioritization values belies the complexity that is possible. As we describe next in the QoS Architectures section, the flexibility that mapping provides allows for a wide variety of possibilities capable of supporting a wide range of QoS assurances and granularity.

 

Wed, 23 July, 2003 13:07
© 2001-2003, Yee-Ting Li, email: ytl@hep.ucl.ac.uk, Tel: +44 (0) 20 7679 1376, Fax: +44 (0) 20 7679 7145
Room D14, High Energy Particle Physics, Dept. of Physics & Astronomy, UCL, Gower St, London, WC1E 6BT