Network Connections

Establishing VCCs · Services Provided By a LightStream Network · Types of Service for VCCs · Behind the Scenes

In a LightStream enterprise ATM switch, all traffic that passes over the network is connection-oriented. This means that connections must be established before any traffic (ATM cells) can be transmitted or received. This chapter describes connections in a LightStream network.

Establishing VCCs

LightStream supports two ways to establish VCCs:

Provisioning

Provisioning is the explicit creation of a VCC in which a user specifies its endpoints and other attributes in a configuration database. Provisioned VCCs are called permanent virtual circuits (PVCs).

When you specify the endpoints of a connection, the LightStream network automatically sets up a pair of VCCs to provide bidirectional communication between the two endpoints. You can configure the bandwidth separately for each direction of these VCCs. In this document, the term PVC refers to the provisioned VCCs used to provide bidirectional communication.

Implicitly Establishing VCCs

Implicit establishment of VCCs occurs dynamically when a module recognizes the need for a new connection. For example, when a LAN port on an Ethernet interface module does not recognize an incoming packet as belonging to an existing VCC, the system may create a new VCC and route the data across it.

Services Provided by a LightStream Network

LightStream provides several methods of connecting external devices to the LightStream network and passing traffic through the network.

Frame Forwarding

The frame forwarding service lets you replace direct connections between devices that support HDLC and SDLC with a connection through the LightStream network; this allows you to connect older devices that do not support frame relay, ATM UNI, or LAN interfaces. For example, you can use the frame forwarding service to connect X.25 packet switching nodes or SNA devices through the LightStream network.

Frame forwarding PVCs provide a "virtual wire" between two ports on the edge of the LightStream network. All traffic that enters the network on a particular frame forwarding port is sent through the network to the single port at the other end of the virtual wire, so every frame received on that port has the same destination on the other side of the network.

A frame forwarding PVC is defined by two endpoints (frame forwarding ports) on the edges of the network. Figure 3-1 shows two frame forwarding PVCs: PVC 1 and PVC 2. The endpoints of PVC 1 are port 1 on N1 and port 3 on N3. The endpoints of PVC 2 are port 2 on N1 and port 6 on N4. There may be any number of LightStream switches between the endpoints. The LightStream network selects the best route between the two endpoints and sends the ATM cells through that route.


Figure 3-1: A LightStream network containing two frame forwarding PVCs. HDLC or SDLC frames enter the LightStream network on a frame forwarding interface, are passed through the network, and are reassembled into frames.

Frame Relay

LightStream supplies a frame relay DCE or NNI interface to which you can connect routers, packet switches, and other devices that have frame relay DTE interfaces.

Using the frame relay service, the LightStream network can accept traffic at a single port and send that traffic to multiple destinations. This is in contrast to the frame forwarding service, where all traffic received on a particular port is sent to one destination.

A frame relay PVC is defined by two endpoints (frame relay ports) on the edges of the network and the data link connection identifiers (DLCIs) associated with the source and destination ports. The LightStream network uses the DLCI associated with each frame of the traffic to determine its PVC. LightStream then segments each frame into cells and sends them to their destination.
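The ingress behavior can be sketched in a few lines. This is a hypothetical illustration, not the LightStream implementation: the DLCI carried in each frame selects the PVC, and the frame is segmented into fixed-size cell payloads (48 bytes of payload per ATM cell). The table contents are assumed example values.

```python
CELL_PAYLOAD = 48  # bytes of payload per ATM cell

# Assumed example mapping from incoming DLCI to a PVC identifier.
dlci_to_pvc = {16: "PVC-1", 17: "PVC-2", 18: "PVC-3"}

def ingress_frame(dlci, frame_bytes):
    """Look up the PVC for a frame's DLCI and segment the frame into cells."""
    pvc = dlci_to_pvc[dlci]                       # the DLCI determines the PVC
    cells = [frame_bytes[i:i + CELL_PAYLOAD]      # split into 48-byte payloads
             for i in range(0, len(frame_bytes), CELL_PAYLOAD)]
    return pvc, cells

pvc, cells = ingress_frame(16, b"x" * 100)
# a 100-byte frame becomes three cells (48 + 48 + 4 bytes), all on PVC-1
```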

Figure 3-2 shows three frame relay PVCs. As shown, there can be more than one frame relay PVC between the same LightStream switches.


Figure 3-2: A LightStream network containing three frame relay PVCs.

Figure 3-3 and Figure 3-4 show how frames with multiple destinations are received on one port and passed through the LightStream network to their correct destinations.


Figure 3-3: Traffic from multiple sources enters the LightStream network through one frame relay DCE port. LightStream switch N1 examines the DLCI of each frame, determines the PVC, and segments the frame into cells.

The LightStream switch at which the traffic enters looks at each frame's DLCI and determines the PVC on which the traffic should be passed. The frame is then segmented into cells. Each cell is passed through the network on the selected PVC. When the cells reach the final LightStream switch in the PVC, they are reassembled into a frame and passed out of the LightStream network on the correct destination port and DLCI.


Figure 3-4:
The cells have passed through the network on the correct PVCs and are reassembled into frames. The frames are then sent from a LightStream switch with the new (destination) DLCI, out the appropriate port.

ATM UNI

LightStream's ATM user network interface (UNI) service supplies an ATM interface that allows non-LightStream ATM networks or other ATM-capable devices to use the LightStream backbone network. LightStream's ATM UNI interface conforms to the structure and field encoding convention defined in T1.627-1993 by the American National Standards Institute. Also, ATM UNI service is based on the ATM Forum UNI specification, version 3.0, dated September 10, 1993.

ATM UNI is managed using structures in the standard management information base (MIB) and the LightStream enterprise-specific MIB.

An ATM UNI PVC is defined by two endpoints (ATM ports) on the edges of the network and the virtual channel identifiers (VCIs) associated with the particular PVC that runs between the source port and the destination port. Figure 3-5 shows two ATM UNI PVCs: PVC 1 and PVC 2.


Figure 3-5: A LightStream network containing two ATM UNI PVCs.

Since the incoming traffic is already in ATM cells, the LightStream switch does not need to segment it. The switch at which the cells enter looks at the VCI and determines the PVC on which the traffic should be passed. Each cell is passed through the network on the selected PVC. When they reach the final LightStream switch in the PVC, the cells are passed out of the LightStream network on the correct destination port and VCI.
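A minimal sketch of this per-hop behavior, with assumed names and table values (not the LightStream implementation): each switch looks up the incoming port and VCI, forwards the cell on the chosen output port, and rewrites the VCI for the next hop.

```python
# Assumed per-switch translation table: (in_port, in_vci) -> (out_port, out_vci)
vci_table = {
    (1, 100): (5, 200),   # e.g. PVC 1
    (1, 101): (6, 300),   # e.g. PVC 2
}

def switch_cell(in_port, cell):
    """Forward one ATM cell, rewriting its VCI for the next hop."""
    out_port, out_vci = vci_table[(in_port, cell["vci"])]
    forwarded = dict(cell, vci=out_vci)   # the VCI is translated hop by hop
    return out_port, forwarded

out_port, cell = switch_cell(1, {"vci": 100, "payload": b"data"})
# the cell leaves on port 5 carrying VCI 200
```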

Figure 3-6 shows the ATM cells entering the network, and Figure 3-7 shows the cells exiting the network.


Figure 3-6: ATM cells with multiple destinations enter the LightStream network through an ATM port. LightStream switch N1 examines the VCI of each cell in the stream of traffic and determines the PVC on which it should be sent.

Figure 3-7:
The ATM cells are passed through the LightStream network to their destinations with appropriate VCI transformations.

Bridging

A LightStream network supports transparent and translation bridging. Specifically, it supports Ethernet-to-Ethernet, FDDI-to-FDDI, and Ethernet-to-FDDI bridging within a switch as well as across the ATM network.

LightStream bridging is implemented as a cell internet with underlying ATM features, as described in the subsections that follow. Bridging service also includes the following features, each of which is described later in this section: spanning tree protocol, custom filtering, static filtering, broadcast limiting, IP packet fragmentation, and workgroups.

From a user's point of view, a LightStream network is modeled as a collection of bridges connected through the ATM backbone with one bridge per LightStream switch. Externally, all of the bridges in the network appear to be sharing a single broadcast medium on the inside of the ATM network. In the model, each bridge in a LightStream network has one internal connection to an internal broadcast backbone.

Figure 3-8 shows the LightStream bridging model.


Figure 3-8: LightStream bridging model.

Underlying LightStream ATM Services

To overlay a connectionless service such as bridging onto an ATM network infrastructure, the LightStream network automatically manages bandwidth and VCCs for bridged traffic. These operations are invisible to the user of the network.

  A stream of packets travelling from one LAN station to another is called a flow. There are typically several flows traversing an ATM VCC.

Spanning Tree Protocol

The LightStream bridging model is structured so that loops cannot occur in a network composed solely of LightStream bridges. However, spanning tree support is required for interoperability with bridges from other vendors. Ports that are configured for bridging implement the spanning tree algorithm defined in IEEE 802.1d to eliminate loops that may be caused by external bridges or by incorrect cabling attached to LightStream ports.

The logical network that the algorithm creates is always a spanning tree: a loop-free topology with exactly one active path between any two points in the bridged network.

If the spanning tree protocol detects a loop, one of the ports on the bridge goes into a blocking state to break the loop. While in the blocking state, the port discards all bridged traffic and stops the process of learning media access control (MAC) address information.
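The blocking-state behavior described above can be sketched as follows. The class and field names are illustrative assumptions, not LightStream internals; the point is that a blocked port both discards bridged traffic and suspends MAC learning.

```python
class BridgePort:
    """Toy model of a bridge port that can be put into the blocking state."""

    def __init__(self):
        self.state = "forwarding"   # spanning tree may change this to "blocking"
        self.mac_table = {}         # learned MAC address -> ingress information

    def receive(self, src_mac, frame):
        if self.state == "blocking":
            return None                              # discard traffic, learn nothing
        self.mac_table[src_mac] = frame["ingress"]   # learn the source MAC
        return frame                                 # hand the frame on for bridging

port = BridgePort()
port.state = "blocking"             # spanning tree detected a loop through this port
result = port.receive("00:11:22:33:44:55", {"ingress": 1})
# result is None, and the MAC table stays empty while the port blocks
```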

Custom Filtering

The LightStream bridge supports custom filters on LAN interfaces. You create custom filters and assign them to ports using either the configurator or the CLI. Each incoming packet header is evaluated against the filters and either forwarded into or blocked from the network. You could, for instance, prohibit a particular protocol from passing between two ports by creating the appropriate filter for each port.

Custom filtering is applied only at inbound ports. To achieve the effect of filtering on an outbound port, the specific custom filter must be assigned to all bridged ports in the network.
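As a hedged sketch of the inbound evaluation, suppose each filter names a protocol to block; a packet is admitted only if no filter on its ingress port matches its header. The filter structure and values here are assumptions for illustration only.

```python
# Hypothetical per-port filter assignments (assumed structure and values).
port_filters = {
    1: [{"block_protocol": "ipx"}],   # a custom filter assigned to port 1
    2: [],                            # no filters on port 2
}

def admit(port, header):
    """Return True if the packet may be forwarded into the network."""
    for f in port_filters.get(port, []):
        if header["protocol"] == f["block_protocol"]:
            return False              # blocked at the inbound port
    return True

admit(1, {"protocol": "ipx"})   # matches the filter, so it is blocked
admit(2, {"protocol": "ipx"})   # no filter on port 2, so it is admitted
```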

For information about how to configure custom filters, see the LightStream 2020 Configuration Guide or LightStream 2020 Administration Guide.

Static Filtering

The LightStream bridge supports static filtering (also called static bridge forwarding as defined in IEEE 802.1d). Through the configurator or the CLI, you can make static entries in the bridge's filtering database. You may, for instance, want to make a static entry if you are directing a broadcast to specific ports in order to limit broadcast propagation. You would also make a static entry if you have an end station that only receives traffic, in which case the bridge can't learn about the station.

For information about how to configure static filters, see the LightStream 2020 Configuration Guide or LightStream 2020 Administration Guide.

Broadcast Limiting

To reduce the amount of broadcast traffic on the network, LightStream bridging includes mechanisms that limit the propagation of broadcast frames.

IP Packet Fragmentation

When necessary, a LightStream switch can fragment IP packets when bridging the packets between ports that have different maximum transfer unit (MTU) values, such as Ethernet and FDDI.
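A simplified sketch of the MTU-driven split follows. Real IP fragmentation also copies headers and sets fragment offsets and flags; this illustration only shows how a payload is divided when the egress MTU is smaller than the packet, and the sizes used are assumed examples.

```python
def fragment(payload, egress_mtu):
    """Split a payload into pieces no larger than the egress MTU."""
    return [payload[i:i + egress_mtu]
            for i in range(0, len(payload), egress_mtu)]

# An FDDI-sized payload crossing to an Ethernet port (MTU 1500 bytes):
pieces = fragment(b"x" * 4000, 1500)
# -> three fragments of 1500, 1500, and 1000 bytes
```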

Workgroups

A workgroup is a collection of LAN ports that are allowed to communicate with each other. By assigning groups of ports to different workgroups, you can provide privacy between groups or limit the impact of one group's traffic on another. The workgroup membership for a port defines the membership of all the stations attached to it.

You can create workgroups through the configurator or CLI. By default, all ports in the network are assigned to a single workgroup. This makes the default behavior the same as that of an ordinary bridged network.
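The membership rule can be illustrated with a toy check: two ports may exchange bridged traffic only if they share a workgroup. The port-to-workgroup assignments below are assumed examples, and the single-workgroup-per-port model is a simplification for illustration.

```python
# Assumed example assignments of ports to workgroups.
workgroup_of = {1: "eng", 2: "eng", 3: "sales"}

def may_communicate(port_a, port_b):
    """True if the two ports belong to the same workgroup."""
    return workgroup_of.get(port_a) == workgroup_of.get(port_b)

may_communicate(1, 2)   # same workgroup: traffic is allowed
may_communicate(1, 3)   # different workgroups: the groups stay private
```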

In a LightStream network, stations can communicate with one another only if the ports to which they attach belong to the same workgroup.

Limited IP Routing for Network Management Traffic

The LightStream network offers limited IP routing capability to allow for the flow of SNMP, telnet, and FTP traffic between LightStream switches and an external network management system (NMS). An NMS station can attach directly to an NP Ethernet port, or can attach through an Ethernet or FDDI edge interface. Every NP has an internal IP address, and the network routing database contains enough information to route incoming IP packets between any NP in the network and any FDDI or Ethernet port, including the Ethernet ports on the NPs.


Note These IP routing services are provided only for monitoring and network management activities. They are not available for carrying user traffic.

Types of Service for VCCs

The LightStream 2020 has a comprehensive traffic management subsystem that supplies various services with configurable alternatives for carrying user traffic. This section presents the configurable aspects of that subsystem using the terminology and concepts of the user interface. For any given VCC, the traffic service is determined by a composite of the independently configurable attributes described below.

Internal Mechanisms

Many of the internal mechanisms that govern the service supplied to individual VCCs are affected by the settings of configurable parameters. These mechanisms are summarized here.


Note For a more detailed explanation of these mechanisms, see Chapter 4, Traffic Management.

Transmit Priority

There are four priority levels for servicing cell queues wherever they exist within the network. All cells waiting to be forwarded at a given level are serviced before any at a lower priority. The highest priority is reserved for CBR traffic. The next highest is for internal control traffic. The remaining two are available for user traffic.
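The strict servicing order described above can be sketched as follows. The level numbering and cell labels are illustrative assumptions; the behavior shown is that every cell at a higher level is served before any cell at a lower one.

```python
from collections import deque

# Four priority queues: 3 = CBR (highest), 2 = internal control, 1-0 = user.
queues = {3: deque(), 2: deque(), 1: deque(), 0: deque()}

def next_cell():
    """Serve the highest-priority non-empty queue first (strict priority)."""
    for level in (3, 2, 1, 0):
        if queues[level]:
            return queues[level].popleft()
    return None   # nothing waiting

queues[0].append("user-cell")
queues[3].append("cbr-cell")
first = next_cell()   # the CBR cell is served before the waiting user cell
```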

Bandwidth Allocation

To manage its bandwidth resources, the LightStream network tracks two kinds of available bandwidth: allocated and best effort. The allocated bandwidth is increased (or decreased) as a result of call establishment (or teardown). Its magnitude is determined by the requirements of VCCs for a specific traffic capacity under any circumstance. The best effort bandwidth is a dynamic quantity whose magnitude is the sum of the unallocated bandwidth (the difference between the total capacity and the allocated bandwidth) and the currently unused allocated bandwidth. It represents statistically sharable capacity for carrying bursty traffic.
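Written out numerically, with assumed example figures for a single trunk:

```python
# Assumed example figures (kb/s) for one trunk; not real LightStream values.
total_capacity   = 155_000   # total trunk capacity
allocated        = 100_000   # sum of per-VCC bandwidth allocations
currently_unused = 30_000    # allocated bandwidth not in use at this moment

unallocated = total_capacity - allocated       # capacity never allocated
best_effort = unallocated + currently_unused   # statistically sharable capacity
# best effort = 55,000 + 30,000 = 85,000 kb/s available for bursty traffic
```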

Call Admission Control

For the network to support a requested VCC, it must be able to allocate bandwidth along the path and impose a limit on the amount of traffic that the VCC will be allowed to carry. The allocation of bandwidth is required to meet service goals, and the limitation of traffic is necessary to protect the network from unruly traffic sources.

Traffic Policing

The policing function in a LightStream network is done at the edges of the network for both frame- and cell-based traffic. It determines whether the traffic is allowed to proceed into the network, and if so, whether it should use allocated or best effort bandwidth within the network.

For a given VCC, the policer operates with four static parameters: Insured Rate and burst, and Maximum Rate and burst. These represent, correspondingly, the largest average rate and instantaneous buffering associated with the insured traffic (the type that uses allocated bandwidth), and the largest average rate and instantaneous buffering associated with all traffic. In addition, the VCC policer uses a dynamic parameter (controlled by the congestion avoidance mechanism) called Total Rate, which is never lower than the Insured Rate or higher than the Maximum Rate.

Traffic that exceeds the Insured Rate and burst parameters, but is within the Total Rate and Maximum Burst parameters, is called excess and uses best effort bandwidth. Traffic that exceeds the Total Rate and Maximum Burst parameters is dropped.
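The policing decision described above can be sketched as a simple classifier. This is an assumed simplification that compares only average rates (the burst parameters are omitted for brevity); parameter names follow the text.

```python
def police(rate, insured_rate, total_rate):
    """Classify a VCC's current arrival rate against its policing parameters.

    insured_rate <= total_rate <= maximum_rate always holds; total_rate is
    adjusted dynamically by the congestion avoidance mechanism.
    """
    if rate <= insured_rate:
        return "insured"    # carried on allocated bandwidth
    if rate <= total_rate:
        return "excess"     # carried on best effort bandwidth
    return "dropped"        # exceeds what the network will accept

police(40,  insured_rate=50, total_rate=120)   # -> "insured"
police(90,  insured_rate=50, total_rate=120)   # -> "excess"
police(150, insured_rate=50, total_rate=120)   # -> "dropped"
```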

Selective Cell Discard

Although traffic policing is the prevalent mechanism for discarding traffic that the network cannot handle, there can be occasional congestion within the network due to statistical fluctuations that cause local overload. When this happens, cells are discarded according to their cell drop eligibility: cells with higher drop eligibility are dropped before cells with lower eligibility. This cell drop eligibility can be one of three levels, ranging from most to least eligible: Best Effort, Best Effort Plus, and Insured.

Congestion Avoidance

The congestion avoidance mechanism continuously monitors the best effort bandwidth availability within the network and adjusts the total rate parameter of each VCC policer. Its dual objectives are to maximize use of bandwidth resources (such as trunk lines) while preventing too much traffic from entering the network and causing congestion.

Configurable Attributes

The following attributes affect the operation of one or more of the internal mechanisms previously described, for VCCs carrying user traffic. These attributes are explicitly configurable for frame relay, frame forwarding, CBR, and ATM UNI PVC VCCs. In addition, there is a predefined set of attribute values assigned to implicitly established VCCs carrying internal control traffic and bridged Ethernet/FDDI traffic.

Rate Parameters

There are five attributes that control traffic rate aspects of VCC service: Insured Rate, Insured Burst, Maximum Rate, Maximum Burst, and Secondary Scale.

The first four of these attributes establish the corresponding traffic policing parameters. The allocated bandwidth, used by the bandwidth allocation and call admission control mechanisms, is the sum of the insured rate plus a fraction (specified by the Secondary Scale) of the difference between the maximum and insured rates.
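The allocated-bandwidth rule above, written out with assumed example values (rates in arbitrary units):

```python
# Assumed example rate parameters for one VCC.
insured_rate    = 100
maximum_rate    = 400
secondary_scale = 0.25   # fraction of (maximum - insured) to allocate

# Allocated bandwidth = insured rate + scale * (maximum rate - insured rate)
allocated = insured_rate + secondary_scale * (maximum_rate - insured_rate)
# -> 100 + 0.25 * 300 = 175
```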

Principal Service Type

There are two principal service types, Guaranteed and Insured. They share control of the cell drop eligibility mechanism with the rate parameters.

If the rate is within the Insured Rate value, the traffic is given the lowest drop eligibility (Insured), regardless of whether the VCC's principal service type is Guaranteed or Insured. In fact, the likelihood of any cell dropping of insured traffic is negligible, since all of its bandwidth has been allocated.

For best effort traffic, Insured principal service provides Best Effort (highest) drop eligibility and Guaranteed principal service provides Best Effort Plus (medium) drop eligibility.

Transmit Priority for User Traffic

The transmit priority attribute mainly controls the delay characteristics of traffic on a user VCC and has only two values: zero and one. Zero indicates the lowest of the four priorities maintained by the transmit priority mechanism, and one indicates the second of these. The highest priority is used for CBR traffic and the next is reserved for control traffic VCCs. Traffic that is significantly delay-sensitive should use transmit priority one, while traffic that is less delay-sensitive or relatively delay-insensitive should use priority zero.

The transmit priority has a secondary effect on the selective discard mechanism, in that for a given cell drop eligibility, those cells that are assigned a higher transmit priority will be less likely to be dropped than those assigned a lower transmit priority.

Behind the Scenes

The LightStream network performs two important services automatically: neighborhood discovery and global information distribution (GID). These services simplify network configuration and maintain a consistent, network-wide database of routing and address information.

Neighborhood Discovery

A neighborhood discovery process runs on every NP in a LightStream network. It identifies the local resources (such as interface modules) in its switch and discovers the neighboring switches to which the local trunks connect.


Note The NP that controls an interface module must be in the same chassis as the interface module.

Whenever you add or remove a local resource, the neighborhood discovery process informs the global information distribution (GID) system, which floods the information from NP module to NP module through the network. The neighborhood discovery process also keeps the local GID process informed about who its neighbors are so it can flood information properly.

Neighborhood discovery simplifies the network configuration process and eliminates the need to manually configure some of the interface module attributes in each switch and all the connections to other switches in the network.

Global Information Distribution

The GID system is a service that maintains a consistent network-wide database. All of the switches contribute to the database and all of the switches extract information from it. The GID system ensures that every switch has an up-to-date copy of all the information in the database.

NPs use a flooding algorithm to distribute the global information between neighboring NPs. The flooding algorithm is similar to the one used by the Open Shortest Path First (OSPF) routing system, but the updates are much more frequent. Flooding can occur only between NPs that have established a neighbor relationship, and therefore a communication path, between them. These relationships and communication paths are established, maintained, and removed by the neighborhood discovery process.
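The flooding behavior can be sketched with a toy model, loosely patterned on OSPF-style flooding as the text suggests. The class and field names are assumptions: an update is accepted only if its sequence number is newer than the stored copy, and it is then re-flooded to every neighbor except the one it arrived from.

```python
class GidNode:
    """Toy GID process: accepts newer updates and floods them onward."""

    def __init__(self, name):
        self.name = name
        self.db = {}          # key -> (sequence number, value)
        self.neighbors = []   # set up by neighborhood discovery

    def receive(self, key, seq, value, from_node=None):
        stored_seq = self.db.get(key, (-1, None))[0]
        if seq <= stored_seq:
            return                     # stale or duplicate update: stop flooding
        self.db[key] = (seq, value)    # accept the newer information
        for n in self.neighbors:       # flood to every neighbor except the sender
            if n is not from_node:
                n.receive(key, seq, value, from_node=self)

# A three-switch chain A - B - C (assumed topology).
a, b, c = GidNode("A"), GidNode("B"), GidNode("C")
a.neighbors, b.neighbors, c.neighbors = [b], [a, c], [b]

a.receive("port-count", seq=1, value=8)   # originated at A; floods to B, then C
# every node now holds the same (seq, value) entry
```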

The GID system is represented by a process on every NP in the network. Each GID process serves several clients that produce and consume information. A GID process issues an update whenever a client contributes new information. The GID system also has mechanisms for quickly initializing a GID database when a new switch enters the network.


Posted: Wed Oct 2 06:06:32 PDT 2002
All contents are Copyright © 1992-2002 Cisco Systems, Inc. All rights reserved.
Important Notices and Privacy Statement.