
Cisco Wide Area ATM Networks

This chapter provides an introduction to the Cisco Wide Area ATM networking products, including an overview of the Cisco IPX narrowband switch, the Cisco IGX 8400 series multiband switch, the Cisco BPX 8600 series broadband switch, the ESP, the Cisco MGX 8220 edge concentrator, and the Cisco WAN Switching access products, including the Cisco 3810, FastPAD, INS-DAS, INS-VNS, and the StrataSphere NMS.

This chapter includes the following:

Introduction

Cisco wide area ATM networks meet the expanding requirements of today's private enterprises and service providers. These ATM wide area networks provide more bandwidth (up to OC-12/STM-4 rates of 622.08 Mbps), new services, reduced transaction costs, greater flexibility, scalability, and service interworking, as well as manageability and security for both enterprise and service provider networks (Figure 1-1).


Figure 1-1: An ATM Network Configuration


Expanding Network Requirements

A number of developments in both enterprise and service provider networks, including ongoing advances in computing power, more desktop interaction, the Internet, more transactions, more visual content, and an explosion of new applications, have placed greater demands on both private and public networks.

To maximize bandwidth utilization and flexibility, networks are moving from dedicated circuits with fixed bandwidth between devices to virtual networks. A virtual network comprises logical connections (virtual circuits) that dynamically share physical bandwidth capacity on an as-needed basis with other logical connections (virtual circuits) or networks.

ATM Networks

These emerging virtual networks share bandwidth using the multiplexing technique called Asynchronous Transfer Mode (ATM), which allows networks to dynamically allocate capacity to connections on an as-needed basis. ATM traffic is segmented into 53-byte cells for transmission of all types of traffic (voice, data, frame relay, video, ATM services, etc.) at narrowband and broadband speeds.

New with Release 9.1

Cisco BPX 8600 Series Broadband Switch

Cisco IGX 8400 Series Multiband Switch

Network

Cisco MGX 8220 Edge Concentrator

StrataSphere Network Management

CiscoView Network Element Management

New with Release 8.5

Cisco IGX 8400 Series Multiband Switch

Network

Cisco MGX 8220 Edge Concentrator

StrataSphere Network Management

New with Release 8.4

The Extended Services Processor (ESP) is an adjunct processor that is co-located with a Cisco BPX 8600 series wideband switch. The ESP provides the signaling and Private Network to Network Interface (PNNI) routing for ATM and frame relay switched virtual circuits (SVCs) via BXM cards in the Cisco BPX 8600 series wideband switch and AUSM and FRSM cards in the Cisco MGX 8220 edge concentrator.

Continuing Features with Release 9.1, 8.5 and 8.4

The following is a list of some of the continuing features with Release 9.1, 8.5 and 8.4:

StrataSphere Network Management

Network

INS-DAS and INS-VNS

Cisco BPX 8600 Series Wideband Switch

Cisco IGX 8400 Series Multiband Switch

Cisco IPX Narrowband Switch

Cisco MGX 8220 Edge Concentrators

Access Products

FastPAD MM

FastPAD MP

ATM Networks

Cisco WAN switching networking systems support multiband ATM applications in private wide area networks and service provider service offerings, such as frame relay and native ATM. Cisco's WAN switching product family includes the Cisco IPX narrowband switch, the Cisco IGX 8400 series multiband switch, the Cisco BPX 8600 series wideband switch, the Cisco MGX 8220 edge concentrator, and the FastPAD, Cisco 3810, INS-DAS, INS-VNS, and StrataSphere products. These products integrate and transport a wide variety of information, including voice, data, frame relay, video, LAN traffic, image, and multimedia traffic ranging from narrowband to broadband ATM.

The Cisco IPX narrowband switches, the Cisco IGX 8400 series multiband switches, and the Cisco BPX 8600 series broadband switches are used to implement high-speed, digital, wide area private and public networks (WANs) for interconnecting customers' local area networks (LANs). These cell relay networks are created by interconnecting Cisco WAN switching network switches with high-speed digital trunks provided by any of a number of public common carriers or private service providers.

Enterprise Wide Area Networks

Corporations, government agencies, universities, telecommunications service providers, and others with a need to link their communications facilities can use the Cisco IPX narrowband switch, the Cisco IGX 8400 series multiband switch, and the Cisco BPX 8600 series wideband switch as a basis on which to build their own private networks (Figure 1-2).

In many instances, the primary reason for implementing private WANs is to link far-flung LANs. With the additional bandwidth available and the flexibility of cell relay technology, a private user often can add voice circuits and even a video conferencing facility on the same network without adding trunks and with very little additional expense.


Figure 1-2: Example of an Enterprise Network Application


The Cisco IPX Narrowband Switch

The majority of private network locations have lower bandwidth requirements, fewer routes, small hubs, and a wide variety of service requirements. The Cisco IPX narrowband switch fits these applications by providing a wide offering of customer interfaces, several package sizes, and a scalable architecture. The Cisco IPX narrowband switch allows the user at each site to replace numerous low-speed dial-up and/or leased-line circuits with a few high-speed T1 or E1 lines. The advantages often include faster response times, a wider range of available services, more efficient utilization of bandwidth, and the resulting cost savings. In addition, the private network often provides better management control, tighter security, and increased configuration flexibility under direct control of the end user.

The Cisco IGX 8400 Series Multiband Switch

The Cisco IGX 8400 series multiband switch is a multiservice ATM networking switch that provides interfaces to support today's legacy and emerging broadband applications. Users have the advantage of ATM technology over narrowband and subrate T1 and E1 trunks, as well as broadband T3 and E3 trunks.

The Cisco IGX 8400 series multiband switch can be used as the basis for a leased-line Campus/MAN/WAN network, as an access device to high-speed digital services such as ATM, as a combination of both applications, and as a Value Added Network (VAN) service switch. Operating at 1.2 Gbps, the Cisco IGX 8400 series multiband switch seamlessly integrates with the Cisco IPX narrowband switch, the Cisco BPX 8600 series broadband switch, the Cisco MGX 8220 edge concentrator, INS-DAS, INS-VNS, FastPAD access devices, and the Cisco 3810 to provide multiband solutions from the access interface to the core layer.

The Cisco IGX 8400 series multiband switch can be configured as a routing hub or as an interface shelf. A Cisco IGX 8400 series multiband switch configured as an interface shelf and connected to a Cisco IGX 8400 series multiband switch configured as a routing hub supports voice, data, and frame relay connections; in this case, the voice and data connections are routed across Cisco IGX 8400 series intermediate nodes to another Cisco IGX 8400 series multiband switch configured as an interface shelf. A Cisco IGX 8400 series multiband switch configured as an interface shelf and connected to a Cisco BPX 8600 series broadband switch configured as a routing hub supports frame relay connections across an ATM network.

The Cisco BPX 8600 Series Broadband Switch with Cisco MGX 8220 Edge Concentrators

Many network locations have increasing bandwidth requirements due to emerging applications. To meet these requirements, users can overlay their existing narrowband networks with a backbone of Cisco BPX 8600 series broadband switches to utilize the high-speed connectivity of the BPX operating at 19.2 Gbps with its T3/E3/OC3/OC12 network and service interfaces. The BPX service interfaces include BXM and ASI ports on the Cisco BPX 8600 series broadband switch and service ports on Cisco MGX 8220 edge concentrators. The Cisco MGX 8220 edge concentrators may be co-located in the same cabinet as the Cisco BPX 8600 series broadband switch, providing economical port concentration for T1/E1 Frame Relay, T1/E1 ATM, CES, and FUNI connections.

The Cisco BPX 8600 Series Broadband Switch with ESP

With a co-located Extended Services Processor (ESP), the Cisco BPX 8600 series broadband switch adds the capability to support ATM and frame relay switched virtual circuits (SVCs).

Service Provider Multi-Service Networks

The demand to provide LAN interconnections has driven most of the public service providers to consider ways to quickly react to this opportunity. Frame relay has proven to be a reliable, cost-effective, standards-based service for transmitting LAN data, which tends to be very bursty in nature. Typically, LANs access the network only at periodic intervals but when they do, they often require large amounts of bandwidth for short periods of time. It is not cost effective to provide sufficient bandwidth to every LAN connection on a full-time basis.

Since both the Cisco IPX narrowband switch and the Cisco IGX 8400 series multiband switch utilize cell-relay technology, there are significant service advantages when frame relay is implemented on them. Since cell network platforms only allocate bandwidth when there is demand, the unused bandwidth from idle frame relay connections can be used by active connections. This allows the active connections to "burst" or to send large amounts of data for a short interval above their committed information rate. Then as the connection goes idle, the bandwidth is utilized for yet another connection.

Another advantage of cell relay networks is the flexibility of offering Permanent Virtual Circuits (PVCs) to interconnect all LAN sites in a mesh topology, in contrast to using physical circuits, which require a large investment in interface hardware and data circuits. Frame relay networks based on Cisco IPX narrowband switches, Cisco IGX 8400 series multiband switches, and Cisco MGX 8220 edge concentrators offer minimal delay and maximum throughput while avoiding congestion.

Current frame relay networks offer LAN circuit interconnection at rates from 56 Kbps to 2 Mbps. As frame relay traffic increases and customers demand more bandwidth for advanced applications, the Cisco IPX narrowband and Cisco IGX 8400 series multiband FastPacket architectures give service providers a clear upgrade path to broadband ATM capabilities. Broadband networks, utilizing high-speed trunks and ATM cell switching, can be overlaid on a narrowband FastPacket network by adding a backbone of Cisco BPX 8600 series broadband switches. An existing network can be upgraded by adding a high-speed ATM backbone utilizing Cisco BPX 8600 series broadband switches with gigabit switching, as indicated in Figure 1-3.


Figure 1-3: Example of a Service Provider Application


The Cisco BPX 8600 series broadband switch provides ATM UNI services from the same platform using BXM and ASI cards. It also interfaces to Cisco IGX 8400 series multiband switches and Cisco IPX narrowband switches configured as routing nodes, and to Cisco MGX 8220 edge concentrators, Cisco IPX narrowband switches, and Cisco IGX 8400 series multiband switches configured as interface shelves, to provide cost-effective multi-media services.

Frame relay ports can also be provided directly on Cisco BPX 8600 series broadband switches using Cisco MGX 8220 edge concentrators or Cisco IPX narrowband switches configured as interface shelves for maximum port density.

This multiband ATM service model provides both narrow and broadband interfaces from a common cell switching infrastructure. Connection management, congestion control, and network management are extended seamlessly across the entire network.

ATM Network Features

Advanced Capabilities

The Cisco WAN switching ATM networks include sophisticated system software for management and control of the network. Cisco WAN Switching system software is fully distributed in each switch to provide the fastest response time to network provisioning and network problems. Advanced capabilities include ABR with VSVD, Frame Relay to ATM Network and Service Interworking, Tiered Network operation, Virtual Trunking, SONET/SDH interfaces, AutoRoute, OptiClass, FairShare, and ForeSight.

ABR with VSVD

The BXM series of cards (BXM-T3/E3, BXM-155, and BXM-622), using high-density application-specific integrated circuit (ASIC) technology, provides advanced ATM networking features, including ABR with VSVD supporting explicit rate (ER) feedback and congestion indication (CI) options. The BXM cards provide a high degree of scalability, with 8 or 12 T3 or E3 ports per slot on the BXM-T3/E3, 4 or 8 OC3 ports on the BXM-155, and 1 or 2 OC12 ports on the BXM-622.

Frame Relay to ATM Interworking

Interworking allows users to retain their existing services and, as their needs expand, migrate to the higher bandwidth capabilities provided by BPX ATM networks. Frame relay to ATM interworking enables frame relay traffic to be connected across high-speed ATM trunks using ATM-standard Network and Service Interworking.

Two types of frame relay to ATM interworking are supported: Network Interworking (Figure 1-4) and Service Interworking (Figure 1-5). The Network Interworking function is performed by the AIT card on the Cisco IPX narrowband switch, the BTM card on the Cisco IGX 8400 series multiband switch, and the FRSM card on the Cisco MGX 8220 edge concentrator. The FRSM cards on the Cisco MGX 8220 edge concentrator and the UFM cards on the Cisco IGX 8400 series multiband switch also support Service Interworking.

The frame relay to ATM network and service interworking functions are available as follows:

Network Interworking

Part A of Figure 1-4 shows typical frame relay to ATM network interworking. In this example, a frame relay connection is transported across an ATM network, and the interworking function is performed by both ends of the ATM network. The following are typical configurations:

Part B of Figure 1-4 shows a form of network interworking where the interworking function is performed by only one end of the ATM network, and the CPE connected to the other end of the network must itself perform the appropriate service specific convergence sublayer function. The following are example configurations:

Network Interworking is supported by the FRP on the Cisco IPX narrowband switch, the FRM, UFM-C, and UFM-U on the Cisco IGX 8400 series multiband switch, and the FRSM on the Cisco MGX 8220 edge concentrator. The Frame Relay Service Specific Convergence Sublayer (FR-SSCS) of AAL5 is used to provide protocol conversion and mapping.


Figure 1-4: Frame Relay to ATM Network Interworking


Service Interworking

Figure 1-5 shows a typical example of service interworking. Service interworking is supported by the FRSM on the Cisco MGX 8220 edge concentrator and the UFM-C and UFM-U on the Cisco IGX 8400 series multiband switch. Translation between the Frame Relay and ATM protocols is performed in accordance with RFC 1490 and RFC 1483. The following is a typical configuration for service interworking:


Figure 1-5: Frame Relay to ATM Service Interworking


Additional Information on Interworking

For additional information about interworking, refer to Chapter 13, "Frame Relay to ATM Network and Service Interworking".

Tiered Networks

Networks may be configured as flat (all nodes perform routing and communicate fully with one another) or as tiered. In a tiered network, interface shelves are connected to routing hubs, and the interface shelves are configured as non-routing nodes.


Note   A routing hub is a standard routing node that is also connected to interface shelves via feeder trunks.

With Release 8.5 and beyond, tiered networks support voice and data connections as well as frame relay connections. With this addition, a tiered network can now provide a multi-service capability (frame relay, circuit data, voice, and ATM). By allowing CPE connections to connect to a non-routing node (interface shelf), a tiered network is able to grow in size beyond that which would be possible with only routing nodes comprising the network.

Routing Hubs and Interface Shelves

In a tiered network, interface shelves at the access layer (edge) of the network are connected to routing nodes via feeder trunks (Figure 1-6). Those routing nodes with attached interface shelves are referred to as routing hubs. The interface shelves, sometimes referred to as feeders, are non-routing nodes. The routing hubs route the interface shelf connections across the core layer of the network.

The interface shelves do not need to maintain network topology nor connection routing information. This task is left to their routing hubs. This architecture provides an expanded network consisting of a number of non-routing nodes (interface shelves) at the edge of the network that are connected to the network by their routing hubs.

For detailed information about tiered networks, refer to Chapter 6, "Tiered Networks."

Cisco IGX 8400 Series Multiband Switches Configured as Routing Hubs

Voice, data, and frame relay connections originating on Cisco IGX 8400 series multiband switches configured as interface shelves (feeders) are routed across the routing network via their associated Cisco IGX 8400 series multiband switches configured as routing hubs. For voice and data connections originating on Cisco IGX 8400 series multiband switches configured as interface shelves, the intermediate routing nodes must be Cisco IGX 8400 series multiband switches. A frame relay connection originating at a Cisco IGX 8400 series multiband switch configured as an interface shelf may terminate on a Cisco MGX 8220 edge concentrator, a Cisco IPX narrowband switch, or a Cisco IGX 8400 series multiband switch configured as an interface shelf, or on a Cisco IPX narrowband switch or a Cisco IGX 8400 series multiband switch configured as a routing node.

The following applies to IGX routing hubs and interface shelves:

The following applies to voice and data connections over Cisco IGX 8400 series multiband switches configured as interface shelves:

The following applies to frame relay connections originating at Cisco IGX 8400 series multiband switches configured as interface shelves connected to Cisco IGX 8400 series multiband switches configured as routing hubs:


Figure 1-6: Tiered Network with Cisco BPX 8600 Series Broadband Switches and Cisco IGX 8400 Series Multiband Switches Configured as Routing Hubs


BPX Routing Hubs

T1/E1 Frame Relay connections originating at Cisco IPX narrowband switches and Cisco IGX 8400 series multiband switches configured as interface shelves, as well as T1/E1 Frame Relay, T1/E1 ATM, CES, and FUNI connections originating at Cisco MGX 8220 edge concentrators configured as interface shelves, are routed across the routing network via their associated Cisco BPX 8600 series broadband switches configured as routing hubs.

The following requirements apply to Cisco BPX 8600 series wideband switches configured as routing hubs and their associated interface shelves:

IMA (Inverse Multiplexing ATM)

Where greater bandwidths are not needed, the Inverse Multiplexing ATM (IMA) feature provides a low-cost trunk between two Cisco BPX 8600 series broadband switches. The IMA feature allows Cisco BPX 8600 series broadband switches to be connected to one another over 1 to 8 T1 or E1 trunks provided by an IMATM module on a Cisco MGX 8220 edge concentrator. A BNI or BXM port on each Cisco BPX 8600 series broadband switch is directly connected to an IMATM or AUSM-8 module in a Cisco MGX 8220 edge concentrator by a T3 or E3 trunk. The IMATM modules are then linked together by 1 to 8 T1 or E1 trunks. Refer to the Cisco MGX 8220 Edge Concentrator Installation and Configuration and the Cisco MGX 8220 Edge Concentrator Command Reference documents for further information.

Circuit Emulation Service (CES)

The Cisco MGX 8220 edge concentrator supports CES over T1/E1 lines with either an 8 port or 4 port Circuit Emulation Service Module. Data is transmitted and received over the network in AAL-1 cell format.

Zero CIR for Frame Relay

This feature allows users to take advantage of lower cost uncommitted frame relay service. The feature applies to frame relay connections that originate and terminate on FRP (Cisco IPX narrowband switch) or FRM (Cisco IGX 8400 series multiband switch) ports. It does not apply to frame relay to ATM connections. The CIR on the connection is configured to zero, and the MIR must be non-zero.

Virtual Trunking

Virtual trunking provides the ability to define multiple trunks within a single physical trunk interface. Virtual trunking benefits include the following:

A virtual trunk may be defined as a "trunk over a public ATM service." The trunk does not exist as a physical line in the network. Rather, an additional level of reference, called a virtual trunk number, is used to differentiate the virtual trunks found within a physical trunk port. Figure 1-7 shows four Cisco WAN Switching subnetworks, each connected to a public ATM network via a physical line. The public ATM network is shown linking all four of these subnetworks to every other one with a fully meshed network of virtual trunks. In this example, each physical line is configured with three virtual trunks.


Figure 1-7: Virtual Trunking Example


For further information on Virtual Trunking, refer to Chapter 8, "ATM and Broadband Trunks." Also, refer to the Cisco WAN Switching Command Reference document.

BXM Network and Service SONET and T3/E3 Interfaces

To meet the need for high-performance backbone networking, the BXM-622, BXM-155, and BXM-T3/E3 cards provide OC12/STM-4 (622.08 Mbps), OC3/STM-1 (155.52 Mbps), and T3/E3 (44.736/34.368 Mbps) interfaces, respectively. The BXM cards may be user-configured to function in either of two modes: trunk mode or service mode, as described below.

The BXM-622, BXM-155, and BXM-T3/E3 are designed to support the ATM UNI 3.1 and ATM Forum TM 4.0 standards.

BXM-622, 155, and T3/E3 Cards

The BXM-622, BXM-155, and BXM-T3/E3 cards support the full range of ATM service types per ATM Forum TM 4.0.

To support the BXM-622 front cards, there are SMF and SMFLR back cards in either one or two port versions, as required. To support the BXM-155-XX front cards, there are MMF, SMF, and SMFLR back cards available in either 4 or 8 port configurations. To support the BXM-T3/E3 front cards, there is a T3/E3 back card available in either 8 or 12 port versions. Any of the 12 general purpose Cisco BPX 8600 series broadband switch slots can be used to hold these cards. The same back cards are used whether the BXM ports are configured as trunks or lines.

Trunk Mode

When configured for trunk mode, the BXM cards provide high-speed ATM interconnections between Cisco BPX 8600 series broadband switches and networks. The large cell buffering capability provided by the BXM cards ensures highly reliable ATM trunk connections.

Service Mode

The BXM-622, BXM-155, and BXM-T3/E3 are designed to support all of the following service classes per ATM Forum TM 4.0: Constant Bit Rate (CBR), Variable Bit Rate (VBR), Available Bit Rate (ABR with VS/VD, ABR without VS/VD, and ABR using ForeSight), and Unspecified Bit Rate (UBR). ABR with VS/VD supports explicit rate marking and Congestion Indication (CI) control.

BNI Network and ASI Service Interfaces

The BNI-155 and ASI-155 provide OC-3 network and service interfaces, respectively, but provide a more limited range of service types and configurations than the BXM-155 card. The BNI T3 and BNI E3 provide T3 and E3 network interfaces respectively. The ASI T3 and E3 provide T3 and E3 service interfaces, respectively.

BNI-155 Network Interface

The BNI-155 operates at the standard OC-3/STM-1 (155.52 Mbps) rate to provide high-speed ATM trunking between Cisco BPX 8600 series broadband switches. The BNI-155 supports up to 12 Classes of Service (CoS) over Cisco BPX 8600 series broadband switch networks.

The physical interface options include multi-mode fiber (MMF), single-mode fiber intermediate reach (SMF-IR) and single-mode fiber long reach (SMF-LR) for optical terminations.

ASI-155 Service Interface

The ASI-155 provides broadband connectivity between the Cisco BPX 8600 series broadband switch and ATM CPE. The ASI-155 is a two-port OC-3/STM-1 (155.52 Mbps) ATM service interface card that can be plugged into any of the 12 general purpose card slots in the Cisco BPX 8600 series broadband switch. The ASI supports a more limited range of service types than the BXM cards.

The physical interface options include multi-mode fiber (MMF), single-mode fiber intermediate reach (SMF-IR) and single-mode fiber long reach (SMF-LR) for optical terminations.

Traffic and Congestion Management

The Cisco BPX 8600 series broadband switch provides ATM standard traffic and congestion management per ATM Forum TM 4.0 using BXM cards.

The Traffic Control functions include:

In addition to these standard functions, the Cisco BPX 8600 series broadband switch provides advanced traffic and congestion management features including:

FairShare(TM)

FairShare provides per-VC queueing and per-VC scheduling. FairShare provides fairness between connections and establishes firewalls between them. Firewalls prevent a single non-compliant connection from affecting the QoS of compliant connections; the non-compliant connection simply overflows its own buffer.

The cells received by a port are not automatically transmitted by that port out to the network trunks at the port access rate. Each VC is assigned its own ingress queue that buffers the connection at the entry to the network. With ABR with VSVD or with ForeSight, the service rate can be adjusted up and down depending on network congestion.

Network queues buffer the data at the trunk interfaces throughout the network according to the connection's class of service. Service classes are defined by standards-based QoS. Classes can consist of the four broad service classes defined in the ATM standards, as well as multiple subclasses of each of the four general classes. Classes can range from constant bit rate services with minimal cell delay variation to variable bit rates with less stringent cell delay.

When cells are received from the network for transmission out a port, egress queues at that port provide additional buffering based on the service class of the connection.
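
As a conceptual sketch only (not the FairShare implementation, which is performed in switch hardware and firmware), the following Python fragment illustrates per-VC ingress buffering with a simple round-robin service pass; note how a connection that overruns its own buffer drops only its own cells. The class name, buffer size, and scheduling policy are hypothetical simplifications.

from collections import OrderedDict, deque

class PerVCQueues:
    """Illustrative per-VC ingress buffering with round-robin service."""

    def __init__(self, per_vc_buffer_cells=1000):
        self.limit = per_vc_buffer_cells
        self.queues = OrderedDict()              # vc_id -> deque of cells

    def enqueue(self, vc_id, cell):
        q = self.queues.setdefault(vc_id, deque())
        if len(q) >= self.limit:
            return False                         # overflow confined to this VC only
        q.append(cell)
        return True

    def service_round(self):
        """Serve at most one cell per active VC (one round-robin pass)."""
        served = []
        for vc_id, q in list(self.queues.items()):
            if q:
                served.append((vc_id, q.popleft()))
        return served

For example, a connection that sends faster than it is being served eventually sees enqueue() return False for its own cells, while other connections continue to be served one cell per pass.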

OptiClass(TM)

OptiClass provides a simple but effective means of managing the quality of service defined for various types of traffic. It permits network operators to segregate traffic to provide more control over the way that network capacity is divided among users. This is especially important when there are multiple user services on one network.

Rather than limiting the user to the four broad classes of service initially defined by the ATM standards committees, OptiClass can provide up to 16 classes of service (service subclasses) that can be further defined by the user and assigned to connections. Some of the COS parameters that may be assigned include:

These class of service parameters are based on the standards-based Quality of Service parameters and are software programmable by the user. The Cisco BPX 8600 series broadband switch provides separate queues for each traffic class.

AutoRoute

With AutoRoute, connections in Cisco WAN Switching cell relay networks are added if there is sufficient bandwidth across the network and are automatically routed when they are added. The user needs to enter only the endpoints of the connection at one end of the connection; the Cisco IPX narrowband switch, Cisco IGX 8400 series multiband switch, and Cisco BPX 8600 series broadband switch software automatically sets up a route based on a sophisticated routing algorithm. This feature, called AutoRoute, is standard on all Cisco WAN Switching nodes.

System software automatically sets up the most direct route after considering the network topology and status, the amount of spare bandwidth on each trunk, as well as any routing restrictions entered by the user (e.g. avoid satellite links). This avoids having to manually enter a routing table at each node in the network. AutoRoute simplifies adding connections, speeds rerouting around network failures, and provides higher connection reliability.
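
As a simplified illustration of this kind of route selection (not the actual AutoRoute algorithm), the following Python sketch picks a least-cost path over trunks that have sufficient spare bandwidth and that do not carry a restricted attribute such as a satellite hop; the data structures, names, and cost model are hypothetical.

import heapq

def select_route(trunks, src, dst, required_bw, avoid=frozenset()):
    """Pick a least-cost route with enough spare bandwidth.

    trunks: list of (node_a, node_b, spare_bw, cost, attrs) tuples,
    where attrs is a set of labels such as {"satellite"}.
    Returns the list of nodes on the route, or None if no route fits.
    """
    graph = {}
    for a, b, spare, cost, attrs in trunks:
        if spare < required_bw or attrs & avoid:
            continue                              # trunk cannot carry this connection
        graph.setdefault(a, []).append((b, cost))
        graph.setdefault(b, []).append((a, cost))

    heap, visited = [(0, src, [src])], {}
    while heap:
        dist, node, path = heapq.heappop(heap)
        if node == dst:
            return path                           # first pop of dst is the best route
        if visited.get(node, float("inf")) <= dist:
            continue
        visited[node] = dist
        for nxt, cost in graph.get(node, []):
            heapq.heappush(heap, (dist + cost, nxt, path + [nxt]))
    return None

# Example (hypothetical topology): select_route(trunks, "A", "D", required_bw=50, avoid={"satellite"})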

PNNI

The Private Network to Network Interface (PNNI) protocol provides a standards-based dynamic routing protocol for ATM and frame relay switched virtual circuits (SVCs). PNNI is an ATM Forum-defined interface and routing protocol that is responsive to changes in network resources and availability, and that scales to large networks. PNNI is available on the Cisco BPX 8600 series broadband switch when an Extended Services Processor (ESP) is installed. For further information about PNNI and the ESP, refer to the Cisco WAN Switching BPX Service Node Extended Services Processor Installation and Operation document.

Congestion Management, VS/VD

Networks of Cisco BPX 8600 series broadband switches, Cisco IGX 8400 series multiband switches, and Cisco IPX narrowband switches provide a choice of two dynamic rate-based congestion control methods: ABR with VS/VD and ForeSight. This section describes standard ABR with VSVD.


Note   ABR with VSVD is an optional feature that must be purchased and enabled on a single node for the entire network.

When an ATM connection is configured for Standard ABR with VSVD per ATM Forum TM 4.0, RM (Resource Management) cells are used to carry congestion control feedback information back to the connection's source from the connection's destination.

The ABR sources periodically interleave RM cells into the data they are transmitting. These RM cells are called forward RM cells because they travel in the same direction as the data. At the destination these cells are turned around and sent back to the source as Backward RM cells.

The RM cells contain fields to increase or decrease the rate (the CI and NI fields) or set it at a particular value (the explicit rate ER field). The intervening switches may adjust these fields according to network conditions. When the source receives an RM cell it must adjust its rate in response to the setting of these fields.

When spare capacity exists within the network, ABR with VSVD permits the extra bandwidth to be allocated to active virtual circuits.
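
The following Python sketch illustrates, in simplified form, how an ABR source of this kind might adjust its allowed cell rate when a backward RM cell arrives, using the CI, NI, and ER fields described above. The rate increase/decrease factors (RIF, RDF) shown are placeholder values rather than configured defaults, and the function is illustrative rather than an implementation of the switch software.

def adjust_acr(acr, rm_cell, pcr, mcr, rif=1/16, rdf=1/16):
    """Adjust an ABR source's allowed cell rate (ACR) from a backward RM cell.

    rm_cell is a dict with 'ci' (congestion indication), 'ni' (no increase),
    and 'er' (explicit rate).  rif/rdf are illustrative rate increase and
    decrease factors, not actual configured values.
    """
    if rm_cell["ci"]:
        acr -= acr * rdf              # congestion reported: multiplicative decrease
    elif not rm_cell["ni"]:
        acr += rif * pcr              # room to grow: additive increase
    acr = min(acr, rm_cell["er"], pcr)  # never exceed the explicit rate or PCR
    return max(acr, mcr)              # never fall below the minimum cell rate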

Congestion Management, ForeSight

Networks of Cisco BPX 8600 series broadband switches, Cisco IGX 8400 series multiband switches, and Cisco IPX narrowband switches provide a choice of two dynamic rate-based congestion control methods: ABR with VS/VD and ForeSight. This section describes ForeSight.


Note   ForeSight is an optional feature that must be purchased and enabled on a single node for the entire network.

ForeSight may be used for congestion control across Cisco BPX 8600 series broadband switches, Cisco IGX 8400 series multiband switches, and Cisco IPX narrowband switches for connections that have one or both endpoints terminating on cards other than BXM cards (for example, ASI cards). The ForeSight feature is a dynamic, closed-loop, rate-based congestion management feature that yields bandwidth savings compared to non-ForeSight-equipped trunks when transmitting bursty data across cell-based networks.

ForeSight permits users to burst above their committed information rate for extended periods of time when there is unused network bandwidth available. This enables users to maximize the use of network bandwidth while offering superior congestion avoidance by actively monitoring the state of shared trunks carrying frame relay traffic within the network.

ForeSight monitors each path in the forward direction to detect any point where congestion may occur and returns the information back to the entry to the network. When spare capacity exists within the network, ForeSight permits the extra bandwidth to be allocated to active virtual circuits. Each PVC is treated fairly by allocating the extra bandwidth based on each PVC's committed bandwidth parameter.

Conversely, if the network reaches full utilization, ForeSight detects this and quickly acts to reduce the extra bandwidth allocated to the active PVCs. ForeSight reacts quickly to network loading in order to prevent dropped packets. Periodically, each node automatically measures the delay experienced along a frame relay PVC. This delay factor is used in calculating the ForeSight algorithm.

With basic frame relay service, only a single rate parameter can be specified for each PVC. With ForeSight, the virtual circuit rate can be specified based on a minimum, maximum, and initial transmission rate for more flexibility in defining the frame relay circuits.
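
As a rough illustration only, and not the actual ForeSight algorithm, the following Python sketch shows how a closed-loop, rate-based scheme can move a virtual circuit's rate between the configured minimum and maximum transmission rates in response to congestion feedback, starting from the initial rate. The step sizes and function name are hypothetical.

def foresight_step(rate, congested, severe, min_rate, max_rate,
                   up_step=0.10, down_factor=0.87, severe_factor=0.5):
    """One illustrative update of a closed-loop, rate-based congestion scheme.

    The rate is nudged up toward max_rate while the path reports no
    congestion, stepped down when congestion is reported, and cut sharply
    on severe congestion.  Adjustment factors are placeholders.
    """
    if severe:
        rate *= severe_factor         # severe congestion: cut the rate hard
    elif congested:
        rate *= down_factor           # mild congestion: step the rate down
    else:
        rate += up_step * rate        # no congestion: claim spare bandwidth
    return max(min_rate, min(rate, max_rate))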

ForeSight provides effective congestion management for PVCs traversing broadband ATM as well. ForeSight operates at the cell relay level that lies below the frame relay services provided by the Cisco IPX narrowband switch. With the queue sizes utilized in the Cisco BPX 8600 series broadband switch, the bandwidth savings are approximately the same as those experienced with lower-speed trunks. When the cost of these lines is considered, the savings offered by ForeSight can be significant.

ELMI

ELMI is an enhancement to LMI. ELMI adds capabilities that are not currently supported in LMI so that network switches, e.g., Cisco BPX 8600 series broadband switches, Cisco IGX 8400 series multiband switches, etc., can inform a user (routers, bridges, etc.) about network parameters such as various quality of service (QoS) parameters. Depending on the implementation, these might be such parameters as Committed Information Rate (CIR), Committed Burst Size (Bc), Excess Burst Size (Be), maximum Frame Size, etc. Currently, ELMI support includes the UFM-U and UFM-C cards on the Cisco IGX 8400 series multiband switch.

Cell Relay Networking, ATM and FastPacket

Cell relay technology is also referred to as cell switching, FastPacket, or Asynchronous Transfer Mode (ATM). The Cisco WAN switching FastPacket technology uses fixed-length 24-byte cells, while ATM uses a standards-based 53-byte cell. All of these terms describe a switching and multiplexing technique in which user data is placed into fixed-length cells that are routed to their destination without regard to content.

Cell relay communications networks use high-speed digital trunks to link network nodes which provide customer access and network routing functions. Cell relay networks are characterized by very high throughput, short delays, and very low error rates. These networks provide highly reliable transport services to the user without the overhead associated with extensive error control implementations.

There are currently three basic methods employed for transmitting data over digital trunks: the classic time division multiplexing techniques, frame switching, and cell relay.

Cell relay networks utilize small, fixed-length data packets, called cells, that contain an address identifying the network connection and a payload. The use of a common packet format for the transport of all network traffic results in simplified routing and multiplexing.

Unlike the Time Division Multiplexing technology used in previous systems, cell relay technology uses network bandwidth only when there is information to send. Connections are established in the network configuration but do not generate cells when idle or when there is no data to be sent. These connection types are referred to as Permanent Virtual Circuits (PVCs). Once set up, they are permanent, but they are virtual in that they do not use any network bandwidth unless there is information to transmit.

When PVCs need to transmit information, the data is segmented into cells and the cells are then assigned to the proper trunk that has bandwidth available. As a result, cell relay networks use about half the bandwidth of TDM networks for most voice and data applications. The net result is a statistical sharing of the network trunk facilities that effectively increases the amount of traffic a network can accommodate.
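
A minimal illustration of this segmentation idea, assuming a 48-byte cell payload as in ATM and ignoring AAL trailers, is sketched below in Python; the function name and padding behavior are simplified for clarity. An idle connection produces no cells at all, so it consumes no trunk bandwidth.

def segment_into_cells(vci, data, payload_size=48):
    """Split a data burst (bytes) into fixed-size cell payloads tagged with a VCI.

    Padding of the final partial payload is simplified here; real AAL
    trailers and length fields are omitted.
    """
    cells = []
    for offset in range(0, len(data), payload_size):
        chunk = data[offset:offset + payload_size]
        chunk = chunk.ljust(payload_size, b"\x00")   # pad the last cell
        cells.append((vci, chunk))
    return cells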

Cell relay networks are especially useful for LAN to LAN interconnects. LAN data tends to occur in bursts with periods of inactivity in between the bursts. Cell relay connections provide bandwidth on demand for these bursty data applications and can dynamically allocate unused bandwidth from idle connections to active connections.

Cells typically consist of a short header with a destination address and a payload for carrying the user data. Cell length can be either fixed or variable depending on the system type. Cell buffers are employed to temporarily store data to allow for processing and routing.

Asynchronous Transfer Mode (ATM)

Asynchronous Transfer Mode is a well-defined standard for broadband, cell-switching networks. It offers the ability to intermix various types of traffic and dynamic bandwidth allocation to maximize the utilization of network bandwidth. Traffic type may be intermixed from cell to cell and all cells have a relatively small variability in the end-to-end delay.

ATM is a connection-oriented network protocol using small, fixed-length cells for carrying data. The small cell size minimizes the delay in building the cell and other queuing delays in transmitting the information across the network. The fixed-length cell size simplifies processing, supports higher switching speeds, and minimizes the uncertainty in delays experienced by variable cell/frame length of some common LAN protocols.

ATM traffic is carried in fixed length (53-byte) cells at high speeds (typically DS3 or E3, OC3/STM-1, OC12/STM-4 and above). The cell size was chosen as a compromise between a small cell with short delay, better for voice quality, and a larger cell size with a better ratio of data to overhead, best for data transmission.

The asynchronous aspect of ATM refers to the fact that data is transmitted only when there is actual information to be sent unlike synchronous transfer modes, such as TDM, where data is continuously being sent, even when it is an idle code. This leads to better utilization of network bandwidth.

The ATM protocol provides a clearly defined delineation between the transport layer and the application layer. Since the ATM protocol is independent of the transmission speed of the connection, it simplifies the network data processing requirements and facilitates scaling of the transmission facilities to accommodate the needs of each individual user while providing an economical growth path as demands on the network increase.

Unlike most LAN protocols, ATM is connection-oriented. Before data transfer can occur, an end-to-end connection must be established. Once a connection is defined, ATM cells are self-routing in that each cell contains a header with an address that indicates the destination. This saves processing time at each intermediate node since the routing is pre-determined.

Each ATM cell header contains two address fields, a Virtual Path Identifier (VPI) and a Virtual Channel Identifier (VCI). These two fields serve to identify each connection across a single link in the network. ATM switches use either the VPI alone or the VPI and VCI fields together to switch cells from an input port to an output port at each network node.
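
The following Python sketch decodes the five header bytes of a UNI cell into these fields, following the standard UNI layout (4-bit GFC, 8-bit VPI, 16-bit VCI, 3-bit payload type, 1-bit CLP, 8-bit HEC); at the NNI, the GFC bits are instead used to widen the VPI to 12 bits. This is an illustration of the header format, not switch code.

def parse_uni_header(header):
    """Decode the 5-byte ATM UNI cell header into its fields."""
    b0, b1, b2, b3, b4 = header[:5]
    gfc = b0 >> 4                                  # Generic Flow Control (UNI only)
    vpi = ((b0 & 0x0F) << 4) | (b1 >> 4)           # 8-bit Virtual Path Identifier
    vci = ((b1 & 0x0F) << 12) | (b2 << 4) | (b3 >> 4)  # 16-bit Virtual Channel Identifier
    pt = (b3 >> 1) & 0x07                          # Payload Type
    clp = b3 & 0x01                                # Cell Loss Priority
    hec = b4                                       # Header Error Control
    return {"gfc": gfc, "vpi": vpi, "vci": vci, "pt": pt, "clp": clp, "hec": hec}

# Example: parse_uni_header(bytes([0x00, 0x12, 0x34, 0x50, 0x00])) yields vpi=1, vci=0x2345.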

PVCs

Permanent Virtual Circuits (PVCs) are connections which, after being added to the network, remain relatively static. PVCs are generally defined by the network operator. Although no bandwidth is dedicated on a link to a PVC when the connection is set up, the operator informs the network about the characteristics of the desired circuit, and sufficient network capacity is reserved.

SVCs

Switched Virtual Circuits (SVCs), on the other hand, are dynamic in that they are established by the user on-demand. Enhancements of the ATM standards include provisions for SVCs. SVCs can provide dial-up capabilities controlled by the user. The Cisco BPX 8600 series broadband switch with the ESP installed provides SVC capability for ATM and frame relay connections.

UNI/NNI Interfaces

Two network interfaces are defined in the ATM standards: a User to Network Interface (UNI) and a Network to Network Interface (NNI). The UNI is any interface between a user device and an ATM network; this interface terminates on an ATM switch. The UNI is used to send messages from the network to the user device on the status of the circuit and rate control information to prevent network congestion. The NNI is used at the boundary between two different ATM networks, e.g., a private network and a public network. Information passing across an NNI is related to circuit routing and the status of the circuit in the adjacent network.

Cisco WAN Switching FastPackets

Unlike early X.25 networks, which used low bit-rate facilities, Cisco IPX narrowband switch FastPacket networks were designed to take advantage of widely available, high-speed T1 and E1 networks. The fixed-length, 24-byte FastPacket cell was specifically designed to fit into the 192-bit payload of a standard T1 frame (24 bytes = 192 bits).

Because a short, fixed-length cell is used to carry information, delay through the network is held to a minimum permitting delay-sensitive applications, such as digitized voice and SNA to be successfully carried by FastPacket networks in addition to LAN traffic.

FastPacket networks depend on digital transmission facilities that transmit with very low error rates. Taking advantage of this, only minimal error checking is performed, and only at the FastPacket destination rather than at all intermediate network nodes. Using a simplified protocol for transmitting data, FastPacket networks are able to utilize hardware-based switch fabrics, resulting in very high switching speeds.

As a result, FastPacket networks have very high throughput and low delays. They can be used for all kinds of communication traffic: voice, synchronous data and video, as well as the low speed data applications that are being serviced by conventional packet networks to date. FastPacket transmits all information across the digital trunk in a single packet format, including voice, data, video, and signaling, and all packets are transported through the network using common switching, queuing, and transmission techniques, no matter what the connection type or its bandwidth requirements.

FastPacket networks clearly demonstrate the advantages of cell-relay technology. Currently available wide area information networks are being built using digital trunks with bandwidths up to approximately 2 Mbps. But rapid growth of Local Area Networks, linking communities of personal computers and workstations with distributed databases, has placed increased bandwidth requirements on these wide area networks.

The migration path is to broadband networks, employing higher speed trunks, from T3 to OC3 rates and beyond, to satisfy this increased demand for bandwidth.

As bandwidth demands increase in a network, the Cisco IPX narrowband switch and the Cisco IGX 8400 series multiband switch may be connected to ATM networks, and, by converting between similar cell protocols (FastPackets and ATM cells), take advantage of the high speed, flexibility, and scalability offered by ATM.

ATM Product Family Overview

The Cisco WAN switching product family includes the Cisco BPX 8600 series broadband switch, which can include the Extended Services Processor (ESP) for SVC switching of ATM and frame relay connections; the Cisco IPX narrowband switches and the Cisco IGX 8400 series multiband switches; Cisco MGX 8220 edge concentrators, Cisco IGX 8400 series multiband switches, and Cisco IPX narrowband switches configured as interface shelves; StrataSphere network management products; and access products, including the Cisco MGX 8220 edge concentrator, the Cisco 3810, and the FastPAD.

Cisco's WAN switching systems are flexible, modular, cell-based platforms that support network requirements ranging from a few voice, data, and frame relay connections up to a multi-service ATM network with thousands of users.

A common software architecture and baseline ensures full interoperability within the Cisco WAN switching product line including the Cisco IPX narrowband switch, the Cisco IGX 8400 series multiband switch, and the Cisco BPX 8600 series broadband switch.

StrataSphere, Standards-Based Network Management

Conforming to the Network Management Forum's advanced management framework for integrated service management and process automation, StrataSphere is a standards-based, multi-protocol management architecture. StrataSphere combines embedded management intelligence distributed throughout the network elements (for fast implementation) with advanced system applications and tools on centrally located NMS workstations to provide integrated fault, performance, and configuration management functions unique to cell-based networks.

StrataSphere automates key network management processes such as service provisioning, billing, statistics collection, and network modeling and optimization. StrataSphere provides a high-volume, standards-based usage billing solution for emerging services such as ATM, as well as cell-based frame relay services, and supports the high level of statistics collection (up to one million statistics per hour per billing station) required in a next-generation ATM product. Additional information on StrataSphere Network Management is provided in Chapter 3, "Cisco WAN Manager Network Management."

System Switch, ESP, Edge Concentrator, and Network Access Products Description

The Cisco BPX 8600 Series Broadband Switch

The Cisco BPX 8600 series broadband switch is a standards-based, high-capacity (19.2 Gbps) broadband ATM switch that, with the co-located Cisco MGX 8220 edge concentrator and ESP, provides backbone ATM switching and delivers a wide range of user services (Figure 1-9). Fully integrated with the Cisco IPX narrowband switch and the Cisco IGX 8400 series multiband switch, the Cisco BPX 8600 series broadband switch is a scalable, standards-compliant unit. Using a multi-shelf architecture, the Cisco BPX 8600 series broadband switch supports both narrowband and broadband user services. The modular, multi-shelf architecture enables users to incrementally expand the capacity of the system as needed.

The Cisco MGX 8220 edge concentrator configured as an interface shelf supports a wide range of narrowband interfaces. It converts all non-ATM traffic into 53-byte ATM cells and concentrates this traffic for high speed switching by the Cisco BPX 8600 series broadband switch.

Similarly, the Cisco IPX narrowband switches or Cisco IGX 8400 series multiband switches may be configured as shelves and connected to a Cisco BPX 8600 series broadband switch configured as a routing hub to provide a low-cost service input for frame relay to ATM interworking for the Cisco BPX 8600 series broadband switch. The Cisco IPX narrowband switches and Cisco IGX 8400 series multiband switches configured as interface shelves concentrate this traffic over an ATM trunk connected to the Cisco BPX 8600 series broadband switch.


Figure 1-8: Cisco BPX 8600 Series Broadband Switch Configuration


The ESP

The Extended Services Processor (ESP) is an adjunct shelf co-located with the Cisco BPX 8600 series broadband switch (Figure 1-9). Typically, a Cisco MGX 8220 Edge Concentrator is also co-located with the Cisco BPX 8600 series broadband switch to provide ATM and frame relay switched virtual circuits (SVCs). For further information about the ESP, refer to the Cisco WAN Switching BPX Service Node Extended Services Processor Installation and Operation document.


Figure 1-9: Cisco BPX 8600 Series Broadband Switch with Co-Located ESP and Cisco MGX 8220 Edge Concentrator


The Cisco MGX 8220 Edge Concentrator

The Cisco MGX 8220 edge concentrator is a standards-based ATM interface shelf that provides a low-cost service interface to multi-service networks and is usually co-located with a Cisco BPX 8600 series broadband switch (Figure 1-9). The Cisco MGX 8220 edge concentrator provides a broad range of narrowband user interfaces. Release 4 of the Cisco MGX 8220 edge concentrator provides T1/E1 and subrate frame relay, FUNI (Frame Based UNI over ATM), T1/E1 ATM, T1/E1 Circuit Emulation Service (CES), frame relay to ATM network and service interworking for traffic over the ATM network via the Cisco BPX 8600 series broadband switch, HSSI and X.21 interfaces, and SRM-3T3 enhancements. The Cisco MGX 8220 edge concentrator allows users to concentrate large numbers of PVC connections over high-speed ATM trunks. For further information, refer to the Cisco MGX 8220 Edge Concentrator Installation and Configuration and the Cisco MGX 8220 Edge Concentrator Command Reference documents. In conjunction with the Cisco BPX 8600 series broadband switch and the ESP, the Cisco MGX 8220 edge concentrator also supports ATM and frame relay switched virtual circuits (SVCs).

The Cisco IGX 8400 Series Multiband Switch

The Cisco IGX 8400 series multiband switch is a standards-based, 1.2 Gbps, highly scalable ATM switch that provides interfaces to support current legacy and emerging broadband applications (Figure 1-10). The Cisco IGX 8400 series multiband switch is designed for use in public or private wide area networks (WANs) using subrate, fractional T1, E1, T3, or E3 transmission facilities. The Cisco IGX 8400 series multiband switch supports multiservice traffic, including voice, data, Frame Relay, and ATM T3/E3 connections. The Cisco IGX 8400 series multiband switch currently supports CBR and VBR ATM connection types. The Frame Relay connection interfaces include channelized and unchannelized T1/E1, HSSI, X.21, and V.35.

The Cisco IGX 8400 series multiband switch provides the capability to migrate wide area traffic from older time-division multiplexed (TDM) networks to more efficient, robust, and higher-speed ATM networks. The Cisco IGX 8400 series multiband switch is fully compatible with the Cisco IPX narrowband switch and the Cisco BPX 8600 series broadband switch.

The Cisco IGX 8400 series multiband switch uses standard ATM, together with efficient cell adaptation, to provide seamless connectivity across multiband networks using trunks ranging from 128 kbps to 155 Mbps. The Cisco IGX 8400 series multiband switch supports FastPAD access trunk connectivity from 9.6 kbps up to 2 Mbps. The Cisco IGX 8400 series multiband switch also supports the Cisco IPX narrowband switch's narrowband cell relay protocol.

ELMI is used to inform a user (routers, bridges, etc.) about network parameters such as various quality of service (QoS) parameters that may be considered in user congestion control actions. The UFM-U and UFM-C frame relay cards support ELMI.

CAS switching enables the UVM/VNS to switch calls coming from CAS/DTMF-based PBX switches. The CAS signaling and DTMF tones are translated by the UVM into CCS call control messages which are processed by the VNS. The UVM card supports CAS switching.


Figure 1-10: A Cisco IGX 8400 Series Multiband Switch Configuration


The Cisco IPX Narrowband Switch

The Cisco IPX narrowband switch is a narrowband cell switch that accepts frame relay, digitized voice and FAX, encoded video, data streams, etc., and adapts these information streams into fixed-length cells (Figure 1-11). These cells are then routed to the appropriate network interfaces, either ATM or FastPacket. The Cisco IPX narrowband switch supports frame relay to ATM network interworking, which provides the advantages of transporting frame relay traffic across a high-speed ATM network.

The Cisco IPX narrowband switch is compatible with Cisco IGX 8400 series multiband switches and Cisco BPX 8600 series broadband switches, and supports FairShare, OptiClass, frame relay to ATM Network Interworking protocol translation, VAD, RPS, ABR with VS/VD, ForeSight congestion management, and other features.


Figure 1-11: A Cisco IPX Narrowband Switch Configuration


Network Access Products

These products, located at the outer edges of a network, offer several subrate, narrowband, and broadband configurations, such as multiplexers, frame relay access devices (FRADs), and routers with a wide range of interface options. They enable users to convert legacy and lower-speed traffic into fixed-length frames or cells for both narrowband and broadband switching.

FastPADs

The multi-media FastPADs are OEM products that provide voice and data integration and Frame Relay switching over Frame Relay or leased-line networks. A FastPAD connects to a Cisco IPX narrowband switch or to a frame relay port on a Cisco IGX 8400 series multiband switch and is managed by Cisco WAN Manager. The multi-protocol FastPADs are OEM products that provide legacy protocol support.

Cisco 3810 Series

The Cisco 3810 provides multi-service integration of voice, fax, video, legacy data, and LAN traffic over either a Frame Relay or ATM trunk. The 3810 typically connects to a frame relay port on a Cisco IGX 8400 series multiband switch configured as a routing node or to another 3810 over leased line or public frame relay. Cisco WAN Manager supports provisioning and management of the 3810.

INS-VNS and INS-DAS

The Intelligent Network Server products, INS-VNS and INS-DAS, use a robust, high-powered processing platform to add several important capabilities to Cisco WAN switching networks. In addition, the INS-VNS and INS-DAS support some form of standards-based signaling between customer premises equipment (CPE) and a network of Cisco BPX 8600 series broadband switches, Cisco IGX 8400 series multiband switches, and Cisco IPX narrowband switches. Typically this signaling is a variation of the common-channel, message-oriented Integrated Services Digital Network (ISDN) or Broadband ISDN (B-ISDN) signaling protocols. The INS-VNS and INS-DAS applications both interpret these industry-standard signaling messages, translate the logical addresses to the appropriate physical endpoints of the network, and instruct the Cisco IGX 8400 series multiband switches and Cisco IPX narrowband switches to establish the connection required for the particular application. The switches then take over, dynamically establishing the optimum route through the network and maintaining the connection for its duration.

The two primary INS applications are:

Each INS application uses one or more adjunct processors that are co-located with a node (that is, a Cisco BPX 8600 series broadband switch/Cisco MGX 8220 edge concentrator, a Cisco IPX narrowband switch, or a Cisco IGX 8400 series multiband switch) and often installed in the same equipment rack. Available in either AC- or DC-powered models, the base INS processor is a scalable UNIX platform and contains:

The base INS-VNS and INS-DAS processors are equipped with different interface modules, memory and disk configurations, and different application software.

For further information, refer to "Intelligent Network Server DAS and VNS" in Chapter 4, "Network Services Overview," and to the INS documents.

System Software Description

The Cisco WAN switching cell relay system software shares most core system software, as well as a library of applications, between platforms. System software provides basic management and control capabilities to each node.

System software on each Cisco IPX narrowband switch, Cisco IGX 8400 series multiband switch, and Cisco BPX 8600 series broadband switch manages its own configuration, fault isolation, failure recovery, and other resources. Since no remote resources are involved, this ensures rapid response to local problems. This distributed network control, rather than centralized control, provides increased reliability.

Software among multiple nodes cooperates to perform network-wide functions such as trunk and connection management. This multi-processor approach ensures rapid response with no single point of failure. System software applications provide advanced features that may be installed and configured as required by the user.

Some of the many software features are:

The system software, the configuration database, and the firmware that controls the operation of each card type are resident in programmable memory and can be stored off-line in the StrataView Plus NMS for immediate backup if necessary. This software and firmware are easily updated remotely from a central site or from Cisco Customer Service, which reduces the likelihood of early obsolescence.

Connections and Connection Routing

The routing software supports the establishment, removal and rerouting of end-to-end channel connections. There are three modes:

The system software uses the following criteria when it establishes an automatic route for a connection:

When a node reroutes a connection, it applies these criteria along with the assigned priority and any user-configured routing restrictions. The node analyzes trunk loading to determine the number of cells or packets the network can successfully deliver. Within these loading limits, the node can calculate the maximum combination of each type of connection (synchronous data, ATM traffic, frame relay data, multimedia data, voice, and compressed voice) allowed on a network trunk.
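As an informal illustration of this loading calculation (not the actual switch software), the following Python sketch checks whether a new connection of a given type fits within a trunk's remaining load limit. The per-connection load values and the trunk capacity are placeholder assumptions.

    # Illustrative trunk-loading check; the load units and capacity are
    # placeholder values, not Cisco-published figures.

    TRUNK_CAPACITY = 1000                  # total cell-load units on the trunk

    CONNECTION_LOAD = {                    # assumed load per connection type
        "voice": 12,
        "compressed_voice": 6,
        "synchronous_data": 20,
        "frame_relay": 35,
        "atm": 50,
        "multimedia": 80,
    }

    def can_admit(existing_types, new_type):
        """Return True if a connection of new_type still fits on the trunk."""
        used = sum(CONNECTION_LOAD[t] for t in existing_types)
        return used + CONNECTION_LOAD[new_type] <= TRUNK_CAPACITY

    # Example: a trunk already carrying 19 ATM connections (950 units)
    # can still admit a voice connection but not a multimedia connection.
    assert can_admit(["atm"] * 19, "voice") is True        # 950 + 12 <= 1000
    assert can_admit(["atm"] * 19, "multimedia") is False  # 950 + 80 > 1000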

Network-wide T3, E3, OC-3, or OC-12 connections are supported between Cisco BPX 8600 series broadband switches, with ATM user devices terminating on Cisco BPX 8600 series broadband switch UNI ports. These connections are routed using the virtual path identifier (VPI) and/or virtual channel identifier (VCI) fields in the ATM cell header.

Narrowband connections, terminating on Cisco IPX narrowband switches, can be routed over high-speed ATM backbone networks built on Cisco BPX 8600 series broadband switches. FastPacket addresses are translated into ATM cell addresses, which are then used to route the connections between Cisco BPX 8600 series broadband switches and to ATM networks containing mixed-vendor ATM switches. Routing algorithms select broadband links only, avoiding narrowband nodes that could create a choke point.
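Routing over these ATM trunks keys off the VPI and VCI fields carried in the 5-byte header of each 53-byte cell. The following Python sketch decodes the standard UNI header format; it is a generic ATM illustration, not Cisco switch code.

    def parse_uni_header(header):
        """Extract VPI and VCI from a 5-byte ATM UNI cell header."""
        if len(header) != 5:
            raise ValueError("ATM cell header must be 5 bytes")
        b0, b1, b2, b3, _hec = header
        vpi = ((b0 & 0x0F) << 4) | (b1 >> 4)                # 8-bit VPI (UNI)
        vci = ((b1 & 0x0F) << 12) | (b2 << 4) | (b3 >> 4)   # 16-bit VCI
        return vpi, vci

    # Example: a cell carrying VPI=1, VCI=100 (HEC byte shown as zero).
    assert parse_uni_header(bytes([0x00, 0x10, 0x06, 0x40, 0x00])) == (1, 100)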

Connection Routing Groups

The rerouting mechanism presorts connections in order of cell loading as they are added. Rerouting begins with the group containing the connections with the largest cell loadings and proceeds down to the group containing the connections with the smallest cell loadings. These groups are referred to as routing groups; each routing group contains the connections whose loading falls within a particular range.

There are three configurable parameters for the rerouting groups:

The number of routing groups

The starting load size of the first group

The incremental load size of each succeeding group

The three routing group parameters are configured with the cnfcmparm command.

For example, there might be 10 groups, with the starting load size of the first group at 50 and the incremental load size of each succeeding group at 10 cells. Group 0 would then contain all connections requiring 0-59 cell load units, group 1 would contain all connections requiring 60-69 cell load units, and so on, up through group 9, which would contain all connections requiring 140 or more cell load units.

Table 1-1: Routing Group Configuration Example

    Routing group    Connection cell loading
    0                0-59
    1                60-69
    2                70-79
    3                80-89
    4                90-99
    5                100-109
    6                110-119
    7                120-129
    8                130-139
    9                140 and up
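The grouping rule in Table 1-1 can be expressed as a short Python sketch. It is only an informal illustration; the parameter names are assumptions and do not reflect cnfcmparm syntax.

    # Illustrative routing-group assignment and reroute ordering.
    # Parameters mirror the example above: 10 groups, first boundary at 60,
    # 10-cell increments; these values are for illustration only.

    NUM_GROUPS = 10
    FIRST_BOUNDARY = 60          # loads below this fall into group 0
    INCREMENT = 10               # width of each succeeding group

    def routing_group(cell_load):
        """Return the routing group index (0..NUM_GROUPS-1) for a connection."""
        if cell_load < FIRST_BOUNDARY:
            return 0
        group = 1 + (cell_load - FIRST_BOUNDARY) // INCREMENT
        return min(group, NUM_GROUPS - 1)

    def reroute_order(connections):
        """Yield connections group by group, largest-loading group first."""
        groups = {g: [] for g in range(NUM_GROUPS)}
        for conn, load in connections:
            groups[routing_group(load)].append(conn)
        for g in range(NUM_GROUPS - 1, -1, -1):
            for conn in groups[g]:
                yield conn

    # Example: loads 55, 65, and 145 map to groups 0, 1, and 9, respectively.
    assert [routing_group(x) for x in (55, 65, 145)] == [0, 1, 9]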

Network Synchronization

Cisco WAN switching cell relay networks use a fault-tolerant network synchronization method of the type recommended for Integrated Services Digital Network (ISDN). Any circuit line, trunk, or external clock input can be selected to provide the primary network clock. Any line can be configured as a secondary clock source to be used if the primary clock source fails.

All nodes are equipped with a redundant, high-stability internal oscillator that meets Stratum 3 (Cisco BPX 8600 series broadband switch) or Stratum 4 requirements. Each node keeps a map of the network's clocking hierarchy. The network clock source is automatically switched in the event of failure of a clock source.
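The primary/secondary fallback described above can be sketched as follows in Python; the source names and health flags are illustrative, not the switch clocking software.

    # Illustrative clock-source selection with primary/secondary/internal fallback.

    def select_clock_source(sources):
        """Pick the highest-priority healthy clock source.

        sources: list of (name, healthy) tuples ordered primary first,
        for example [("primary trunk", False), ("secondary line", True)].
        Falls back to the node's internal Stratum oscillator if none is healthy.
        """
        for name, healthy in sources:
            if healthy:
                return name
        return "internal oscillator"

    assert select_clock_source([("primary trunk", False),
                                ("secondary line", True)]) == "secondary line"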

Cell-based networks are less likely than traditional TDM networks to lose customer data because of the re-frames that occur during a clock switchover or other momentary disruption of network clocking. To prevent data loss, data is held in buffers and packets are not sent until a trunk has regained frame synchronism.

Network Availability

Hardware and software components are designed to provide node availability in excess of 99.99%. Network availability is affected much more by link failure, which has a higher probability of occurrence, than by equipment failure.
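To put the 99.99% figure in concrete terms, the short calculation below converts an availability percentage into expected downtime per year; it is an arithmetic illustration, not a published downtime specification.

    # Convert an availability percentage into expected downtime per year.
    MINUTES_PER_YEAR = 365 * 24 * 60          # 525,600 minutes

    def downtime_minutes_per_year(availability_percent):
        return MINUTES_PER_YEAR * (1.0 - availability_percent / 100.0)

    # 99.99% availability corresponds to roughly 52.6 minutes of downtime per year.
    print(round(downtime_minutes_per_year(99.99), 1))   # -> 52.6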

Because of this, Cisco WAN switches are designed so that connections are automatically rerouted around network trunk failures, often before users detect a problem. System faults are detected and corrective action is taken, often before they affect service. The following paragraphs describe some of the features that contribute to network availability.

System Diagnostics

Each node in the network runs continuous background diagnostics to verify the proper operation of all network trunks, active and standby cards, buses, and switch paths, and to monitor cabinet temperature and power supplies. This background process is transparent to normal network operation.

Failures that affect system service are reported as major alarms. Failures that could affect service later (such as a failure of a standby card) are reported as minor alarms. For example, the following lists some of the failures that will generate an alarm:

When a trunk or circuit line in the network goes into alarm, a loopback test of all cards associated with the failed trunk or line is automatically performed to verify proper operation of the node hardware.

Alarm Reporting

The Cisco BPX 8600 series broadband switch, the Cisco IGX 8400 series multiband switch, the Cisco IPX narrowband switch, and the Cisco MGX 8220 edge concentrator provide both software alarms displayed on operator terminal screens and hardware alarm indicators. The hardware alarms are LED indicators located on the various cards in the node. All cards have LEDs that indicate whether a fault has been detected on the card. On interface cards, LEDs also indicate whether there is a local or remote line failure.

Each power supply has front-panel indicators of proper output. Because the power supplies share the power load, redundant supplies are not idle; all power supplies are active, and if one fails, the others pick up its load. The power supply outputs are monitored, as is the cabinet internal temperature.

Statistical Alarms

The network manager can configure alarm thresholds on a per-trunk basis for statistical transmission problems. Thresholds are configurable for a number of alarm types, including frame slips, out-of-frame conditions, bipolar errors, dropped packets, and packet errors. When an alarm threshold is exceeded, the screen displays an alarm message.
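The idea of per-trunk statistical thresholds can be sketched as follows in Python; the counter names and threshold values are placeholders, not actual switch parameters.

    # Illustrative per-trunk statistical alarm thresholds.

    THRESHOLDS = {
        "frame_slips": 10,
        "out_of_frames": 5,
        "bipolar_errors": 100,
        "dropped_packets": 50,
        "packet_errors": 25,
    }

    def check_trunk_statistics(trunk_name, counters):
        """Return alarm messages for every counter that exceeds its threshold."""
        alarms = []
        for counter, value in counters.items():
            limit = THRESHOLDS.get(counter)
            if limit is not None and value > limit:
                alarms.append("%s: %s exceeded threshold (%d > %d)"
                              % (trunk_name, counter, value, limit))
        return alarms

    # Example: a trunk reporting 12 frame slips raises one statistical alarm.
    print(check_trunk_statistics("TRK 4.1", {"frame_slips": 12, "bipolar_errors": 3}))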

Failure Recovery

In a redundant system, if a hardware failure occurs, a redundant module is automatically switched into service to replace the failed module. These systems provide redundant common control buses, redundant power supplies, and redundant cards. All cards have a 1+1 redundancy option, which provides each card with a dedicated standby. If an active module fails, the system automatically switches to its standby.
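The 1+1 card redundancy described above can be sketched as follows in Python; the class, card, and slot names are illustrative only.

    # Illustrative 1+1 redundancy: each active card has a dedicated standby
    # that is switched into service when the active card fails.

    class RedundantPair:
        def __init__(self, active, standby):
            self.active = active
            self.standby = standby

        def on_failure(self, failed_card):
            """Switch the standby into service if the active card fails."""
            if failed_card == self.active and self.standby is not None:
                self.active, self.standby = self.standby, None
            return self.active

    pair = RedundantPair(active="card in slot 5", standby="card in slot 6")
    assert pair.on_failure("card in slot 5") == "card in slot 6"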

Channel connections on a failed trunk are automatically rerouted to a different trunk if one is available. Rerouting time is a function of the complexity of the network, but normally the first connection is rerouted within milliseconds and the last within several seconds.

Standards

The performance of the systems is compatible with the most recent recommendations from various international standards committees and forums, to assure seamless interworking with other network equipment. The following is a partial list of these standards committees:

