This chapter contains the following:
The Cisco BPX® 8600 Series wide-area switches are standards-based, high-capacity broadband ATM switches that provide backbone ATM switching and IP+ATM services, including Multiprotocol Label Switching (MPLS), and deliver a wide range of other user services (see Figure 1-1). The BPX 8600 Series includes the BPX 8620 wide-area switch, the BPX 8650 IP + ATM switch, and the BPX 8680 universal service switch.
Fully compatible with the Cisco MGX 8800 series wide-area edge switch, the MGX 8220 edge concentrator, and the Cisco IGX 8400 series wide-area switch, the BPX 8620 switch is a scalable, standards-compliant unit. Using a multi-shelf architecture, the BPX switch supports both narrowband and broadband user services. The modular, multi-shelf architecture enables users to incrementally expand the capacity of the system as needed. The BPX switch consists of the BPX shelf, with fifteen card slots, which may be co-located with the MGX 8220 and the Extended Services Processor (ESP), as required.
Three of the slots on the BPX switch shelf are reserved for common equipment cards. The other twelve are general purpose slots used for network interface cards or service interface cards. The cards are provided in sets, consisting of a front card and associated back card. The BPX shelf can be mounted in a rack enclosure which provides mounting for a co-located ESP and the MGX 8220 interface shelves.
The BPX® 8650 is an IP+ATM switch that provides ATM-based broadband services and integrates Cisco IOS® software via Cisco 7200 series routers to deliver Multiprotocol Label Switching (MPLS) services. The BPX 8650 provides the core Internet requirements of scalability, advanced IP services, Layer 2 virtual circuit switching advantages, and Layer 2/Layer 3 interoperability. In addition to scaling Internet services, the BPX 8650 switch enables the user to provision new integrated IP+ATM services such as voice over IP, MPLS virtual private networks (VPNs), and Web hosting services across the ATM backbone.
The BPX 8680 universal service switch is a scalable IP+ATM WAN edge switch that combines the benefits of Cisco IOS® IP with the extensive queuing, buffering, scalability, and quality-of-service (QoS) capabilities provided by the BPX 8600 and MGX 8800 series platforms.
The BPX 8680 switch incorporates a modular, multishelf architecture that scales from small sites to very large sites and enables service providers to meet the rapidly growing demand for IP applications while cost-effectively delivering today's services. The BPX 8680 consists of one or more MGX 8850s connected as feeders to a BPX 8620. Designed for very large installations, the BPX 8680 can scale to 16,000 DS1s by adding up to 16 MGX 8850 concentrator shelves while still being managed as a single node.
With Release 9.2, the BPX supports a number of new features:
With Release 9.2, virtual switch interfaces (VSIs) can be configured on a BXM virtual trunk. The BXM previously did not support virtual trunks. MPLS switching can be enabled on a virtual trunk VSI in a similar manner to a standard trunk or port.
MPLS Class of Service (CoS) templates simplify bandwidth allocation on a class-by-class basis for end-to-end IP CoS in a routed ATM environment. An MPLS switching CoS template is assigned to a VSI when the VSI is enabled. Each template links to two types of tables. One table type defines, on a per-VC basis, bandwidth-related parameters and UPC actions. The other table type defines the parameters required to configure the qbins that provide Quality of Service (QoS) support.
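The relationship between a template, its per-VC parameter table, and its qbin table can be pictured roughly as in the sketch below; the field names and values are illustrative placeholders, not the actual template contents.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class VcParams:
    """Illustrative per-VC bandwidth parameters and UPC action (field names are hypothetical)."""
    pcr_cps: int        # peak cell rate, in cells per second
    scr_cps: int        # sustainable cell rate
    mbs_cells: int      # maximum burst size
    upc_action: str     # action on non-compliant cells, e.g. "tag" or "discard"

@dataclass
class QbinParams:
    """Illustrative qbin (QoS queue) settings (field names are hypothetical)."""
    depth_cells: int    # queue depth
    clp_hi_pct: int     # CLP high threshold, as a percentage of depth
    clp_lo_pct: int     # CLP low threshold
    efci_pct: int       # EFCI marking threshold

@dataclass
class CosTemplate:
    """A CoS template links a per-VC parameter table and a qbin parameter table."""
    vc_table: Dict[str, VcParams]      # keyed by service class
    qbin_table: Dict[int, QbinParams]  # keyed by qbin number

# A VSI enabled for MPLS switching would be associated with one such template.
template = CosTemplate(
    vc_table={"premium": VcParams(pcr_cps=10000, scr_cps=5000, mbs_cells=200, upc_action="discard")},
    qbin_table={10: QbinParams(depth_cells=64000, clp_hi_pct=80, clp_lo_pct=60, efci_pct=40)},
)
```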
With the implementation of VSI 2.0, MPLS supports master VSI controller redundancy and VSI slave controller redundancy.
MPLS-based enhancements include the use of MPLS Class of Service (CoS) templates supported by VSI, and MPLS VSI master and VSI slave redundancy. In addition, MPLS supports Virtual Private Networks (VPNs) via the use of virtual trunks. For additional information, refer to PART 5, MPLS.
Either rt-VBR or nrt-VBR connections are supported. Separate queuing and traffic control features enhance the performance of connections added as rt-VBR connections. For additional information, refer to Chapter 8, ATM Connections.
With Release 9.2, individual ports can be enabled in either trunk (network) or line (service) mode, independent of how other ports on the card are enabled; as a result, ports on the same BXM card can be active in trunk mode or line mode simultaneously. In previous releases, once a port was upped, all ports had to be upped in the same mode.
Virtual trunking is now available on the BXM. Virtual trunks typically are used to connect private virtual networks (PVNs) across a public ATM cloud, taking advantage of the full mesh capabilities of the public network. The virtual trunks can be used for standard ATM Forum traffic or for MPLS traffic.
The BXM card can support up to 31 virtual trunks. The 31 virtual trunks can be configured all on one physical port or distributed across the physical ports on the BXM. Each virtual trunk is associated with a virtual interface which in turn has 16 qbins available to provide traffic engineering VC differentiation. For additional information, refer to Chapter 7, BXM Virtual Trunks.
Virtual private networks using IP over the network allow such groups as companies, campuses, and enterprises to use the capabilities and flexibility of the Internet for employee communication, employee telecommuting, remote site access, and branch office data exchange. The standard Web applications available make for easy and quick site access and utilization.
Cisco's implementation of virtual private networks using MPLS provides scalability through integrated support of ATM switches within an IP core, supports advanced IP services on ATM switches, and delivers traffic engineering and IP-based VPN capabilities on both ATM switches and standard Layer 3 routers. For additional information, refer to Chapter 18, MPLS VPNS with BPX 8650.
Automatic protection switching (APS) provides redundancy for fiber optic line interface connections on BXM cards with SMF and SMFLR OC-3 or OC-12 interfaces.
Sonet Automatic Protection Switching provides the ability to configure a pair of SONET lines for line redundancy so that the hardware automatically switches from the active line to the standby line when the active line fails.
Each redundant line pair consists of a working line and a protection line. The hardware is set up to switch automatically. Upon detection of a signal fail condition (e.g., LOS, LOF, Line AIS, or Bit Error Rate exceeding a configured limit) or a signal degrade condition (BER exceeding a configured limit) the hardware switches from a working line to the protection line.
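The switchover decision can be summarized by the following sketch; the condition names and bit-error-rate thresholds are simplified placeholders for the signal fail and signal degrade criteria described above.

```python
# Simplified sketch of the APS switchover decision described above.
# Condition names and thresholds are illustrative, not the hardware's actual interface.

SIGNAL_FAIL_CONDITIONS = {"LOS", "LOF", "LINE_AIS"}

def should_switch_to_protection(conditions, bit_error_rate,
                                sf_ber_threshold=1e-3, sd_ber_threshold=1e-6):
    """Return True if the working line should be abandoned for the protection line."""
    signal_fail = bool(conditions & SIGNAL_FAIL_CONDITIONS) or bit_error_rate >= sf_ber_threshold
    signal_degrade = bit_error_rate >= sd_ber_threshold
    return signal_fail or signal_degrade

# Example: a loss-of-signal defect on the working line triggers a switch.
print(should_switch_to_protection({"LOS"}, bit_error_rate=0.0))   # True
print(should_switch_to_protection(set(), bit_error_rate=1e-9))    # False
```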
The following APS types of redundancy are supported: APS 1+1, APS 1:1, and APS 1+1 (Annex B).
To support line redundancy only, no additional hardware is required other than cabling. To support card and line redundancy, APS 1+1 requires a new paired backcard. When used with the current BPX chassis, the APS card locations are restricted to slots 2-5 and 10-13. With the new BPX chassis (post Rel. 9.1) this card location restriction is removed. For additional information, refer to Chapter 9, SONET APS.
LMI and ILMI functions for the BXM card are moved from the BCC to the BXM card itself to localize these functions. These functions support virtual UNIs and trunk ports, with a total of 256 sessions on different interfaces (ports, trunks, virtual UNIs) per BXM.
The time to reroute connections varies depending on different parameters, such as the number of connections to reroute, reroute bundle size, etc. It is important to notify the CPE if a connection is derouted and fails to transport user data after a specified time interval. However, it is also desirable not to send out Abit = 0, then Abit = 1 when a connection is derouted and rerouted quickly. Such notifications may prematurely trigger the CPE backup facilities, causing instabilities in an otherwise stable system.
The early Abit Notification with configurable timer feature provides a way to send Abit = 0 status changes over the LMI interface, or to send ILMI traps over the ILMI interface, after connections have been derouted for a certain amount of time. The time period is configurable. The configurable time allows the user the flexibility to synchronize the operation of the primary network and backup utilities, such as dialed backup over the ISDN or PSTN network. The feature can be turned on using the cnfnodeparm command. For further information, refer to the Rel. 9.2.30 Cisco WAN Switching Command Reference.
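A minimal sketch of the delayed notification logic follows; the function names and callback interface are illustrative only, and on the switch the delay itself is set with the cnfnodeparm command as noted above.

```python
import threading

# Conceptual sketch of the configurable early-Abit-notification delay.
# Function names and the callback interface are illustrative; they are not
# the switch software's actual interface.

def on_connection_derouted(conn_id, notify_cpe, timer_seconds):
    """Start a timer; only notify the CPE (Abit = 0 over LMI, or an ILMI trap)
    if the connection is still derouted when the timer expires."""
    def expire():
        if still_derouted(conn_id):
            notify_cpe(conn_id)   # send Abit = 0, or the equivalent ILMI trap
        # otherwise the connection rerouted quickly and no notification is sent,
        # avoiding a premature switch to CPE backup facilities
    threading.Timer(timer_seconds, expire).start()

def still_derouted(conn_id):
    # placeholder for a lookup of the connection's current routing state
    return True
```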
The Cisco WAN Manager (formerly StrataView Plus) and Cisco View add additional topology and management functions.
Rel. 5.0 supported.
The following is a list of previously provided features that are included in this release along with the new features previously listed:
With the BCC-4, the BPX switch employs a non-blocking crosspoint switch matrix for cell switching, with a peak capacity of up to 19.2 Gbps. The switch matrix can establish up to 20 million point-to-point connections per second between ports. A single BPX switch provides twelve card slots, with ASI and BNI cards each capable of operating at 800 Mbps. The BXM cards support egress at up to 1600 Mbps and ingress at up to 800 Mbps; the enhanced egress rate improves operations such as multicast. Access to and from the crosspoint switch matrix on the BCC is through multi-port network and user access cards. The switch is designed to easily meet current requirements, with scalability to higher capacity for future growth.
A BPX switch shelf is a self-contained chassis which may be rack-mounted in a standard 19-inch rack or open enclosure. All control functions, switching matrix, backplane connections, and power supplies are redundant, and non-disruptive diagnostics continuously monitor system operation to detect any system or transmission failure. Hot-standby hardware and alternate routing capability combine to provide maximum system availability.
Many network locations have increasing bandwidth requirements due to emerging applications. To meet these requirements, users can overlay their existing narrowband networks with a backbone of BPX switches to utilize the high-speed connectivity of the BPX switch operating at up to 19.2 Gbps with its T3/E3/OC-3/OC-12 network and service interfaces. The BPX switch service interfaces include BXM and ASI ports on the BPX switch and service ports on MGX 8220 shelves. The MGX 8220 shelves may be co-located in the same cabinet as the BPX switch, providing economical port concentration for T1/E1 Frame Relay, T1/E1 ATM, CES, and FUNI connections.
For multiservice networks, the BPX 8650 switch provides ATM, Frame Relay, and IP Internet service all on a single platform in a highly scalable way. Support of all these services on a common platform provides operational cost savings and simplifies provisioning for multi-service providers.
By integrating the switching and routing functions, MPLS combines the reachability, scalability, and flexibility provided by the router function with the traffic engineering optimizing capabilities of the switch. The BPX 8650 MPLS switch combines a BPX switch with a separate MPLS controller (Cisco Series 7200 or 7500 router).
With a co-located ESP, the BPX switch adds the capability to support ATM and Frame Relay switched virtual circuits (SVCs), and also soft permanent virtual circuits (SPVCs). Refer to the Cisco WAN Service Node Extended Services Processor Installation and Operation document for detailed information about the ESP.
The following paragraphs provide a brief description of VPNs. For additional information, refer to Chapter 18, MPLS VPNS with BPX 8650.
Conventional Virtual Private Networks using dedicated leased lines or Frame Relay PVCs and a meshed network (Figure 1-2), while providing many advantages, have typically been limited in efficiency and flexibility.
Instead of using dedicated leased lines or Frame Relay PVCs, etc., for a virtual private network (VPN), an IP virtual private network uses the open connectionless architecture of the Internet for transporting data as shown in Figure 1-2.
An IP virtual private network offers the following benefits:
MPLS virtual private networks combine the advantages of IP flexibility and connectionless operation with the QoS and performance features of ATM (Figure 1-3).
The MPLS VPNs provide the same benefits as a plain IP Virtual Network plus:
Interworking allows users to retain their existing services, and as their needs expand, migrate to the higher bandwidth capabilities provided by BPX switch networks. Frame Relay to ATM Interworking enables Frame Relay traffic to be connected across high-speed ATM trunks using ATM standard Network and Service Interworking. For additional information, refer to Chapter 13, Frame Relay to ATM Network and Service Interworking.
Two types of Frame Relay to ATM interworking are supported: Network Interworking (see Figure 1-4) and Service Interworking (see Figure 1-5). The Network Interworking function is performed by the BTM card on the IGX switch and the FRSM card on the MGX 8220. The FRSM card on the MGX 8220 and the UFM cards on the IGX switch also support Service Interworking.
The Frame Relay to ATM network and service interworking functions are available as described in the following paragraphs:
Part A of Figure 1-4 shows typical Frame Relay to ATM network interworking. In this example, a Frame Relay connection is transported across an ATM network, and the interworking function is performed by both ends of the ATM network. The following are typical configurations:
Part B of Figure 1-4 shows a form of network interworking where the interworking function is performed by only one end of the ATM network, and the CPE connected to the other end of the network must itself perform the appropriate service specific convergence sublayer function. The following are example configurations:
Network Interworking is supported by the FRM, UFM-C, and UFM-U on the IGX switch, and the FRSM on the MGX 8220. The Frame Relay Service Specific Convergence Sublayer (FR-SSCS) of AAL5 is used to provide protocol conversion and mapping.
Figure 1-5 shows a typical example of Service Interworking. Service Interworking is supported by the FRSM on the MGX 8220 and the UFM-C and UFM-U on the IGX switch. Translation between the Frame Relay and ATM protocols is performed in accordance with RFC 1490 and RFC 1483.
In Service Interworking, unlike Network Interworking, the ATM device does not need to be aware that it is connected to an interworking function (for example, in a connection between an ATM port and a Frame Relay port).
The Frame Relay service user does not implement any ATM-specific procedures, and the ATM service user does not need to provide any Frame Relay-specific functions. All translational (mapping) functions are performed by the intermediate interworking function.
The following is a typical configuration for service interworking:
For additional information about interworking, refer to Chapter 13, Frame Relay to ATM Network and Service Interworking.
Networks may be configured as flat (all nodes perform routing and communicate fully with one another), or they may be configured as tiered. In a tiered network, interface shelves are connected to routing hubs, where the interface shelves are configured as non-routing nodes. For additional information, refer to Chapter 14, Tiered Networks.
By allowing CPE connections to connect to a non-routing node (interface shelf), a tiered network is able to grow in size beyond that which would be possible with only routing nodes comprising the network.
Starting with Release 8.5, in addition to BPX switch routing hubs, tiered networks now support IGX switch routing hubs. Voice and data connections originating and terminating on IGX switch interface shelves (feeders) are routed across the routing network via their associated IGX switch routing hubs. Intermediate routing nodes must be IGX switches, and IGX switch interface shelves are the only interface shelves that can be connected to an IGX switch routing hub. With this addition, a tiered network provides a multi-service capability (Frame Relay, circuit data, voice, and ATM).
In a tiered network, interface shelves at the access layer (edge) of the network are connected to routing nodes via feeder trunks (Figure 1-6). Those routing nodes with attached interface shelves are referred to as routing hubs. The interface shelves, sometimes referred to as feeders, are non-routing nodes. The routing hubs route the interface shelf connections across the core layer of the network.
The interface shelves do not need to maintain network topology nor connection routing information. This task is left to their routing hubs. This architecture provides an expanded network consisting of a number of non-routing nodes (interface shelves) at the edge of the network that are connected to the network by their routing hubs.
For detailed information about tiered networks, refer to Chapter 14, "Tiered Networks".
T1/E1 Frame Relay connections originating at IGX switch interface shelves and T1/E1 Frame Relay, T1/E1 ATM, CES, and FUNI connections originating at MGX 8220 interface shelves are routed across the routing network via their associated BPX switch routing hubs.
The following requirements apply to BPX switch routing hubs and their associated interface shelves:
Where greater bandwidths are not needed, the Inverse Multiplexing ATM (IMA) feature provides a low cost trunk between two BPX switches. The IMA feature allows BPX switches to be connected to one another over any of the 8 T1 or E1 trunks provided by an AIMNM module on an MGX 8220 shelf. A BNI or BXM port on each BPX switch is directly connected to an AIMNM module in an MGX 8220 by a T3 or E3 trunk. The AIMNM modules are then linked together by any of the 8 T1 or E1 trunks. Refer to the Cisco MGX 8220 Reference and the Cisco WAN Switching Command Reference publications for further information.
A virtual trunk may be defined as a "trunk over a public ATM service". The trunk really doesn't exist as a physical line in the network. Rather, an additional level of reference, called a virtual trunk number, is used to differentiate the virtual trunks found within a physical trunk port. Figure 1-7 shows four Cisco WAN switching networks, each connected to a Public ATM Network via a physical line. The Public ATM Network is shown linking all four of these subnetworks to every other one with a full meshed network of virtual trunks. In this example, each physical line is configured with three virtual trunks.
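The extra level of reference can be pictured as a three-part identifier; the sketch below assumes a hypothetical slot.port.vtrunk notation rather than the switch's actual display format.

```python
from dataclasses import dataclass

# Illustrative sketch of the extra addressing level a virtual trunk introduces:
# a trunk is identified not just by card slot and physical port, but also by a
# virtual trunk number within that port. The notation shown is hypothetical.

@dataclass(frozen=True)
class VirtualTrunkId:
    slot: int      # BXM card slot
    port: int      # physical trunk port on the card
    vtrunk: int    # virtual trunk number within the port

    def __str__(self):
        return f"{self.slot}.{self.port}.{self.vtrunk}"

# Three virtual trunks configured on one physical line, as in Figure 1-7.
vtrunks = [VirtualTrunkId(slot=4, port=1, vtrunk=n) for n in (1, 2, 3)]
print(", ".join(str(v) for v in vtrunks))   # 4.1.1, 4.1.2, 4.1.3
```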
The BPX switch provides ATM standard traffic and congestion management per ATM Forum TM 4.0 using BXM cards.
The Traffic Control functions include:
In addition to these standard functions, the BPX switch provides advanced traffic and congestion management features including:
Advanced CoS management provides per-VC queueing and per-VC scheduling. CoS management provides fairness between connections and firewalls between them. Firewalls prevent a single non-compliant connection from affecting the QoS of compliant connections; the non-compliant connection simply overflows its own buffer.
The cells received by a port are not automatically transmitted by that port out to the network trunks at the port access rate. Each VC is assigned its own ingress queue that buffers the connection at the entry to the network. With ABR with VSVD or with Optimized Bandwidth Management (ForeSight), the service rate can be adjusted up and down depending on network congestion.
Network queues buffer the data at the trunk interfaces throughout the network according to the connection's class of service. Service classes are defined by standards-based QoS. Classes can consist of the five service classes defined in the ATM standards as well as multiple sub-classes to each of these classes. Classes can range from constant bit rate services with minimal cell delay variation to variable bit rates with less stringent cell delay.
When cells are received from the network for transmission out a port, egress queues at that port provide additional buffering based on the service class of the connection.
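A rough model of this three-stage buffering (per-VC ingress queues, per-class trunk queues, and per-class egress queues) is sketched below; the class names, cell representation, and method names are illustrative only.

```python
from collections import defaultdict, deque

# Rough model of the buffering stages described above: a per-VC ingress queue,
# per-class queues on each trunk, and per-class egress queues at the exit port.
# Names and the cell representation are illustrative.

class SwitchQueues:
    def __init__(self):
        self.ingress_per_vc = defaultdict(deque)     # one queue per connection (VC)
        self.trunk_per_class = defaultdict(deque)    # one queue per service class on a trunk
        self.egress_per_class = defaultdict(deque)   # one queue per service class at the port

    def receive_from_port(self, vc_id, cell):
        # Cells are buffered per VC at the entry to the network, not forwarded
        # immediately at the port access rate.
        self.ingress_per_vc[vc_id].append(cell)

    def service_vc(self, vc_id, service_class, cells_allowed):
        # The per-VC service rate (cells_allowed) can move up or down with
        # ABR VSVD or ForeSight feedback.
        for _ in range(min(cells_allowed, len(self.ingress_per_vc[vc_id]))):
            self.trunk_per_class[service_class].append(self.ingress_per_vc[vc_id].popleft())

q = SwitchQueues()
q.receive_from_port("vc-1", "cell-0")
q.service_vc("vc-1", "nrt-VBR", cells_allowed=1)
print(len(q.trunk_per_class["nrt-VBR"]))   # 1
```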
CoS Management provides an effective means of managing the quality of service defined for various types of traffic. It permits network operators to segregate traffic to provide more control over the way that network capacity is divided among users. This is especially important when there are multiple user services on one network. The BPX switch provides separate queues for each traffic class.
Rather than limiting the user to the five broad classes of service defined by the ATM standards committees, CoS management can provide up to 16 classes of service (service subclasses) that can be further defined by the user and assigned to connections. Some of the CoS parameters that may be assigned include:
With Automatic Routing Management (formerly referred to as AutoRoute), connections in Cisco WAN switching networks are added if there is sufficient bandwidth across the network and are automatically routed when they are added. The user only needs to enter the endpoints of the connection at one end of the connection, and the IGX switch and BPX switch software automatically set up a route based on a sophisticated routing algorithm. This feature is called Automatic Routing Management. It is a standard feature on the IGX switch, BPX switch, and MGX 8220.
System software automatically sets up the most direct route after considering the network topology and status, the amount of spare bandwidth on each trunk, as well as any routing restrictions entered by the user (e.g. avoid satellite links). This avoids having to manually enter a routing table at each node in the network. Automatic Routing Management simplifies adding connections, speeds rerouting around network failures, and provides higher connection reliability.
Cost-based route selection can be selectively enabled by the user as the route selection method on a per-node basis. With this feature, a trunk cost is assigned to each trunk (physical and virtual) in the network. The routing algorithm then chooses the lowest cost route to the destination node. The lowest cost routes are stored in a cache to reduce the computation time for on-demand routing.
Cost-based routing can be enabled or disabled at anytime, and there can be a mixture of cost-based and hop-based nodes in a network.
The section, Cost-Based Connection Routing, contains more detailed information about cost-based AutoRoute.
The BPX/IGX switch networks provide a choice of two dynamic rate based congestion control methods, ABR with VSVD and Optimized Bandwidth Management (ForeSight). This section describes Standard ABR with VSVD.
When an ATM connection is configured between BXM cards for Standard ABR with VSVD per ATM Forum TM 4.0, Resource Management (RM) cells are used to carry congestion control feedback information back to the connection's source from the connection's destination.
The ABR sources periodically interleave RM cells into the data they are transmitting. These RM cells are called forward RM cells because they travel in the same direction as the data. At the destination these cells are turned around and sent back to the source as backward RM cells.
The RM cells contain fields to increase or decrease the rate (the CI and NI fields) or set it at a particular value (the explicit rate ER field). The intervening switches may adjust these fields according to network conditions. When the source receives an RM cell, it must adjust its rate in response to the setting of these fields.
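A simplified view of how a source might react to a backward RM cell is sketched below; the rate increase and decrease factors (RIF, RDF) and the starting values are placeholders rather than parameters negotiated for a real connection.

```python
# Simplified sketch of an ABR source reacting to a backward RM cell, in the
# spirit of ATM Forum TM 4.0. Factors and starting values are illustrative.

def adjust_acr(acr, pcr, mcr, ci, ni, er, rif=1/16, rdf=1/16):
    """Adjust the allowed cell rate (ACR) based on the CI, NI, and ER fields
    of a backward RM cell."""
    if ci:                       # congestion indication: decrease the rate
        acr = acr - acr * rdf
    elif not ni:                 # no-increase bit clear: the source may increase
        acr = acr + pcr * rif
    acr = min(acr, er, pcr)      # never exceed the explicit rate or the peak rate
    return max(acr, mcr)         # never fall below the minimum cell rate

# Example: an uncongested path lets the source ramp up toward the explicit rate.
print(adjust_acr(acr=1000.0, pcr=4000.0, mcr=100.0, ci=False, ni=False, er=3000.0))
```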
When spare capacity exists within the network, ABR with VSVD permits the extra bandwidth to be allocated to active virtual circuits.
The BPX/IGX switch networks provide a choice of two dynamic rate based congestion control methods, ABR with VSVD and Optimized Bandwidth Management (ForeSight). This section describes Optimized Bandwidth Management (ForeSight).
Optimized Bandwidth Management (ForeSight) may be used for congestion control across BPX/IGX switches for connections that have one or both endpoints terminating on cards other than BXM cards (for example, ASI cards). The Optimized Bandwidth Management (ForeSight) feature is a dynamic closed-loop, rate-based congestion management feature that yields bandwidth savings compared to non-Optimized Bandwidth Management (ForeSight) equipped trunks when transmitting bursty data across cell-based networks.
Optimized Bandwidth Management (ForeSight) permits users to burst above their committed information rate for extended periods of time when there is unused network bandwidth available. This enables users to maximize the use of network bandwidth while offering superior congestion avoidance by actively monitoring the state of shared trunks carrying Frame Relay traffic within the network.
Optimized Bandwidth Management (ForeSight) monitors each path in the forward direction to detect any point where congestion may occur and returns the information to the entry point of the network. When spare capacity exists within the network, Optimized Bandwidth Management (ForeSight) permits the extra bandwidth to be allocated to active virtual circuits. Each PVC is treated fairly by allocating the extra bandwidth based on each PVC's committed bandwidth parameter.
If the network reaches full utilization, Optimized Bandwidth Management (ForeSight) detects this and quickly acts to reduce the extra bandwidth allocated to the active PVCs. Optimized Bandwidth Management (ForeSight) reacts quickly to network loading in order to prevent dropped packets. Periodically, each node automatically measures the delay experienced along a Frame Relay PVC. This delay factor is used in calculating the Optimized Bandwidth Management (ForeSight) algorithm.
With basic Frame Relay service, only a single rate parameter can be specified for each PVC. With Optimized Bandwidth Management (ForeSight), the virtual circuit rate can be specified based on a minimum, maximum, and initial transmission rate for more flexibility in defining the Frame Relay circuits.
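The interplay of minimum, initial, and maximum rates can be illustrated as follows; the adjustment step sizes are invented for the example and do not reflect the actual ForeSight algorithm parameters.

```python
# Illustrative sketch of ForeSight-style rate adjustment between a configured
# minimum and maximum rate, starting from an initial rate. Step sizes are
# invented for the example.

def next_rate(current, minimum, maximum, congested, up_step=0.10, down_step=0.50):
    """Move the virtual circuit rate up when spare bandwidth exists, and cut it
    back quickly when congestion is detected along the path."""
    if congested:
        current = current * (1 - down_step)   # react quickly to avoid dropped packets
    else:
        current = current * (1 + up_step)     # claim spare bandwidth gradually
    return min(max(current, minimum), maximum)

rate = 256.0   # initial transmission rate (illustrative units, e.g. kbps)
for congested in (False, False, True, False):
    rate = next_rate(rate, minimum=64.0, maximum=1024.0, congested=congested)
    print(round(rate, 1))
```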
Optimized Bandwidth Management (ForeSight) provides effective congestion management for PVCs traversing broadband ATM as well. Optimized Bandwidth Management (ForeSight) operates at the cell-relay level that lies below the Frame Relay services provided by the IGX switch. With the queue sizes utilized in the BPX switch, the bandwidth savings is approximately the same as experienced with lower speed trunks. When the cost of these lines is considered, the savings offered by Optimized Bandwidth Management (ForeSight) can be significant.
The Private Network to Network Interface (PNNI) protocol provides a standards-based dynamic routing protocol for ATM and Frame Relay SVCs. PNNI is an ATM-Forum-defined interface and routing protocol which is responsive to changes in network resources and availability, and which scales to large networks. PNNI is available on the BPX switch when an ESP or SES PNNI is installed. For further information about PNNI and the ESP, refer to the Cisco WAN Service Node Series Extended Services Processor Installation and Operation publication.
BPX switches provide one high-speed and two low-speed data interfaces for data collection and network management. The high-speed interface is an Ethernet 802.3 LAN interface port for communicating with a Cisco WAN Manager NMS workstation. TCP/IP provides the transport and network layers; Logical Link Control 1 is the protocol used across the Ethernet port.
The low-speed interfaces are two RS-232 ports, one for a network printer and the second for either a modem connection or a connection to an external control terminal. These low-speed interfaces are the same as provided by the IGX switch.
A Cisco WAN Manager NMS workstation connects via the Ethernet to the LAN port on the BPX and provides network management via SNMP. Statistics are collected by Cisco WAN Manager using the TFTP protocol. On IGX switch shelves, Frame Relay connections are managed via the Cisco WAN Manager Connection Manager. On MGX 8220 shelves, the Cisco WAN Manager Connection Manager manages Frame Relay and ATM connections, and the Connection Manager is used for MGX 8220 shelf configuration.
Each BPX switch can be configured to use optional low-speed modems for inward access by the Cisco Technical Response Team for network troubleshooting assistance or to autodial Customer Service to report alarms remotely. If desired, another option is remote monitoring or control of customer premise equipment through a window on the Cisco WAN Manager workstation.
Network interfaces connect the BPX switch to other BPX or IGX switches to form a wide-area network.
The BPX switch provides T3, E3, OC-3/STM-1, and OC-12/STM-4 trunk interfaces. The T3 physical interface utilizes DS3 C-bit parity and the 53-byte ATM physical layer cell relay transmission using the Physical Layer Convergence Protocol. The E3 physical interface uses G.804 for cell delineation and HDB3 line coding. The BNI-155 card supports single-mode fiber (SMF), single-mode fiber long reach (SMF-LR), and multi-mode fiber (MMF) physical interfaces. The BXM-155 cards support SMF, SMFLR, and MMF physical interfaces. The BXM-622 cards support SMF and SMFLR physical interfaces.
The design of the BPX switch permits it to support network interfaces up to 622 Mbps in the current release while providing the architecture to support higher broadband network interfaces as the need arises.
Optional redundancy is on a one-to-one basis. The physical interface can operate either in a normal or looped clock mode. And as an option, the node synchronization can be obtained from the DS3 extracted clock for any selected network trunk.
The MGX 8220 interfaces to a BNI or BXM card on the BPX, via a T3, E3, or OC-3 interface. The MGX 8220 provides a concentrator for T1 or E1 Frame Relay and ATM connections to the BPX switch with the ability to apply Optimized Bandwidth Management (ForeSight) across a connection from end-to-end. The MGX 8220 also supports CES and FUNI (Frame Based UNI over ATM) connections.
The BPX Switch system manager can configure alarm thresholds for all statistical type error conditions. Thresholds are configurable for conditions such as frame errors, out of frame, bipolar errors, dropped cells, and cell header errors. When an alarm threshold is exceeded, the NMS screen displays an alarm message.
Graphical displays of collected statistics information, a feature of the Cisco WAN Manager NMS, are a useful tool for monitoring network usage. Statistics collected on network operation fall into two general categories:
These statistics are collected in real time throughout the network and forwarded to the Cisco WAN Manager workstation for logging and display. The link from the node to the Cisco WAN Manager workstation uses a protocol to acknowledge receipt of each statistics data packet. Refer to the Cisco WAN Manager Operations publication for more details on statistics and statistical alarms.
A BPX switch network provides network-wide, intelligent clock synchronization. It uses a fault-tolerant network synchronization architecture recommended for Integrated Services Digital Network (ISDN). The BPX switch internal clock operates as a Stratum 3 clock per ANSI T1.101.
Since the BPX switch is designed to be part of a larger communications network, it is capable of synchronizing to higher-level network clocks as well as providing synchronization to lower-level devices. Any network access input can be configured to synchronize the node. Any external T1 or E1 input can also be configured to synchronize network timing. A clock output allows synchronizing an adjacent IGX switch or other network device to the BPX switch and the network. In nodes equipped with optional redundancy, the standby hardware is locked to the active hardware to minimize system disruption during system switchovers.
The BPX Service Node can be configured to select clock from the following sources:
The Cisco WAN switching cell relay system software shares most core system software, as well as a library of applications, between platforms. System software provides basic management and control capabilities to each node.
IGX and BPX node system software manages its own configuration, fault isolation, failure recovery, and other resources. Since no remote resources are involved, this ensures rapid response to local problems. This distributed network control, rather than centralized control, provides increased reliability.
Software among multiple nodes cooperates to perform network-wide functions such as trunk and connection management. This multi-processor approach ensures rapid response with no single point of failure. System software applications provide advanced features that may be installed and configured as required by the user.
Some of the many software features are:
The routing software supports the establishment, removal and rerouting of end-to-end channel connections. There are three modes:
The system software uses the following criteria when it establishes an automatic route for a connection:
When a node reroutes a connection, it uses these criteria and also looks at the priority that has been assigned and any user-configured routing restrictions. The node analyzes trunk loading to determine the number of cells or packets the network can successfully deliver. Within these loading limits, the node can calculate the maximum combination allowed on a network trunk of each type of connection: synchronous data, ATM traffic, Frame Relay data, multimedia data, voice, and compressed voice.
Network-wide T3, E3, OC-3, or OC-12 connections are supported between BPX switches terminating ATM user devices on the BPX switch UNI ports. These connections are routed using the virtual path and/or virtual circuit addressing fields in the ATM cell header.
Narrowband connections can be routed over high-speed ATM backbone networks built on BPX broadband switches. FastPacket addresses are translated into ATM cell addresses that are then used to route the connections between BPX switches, and to ATM networks with mixed vendor ATM switches. Routing algorithms select broadband links only, avoiding narrowband nodes that could create a choke point.
The rerouting mechanism is enhanced so that connections are presorted in order of cell loading when they are added. Rerouting proceeds group by group, starting with the group containing the connections with the largest cell loadings and working down to the last group, which contains the connections with the smallest cell loadings. These groups are referred to as routing groups; each routing group contains connections whose loading falls within a particular range (see the sketch following the table below).
The three routing group parameters are configured with the cnfcmparm command.
Routing group | Connection cell loading
---|---
0 | 0-59
1 | 60-69
2 | 70-79
3 | 80-89
4 | 90-99
5 | 100-109
6 | 110-119
7 | 120-129
8 | 130-139
9 | 140 and up
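As referenced before the table, a minimal sketch of the presort follows; the group boundaries mirror the table above, while the function names and data structures are illustrative rather than switch software.

```python
# Illustrative sketch of presorting connections into routing groups by cell
# loading, mirroring the table above. Rerouting then proceeds from the
# highest-loading group down to the lowest.

BOUNDARIES = [60, 70, 80, 90, 100, 110, 120, 130, 140]   # group 0 is below 60

def routing_group(cell_load):
    for group, upper in enumerate(BOUNDARIES):
        if cell_load < upper:
            return group
    return len(BOUNDARIES)            # 140 and up -> group 9

def reroute_order(connections):
    """connections: list of (conn_id, cell_load) pairs. Return the connection
    IDs grouped by routing group, largest-loading group first."""
    groups = {}
    for conn_id, load in connections:
        groups.setdefault(routing_group(load), []).append(conn_id)
    return [groups[g] for g in sorted(groups, reverse=True)]

print(reroute_order([("a", 55), ("b", 145), ("c", 102)]))   # [['b'], ['c'], ['a']]
```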
In standard AutoRoute, the path with the fewest number of hops to the destination node is chosen as the best route. Cost-based route selection uses an administrative trunk cost routing metric. The path with the lowest total trunk cost is chosen as the best route. Cost-based route selection is based on Dijkstra's Shortest Path Algorithm, which is widely used in network routing environments. You can use cost-based route selection (that is, cost-based AutoRoute) to give preference to slower privately owned trunks over faster public trunks which charge based on usage time. This gives network operators more control over the usability of their network trunks, while providing a more standard algorithm for route selection.
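The selection idea can be illustrated with a minimal Dijkstra computation over administrative trunk costs; the node names, topology, and cost values below are invented for the example and are not drawn from an actual network.

```python
import heapq

# Minimal Dijkstra sketch over administrative trunk costs, illustrating why a
# cheaper (if slower) private trunk path can be preferred over a costlier
# usage-billed public trunk. Node names and costs are invented for the example.

def lowest_cost_route(trunk_costs, source, destination):
    """trunk_costs: {node: {neighbor: administrative cost}}. Returns (cost, path)."""
    heap = [(0, source, [source])]
    visited = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == destination:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, trunk_cost in trunk_costs.get(node, {}).items():
            if neighbor not in visited:
                heapq.heappush(heap, (cost + trunk_cost, neighbor, path + [neighbor]))
    return None

network = {
    "A": {"B": 10, "C": 50},   # A-C: public trunk with a high administrative cost
    "B": {"D": 10},            # A-B and B-D: private trunks with low costs
    "C": {"D": 5},
}
print(lowest_cost_route(network, "A", "D"))   # (20, ['A', 'B', 'D'])
```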
The following list gives a short description of the major functional elements of Cost-Based Route Selection.
The following switched software Command Line Interface (CLI) commands are used for cost-based route selection:
The Cisco WAN Switching Command Reference contains detailed information about the use of BPX switch commands.
Cisco WAN switching cell relay networks use a fault-tolerant network synchronization method of the type recommended for Integrated Services Digital Network (ISDN). Any circuit line, trunk, or an external clock input can be selected to provide a primary network clock. Any line can be configured as a secondary clock source in the event that the primary clock source fails.
All nodes are equipped with a redundant, high-stability internal oscillator that meets Stratum 3 (BPX) or Stratum 4 requirements. Each node keeps a map of the network's clocking hierarchy. The network clock source is automatically switched in the event of failure of a clock source.
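Conceptually, the failover amounts to selecting the highest-priority clock source that is still healthy, falling back to the node's internal oscillator; the sketch below uses invented source names and a placeholder health check.

```python
# Minimal sketch of clock-source failover: pick the highest-priority source
# that is still usable, falling back to the internal high-stability oscillator.
# Source names and the health representation are invented for the example.

CLOCK_SOURCES = ["primary external clock", "secondary trunk clock", "internal oscillator"]

def select_clock(healthy):
    """healthy: set of currently usable clock sources."""
    for source in CLOCK_SOURCES:
        if source in healthy:
            return source
    return "internal oscillator"   # always available as the last resort

print(select_clock({"secondary trunk clock", "internal oscillator"}))
# -> secondary trunk clock (primary source has failed)
```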
With cell-based networks, there is less likelihood of a loss of customer data resulting from reframes that occur during a clock switchover or other momentary disruption of network clocking than there is with traditional TDM networks. Data is held in buffers, and packets are not sent until a trunk has regained frame synchronism, to prevent loss of data.
Hardware and software components are designed to provide a switch availability in excess of 99.99%. Network availability will also be impacted by link failure, which has a higher probability of occurrence than equipment failure.
Because of this, Cisco WAN network switches are designed so that connections are automatically rerouted around network trunk failures often before users detect a problem. System faults are detected and corrective action taken often before they become service affecting. The following paragraphs describe some of the features that contribute to network availability.
System availability is a primary requirement with the BPX switch. The designed availability factor of a BPX switch is 99.99%, based on a node equipped with optional redundancy and a network designed with alternate routing available. The system software, as well as firmware for each individual system module, incorporates various diagnostic and self-test routines to monitor the node for proper operation and availability of backup hardware.
For protection against hardware failure, a BPX switch shelf can be equipped with the following redundancy options:
If redundancy is provided for a BPX switch, when a hardware failure occurs, a hot-standby module is automatically switched into service, replacing the failed module. All cards are hot-pluggable, so replacing a failed card in a redundant system can be performed without disrupting service.
Since the power supplies share the power load, redundant supplies are not idle. All power supplies are active; if one fails, then the others pick up its load. The power supply subsystem is sized so that if any one supply fails, the node will continue to be supplied with adequate power to maintain normal operation of the node. The node monitors each power supply voltage output and measures cabinet temperature to be displayed on the NMS terminal or other system terminal.
Each BPX switch shelf within the network runs continuous background diagnostics to verify the proper operation of all active and standby cards, backplane control, data, and clock lines, cabinet temperature, and power supplies. These background tests are transparent to normal network operation.
Each card in the node has front-panel LEDs to indicate active, failed, or standby status. Each power supply has green LEDs to indicate proper voltage input and output. An Alarm, Status, and Monitor card collects all the node hardware status conditions and reports them using front panel LED indicators and alarm closures. Indicators are provided for major alarm, minor alarm, ACO, power supply status, and alarm history. Alarm relay contact closures for major and minor alarms are available from each node through a 15-pin D-type connector for forwarding to a site alarm system.
BPX switches are completely compatible with the network status and alarm display provided by the Cisco WAN Manager NMS workstation. In addition to providing network management capabilities, it displays major and minor alarm status on its topology screen for all nodes in a network. The Cisco WAN Manager NMS also provides a maintenance log capability with configurable filtering of the maintenance log output by node name, start time, end time, alarm type, and user specified search string.