This chapter contains an overall description of the BPX Service Node. For installation information, refer to the BPX Service Node Installation Manual. For additional information on BPX Service Node operation and configuration, refer to the Release 8.4 Cisco StrataCom System Overview and Command Reference documents.
This chapter contains the following sections:
The BPX Service Node is a standards-based, high-capacity broadband ATM switch that provides backbone ATM switching and delivers a wide range of user services (Figure 1-1).
Fully integrated with the AXIS, IPX, and IGX, the BPX Service Node is a scalable, standards-compliant unit. Using a multi-shelf architecture, the BPX Service Node supports both narrowband and broadband user services. The modular, multi-shelf architecture enables users to incrementally expand the capacity of the system as needed. The BPX Service Node consists of the BPX broadband shelf, with fifteen card slots, plus co-located AXIS interface shelves and an ESP (Extended Services Processor) as required.
Three of the slots on the BPX shelf are reserved for common equipment cards. The other twelve are general purpose slots used for network interface cards or service interface cards. The cards are provided in sets, consisting of a front card and associated back card. The BPX shelf can be mounted in a rack enclosure, which provides mounting for a co-located ESP and AXIS Interface Shelves.
With a co-located Extended Services Processor (ESP), the BPX Service Node adds the capability to support ATM and Frame Relay Switched Virtual Circuits (SVCs).
The AXIS interface shelf supports a wide range of economical narrowband interfaces. It converts all non-ATM traffic into 53-byte ATM cells and concentrates this traffic for high-speed switching by the BPX. Release 4 of the AXIS provides T1/E1 and subrate Frame Relay, FUNI (Frame-Based UNI over ATM), T1/E1 ATM, T1/E1 Circuit Emulation Service (CES), and HSSI and X.21 interfaces, along with SRM-3T3 enhancements and Frame Relay to ATM network and service interworking for traffic carried over the ATM network via the BPX.
The IPX may be configured as a shelf and used as a low-cost Frame Relay to ATM interworking concentrator for the BPX. The IPX may also be configured as a co-located shelf and used as an economical ATM service input to the BPX.
The following features from previous releases are included in Release 8.4, in addition to the new features listed earlier:
A BPX node is a self-contained chassis which may be rack-mounted in a standard 19-inch rack or open enclosure. All control functions, switching matrix, backplane connections, and power supplies are redundant, and non-disruptive diagnostics continuously monitor system operation to detect any system or transmission failure. Hot-standby hardware and alternate routing capability combine to provide maximum system availability.
Many network locations have increasing bandwidth requirements due to emerging applications. To meet these requirements, users can overlay their existing narrowband networks with a backbone of BPX nodes to utilize the high-speed connectivity of the BPX operating at 19.2 Gbps with its T3/E3/OC3/OC12 network and service interfaces. The BPX service interfaces include BXM and ASI ports on the BPX and service ports on AXIS shelves. The AXIS shelves may be co-located in the same cabinet as the BPX, providing economical port concentration for T1/E1 Frame Relay, T1/E1 ATM, CES, and FUNI connections.
With the BCC-4, the BPX employs a 19.2 Gbps crosspoint switch matrix for cell-based switching. The switch matrix provides total, non-blocking bandwidth for cell switching of up to 20 million point-to-point connections per second between slots. It is designed to easily meet current requirements, with scalability to higher capacity for future growth.
The crosspoint switch matrix provides fourteen 800 Mbps switch ports (including two for the BCC slots), each of which is capable of supporting up to OC-12 transmission rates. A software-controlled polling arbiter supervises polling order and priority. Data flow to and from the switch matrix is supervised by a redundant common controller. Access to and from the crosspoint switch matrix is through multiport network and user access cards.
Interworking allows users to retain their existing services and, as their needs expand, migrate to the higher bandwidth capabilities provided by BPX ATM networks. Frame Relay to ATM Interworking enables Frame Relay traffic to be connected across high-speed ATM trunks using ATM-standard Network and Service Interworking.
Two types of Frame Relay to ATM interworking are supported: Network Interworking (see Figure 1-2) and Service Interworking (see Figure 1-3). The Network Interworking function is performed by the AIT card on the IPX, the BTM card on the IGX, and the FRSM card on the AXIS. The FRSM card on the AXIS and the UFM cards on the IGX also support Service Interworking.
The Frame Relay to ATM network and service interworking functions are available as follows:
Part A of Figure 1-2 shows typical Frame Relay to ATM network interworking. In this example, a Frame Relay connection is transported across an ATM network, and the interworking function is performed by both ends of the ATM network. The following are typical configurations:
Part B of Figure 1-2 shows a form of network interworking where the interworking function is performed by only one end of the ATM network, and the CPE connected to the other end of the network must itself perform the appropriate service-specific convergence sublayer function. The following are example configurations:
Network Interworking is supported by the FRP on the IPX; the FRM, UFM-C, and UFM-U on the IGX; and the FRSM on the AXIS. The Frame Relay Service Specific Convergence Sublayer (FR-SSCS) of AAL5 is used to provide protocol conversion and mapping.
Figure 1-3 shows a typical example of Service Interworking. Service Interworking is supported by the FRSM on the AXIS and the UFM-C and UFM-U on the IGX. Translation between the Frame Relay and ATM protocols is performed in accordance with RFC 1490 and RFC 1483. The following is a typical configuration for service interworking:
For additional information about interworking, see Chapter 9, Frame Relay to ATM Network and Service Interworking.
Using BPX Service Nodes as hubs, networks may be configured as flat (all nodes perform routing and communicate fully with one another) or tiered (AXIS, IPX, and IGX interface shelves are connected to BPX routing hubs, with the interface shelves configured as non-routing nodes).
Tiered networks with BPX routing hubs are established by adding interface shelves (non-routing nodes) to an IPX/BPX network (Figure 1-4). AXIS interface shelves and IPX/IGX interface shelves are supported by the BPX routing hubs. By connecting interface shelves to BPX routing nodes, the network is able to support additional T1/E1 Frame Relay traffic (IPX/IGX Shelves) and T1/E1 Frame Relay and ATM traffic (AXIS Shelves) without adding routing nodes.
The AXIS interface shelf supports T1/E1 Frame Relay, T1/E1 ATM ports, FUNI, and T1/E1 CES, and is designed to support additional interfaces in the future. The IPX interface shelf supports Frame Relay ports, as does the IGX when the option to configure it as an interface shelf is used.
The following requirements must be met in order to implement tiered networks:
For additional information about Tiered Networks, see Chapter 10, Tiered Networks.
Where greater bandwidths are not needed, the Inverse Multiplexing ATM (IMA) feature provides a low-cost trunk between two BPXs. The IMA feature allows BPX nodes to be connected to one another over 1 to 8 T1 or E1 trunks provided by an AIMNM module on an AXIS shelf. A BNI port on each BPX is directly connected to an AIMNM module in an AXIS shelf by a T3 or E3 trunk. The AIMNM modules are then linked together by 1 to 8 T1 or E1 trunks. Refer to the AXIS reference and the command reference documentation for further information.
A virtual trunk may be defined as a "trunk over a public ATM service." The virtual trunk does not exist as a physical line in the network; rather, an additional level of reference, called a virtual trunk number, is used to differentiate the virtual trunks carried within a physical trunk port. Figure 1-5 shows four StrataCom subnetworks, each connected to a Public ATM Network via a physical line. The Public ATM Network is shown linking each of these subnetworks to every other one with a fully meshed network of virtual trunks. In this example, each physical line is configured with three virtual trunks.
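As a conceptual illustration only, the one-to-many relationship between a physical trunk port and its virtual trunk numbers can be sketched in a few lines of Python. The class and field names below are invented for this example and do not correspond to BPX software:

    from dataclasses import dataclass, field

    @dataclass
    class PhysicalTrunkPort:
        """Illustrative only: one physical line into the public ATM network."""
        name: str
        # virtual trunk number -> remote subnetwork reached through it
        virtual_trunks: dict = field(default_factory=dict)

        def add_virtual_trunk(self, number: int, remote: str) -> None:
            if number in self.virtual_trunks:
                raise ValueError(f"virtual trunk {number} already configured")
            self.virtual_trunks[number] = remote

    # One physical line carrying three virtual trunks, as in Figure 1-5.
    port = PhysicalTrunkPort("physical line to public ATM network")
    for number, remote in [(1, "subnet-B"), (2, "subnet-C"), (3, "subnet-D")]:
        port.add_virtual_trunk(number, remote)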
For further information on Virtual Trunking, refer to the Systems Manual and to the Command Reference documentation. For sample configuration information, see Chapter 6, Configuration and Management.
The BPX provides ATM standard traffic and congestion management per ATM Forum TM 4.0 using BXM cards.
The Traffic Control functions include:
In addition to these standard functions, the BPX provides advanced traffic and congestion management features including:
FairShare provides per-VC queueing and per-VC scheduling, delivering both fairness between connections and firewalls between them. The firewalls prevent a single non-compliant connection from affecting the QoS of compliant connections; the non-compliant connection simply overflows its own buffer.
The cells received by a port are not automatically transmitted by that port out to the network trunks at the port access rate. Instead, each VC is assigned its own ingress queue that buffers the connection at the entry to the network. With ABR with VSVD or with ForeSight, the service rate can be adjusted up or down depending on network congestion.
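The firewall property follows directly from giving each VC its own bounded buffer. The following Python sketch illustrates the idea only; it is not the BPX's actual queueing firmware, and the names and queue depth are invented for the example:

    from collections import deque

    class VcIngressQueue:
        """Per-VC ingress buffer: a non-compliant VC overflows only itself."""
        def __init__(self, depth_cells: int):
            self.cells = deque()
            self.depth = depth_cells
            self.dropped = 0

        def enqueue(self, cell) -> None:
            if len(self.cells) >= self.depth:
                self.dropped += 1          # overflow stays local to this VC
            else:
                self.cells.append(cell)

    # A bursty, non-compliant VC fills and overflows its own queue...
    vc_a = VcIngressQueue(depth_cells=4)
    for i in range(10):
        vc_a.enqueue(f"cell-{i}")          # 4 cells queued, 6 dropped
    # ...while a compliant VC's separate queue is untouched.
    vc_b = VcIngressQueue(depth_cells=4)
    vc_b.enqueue("cell-0")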
Network queues buffer the data at the trunk interfaces throughout the network according to the connection's class of service. Service classes are defined by standards-based QoS. Classes can consist of the four broad service classes defined in the ATM standards, as well as multiple subclasses of each of the four general classes. Classes can range from constant bit rate services with minimal cell delay variation to variable bit rate services with less stringent cell delay requirements.
When cells are received from the network for transmission out a port, egress queues at that port provide additional buffering based on the service class of the connection.
Rather than limiting the user to the four broad classes of service initially defined by the ATM standards committees, OptiClass can provide up to thirty-two classes of service (service subclasses) that can be further defined by the user and assigned to connections. Some of the COS parameters that may be assigned include:
With AutoRoute, connections in StrataCom cell relay networks are added if there is sufficient bandwidth across the network and are routed automatically when they are added. The user needs to enter only the endpoints of the connection at one end, and the IPX, IGX, and BPX software automatically sets up a route based on a sophisticated routing algorithm. AutoRoute is a standard feature on all StrataCom nodes.
System software automatically sets up the most direct route after considering the network topology and status, the amount of spare bandwidth on each trunk, and any routing restrictions entered by the user (for example, avoid satellite links). This avoids having to manually enter a routing table at each node in the network. AutoRoute simplifies adding connections, speeds rerouting around network failures, and provides higher connection reliability.
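Cisco has not published the AutoRoute algorithm itself, but the behavior described above resembles a constrained shortest-path search. The sketch below shows one plausible shape of such a search in Python; the function, its parameters, and the topology encoding are all invented for illustration:

    import heapq

    def auto_route(topology, src, dst, bw_needed, avoid=frozenset()):
        """Illustrative shortest path that skips trunks lacking spare
        bandwidth or carrying an attribute the user asked to avoid
        (e.g. 'satellite').
        topology: {node: [(neighbor, cost, spare_bw, attrs), ...]}"""
        dist, prev = {src: 0}, {}
        heap = [(0, src)]
        while heap:
            d, node = heapq.heappop(heap)
            if node == dst:
                break
            if d > dist.get(node, float("inf")):
                continue
            for nbr, cost, spare_bw, attrs in topology.get(node, []):
                if spare_bw < bw_needed or avoid & attrs:
                    continue               # trunk fails a restriction
                nd = d + cost
                if nd < dist.get(nbr, float("inf")):
                    dist[nbr], prev[nbr] = nd, node
                    heapq.heappush(heap, (nd, nbr))
        if dst not in dist:
            return None                    # no route satisfies the constraints
        path, node = [dst], dst
        while node != src:
            node = prev[node]
            path.append(node)
        return path[::-1]

    net = {
        "A": [("B", 1, 45, frozenset()), ("C", 1, 100, frozenset({"satellite"}))],
        "B": [("D", 1, 45, frozenset())],
        "C": [("D", 1, 100, frozenset())],
    }
    # Returns ['A', 'B', 'D']: the satellite trunk is excluded.
    auto_route(net, "A", "D", bw_needed=40, avoid=frozenset({"satellite"}))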
The Private Network-to-Network Interface (PNNI) protocol provides a standards-based dynamic routing protocol for ATM and Frame Relay switched virtual circuits (SVCs). PNNI is an ATM Forum-defined interface and routing protocol that is responsive to changes in network resources and availability and that scales to large networks. PNNI is available on the BPX Service Node when an Extended Services Processor (ESP) is installed. For further information about PNNI and the ESP, refer to the Cisco StrataCom BPX Service Node Extended Services Processor Installation and Operation documentation.
The BPX/IGX/IPX networks provide a choice of two dynamic rate-based congestion control methods, ABR with VSVD and ForeSight. This section describes Standard ABR with VSVD.
When an ATM connection is configured for Standard ABR with VSVD per ATM Forum TM 4.0, RM (Resource Management) cells are used to carry congestion control feedback information back to the connection's source from the connection's destination.
The ABR sources periodically interleave RM cells into the data they are transmitting. These RM cells are called forward RM cells because they travel in the same direction as the data. At the destination these cells are turned around and sent back to the source as Backward RM cells.
The RM cells contain fields to increase or decrease the rate (the CI and NI fields) or to set it at a particular value (the Explicit Rate, or ER, field). The intervening switches may adjust these fields according to network conditions. When the source receives an RM cell, it must adjust its rate in response to the settings of these fields.
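ATM Forum TM 4.0 defines how an ABR source reacts to these fields: a multiplicative decrease when CI is set, an additive increase when neither CI nor NI is set, and a cap at the explicit rate. The following Python sketch paraphrases that reaction; the parameter names (rif, rdf, and so on) follow the TM 4.0 vocabulary, but the code is a simplified illustration rather than the BPX's implementation:

    from collections import namedtuple

    # Congestion Indication, No Increase, and Explicit Rate fields.
    RmCell = namedtuple("RmCell", "ci ni er")

    def adjust_acr(acr, rm, *, pcr, mcr, rif, rdf):
        """Adjust the Allowed Cell Rate on receipt of a backward RM cell."""
        if rm.ci:                       # congestion: multiplicative decrease
            acr -= acr * rdf
        elif not rm.ni:                 # no congestion, increase permitted
            acr += rif * pcr            # additive increase toward the peak rate
        acr = min(acr, rm.er, pcr)      # never exceed the explicit or peak rate
        return max(acr, mcr)            # never fall below the minimum cell rate

    # A congested RM cell with ER = 40,000 cells/s pulls the source to 40,000.
    acr = adjust_acr(50_000, RmCell(ci=1, ni=0, er=40_000),
                     pcr=100_000, mcr=10_000, rif=1 / 16, rdf=1 / 16)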
When spare capacity exists within the network, ABR with VSVD permits the extra bandwidth to be allocated to active virtual circuits.
The BPX/IGX/IPX networks provide a choice of two dynamic rate-based congestion control methods, ABR with VSVD and ForeSight. This section describes ForeSight.
ForeSight may be used for congestion control across BPX/IGX/IPX switches for connections that have one or both endpoints terminating on cards other than BXM cards, for example ASI cards. The ForeSight feature is a dynamic, closed-loop, rate-based congestion management feature that yields bandwidth savings compared to non-ForeSight-equipped trunks when transmitting bursty data across cell-based networks.
ForeSight permits users to burst above their committed information rate for extended periods of time when there is unused network bandwidth available. This enables users to maximize the use of network bandwidth while offering superior congestion avoidance by actively monitoring the state of shared trunks carrying Frame Relay traffic within the network.
ForeSight monitors each path in the forward direction to detect any point where congestion may occur and returns this information to the entry point of the network. When spare capacity exists within the network, ForeSight permits the extra bandwidth to be allocated to active virtual circuits. Each PVC is treated fairly: the extra bandwidth is allocated based on each PVC's committed bandwidth parameter.
Conversely, if the network reaches full utilization, ForeSight detects this and quickly acts to reduce the extra bandwidth allocated to the active PVCs. ForeSight reacts quickly to network loading in order to prevent dropped packets. Periodically, each node automatically measures the delay experienced along a Frame Relay PVC; this delay factor is used in the ForeSight rate adjustment algorithm.
With basic Frame Relay service, only a single rate parameter can be specified for each PVC. With ForeSight, the virtual circuit rate can be specified based on a minimum, maximum, and initial transmission rate, allowing more flexibility in defining the Frame Relay circuits.
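ForeSight's rate adjustment rules are Cisco proprietary, so the sketch below only illustrates the general shape implied by the description above: a virtual circuit rate that climbs gradually between its minimum and maximum rates while the path is uncongested and backs off quickly when congestion feedback arrives. All names and factors are invented for the example:

    def foresight_rate(rate, congested, *, min_rate, max_rate,
                       up_step, down_factor):
        """One closed-loop adjustment of a ForeSight-style VC rate
        (illustrative only; not Cisco's actual algorithm)."""
        if congested:
            rate *= down_factor         # fast back-off under congestion
        else:
            rate += up_step             # gradual climb into spare bandwidth
        return min(max(rate, min_rate), max_rate)

    # Starting from an initial rate, the VC climbs while the path is clear,
    # then drops sharply on congestion feedback.
    rate = 64_000                       # initial transmission rate, bps
    for congested in (False, False, True):
        rate = foresight_rate(rate, congested, min_rate=16_000,
                              max_rate=256_000, up_step=8_000,
                              down_factor=0.5)
    # rate is now (64_000 + 8_000 + 8_000) * 0.5 = 40_000 bps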
ForeSight also provides effective congestion management for PVCs traversing broadband ATM networks. ForeSight operates at the cell-relay level that lies below the Frame Relay services provided by the IPX. With the queue sizes utilized in the BPX, the bandwidth savings are approximately the same as those experienced with lower-speed trunks. When the cost of these lines is considered, the savings offered by ForeSight can be significant.
BPX Service Nodes provide one high-speed and two low-speed data interfaces for data collection and network management. The high-speed interface is an Ethernet 802.3 LAN interface port for communicating with a StrataView Plus NMS workstation. TCP/IP provides the transport and network layers, and Logical Link Control 1 is the protocol used across the Ethernet port.
The low-speed interfaces are two RS-232 ports, one for a network printer and the second for either a modem connection or a connection to an external control terminal. These low-speed interfaces are the same as provided by the IPX nodes.
A StrataView Plus NMS workstation connects to the Ethernet (LAN) port on the BPX and provides network management via SNMP. Statistics are collected by SV+ using the TFTP protocol. On IPX shelves, Frame Relay connections are managed via the SV+ Connection Manager. On AXIS shelves, the SV+ Connection Manager manages Frame Relay and ATM connections, and the Connection Manager is also used for AXIS shelf configuration.
Each BPX Service Node can be configured to use optional low-speed modems for inward access for network troubleshooting assistance, or for dialing out to the Cisco TAC to report alarms remotely. If desired, another option is remote monitoring or control of customer premises equipment through a window on the StrataView Plus workstation.
Network interfaces connect the BPX node to other BPX, IGX, or IPX nodes to form a wide-area network.
The BPX provides T3, E3, OC3/STM-1, and OC12/STM-4 trunk interfaces. The T3 physical interface utilizes DS3 C-bit parity and 53-byte ATM physical layer cell relay transmission using the Physical Layer Convergence Protocol. The E3 physical interface uses G.804 for cell delineation and HDB3 line coding. The BNI-155 card supports single-mode fiber (SMF), single-mode fiber long reach (SMF-LR), and multi-mode fiber (MMF) physical interfaces. The BXM-155 cards support SMF, SMF-LR, and MMF physical interfaces. The BXM-622 cards support SMF and SMF-LR physical interfaces.
The design of the BPX permits it to support network interfaces up to 622 Mbps in the current release while providing the architecture to support higher broadband network interfaces as the need arises.
Optional redundancy is on a one-to-one basis. The physical interface can operate in either a normal or a looped clock mode. As an option, node synchronization can be obtained from the DS3 extracted clock for any selected network trunk.
The ATM Interface Shelf (AXIS) interfaces to the Broadband Network Interface (BNI) card via a T3 or E3 ATM STI interface, or via an OC3 interface. The AXIS provides a concentrator for T1 or E1 Frame Relay and ATM connections to the BPX Service Node, with the ability to apply ForeSight across a connection from end to end. The AXIS also supports FUNI (Frame-Based UNI over ATM) connections.
The BPX Service Node system manager can configure alarm thresholds for all statistical error conditions. Thresholds are configurable for conditions such as frame errors, out-of-frame events, bipolar errors, dropped cells, and cell header errors. When an alarm threshold is exceeded, the NMS screen displays an alarm message.
Graphical displays of collected statistics information, a feature of the StrataView Plus NMS, are a useful tool for monitoring network usage. Statistics collected on network operation fall into two general categories:
These statistics are collected in real time throughout the network and forwarded to the StrataView Plus workstation for logging and display. The link from the node to StrataView Plus uses a protocol that acknowledges receipt of each statistics data packet. Refer to the StrataView Plus Operations documentation for more details on statistics and statistical alarms.
A BPX Service Node network provides network-wide, intelligent clock synchronization. It uses a fault-tolerant network synchronization architecture recommended for Integrated Services Digital Network (ISDN). The BPX Service Node internal clock operates as a Stratum-3 clock per ANSI T1.101.
Since the BPX Service Node is designed to be part of a larger communications network, it is capable of synchronizing to higher-level network clocks as well as providing synchronization to lower-level devices. Any network access input can be configured to synchronize the node. Any external T1 or E1 input can also be configured to synchronize network timing. A clock output allows synchronizing an adjacent IPX or other network device to the BPX Service Node and the network. In nodes equipped with optional redundancy, the standby hardware is locked to the active hardware to minimize system disruption during system switchovers.
The BPX Service Node does not accept clock from an IPX. The BPX Service Node can be configured to select clock from the following sources:
Hardware and software components are designed to provide node availability in excess of 99.99%. Network availability is affected much more by link failure, which has a higher probability of occurrence, than by equipment failure.
Because of this, StrataCom switches are designed so that connections are automatically rerouted around network trunk failures, often before users detect a problem. System faults are detected and corrective action taken, often before they become service-affecting. The following paragraphs describe some of the features that contribute to network availability.
System availability is a primary requirement for the BPX Service Node. The designed availability factor of a BPX Service Node is 99.99%, based on a node equipped with optional redundancy and a network designed with alternate routing available. The system software, as well as the firmware for each individual system module, incorporates various diagnostic and self-test routines to monitor the node for proper operation and availability of backup hardware.
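As a rough point of reference (an illustration, not a specification), 99.99 percent availability allows at most (1 - 0.9999) x 525,600 minutes, or about 53 minutes, of accumulated downtime per year.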
For protection against hardware failure, a BPX shelf can be equipped with the following redundancy options:
If redundancy is provided for a BPX shelf, when a hardware failure occurs, a hot-standby module is automatically switched into service, replacing the failed module. All cards are hot-pluggable, so replacing a failed card in a redundant system can be performed without disrupting service.
Since the power supplies share the power load, redundant supplies are not idle. All power supplies are active; if one fails, the others pick up its load. The power supply subsystem is sized so that if any one supply fails, the node continues to receive adequate power to maintain normal operation. The node monitors each power supply voltage output and measures cabinet temperature; both are displayed on the NMS terminal or other system terminal.
Each BPX shelf within the network runs continuous background diagnostics to verify the proper operation of all active and standby cards, backplane control, data, and clock lines, cabinet temperature, and power supplies. These background tests are transparent to normal network operation.
Each card in the node has front-panel LEDs to indicate active, failed, or standby status. Each power supply has green LEDs to indicate proper voltage input and output. An Alarm, Status, and Monitor card collects all the node hardware status conditions and reports them using front panel LED indicators and alarm relay closures. Indicators are provided for major alarm, minor alarm, ACO, power supply status, and alarm history. Alarm relay contact closures for major and minor alarms are available from each node through a 15-pin D-type connector for forwarding to a site alarm system.
BPX shelves are completely compatible with the network status and alarm display provided by the optional StrataView Plus NMS workstation. In addition to providing network management capabilities, it displays major and minor alarm status on its topology screen for all nodes in a network. StrataView Plus also provides a maintenance log capability with configurable filtering of the maintenance log output by node name, start time, end time, alarm type, and user-specified search string.