The MGX 8220 shelf contains 16 slots. Each slot can accommodate a front card and a back card. Six slots are reserved for common equipment modules as described in "Common Equipment Description".
The remaining ten slots (slots 5 through 14) are reserved for Service Modules (SMs). Service Modules provide functionality for such services as Frame Relay, ATM, and Circuit Emulation.
This chapter describes the Service Modules supported by the MGX 8220.
Although Service Resource Modules (SRMs) are categorized as core equipment, they also provide optional functions for the SMs.
The primary function of the FRSM is to convert between Frame Relay-formatted data and ATM/AAL5 cell-formatted data. There are two main types of FRSMs: those for T1 or E1 lines and those for high-speed serial lines.
All FRSMs include the following features:
Module-specific features are described in the sections listed below.
FRSMs convert the header format and translate the address for
This section describes the connection types that can be configured on the FRSM to perform these functions.
Frame Relay-to-ATM network interworking (NIW) supports a permanent virtual connection (PVC) between two Frame Relay users over a Cisco network or a multi-vendor network. The traffic crosses the network as ATM cells. To specify NIW for a connection, add the connection with a channel type of network interworking.
Figure 4-1 shows a BPX 8620 network with network interworking connections.
In addition to frame-to-cell and DLCI to VPI/VCI conversion, the network interworking feature maps cell loss priority (CLP) and congestion information from Frame Relay to ATM formats. The CLP and congestion indicators can be modified for individual connections by entering the cnfchanmap command.
Each Frame Relay/ATM network interworking connection can be configured as one of the following DE to CLP mapping schemes:
Each Frame Relay/ATM network interworking connection can be configured as one of the following CLP to DE mapping schemes:
Congestion on the Frame Relay/ATM network interworking connection is flagged by the EFCI bit. The setting of this feature is dependent on traffic direction, as described below.
If the EFCI field in the last ATM cell of a received segmented frame is set, the FECN bit of the Frame Relay frame is set.
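The congestion rule above can be modeled in a few lines. This is an illustrative sketch, not Cisco firmware: it assumes each cell of a segmented frame is represented as a dict with an "efci" flag, and shows that only the final cell's EFCI value determines the reassembled frame's FECN bit.

```python
# Illustrative model of the NIW congestion mapping described above:
# FECN on the reassembled Frame Relay frame follows the EFCI bit of the
# LAST ATM cell of the segmented frame.

def map_efci_to_fecn(cells):
    """cells: list of dicts with an 'efci' flag (0 or 1), one per ATM cell
    of a single segmented frame. Returns the FECN bit for the frame."""
    if not cells:
        raise ValueError("a segmented frame has at least one cell")
    return 1 if cells[-1]["efci"] else 0

# Only the final cell's EFCI matters:
assert map_efci_to_fecn([{"efci": 1}, {"efci": 0}]) == 0
assert map_efci_to_fecn([{"efci": 0}, {"efci": 1}]) == 1
```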
The management of the ATM layer and FR PVC Status Management can operate independently. The PVC status from the ATM layer is used when determining the status of the FR PVC. However, no direct mapping of the LMI A bit to OAM AIS is performed.
By specifying "service interworking" as the channel type when adding a Frame Relay PVC to an FRSM, all PVC data is subject to service interworking translation and mapping in both the Frame Relay-to-ATM and ATM-to-Frame Relay directions.
Figure 4-2 shows a BPX 8620 network with service interworking connections.
Figure 4-2 shows an MGX 8220 unit and an FRSM to the right with three Frame Relay connection endpoints. These connections indicate the Frame Relay ends of service interworking connections. The diagram shows some possibilities for terminating the other ends of the connections.
The service interworking is full Frame Relay Forum (FRF.8) compliant and provides full support for routed and bridged PDUs, transparent and translation modes, and VP translation.
In addition to frame-to-cell and DLCI to VPI/VCI conversion, the service interworking feature maps cell loss priority and congestion information between the Frame Relay and ATM formats. The CLP and congestion parameters can be modified for individual connections with the cnfchanmap command.
Each Frame Relay-to-ATM service interworking connection can be configured as one of the following Discard Eligibility (DE) to cell loss priority (CLP) schemes:
Each Frame Relay-to-ATM service interworking connection can be configured as one of the following CLP to DE mapping schemes:
Setting up the cell loss priority option is accomplished through the MGX 8220 cnfchanmap (configure channel map) command.
Each Frame Relay-to-ATM service interworking connection can be configured as one of the following Forward Explicit Congestion Notification (FECN) to Explicit-Forward Congestion Indicator (EFCI) schemes:
Frame Relay-to-ATM service interworking connections use the following EFCI to FECN/BECN mapping schemes:
Setting up the congestion indication option is accomplished through the cnfchanmap (configure channel map) command.
Command/Response Mapping is provided in both directions.
The FRSM maps the C/R bit of the received Frame Relay frame to the CPCS-UU least-significant bit of the AAL5 CPCS PDU.
The least-significant bit of the CPCS-UU is mapped to the C/R bit of the Frame Relay frame.
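The two-way C/R mapping above amounts to copying one bit between fields. The sketch below abstracts away the actual frame and PDU layouts (an assumption for illustration) and treats the CPCS-UU as a plain octet, copying the C/R bit into and out of its least-significant bit.

```python
# A minimal sketch (field layouts abstracted away) of the mapping described
# above: the Frame Relay C/R bit rides in the least-significant bit of the
# CPCS-UU octet of the AAL5 CPCS PDU.

def cr_to_cpcs_uu(cr_bit, cpcs_uu=0):
    """Frame Relay -> ATM: copy the C/R bit into the CPCS-UU LSB."""
    return (cpcs_uu & 0xFE) | (cr_bit & 0x01)

def cpcs_uu_to_cr(cpcs_uu):
    """ATM -> Frame Relay: recover the C/R bit from the CPCS-UU LSB."""
    return cpcs_uu & 0x01

assert cr_to_cpcs_uu(1) == 0x01
# Upper CPCS-UU bits are preserved; only the LSB carries C/R:
assert cpcs_uu_to_cr(cr_to_cpcs_uu(0, cpcs_uu=0xAA)) == 0
```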
Each service interworking (SIW) connection can exist in either translation or transparent mode. In translation mode, the FRSM translates protocols between the FR NLPID encapsulation (RFC 1490) and the ATM LLC encapsulation (RFC 1483). In transparent mode, the FRSM does not translate. Translation mode support includes address resolution by transforming Address Resolution Protocol (ARP, RFC 826) and Inverse ARP (InARP, RFC 1293) between the Frame Relay and ATM formats.
The FRSM card can be configured as "Frame Forwarding" on a port-by-port basis.
Frame forwarding operates the same as standard Frame Relay except
All FRSMs support the ATM Frame-based User-to-Network Interface (FUNI). When a frame arrives from the FUNI interface, the FRSM removes the 2-byte FUNI header and segments the frame into ATM cells by using AAL5. In the reverse direction, the FRSM assembles ATM cells from the network into a frame by using AAL5, adds a FUNI header to the frame, and sends it to the FUNI port.
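The FUNI-to-ATM arithmetic above can be made concrete. This is a back-of-the-envelope sketch, not a wire-format encoder: it assumes the standard AAL5 structure (8-byte CPCS trailer, payload padded to a multiple of the 48-byte cell payload) and computes how many ATM cells one FUNI frame produces after its 2-byte header is removed.

```python
import math

# Sketch of the FUNI-to-ATM direction described above: drop the 2-byte FUNI
# header, then segment the remaining payload with AAL5 (8-byte CPCS trailer,
# padded up to a multiple of the 48-byte cell payload).

FUNI_HEADER = 2     # bytes removed on ingress
AAL5_TRAILER = 8    # CPCS-PDU trailer bytes
CELL_PAYLOAD = 48   # ATM cell payload bytes

def cells_for_funi_frame(frame_len):
    """Number of ATM cells generated for a FUNI frame of frame_len bytes."""
    payload = frame_len - FUNI_HEADER
    return math.ceil((payload + AAL5_TRAILER) / CELL_PAYLOAD)

# A 42-byte FUNI frame: 40 bytes of payload + 8-byte trailer = 48 -> 1 cell.
assert cells_for_funi_frame(42) == 1
# One byte more spills into a second cell:
assert cells_for_funi_frame(43) == 2
```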
Loss Priority Indication mapping is provided in both directions.
The CLP bit on the FUNI header is mapped to the CLP bit of every ATM cell that is generated for the FUNI frame.
The CLP bit in the FUNI header is always set to 0.
Congestion Indication mapping is provided in both directions.
EFCI is set to 0 for every ATM cell generated by the segmentation process.
If the EFCI field in the last ATM cell of a received segmented frame is set to 1, the CN bit in the FUNI header is set to 1. The two reserved bits (in the same positions as the C/R and BECN bits in the Frame Relay header) are always set to 0.
There are two types of FRSM modules for T1 and E1 lines—fractional (unchannelized) modules and channelized modules. Both module types offer 1:N redundancy via the optional SRM (see Service Resource Modules).
Each interface of a fractional FRSM supports a single port at 56 kbps or nx64 kbps.
Each port can be independently configured to run Frame Relay UNI (FR-UNI), Frame Relay NNI (FR-NNI), ATM-FUNI, or frame forwarding as described in FRSM Connection Types.
Each interface of a channelized FRSM supports multiple ports at 56 kbps or nx64 kbps.
Each port can be independently configured to run Frame Relay UNI (FR-UNI), Frame Relay NNI (FR-NNI), ATM-FUNI, or frame forwarding as described in FRSM Connection Types.
Channelized FRSM cards include
The 8-port channelized FRSM supports a maximum of 1,000 connections.
Figure 4-3 illustrates the 4-port and 8-port FRSM front cards for T1 or E1 lines.
There are three FRSMs for high-speed serial lines.
Both FRSM-HS1 modules support the following features:
The MGX-FRSM-HS2 is a two-card set consisting of a front card and a back card that supports two HSSI lines.
The Frame Relay Access Service Module (FRASM) is a two-card set consisting of a FRASM front card (channelized, T1, 8 port) and an 8-T1 back card. Up to ten FRASM modules may be installed in a shelf in slots 5 through 14.
The main function of the FRASM is to allow IBM network devices and mainframes (IBM 3270 terminals communicating with an IBM mainframe) operating under SNA/SDLC or 3270/BSC (binary synchronous) protocols to communicate with each other using Frame Relay over an ATM network. This is an alternative to the conventional method of using T1, E1, V.35, or X.21 leased lines.
FRASM modules support the following logical connections and protocols:
FRASM modules support the following end-to-end connections on a connection-by-connection basis:
STUN, short for Serial TUNnel, is an IBM technique for transmitting SNA (SDLC) traffic over Frame Relay networks by encapsulating the SNA frames within Frame Relay frames using RFC 1490 encapsulation.
There are two methods of achieving this:
1. passthrough (or transparent)
The passthrough method encapsulates the entire SNA data stream including data and control fields for transmission over the Frame Relay network. In this method, the Frame Relay network is entirely transparent to the SNA network.
The local acknowledgment method terminates the SNA traffic at the Frame Relay network interface and encapsulates data only; the SNA frames are then reconstructed at the other end.
Both passthrough and local acknowledgment methods are supported by the FRASM.
For both methods, SNA traffic received by the FRASM is converted first to a Frame Relay format and is then further converted into cells for transmission over an ATM network. The process is then performed in reverse order at the other end.
STUN is used where the requirements call for SNA in and SNA out with the intervening Frame Relay and ATM segments being used merely to transport the SNA traffic.
An application of a STUN connection is shown in Figure 4-5. An SNA/SDLC device is connected to a FRASM port using SDLC protocol. The traffic is first converted to Frame Relay and then to ATM cells for transmission over the network. At the other end, the traffic is first converted back to Frame Relay and the SDLC traffic is then extracted for transmission to a front-end communication processor and then to the IBM mainframe.
Using STUN, the FRASM supports
BSTUN, short for Block Serial TUNnel, is an IBM technique for transmitting bisync traffic over Frame Relay networks by encapsulating the bisync frames within Frame Relay frames using RFC 1490 encapsulation.
1. passthrough (or transparent)
The passthrough method encapsulates the entire bisync data stream including data and control fields for transmission over the Frame Relay network. In this method, the Frame Relay network is entirely transparent to the Bisync network. Passthrough mode is supported for 2780, 3780, and 3270 IBM devices.
The local acknowledgment method terminates the Bisync traffic at the Frame Relay network interface and encapsulates data only. The Bisync frames are then reconstructed at the other end. Local acknowledgment mode is supported for 3270 devices.
For both methods, Bisync traffic received by the FRASM is converted first to a Frame Relay format and is then further converted into cells for transmission over an ATM network. The process is then performed in reverse order at the other end.
BSTUN can also be used in a transparent text mode, which permits the unrestricted coding of data (for example, binary, floating point, and so forth).
BSTUN is used where the requirements call for Bisync in and Bisync out with the intervening Frame Relay and ATM segments being used merely to transport the Bisync traffic.
An application of a BSTUN connection is shown in Figure 4-6. A Bisync device, such as an IBM 3270, is connected to a FRASM port using Bisync protocol. The traffic is first converted to Frame Relay and then to ATM cells for transmission over the network. At the other end, the traffic is first converted back to Frame Relay and the Bisync traffic is then extracted for transmission to a front end communication processor and then to the IBM mainframe.
FRAS BNN, short for Frame Relay Boundary Network Node, is a technique for encapsulating SDLC/SNA traffic into Frame Relay frames (to RFC 1490) at one end of the connection only. At the other end of the connection, the data is presented as Frame Relay. This is used for connecting an SDLC device at one end to a Frame Relay device at the other.
SNA traffic received by the FRASM is converted first to a Frame Relay format and is then further converted into cells for transmission over an ATM network. The ATM traffic is then converted back to Frame Relay at the other end.
Using FRASM configured for FRAS BNN connections, many low speed SNA lines can be consolidated into a smaller number of high-speed lines for fast transport through the network. In addition, FRAS BNN can be used for high-speed links between IBM front end processors (FEPs). FEPs running under Network Control Program (NCP) 7.1 support BNN.
An application of a FRAS BNN connection is shown in Figure 4-7. An SDLC device is connected to an FRASM port using SDLC protocol. The traffic is first converted to Frame Relay and then to ATM cells for transmission over the network. At the other end, the traffic is first converted back to Frame Relay for transmission to a front-end communication processor and then to the IBM mainframe.
Using FRAS BNN, the FRASM supports
The FRASM supports eight T1 lines, with each line supporting up to 24 DS0 ports, for a total of 192 logical ports. The physical interfaces can be configured as follows:
The card data throughput is 1392 kbps. This can be used as 145 ports at 9.6 kbps, 24 ports at 56 kbps, or any combination of configurable port speeds totaling up to 1392 kbps. (See Figure 4-8.)
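The throughput figures above are simple budget arithmetic, which the sketch below reproduces. It is an illustrative check (not a provisioning tool): any mix of configured port speeds must fit within the card's 1392-kbps aggregate, expressed here in bits per second to keep the sums exact.

```python
# Sanity-check of the FRASM port-speed arithmetic quoted above: any mix of
# configured port speeds must fit within the 1392-kbps card throughput.

CARD_BUDGET_BPS = 1_392_000  # 1392 kbps

def fits_budget(port_speeds_bps):
    """True if the configured port speeds fit the card's aggregate budget."""
    return sum(port_speeds_bps) <= CARD_BUDGET_BPS

assert fits_budget([9_600] * 145)      # 145 ports at 9.6 kbps fill it exactly
assert fits_budget([56_000] * 24)      # 24 ports at 56 kbps = 1344 kbps
assert not fits_budget([56_000] * 25)  # one more 56-kbps port overflows
```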
The conversions are cell loss priority (CLP), Congestion Indication, and PVC Status Management.
Cell loss priority mapping is provided in both directions.
Each Frame Relay/ATM network interworking connection can be configured as one of the following DE to CLP mapping schemes:
Each Frame Relay/ATM network interworking connection can be configured as one of the following CLP to DE mapping schemes:
Congestion Indication mapping is provided in both directions.
If the EFCI field in the last ATM cell of a received segmented frame is set, the FECN bit of the Frame Relay frame is set.
The management of ATM layer and FR PVC Status Management can operate independently. The PVC status from the ATM layer will be used when determining the status of the FR PVCs.
The command-line interface (CLI) permits the adding, configuring, deleting, and displaying of lines, channels, and ports on a FRASM card. In addition, the counters on the card can be displayed and cleared.
The FRASM command set permits the user to create protocol groups. Protocol groups are specified as either STUN, BSTUN, BNN, or BAN types. When a group has been created, ports and routes can be assigned as members of the group. Groups can be configured, displayed, and deleted. These commands affect the entire group thus permitting a number of ports to be configured with one command rather than having to configure each individually. Details of the CLI and individual commands are found in the Cisco MGX 8220 Command Reference.
The ATM UNI Service Module (AUSM) is a two-card set consisting of an AUSM function module front card and either a four or eight port T1 or E1 line module back card. The E1 line module cards are further categorized by BNC or DB15 connector type.
Up to 10 AUSMs may be installed in a shelf in slots 5 to 14.
The main function of the AUSM cards is to provide an ATM UNI/NNI interface at T1 or E1 rates so that ATM UNI user devices can transmit and receive traffic over an ATM BPX 8620 network.
The AUSM supports up to a maximum of 256 connections, which can be allocated across 4 T1 or E1 lines in any manner. The connections can be either VPC or VCC as follows:
The BNM performs the appropriate header translation and routes cells to the correct slot.
The AUSM has extensive traffic control features. The ForeSight feature, which provides virtual circuit and virtual path end-to-end flow control, is supported.
The AUSM contains 8000 cell queue buffers for each ingress and egress data flow. The Usage Parameter Control (UPC) algorithm and the queues are user configurable.
CAC is implemented to support separate percentage utilization factors, PCRs, and MCRs for both the ingress and egress directions.
An illustration of the AUSM card set is provided in Figure 4-9.
The AUSM LED indicators are described in Table 4-1. All LED indicators are located on the faceplate of the front card.
The AUSM-8T1/E1 is a multipurpose card that supports up to eight T1 or E1 ports and can be used for the following four MGX 8220 applications:
1. ATM Inverse Multiplexing nxT1 and nxE1 trunking
This application supports inverse multiplexed trunks between MGX 8220 shelves. In turn, this supports inverse multiplexed trunks between BPX 8620 and IGX network nodes via MGX 8220 shelves and remote MGX 8220 shelves.
2. ATM UNI card with eight ports to provide a high port density service module
With all ten available slots installed with the AUSM-8T1/E1 cards, a single MGX 8220 shelf could support up to 80 individual T1/E1 lines.
In UNI/NNI mode each card can support 1000 data connections and 16 management connections. In STI format, each card can support 100 virtual paths.
3. UNI/NNI access to CPE and other Networks
This application allows access over a UNI to IMA-based CPE and over an NNI to another ATM network.
This application supports ATM ports over a single T1/E1 line and IMA ports over multiple T1/E1 lines (connected to IMA-based CPE).
The following back cards are compatible with the AUSM-8T1/E1:
The 4-port AUSM back cards and IMATM back cards are not compatible with the AUSM-8T1/E1.
The AUSM-8T1/E1 has the following features:
AUSM-8T1/E1 LED indicators are described in Table 4-2. All LEDs are located on the faceplate of the front card.
Table 4-2 AUSM-8T1/E1 LED Indicators
An illustration of an AUSM-8T1/E1 front card is shown in Figure 4-10.
An illustration of the IMATM cards is provided in Figure 4-11.
The IMATM is a two-card set consisting of a function module front card and a line module back card. The following front card and line module sets are available:
The shelf may contain one or multiple IMATM card sets in any available service module slot. 1:1 IMATM redundancy is achieved by installing two card sets and a Y-cable.
The IMATM performs no MGX 8220 functions and is solely an extension to the BPX 8620 BNI card. The BPX 8620 can use up to eight T1 or E1 lines as a trunk (instead of a single T3 or E3 line) by using an IMATM card in the MGX 8220 shelf.
The IMATM accepts trunk signals from the BPX 8620 BNI over a single T3 or E3 connection and inverse multiplexes over multiple T1 or E1 lines. The other end of the inverse multiplexed trunk is another IMATM card in a remote MGX 8220 shelf. (See Figure 4-12.)
The IMATM can also be used to connect a remote MGX 8220 shelf to a BPX 8620 hub as shown in Figure 4-13.
Up to eight T1 or E1 links in the inverse multiplexed channel can be configured, depending upon the bandwidth desired. Bandwidth of T1 links ranges from 1.54 Mbps for one link to 12.35 Mbps for all eight links. Bandwidth of E1 links ranges from 2 Mbps for one link to 16 Mbps for all eight links. The BNI port bandwidth is configured to match the IMATM bandwidth.
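The bandwidth ranges quoted above follow from the nominal T1 (1.544 Mbps) and E1 (2.048 Mbps) line rates; the document's figures are rounded. The sketch below just reproduces that multiplication for an assumed link count.

```python
# Sanity-check of the IMATM trunk-bandwidth figures above, using nominal
# T1 and E1 line rates. The document's quoted values are rounded.

T1_MBPS = 1.544
E1_MBPS = 2.048

def ima_bandwidth(line_rate_mbps, links):
    """Aggregate bandwidth of an inverse multiplexed trunk of n links."""
    assert 1 <= links <= 8, "the IMATM supports up to eight links"
    return line_rate_mbps * links

assert round(ima_bandwidth(T1_MBPS, 8), 2) == 12.35   # "12.35 Mbps" above
assert round(ima_bandwidth(E1_MBPS, 1), 2) == 2.05    # quoted as "2 Mbps"
```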
Additional links can be provisioned to provide some protection against link failure. To achieve this, the BNI trunk should be programmed with a statistical reserve equal to the bandwidth of the extra links. In the event of a link failure, a minor alarm occurs but no rerouting takes place. Without this feature, a single link failure causes a major alarm and all connections are rerouted over another trunk.
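The reserve sizing above is a simple product of spare-link count and per-link bandwidth. The sketch below illustrates that rule under an assumed nominal T1 rate; actual provisioning values come from the BNI trunk configuration, not from this arithmetic.

```python
# Hedged sketch of the protection sizing described above: the BNI trunk's
# statistical reserve equals the bandwidth of the extra (spare) links.
# The per-link rate is a nominal T1 figure, assumed for illustration.

T1_MBPS = 1.544

def statistical_reserve(total_links, working_links, link_mbps=T1_MBPS):
    """Reserve bandwidth (Mbps) covering the provisioned spare links."""
    spare = total_links - working_links
    return spare * link_mbps

# Provisioning 8 links when only 6 are needed leaves two spares:
assert statistical_reserve(8, 6) == 2 * T1_MBPS
```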
The IMATM LED indicators are described in Table 4-3. All LED indicators are located on the faceplate of the front card.
Table 4-3 IMATM LED Indicators
The 4-port Circuit Emulation Service Module (CESM) is a two-card set consisting of a CESM front card and a 4-port back card for T1 or E1 lines. The E1 line module cards are further categorized by BNC or DB15 connector type. The three possible line modules are
Up to 10 CESMs may be installed in a shelf in slots 5 through 14. A 1:N redundancy is supported through the SRM-T1E1 board.
The main function of the CESM cards is to provide a constant bit rate (CBR) service for T1/E1 ports over an ATM BPX 8620 network.
The CESM converts DS1/E1 data streams into CBR AAL1 cells for transport across the ATM network.
The CPE clock source should be configured in "loop" mode.
The CESM card supports either four T1 or four E1 ports. Each T1 or E1 port supports a single synchronous unstructured data stream at a data rate of 1.544 Mbps for T1 or 2.048 Mbps for E1. Data rates are not configurable. A single CESM card supports up to four connections.
Timing for the two ends of a CBR connection (termination at the MGX 8220 shelf) must be the same Stratum reference.
Performance monitoring of user applied structure (framing) is not supported.
The 4-port CESM card supports loopback diagnostics features through the addchanloop and addlnloop commands. Refer to the Cisco MGX 8220 Command Reference for details of these commands.
An illustration of the CESM card is provided in Figure 4-14.
The CESM 4-port LED indicators are described in Table 4-4. All LED indicators are located on the faceplate of the front card.
Table 4-4 CESM LED 4-Port Indicators
The 8-port Circuit Emulation Service Module (CESM) is a two-card set consisting of a CESM function module front card and either an 8-T1 or an 8-E1 line module back card. T1 lines use RJ-48 connectors; E1 line module cards use either RJ-48 or SMB connectors. The possible line modules are
Up to 10 CESMs may be installed in a shelf in slots 5 through 14. For T1 line versions, 1:N redundancy is supported either through redundant line modules or through the SRM-T1E1 board. Likewise, for T1/E1 versions, BERT and loopbacks are supported using the SRM.
A 1:N redundancy for E1 version is provided through redundant line modules only.
The main function of the CESM cards is to provide a constant bit rate (CBR) service for T1/E1 ports over ATM network.
The CESM converts DS1/E1 data streams into CBR AAL1 cells for transport across the ATM network. The T1/E1 versions support a choice of structured or unstructured data transfer on a per-physical-interface basis.
The CESM card supports loopback diagnostics features through the addlnloop command.
Note The addchanloop command is not supported on the 8-port CESM.
Refer to the Cisco MGX 8220 Command Reference for details of these commands.
The T1/E1 structured data transfer mode supports
The T1/E1 unstructured data transfer mode supports:
An illustration of the 8-port CESM cards is provided in Figure 4-15.
The CESM 8-port LED indicators are described in Table 4-5. All LEDs are located on the faceplate of the front card.
Table 4-5 CESM 8-Port LED Indicators
The available MGX 8220 back cards are as follows.
The MGX 8220 shelf provides back cards for service modules that connect to 4 T1, 4 E1, 8 T1, and 8 E1 lines. Depending upon the number of ports and the type of line (T1 or E1), DB-15, BNC, RJ-48, and SMB connectors are used. The possible back cards (see Figure 4-16) are
The back cards provide the physical line connections to either the T1 or E1 lines and communicate with their front cards through the MGX 8220 backplane. A front card/back card set must always be installed in the same slot position.
The FRSM-HS1 uses a back card that supports 4 X.21 ports using DB-15 connectors. Each port can support up to 4 Mbps.
The X.21 physical interface specifies a DB-15 female connector (DCE type according to ISO 4903). Pin functions can be controlled in software to change from DCE to DTE. A converter cable can be used to convert X.21 to V.35, if necessary.
Table 4-6 lists the supported line speeds for the FRSM-HS1.
The HSSI (High-Speed Serial Interface) back card supports two HSSI ports using female SCSI-II connectors.
DTE-to-DCE control is achieved through a combination of software controls and a "NULL MODEM" connector.
When the SRM is used to provide 1:N redundancy for T1/E1 service modules, the standby (redundant) card set uses a special redundancy back card. There are three types of 4-port redundancy back cards: R-DB15-4T1, R-DB15-4E1, and R-BNC-4E1. There are three types of 8-port redundancy back cards: R-RJ48-T1-BC, R-RJ48-E1-BC, and R-SMB-E1-BC. The one you use depends upon the number of ports, the line type (T1 or E1), and the connector type (RJ-48, DB-15, or BNC) of your service module. (See Figure 4-16 and Figure 4-17.) When 1:N redundancy is invoked, the physical lines to the failed service module back card are still used; however, the signals are routed to and from the redundant back card.
Posted: Thu Nov 20 21:43:17 PST 2003
All contents are Copyright © 1992--2003 Cisco Systems, Inc. All rights reserved.