This chapter provides information on configuring and managing trunks that have at least one endpoint on an IGX node. If a trunk has an endpoint on a different type of node, such as a BPX, refer to the appropriate product documentation for specific information on configuring trunks on those nodes (see the "Related Documentation" section).
For information about trunks on the BPX, see the "Configuring Trunks and Adding Interface Shelves" chapter in the Cisco BPX 8600 Installation and Configuration manual.
Trunks are internode communication links used to connect two nodes in a network. A trunk can connect any combination of IGX and BPX nodes.
The IGX supports trunks using the following service modules: the NTM, the UXM, and the UXM-E (see Table 4-1).
For information about the hardware configuration required to set up a specific type of trunk, see Table 4-2. For more information on the cards listed in Table 4-2, see the "Service Modules" section.
Table 4-2 Trunk Types Supported on the IGX
When determining which type of trunk to configure, consider what features are supported by your available hardware, switch software release, and firmware image (see Table 4-3).
Table 4-3 Trunk Features Supported on the IGX
Feature | Description | Service Module | See |
---|---|---|---|
Virtual trunking | Configures a trunk over a public ATM network, connecting two private subnets. | | The "Virtual Trunking on the IGX" section and the "Setting Up a Virtual Trunk" section |
Inverse multiplexing over ATM (IMA) | Combines several T1 or E1 links to form a trunk with larger bandwidth. | | The "IMA on the IGX" section |
Virtual switch interface (VSI) | Configures the IGX to allow allocation of switch resources to external controllers for call management or connection with other protocols (such as MPLS). | | Chapter 8, "Cisco IGX 8400 Series ATM Service" |
Chapter 2, "Cisco IGX 8400 Series Cards," provides additional information on features supported on each card. For switch software and firmware compatibility and feature support information, refer to the release notes for the switch software or firmware release.
A virtual trunk is a trunk defined over a public ATM service. Virtual trunks provide customers with a cost-effective way to build a private network over a public ATM network. This hybrid network configuration allows private virtual trunks to use the mesh capabilities of the public network to interconnect the nodes found in the private network.
To establish connectivity through a public ATM cloud, you allocate virtual trunks between the nodes on the edges of the public ATM network. With a single trunk port from the private network attached to a single ATM port within the public ATM network, the node uses virtual trunks to connect to multiple destination nodes on the other side of the public ATM network. Functionally, the virtual trunk is equivalent to a virtual path connection (VPC) provided by the public ATM network. By using a virtual trunk number, you differentiate between the virtual trunks found within a physical port.
ATM equipment within the public ATM network must support virtual path switching and must move incoming cells based on the virtual path ID (VPI) in the cell header. Within the public ATM network, the virtual trunk is a VPC, and can support CBR, VBR and ABR traffic. Because the virtual trunk is switched using the VPI value, the 16 virtual connection ID (VCI) bits defined in the ATM cell header are passed transparently through to the destination node. The VPI must be provided by the public ATM network administrator or your ATM service provider.
Congestion management (resource management) cells are also passed transparently through the network. While Cisco-proprietary features such as Advanced CoS Management and Optimized Bandwidth Management may not be supported within the public ATM network, the information can still be carried through the public ATM network into the private, destination node.
The node's physical trunk interface to the public ATM network can be either a standard ATM UNI or NNI interface, as specified by the public ATM network administrator or ATM service provider. If the physical trunk interface is specified as NNI, an additional four bits of VPI addressing space become available.
Note The virtual trunk cannot provide a clock for transport across the public ATM network.
The VPI value across the virtual trunk is identical for all cells on the virtual trunk. However, the VCI will differ according to the final destination of the cell. Before the cell enters the public ATM network on the virtual trunk, the cell header is translated to the user-configured VPI value for the trunk and a unique VCI value is assigned to the cell by switch software. As cells are received from the public ATM network by a BPX or IGX, these VPI and VCI values are mapped back to the appropriate VPI and VCI addresses used by the node for cell forwarding.
The IGX supports only the ATM-NNI and ATM-UNI cell header formats. The ATM-NNI cell header lacks the GFC (Generic Flow Control) field found in the ATM-UNI cell header, so those four bits are added to the VPI to give a 12-bit VPI on ATM-NNI virtual trunks.
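As an illustration of the two header formats (a minimal sketch of the standard field layout, not the switch's internal implementation), the following Python function extracts the VPI and VCI from the first four bytes of an ATM cell header; the function name and sample bytes are purely illustrative.

```python
def parse_atm_header(header: bytes, nni: bool) -> tuple[int, int]:
    """Return (vpi, vci) from the first 4 bytes of an ATM cell header.

    On a UNI header the top 4 bits carry the GFC field and the VPI is
    8 bits wide; on an NNI header those 4 bits become part of the VPI,
    giving the 12-bit VPI described above. The 16-bit VCI occupies the
    same position in both formats.
    """
    b0, b1, b2, b3 = header[:4]
    if nni:
        vpi = (b0 << 4) | (b1 >> 4)            # 12-bit VPI: 0-4095
    else:
        vpi = ((b0 & 0x0F) << 4) | (b1 >> 4)   # 8-bit VPI: 0-255
    vci = ((b1 & 0x0F) << 12) | (b2 << 4) | (b3 >> 4)  # 16-bit VCI
    return vpi, vci

# Example: the same 4 header bytes interpreted as UNI and as NNI.
hdr = bytes([0x12, 0x34, 0x56, 0x70])
print(parse_atm_header(hdr, nni=False))  # (35, 17767)  -> 8-bit VPI
print(parse_atm_header(hdr, nni=True))   # (291, 17767) -> 12-bit VPI
```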
See Table 4-4 for a summary of VPI and VCI values.
Table 4-4 Values Used in VPI and VCI Addressing
Address Type | Value Range for UNI | Value Range for NNI |
---|---|---|
VPI (configured for the virtual trunk) | 1-255 | 1-4095 |
VCI | Passed transparently (16 bits) | Passed transparently (16 bits) |
Note VPCs cannot be routed over a virtual trunk, due to the way virtual trunks are represented in the public ATM network.
For information on virtual trunk support and compatibility, see the "Virtual Trunks Supported on the IGX" section. For information on setting up a virtual trunk, see the "Configuring a Virtual Trunk on the IGX" section.
Note Virtual trunks originating from the UXM and UXM-E cannot terminate on the BPX BNI card. For information on virtual trunks and the BPX BNI card, see the "Virtual Trunking" section in Chapter 1, "The BPX Switch: Functional Overview," in the Cisco BPX 8600 Series Installation and Configuration guide.
Note You cannot use a virtual trunk as an interface shelf (feeder) trunk; similarly, you cannot configure an interface shelf trunk to act as a virtual trunk, nor can you terminate interface shelf (feeder) connections on a virtual trunk.
Virtual trunks are not supported in mixed networks, and require switch software Release 9.2 or later. See Table 4-5 for virtual trunk connections supported on the IGX.
Note The IGX supports a maximum of 15 virtual trunks per card, and a combined maximum of 32 logical trunks (physical and virtual trunks) per node.
IMA allows you to group physical T1 or E1 links to form a logical trunk with a higher data rate than a single T1 or E1 trunk. IMA provides the following features:
Note The IMA trunk does not fail unless the number of active ports falls below a user-specified retained link threshold.
The IMA feeder node feature provides redundancy in case one of the physical lines on an IMA trunk fails, reducing the chance that a single line failure takes the feeder trunk out of service. In addition, this feature allows you to configure services on a feeder node instead of a routing node.
See Figure 4-2 for an example of an IGX IMA feeder node topology.
This section provides information on configuring a trunk with at least one endpoint on an IGX node. For information on configuring a trunk with one endpoint on a BPX node, also refer to the "Configuring Trunks and Adding Interface Shelves" chapter in the Cisco BPX 8600 Installation and Configuration guide.
When configuring a trunk with an endpoint on an IGX node, you will complete the following tasks:
1. Plan bandwidth usage (see the "Planning Bandwidth Usage" section).
2. Set up the trunk (see the "Setting Up a Trunk" section).
3. (Optional) Configure the virtual trunk (see the "Setting Up a Virtual Trunk" section).
4. (Optional) Configure IMA (see the "IMA on the IGX" section).
5. Configure connections onto the trunk (see the "IGX Line Configuration" section).
Before setting up a trunk on a node, you should plan bandwidth usage for each trunk with an endpoint on the node.
To optimize the node's ability to handle network traffic, you should plan for cellbus bandwidth allocation on the IGX node (see the "Planning for Cellbus Bandwidth Allocation" section).
To optimize available bandwidth on an IMA trunk or line, you should calculate the maximum transfer and receive rates for the IMA trunk or line (see the "Bandwidth on IMA Trunks and Lines" section).
To reduce the risk of failed connections on a trunk, you should estimate the connection load and calculate the statistical reserve that will be configured for the trunk.
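The planning arithmetic itself is simple: the statistical reserve plus the estimated load of connections already routed over the trunk must fit within the trunk rate. The sketch below illustrates this; the rates, reserve, and load figures are hypothetical placeholders, not recommended values.

```python
def remaining_capacity_cps(trunk_rate: int, statistical_reserve: int,
                           connection_loads: list[int]) -> int:
    """Bandwidth (cells per second) left for new connections after
    subtracting the statistical reserve and the estimated load of
    connections already routed over the trunk."""
    return trunk_rate - statistical_reserve - sum(connection_loads)

# Hypothetical trunk carrying three connections (all figures are placeholders).
left = remaining_capacity_cps(
    trunk_rate=96000,
    statistical_reserve=1000,
    connection_loads=[20000, 15000, 5000],
)
print(left)  # 55000 cps; loading the trunk beyond this risks failed connections
```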
Switch software on the NPM monitors and computes cellbus bandwidth requirements for each card installed in the node. However, for the UXM-E, you can reconfigure the card's cellbus bandwidth allocation in order to optimize the node's ability to handle network traffic.
Note ATM cell and FastPacket bandwidth on the cellbus is measured in universal bandwidth units (UBUs).
When the UXM-E reports the back card interface to the NPM, switch software allocates a default number of UBUs to the card (see Table 4-6). This default number can be changed using the following procedure:
Note When you use the dspbusbw command, a yes/no prompt asks if you want firmware to retrieve the usage values. If you enter "y," the UXM-E reads its registers, clears them, and restarts its statistics gathering. If you enter "n," switch software displays the current values stored on the NPM.
TimeSaver The Network Modeling Tool (NMT) helps you estimate the cellbus requirements using the projected load for all UXM-Es in the network.
Step 1 Display the current cellbus bandwidth usage for the card with the switch software dspbusbw command.
Step 2 Set the desired cellbus bandwidth allocation for the card with the switch software cnfbusbw command.
Step 3 Continue with planning bandwidth usage (see the "Bandwidth on IMA Trunks and Lines" section).
Table 4-6 Default Cellbus Bandwidth Allocations for UXM-E Interfaces
Interface Type | Ports | Default UBUs | Default Cell Traffic (cps) | Default Cell + FastPacket Traffic (cps and fps) | Maximum UBUs | Maximum Cell Traffic (cps) | Maximum Cell and FastPacket Traffic (cps and fps) |
---|---|---|---|---|---|---|---|
The transmit and receive rates for an IMA trunk or line are the sum of the rates of all physical lines in the group, minus the overhead used by the IMA protocol, as the following examples show:
For example, using an IMA line group defined as 8.1-4 with T1 lines, the following total bandwidth is possible:
TX (transmit) rate = RX (receive) rate = 24 x 4 DS0s - 1 DS0 = 95 DS0s
For an IMA line group defined as 8.1-5 with T1 lines, the following total bandwidth is possible:
TX rate = RX rate = 24 x 5 DS0s - 2 DS0s = 118 DS0s
If a physical line fails but the number of active lines stays at or above the retained link threshold, the switch automatically adjusts the total bandwidth downward to compensate for the failed physical line.
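A minimal sketch of this arithmetic follows. The per-link figure of 24 DS0s applies to T1; the overhead term (one DS0 per group of up to four links) is inferred from the two worked examples above rather than quoted from the IMA specification, so treat it as an illustration of the calculation, not a definitive rule.

```python
import math

def ima_group_ds0s(active_links: int, ds0s_per_link: int = 24) -> int:
    """Usable DS0s on an IMA trunk or line group built from T1 links.

    Overhead assumption (inferred from the worked examples above):
    one DS0 of IMA protocol overhead per group of up to four links.
    """
    overhead = math.ceil(active_links / 4)
    return active_links * ds0s_per_link - overhead

print(ima_group_ds0s(4))  # 95 DS0s, matching the 8.1-4 example
print(ima_group_ds0s(5))  # 118 DS0s, matching the 8.1-5 example

# If one line in the 8.1-4 group fails but the retained link threshold is
# still met, the switch recomputes the rate with three active links:
print(ima_group_ds0s(3))  # 71 DS0s
```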
See Table 4-7 for available port speeds with different combinations of T1 or E1 interfaces for an IMA trunk or line group.
Table 4-7 Available Trunk Speeds for IMA Trunk or Line Groups
Interface | Trunk Speed (DS0) | Trunk Speed (cps) |
---|---|---|
Before setting up a trunk, finish setting up the nodes (see Chapter 3, "Cisco IGX 8400 Series Nodes"). After setting up the nodes, follow this procedure to set up a trunk between the nodes:
Step 2 Activate the trunk by running the switch software uptrk command at each end of the trunk. An active trunk begins generating idle cells, which allows end-to-end communication.
Tip If you run the uptrk command at only one end of the trunk, the trunk shows up in an alarm state on the node. To clear the alarm, run the uptrk command at both ends of the trunk.
Step 3 Display the existing trunk parameters with the switch software dsptrkcnf command and determine which parameters need to be changed from the default values.
Step 4 Override the default values for the trunk by running the switch software cnftrk command at each end of the trunk.
Step 5 Add the trunk to the node with the switch software addtrk command. Adding the trunk causes the node to see it as a usable resource. You do not have to use the addtrk command on both ends of the trunk.
Note Virtual trunking is a purchased feature. Contact your Cisco account manager for more information (see the "Obtaining Technical Assistance" section).
Tip For information on setting up CoS, virtual slave interfaces, and other ATM services, see Chapter 8, "Cisco IGX 8400 Series ATM Service."
Before setting up a virtual trunk, you must have finished setting up the nodes to be connected with a virtual trunk. Follow this procedure to configure a virtual trunk on the IGX:
Step 2 Confirm that the correct front and back cards are in the correct slots and that there are no compatibility issues.
Step 3 Activate the trunk with the switch software uptrk slot.port.vtrk command.
Step 4 Change the VPI to the value obtained from your ATM service provider with the switch software cnftrk command. For UNI virtual trunks, the VPI can range from 1 to 255. For NNI virtual trunks, the VPI can range from 1 to 4095.
Step 5 (Optional) Configure the number of connection IDs and the available bandwidth for the virtual trunk with the switch software cnfrsrc command.
Step 6 Add the virtual trunk with the switch software addtrk slot.port.vtrk command. You only need to use the addtrk command on one end of the trunk.
Note Each end of a virtual trunk can have a different port interface. However, both ends of the trunk must have the same trunk bandwidth, connection channels, cell format, and traffic classes.
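The VPI ranges from Step 4 can be summarized in a short check like the one below. This is only an illustration of the configured ranges, not a substitute for the validation that the cnftrk command itself performs.

```python
def vpi_in_range(vpi: int, interface_type: str) -> bool:
    """Return True if the VPI is valid for a virtual trunk on the given
    interface type ("UNI" or "NNI"), per the ranges in Step 4 above."""
    if interface_type.upper() == "UNI":
        return 1 <= vpi <= 255
    if interface_type.upper() == "NNI":
        return 1 <= vpi <= 4095
    raise ValueError("interface_type must be 'UNI' or 'NNI'")

print(vpi_in_range(300, "UNI"))  # False: 300 exceeds the 8-bit UNI range
print(vpi_in_range(300, "NNI"))  # True: fits in the 12-bit NNI range
```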
Managing IGX trunks primarily involves logging events, reconfiguring trunks as required by changing networking environments, and responding to alarms or error messages by troubleshooting the trunk as necessary. For information on troubleshooting a trunk on the IGX, see the "IGX Trunk Troubleshooting" section.
All trunk log events display the trunk number. Trunk event logs are accessible through the NMS or by using the switch software dsplog command at the CLI.
See Table 4-8 for an example of IGX event log messages.
Tip Some trunk parameters cannot be changed without first deleting the trunk. Check the full command description for the switch software cnftrk command in the Cisco WAN Switching Command Reference for details on the parameters that require trunk deletion.
Note MPLS partitions are not affected by the reconfiguration of trunks or lines.
Before reconfiguring a trunk, check the current trunk parameters using the switch software dsptrkcnf command. Then follow this procedure to reconfigure the trunk:
Step 2 (For parameters that require trunk deletion) Delete the trunk by entering the switch software deltrk command on the local node.
Step 3 Reconfigure the trunk on the local node with the switch software cnftrk command.
Step 4 Open a virtual terminal session with the remote node with the switch software vt command.
Step 5 Reconfigure the trunk on the remote node with the switch software cnftrk command.
Step 6 Enter the switch software bye command to close the virtual terminal session.
Step 7 If you deleted a trunk, use the switch software addtrk command on the local node to add the trunk.
To remove a trunk, follow this procedure:
Step 1 Delete the trunk by entering the switch software deltrk command on either node.
Step 2 Deactivate (down) the trunk by entering the switch software dntrk command on both nodes.
This section contains information on trunk alarms and switch software commands related to troubleshooting trunks on the IGX. These alarms and error messages display on the nodes serving as endpoints for the trunk.
For information on trunk alarms, see the "Trunk Alarms" section.
For information on troubleshooting procedures, see the "Troubleshooting an IGX Node" section in the Cisco IGX 8400 Series Installation Guide.
Trunk alarms indicate operational problems on the trunk and can be used to troubleshoot the trunk. Physical trunk alarms also apply to virtual trunks: a physical alarm on a port affects all virtual trunks on that port. For more information on trunk alarms, see Table 4-9.
Note Switch software supports per-trunk statistical alarming on cell drops from each of the advanced CoS management queues on a virtual trunk.
Table 4-9 Physical and Logical Trunk Alarms
Alarm Type | Physical (T1) | Physical (E1) | Physical (T3) | Physical (E3) | Physical (SONET) | Logical | Statistical | Integrated |
---|---|---|---|---|---|---|---|---|
Full command descriptions for the switch software commands listed in Table 4-10 can be accessed at one of the following links:
For information on IGX lines, refer to Chapter 5, "Cisco IGX 8400 Series Lines."
For installation and basic configuration information, see the Cisco IGX 8400 Series Installation Guide, Chapter 1, "Cisco IGX 8400 Series Product Overview."
For more information on switch software commands, refer to the Cisco WAN Switching Command Reference, Chapter 1, "Command Line Fundamentals."