
Tag Switching

This chapter contains an overview of tag switching and instructions for configuring the BPX 8650 for the tag switching feature.

Introduction

Tag switching enables routers at the edge of a network to apply simple tags to packets (frames), allowing devices in the network core to switch packets according to these tags with minimal lookup activity. Tag switching in the network core can be performed by switches, such as ATM switches, or by existing routers.

Tag Switching Benefits

For multi-service networks, tag switching enables the BPX switch to provide ATM, frame relay, and IP Internet service all on a single platform in a highly scalable way. Support of all these services on a common platform provides operational cost savings and simplifies provisioning for multi-service providers.

For Internet service providers (ISPs) using ATM switches at the core of their networks, tag switching enables the Cisco BPX 8600 series and the LightStream 1010 ATM switches to provide a more scalable and manageable networking solution than simply overlaying IP on an ATM network. Tag switching avoids the scalability problem of too many router peers and provides support for a hierarchical structure within an ISP's network, improving scalability and manageability.

By integrating the switching and routing functions, tag switching combines the reachability information provided by the router function with the traffic engineering optimizing capabilities of the switches.

When integrated with ATM switches, tag switching takes advantage of switch hardware that is optimized to take advantage of the fixed length of ATM cells, and to switch these cells at wire speeds.

Tag Switching Overview

Tag switching is a high-performance, packet (frame) forwarding technology. It integrates the performance and traffic management capabilities of data link layer (Layer 2) switching with the scalability and flexibility of network layer (Layer 3) routing.

Tag switching enables switch networks to perform IP forwarding. It is applicable to networks using any layer 2 switching, but has particular advantages when applied to ATM networks. It integrates IP routing with ATM switching to offer scalable IP-over-ATM networks.

Tag switching is based on the concept of label switching, in which packets or cells are assigned short, fixed length labels. Switching entities perform table lookups based on these simple labels to determine where data should be forwarded.

In conventional layer 3 forwarding, as a packet traverses the network, each router extracts all the information relevant to forwarding from the layer 3 header. This information is then used as an index for a routing table lookup to determine the packet's next hop. This is repeated at each router across a network.

In the most common case, the only relevant field in the header is the destination field. However, as other fields could be relevant, a complex header analysis must be done at each router through which the packet travels.

In tag switching the complete analysis of the layer 3 header is performed just once, at the tag edge router at each edge of the network. It is here that the layer 3 header is mapped into a fixed length label, called a tag.

At each router across the network, only the tag needs to be examined in the incoming cell or packet in order to send the cell or packet on its way across the network. At the other end of the network, a tag edge router swaps the label out for the appropriate header data linked to that label.

Elements in a Tag Switching Network

The basic elements in a tag switching network are tag edge routers, tag switches, and a tag distribution protocol as defined in the following:

  Tag edge routers are located at the boundaries of a network, performing value-added network layer services and applying tags to packets. These devices can be either routers, such as the Cisco 7500, or multilayer LAN switches, such as the Cisco Catalyst 5000.
  Tag switches are the devices that switch tagged packets or cells based on the tags. Tag switches may also support full Layer 3 routing or Layer 2 switching in addition to tag switching. Examples of tag switches include the Cisco LightStream 1010, Cisco BPX 8650, Cisco 7500, and future gigabit router systems from Cisco.
  The tag distribution protocol (TDP) is used in conjunction with standard network layer routing protocols to distribute tag information between devices in a tag switched network.

Tag Switching Operation at Layer 3

Tag switching operation comprises two major components: forwarding and control.

Forwarding

The forwarding component is based on label swapping. When a tag switch (or router in a packet context) receives a packet with a tag, the tag is used as an index in a Tag Forwarding Information Base (TFIB). Each entry in the TFIB consists of an incoming tag and one or more sub-entries of the form

<outgoing tag, outgoing interface, outgoing link level information>

For each sub-entry, the tag switch replaces the incoming tag with the outgoing tag and sends the packet on its way over the outgoing interface with the corresponding link level information.

Figure 9-1 shows an example of tag switching. It shows an untagged IP packet with destination 128.89.25.4 arriving at Router A (RTA). RTA checks its TFIB and matches the destination with prefix 128.89.0.0/16. (The /16 denotes 16 network masking bits per the Classless Interdomain Routing (CIDR) standard.) The packet is tagged with an outgoing tag of 4 and sent toward its next hop RTB. RTB receives the packet with an incoming tag of 4 that it uses as an index to the TFIB. The incoming tag of 4 is swapped with outgoing tag 9, and the packet is sent out over interface 0 with the appropriate layer 2 information (e.g., MAC address) according to the TFIB. RTB did not have to do any IP prefix lookup based on the destination as was done by RTA. Instead, RTB used the tag information to do the tag forwarding. When the packet arrives at RTC, RTC removes the tag from the packet and forwards it as an untagged IP packet.
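The walk-through above can be sketched as simple table lookups. The sketch below is illustrative only; the dictionary layout and function names are assumptions for this example, not the actual BPX data structures.

```python
# Hypothetical sketch of tag forwarding (not the actual BPX/TFIB implementation).
# An edge router does one Layer 3 lookup; core devices only swap tags.

EDGE_FIB = {"128.89.0.0/16": (4, "if1")}      # RTA: prefix -> (outgoing tag, interface)
TFIB_RTB = {4: (9, "if0", "next-hop MAC")}    # RTB: incoming tag -> sub-entry

def edge_apply_tag(prefix):
    """RTA: one-time Layer 3 lookup maps the route to an initial tag."""
    tag, interface = EDGE_FIB[prefix]
    return tag, interface

def core_swap_tag(tfib, incoming_tag):
    """Core switch: the incoming tag directly indexes the TFIB; no IP lookup."""
    outgoing_tag, interface, link_info = tfib[incoming_tag]
    return outgoing_tag, interface, link_info

tag, _ = edge_apply_tag("128.89.0.0/16")        # RTA tags the packet with 4
tag, intf, _ = core_swap_tag(TFIB_RTB, tag)     # RTB swaps 4 -> 9, sends on interface 0
print(tag, intf)  # 9 if0
```

At RTC, the final tag would simply be removed and the packet forwarded as untagged IP, as described above.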

Control

The control component consists of tag allocation and maintenance procedures. The control component is responsible for creating tag bindings between a tag and IP routes, and then distributing these tag bindings to the tag switches.

The tag distribution protocol (TDP) is a major part of the control component. TDP establishes peer sessions between tag switches and exchanges the tags needed by the forwarding function.


Figure 9-1: Tag Forwarding Information Base (TFIB) in an IP Packet Environment


Tag Switching in an ATM WAN

With tag switching over an ATM network, the forwarding and control components can be described as follows:

Forwarding

Figure 9-2 shows the forwarding operation of an ATM switch in which the tags are designated VCIs. In Figure 9-2, an untagged IP packet with destination 128.89.25.4 arrives at Router A (RTA). RTA checks its TFIB and matches the destination with prefix 128.89.0.0/16. RTA converts the AAL5 frame to cells, and sends the frame out as a sequence of cells on VCI 40. RTB, which is an ATM Tag Switch Router (TSR) controlled by a routing engine, performs a normal switching operation by switching incoming cells on interface 2/VCI 40 to interface 0/VCI 50.


Figure 9-2: Tag Forwarding Information Base (TFIB) in an ATM Environment


Control

ATM-TSRs use the downstream-on-demand allocating mechanism. Each ATM-TSR maintains a forwarding information base (FIB) that contains a list of all IP routes that the ATM-TSR uses. This function is handled by the routing engine function which is either embedded in the switch or runs on an outside controller. For each route in its forwarding information base, the ATM Edge TSR identifies the next hop for a route. It then issues via TDP a request to the next hop for a tag binding for that route.

When the next hop ATM-TSR receives the request, it allocates a tag and creates an entry in its TFIB, with the allocated tag as the incoming tag. The next action depends on whether the tag allocation is in optimistic mode or conservative mode. In optimistic mode, the ATM-TSR immediately returns the binding between the incoming tag and the route to the TSR that sent the request. However, this may mean that it is not immediately able to forward tagged packets that arrive, as it may not yet have an outgoing tag/VCI for the route. In conservative mode, it does not immediately return the binding, but waits until it has an outgoing tag.

In optimistic mode, when the TSR that initiated the request receives the binding information, it creates an entry in its TFIB and sets the outgoing tag in the entry to the value received from the next hop. The next hop ATM-TSR then repeats the process, sending a binding request to its next hop, and the process continues until all tag bindings along the path are allocated.

In conservative mode, the next hop TSR sends a new binding request to its next hop, and the process repeats until the destination ATM edge TSR is reached. It then returns a tag binding to the previous ATM-TSR, causing it to return a tag binding, and so on until all the tag bindings along the path are established.

Figure 9-3 shows an example of conservative allocation. ATM edge TSR RTA is an IP routing peer to ATM-TSR RTB. In turn, ATM-TSR RTB is an IP routing peer to ATM-TSR RTC. IP routing updates are exchanged over VPI/VCI 0/32 between RTA-RTB and RTB-RTC. The sequence is as follows:

    1. RTA sends a tag binding request toward RTB in order to bind prefix 128.89.0.0/16 to a specific VCI.

    2. RTB allocates VCI 40 and creates an entry in its TFIB with VCI 40 as the incoming tag.

    3. RTB then sends a bind request toward RTC.

    4. RTC issues VCI 50 as a tag.

    5. RTC sends a reply to RTB with the binding between prefix 128.89.0.0/16 and the VCI 50 tag.

    6. RTB sets the outgoing tag to VCI 50.

    7. RTB sends a reply to RTA with the binding between prefix 128.89.0.0/16 and the VCI 40 tag.

    8. RTA then creates an entry in its TFIB and sets the outgoing tag to VCI 40.

Optimistic mode operation is similar to that shown in Figure 9-3, except that the events labeled 7 and 8 in the figure may occur concurrently with event 3.
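The conservative-mode sequence above can be sketched as a recursive request: each TSR allocates an incoming tag but replies only after its downstream neighbor has replied. This is a hypothetical simulation of steps 1 through 8, not Cisco TDP code; class and attribute names are assumptions.

```python
# Sketch of downstream-on-demand tag allocation, conservative mode
# (hypothetical simulation of the numbered steps above).

class TSR:
    def __init__(self, name, next_hop=None, next_vci=40):
        self.name = name
        self.next_hop = next_hop   # downstream ATM-TSR; None at the destination edge
        self.next_vci = next_vci   # next free VCI this node will hand out
        self.tfib = {}             # prefix -> {"in": vci, "out": vci}

    def request_binding(self, prefix):
        """Allocate an incoming tag, then wait for the downstream binding
        before replying (conservative mode)."""
        incoming = self.next_vci
        self.next_vci += 10
        entry = {"in": incoming, "out": None}
        self.tfib[prefix] = entry
        if self.next_hop is not None:
            # Recurse toward the destination edge TSR; reply only once
            # an outgoing tag is in hand.
            entry["out"] = self.next_hop.request_binding(prefix)
        return incoming

rtc = TSR("RTC", next_vci=50)
rtb = TSR("RTB", next_hop=rtc, next_vci=40)
out_tag_for_rta = rtb.request_binding("128.89.0.0/16")
print(out_tag_for_rta)                    # 40: the tag RTA sets as its outgoing tag
print(rtb.tfib["128.89.0.0/16"]["out"])   # 50: RTB's outgoing tag, issued by RTC
```

In optimistic mode, `request_binding` would return `incoming` before the recursive call completed, which is why a reply can arrive before the downstream tag exists.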


Figure 9-3: Downstream on Demand Tag Allocation, Conservative Mode Shown


Tag Switching and the BPX 8650

With tag switching, the router function can be accomplished either by integrating the routing engine into the switch or by using a separate routing controller (associated router). The BPX 8650 tag switch combines a BPX switch with a separate router controller (Cisco 7200 or 7500 series router). This has the advantage of separating the various services (e.g., AutoRoute, SVCs, and tag switching) into separate logical spaces that do not interfere with one another. Figure 9-4 shows two scenarios: one in which the IP packets are applied to the network via the edge routers (either part of the BPX 8650 Tag Switches or independent 7500 Tag Edge Routers), and one in which IP packets are routed via a BPX 8620 to a BPX 8650 via Frame Relay permanent virtual circuits (PVCs).

Example 1: An IP packet is applied to the network via BPX 8650s on the edge of the network and then tag switching is used to forward the packet across the network via BPX 8650s. In this example the shortest path is not used, but rather the tag switch connection is routed across BPX 8650 TS-A, BPX 8650 TS-B, BPX 8650 TS-C, BPX 8650 TS-D, and 7500 TER-S. This particular routing path might, for example, have been selected with administrative weights set by the network operator. The designated tags for the cells transmitted across the network in this example are shown as 40, 60, 70, and 50, respectively. The router component of the tag switches that are located at the boundaries of the network (BPX 8650 TS-A, BPX 8650 TS-C, BPX 8650 TS-H), perform edge-routing network layer services including the application of tags to incoming packets. The tag edge routers, 7500 TER-S, 7500 TER-T, and 7500 TER-U, perform the same edge-routing network layer services in this example.

Example 2: An IP packet is routed to BPX 8650 TS-H at the interior of the network via BPX 8620 switch-F, using a Frame Relay PVC. The BPX switch interface for a Frame Relay PVC might be an MGX 8220 as shown. The applicable Frame Relay interface for BPX 8650 TS-H is connected via cable to a Frame Relay interface on its TSC, where tag switching is performed on the incoming IP packet. The designated tag switching cells are shown with a tag designation of 12. These tag switching cells are then forwarded to BPX 8650 TS-D, where they are converted back to an IP packet and routed to the CPE at the edge of the network as a Frame Relay PVC via an MGX 8220.

Tag Edge Router functionality is necessary to add and remove tags from IP packets, but not to switch tagged packets. Figure 9-4 shows 3 stand-alone Tag Edge Routers (TERs). These would typically be co-located with BPX 8650 Tag Switches in Points of Presence. However, the Tag Switch Controller in a BPX 8650 can also act as a TER if required.

In Figure 9-4, Tag Switches A, C, D and H use this combined Tag Switch/Tag Edge Router functionality. Only Tag Switch B acts purely as a Tag Switch. Note also that the Tag Edge Router performance of a BPX 8650 Tag Switch is significantly lower than its Tag Switching performance. Typically there will be several Tag Edge Routers (or combined TSC/TERs) for each BPX Tag Switch.


Figure 9-4: BPX Tag Switching


Virtual Switch Interfaces

Figure 9-5 shows how virtual switch interfaces are implemented by the BPX switch in order to facilitate tag switching. A virtual switch interface (VSI) provides a standard interface so that a resource in the BPX switch can be controlled by controllers other than the BPX controller card, such as a tag switch controller.

The tag switch controller is connected to the BPX switch using ATM T3/E3/OC3 interfaces on the TSC device (a 7200 or 7500 series router) and on a BXM card. The ATM OC3 interface on the 7200 router is provided by an ATM port adapter, on the 7500 router by an AIP or a VIP with ATM Port Adapter, and for the BXM front card by an ATM OC3 4-port or 8-port back card.


Figure 9-5: BPX Switch VSI Interfaces


A distributed slave model is used for implementing VSI in a BPX switch. Each BXM in a BPX switch is a VSI slave and communicates with the controller and, if needed, with other slaves when processing VSI commands. The VSI master sends a VSI message to one slave. Depending on the command, the slave either handles the command entirely by itself, or communicates with a remote slave to complete the command. For example, a command to obtain configuration information is processed by one slave only, whereas a command for connection setup causes the local slave to communicate with the remote slave in order to coordinate both endpoints of the connection.

Figure 9-6 shows a simplified example of a connection setup with endpoints on the same slave (BXM VSI), and an example of a connection setup with endpoints on different slaves (BXM VSIs) is shown in Figure 9-7.
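The dispatch logic described above can be sketched as follows. This is a conceptual illustration only; the function, slave names, and action strings are assumptions, not the VSI protocol itself.

```python
# Sketch of the distributed-slave model: the VSI master sends each command to
# one slave, which completes it locally or coordinates with the remote slave.

def handle_vsi_command(slaves, command, endpoints):
    """Route a VSI command to the slave owning the first endpoint."""
    local = slaves[endpoints[0]]
    if command == "get_config":
        return [f"{local} answers alone"]          # single-slave command
    if command == "connection_setup":
        remote = slaves[endpoints[1]]
        actions = [f"{local} programs endpoint {endpoints[0]}"]
        if remote == local:
            # Both endpoints on the same BXM: one slave handles everything.
            actions.append(f"{local} also programs endpoint {endpoints[1]}")
        else:
            # Endpoints on different BXMs: the local slave coordinates
            # with the remote slave to complete the connection.
            actions.append(f"{local} forwards setup to {remote}")
            actions.append(f"{remote} programs endpoint {endpoints[1]}")
        return actions
    raise ValueError(command)

slaves = {"4.1": "BXM-slot4", "4.2": "BXM-slot4", "5.1": "BXM-slot5"}
print(handle_vsi_command(slaves, "connection_setup", ("4.1", "4.2")))  # one slave
print(handle_vsi_command(slaves, "connection_setup", ("4.1", "5.1")))  # two slaves
```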


Figure 9-6: Connection Setup, End Points on Same VSI Slave



Figure 9-7: Connection Setup, End Points on Different VSI Slaves


Tag Switching Resource Configuration Parameters

This section describes resource partitioning for tag switching.

Summary

Most tag switching configuration, including the provisioning of connections, is performed directly by the Tag Switch Controller. This is discussed separately; refer to the Tag Switching for the Cisco 7500/7200 Series Routers documentation. Configuration for tag switching on the BPX 8650 itself consists of basic VSI configuration, including resource partitioning.

The following items need to be configured or checked on the BPX 8650:

  On each interface (port or trunk) on the BXM cards used for tag switching, resources must be divided between traditional PVC connections and tag switching connections. The traditional PVC connections are configured directly on the BPX platform, and tag switching connections are set up by the TSC using the VSI.
  As with all ATM switches, the BPX switch supports up to a specified number of connections. On the BPX switch, the number of connections supported depends on the number of port/trunk cards installed. On each interface, space for connections is divided between traditional BPX switch permanent virtual circuit (PVC) connections and Tag Switching VCs (TVCs). The details of connection partitioning using the cnfrsrc command are discussed later in this section.
  The Qbins used for tag switching should be automatically configured correctly, but it is possible to change the configuration manually. Consequently, the configuration of the queues should be checked as part of the process of enabling tag switching. Configuration of these parameters using the cnfqbin command is discussed later in this chapter.
  A trunk must be enabled as a VSI control interface to allow a TSC to be connected. This is done using the addshelf command and selecting the VSI option.

Configuring VSI LCNs

In the first release of tag switching, each BXM card supports 16k connections in total, including PVCs, tag switching VSI connections, and connections used for internal signaling.

On the BXM, the ports are grouped into port groups, and a certain number of connections is available to each port group. For example, an 8-port OC3 BXM has two port groups, consisting of ports 1-4 and 5-8, respectively.

Each port group for the various versions of the BXM cards has a separate connection pool as specified in Table 9-1.


Table 9-1: BXM Port Groups

BXM Card Type   Number of Port Groups   Port Group Size   LCN Limit per Port Group   Average Connections per Port
8-T3/E3         1                       8 ports           16k                        2048
12-T3/E3        1                       12 ports          16k                        1365
4-OC3           2                       2 ports           8k                         4096
8-OC3           2                       4 ports           8k                         2048
1-OC12          1                       1 port            16k                        16384
2-OC12          2                       1 port             8k                        8192
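Table 9-1 can be expressed as a small lookup, with a helper mapping a port number to its port group and LCN limit. This is a planning sketch only; the function name is an assumption, and the values are taken directly from the table above.

```python
# Table 9-1 as a lookup table (values from the chapter).
BXM_PORT_GROUPS = {
    # card type: (number of port groups, ports per group, LCN limit per group)
    "8-T3/E3":  (1, 8, 16384),
    "12-T3/E3": (1, 12, 16384),
    "4-OC3":    (2, 2, 8192),
    "8-OC3":    (2, 4, 8192),
    "1-OC12":   (1, 1, 16384),
    "2-OC12":   (2, 1, 8192),
}

def port_group(card_type, port):
    """Return (group number, LCN limit) for a 1-based port on the card."""
    groups, size, lcn_limit = BXM_PORT_GROUPS[card_type]
    group = (port - 1) // size + 1
    if group > groups:
        raise ValueError(f"{card_type} has no port {port}")
    return group, lcn_limit

print(port_group("8-OC3", 5))   # (2, 8192): ports 5-8 form the second group
```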

For tag switching, connections are allocated to VSI partitions. On the BPX 8650, for Release 9.1, only one VSI partition is used. In the future, other VSI partitions may be used to support controllers other than the Tag Switch Controller (e.g., 7200 and 7500 series routers). Also, currently there is only one VSI partition per port, but in the future multiple VSI partitions may be assigned to a given port.

When configuring connection partitioning for a BXM card, with one VSI partition per port, a number of connection spaces (LCNs) are assigned to each port as listed in Table 9-2. The cnfrsrc command is used to configure partition resources.


Note When configuring the port using the cnfrsrc command, the term LCN is used in place of connection.

Table 9-2: Port Connection Allocations

AutoRoute LCNs (cnfrsrc cmd parameter: maxpvclcns; variable: a(x))
    Represents the number of AutoRoute (PVC) LCNs configured for a port.

Minimum VSI LCNs for partition 1 (cnfrsrc cmd parameter: minvsilcns; variable: n1(x))
    Represents the guaranteed minimum number of LCNs configured for the port VSI partition. This value is not necessarily always available; reaching it depends on FIFO access to the unallocated LCNs in the port group common pool.

Maximum VSI LCNs for partition 1 (cnfrsrc cmd parameter: maxvsilcns; variable: m1(x))
    Represents the maximum number of LCNs configured for the port VSI partition. This value is not necessarily reached; it depends on FIFO access to the unallocated LCNs in the port group common pool.

(where x is the port number, and subscript "1" is the partition number)

AutoRoute is guaranteed to have its assigned connection spaces (LCNs) available. Tag switching uses one connection space (LCN) per Tag VC (TVC). This is usually one connection space (LCN) per source-destination pair using the port, where the sources and destinations are tag edge routers.

Beyond the guaranteed minimum number of connection spaces (LCNs) configured for a port VSI partition, a tag switching partition uses unallocated LCNs on a FIFO basis from the common pool shared by all ports in the port group. These unallocated LCNs are accessed only after a port partition has reached its guaranteed minimum limit, "minvsilcns", as configured by the cnfrsrc command.
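The allocation policy just described can be sketched as follows: a partition draws on its guaranteed minimum first, then competes on a first-come first-served basis for the port group's common pool, up to its configured maximum. This is an illustrative model under those stated assumptions, not BPX firmware logic.

```python
# Sketch of VSI LCN allocation for one port partition (illustrative only).

class PortPartition:
    def __init__(self, minvsilcns, maxvsilcns):
        self.minvsilcns = minvsilcns     # guaranteed LCNs for this partition
        self.maxvsilcns = maxvsilcns     # ceiling; not guaranteed to be reached
        self.in_use = 0

    def allocate_tvc(self, common_pool):
        """Reserve one LCN for a new Tag VC; return True on success."""
        if self.in_use < self.minvsilcns:
            self.in_use += 1             # within the guaranteed minimum
            return True
        if self.in_use < self.maxvsilcns and common_pool["free"] > 0:
            common_pool["free"] -= 1     # FIFO draw from the port group pool
            self.in_use += 1
            return True
        return False                     # partition maximum or pool exhausted

pool = {"free": 2}                       # tiny pool to show exhaustion
part = PortPartition(minvsilcns=1, maxvsilcns=4)
results = [part.allocate_tvc(pool) for _ in range(5)]
print(results)  # [True, True, True, False, False]
```

The first allocation succeeds from the guaranteed minimum; the next two drain the shared pool; further requests fail even though the partition has not reached its configured maximum, which is exactly why maxvsilcns is "not necessarily reached."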

Useful Default Allocations

Reasonable default values for all ports on all cards are listed in Table 9-3. If these values are not applicable, then other values may be configured using the cnfrsrc command.


Table 9-3: Port Connection Allocations, Useful Default Values

Connection Type                    Variable   Useful Default Value   cnfrsrc cmd parameter
AutoRoute LCNs                     a(x)       256                    maxpvclcns
Minimum VSI LCNs for partition 1   n1(x)      512                    minvsilcns
Maximum VSI LCNs for partition 1   m1(x)      16384                  maxvsilcns

Different types of BXM cards support different maximums. If you enter a value greater than the allowed maximum, a message is displayed with the allowable maximum.

Here, a(x) = 256, n1(x) = 512, and m1(x) = 16384.

The next section describes more rigorous allocations which may be configured in place of using these default allocations.

Details of More Rigorous Allocations

More rigorous allocations are possible, as may be desired when the default values are not applicable. In all cases, the LCN allocations for a port group must satisfy the following limit:

sum(a(x)) + sum(n1(x)) + t*270 <= g

In this expression, "a (x)" represents AutoRoute LCNs, "n1 (x)" represents the guaranteed minimum number of VSI LCNs, "t" is the number of ports in the port group that are configured as AutoRoute trunks, and "g" is the total number of LCNs available to the port group. Figure 9-8 shows the relationship of these elements.

The "270" value reflects the number of LCNs which are reserved on each AutoRoute trunk for internal purposes. If the port is configured in port rather than trunk mode, "t" = 0, and t*270 drops out of the expression.
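The limit can be written directly as a check. This is a planning sketch only; the function name is an assumption, while the formula and the example values come from the chapter (defaults from Table 9-3, 270 LCNs reserved per AutoRoute trunk).

```python
# Port-group LCN limit check: sum(a(x)) + sum(n1(x)) + t*270 <= g
# a: AutoRoute LCNs per port, n1: guaranteed minimum VSI LCNs per port,
# trunk_ports (t): ports configured as AutoRoute trunks,
# g: total LCNs available to the port group.

def lcn_allocation_ok(a, n1, trunk_ports, g):
    """True if the port group's allocations satisfy the published limit."""
    return sum(a) + sum(n1) + trunk_ports * 270 <= g

# Four-port group using the Table 9-3 defaults, all ports as trunks:
a = [256, 256, 256, 256]     # 1024 AutoRoute LCNs
n1 = [512, 512, 512, 512]    # 2048 guaranteed VSI LCNs
print(lcn_allocation_ok(a, n1, trunk_ports=4, g=8192))  # True: 4152 <= 8192
```

If the ports were in port mode rather than trunk mode, `trunk_ports` would be 0 and the `t*270` term would drop out, as noted above.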

For detailed information on the allocation of resources for VSI partitions, refer to the cnfrsrc command description in the section, Command Reference in this chapter.


Figure 9-8: Port VSI Partition LCN Allocation Elements



Note  Tag switching can operate on a BXM card configured for either trunk (network) or port (service) mode. If a BXM card is configured for port (service) mode, all ports on the card are configured in port (service) mode. If a BXM card is configured for trunk (network) mode, all ports on the card are configured for trunk (network) mode. When the card is configured for trunk mode, the trunks reserve some connection bandwidth.

Requirements

List of Terms

The following terms are defined for a tag switching context only, not for general situations:

ATM edge TSR—A tag switching router that is connected to the ATM-TSR cloud through TC-ATM interfaces. The ATM edge TSR adds tags to untagged packets and strips tags from tagged packets.

ATM-TSR—A tag switching router with a number of TC-ATM interfaces. The router forwards the cells from these interfaces using tags carried in the VPI and/or VCI field.

BPX switch—The BPX switch is a carrier quality switch, with trunk and CPU hot standby redundancy.

BPX-TSR—An ATM tag switch router consisting of a tag switch controller (7200 or 7500 series router) and a tag controlled switch (BPX switch).

BXM—Broadband Switch Module. ATM port and trunk card for the BPX switch.

CLI—Command line interface.

extended tag ATM interface—A new type of interface supported by the remote ATM switch driver and a particular switch-specific driver that supports tag switching over an ATM interface on a remotely controlled switch.

external ATM interface—One of the interfaces on the slave ATM switch other than the slave control port. It is also referred to as an exposed ATM interface, because it is available for connections outside of the tag controlled switch.

LCNs—A common pool of logical connection numbers is defined per port group. The partitions in the same port group share these LCNs. New connections are assigned LCNs from the common pool.

master control port—A physical interface on a TSC that is connected to one end of a slave control link.

Ships in the Night (SIN)—The ability to support both tag switching procedures and ATM Forum protocols on the same physical interface, or on the same router or switch platform. In this mode, the two protocol stacks operate independently.

slave ATM switch—An ATM switch that is being controlled by a TSC.

slave control link—A physical connection, such as an ATM link, between the TSC and the slave switch, that runs a slave control protocol such as VSI.

slave control port—An interface on the slave ATM switch that terminates the slave control link. The slave control protocol that the TSC uses to control the slave switch (for example, VSI) runs on the slave control link.

remote ATM switch driver—A set of interfaces that allow IOS software to control the operation of a remote ATM switch through a control protocol, such as VSI.

tag controlled switch—The tag switch controller and slave ATM switch that it controls, viewed together as a unit.

Tag switch controller (TSC)—An IOS platform that runs the generic tag switching software and is capable of controlling the operation of an external ATM (or other type of) switch, making the interfaces of the latter appear externally as TC-ATM interfaces.

tag switching router (TSR)—A Layer 3 router that forwards packets based on the value of a tag encapsulated in the packets.

TC-ATM interface—A tag switching interface where tags are carried in the VPI/VCI bits of ATM cells and where VC connections are established under the control of tag switching control software.

TFIB—Tag Forwarding Information Base (TFIB). A data structure and way of managing forwarding in which destinations and incoming tags are associated with outgoing interfaces and tags.

TVC—Tag switched controlled virtual circuit (TVC). A virtual circuit (VC) established under the control of tag switching. A TVC is not a PVC or an SVC. It traverses only a single hop in a tag-switched path (TSP), but may traverse several ATM hops if it exists within a VP tunnel.

VP tunnel—In the context of ATM tag switching, a VP tunnel is a TC-ATM interface that traverses one or more ATM switches that do not act as ATM-TSRs.

VSI—Virtual Switch Interface. The protocol that enables a TSC to control an ATM switch over an ATM link.

VSI slave—In a hardware context, a switch or a port card that implements the VSI. In a software context, a process that implements the slave side of the VSI protocol.

VSI master—In a hardware context, a device that controls a VSI switch (for example, a VSI tag switch controller). In a software context, a process that implements the master side of the VSI protocol.

Related Documents

Configuration Management

The BPX switch must be initially installed, configured, and connected to a network. Following this, connections can be added to the BPX switch.

For tag switching, the BPX node must be enabled for tag switching. The BXM cards that will be used to support tag switching connections must also be configured properly, including setting up resources for the tag switching VSIs. In addition, a Tag Switch Controller (7200 or 7500 series router) must be connected to one of the BXM cards configured for tag switching.

Instructions for configuring the BPX switch and BXM cards for tag switching are provided in the next section.

Instructions for configuring the router are provided in the applicable tag switch controller documentation.

Configuration Criteria

Tag switching for VSIs on a BXM card is configured using the cnfrsrc and cnfqbin commands. Qbin 10 is assigned to tag switching.

The cnfqbin Command

The cnfqbin command is used to adjust the threshold for the traffic arriving in Qbin 10 of a given VSI interface as a way of fine-tuning traffic delay.

If the cnfqbin command is used to set an existing Qbin to disabled, the egress of the connection traffic to the network is disabled. Re-enabling the Qbin restores the egress traffic.

The cnfrsrc Command

The cnfrsrc command is used to enable a VSI partition and to allocate resources to the partition. An example of the cnfrsrc command is shown in the following display. If the cnfrsrc command is used to disable a partition, the connections on that partition are deleted.

n4             TN    SuperUser      BPX 15    9.1    Apr. 4 1998 16:40 PST

Port/Trunk : 4.1

Maximum PVC LCNS:       256       Maximum PVC Bandwidth: 26000
Min Lcn(1) : 0          Min Lcn(2) : 0

Partition 1
Partition State :       Enabled
Minimum VSI LCNS:       512
Maximum VSI LCNS:       7048
Start VSI VPI:          2
End VSI VPI :           15
Minimum VSI Bandwidth : 26000     Maximum VSI Bandwidth : 100000

Last Command: cnfrsrc 4.1 256 26000 1 e 512 7048 2 15 26000 100000

Next Command:

A detailed description of the cnfrsrc parameters is provided later in this chapter in the Command Reference section under the heading cnfrsrc. A brief summary of the parameters and their use is provided in Table 9-4.


Table 9-4: cnfrsrc Parameter Summary

slot.port (example value: 4.1)
    Specifies the slot and port number for the BXM.

maxpvclcns (example value: 256)
    The maximum number of LCNs allocated for AutoRoute PVCs for this port.

maxpvcbw (example value: 26000)
    The maximum bandwidth of the port allocated for AutoRoute use.

partition (example value: 1)
    Partition number.

e/d (example value: e)
    Enables or disables the VSI partition.

minvsilcns (example value: 512)
    The minimum number of LCNs guaranteed for this partition.

maxvsilcns (example value: 7048)
    The total number of LCNs the partition is allowed for setting up connections. Cannot exceed the port group maximum shown by the dspcd command.

vsistartvpi (example value: 2)
    Should be set to "2" or higher for ports in trunk mode because "1" is reserved for AutoRoute. For ports in port mode it should be set to "1". By default the TSC (e.g., 7200 or 7500 series router) will use a starting VSI VPI of 1 or 2 for tag switching, whichever is available, defaulting to 1.

vsiendvpi (example value: 15)
    Two VPIs are sufficient for the current release, although it may be advisable to reserve a larger range of VPIs for later expansion, for example, VPIs 2-15.

vsiminbw (example value: 26000)
    The minimum port bandwidth allocated to this partition in cells/sec. Not used in this release; entered values are ignored.

vsimaxbw (example value: 100000)
    The maximum port bandwidth guaranteed to this partition. The actual bandwidth may be as high as the line rate. This value is used for VSI Qbin bandwidth scaling.
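The parameters in Table 9-4 map directly onto the single-line form of cnfrsrc. The helper below is hypothetical; only the resulting command string follows the syntax shown in this chapter.

```python
# Build the one-line cnfrsrc command from named parameters, in the order
# Table 9-4 lists them (illustrative helper; not part of the BPX CLI).

def build_cnfrsrc(slot_port, maxpvclcns, maxpvcbw, partition, enable,
                  minvsilcns, maxvsilcns, vsistartvpi, vsiendvpi,
                  vsiminbw, vsimaxbw):
    fields = [slot_port, maxpvclcns, maxpvcbw, partition,
              "e" if enable else "d", minvsilcns, maxvsilcns,
              vsistartvpi, vsiendvpi, vsiminbw, vsimaxbw]
    return "cnfrsrc " + " ".join(str(f) for f in fields)

cmd = build_cnfrsrc("4.1", 256, 26000, 1, True, 512, 7048, 2, 15, 26000, 100000)
print(cmd)  # cnfrsrc 4.1 256 26000 1 e 512 7048 2 15 26000 100000
```

The output matches the example command used throughout this chapter's displays.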

Configuration Example

The following initial configuration example for a BPX tag switching router is with respect to a BXM OC3 card located in slot 4 of the BPX switch, a Tag Switch Controller (e.g., 7500 or 7200 series router) connected to BXM port 4.1, and with connections to two tag switching routers in the network at BXM ports 4.2 and 4.3, respectively, as shown in Figure 9-9.


Note  Whether a BXM card operates in trunk or port mode is determined by how the first port is brought up. Once the first port is upped, the following ports can only be upped in the same mode, that is, by using either the upport or uptrk command, as applicable. For tag switching, the BXM may operate in either trunk or port mode.

Figure 9-9: BPX Tag Switching Router with BXM in Slot 4



Step 1   Log in to the BPX switch.

Step 2   Check the card status by entering the command:

  dspcds

The card status, for the card in slot 4 in this example, should be "standby".

If the card status is OK, proceed to step 4; otherwise, proceed to step 3.

Step 3   If the card does not come up in standby, perform the following actions as required:

  resetcd 4 h

Step 4   Enter the dspcd command to check the port group maximum that can be entered for the maxvsilcns parameter of the cnfrsrc command. In this example, the maximum value for a port group is 7048.

n4             TN    SuperUser      BPX 15    9.1    Apr. 4 1998 16:40 PST

Detailed Card Display for BXM-155 in slot 4

Status:         Active
Revision:       CD18
Serial Number:  693313
Fab Number:     28-2158-02
Queue Size:     228300
Support:        FST, 4 Pts, OC3, Vc Chnls:16320, PG[1]:7048, PG[2]:7048
                PG[1]:1,2, PG[2]:3,4,

Backcard Installed
Type:           LM-BXM
Revision:       BA
Serial Number:  688284
Supports:       8 Pts, OC3, MMF Md

Last Command: dspcd 4

Next Command:

Step 5   On the BXM in slot 4, bring up the ports 4.1, 4.2, and 4.3, as follows:


Note The following example enables ports 4.1, 4.2, and 4.3 in trunk mode with the uptrk command; they could also all be upped in port mode using the upport command. This is because tag switching and the VSI make no distinction between a "port" and a "trunk".
  uptrk 4.1
  uptrk 4.2
  uptrk 4.3

Sample Display:

n4             TN       SuperUser       BPX 15    9.1    Apr. 4 1998 16:39 PST

TRK    Type   Current Line Alarm Status       Other End
2.1    OC3    Clear - OK                      j4a/2.1
3.1    E3     Clear - OK                      j6c(AXIS)
5.1    E3     Clear - OK                      j6a/5.2
5.2    E3     Clear - OK                      j3b/3
5.3    E3     Clear - OK                      j5c(IPX/AF)
6.1    T3     Clear - OK                      j4a/4.1
6.2    T3     Clear - OK                      j3b/4
4.1    OC3    Clear - OK                      VSI(VSI)

Last Command: uptrk 4.1

Next Command:

Step 6   Port 4.1 is the slave interface to the tag switch controller. Configure the VSI partitions for port 4.1 as follows:

  cnfrsrc 4.1
  PVC LCNs: [256] {accept default value}
  max PVC bandwidth: 26000
  partition: 1
  enabled: e
  VSI min LCNs: 512
  VSI max LCNs: 7048 {varies with BXM type}
  VSI start VPI: 2
  VSI end VPI: 15
  VSI min b/w: 26000
  VSI max b/w: 100000

or with one entry as follows:

  cnfrsrc 4.1 256 26000 1 e 512 7048 2 15 26000 100000

Sample Display:

n4             TN       SuperUser       BPX 15    9.1    Apr. 4 1998 16:40 PST

Port/Trunk : 4.1
Maximum PVC LCNS:      256        Maximum PVC Bandwidth: 26000
Min Lcn(1) : 0                    Min Lcn(2) : 0
Partition 1
Partition State :         Enabled
Minimum VSI LCNS:         512
Maximum VSI LCNS:         7048
Start VSI VPI:            2
End VSI VPI :             15
Minimum VSI Bandwidth :   26000
Maximum VSI Bandwidth :   100000

Last Command: cnfrsrc 4.1 256 26000 1 e 512 7048 2 15 26000 100000

Next Command:
Note It is possible to have PVCs terminating on the Tag Switch Controller itself, as shown in Figure 9-4. This example reserves approximately 10 Mbps (26000 cells/sec) for PVCs and allows up to 256 PVCs on the switch port connected to the TSC.

Note The VSI max and min logical connections (LCNs) will determine the maximum number of tag virtual connections (TVCs) that can be supported on the interface. The number of TVCs required on the interface depends on the routing topology of the tag switch.

Note By default, the TSC will use either a starting VSI VPI of 1 or 2 for tag switching, whichever is available. If both are available, a starting VSI VPI of 1 is used. The VPI range should be 2-3 on a BPX VSI connected to a 7200 or 7500 AIP. If VPI 2 is not to be used, the tag switching VPI interface configuration command can be used on the TSC to override the defaults.

Note The VSI range for tag switching on the BPX switch is configured as a VSI partition, usually VSI partition number 1. VSI VPI 1 is reserved for autoroute, so the VSI partition for tag switching should start at VPI 2. Two VPIs are sufficient for the current release, although it may be advisable to reserve a larger range of VPIs for later expansion, for example, VPIs 2-15.
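As a sanity check on the bandwidth figures above, 26000 cells/sec works out to roughly 10 Mbps under this chapter's rule of thumb of about 400 bits per ATM cell (50 bytes including overhead). The following Python sketch shows the arithmetic; the function name is illustrative only and is not a BPX or IOS command:

```python
# Rule of thumb used in this chapter: one ATM cell/sec is roughly
# 400 bits/sec (50 bytes per cell including overhead, times 8 bits/byte).
CELL_BITS = 50 * 8  # 400 bits per cell

def cells_to_bps(cells_per_sec):
    """Approximate bits/sec for a given ATM cell rate."""
    return cells_per_sec * CELL_BITS

# The 26000 cells/sec reserved for PVCs is about 10.4 Mbps.
pvc_bps = cells_to_bps(26000)
```

This is only an approximation; exact throughput depends on the 53-byte cell format and the payload carried.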

Step 7   Ports 4.2 and 4.3 are connected to other tag switch router ports in this example and support TVCs across the network. Configure the VSI partitions for ports 4.2 and 4.3 by repeating the procedures in the previous step, but entering 4.2 and 4.3, where applicable.

Maximum VSI LCNs (logical connection numbers) determine the number of connections that can be made to each port. For a description of how the LCNs may be assigned to a port, refer to Configuring VSI LCNS.

If the interfaces require other than a max PVC bandwidth of 10 Mbps or require other than a PVC LCN configuration of 256, adjust the configuration accordingly.

Step 8   For this release, Class of Service buffer 10 is used for tag switching connections. Check the queue buffer 10 configurations for port 4.1 as follows:

  dspqbin 4.1 10

The qbin configuration should be as shown in the following example:


Note  VC connections are grouped into large buffers called qbins. (Per-VC queues can also be specified on a connection-by-connection basis.) In this release, all VSI connections use qbin 10 on each interface.

Sample Display:

n4             TN       SuperUser       BPX 15    9.1    Apr. 4 1998 16:41 PST

Qbin Database 4.1 on BXM qbin 10
Qbin State:               Enabled
Minimum Bandwidth:        0
Qbin Discard threshold:   65536
Low CLP/EPD threshold:    95%
High CLP/EPD threshold:   100%
EFCI threshold:           40%

This Command: cnfqbin 4.1 10

'E' to Enable, 'D' to Disable [E]:

Next Command:

If the qbin is not configured as shown in the example, configure the queues on the ports using the cnfqbin command:

  cnfqbin 4.1 10
  enable/disable: e

For all other parameters, accept the default values.

The previous parameters can also be set for qbin 10 as follows:

  cnfqbin 4.1 10 e 0 65536 95 100 40

Sample Display:

n4             TN       SuperUser       BPX 15    9.1    Apr. 4 1998 16:41 PST

Qbin Database 4.1 on BXM qbin 10
Qbin State:               Enabled
Minimum Bandwidth:        0
Qbin Discard threshold:   65536
Low CLP/EPD threshold:    95%
High CLP/EPD threshold:   100%
EFCI threshold:           40%

Last Command: cnfqbin 4.1 10 e 0 65536 95 100 40

Next Command:

Step 9   Configure the Qbin 10 for ports 4.2 and 4.3 by performing the procedures in the previous step, but entering port 4.2 and 4.3 where applicable.

Step 10   Add a VSI controller to port 4.1, controlling partition 1:

  addshelf 4.1 vsi 1 1

Note  The second "1" in the addshelf command is a controller ID. Controller IDs must be in the range 1-32 and must be set identically on the TSC and in the addshelf command. A controller ID of 1 is the default used by the TSC.

Sample Display:

n4             TN       SuperUser       BPX 15    9.1    Apr. 4 1998 16:42 PST

BPX Interface Shelf Information

Trunk    Name    Type     Alarm
3.1      j6c     AXIS     MIN
5.3      j5c     IPX/AF   MIN
4.1      VSI     VSI      OK

Last Command: addshelf 4.1 vsi 1 1

Next Command:

Checking and Troubleshooting

Use the following procedure as a quick checkout of the tag switching configuration and operation with respect to the BPX switch.


Step 1   After a short wait, check whether the controller sees the interfaces correctly. On the TSC, enter the following command:

   tsc# show controllers VSI descriptor

An example of the output follows:


Note Check the TSC on-line documentation for the most current information.
Phys desc: 4.1
Log intf:  0x00040100 (0.4.1.0)
Interface: slave control port
IF status: n/a                  IFC state: ACTIVE
Min VPI:   0                    Maximum cell rate: 10000
Max VPI:   10                   Available channels: 999
Min VCI:   0                    Available cell rate (forward): 100000
Max VCI:   65535                Available cell rate (backward): 100000

Phys desc: 4.2
Log intf:  0x00040200 (0.4.2.0)
Interface: ExtTagATM2
IF status: up                   IFC state: ACTIVE
Min VPI:   0                    Maximum cell rate: 10000
Max VPI:   10                   Available channels: 999
Min VCI:   0                    Available cell rate (forward): 100000
Max VCI:   65535                Available cell rate (backward): 100000

Phys desc: 4.3
Log intf:  0x00040300 (0.4.3.0)
Interface: ExtTagATM3
IF status: up                   IFC state: ACTIVE
Min VPI:   0                    Maximum cell rate: 10000
Max VPI:   10                   Available channels: 999
Min VCI:   0                    Available cell rate (forward): 100000
Max VCI:   65535                Available cell rate (backward): 100000


Step 2   If no interfaces are present, first check that card 4 is up by entering the following command on the BPX switch:

   dspcds

If the card is not up, enter:

   resetcd 4 h

If necessary, remove and reinsert the card to force it to reset.


Note This example assumes that the controller is connected to card 4 on the switch. Substitute a different card number, as applicable.

Step 3   Check the trunk status with the following command:

   dsptrks

The dsptrks screen should show 4.1, 4.2, and 4.3, with the "Other End" of 4.1 reading "VSI (VSI)". A typical dsptrks screen example follows:

Sample Display

n4             TN       SuperUser       BPX 15    9.1    Apr. 4 1998 16:45 PST

TRK    Type   Current Line Alarm Status       Other End
2.1    OC3    Clear - OK                      j4a/2.1
3.1    E3     Clear - OK                      j6c(AXIS)
5.1    E3     Clear - OK                      j6a/5.2
5.2    E3     Clear - OK                      j3b/3
5.3    E3     Clear - OK                      j5c(IPX/AF)
6.1    T3     Clear - OK                      j4a/4.1
6.2    T3     Clear - OK                      j3b/4
4.1    OC3    Clear - OK                      VSI(VSI)
4.2    OC3    Clear - OK                      VSI(VSI)
4.3    OC3    Clear - OK                      VSI(VSI)

Last Command: dsptrks

Next Command:

Step 4   Enter the dspnode command.

   dspnode

The resulting screens should show trunk 4.1 as type VSI. A typical dspnode screen follows:


n4             TN       SuperUser       BPX 15    9.1    Apr. 4 1998 16:46 PST

BPX Interface Shelf Information

Trunk    Name    Type     Alarm
3.1      j6c     AXIS     MIN
5.3      j5c     IPX/AF   MIN
4.1      VSI     VSI      OK
4.2      VSI     VSI      OK
4.3      VSI     VSI      OK

Last Command: dspnode

Next Command:

Step 5   Enter the dsprsrc command as follows:

   dsprsrc 4.1 1

The resulting screen should show the settings shown in the following example:

Sample Display:

n4             TN       SuperUser       BPX 15    9.1    Apr. 4 1998 16:47 PST

Port/Trunk : 4.1
Maximum PVC LCNS:      256        Maximum PVC Bandwidth: 26000
Min Lcn(1) : 0                    Min Lcn(2) : 0
Partition 1
Partition State :         Enabled
Minimum VSI LCNS:         512
Maximum VSI LCNS:         7048
Start VSI VPI:            2
End VSI VPI :             15
Minimum VSI Bandwidth :   26000
Maximum VSI Bandwidth :   100000

Last Command: dsprsrc 4.1 1

Next Command:

Step 6   Enter the dspqbin command as follows:

   dspqbin 4.1 10

The resulting screen should show the settings shown in the following example:

Sample Display:

n4             TN       SuperUser       BPX 15    9.1    Apr. 4 1998 16:48 PST

Qbin Database 4.1 on BXM qbin 10
Qbin State:               Enabled
Minimum Bandwidth:        0
Qbin Discard threshold:   65536
Low CLP threshold:        95%
High CLP threshold:       100%
EFCI threshold:           40%

Last Command: dspqbin 4.1 10

Next Command:

Step 7   If interfaces 4.2 and 4.3 are present, but not enabled, perform the previous debugging steps for interfaces 4.2 and 4.3 instead of 4.1, except for the dspnode command, which does not show anything useful pertaining to ports 4.2 and 4.3.

Step 8   Try a ping on the tag switching connections. If the ping doesn't work, but all the tag switching and routing configuration looks correct, check that the TSC has found the VSI interfaces correctly by entering the following command at the TSC:

   tsc# show tag int

Step 9   If the interfaces are not shown, re-check the configuration of port 4.1 on the BPX switch as described in the previous steps.

Step 10   If the VSI interfaces are shown, but are down, check whether the TSRs connected to the BPX switch show that the lines are up. If not, check such items as cabling and connections.

Step 11   If the TSRs and the BPX switch show that the interfaces are up, but the TSC doesn't, enter the following command on the TSC:

   tsc# reload

Step 12   If the "show tag int" shows that the interfaces are up, but the ping doesn't work, enter the following command at the TSC:

   tsc# sho tag tdp disc

The resulting display should show something similar to the following:

Local TDP Identifier: 30.30.30.30:0
TDP Discovery Sources:
    Interfaces:
        ExtTagATM2.1: xmit/recv
        ExtTagATM3.1: xmit/recv

Step 13   If the interfaces on the display show "xmit" and not "xmit/recv", the TSC is sending TDP messages but not getting responses. Enter the following command on the neighboring TSRs:

  tsc# sho tag tdp disc

If the resulting displays also show "xmit" and not "xmit/recv", then one of two things is likely:

Step 14   Check the VSI configuration on the switch again, for interfaces 4.1, 4.2, and 4.3, paying particular attention to:


Note VSI partitioning and resources must be set up correctly on the interface connected to the TSC, interface 4.1 in this example, as well as interfaces connected to other tag switching devices.

Provisioning and Managing Connections

Instructions for configuration of the BPX switch including the setting of VSI partitions for tag switching are provided in this document. Adding (provisioning) and administering connections is performed from the Tag Switch Controller. For further information on the Tag Switch Controller, refer to:

  Tag Switching for the Cisco 7500/7200 Series Routers

Statistics

Statistics are monitored via the Tag Switch Controller. Refer to the Cisco StrataView Plus Operations Guide for information on monitoring statistics.

Command Reference

This section provides a description of the BPX switch and TSC commands referenced in this chapter on tag switching.

BPX Switch Commands

A summary of the following commands is provided in this section. For complete descriptions of user and superuser commands, refer to the Cisco WAN Switching Command Reference and the Cisco WAN Switching SuperUser Command Reference documents.

TSC Commands

tsc# show controller vsi descriptor

tsc# show tag int

tsc# reload

tsc# sho tag tdp disc

For the TSC command reference information, refer to the appropriate router 7200 or 7500 source documentation.

addshelf

Adds an ATM link between a hub node and an interface shelf such as an MGX 8220, IPX shelf, or IGX shelf in a tiered network, or an ATM link between a BXM card on a BPX node and a tag switch controller such as a series 7200 or 7500 router.

Syntax

Tag switch controller:

addshelf <slot.port> <device-type> <control partition> <control ID>

Interface shelf:

addshelf <slot.port> <shelf-type> <vpi> <vci>

Examples

Tag switch controller: addshelf 4.1 vsi 1 1

Interface shelf: addshelf 12.1 A 21 200

Attributes

Privilege   Jobs   Log   Node                                          Lock
1-4         Yes    Yes   BPX switch for tag switch controller;         Yes
                         BPX switch and IGX switch for IPX and
                         IGX shelves; BPX switch for the MGX 8220

Related Commands

delshelf, dspnode, dsptrk, dspport

Description for Tag Switching

For tag switching, before it can carry traffic, the link to a tag switch controller must be "upped" (using either uptrk or upport) at the BPX node. The link can then be "added" to the network (using addshelf). Also, the link must be free of major alarms before you can add it with the addshelf command.


Note Once a port on the BXM is upped in either trunk or port mode by either the uptrk or upport commands, respectively, all other ports can only be "upped" in the same mode.

Tag Switching Parameters-addshelf

Parameter Description

slot.port

Specifies the BXM slot and port number. (The port may be configured for either trunk (network) or port (service) mode.)

device-type

vsi, which stands for "virtual switch interface," specifies a virtual interface to a tag switch controller such as a 7200 or 7500 series router.

control partition

The VSI partition on the interface that the controller manages (partition 1 in this example).

control ID

Control IDs must be in the range 1-32, and must be set identically on the TSC and in the addshelf command. A control ID of "1" is the default used by the tag switch controller (TSC).

Example for Tag Switching

Add a tag switch controller link to a BPX node, by entering the addshelf command at the desired BXM port as follows:

addshelf 4.1 vsi 1 1

Sample Display:

n4             TN       SuperUser       BPX 15    9.1    Apr. 4 1998 16:40 PST

BPX Interface Shelf Information

Trunk    Name    Type     Alarm
5.1      j6c     AXIS     MIN
5.3      j5c     IPX/AF   MIN
4.1      VSI     VSI      OK

Last Command: addshelf 4.1 vsi 1 1

Next Command:
Description for Interface Shelves

An interface shelf can be one of the following: an IPX/AF, an IGX/AF, or an MGX 8220.

The signaling protocol that applies to the trunk on an interface shelf is Annex G.

Each IPX/AF, IGX/AF, or MGX 8220 has one trunk that connects to the BPX or IGX node serving as an access hub. A BPX hub can support up to 16 T3 trunks to the interface shelves. An IGX hub can support up to 4 trunks to the interface shelves.

Before it can carry traffic, the trunk on an interface shelf must be "upped" (using uptrk) on both the interface shelf and the hub node and "added" to the network (using addshelf). Also, a trunk must be free of major alarms before you can add it with the addshelf command.

Interface Shelf Parameters-addshelf

Parameter Description

slot.port

slot.port,

Specifies the slot and port number of the trunk

shelf type

I or A,

On a BPX node, shelf type specifies the type of interface shelf when you execute addshelf. The choices are I for IPX/AF or IGX/AF or A for the MGX 8220. On an IGX hub, only the IGX/AF is possible, so shelf type does not appear.

vpi vci

Specifies the vpi and vci (Annex G vpi and vci are used). For the MGX 8220 only, the valid range for vpi is 5-14 and for vci is 16-271. For an IPX/AF interface shelf, the valid range for both vpi and vci is 1-255.

Example for Interface Shelves

Add an MGX 8220 at trunk 11.1. After you add the shelf, the screen displays a confirmation message and the name of the shelf. Add the MGX 8220 (may be referred to on screen as AXIS) as follows:

addshelf 11.1 a

The sample display shows the partial execution of the command with the prompt requesting that the I/F type be entered:

Sample Display:

n4             TN       SuperUser       BPX 15    9.1    Apr. 4 1998 16:40 PST

BPX Interface Shelf Information

Trunk    Name       Type    Alarm
1.3      AXIS240    AXIS    OK
11.2     A242       AXIS    OK

This Command: addshelf 11.1

Enter Interface Shelf Type: I (IPX/AF), A (AXIS)

cnfqbin

Tag switched VC connections are grouped into large buffers called Qbins. This command configures the Qbins. For the EFT release of tag switching, Qbin 10 is used for tag switching connections.

Syntax

cnfqbin <slot.port> <Qbin_#> <e/d> <BWmin> <discard_thr> <CLPlo> <CLPhi> <EFCI_thr>

Example

cnfqbin 13.4 10 E 0 65536 95 100 40

Attributes

Privilege Jobs Log Node Lock

BPX switch

Related Commands

dspqbin

Parameters-cnfqbin
Parameter Description

slot.port

slot.port

Specifies the slot and port number for the BXM

Qbin number

Specifies the number of the Qbin to be configured

e/d

Enables or disables the Qbin.

Minimum bandwidth

Bandwidth allocated to the VCs

Qbin discard threshold

Low CLP threshold

Specifies a percentage of the Qbin depth such that, when the Qbin level falls below this level, the node stops discarding CLP=1 cells.

High CLP threshold

Specifies a percentage of the Qbin depth. When the threshold is exceeded, the node discards cells with CLP=1 in the connection until the Qbin level falls below the depth specified by CLP Lo.

EFCI threshold

Explicit Forward Congestion Indication. The percentage of Qbin depth that causes EFCI to be set.

Description

The following example shows the configuration of a BXM Qbin on port 4.1 for tag switching.

Example

Configure a Qbin by enabling it and accepting the defaults for the other parameters:

cnfqbin 4.1 10 e 0 65536 95 100 40

Sample Display:

n4             TN       SuperUser       BPX 15    9.1    Apr. 4 1998 16:40 PST

Qbin Database 4.1 on BXM qbin 10
Qbin State:               Enabled
Minimum Bandwidth:        0
Qbin Discard threshold:   65536
Low CLP/EPD threshold:    95%
High CLP/EPD threshold:   100%
EFCI threshold:           40%

Last Command: cnfqbin 4.1 10 e 0 65536 95 100 40

Next Command:
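The low and high CLP thresholds described in the parameter table form a simple discard hysteresis: the node starts discarding CLP=1 cells when the Qbin fill level crosses the high threshold and stops once the level drains below the low threshold. A minimal Python model of that behavior, assuming the example values above (all names are illustrative, not part of any BPX interface):

```python
def make_clp_gate(depth, lo_pct, hi_pct):
    """Model the Qbin CLP discard hysteresis: start discarding CLP=1
    cells when the fill level reaches hi, stop once it falls below lo."""
    lo = depth * lo_pct / 100
    hi = depth * hi_pct / 100
    discarding = False

    def should_discard(level):
        nonlocal discarding
        if level >= hi:           # queue full past the high threshold
            discarding = True
        elif level < lo:          # drained below the low threshold
            discarding = False
        return discarding

    return should_discard

# Qbin 10 in this chapter: depth 65536, low 95%, high 100%
gate = make_clp_gate(depth=65536, lo_pct=95, hi_pct=100)
```

Between the two thresholds, the gate keeps its previous decision, which is what prevents rapid discard/no-discard oscillation around a single threshold.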

cnfrsrc

This command configures resources among AutoRoute PVCs and VSI partitions.
Syntax

cnfrsrc <slot.port> <maxpvclcns> <maxpvcbw> <partition> <e/d> <minvsilcns> <maxvsilcns> <vsistartvpi> <vsiendvpi> <vsiminbw> <vsimaxbw>

Example

cnfrsrc 4.1 256 26000 1 e 512 7048 2 15 26000 100000

Attributes

Privilege Jobs Log Node Lock

BPX switch

Related Commands

Parameters-cnfrsrc
Parameter (cnfrsrc) Description

slot.port

Specifies the slot and port number for the BXM

maxpvclcns

The maximum number of LCNs allocated for AutoRoute PVCs for this port. For trunks there are additional LCNs allocated for AutoRoute that are not configurable.

The dspcd <slot> command displays the maximum number of LCNs configurable via the cnfrsrc command for the given port. For trunks, "configurable LCNs" represent the LCNs remaining after the BCC has subtracted the "additional LCNs" needed.

For a port card, a larger number is shown, as compared with a trunk card.

Setting this field to zero makes all of the configurable LCNs available for use by the VSI.

maxpvcbw

The maximum bandwidth of the port allocated for AutoRoute use.

partition

Partition number

e/d

Enables or disables the VSI partition.

minvsilcns

The minimum number of LCNs guaranteed for this partition. The VSI controller guarantees at least this many connection endpoints in the partition, provided that there are sufficient free LCNs in the common pool to satisfy the request at the time the partition is added. When a new partition is added or the value is increased, it may be that existing connections have depleted the common pool so that there are not enough free LCNs to satisfy the request. The BXM gives priority to the request when LCNs are freed. The net effect is that the partition may not receive all the guaranteed LCNs (min LCNs) until other LCNs are returned to the common pool.

This value may not be decreased dynamically. All partitions in the same port group must be deleted first and reconfigured in order to reduce this value.

The value may be increased dynamically. However, this may cause the "deficit" condition described above.

The command line interface warns the user when the action is invalid, except for the "deficit" condition.

To avoid this deficit condition which could occur with maximum LCN usage by a partition or partitions, it is recommended that all partitions be configured ahead of time before adding connections. Also, it is recommended that all partitions be configured before adding a VSI controller via the addshelf command.

maxvsilcns

The total number of LCNs the partition is allowed for setting up connections. The min LCNs is included in this calculation. If max LCNs equals min LCNs, then the max LCNs are guaranteed for the partition.

Otherwise, (max - min) LCNs are allocated from the common pool on a FIFO basis.

If the common pool is exhausted, new connection setup requests will be rejected for the partition, even though the max LCNs has not been reached.

This value may be increased dynamically when there are enough unallocated LCNs in the port group to satisfy the increase.

The value may not be decreased dynamically. All partitions in the same port group must be deleted first and reconfigured in order to reduce this value.

Different types of BXM cards support different maximums. If you enter a value greater than the allowed maximum, a message is displayed with the allowable maximum.

vsistartvpi

By default, the TSC (e.g., a 7200 or 7500 series router) will use either a starting VSI VPI of 1 or 2 for tag switching, whichever is available. If both are available, a starting VSI VPI of 1 is used. The VPI range should be 2-15 on a BPX 8620 VSI. The VSI range for tag switching on the BPX 8620 is configured as a VSI partition, usually VSI partition number 1. VSI VPI 1 is reserved for AutoRoute PVCs, so the VSI partition for tag switching should start at VPI 2. If VPI 2 is not to be used, the tag switching VPI interface configuration command can be used on the TSC to override the defaults.

vsiendvpi

Two VPIs are sufficient for the current release, although it may be advisable to reserve a larger range of VPIs for later expansion, for example, VPIs 2-15.

vsiminbw

The minimum port bandwidth allocated to this partition in cells/sec. (Multiply by 400, based on 50 bytes per ATM cell, to get approximate bits/sec.)

vsimaxbw

The maximum port bandwidth allocated to this partition. This value is used for VSI QBIN bandwidth scaling.
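The minvsilcns behavior described above, where a partition's guarantee can go into "deficit" if the common pool is depleted and is repaid with priority as LCNs are freed, can be sketched in a few lines of Python. This is an illustrative model only; the class and method names are invented and are not part of any BPX interface:

```python
class LcnPool:
    """Sketch of how guaranteed (min) LCNs might be granted from the
    common pool. A new partition gets its guarantee only if free LCNs
    remain; any shortfall is a 'deficit' repaid as LCNs are returned."""

    def __init__(self, free):
        self.free = free
        self.deficits = {}            # partition name -> LCNs still owed

    def add_partition(self, name, min_lcns):
        """Grant as much of the guarantee as the pool allows."""
        granted = min(min_lcns, self.free)
        self.free -= granted
        if granted < min_lcns:
            self.deficits[name] = min_lcns - granted
        return granted

    def release(self, count):
        """Returned LCNs repay outstanding guarantees first."""
        self.free += count
        for name in list(self.deficits):
            repay = min(self.deficits[name], self.free)
            self.free -= repay
            self.deficits[name] -= repay
            if self.deficits[name] == 0:
                del self.deficits[name]

# Two partitions each asking for the 512-LCN default guarantee,
# but only 600 LCNs free: the second partition runs a deficit.
pool = LcnPool(free=600)
```

This mirrors the text's recommendation to configure all partitions before adding connections, so that no partition starts out in deficit.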

Description

The following paragraphs describe various configurations of BXM port resources for tag switching. The first allocation example uses default allocations. The second describes more rigorous allocations for cases where the default allocations are not applicable.

Useful Default Allocations

Reasonable default values for all ports on all cards are listed in Table 9-5. If these values are not applicable, then other values may be configured using the cnfrsrc command.


Table 9-5:
Port Connection Allocations, Useful Default Values

Connection Type                      Variable   Useful Default Value   cnfrsrc cmd parameter
AutoRoute LCNs                       a(x)       256                    maxpvclcns
Minimum VSI LCNs for partition 1     n1(x)      512                    minvsilcns
Maximum VSI LCNs for partition 1     m1(x)      7048                   maxvsilcns

Different types of BXM cards support different maximums. If you enter a value greater than the allowed maximum, a message is displayed with the allowable maximum

Here, a(x) = 256, n1(x) = 512, and m1(x) = 16384.

Example:

Configure the VSI partition for port 4.1 by entering the following command:

cnfrsrc 4.1 256 26000 1 e 512 16384 2 15 26000 100000

Sample Display:

n4             TN       SuperUser       BPX 15    9.1    Apr. 4 1998 16:40 PST

Port/Trunk : 4.1
Maximum PVC LCNS:      256        Maximum PVC Bandwidth: 26000
Min Lcn(1) : 0                    Min Lcn(2) : 0
Partition 1
Partition State :         Enabled
Minimum VSI LCNS:         512
Maximum VSI LCNS:         7048
Start VSI VPI:            2
End VSI VPI :             15
Minimum VSI Bandwidth :   26000
Maximum VSI Bandwidth :   100000

Last Command: cnfrsrc 4.1 256 26000 1 e 512 7048 2 15 26000 100000

Next Command:

Details of More Rigorous Allocations

More rigorous allocations are possible when default values are not applicable. For example, the LCN allocations for a port group must satisfy the following limit:

sum( a(x) ) + sum( n1(x) ) + t*270 <= g

In this expression, "a (x)" represents AutoRoute LCNs, "n1 (x)" represents the guaranteed minimum number of VSI LCNs, "t" is the number of ports in the port group that are configured as AutoRoute trunks, and "g" is the total number of LCNs available to the port group. Figure 9-10 shows the relationship of these elements.

The "270" value reflects the number of LCNs which are reserved on each AutoRoute trunk for internal purposes. If the port is configured in port rather than trunk mode, "t" = 0, and t*270 drops out of the expression.


Figure 9-10: Port VSI Partition LCN Allocation Elements



Note  Tag switching can operate on a BXM card configured for either trunk (network) or port (service) mode. If a BXM card is configured for port (service) mode, all ports on the card are configured in port (service) mode. If a BXM card is configured for trunk (network) mode, all ports on the card are configured for trunk (network) mode. When the card is configured for trunk mode, the trunks reserve some connection bandwidth.

In the following expression, "z1" equals the number of unallocated LCNs in the common pool of LCNs available for use by the port VSI partitions. The value of "z1" is the number of LCNs available after subtracting the AutoRoute LCNs [sum ( a (x) ], VSI LCNs [sum (n1 (x) )], and LCNs for trunk use [t*270] from the total number of LCNs "g" available at the port. For a BXM card with ports configured in "port" mode, "t" = 0.

z1 = (g - sum( a(x) ) - sum( n1(x) ) - t*270)

When a port partition has exhausted its configured guaranteed LCNs (min LCNs), it may draw LCNs for new connections on a FIFO basis from the unallocated LCNs, "z1", until its maximum number of LCNs, "m1(x)", is reached or the pool, "z1", is exhausted.

No limit is actually placed on what may be configured for "m1(x)", although "m1(x)" is effectively ignored if it is larger than "z1 + n1(x)". The value "m1(x)" is a non-guaranteed maximum number of connection spaces that may be used for a new connection, or shared by a number of connections at a given time, if there is a sufficient number of unallocated LCNs available in "z1". The value "m1(x)" typically is not used in Release 9.1, but in future releases it allows more control over how the LCNs are shared among multiple VSI partitions.
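The relationships above can be restated in a few lines of Python. The 270-LCN trunk overhead and the min(z1 + n1(x), m1(x)) ceiling come directly from the text; the function names are illustrative only:

```python
def unallocated_lcns(g, a, n1, t):
    """z1: LCNs left in the common pool after AutoRoute LCNs a(x),
    guaranteed VSI minimums n1(x), and 270 LCNs per AutoRoute trunk
    (t = 0 when the card's ports are configured in port mode)."""
    return g - sum(a) - sum(n1) - t * 270

def port_vsi_limit(z1, n1_x, m1_x):
    """Total LCNs a port's VSI partition can reach: the smaller of
    its guarantee plus the shared pool, and its configured maximum."""
    return min(z1 + n1_x, m1_x)
```

For instance, with g = 8192, the a(x) and n1(x) values of the trunk-mode example below, and t = 4 trunks, unallocated_lcns returns 3827 LCNs of shared headroom.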

The following two examples, one for a BXM in port mode and the other for a BXM in trunk mode, provide further detail on the allocation of connections.

Example 1, 8-Port OC3 BXM Configured in Trunk Mode

This example is for an 8-port OC3 BXM configured for trunk mode and therefore, in Release 9.1, with all ports configured as trunks. Table 9-6 lists the configured connection space (LCN) allocations for each port of "a (x)", "n1 (x)", and "m1 (x)". It also shows the unallocated LCN pool, "z1" for each port group and the total common pool access, "g".


Note LCN is the variable affected when configuring connection space allocations using the cnfrsrc command.

The port groups in the example are ports 1-4 and 5-8, and the maximum number of connection spaces (LCNs) per port group is 8192 for this 8-port-OC3 BXM card. The allocations for ports 1-4 are shown in Figure 9-11. The allocations for ports 5-8 are similar to that shown in Figure 9-11, but with correspondingly different values.

As shown in Figure 9-11, "g" is the total number of connection spaces (LCNs) available to port group 1-4 and is equal to 8192 LCNs in this example. To find the number of unallocated LCNs available for use by port partitions that exhaust their assigned number of LCNs, proceed as follows:

From "g", subtract the sum of the AutoRoute connections, "a (x)", and the sum of minimum guaranteed LCNs, "n1 (x)". Also, since the ports in this example are configured in trunk mode, 270 LCNs per port are subtracted from "g". Since there are four ports, "t" equals "4" in the expression "t*270". The resulting expression is as follows:

z1 = (g - sum ( a (x) ) - sum ( n1 (x) ) - t*270)

The remaining pool of unallocated LCNs is "z1" as shown. This pool is available for use by ports 1-4 that exceed their minimum VSI LCN allocations "n1 (x)" for partition 1.

The maximum number of LCNs that a port partition can access on a FIFO basis from the unallocated pool "z1" for new connections can only bring its total allocation up to either (z1 + n1(x)) or m1(x), whichever value is smaller. Also, since "z1" is a shared pool, the value of "z1" varies as the common pool is accessed by other port partitions in the group.

The values shown in Table 9-6 are obtained as follows:

  The values shown in Table 9-6 for the port group containing ports 1-4 may be summarized as follows: port 3 can use up to 3827 TVCs, subject to availability of unallocated LCNs "z1" on a FIFO basis. The configured maximum limit "m1(3)" of 7048 LCNs is ignored, as it is greater than the unallocated LCNs, "z1", of 3827.

  The values shown in Table 9-6 for the port group containing ports 5-8 may be summarized as follows:

Table 9-6:
LCN Allocations for 8-Port OC3 BXM, Ports Configured in Trunk Mode

Port (x)   a(x)    n1(x)   m1(x)   z1 = unallocated   Total LCNs available to Port VSI
                                   LCNs               Partition = min( z1 + n1(x), m1(x) )

Port Group 1
1          120     3000    3500    3827               3500
2          50      0       0       3827               0
3          15      0       7048    3827               3827
4          0       100     100     3827               100
Sum        185     3100    N/A                        N/A
(x = 1 through 4)

Port Group 2
5          6000    10      7048    702                712
6          0       0       100     702                100
7          100     200     200     702                200
8          0       100     2100    702                802
Sum        6100    310     N/A                        N/A
(x = 5 through 8)
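The z1 and "Total LCNs available" columns of Table 9-6 can be reproduced with a short Python calculation, using g = 8192 and t = 4 for this trunk-mode port group:

```python
g, t = 8192, 4                     # trunk mode: all four ports are trunks
a  = [120, 50, 15, 0]              # a(x), AutoRoute LCNs, ports 1-4
n1 = [3000, 0, 0, 100]             # n1(x), guaranteed VSI minimums
m1 = [3500, 0, 7048, 100]          # m1(x), configured VSI maximums

# Shared pool left after AutoRoute, VSI guarantees, and trunk overhead
z1 = g - sum(a) - sum(n1) - t * 270

# Per-port ceiling: min( z1 + n1(x), m1(x) )
totals = [min(z1 + n, m) for n, m in zip(n1, m1)]
```

Running this gives z1 = 3827 and totals of 3500, 0, 3827, and 100 for ports 1 through 4, matching the table.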


Figure 9-11:
LCN Allocations for Ports 1-4, Ports Configured in Trunk Mode Example



Example 2, 8-Port OC3 BXM Configured in Port Mode

BXM ports configured for port mode rather than trunk mode have more connection spaces available for use by the TVC connections as it is not necessary to provide connection spaces for use by the AutoRoute trunks. This example is for an 8-port OC3 BXM configured for port mode and therefore, in Release 9.1, with all ports configured as ports. Table 9-7 lists the configured connection space (LCN) allocations for each port of "a (x)", "n1 (x)", and " m1 (x)". It also shows the unallocated LCN pool, "z1" for each port group and the total common pool access, "g".


Note LCN is the variable affected when configuring connection space allocations using the cnfrsrc command.

The port groups in the example are ports 1-4 and 5-8, and the maximum number of connection spaces (LCNs) per port group is 8192 for this 8-port-OC3 BXM card. The allocations for ports 1-4 are shown in Figure 9-12. The allocations for ports 5-8 are similar to that shown in Figure 9-12, but with correspondingly different values.

As shown in Figure 9-12, "g" is the total number of connection spaces (LCNs) available to port group 1-4 and is equal to 8192 LCNs in this example. To find the number of unallocated LCNs available for use by port partitions that exhaust their assigned number of LCNs, proceed as follows:

From "g", subtract the sum of the AutoRoute connections, "a (x)", and the sum of minimum guaranteed LCNs, "n1 (x)". Also, since the ports in this example are configured in port mode, "t" equals zero in the expression "t*270". This is indicated as follows:

z1 = (g - sum ( a (x) ) - sum ( n1 (x) ) - t*270 )

The remaining pool of unallocated LCNs is "z1" as shown. This pool is available for use by ports 1-4 that exceed their minimum VSI LCN allocations "n1 (x)" for partition 1.

For new connections, a port partition can draw LCNs from the unallocated pool "z1" on a FIFO basis, but only up to a total allocation of min( z1 + n1(x), m1(x) ), whichever value is smaller. Also, because "z1" is a shared pool, the value of "z1" varies as the common pool is accessed by other port partitions in the group.
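As a check on the arithmetic above, the following short script (an illustrative sketch only, not part of the switch software) reproduces the "z1" pool and the per-port totals for port group 1 using the values from Table 9-7:

```python
# Illustrative sketch of the LCN pool arithmetic described above;
# not switch software. Per-port values (a(x), n1(x), m1(x)) are
# taken from Table 9-7, port group 1 (ports 1-4).
g = 8192  # total LCNs available to the port group
t = 0     # ports are in port mode, so the trunk term t*270 is zero

ports = {
    1: (120, 3000, 3500),
    2: (50, 0, 0),
    3: (15, 0, 7588),
    4: (0, 100, 100),
}

# Unallocated pool: z1 = g - sum(a(x)) - sum(n1(x)) - t*270
z1 = (g
      - sum(a for a, _, _ in ports.values())
      - sum(n1 for _, n1, _ in ports.values())
      - t * 270)
print("z1 =", z1)  # 8192 - 185 - 3100 - 0 = 4907

# Total LCNs each port's VSI partition can reach: min(z1 + n1(x), m1(x))
for x, (a, n1, m1) in ports.items():
    print("port", x, "->", min(z1 + n1, m1))
# Matches the last column of Table 9-7: 3500, 0, 4907, 100
```

The same computation for port group 2 (sum of a(x) = 6100, sum of n1(x) = 310) yields z1 = 1782, matching the second half of Table 9-7.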

The values shown in Table 9-7 are obtained as follows:

  For the port group containing ports 1-4, z1 = 8192 - 185 - 3100 = 4907. Port 3, for example, can use up to 4907 TVCs, subject to availability of unallocated LCNs "z1" on a FIFO basis. Its configured maximum limit "m1(3)" of 7588 LCNs is ignored, as it is greater than the unallocated LCNs, "z1", of 4907.
  For the port group containing ports 5-8, the values are derived in the same manner, with z1 = 8192 - 6100 - 310 = 1782.


Table 9-7: LCN Allocations for 8-Port OC3 BXM, Ports Configured in Port Mode

Port (x)               a(x)    n1(x)   m1(x)   z1 = unallocated   Total LCNs available to port VSI
                                               LCNs               partition = min( z1 + n1(x), m1(x) )

Port Group 1
1                      120     3000    3500    4907               3500
2                      50      0       0       4907               0
3                      15      0       7588    4907               4907
4                      0       100     100     4907               100
Sum, x = 1 through 4   185     3100    N/A     N/A

Port Group 2
5                      6000    10      7588    1782               1792
6                      0       0       100     1782               100
7                      100     200     200     1782               200
8                      0       100     2100    1782               1882
Sum, x = 5 through 8   6100    310     N/A     N/A


Figure 9-12:
LCN Allocations for Ports 1-4, Ports Configured in Port Mode Example


dspcd

Displays the status, revision, and serial number of a card. If a back card is present, its type, revision, and serial number appear. The displayed information can vary with different card types.

Syntax

dspcd <slot>

Example

dspcd 5

Attributes

Privilege   Jobs   Log   Node                                  Lock
1-6         No     No    IPX switch, IGX switch, BPX switch    No

Related Commands

dncd, dspcds, resetcd, upcd

Parameters-dspcd

Parameter   Description
slot        Specifies the slot number of the card.

Description

The following shows an example of the dspcd command for a BXM card.

Sample Display:

n4             TN    SuperUser      BPX 15    9.1    Apr. 4 1998 16:40 PST

Detailed Card Display for BXM-155 in slot 4
Status:          Active
Revision:        CD18
Serial Number:   693313
Fab Number:      28-2158-02
Queue Size:      228300
Support:         FST, 4 Pts, OC3, Vc Chnls:16320, PG[1]:7588, PG[2]:7588
                 PG[1]:1,2, PG[2]:3,4
Backcard Installed
  Type:          LM-BXM
  Revision:      BA
  Serial Number: 688284
  Supports:      8 Pts, OC3, MMF Md

Last Command: dspcd 4

Next Command:

dspcds

Displays the cards in a shelf, front and back, with their type, revision, and status.

Syntax

dspcds [l]

Example

dspcds

Attributes

Privilege   Jobs   Log   Node                                  Lock
1-6         No     No    IPX switch, IGX switch, BPX switch    No

Related Commands

dncd, dspcd, resetcd, upcd

Parameters-dspcds

Parameter   Description
l           Directs the system to display status of the cards on just the lower shelf of an IPX 32 or IGX 8430. If not entered, dspcds displays the top shelf by default.

Description

For front and back card sets, the status field applies to the cards as a set. A letter "T" opposite a card indicates that it is running self-test. A letter "F" opposite a card indicates that it has failed a test. If lines or connections have been configured for a slot, but no suitable card is present, the display will list the missing cards at the top of the screen. If a special backplane is installed or if a card was previously installed, empty slots are identified as "reserved".

For an IPX 32 or IGX 8430, the screen initially displays only the upper shelf with a "Continue?" prompt. Typing "y" at the prompt displays the cards in the lower shelf. The command dspcds followed by the letter "l" (for lower shelf) displays card status for just the lower shelf. For an IPX 8 or IGX 8410, the card information appears in only the left column. The status and update messages are as follows:

· Active

Card in use, no failures detected.

· Active—F

Card in use, failure(s) detected.

· Active—T

Card active, background test in progress.

· Active—F-T

Card active, minor failures detected, background test in progress.

· Standby

Card idle, no failures.

· Standby—F

Card idle, failure(s) detected.

· Standby—T

Card idle, background test in progress.

· Standby—F-T

Card idle, failure(s) detected, background test in progress.

· Failed

Card failed.

· Down

Card downed by user.

· Down—F

Card downed, failure(s) detected.

· Down—T

Card downed, background test in progress.

· Mismatch

Mismatch between front card and back card.

· Update *

Configuration RAM being updated from active control card.

· Locked*

Incompatible version of old software is being maintained in case it is needed.

· Dnlding*

Downloading new system software from the active BCC (BPX switch), or NPC (IPX switch or IGX switch), adjacent node, or from StrataView Plus.

· Dnldr*

Looking to adjacent nodes or StrataView Plus for software to load or for other software needs not specifically requested.

In the preceding messages, an asterisk (*) indicates an additional status designation for BCC, NPC, or NPM cards. An "F" flag in the card status indicates that a non-terminal failure was detected. Cards with an "F" status are activated only when necessary (for example, when no other card of that type is available). Cards with a "Failed" status are never activated.

Example

Sample Display:

n4             TN    SuperUser      BPX 15    9.1    Apr. 4 1998 16:40 PST

    FrontCard          BackCard                      FrontCard          BackCard
    Type     Rev       Type     Rev   Status         Type     Rev       Type     Rev   Status
 1  Empty                                         9  ASI-155  BE02      MMF-2    AB    Standby
 2  BXM-155  BB16      MM-8     BA    Active     10  BME-622  KDJ       MM-2     FH    Active
 3  Empty                                        11  BXM-E3   BB16      TE3-12   P04   Active
 4  BNI-E3   CE08      E3-3     JY    Active     12  BXM-155  BB16      MM-8     BA    Active
 5  BNI-E3   CE08      E3-3     EY    Active     13  BXM-155  AC30      SM-4     P05   Active
 6  BNI-T3   CF08      T3-3     FH    Active     14  Empty
 7  BCC-3    DJL       LM-2     AA    Active     15  ASM      ACB       LMASM    P01   Active
 8  BCC-3    DJL       LM-2     AA    Standby

Last Command: dspcds

Next Command:

dspnode

Displays a summary of the interface devices connected to a routing node or, when executed from an IPX or IGX interface shelf, displays the name of its hub node and the trunk number.

Syntax

dspnode

Related Commands

addshelf, delshelf, dsptrk

Attributes

Privilege   Jobs   Log   Node                      Lock
1-6         No     No    BPX switch, IGX switch    Yes

Description

The command displays tag switch controller devices connected to a BPX node and interface shelves connected to an IGX switch or BPX node. The command can be used to isolate the shelf or tag switch controller where an alarm has originated.

The routing nodes in a network do not indicate the interface shelf or tag switch controller where an alarm condition exists, so dspnode may be executed at a hub node to find out which interface device originated the alarm.

When executed on an IPX or IGX interface shelf, dspnode shows the name of the hub node and the trunk number. Note that to execute a command on an IPX or IGX interface shelf, you must either use a control terminal directly attached to the IPX or IGX switch or telnet to the IPX/AF, as the vt command is not applicable.

Example

Displays information about tag switch controllers and interface shelves (executed on the BPX hub node).

Sample Display:

n4             TN    SuperUser      BPX 15    9.1    Apr. 4 1998 16:40 PST

                       BPX Interface Shelf Information

   Trunk   Name   Type     Alarm
   3.1     j6c    AXIS     MIN
   5.3     j5c    IPX/AF   MIN
   4.1     VSI    VSI      OK
   4.2     VSI    VSI      OK
   4.3     VSI    VSI      OK

Last Command: dspnode

Next Command:

dspqbin

Displays the configuration of the specified Qbin on a BXM.

Syntax

dspqbin <slot.port> <qbin number>

Example

dspqbin 4.1 10

Attributes

Privilege   Jobs   Log   Node         Lock
                         BPX switch

Related Commands

cnfqbin

Parameters-dspqbin

Parameter     Description
slot.port     Specifies the slot and port number of interest.
qbin number   Specifies the Qbin number. For EFT tag switching, this is Qbin number 10.

Description

The following example shows configuration of Qbin 10 on port 4.1 of a BXM card.

Example

dspqbin 4.1 10

Sample Display:

n4             TN    SuperUser      BPX 15    9.1    Apr. 4 1998 16:40 PST

                   Qbin Database 4.1 on BXM qbin 10

    Qbin State:              Enabled
    Minimum Bandwidth:       0
    Qbin Discard threshold:  65536
    Low CLP/EPD threshold:   95%
    High CLP/EPD threshold:  100%
    EFCI threshold:          40%

This Command: dspqbin 4.1 10

Next Command:

dsprsrc

Displays the tag switching resource configuration of the specified partition on a BXM card.

Syntax

dsprsrc <slot.port> <partition>

Example

dsprsrc 4.1 1

Attributes

Privilege   Jobs   Log   Node         Lock
                         BPX switch

Related Commands

cnfrsrc

Parameters-dsprsrc

Parameter   Description
slot.port   Specifies the BXM slot and port.
partition   Specifies the VSI partition.

Description

The following example shows configuration of vsi resources for partition 1 at BXM port 4.1.

Example Display:

n4             TN    SuperUser      BPX 15    9.1    Apr. 4 1998 16:40 PST

Port/Trunk : 4.1

Maximum PVC LCNS:  256     Maximum PVC Bandwidth: 26000
Min Lcn(1) : 0             Min Lcn(2) : 0

Partition 1
Partition State :        Enabled
Minimum VSI LCNS:        512
Maximum VSI LCNS:        7048
Start VSI VPI:           2
End VSI VPI :            15
Minimum VSI Bandwidth :  26000
Maximum VSI Bandwidth :  100000

Last Command: dsprsrc 4.1 1

Next Command:

dsptrks

Displays information about the trunk configuration and alarm status for the trunks at a node. Trunk numbers with three parts (for example, 4.1.1) represent virtual trunks.

Syntax

dsptrks

Related Commands

addtrk, deltrk, dntrk, uptrk

Attributes

Privilege   Jobs   Log   Node                                  Lock
1-6         No     No    IPX switch, IGX switch, BPX switch    No

Description

Displays basic trunk information for all trunks on a node. This command applies to both physical and virtual trunks. The displayed information includes the trunk number, trunk type, current line alarm status, and the device at the other end.

For trunks that have been added to the network with the addtrk or addshelf command, the information includes the device name and trunk number at the other end. Trunks that have a "-" in the Other End column have been upped with uptrk but not yet added. For disabled trunks, the trunk numbers appear in reverse video on the screen. Virtual trunk numbers contain three parts, for example, 4.1.1.

Example

Enter the dsptrks command as follows to display the trunks on a BPX switch:

dsptrks

Sample Display:

n4             TN    SuperUser      BPX 15    9.1    Apr. 4 1998 16:40 PST

TRK    Type   Current Line Alarm Status   Other End
2.1    OC3    Clear - OK                  j4a/2.1
3.1    E3     Clear - OK                  j6c(AXIS)
5.1    E3     Clear - OK                  j6a/5.2
5.2    E3     Clear - OK                  j3b/3
5.3    E3     Clear - OK                  j5c(IPX/AF)
6.1    T3     Clear - OK                  j4a/4.1
6.2    T3     Clear - OK                  j3b/4
4.1    OC3    Clear - OK                  VSI(VSI)
4.2    OC3    Clear - OK                  VSI(VSI)
4.3    OC3    Clear - OK                  VSI(VSI)

Last Command: dsptrks

Next Command:

resetcd

The reset card command resets the hardware and software for a specified card.

Syntax

resetcd <slot_num> <reset_type>

Example

resetcd 5 H

Attributes

Privilege   Jobs   Log   Node                                  Lock
1-3         Yes    Yes   IPX switch, IGX switch, BPX switch    Yes

Related Commands

dspcd

Parameters-resetcd

Parameter     Description
slot number   Specifies the number of the slot containing the card to be reset.
H/F           Specifies whether the hardware or the failure history for the card is to be reset. An "H" specifies hardware; an "F" specifies failure history.

Description

A hardware reset is equivalent to physically removing and reinserting the front card of a card group and causes the card's logic to be reset. When you reset the hardware of an active card other than a controller card (an NPC, NPM, or BCC), a standby card takes over if one is available. A failure reset clears the card failures associated with the specified slot. If a slot contains a card set, both the front and back cards are reset.

Do not use the reset command on an active NPC, NPM, or BCC, because this causes a temporary interruption of all traffic while the card is rebooting. (Resetting a controller card does not destroy configuration information.) Where a redundant NPC, NPM, or BCC is available, the switchcc command is used to switch the active controller card to standby and the standby controller card to active. If a standby card is available, resetting an active card (except for an NPC, NPM, or BCC) does not cause a system failure. Resetting an active card that has no standby, however, does disrupt service until the self-test finishes.

Example 1

resetcd 3 H

Sample Display:

No display is generated.

upport

Activates (or "ups") a port on a card.

Syntax

upport <slot.port>

Example

upport 4.2

Attributes

Privilege   Jobs   Log   Node         Lock
1-2         Yes    Yes   BPX switch   Yes

Related Commands

dnport, cnfport, upln

Parameters-upport

Parameter   Description
slot.port   Specifies the slot number and port number of the port to be activated.

Description

The following example shows the screen that is displayed when the following command is entered to up a port on an ASI card:

upport 4.2

System Response

Sample Display:

n4             TN    SuperUser      BPX 15    9.1    Apr. 4 1998 16:40 PST

Port:        4.2    [ACTIVE ]
Interface:   T3-2
Type:        UNI
Speed:       96000 (cps)

CBR Queue Depth:               200
CBR Queue CLP High Threshold:  80%
CBR Queue CLP Low Threshold:   60%
CBR Queue EFCI Threshold:      80%
VBR Queue Depth:              1000     ABR Queue Depth:              9800
VBR Queue CLP High Threshold:  80%     ABR Queue CLP High Threshold:  80%
VBR Queue CLP Low Threshold:   60%     ABR Queue CLP Low Threshold:   60%
VBR Queue EFCI Threshold:      80%     ABR Queue EFCI Threshold:      80%

Last Command: upport 4.2

Next Command:

uptrk

Activates (or "ups") a trunk.

Syntax

uptrk <slot.port>[.vtrk]

Example

uptrk 4.1

Related Commands

addtrk, dntrk

Attributes

Privilege   Jobs   Log   Node                                  Lock
1-2         Yes    Yes   IPX switch, IGX switch, BPX switch    Yes

Parameters-uptrk

Parameter   Description
slot.port   Specifies the slot and port of the trunk to activate. If the card has only one port, the port parameter is not necessary. An NTM, for example, has one port.

Optional Parameters-uptrk

Parameter   Description
vtrk        Specifies the virtual trunk number. The maximum on a node is 32. The maximum on a T3 or E3 line is 32. The maximum for user traffic on an OC3/STM1 trunk is 11 (so more than one OC3/STM1 may be necessary).

Description

After you have upped the trunk but not yet added it, the trunk carries line signaling but does not yet carry live traffic. The node verifies that the trunk is operating properly. When the trunk is verified to be correct, the trunk alarm status goes to clear. The trunk is then ready to go into service, and can be added to the network.

If you need to take an active trunk between nodes out of service, the dntrk command may be used. However, this will result in temporary disruptions in service as connections are rerouted. The dntrk command causes the node to reroute any existing traffic if sufficient bandwidth is available.

Interface Shelves and Tag Switch Controllers: For interface shelves or tag switch controllers connected to a node, connections from those devices will also be disrupted when the links to them are deleted. For an interface shelf, the delshelf command is used to deactivate the trunk between the IGX or BPX routing node and the shelf.

Tag Switch Controller: For a tag switch controller, the delshelf command is also used to deactivate the link between the BPX routing node and the tag switch controller. In the case of tag switching, this is a link between a port on the BXM card and the tag switch controller. This link can be connected to a port that has been upped by either the upport or uptrk command, as the tag switching operation does not differentiate between these modes on the BXM.

Virtual Trunks: If you include the optional vtrk parameter, uptrk activates the trunk as a virtual trunk. If the front card is a BXM (in a BPX switch), uptrk indicates to the BXM that it is supporting a trunk rather than a UNI port. (See the upln description for the BXM in port mode.)

You cannot mix physical and virtual trunk specifications. For example, after you up a trunk as a standard trunk, you cannot add it as a virtual trunk when you execute addtrk. Furthermore, if you want to change trunk types between standard and virtual, you must first down the trunk with dntrk then up it as the new trunk type.

You cannot up a trunk if the required card is not available. Furthermore, if the card is executing a self-test, a "card in test" message may appear on-screen. If this message appears, re-enter uptrk.

Example 1

Activate (up) trunk 21. In this case the card has a single port, so only the slot number is necessary.

uptrk 21

Example 2

This example shows the screen when BXM trunk 4.1 connected to a Tag Switch Controller is upped with the following command:

uptrk 4.1

Sample Display:

n4             TN    SuperUser      BPX 15    9.1    Apr. 4 1998 16:40 PST

TRK    Type   Current Line Alarm Status   Other End
2.1    OC3    Clear - OK                  j4a/2.1
3.1    E3     Clear - OK                  j6c(AXIS)
5.1    E3     Clear - OK                  j6a/5.2
5.2    E3     Clear - OK                  j3b/3
5.3    E3     Clear - OK                  j5c(IPX/AF)
6.1    T3     Clear - OK                  j4a/4.1
6.2    T3     Clear - OK                  j3b/4
4.1    OC3    Clear - OK                  VSI(VSI)

Last Command: uptrk 4.1

Next Command:
Example 3

Activate (up) trunk 6.1.1, a virtual trunk in this case, as the third digit indicates.

uptrk 6.1.1


Posted: Mon Jan 15 19:54:47 PST 2001
All contents are Copyright © 1992--2001 Cisco Systems, Inc. All rights reserved.
Important Notices and Privacy Statement.