|
This chapter describes the BXM Virtual Switch Interface (VSI) and provides configuration procedures:
For information on configuring SES PNNI controllers to work with BPX switches, refer to the Cisco SES PNNI Controller Software Configuration Guide.
For information on configuring MPLS controllers to work with BPX switches, refer to the Cisco MPLS Controller Software Configuration Guide.
Refer to Cisco WAN Switching Command Reference for details about the commands mentioned here for both PNNI and MPLS controllers. Refer to Release Notes for supported features.
The Virtual Switch Interface (VSI) is a common control interface between the BPX 8650 or the MGX 8850 switches and an external controller that supports the VSI protocol.
VSI allows a node to be controlled by multiple controllers, such as Multiprotocol Label Switching (MPLS) and Service Expansion Shelf Private Network-to-Network Interface (SES PNNI) controllers.
When a VSI is activated on a port, trunk, or virtual trunk so that it can be used by a master controller, such as an SES PNNI or an MPLS controller, the resources of the virtual interface associated with the port, trunk, or virtual trunk are made available to the VSI. These control planes can be external or internal to the switch. The VSI provides a mechanism for networking applications to control the switch and use a partition of the switch resources.
VSI on the BPX provides:
VSI was implemented first on the BPX 8650 in Release 9.1, which used VSI to perform Multiprotocol Label Switching; that release added support for VSI on BXM cards and for partitioning BXM resources between Automatic Routing Management and a VSI MPLS controller. BPX software uses a partition to identify and assign resources such as LCNs, VPIs, and bandwidth to a controller. Multiple VSI partitions may be defined on a single physical port.
BPX Release 9.2 supports up to three VSI partitions in addition to Automatic Routing Management. The VSI partitions are controlled by VSI masters such as the PNNI or MPLS controllers. When configuring, allocate switch resources to Automatic Routing Management and VSI slaves. Resources are first allocated to Automatic Routing Management and can then be reallocated to VSI.
Release 9.3.10 introduces Soft Partitioning and Dynamic Partitioning in order to support the smooth introduction of another VSI controller into a BPX network already configured with an existing VSI controller, easier tuning of switch resources, and the migration of Automatic Routing Management to PNNI (see the section Soft and Dynamic Partitioning later in this chapter).
Multiprotocol Label Switching (MPLS, previously called Tag Switching) enables routers at the edge of a network to apply simple labels to packets (frames), allowing devices in the network core to switch packets according to these labels with minimal lookup activity. MPLS in the network core can be performed by switches, such as ATM switches, or by existing routers.
MPLS integrates virtual circuit switching with IP routing to offer scalable IP networks over ATM. MPLS supports data, voice, and multimedia services over ATM networks. MPLS summarizes routing decisions so that switches can perform IP forwarding, as well as bringing other benefits that apply even when MPLS is used in router-only networks.
Using MPLS techniques, it is possible to set up explicit routes for data flows that are constrained by path, resource availability, and requested Quality of Service (QoS). MPLS also facilitates highly scalable Virtual Private Networks.
MPLS assigns labels to IP flows, placing them in the IP frames. The frames can then be transported across packet or cell-based networks and switched on the labels rather than being routed using IP address look-up.
A routing protocol, such as OSPF, is used together with the Label Distribution Protocol (LDP) to set up MPLS virtual connections (VCs) on the switch.
MPLS is a standardized version of Cisco's original Tag Switching proposal. MPLS and Tag Switching are identical in principle and nearly so in operation. MPLS terminology has replaced obsolete Tag Switching terminology.
An exception to the terminology is Tag Distribution Protocol (TDP). TDP and the MPLS Label Distribution Protocol (LDP) are nearly identical, but use different message formats and procedures. TDP is used in this design guide only when it is important to distinguish TDP from LDP. Otherwise, any reference to LDP in this design guide also applies to TDP.
In the VSI control model, a controller sees the switch as a collection of slaves with their interfaces. The controller can establish connections between any two interfaces. The controller uses resources allocated to its partition.
Each VSI interface can be assigned a default Class of Service template upon activation. Use the switch software CLI or Cisco WAN Manager to assign a different template to an interface.
The procedure for adding a VSI-based controller such as the MPLS controller to the BPX is similar to adding an MGX 8220 interface shelf to the BPX. To attach a controller to a node to control the node, use the addshelf command.
The VSI controllers are allocated a partition of the switch resources. VSI controllers manage their partition through the VSI protocol. The controllers run the VSI master. The VSI master entity interacts with the VSI slave running on the BXMs through the VSI interface to set up VSI connections using the resources in the partition assigned to the controller.
To configure VSI resources on a given interface, use the cnfrsrc command.
This section provides the basic procedures to:
To add an MPLS controller to any BXM trunk, use the addshelf command with the V (VSI) option.
To add an SES PNNI controller, use the addshelf command with an X option.
To identify VSI controllers and distinguish them from feeders, use the V (VSI) option of the addshelf command.
To add an SES PNNI controller to a BPX node through an AAL5 interface shelf or feeder type configured with VSI controller capabilities, use the addctrlr command.
If you are adding two controllers that are intended to be used in a redundant configuration, you must specify the same partition when you add them to the node by using the addshelf command.
To add an MPLS controller (or a generic VSI controller that does not need AnnexG protocol):
Step 2 Add an MPLS controller by using the addshelf command with feeder type set to "V".
Step 3 Display the controllers and interface shelves attached to the node by using the dspnode command.
Step 4 Display the VSI controllers on a BPX node by using the dspctrlrs command.
Note that addshelf and addtrk are mutually exclusive commands; that is, you can use either addshelf or addtrk, but not both on the same interface shelf.
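For example, to attach an MPLS controller on BXM trunk 4.1 using partition 1 and controller ID 2, the sequence might look like the following. The interface, controller ID, and partition values are illustrative, and the cnfrsrc parameter names and order are indicative only; see the Cisco WAN Switching Command Reference for the exact syntax in your release.

uptrk 4.1

cnfrsrc 4.1 <maxpvclcns> <maxpvcbw> <partition> e <minvsilcns> <maxvsilcns> <start_vpi> <end_vpi> <minvsibw> <maxvsibw>

addshelf 4.1 v 2 1

dspnode

dspctrlrs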
To add a PNNI controller, use these commands:
Step 2 Configure resource on the trunk interface for the PNNI controller's control channels by using the cnfrsrc command.
Step 3 Add the SES PNNI to the BPX and enable AnnexG protocol to run between the BPX and the SES by using the addshelf command with feeder type set to "X".
Step 4 Enable the VSI capabilities on the trunk interface by using the addctrlr command.
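For example, for an SES PNNI controller attached through BXM trunk 5.1, the sequence might look like the following. The interface number is illustrative, and the addctrlr arguments shown (controller ID, partition, and control-channel identifiers) are assumptions about the command form; consult the Cisco WAN Switching Command Reference for the exact syntax.

uptrk 5.1

cnfrsrc 5.1 <maxpvclcns> <maxpvcbw> <partition> e <minvsilcns> <maxvsilcns> <start_vpi> <end_vpi> <minvsibw> <maxvsibw>

addshelf 5.1 x

addctrlr 5.1 <ctrlr_id> <partition_id> <control_vpi> <control_vci>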
Display commands such as dspnw and dspnode show interface shelves.
To view conditions on an interface shelf (feeder) trunk, use:
To view conditions of VSI controllers, use:
The designation for a Multiprotocol Label Switching (MPLS) controller serving as an interface shelf is LSC (Label Switch Controller).
In Release 9.3.10, the external network management system can query the BPX via SNMP to discover VSI controller IDs and IP addresses.
To delete a controller or interface (feeder) shelf, first delete it from the network. Then down the port and trunk. This applies to MPLS controllers or generic VSI controllers that do not need AnnexG protocols.
To delete an MPLS controller:
Step 2 Down the port by using the dnport command.
OR:
Step 3 Down the trunk by using the dntrk command.
To delete a PNNI controller:
Step 2 Delete the SES attached to the trunk interface by using the delshelf command.
Step 3 Disable the VSI resource partition allocated for PNNI controller on the trunk interface by using the cnfrsrc command.
Step 4 Down the trunk interface (provided no other VSI partitions are active on the trunk interface) by using the dntrk command.
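For example, to remove an SES PNNI controller attached on trunk 5.1 (interface number illustrative; the d option shown for cnfrsrc stands for disabling the partition, and the exact form may vary by release):

delshelf 5.1

cnfrsrc 5.1 <maxpvclcns> <maxpvcbw> <partition> d

dntrk 5.1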
This section describes the key concepts and procedures for configuring VSI partition resources.
Prior to Release 9.1, LCNs, the VPI range, and bandwidth allocation were managed exclusively by the BCC. With the introduction of VSI, the switch software must also allocate a range of LCNs, a range of VPIs, and an amount of bandwidth for use by VSI; this allocation is performed by switch software rather than by the BXM.
When configuring resource partitions on a VSI interface, the following commands are typically used:
The next step to complete when adding a VSI-based controller such as an LSC or a PNNI controller is to configure resource partitions on BXM interfaces to allow the controller to control the BXM interfaces. To do this, first create resource partitions on these interfaces. Use the cnfrsrc command to add, delete and modify a partition on a specified interface.
You may have up to three VSI controllers on the same partition (referred to as VSI master redundancy). The master redundancy feature allows multiple VSI masters to control the same partition.
See Table 23-1 for a listing of cnfrsrc parameters, ranges and values, and descriptions. These descriptions are oriented to actions and behavior of the BXM firmware; in most cases, objects (messages) are sent to switch software. Most of these parameters appear on the cnfrsrc screen.
Parameter (Object) Name | Range/Values | Default | Description |
---|---|---|---|
VSI partition | 1-3 | 1 | Identifies the partition |
Partition state | 0 = Disable Partition 1 = Enable Partition | NA | For Partition state = 1, Objects are mandatory |
Min LCNs | 0-64K | NA | Minimum LCNs (connections) guaranteed for this partition. |
Max LCNs | 0-64K | NA | Maximum LCNs permitted on this partition |
Start VPI | 0-4095 | NA | Partition Start VPI |
End VPI | 0-4095 | NA | Partition End VPI |
Min Bw | 0-Line Rate | NA | Minimum Partition bandwidth |
Max Bw | 0-Line Rate | NA | Maximum Partition bandwidth |
PVC VPI Range 1 | 0-4095 | -1 | Dynamic partitioning |
PVC VPI Range 2 | 0-4095 | -1 | Dynamic partitioning |
PVC VPI Range 3 | 0-4095 | -1 | Dynamic partitioning |
PVC VPI Range 4 | 0-4095 | -1 | Dynamic partitioning |
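Taken together, the parameters in Table 23-1 correspond to the fields you supply to cnfrsrc for one partition on one interface. A representative invocation is shown below; the parameter names and ordering are indicative only (the command also operates interactively, prompting for each field), so verify against the Command Reference for your release.

cnfrsrc <slot.port.vtrk> <maxpvclcns> <maxpvcbw> <partition> <e/d> <minvsilcns> <maxvsilcns> <start_vpi> <end_vpi> <minvsibw> <maxvsibw>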
Soft and Dynamic Partitioning (new in Release 9.3.10) supports smooth introduction of another VSI controller into an existing BPX network already configured with an existing VSI controller, easier tuning of switch resources, and the migration of Automatic Routing Management to PNNI.
Soft Partitioning provides resource guarantees for LCNs and bandwidth per partition and a pool of resources available to all partitions in addition to the guaranteed resources. Dynamic Partitioning provides the ability to rather easily increase the allocation of a resource to a partition.
Define and manage the number of LCNs assigned to a given VSI partition by modifying the "Minimum VSI LCNs" and "Maximum VSI LCNs" fields of the cnfrsrc CLI command.
To give more LCNs to VSI from Automatic Routing Management, increase the Min LCNs or Max LCNs values of the partition so that BPX software enlarges the VSI LCN space.
To increase the LCNs reserved to a VSI partition, increase the "Minimum VSI LCNs" or "Maximum VSI LCNs" fields of the appropriate VSI partition. The VSI LCN boundary is moved into Automatic Routing Management if there are enough free Automatic Routing Management LCNs to fulfill the request.
If there are not enough free LCNs in the Automatic Routing Management (AR) space, the cnfrsrc command does not fulfill a request to increase the VSI LCN space. In such a case, the cnfrsrc command displays a failure message showing the number of currently free AR LCNs. You can reissue the cnfrsrc command specifying a smaller increase to the VSI partition. If that is not acceptable, you must first delete and reroute the necessary number of AR connections. Then you can attempt cnfrsrc again.
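For example, if a request to grow a partition's LCN space to 3000 LCNs fails and the failure message shows only 1500 free AutoRoute LCNs, you can reissue the command with a smaller value that fits within the free space (values and parameter positions illustrative; cnfrsrc also prompts for each field interactively):

cnfrsrc 4.1 <maxpvclcns> <maxpvcbw> 1 e 2400 2400 <start_vpi> <end_vpi> <minvsibw> <maxvsibw>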
Moving the VSI LCN boundary into the Automatic Routing Management space might step over LCNs that are currently allocated. BPX software reprograms the necessary channels so that new channels out of the lower AR LCN space are picked instead. Before starting the process of reprogramming the necessary number of AR connections, the cnfrsrc command displays a warning message and waits for your permission to proceed. The warning message shows the number of Automatic Routing Management (AR) connections that will be reprogrammed. After reprogramming the necessary channels, the LCN boundary is moved into the Automatic Routing Management space.
Note You can migrate Automatic Routing Management (AutoRoute) connections only if the VPI range of the recipient VSI partition is adjacent to Automatic Routing Management. To migrate Automatic Routing Management connections to a nonadjacent VSI partition requires different VPIs within the recipient VPI boundary.
The ATM Class of Service templates (also called Service Class Templates, or SCTs) provide a means of mapping a set of extended parameters. These parameters are generally platform specific and are based on the set of standard ATM parameters passed to the VSI slave on a BXM port interface during initial setup of the interface.
A set of service templates is stored in each BPX 8650 switch and downloaded to the service modules (BXMs) as needed during initial configuration of the VSI interface when a trunk or line is enabled on the BXM.
Each service template type has an associated Qbin. The Qbins provide the ability to manage bandwidth by temporarily storing cells and then serving them out based on a number of factors, including bandwidth availability and the relative priority of different Classes of Service.
When ATM cells arrive from the Edge LSR at the BXM port with one of four CoS labels, they receive CoS handling based on that label. A table look-up is performed, and the cells are processed, based on their connection classification. Based on its label, a cell receives the ATM differentiated service associated with its template type and service type (for example, label cos2 bw), plus associated Qbin characteristics and other associated ATM parameters.
A default service template is automatically assigned to a logical interface (VI) when you up the interface by using the commands upport and uptrk. The corresponding Qbin template is then copied into the card's (BXM) data structure of that interface.
Following are some examples of assigning a default service template by using the commands upport and uptrk:
This default template has the identifier of 1. To change the service template from service template 1 to another service template, use the cnfvsiif command.
To assign a selected service template to an interface (VI) use the cnfvsiif command, specifying the template number. It has this syntax:
cnfvsiif <slot.port.vtrk> <tmplt_id>
For example:
cnfvsiif 1.1 2
cnfvsiif 1.1.1 2
Use the dspvsiif command to display the type of service template assigned to an interface (VI). It has the following syntax:
dspvsiif <slot.port.vtrk>
dspvsiif 1.1
dspvsiif 1.1.1
To change some of the template's Qbin parameters, use the cnfqbin command. The Qbin is now "user configured" as opposed to "template configured."
To view this information, use the command dspqbin.
dspsct
Use the dspsct command to display the Service Class Template number assigned to an interface. The command has three levels of operation:
dspqbint
Displays the Qbin templates
cnfqbin
Configures the Qbin. You can answer yes when prompted and the command will use the card Qbin values from the Qbin templates.
dspqbin
Displays Qbin parameters currently configured for the virtual interface.
dspcd
Displays the card configuration.
When you activate an interface by using an uptrk or upport command, a default service template (MPLS1) is automatically assigned to that interface. The corresponding Qbin templates are simultaneously set up in the BXM's data structure for that interface. This service template has an identifier of "1".
To change the service template assigned to an interface, use the cnfvsiif command. You can do this only when there are no active VSI connections on the BXM.
To display the assigned templates, use the dspvsiif command.
Each template table row includes an entry that defines the Qbin to be used for that Class of Service (see Figure 23-10).
This mapping defines a relationship between the template and the interface Qbin's configuration.
A Qbin template defines a default configuration for the set of Qbins for the logical interface. When a template assignment is made to an interface, the corresponding default Qbin configuration becomes the interface's Qbin configuration.
Once a service template has been assigned, you can then adjust some of the parameters of this configuration on a per-interface basis. Changes you make to the Qbin configuration of an interface affect only that interface's Qbin configuration. Your changes do not affect the Qbin template assigned to that interface.
To change the template's configuration of the interface, provide new values by using the cnfqbin command. The Qbin is now "user configured" as opposed to "template configured." This information is displayed on the dspqbin screen, which indicates whether the values in the Qbin are from the template assigned to the interface, or whether the values have been changed to user-defined values.
To see the Qbin's default service type and the Qbin number, execute the dspsct command.
Use the following commands to configure Qbins:
You can enable VSI ILMI functionality both on line (port) interfaces and trunk interfaces when using PNNI. Note that VSI ILMI functionality cannot be enabled on trunks to which feeders or VSI controllers are attached.
To enable VSI ILMI functionality on line (port) interfaces:
Step 2 Up the port interface by using the upport command.
Step 3 Configure the port to enable ILMI protocol and ensure that the protocol runs on the BXM card by enabling the "Protocol by the card" option of the cnfport command.
Step 4 Configure a VSI partition on the line interface by using the cnfrsrc command.
Step 5 Enable VSI ILMI functionality for the VSI partition by using the cnfvsipart command.
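For example, to enable VSI ILMI on line 3.1 for partition 1, the sequence might look like the following. The interface and partition numbers are illustrative, and the cnfvsipart argument form (interface, partition, enable flag) is an assumption; cnfport and cnfrsrc are interactive, so set the indicated options at their prompts.

upport 3.1

cnfport 3.1 (enable the ILMI protocol and the "Protocol by the card" option when prompted)

cnfrsrc 3.1 <maxpvclcns> <maxpvcbw> 1 e <minvsilcns> <maxvsilcns> <start_vpi> <end_vpi> <minvsibw> <maxvsibw>

cnfvsipart 3.1 1 e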
To enable VSI ILMI functionality on physical trunk interfaces:
Step 2 Configure the trunk to enable ILMI protocol to run on the BXM card by enabling the "Protocol by the card" option of the cnftrk command.
Step 3 Configure a VSI partition on the trunk interface by using the cnfrsrc command.
Step 4 Enable VSI ILMI session for the VSI partition by using the cnfvsipart command.
To enable VSI ILMI functionality on virtual trunk interfaces:
Step 2 Configure the trunk VPI by using the cnftrk command. (For virtual trunks, ILMI automatically runs on the BXM card; this behavior is not configurable through the cnftrk command.)
Step 3 Configure a VSI partition on the virtual trunk interface by using the cnfrsrc command.
Step 4 Enable VSI ILMI functionality for the VSI partition by using the cnfvsipart command.
Note VSI ILMI can be enabled for only one VSI partition on the trunk interface.
To display VSI ILMI functionality on interfaces:
The VSI virtual trunking feature lets you use BXM virtual trunks as VSI interfaces. Using this capability, VSI master controllers can terminate connections on virtual trunk interfaces.
Activate and configure VSI resources on a virtual trunk using the same commands you use to configure physical interfaces (for example, cnfrsrc, dsprsrc). The syntax used to identify a trunk has an optional virtual trunk identifier that you append to the slot and port information to identify virtual trunk interfaces.
A virtual trunk is a VPC that terminates at each end on a switch port. Each virtual trunk can contain up to 64,000 VCCs, but it cannot contain VPCs.
Virtual trunk interfaces cannot be shared between VSI and Automatic Routing Management. Therefore, configuring a trunk as a VSI interface prevents you from adding the trunk as an Automatic Routing Management trunk. Similarly, a trunk that has been added to the Automatic Routing Management topology cannot be configured as a VSI interface.
Virtual trunks on the BPX use a single configurable VPI. Because virtual trunk interfaces are dedicated to VSI, the entire range of VCIs is available to the VSI controllers.
The virtual trunking feature introduces the concept of defining multiple trunks within a single trunk port interface. This creates a fan-out capability on the trunk card.
Once VSI is enabled on the virtual trunk, Automatic Routing Management does not include this trunk in its route selection process.
To configure a VSI virtual trunk:
Step 2 Set up VPI value and trunk parameters by using the command
cnftrk <slot.port.vtrunk>
Step 3 Enable VSI partition by using the command
cnfrsrc <slot.port.vtrunk>
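For example, for virtual trunk 3 on port 6.1 (values illustrative; the third element of the interface identifier selects the virtual trunk):

cnftrk 6.1.3

cnfrsrc 6.1.3 <maxpvclcns> <maxpvcbw> <partition> e <minvsilcns> <maxvsilcns> <start_vpi> <end_vpi> <minvsibw> <maxvsibw>

Because a virtual trunk uses a single configurable VPI, the VPI values refer to that VP, and the full VCI range within it is available to the VSI controllers.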
This section provides detailed reference to virtual interfaces, service templates, and Qbins.
The BXM supports 31 virtual interfaces that provide a number of resources, including Qbin buffering capability. One virtual interface is assigned to each logical trunk (physical or virtual) when the trunk is enabled. (See Figure 23-1.)
Each virtual interface has 16 Qbins assigned to it. Qbins 0 through 9 are used for Automatic Routing Management. Qbins 10 through 15 are available for use by VSI. (In Release 9.1, only Qbin 10 was used.) Qbins 10 through 15 support Class of Service (CoS) templates on the BPX.
You may enable VSI on a port, trunk, or virtual trunk. The VSI is assigned the resources of the associated virtual interface.
With virtual trunking, a physical trunk can comprise a number of logical trunks called virtual trunks. Each of these virtual trunks (equivalent to a virtual interface) is assigned the resources of one of the 31 virtual interfaces on a BXM (see Figure 23-1).
A controller application uses a VSI master to control one or more VSI slaves. For the BPX, the controller application and master VSI reside in an external 7200 or 7500 series router and the VSI slaves are resident in BXM cards on the BPX node (see Figure 23-2).
The controller sets up these types of connections:
The controller establishes a link between the VSI master and every VSI slave on the associated switch. The slaves in turn establish links between each other (see Figure 23-3).
With a number of switches connected together, there are links between switches with cross-connects established within the switch as shown in Figure 23-4.
When a connection request is received by the VSI slave, it is first subjected to a Connection Admission Control (CAC) process before being forwarded to the firmware layer responsible for actually programming the connection. The granting of the connection is based on the following criteria:
LCNs available in the VSI partition:
QoS guarantees:
When the VSI slave accepts (that is, after CAC) a connection setup command from the VSI master in the MPLS controller, it receives information about the connection including service type, bandwidth parameters, and QoS parameters. This information is used to determine an index into the VI's selected Service Template's VC Descriptor table thereby establishing access to the associated extended parameter set stored in the table.
Ingress traffic is managed differently and a pre-assigned ingress service template containing CoS Buffer links is used.
The Virtual Switch Interface must partition the resources between competing controllers, for example Automatic Routing Management, MPLS, and PNNI. You partition resources by using the cnfrsrc command.
Note Release 9.3 supports up to three partitions.
Table 23-2 shows the three resources that must be configured for a partition designated ifci, which stands for interface controller 1 in this instance.
ifci parameters | Min | Max |
---|---|---|
lcns | min_lcnsi | max_lcnsi |
bw | min_bwi | max_bwi |
vpi | min_vpi | max_vpi |
Some ranges of values available for a partition are listed in Table 23-3:
Range | |
---|---|
trunks | 1-4095 VPI range |
ports | 1-4095 VPI range |
virtual trunk | Only one VPI available per virtual trunk since a virtual trunk is currently delineated by a specific VP |
virtual trunk | Each virtual trunk can either be Automatic Routing Management or VSI, not both |
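As a concrete illustration, two non-overlapping partitions on a trunk might be defined as follows (all values are illustrative only):

Partition 1 (LSC): VPI 2-15, Min LCNs 512, Max LCNs 1024, Min Bw 26000, Max Bw 100500

Partition 2 (PNNI): VPI 16-31, Min LCNs 512, Max LCNs 1024, Min Bw 26000, Max Bw 100500

The VPI ranges of different partitions must not overlap, and the sum of the minimum guarantees across partitions, together with the Automatic Routing Management allocation, must fit within the interface's total LCN and bandwidth resources.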
When a trunk is added, the entire bandwidth is allocated to Automatic Routing Management. To change the allocation in order to provide resources for a VSI, use the cnfrsrc command on the BPX switch. A view of the resource partitioning available is shown in Figure 23-5.
You can configure partition resources between Automatic Routing Management PVCs and three VSI controllers (LSC or PNNI). Up to three VSI controllers in different control planes can independently control the switch with no communication between controllers. The controllers are essentially unaware of the existence of other control planes sharing the switch. This is possible because the different control planes use different partitions of the switch resources.
You can add one or more redundant LSC controllers to one partition, and one or more redundant PNNI controllers to the other partition. With Release 9.2.3, six new Service Class Templates were added for interfaces (for a total of nine) with multiple partitions controlled simultaneously by a PNNI controller and an LSC.
The master redundancy feature allows multiple controllers to control the same partition. In a multiple partition environment, master redundancy is independently supported on each partition.
These limitations apply to multiple VSI partitioning:
The card uses a flag in the capability message to report multiple partition capability. Firmware releases that do not support multiple partitions set this flag off. The multiple partitions capability is treated as a card attribute and added to the attribute list.
Use of a partition with an ID higher than 1 requires support for multiple VSI partitions in both switch software and BXM firmware, even if it is the only partition active on the card. In a Y-red pair configuration, the multiple partition capability is determined by the minimum capability of the two cards.
A card with no multiple partition capabilities will mismatch if any of the interfaces has an active partition with ID higher than 1. Attempts to enable a partition with ID higher than 1 in a logical card that does not support multiple partitions will be blocked.
Each logical switch can be seen as a collection of interfaces each with an associated set of resources.
Consider a BPX switch with 4 interfaces:
Also assume the resource partitioning in Table 23-4.
Interface | Automatic Routing Management | Partition 1 | Partition 2 |
---|---|---|---|
10.1 | Enable | Enable | Enable |
10.2.1 | Enable | Disable | Disable |
11.1 | Enable | Enable | Enable |
11.7.1 | Disable | Enable | Disable |
Three virtual switches are defined by this configuration:
A logical switch is configured by enabling the partition and allocating resources to the partition. This must be done for each of the interfaces in the partition. The same procedure must be followed to define each of the logical switches. As resources are allocated to the different logical switches, a partition of the switch resources is defined.
The resources that are partitioned amongst the different logical switches are:
Resources are configured and allocated per interface, but the pool of resources may be managed at a different level. The pool of LCNs is maintained at the card level, and there are also limits at the port group level. The bandwidth is limited by the interface rate, and therefore the limitation is at the interface level. Similarly the range of VPI is also defined at the interface level.
You configure these parameters on a VSI partition on an interface:
In addition to partitioning of resources between VSI and Automatic Routing Management, multiple partitioning allows subpartitioning of the VSI space among multiple VSI partitions. Multiple VSI controllers can share the switch with each other and also with Automatic Routing Management.
The difference between the two types of partitioning is that all the VSI resources are under the control of the VSI-slave, while the management of Automatic Routing Management resources remains the province of the switch software.
These commands are used for multiple partitioning:
The ability to have multiple VSI controllers is referred to as VSI master redundancy. Master redundancy enables multiple VSI masters to control the same partition.
You add a redundant controller by using the addshelf command, the same way you add an interface (feeder) shelf, except that you specify a partition that is already in use by another controller. This capability can be used by the controllers for cooperative or exclusive redundancy:
The switch software has no knowledge of the state of the controllers. The state of the controllers is determined by the VSI entities. From the point of view of the BCC, there is no difference between cooperative redundant controllers and exclusive redundant controllers.
For illustrations of a VSI master and slave, see Figure 23-3. For an illustration of a switch with redundant controllers that support master redundancy, see Figure 23-8.
Switch software supports master redundancy in these ways:
The intercontroller communication channel is set up by the controllers. This could be an out-of-band channel, or the controllers can use the controllers interface information advertised by the VSI slaves to set up an intermaster channel through the switch.
Figure 23-8 below shows a switch with redundant controllers and the connectivity required to support master redundancy.
The controller application and Master VSI reside in an external VSI controller (MPLS or PNNI), such as the Cisco 6400 or the MPLS controller in a 7200 or 7500 series router. The VSI slaves are resident in BXM cards on the BPX node.
You add a VSI controller, such as an MPLS or PNNI controller by using the addshelf command with the VSI option. The VSI option of the addshelf command identifies the VSI controllers and distinguishes them from interface shelves (feeders).
The VSI controllers are allocated a partition of the switch resources. VSI controllers manage their partition through the VSI interface.
The controllers run the VSI master. The VSI master entity interacts with the VSI slave running on the BXMs through the VSI interface to set up VSI connections using the resources in the partition assigned to the controller.
Two controllers intended to be used in a redundant configuration must specify the same partition when added to the node with the addshelf command.
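For example, two redundant MPLS controllers with controller IDs 1 and 2 could both be attached to partition 1 on different trunks (trunk numbers and the addshelf parameter order are illustrative; verify against the Command Reference for your release):

addshelf 10.1 v 1 1

addshelf 11.2 v 2 1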
When a controller is added to the node, switch software will set up the infrastructure so that the controllers can communicate with the slaves in the node. The VSI entities decide how and when to use these communication channels.
In addition, the controllers require a communication channel between them. This channel could be in-band or out-of-band. When a controller is added to the switch, switch software will send controller information to the slaves. This information will be advertised to all the controllers in the partition. The controllers may decide to use this information to set up an intermaster channel. Alternatively, the controllers may use an out-of-band channel to communicate.
The maximum number of controllers that can be attached to a given node is limited by the maximum number of feeders that can be attached to a BPX hub. The total number of interface shelves (feeders) and controllers is 16.
Prior to Release 9.2, hot standby functionality was supported only for Automatic Routing Management connections. This was accomplished by the BCC keeping both the active and standby cards in sync with respect to all configuration, including all connections set up by the BCC. However, the BCC does not participate in, nor is it aware of, the VSI connections that are set up independently by the VSI controllers.
Therefore, the task of keeping the redundant card in a hot standby state (for all the VSI connections) is the responsibility of the two redundant pair slaves. This is accomplished by a bulk update (on the standby slave) of the existing connections at the time that (line and trunk) Y-redundancy is added, as well as an incremental update of all subsequent connections.
The hot standby slave redundancy feature enables the redundant card to fully duplicate all VSI connections on the active card, and to be ready for operation on switchover. On bringup, the redundant card initiates a bulk retrieval of connections from the active card for fast sync-up. Subsequently, the active card updates the redundant card on a real-time basis.
The VSI Slave Hot Standby Redundancy feature provides the capability for the slave standby card to be preprogrammed the same as the active card so that when the active card fails, the slave card switchover operation can be done quickly (within 250 ms). Without the VSI portion, the BXM card already provided the hot standby mechanism by duplicating CommBus messages from the BCC to the standby BXM card.
The following sections describe some of the communication between the switch software and firmware to support VSI master and slave redundancy.
To provide a smooth migration of the VSI feature on the BXM card, line and trunk Y-redundancy is supported. You can pair cards with and without VSI capability as a Y-redundant pair; as long as the feature is not configured on the given slot, switch software does not perform "mismatch checking" even if the BXM firmware does not support the VSI feature.
A maximum of two partitions are possible. The card uses a flag in the capability message to report multiple partition capability. Firmware releases that do not support multiple partitions set this flag to OFF. The multiple partitions capability is treated as a card attribute and added to the attribute list.
In a Y-redundant pair configuration, the multiple partition capability is determined by the minimum of the two cards. A card without multiple partition capability will mismatch if any interface has an active partition with an ID higher than 1. Attempts to enable a partition with an ID higher than 1 on a logical card that does not support multiple partitions are blocked.
You add a controller, including Label Switch Controllers, to a node by using the addshelf command. You add a redundant controller in the same way, except that you specify a partition that may already be in use by another controller. The addshelf command allows for the addition of multiple controllers that manage the same partition.
Use the addctrlr command to attach a controller that requires Annex G capabilities in the controller interface. Note that you must first add the shelf by using the addshelf command.
You add VSI capabilities to the interface by using the addctrlr command. The only interface that supports this capability is an AAL5 feeder interface.
When adding a controller, you must specify a partition ID. The partition ID identifies the logical switch assigned to the controller. The valid partitions are 1 and 2. The user interface blocks the activation of partitions with ID higher than 1 if the card does not support multiple partitions.
To display the list of controllers in the node, use the command dspctrlrs.
The functionality is also available via SNMP using the switchIfTable in the switch MIB.
You can add one or more redundant MPLS controllers to one partition, and one or more redundant PNNI controllers to the other partition.
When using the addshelf command to add a VSI controller to the switch, you must specify the controller ID. This is a number between 1 and 32 that uniquely identifies the controller. Two different controllers must always be specified with different controller IDs.
Note The Controller ID for a PNNI controller must be 2.
The management of resources on the VSI slaves requires that each slave in the node has a communication control VC to each of the controllers attached to the node. When a controller is added to the BPX by using the addshelf command, the BCC sets up the set of master-slave connections between the new controller port and each of the active slaves in the switch.
The connections are set up using a well-known VPI.VCI. The value of the VPI is 0. The value of the VCI is 40 + (slot - 1), where slot is the logical slot number of the slave. These are the default values; you can modify them by using the addctrlr command.
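The default addressing rule above lends itself to a one-line computation. The following sketch encodes it; the function name is illustrative, not part of the switch software:

```python
def control_vc(slot: int) -> tuple[int, int]:
    """Default master-slave control VC for a slave in a given logical
    slot: the VPI is always 0, and the VCI is 40 + (slot - 1).
    These defaults can be overridden with the addctrlr command."""
    if not 1 <= slot <= 12:  # a BPX node holds at most 12 slaves
        raise ValueError("invalid logical slot")
    return 0, 40 + (slot - 1)

# The slave in logical slot 3 listens on VPI 0, VCI 42.
```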
Note that once the controllers have been added to the node, the connection infrastructure is always present. The controllers may decide to use it or not, depending on their state.
The addition of a controller to a node will fail if there are not enough channels available to set up the control VCs in one or more of the BXM slaves.
The BCC also informs the slaves of the new controller through a VSI configuration CommBus message (the BPX's internal messaging protocol). The message includes a list of controllers attached to the switch and their corresponding controller IDs. This internal firmware command includes the interface where the controller is attached. This information, when advertised by the slaves, can be used by the controllers to set up an inter-master communication channel.
When the first controller is added, the BCC behaves as it did in releases previous to Release 9.2. The BCC will send a VSI configuration CommBus message to each of the slaves with this controller information, and it will set up the corresponding control VCs between the controller port and each of the slaves.
When a new controller is added to drive the same partition, the BCC will send a VSI configuration CommBus message with the list of all controllers in the switch, and it will set up the corresponding set of control VCs from the new controller port to each of the slaves.
To delete a controller from the switch, use either delshelf or delctrlr.
Use the command delshelf to delete generic VSI controllers.
Use the command delctrlr to delete controllers that have been added to Annex G-capable interfaces.
When one of the controllers is deleted by using the delshelf command, the master-slave connections associated with this controller will be deleted. The control VCs associated with other controllers managing the same partition will not be affected.
The deletion of the controller triggers a new VSI configuration (internal) CommBus message. This message includes the list of the controllers attached to the node. The deleted controller will be removed from the list. This message will be sent to all active slaves in the shelf. In cluster configurations, the deletion of a controller will be communicated to the remote slaves by the slave directly attached through the interslave protocol.
While there is at least one controller attached to the node controlling a given partition, the resources in use on this partition should not be affected by a controller having been deleted. Only when a given partition is disabled will the slaves release all the VSI resources used on that partition.
The addshelf command allows multiple controllers on the same partition. You will be prompted to confirm the addition of a new VSI shelf with a warning message indicating that the partition is already used by a different controller.
When a new slave is activated in the node, the BCC will send a VSI configuration CommBus (internal BPX protocol) message with the list of the controllers attached to the switch.
The BCC will also set up a master-slave connection from each controller port in the switch to the added slave.
When a slave is deactivated in the node, the BCC will tear down the master-slave VCs between each of the controller ports in the shelf and the slave.
VSI LCNs are used for setting up the following management channels:
Intershelf blind channels are used in cluster configuration for communication between slaves on both sides of a trunk between two switches in the same cluster node.
The maximum number of slaves in a switch is 12. Therefore, a maximum of 11 LCNs are necessary to connect a slave to all other slaves in the node.
If a controller is attached to a shelf, master-slave connections are set up between the controller port and each of the slaves in the shelf.
For each slave that is not directly connected, the master-slave control VC consists of two legs:
For the slave that is directly connected to the controller, the master-slave control VC consists of a single leg between the controller port and the slave. Therefore, 12 LCNs are needed in the directly connected slave, and 1 LCN in each of the other slaves in the node for each controller attached to the shelf.
These LCNs will be allocated from the Automatic Routing Management pool. This pool is used by Automatic Routing Management to allocate LCNs for connections and networking channels.
For a given slave the number of VSI management LCNs required from the common pool is:
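Based on the channel accounting described above (up to 11 interslave LCNs, 12 LCNs in the slave directly connected to a controller, and 1 LCN in each other slave per controller), the per-slave requirement can be sketched as follows; the function name and parameterization are illustrative assumptions:

```python
MAX_SLAVES = 12  # maximum number of slaves in a switch

def vsi_mgmt_lcns(local_ctrlrs: int, remote_ctrlrs: int,
                  slaves: int = MAX_SLAVES) -> int:
    """LCNs one slave draws from the Automatic Routing Management
    pool: (slaves - 1) interslave blind channels, 12 LCNs for each
    controller attached to this slave's port, and 1 LCN for each
    controller attached elsewhere in the node."""
    return (slaves - 1) + 12 * local_ctrlrs + remote_ctrlrs

# One directly attached controller, none elsewhere: 11 + 12 = 23 LCNs.
```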
The function of the slave hot standby is to preprogram the standby slave card the same as the active card so that when the active card fails, the slave card switchover operation can be done quickly (within 250 ms). Without the VSI portion, the BXM card already provided the hot standby mechanism by duplicating CommBus (internal BPX protocol) messages from the BCC to the standby BXM card.
Because the master VSI controller does not recognize the standby slave card, the active slave card forwards VSI messages it received from the Master VSI controller to the standby Slave VSI card.
Also, when the standby slave VSI card first starts (either by being inserted into the slot or when you issue the addyred command from the CLI console), the active slave VSI card forwards all VSI messages it has received from the Master VSI controller to the standby slave VSI card.
In summary, these are the hot standby operations between active and standby card:
1. CommBus messages are duplicated to standby slave VSI card by the BCC.
Operation 1 does not need to be implemented because it is already done by the BCC.
2. VSI messages (from master VSI controller or other slave VSI card) are forwarded to the standby slave VSI card by the active slave VSI card.
Operation 2 is normal data transfer, which occurs after both cards are in sync.
3. When the standby slave VSI card starts up, it retrieves all VSI messages from the active slave VSI card and processes these messages.
Operation 3 is the initial data transfer, which occurs when the standby card first starts up.
The data transfer from the active card to the standby card should not affect the performance of the active card. Therefore, the standby card takes most of the actions, simplifying operation of the active card: the standby card drives the data transfer and performs the synchronization, while the active card simply forwards VSI messages and responds to standby card requests.
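The bulk-then-incremental synchronization above can be sketched as follows; the class and method names are illustrative assumptions, not BXM firmware interfaces:

```python
class ActiveSlave:
    """Active card: holds the VSI connections set up by controllers."""
    def __init__(self):
        self.connections = {}

class StandbySlave:
    """Standby card: drives its own synchronization so that the active
    card only forwards messages and answers requests."""
    def __init__(self):
        self.connections = {}
        self.in_sync = False

    def bulk_sync(self, active: ActiveSlave) -> None:
        # Initial transfer: on bring-up (card insertion or addyred),
        # the standby retrieves all existing VSI connections at once.
        self.connections = dict(active.connections)
        self.in_sync = True

    def apply_update(self, conn_id, conn) -> None:
        # Incremental transfer: once in sync, the active card forwards
        # each new VSI message to the standby as it arrives.
        self.connections[conn_id] = conn
```

On switchover, the standby card's connection table already matches the active card's, which is what makes the fast (250 ms) switchover possible.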
Class of Service Templates (COS Templates) provide a means of mapping a set of standard connection protocol parameters to "extended" platform-specific parameters. Full Quality of Service (QoS) implies that each VC is served through one of a number of Class of Service buffers (Qbins), which are differentiated by their QoS characteristics.
A Qbin template defines a default configuration for the set of Qbins for a logical interface. When you assign a template to an interface, the corresponding default Qbin configuration is copied to this interface's Qbin configuration and becomes the current Qbin configuration for this interface.
Qbin templates deal only with Qbins that are available to VSI partitions, which are 10 through 15. Qbins 10 through 15 are used by VSI on interfaces configured as trunks or ports. The rest of the Qbins are reserved and configured by Automatic Routing Management.
The Service Class templates provide a means of mapping a set of extended parameters, which are generally platform specific, based on the set of standard ATM parameters passed to the VSI slave during connection setup.
A set of service templates is stored in each switch (such as BPX) and downloaded to the service modules (such as BXMs) as needed.
The service templates contain two classes of data:
The general types of parameters passed from a VSI Master to a Slave include:
Each VC added by a VSI master is assigned to a specific Service Class by means of a 32-bit service type identifier. Current identifiers are for:
When a connection setup request is received from the VSI master in the Label Switch Controller, the VSI slave (in the BXM, for example) uses the service type identifier to index into a Service Class Template database containing extended parameter settings for connections matching that index. The slave uses these values to complete the connection setup and program the hardware.
One of the parameters specified for each service type is the particular BXM Class of Service buffer (Qbin) to use. The Qbin buffers provide separation of service type to match the QoS requirements.
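The lookup described above can be sketched as a table keyed by the service type identifier. The identifier-to-Qbin pairs below are a subset of Table 23-6; the dictionary and function are illustrative, not slave firmware:

```python
# Service type identifier -> Class of Service buffer (Qbin),
# a subset of Table 23-6 (ATMF and MPLS label types).
SERVICE_TO_QBIN = {
    0x0100: 10,  # Cbr.1
    0x0101: 11,  # Vbr.1-RT
    0x0104: 12,  # Vbr.1-nRT
    0x0107: 13,  # Ubr.1
    0x0109: 14,  # Abr
    0x0200: 10,  # label cos0, per-class service
    0x0210: 14,  # label Abr (Tag with Abr flow control)
}

def qbin_for(service_type: int) -> int:
    """On connection setup, the slave indexes the Service Class
    Template database with the 32-bit service type identifier and
    uses the resulting entry (including the Qbin) to program the
    hardware."""
    return SERVICE_TO_QBIN[service_type]
```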
Service templates on the BPX are maintained by the BCC and are downloaded to the BXM cards as part of the card configuration process for:
The templates are nonconfigurable.
There are three types of templates:
You can assign any one of the nine templates to a Virtual Switch Interface. (See Figure 23-9.)
Each template table row includes an entry that defines the Qbin to be used for that Class of Service. See Figure 23-9 for an illustration of how Service Class databases map to Qbins. This mapping defines a relationship between the template and the interface Qbin's configuration.
A Qbin template defines a default configuration for the set of Qbins for the logical interface. When a template assignment is made to an interface, the corresponding default Qbin configuration becomes the interface's Qbin configuration.
Some of the parameters of the interface's Qbin configuration can be changed on a per-interface basis. Such changes affect only that interface's Qbin configuration and no others, and do not affect the Qbin templates.
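The copy-on-assignment behavior described above can be sketched as follows; names are illustrative, and the threshold values follow Table 23-13's Label template defaults:

```python
import copy

QBIN_TEMPLATES = {
    # Template 1 (Label) defaults for the VSI Qbins 10-15.
    1: {q: {"max_thresh_usec": 300_000, "clp_hi": 100}
        for q in range(10, 16)},
}

class Interface:
    def assign_template(self, tmplt_id: int) -> None:
        # The template defaults are *copied* to the interface; later
        # per-interface edits change only this copy, never the template.
        self.qbins = copy.deepcopy(QBIN_TEMPLATES[tmplt_id])

port = Interface()
port.assign_template(1)
port.qbins[10]["clp_hi"] = 80                    # per-interface change
assert QBIN_TEMPLATES[1][10]["clp_hi"] == 100    # template untouched
```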
Qbin templates are used only with Qbins that are available to VSI partitions, specifically, Qbins 10 through 15. Qbins 10 through 15 are used by the VSI on interfaces configured as trunks or ports. The rest of the Qbins (0-9) are reserved for and configured by Automatic Routing Management.
Each template table row includes an entry that defines the Qbin to be used for that Class of Service. This mapping defines a relationship between the template and the interface Qbin's configuration. As a result, you need to define a default Qbin configuration to be associated with the template.
Note The default Qbin configuration, although sometimes referred to as a "Qbin template," behaves differently from the Class of Service templates.
The service-type parameter for a connection is specified in the connection bandwidth information parameter group. The service-type and service-category parameters determine the Service Class to be used from the service template.
There are five major service categories and several sub-categories. The major service categories are shown in Table 23-5. A list of the supported service sub-categories is shown in Table 23-6.
Service Category | Service Type Identifiers |
---|---|
Cbr | 0x0100 |
Vbr-rt | 0x0101 |
Vbr-Nrt | 0x0102 |
Ubr | 0x0103 |
Abr | 0x0104 |
The service type identifier is a 32-bit number.
There are three service types:
The service type identifier appears on the dspsct screen when you specify a Service Class template number and service type; for example:
dspsct <2> <vbrrt1>
A list of supported service templates and associated Qbins, and service types is shown in Table 23-6.
Template Type | Service Type Identifiers | Service Types | Associated Qbin |
---|---|---|---|
VSI Special Types | 0x0000 0x0001 0x0002 | Null Default Signaling | - 13 10 |
ATMF Types | 0x0100 0x0101 0x0102 0x0103 0x0104 0x0105 0x0106 0x0107 0x0108 0x0109 0x010A 0x010B | Cbr.1 Vbr.1-RT Vbr.2-RT Vbr.3-RT Vbr.1-nRT Vbr.2-nRT Vbr.3-nRT Ubr.1 Ubr.2 Abr Cbr.2 Cbr.3 | 10 11 11 11 12 12 12 13 13 14 10 10 |
MPLS Types | 0x0200 0x0201 0x0202 0x0203 0x0204 0x0205 0x0206 0x0207 0x0210 | label cos0, per-class service label cos1, per-class service label cos2, per-class service label cos3, per-class service label cos4, per-class service label cos5, per-class service label cos6, per-class service label cos7, per-class service label Abr, (Tag w/ Abr flow control) | 10 11 12 13 10 11 12 13 14 |
A summary of the parameters associated with each of the service templates is provided in Table 23-7 through Table 23-10. Table 23-11 provides a description of these parameters and also the range of values that may be configured if the template does not assign an arbitrary value.
Table 23-7 lists the parameters associated with Default (0x0001) and Signaling (0x0002) service template categories.
Parameter | VSI Default (0x0001) | VSI Signalling (0x0002) |
---|---|---|
Qbin Number | 10 | 15 |
UPC Enable | 0 | * |
UPC CLP Selection | 0 | * |
Policing Action (GCRA #1) | 0 | * |
Policing Action (GCRA #2) | 0 | * |
PCR | - | 300 kbps |
MCR | - | 300 kbps |
SCR | - | - |
ICR | - | - |
MBS | - | - |
CoS Min BW | 0 | * |
CoS Max BW | 0 | * |
Scaling Class | 3 | 3 |
CAC Treatment ID | 1 | 1 |
VC Max Threshold | Q_max/4 | * |
VC CLPhi Threshold | 75 | * |
VC CLPlo Threshold | 30 | * |
VC EPD Threshold | 90 | * |
VC EFCI Threshold | 60 | * |
VC discard selection | 0 | * |
Table 23-8 and Table 23-9 list the parameters associated with the PNNI service templates.
Parameter | Cbr.1 | Cbr.2 | Cbr.3 | Ubr.1 | Ubr.2 | Abr |
---|---|---|---|---|---|---|
Qbin Number | 10 | 10 | 10 | 13 | 13 | 14 |
UPC Enable | 1 | 1 | 1 | 1 | 1 | 1 |
UPC CLP Selection | * | * | * | * | * | * |
Policing Action (GCRA #1) | * | * | * | * | * | * |
Policing Action (GCRA #2) | * | * | * | * | * | * |
PCR | * | * | * | * | * | * |
MCR | - | - | - | * | * | * |
SCR | - | - | - | 50 | 50 | * |
ICR | - | - | - | - | - | * |
MBS | - | - | - | - | - | * |
CoS Min BW | 0 | 0 | 0 | 0 | 0 | 0 |
CoS Max BW | 100 | 100 | 100 | 100 | 100 | 100 |
Scaling Class | * | * | * | * | * | * |
CAC Treatment ID | * | * | * | * | * | * |
VC Max Threshold | * | * | * | * | * | * |
VC CLPhi Threshold | * | * | * | * | * | * |
VC CLPlo Threshold | * | * | * | * | * | * |
VC EPD Threshold | * | * | * | * | * | * |
VC EFCI Threshold | * | * | * | * | * | * |
VC discard selection | * | * | * | * | * | * |
VSVD/FCES | - | - | - | - | - | * |
ADTF | - | - | - | - | - | 500 |
RDF | - | - | - | - | - | 16 |
RIF | - | - | - | - | - | 16 |
NRM | - | - | - | - | - | 32 |
TRM | - | - | - | - | - | 0 |
CDF | - | - | - | - | - | 16 |
TBE | - | - | - | - | - | 16777215 |
FRTT | - | - | - | - | - | * |
Parameter | Vbrrt.1 | Vbrrt.2 | Vbrrt.3 | Vbrnrt.1 | Vbrnrt.2 | Vbrnrt.3 |
---|---|---|---|---|---|---|
Qbin Number | 11 | 11 | 11 | 12 | 12 | 12 |
UPC Enable | 1 | 1 | 1 | 1 | 1 | 1 |
UPC CLP Selection | * | * | * | * | * | * |
Policing Action (GCRA #1) | * | * | * | * | * | * |
Policing Action (GCRA #2) | * | * | * | * | * | * |
PCR | * | * | * | * | * | * |
MCR | * | * | * | * | * | * |
SCR | * | * | * | * | * | * |
ICR | - | - | - | - | - | - |
MBS | * | * | * | * | * | * |
CoS Min BW | 0 | 0 | 0 | 0 | 0 | 0 |
CoS Max BW | 100 | 100 | 100 | 100 | 100 | 100 |
Scaling Class | * | * | * | * | * | * |
CAC Treatment ID | * | * | * | * | * | * |
VC Max Threshold | * | * | * | * | * | * |
VC CLPhi Threshold | * | * | * | * | * | * |
VC CLPlo Threshold | * | * | * | * | * | * |
VC EPD Threshold | * | * | * | * | * | * |
VC EFCI Threshold | * | * | * | * | * | * |
VC discard selection | * | * | * | * | * | * |
Table 23-10 lists the connection parameters and their default values for label switching service templates.
Parameter | CoS 0/4 | CoS 1/5 | CoS 2/6 | CoS3/7 | Tag-Abr |
---|---|---|---|---|---|
Qbin # | 10 | 11 | 12 | 13 | 14 |
UPC Enable | 0 | 0 | 0 | 0 | 0 |
UPC CLP Selection | 0 | 0 | 0 | 0 | 0 |
Policing Action (GCRA #1) | 0 | 0 | 0 | 0 | 0 |
Policing Action (GCRA#2) | 0 | 0 | 0 | 0 | 0 |
PCR | - | - | - | - | cr/10 |
MCR | - | - | - | - | 0 |
SCR | - | - | - | - | P_max |
ICR | - | - | - | - | 100 |
MBS | - | - | - | - | - |
CoS Min BW | 0 | 0 | 0 | 0 | 0 |
CoS Max BW | 0 | 0 | 0 | 0 | 100 |
Scaling Class | 3 | 3 | 2 | 1 | 2 |
CAC Treatment | 1 | 1 | 1 | 1 | 1 |
VC Max | Q_max/4 | Q_max/4 | Q_max/4 | Q_max/4 | cr/200ms |
VC CLPhi | 75 | 75 | 75 | 75 | 75 |
VC CLPlo | 30 | 30 | 30 | 30 | 30 |
VC EPD | 90 | 90 | 90 | 90 | 90 |
VC EFCI | 60 | 60 | 60 | 60 | 30 |
VC discard selection | 0 | 0 | 0 | 0 | 0 |
VSVD/FCES | - | - | - | - | 0 |
ADTF | - | - | - | - | 500 |
RDF | - | - | - | - | 16 |
RIF | - | - | - | - | 16 |
NRM | - | - | - | - | 32 |
TRM | - | - | - | - | 0 |
CDF | - | - | - | - | 16 |
TBE | - | - | - | - | 16777215 |
FRTT | - | - | - | - | 0 |
Table 23-11 describes the connection parameters that are listed in the preceding tables and also lists the range of values that may be configured, if not already preconfigured.
Not every Service Class includes all parameters; for example, a Cbr service type has fewer parameters than an Abr service type.
Note Not every Service Class has a value defined for every parameter listed in Table 23-11.
Object Name | Range/Values | Template Units |
---|---|---|
Qbin Number | 10-15 | Qbin # |
Scaling Class | 0-3 | enumeration |
CDVT | 0-5M (5 sec) | secs |
MBS | 1-5M | cells |
ICR | MCR-PCR | cells |
MCR | 50-LR | cells |
SCR | MCR-LineRate | cells |
UPC Enable | 0-Disable GCRAs 1-Enable GCRAs 2-Enable GCRA #1 3-Enable GCRA #2 | enumeration |
UPC CLP Selection | 0 - Bk 1: CLP (0+1) Bk 2: CLP (0) 1 - Bk 1: CLP (0+1) Bk 2: CLP (0+1) 2-Bk 1: CLP (0+1) Bk 2: Disabled | enumeration |
Policing Action (GCRA #1) | 0-Discard 1-Set CLP bit 2-Set CLP of untagged cells, disc. tagged cells | enumeration |
Policing Action (GCRA #2) | 0-Discard 1-Set CLP bit 2-Set CLP of untagged cells, disc. tagged cells | enumeration |
VC Max | | cells |
CLP Lo | 0-100 | %Vc Max |
CLP Hi | 0-100 | %Vc Max |
EFCI | 0-100 | %Vc Max |
VC Discard Threshold Selection | 0-CLP Hysteresis 1-EPD | enumeration |
VSVD | 0: None 1: VSVD 2: VSVD with external Segment | enumeration |
Reduced Format ADTF | 0-7 | enumeration |
Reduced Format Rate Decrease Factor (RRDF) | 1-15 | enumeration |
Reduced Format Rate Increase Factor (RRIF) | 1-15 | enumeration |
Reduced Format Time Between Fwd RM cells (RTrm) | 0-7 | enumeration |
Cut-Off Number of RM Cells (CRM) | 1-4095 | cells |
Qbin templates deal only with Qbins that are available to VSI partitions, namely 10 through 15. Qbins 10 through 15 are used by VSI on interfaces configured as trunks or ports. The rest of the Qbins are reserved and configured by Automatic Routing Management.
When you execute a dspsct command, it will give you the default service type, and the Qbin number.
The available Qbin parameters are shown in Table 23-12.
Notice that the Qbins available for VSI are restricted to Qbins 10-15 for that interface. All 32 possible virtual interfaces are provided with 16 Qbins.
Template Object Name | Template Units | Template Range/Values |
---|---|---|
Qbin Number | enumeration | 0-15 (10-15 valid for VSI) |
Max Qbin Threshold | msec | 1-2000000 |
Qbin CLP High Threshold | % of max Qbin threshold | 0-100 |
Qbin CLP Low Threshold | % of max Qbin threshold | 0-100 |
EFCI Threshold | % of max Qbin threshold | 0-100 |
Discard Selection | enumeration | 1-CLP Hysteresis 2-Frame Discard |
Weighted Fair Queueing | enable/disable | 0: Disable 1: Enable |
The Qbin default settings are shown in Table 23-13. The Service Class Template default settings for Label Switch Controllers and PNNI controllers are shown in Table 23-14.
Note Templates 2, 4, 6, and 8 support policing on PPD.
Qbin | Max Qbin Threshold (usec) | CLP High | CLP Low/EPD | EFCI | Discard Selection |
---|---|---|---|---|---|
LABEL Template 1 | |||||
10 (Null, Default, Signalling, Tag0,4) | 300,000 | 100% | 95% | 100% | EPD |
11 (Tag1,5) | 300,000 | 100% | 95% | 100% | EPD |
12 (Tag2,6) | 300,000 | 100% | 95% | 100% | EPD |
13 (Tag3,7) | 300,000 | 100% | 95% | 100% | EPD |
14 (Tag Abr) | 300,000 | 100% | 95% | 6% | EPD |
15 (Tag unused) | 300,000 | 100% | 95% | 100% | EPD |
PNNI Templates 2 (with policing) and 3 | |||||
10 (Null, Default, Cbr) | 4200 | 80% | 60% | 100% | CLP |
11 (VbrRt) | 53000 | 80% | 60% | 100% | EPD |
12 (VbrNrt) | 53000 | 80% | 60% | 100% | EPD |
13 (Ubr) | 105000 | 80% | 60% | 100% | EPD |
14 (Abr) | 105000 | 80% | 60% | 20% | EPD |
15 (Unused) | 105000 | 80% | 60% | 100% | EPD |
Full Support for ATMF and reduced support for Tag CoS without Tag-Abr Templates 4 (with policing) and 5 | |||||
10 (Tag 0,4,1,5, Default, Ubr, Tag-Abr*) | 300,000 | 100% | 95% | 100% | EPD |
11 (VbrRt) | 53000 | 80% | 60% | 100% | EPD |
12 (VbrNrt) | 53000 | 80% | 60% | 100% | EPD |
13 (Tag 2,6,3,7) | 300,000 | 100% | 95% | 100% | EPD |
14 (Abr) | 105000 | 80% | 60% | 20% | EPD |
15 (Cbr) | 4200 | 80% | 60% | 100% | CLP |
Full Support for Tag Abr and ATMF without Tag CoS Templates 6 (with policing) and 7 | |||||
10 (Tag 0,4,1,5,2,6,3,7 Default, Ubr) | 300,000 | 100% | 95% | 100% | EPD |
11 (VbrRt) | 53000 | 80% | 60% | 100% | EPD |
12 (VbrNrt) | 53000 | 80% | 60% | 100% | EPD |
13 (Tag-Abr) | 300,000 | 100% | 95% | 6% | EPD |
14 (Abr) | 105000 | 80% | 60% | 20% | EPD |
15 (Cbr) | 4200 | 80% | 60% | 100% | CLP |
Full Support for Tag CoS and reduced support for ATMF Templates 8 (with policing) and 9 | |||||
10 (Cbr, Vbr-rt) | 4200 | 80% | 60% | 100% | CLP |
11 (Vbr-nrt, Abr) | 53000 | 80% | 60% | 20% | EPD |
12 (Ubr, Tag 0,4) | 300,000 | 100% | 95% | 100% | EPD |
13 (Tag 1, 5, Tag-Abr) | 300,000 | 100% | 95% | 6% | EPD |
14 (Tag 2,6) | 300,000 | 100% | 95% | 100% | EPD |
15 (Tag 3, 7) | 300,000 | 100% | 95% | 100% | EPD |
Parameter with Default Setting | Label | PNNI |
---|---|---|
MCR | Tag0-7: N/A | Abr: 0% |
AAL5 Frame Base Traffic Control (Discard Selection) | EPD | Hysteresis |
CDVT(0+1) | 250,000 | 250,000 |
VSVD | Tag0-7: N/A | Abr: None |
SCR | Tag0-7: N/A | Vbr: 100% |
MBS | Tag0-7: N/A | Vbr: 1000 |
Policing | Policing Disable | VbrRt1: VbrRt2: VbrRt3: VbrNrt1: VbrNrt2: VbrNrt3: Ubr1: Ubr2: Abr: Cbr1: Cbr2: Cbr3: |
ICR | Tag0-7: N/A | Abr: 0% |
ADTF | Tag0-7: N/A | Abr: 1000 msec |
Trm | Tag0-7: N/A | Abr: 100 |
VC Qdepth | 61440 | 10,000 |
CLP Hi | 100 | 80 |
CLP Lo / EPD | 40 | 35 |
EFCI | TagAbr: 20 | 20 (not valid for non-Abr) |
RIF | Tag0-7: N/A | Abr: 16 |
RDF | Tag0-7: N/A | Abr: 16 |
Nrm | Tag0-7: N/A | Abr: 32 |
FRTT | Tag0-7: N/A | Abr: 0 |
TBE | Tag0-7: N/A | Abr: 16,777,215 |
IBS | N/A | N/A |
CAC Treatment | LCN | Vbr: CAC4 |
Scaling Class | Ubr - Scaled 1st | Vbr: Vbr -Scaled 3rd |
CDF | 16 | 16 |
Mnemonic | Description |
---|---|
addctrlr | Attach a controller to a node; for controllers that require Annex G capabilities in the controller interface. Add a PNNI VSI controller to a BPX node through an AAL5 interface shelf. |
addshelf | Add a trunk between the hub node and an interface shelf or VSI controller (such as an MPLS controller). |
cnfqbin | Configure Qbin parameters on a card. If you answer Yes when prompted, the command uses the card Qbin values from the Qbin templates. |
cnfrsrc | Configure resources, for example, for Automatic Routing Management PVCs and MPLS controller (LSC). |
cnfvsiif | Configure a VSI interface; use this command to assign a different Service Class template to an interface. |
cnfvsipart | Configure VSI partition characteristics for VSI ILMI. |
delctrlr | Delete a controller, such as a Service Expansion Shelf (SES) PNNI controller, from a BPX node. |
delshelf | Delete a trunk between a hub node and access shelf. |
dspcd | Display the card configuration. |
dspchuse | Display a summary of channel distribution in a given slot. |
dspctrlrs | Display the VSI controllers, such as a PNNI controller, on a BPX node. |
dspqbin | Display Qbin parameters currently configured for the virtual interface. |
dspqbintmt | Display Qbin template. |
dsprsrc | Display LSC resources. |
dspsct | Display the Service Class Template assigned to an interface. The command has three levels of operation: dspsct (lists all templates), dspsct <tmplt_id> (lists the Service Classes in a template), and dspsct <tmplt_id> <service_type> (lists the parameters of that Service Class). |
dspvsiif | Display VSI Interface. |
dspvsipartcnf | Display information about VSI ILMI functionality. |
dspvsipartinfo | Display VSI resource status for the trunk and partition. |
Posted: Fri Jul 27 17:49:11 PDT 2001
All contents are Copyright © 1992-2001 Cisco Systems, Inc. All rights reserved.