
Configuring BXM Virtual Switch Interfaces


This chapter describes the BXM Virtual Switch Interfaces (VSIs) and provides configuration procedures:

For information on configuring SES PNNI controllers to work with BPX switches, see the Cisco SES PNNI Controller Software Configuration Guide.

For information on configuring MPLS controllers to work with BPX switches, see the Cisco MPLS Controller Software Configuration Guide.

Refer to Cisco WAN Switching Command Reference for details about the commands mentioned here for both PNNI and MPLS controllers. Refer to Release Notes for supported features.

Virtual Switch Interfaces

Virtual Switch Interface (VSI) is a common control interface between the BPX 8650 or the MGX 8850 switches and an external controller that supports the VSI protocol.

Virtual Switch Interfaces (VSIs) allow a node to be controlled by multiple controllers, such as MPLS (Multiprotocol Label Switching) and the Service Expansion Shelf Private Network-to-Network Interface (SES PNNI).

When a virtual switch interface (VSI) is activated on a port, trunk, or virtual trunk so that it can be used by a master controller, such as a SES PNNI or an MPLS controller, the resources of the virtual interface associated with the port, trunk or virtual trunk are made available to the VSI. These control planes can be external or internal to the switch. The VSI provides a mechanism for networking applications to control the switch and use a partition of the switch resources.

VSI was implemented first on the BPX 8650 in Release 9.1, which uses VSI to perform Multiprotocol Label Switching. Release 9.1 allowed support for VSI on BXM cards and for partitioning BXM resources between Automatic Routing Management (formerly called AutoRoute) and a VSI-MPLS controller.

In earlier releases, you could configure partition resources between Automatic Routing Management PVCs and only one VSI control plane. You can now configure partition resources between Automatic Routing Management PVCs and up to three VSI controllers (MPLS or PNNI).

VSI on the BPX provides:

Multiprotocol Label Switching

Label switching enables routers at the edge of a network to apply simple labels to packets (frames), allowing devices in the network core to switch packets according to these labels with minimal lookup activity. Label switching in the network core can be performed by switches, such as ATM switches, or by existing routers.

Multiprotocol Label Switching (MPLS, previously called Tag Switching) integrates virtual circuit switching with IP routing to offer scalable IP networks over ATM. MPLS supports data, voice, and multimedia services over ATM networks. MPLS summarizes routing decisions so that switches can perform IP forwarding, and it brings other benefits that apply even when label switching is used in router-only networks.

Using MPLS techniques, it is possible to set up explicit routes for data flows that are constrained by path, resource availability, and requested Quality of Service (QoS). MPLS also facilitates highly scalable Virtual Private Networks.

MPLS assigns labels to IP flows, placing them in the IP frames. The frames can then be transported across packet or cell-based networks and switched on the labels rather than being routed using IP address look-up.

A routing protocol, such as OSPF, is used with the Label Distribution Protocol (LDP) to set up MPLS virtual connections (VCs) on the switch.

MPLS Terminology

MPLS is a standardized version of Cisco's original Tag Switching proposal. MPLS and Tag Switching are identical in principle and nearly so in operation. MPLS terminology has replaced obsolete Tag Switching terminology.

An exception to the terminology is Tag Distribution Protocol (TDP). TDP and the MPLS Label Distribution Protocol (LDP) are nearly identical, but use different message formats and procedures. TDP is used in this design guide only when it is important to distinguish TDP from LDP. Otherwise, any reference to LDP in this design guide also applies to TDP.

VSI Configuration Procedures

In the VSI control model, a controller sees the switch as a collection of slaves with their interfaces. The controller can establish connections between any two interfaces. The controller uses resources allocated to its partition.

You can assign each VSI interface a default class of service template when you activate it. You can use the switch software CLI or Cisco WAN Manager to configure a different template to an interface.

The procedure for adding a VSI-based controller such as the MPLS controller to the BPX is similar to adding an MGX 8220 interface shelf to the BPX. To attach a controller to a node, use the addshelf command.

The VSI controllers are allocated a partition of the switch resources. VSI controllers manage their partition through the VSI protocol. The controllers run the VSI master. The VSI master entity interacts with the VSI slave running on the BXMs through the VSI interface to set up VSI connections using the resources in the partition assigned to the controller.

To configure VSI resources on a given interface, use the cnfrsrc command.

This section provides the basic procedures for adding a controller, viewing controllers and interfaces, and deleting a controller.

Adding a Controller

To add an MPLS controller to any BXM trunk, use the addshelf command with the V(si) option.

To add an SES PNNI controller, use the addshelf command with the X option.

To identify VSI controllers and distinguish them from feeders, use the vsi option of the addshelf command.

To add a SES PNNI controller to a BPX node through an AAL5 interface shelf or feeder type configured with VSI controller capabilities, use the addctrlr command. See "Adding a Controller" later in this chapter.

If you are adding two controllers that are intended to be used in a redundant configuration, you must specify the same partition when you add them to the node by using the addshelf command.

To add an MPLS controller (or a generic VSI controller that does not need AnnexG protocol):


Step 1   Up the trunk by using the uptrk command.

Step 2   Add an MPLS controller by using the addshelf command with feeder type set to "V".

Step 3   Display the controllers and interface shelves attached to the node by using the dspnode command.

Step 4   Display the VSI controllers on a BPX node by using the dspctrlrs command.

Note that addshelf and addtrk are mutually exclusive commands; that is, you can use either addshelf or addtrk, but not both on the same interface shelf.


To add a PNNI controller, use the following commands:


Step 1   Up a trunk interface by using the uptrk command.

Step 2   Configure resources on the trunk interface for the PNNI controller's control channels by using the cnfrsrc command.

Step 3   Add the SES PNNI to the BPX and enable AnnexG protocol to run between the BPX and the SES by using the addshelf command with feeder type set to "X".

Step 4   Enable the VSI capabilities on the trunk interface by using the addctrlr command.


Viewing Controllers and Interfaces

Display commands such as dspnw and dspnode show interface shelves.

To view conditions on an interface shelf (feeder) trunk or the conditions of VSI controllers, use display commands such as dspnode and dspctrlrs.

The designation for an MPLS (Multiprotocol Label Switching) controller serving as an interface shelf is LSC.

Deleting a Controller

To delete a controller or interface (feeder) shelf, first delete it from the network. Then down the port and trunk. This applies to MPLS controllers or generic VSI controllers that do not need AnnexG protocols.

To delete an MPLS controller:


Step 1   Delete the MPLS controller from a BPX node by using the delshelf command.

Step 2   Down the port by using the dnport command.

Step 3   Down the trunk by using the dntrk command.


To delete a PNNI controller:


Step 1   Delete the VSI capabilities on the trunk interface by using the delctrlr command.

Step 2   Delete the SES attached to the trunk interface by using the delshelf command.

Step 3   Disable the VSI resource partition allocated for the PNNI controller on the trunk interface by using the cnfrsrc command.

Step 4   Down the trunk interface (provided no other VSI partitions are active on the trunk interface) by using the dntrk command.


Configuring Partition Resources on Interfaces

This section describes the key task in configuring VSI: partitioning resources on interfaces.

Prior to Release 9.1, LCNs, the VPI range, and bandwidth allocation were managed exclusively by the BCC. With the introduction of VSI, the switch must also allocate a range of LCNs, a range of VPIs, and an amount of bandwidth for use by VSI.

When configuring resource partitions on a VSI interface, you typically use the cnfrsrc and dsprsrc commands.

After you add a VSI-based controller such as an LSC (Label Switching Controller) or a PNNI controller, the next step is to configure resource partitions on BXM interfaces so that the controller can control those interfaces. Use the cnfrsrc command to add, delete, and modify a partition on a specified interface.

You may have up to three VSI controllers on the same partition (referred to as VSI master redundancy). The master redundancy feature allows multiple VSI masters to control the same partition.

See Table 23-1 for a listing of cnfrsrc parameters, ranges and values, and descriptions. These descriptions are oriented to actions and behavior of the BXM firmware; in most cases, objects (messages) are sent to switch software. Most of these parameters appear on the cnfrsrc screen.


Table 23-1:
cnfrsrc Parameters, Ranges/Values, and Descriptions
Parameter (Object) Name   Range/Values               Default   Description

VSI partition             1..3                       1         Identifies the partition

Partition state           0 = Disable Partition      NA        For Partition state = 1, objects are mandatory
                          1 = Enable Partition

Min LCNs                  0..64K                     NA        Minimum LCNs (connections) guaranteed for this partition

Max LCNs                  0..64K                     NA        Maximum LCNs permitted on this partition

Start VPI                 0..4095                    NA        Partition start VPI

End VPI                   0..4095                    NA        Partition end VPI

Min Bw                    0..Line Rate               NA        Minimum partition bandwidth

Max Bw                    0..Line Rate               NA        Maximum partition bandwidth
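As a cross-check of Table 23-1, the parameter ranges can be expressed as a small validation sketch. This is illustrative Python only; the PartitionConfig structure and field names are hypothetical, and the BPX itself validates these values on the cnfrsrc screen.

```python
# Illustrative sketch of the cnfrsrc parameter ranges in Table 23-1.
# The PartitionConfig structure and line_rate_cps value are hypothetical.
from dataclasses import dataclass

MAX_LCNS = 64 * 1024   # "0..64K" in Table 23-1
MAX_VPI = 4095

@dataclass
class PartitionConfig:
    partition_id: int      # VSI partition, 1..3
    enabled: bool          # Partition state
    min_lcns: int
    max_lcns: int
    start_vpi: int
    end_vpi: int
    min_bw_cps: int        # bandwidth in cells per second
    max_bw_cps: int

def validate(cfg: PartitionConfig, line_rate_cps: int) -> list:
    """Return a list of range violations (an empty list means the config is legal)."""
    errors = []
    if not 1 <= cfg.partition_id <= 3:
        errors.append("partition id must be 1..3")
    if not cfg.enabled:
        return errors  # the remaining objects are mandatory only when state = 1
    if not (0 <= cfg.min_lcns <= cfg.max_lcns <= MAX_LCNS):
        errors.append("LCNs must satisfy 0 <= min <= max <= 64K")
    if not (0 <= cfg.start_vpi <= cfg.end_vpi <= MAX_VPI):
        errors.append("VPI range must lie within 0..4095")
    if not (0 <= cfg.min_bw_cps <= cfg.max_bw_cps <= line_rate_cps):
        errors.append("bandwidth must lie within 0..line rate")
    return errors
```

For example, a partition guaranteed 100 LCNs with VPIs 200-239 on an OC-3 rate interface passes validation, while a partition ID of 4 or an end VPI above 4095 is rejected.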

Assigning a Service Template to an Interface

The ATM Class of Service templates (also called Service Class Templates, or SCTs) provide a means of mapping a set of extended, generally platform-specific parameters from the set of standard ATM parameters passed to the VSI slave on a BXM port interface during initial setup of the interface.

A set of service templates is stored in each BPX 8650 switch and downloaded to the service modules (BXMs) as needed during initial configuration of the VSI interface when a trunk or line is enabled on the BXM.

Each service template type has an associated qbin. The qbins provide the ability to manage bandwidth by temporarily storing cells and then serving them out based on a number of factors, including bandwidth availability and the relative priority of different classes of service.

When ATM cells arrive from the edge LSR at the BXM port with one of four CoS labels, they receive CoS handling based on that label. A table look-up is performed, and the cells are processed, based on their connection classification. Based on its label, a cell receives the ATM differentiated service associated with its template type and service type (for example, label cos2 bw), plus associated qbin characteristics and other associated ATM parameters.

A default service template is automatically assigned to a logical interface (VI) when you up the interface by using the commands upport and uptrk. The corresponding qbin template is then copied into the card's (BXM) data structure of that interface.

This default template has the identifier 1. To change the service template from service template 1 to another service template, use the cnfvsiif command.

To assign a selected service template to an interface (VI) use the cnfvsiif command, specifying the template number. It has this syntax:

cnfvsiif <slot.port.vtrk> <tmplt_id>

For example:

cnfvsiif 1.1 2
cnfvsiif 1.1.1 2

Use the dspvsiif command to display the type of service template assigned to an interface (VI). It has the following syntax:

dspvsiif <slot.port.vtrk>

For example:

dspvsiif 1.1
dspvsiif 1.1.1

To change some of the template's qbin parameters, use the cnfqbin command. The qbin is now "user configured" as opposed to "template configured".

To view this information, use the command dspqbin.

SCT Commands

dspsct
Use the dspsct command to display the service class template number assigned to an interface. The command has three levels of operation:

dspsct
With no arguments lists all the service templates resident in the node.

dspsct <tmplt_id>
Lists all the Service Classes in the template.

dspsct <tmplt_id> <service_class>
Lists all the parameters of that Service Class.

dspqbint
Displays the qbin templates

cnfqbin
Configures the qbin. You can answer yes when prompted and
the command will use the card qbin values from the qbin templates.

dspqbin
Displays qbin parameters currently configured for the virtual interface.

dspcd
Displays the card configuration.

Configuring the BXM Card's Qbin

When you activate an interface by using an uptrk or upport command, a default service template (MPLS1) is automatically assigned to that interface. The corresponding qbin templates are simultaneously set up in the BXM's data structure for that interface. This service template has an identifier of "1".

To change the service template assigned to an interface, use the cnfvsiif command. You can do this only when there are no active VSI connections on the BXM.

To display the assigned templates, use the dspvsiif command.

Each template table row includes an entry that defines the qbin to be used for that class of service (see Figure 23-10).

This mapping defines a relationship between the template and the interface qbin's configuration.

A qbin template defines a default configuration for the set of qbins for the logical interface. When a template assignment is made to an interface, the corresponding default qbin configuration becomes the interface's qbin configuration.

Once a service template has been assigned, you can then adjust some of the parameters of this configuration on a per-interface basis. Changes you make to the qbin configuration of an interface affect only that interface's qbin configuration. Your changes do not affect the qbin template assigned to that interface.

To change the template's configuration of the interface, provide new values by using the cnfqbin command. The qbin is now "user configured" as opposed to "template configured". This information is displayed on the dspqbin screen, which indicates whether the values in the qbin are from the template assigned to the interface, or whether the values have been changed to user-defined values.
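The template-configured versus user-configured distinction can be sketched as follows. This is hypothetical Python, not the BXM data model; the class and field names are illustrative.

```python
# Hypothetical sketch: upping an interface copies the qbin template into the
# interface's own qbin configuration; cnfqbin then marks that qbin "user
# configured", so later template changes no longer affect the interface.
class Qbin:
    def __init__(self, template_values):
        self.values = dict(template_values)   # copied from the qbin template
        self.user_configured = False          # state reported by dspqbin

    def cnfqbin(self, **overrides):
        """Apply per-interface overrides, as the cnfqbin command does."""
        self.values.update(overrides)
        self.user_configured = True
```

Because the template values are copied, editing the template afterward leaves a user-configured qbin untouched, which matches the behavior described above.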

To see the qbin's default service type and the qbin number, execute the dspsct command.

Use the following commands to configure qbins:

Enabling VSI ILMI Functionality for the PNNI Controller

You can enable VSI ILMI functionality both on line (port) interfaces and trunk interfaces when using PNNI. Note that VSI ILMI functionality cannot be enabled on trunks to which feeders or VSI controllers are attached.

To enable VSI ILMI functionality on line (port) interfaces:


Step 1   Up a line interface by using the upln command

Step 2   Up the port interface by using the upport command.

Step 3   Configure the port to enable ILMI protocol and ensure that the protocol runs on the BXM card by enabling the "Protocol by the card" option of the cnfport command.

Step 4   Configure a VSI partition on the line interface by using the cnfrsrc command.

Step 5   Enable VSI ILMI functionality for the VSI partition by using the cnfvsipart command.


To enable VSI ILMI functionality on physical trunk interfaces:


Step 1   Up a physical trunk by using the uptrk command.

Step 2   Configure the trunk to enable ILMI protocol to run on the BXM card by enabling the "Protocol by the card" option of the cnftrk command.

Step 3   Configure a VSI partition on the trunk interface by using the cnfrsrc command.

Step 4   Enable VSI ILMI session for the VSI partition by using the cnfvsipart command.


To enable VSI ILMI functionality on virtual trunk interfaces:


Step 1   Up a virtual trunk by using the uptrk command.

Step 2   Configure the trunk VPI by using the cnftrk command.
NOTE: ILMI automatically runs on the BXM card for virtual trunks.
This is not configurable by using the cnftrk command.

Step 3   Configure a VSI partition on the virtual trunk interface by using the cnfrsrc command.

Step 4   Enable VSI ILMI functionality for the VSI partition by using the cnfvsipart command.
NOTE: VSI ILMI can be enabled for only one VSI partition on a trunk interface.

To display VSI ILMI functionality on interfaces:


VSIs and Virtual Trunking

The VSI virtual trunking feature lets you use BXM virtual trunks as VSI interfaces. Using this capability, VSI master controllers can terminate connections on virtual trunk interfaces.

You activate and configure VSI resources on a virtual trunk using the same commands you use to configure physical interfaces (for example, cnfrsrc, dsprsrc). The syntax you use to identify a trunk has an optional virtual trunk identifier that you append to the slot and port information to identify virtual trunk interfaces.

A virtual trunk is a VPC that terminates on a switch port at each end. Each virtual trunk can contain up to 64,000 VCCs, but it cannot contain any VPCs.

Virtual trunk interfaces cannot be shared between VSI and Automatic Routing Management. Therefore, configuring a trunk as a VSI interface prevents you from adding the trunk as an Automatic Routing Management trunk. Similarly, a trunk that has been added to the Automatic Routing Management topology cannot be configured as a VSI interface.

Virtual trunks on the BPX use a single configurable VPI. Because virtual trunk interfaces are dedicated to VSI, the entire range of VCIs is available to the VSI controllers.

The virtual trunking feature introduces the concept of defining multiple trunks within a single trunk port interface. This creates a fan-out capability on the trunk card. Virtual trunking is implemented on the BNI, UXM, and BXM cards.

Once VSI is enabled on the virtual trunk, Automatic Routing Management does not include this trunk in its route selection process.

To configure a VSI virtual trunk:


Step 1   Activate the virtual trunk by using the command
uptrk <slot.port.vtrunk>

Step 2   Set up VPI value and trunk parameters by using the command
cnftrk <slot.port.vtrunk>

Step 3   Enable VSI partition by using the command
cnfrsrc <slot.port.vtrunk>


Overview: How VSI Works

This section provides detailed reference to virtual interfaces, service templates, and qbins.

For information on configuring SES PNNI controllers to work with BPX switches, see the Cisco SES PNNI Controller Software Configuration Guide.

For information on configuring MPLS controllers to work with BPX switches, see the Cisco MPLS Controller Software Configuration Guide.

Refer to Cisco WAN Switching Command Reference for details about the commands mentioned here for both PNNI and MPLS controllers. Refer to Release Notes for supported features.

Virtual Interfaces and Qbins

The BXM has 31 virtual interfaces that provide a number of resources including qbin buffering capability. One virtual interface is assigned to each logical trunk (physical or virtual) when the trunk is enabled. (See Figure 23-1.)

Each virtual interface has 16 qbins assigned to it. Qbins 0 through 9 are used for AutoRoute, and qbins 10 through 15 are available for use by a VSI enabled on the virtual interface. (In Release 9.1, only qbin 10 was used.) Qbins 10 through 15 support class of service (CoS) templates on the BPX.

You may enable a virtual switch interface on a port, trunk, or virtual trunk. The virtual switch interface is assigned the resources of the associated virtual interface.

With virtual trunking, a physical trunk can comprise a number of logical trunks called virtual trunks. Each of these virtual trunks (equivalent to a virtual interface) is assigned the resources of one of the 31 virtual interfaces on a BXM (see Figure 23-1).


Figure 23-1: BXM Virtual Interfaces and Qbins


VSI Master and Slaves

A controller application uses a VSI master to control one or more VSI slaves. For the BPX, the controller application and Master VSI reside in an external 7200 or 7500 series router and the VSI slaves are resident in BXM cards on the BPX node (see Figure 23-2).

The controller sets up these types of connections:


Figure 23-2: VSI, Controller and Slave VSIs


The controller establishes a link between the VSI master and every VSI slave on the associated switch. The slaves in turn establish links between each other (see Figure 23-3).


Figure 23-3: VSI Master and VSI Slave Example


With a number of switches connected together, there are links between switches with cross connects established within the switch as shown in Figure 23-4.


Figure 23-4: Cross Connects and Links between Switches


Connection Admission Control

When the VSI slave receives a connection request, the request is first subjected to a Connection Admission Control (CAC) process before being forwarded to the firmware layer responsible for actually programming the connection. The connection is granted based on the following criteria:

LCNs available in the VSI partition

QoS guarantees

When the VSI slave accepts a connection setup command from the VSI master in the MPLS controller (that is, after CAC), it receives information about the connection, including service type, bandwidth parameters, and QoS parameters. This information is used to determine an index into the VI's selected service template's VC descriptor table, thereby establishing access to the associated extended parameter set stored in the table.

Service templates used for egress traffic are described here.

Ingress traffic is managed differently and a pre-assigned ingress service template containing CoS Buffer links is used.
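The admission decision described above can be sketched as follows. This is illustrative Python only; the Partition class, the admit function, and the numbers are hypothetical, not the BXM firmware's actual data model.

```python
# Hypothetical sketch of the VSI slave's Connection Admission Control (CAC):
# a setup request is admitted only if the partition has a free LCN and the
# requested bandwidth fits within the partition's limit. On success, the
# service type indexes the service template's VC descriptor table.
class Partition:
    def __init__(self, max_lcns, max_bw_cps):
        self.max_lcns = max_lcns
        self.max_bw_cps = max_bw_cps
        self.used_lcns = 0
        self.used_bw_cps = 0

def admit(partition, service_type, bw_cps, vc_descriptor_table):
    """Run CAC, then look up the extended parameter set for the service type."""
    if partition.used_lcns >= partition.max_lcns:
        return None                       # no LCN available in this partition
    if partition.used_bw_cps + bw_cps > partition.max_bw_cps:
        return None                       # bandwidth guarantee would be violated
    partition.used_lcns += 1
    partition.used_bw_cps += bw_cps
    # Index into the selected service template's VC descriptor table to get
    # the extended parameter set used when programming the connection.
    return vc_descriptor_table[service_type]
```

A second request that would push the partition past its bandwidth limit is refused rather than programmed, which is the essence of the CAC step.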

Partitioning

The VSIs need to partition resources among competing control planes, for example, AutoRoute, label switching (MPLS), and PNNI. You partition resources by using the cnfrsrc command.


Note    Release 9.3 supports up to three partitions.

For Release 9.1 and Release 9.2, only one controller of a particular type is supported. However, you can have different types of controllers sharing the switch's assets: for example, AutoRoute and MPLS, AutoRoute and PNNI, or PNNI and MPLS.

Table 23-2 shows the three resources that must be configured for a partition designated ifci, which stands for interface controller 1 in this instance.


Table 23-2: ifci Parameters (Virtual Switch Interface)
ifci parameter   Min         Max

lcns             min_lcnsi   max_lcnsi

bw               min_bwi     max_bwi

vpi              min_vpi     max_vpi

The controller is supplied with interface information (slot, port, and so on) that is converted to a logical connection number (LCN).

Some ranges of values available for a partition are listed in Table 23-3:


Table 23-3: Partition Criteria
Interface        Range

trunks           1-4095 VPI range

ports            1-4095 VPI range

virtual trunk    Only one VPI is available per virtual trunk, because a virtual trunk is currently delineated by a specific VP

virtual trunk    Each virtual trunk can be either AutoRoute or VSI, not both

When a trunk is added, the entire bandwidth is allocated to AutoRoute. To change the allocation in order to provide resources for a VSI, use the cnfrsrc command on the BPX switch. A view of the resource partitioning available is shown in Figure 23-5.


Figure 23-5: Graphical View of Resource Partitioning, Autoroute and vsi


Multiple Partitioning

You can configure partition resources between Automatic Routing Management PVCs and three VSI controllers (LSC or PNNI). Up to three VSI controllers in different control planes can independently control the switch with no communication between controllers. The controllers are essentially unaware of the existence of other control planes sharing the switch. This is possible because different control planes use different partitions of the switch resources.

You can add one or more redundant LSC controllers to one partition, and one or more redundant PNNI controllers to the other partition. With Release 9.2.3, six new templates were added for interfaces (for a total of nine) with multiple partitions controlled simultaneously by a PNNI controller and an LSC.

The master redundancy feature allows multiple controllers to control the same partition. In a multiple partition environment, master redundancy is independently supported on each partition.

The limitations that apply to multiple VSI partitioning are described in the following sections.

Compatibility

The card uses a flag in the capability message to report multiple partition capability. Firmware releases that do not support multiple partitions set this flag off. The multiple partitions capability is treated as a card attribute and added to the attribute list.

Use of a partition with an ID higher than 1 requires support for multiple VSI partitions in both switch software and BXM firmware, even if this is the only partition active on the card. In a Y-redundant pair configuration, the multiple partition capability is determined by the minimum of the two cards.

A card with no multiple partition capabilities will mismatch if any of the interfaces has an active partition with ID higher than 1. Attempts to enable a partition with ID higher than 1 in a logical card that does not support multiple partitions will be blocked.
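The mismatch rule above can be sketched as follows. This is illustrative Python; the function names are hypothetical, and the real check is performed by switch software against the flag in the card's capability message.

```python
# Hypothetical sketch of the multiple-partition mismatch check: a Y-redundant
# pair's capability is the minimum of the two cards, and a partition ID above
# 1 is allowed only when the combined capability supports multiple partitions.
def pair_supports_multiple(card_a_flag, card_b_flag):
    """The capability of a Y-red pair is the minimum of the two cards."""
    return card_a_flag and card_b_flag

def partition_allowed(partition_id, multiple_partition_flag):
    """Partition 1 is always allowed; higher IDs need the capability."""
    if partition_id == 1:
        return True
    return multiple_partition_flag
```

So pairing a multiple-partition card with one that lacks the capability blocks any attempt to enable partition 2 or 3 on that slot.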

Multiple Partition Example

Each logical switch can be seen as a collection of interfaces each with an associated set of resources.

Consider a BPX switch with four interfaces: 10.1, 10.2.1, 11.1, and 11.7.1.

Also assume the resource partitioning in Table 23-4.


Figure 23-6: Virtual Switches



Table 23-4: Partitioning Example
Interface   AutoRoute        Partition 1       Partition 2

10.1        Enable           Enable            Enable
            lcns: 2000       lcns: 4000        lcns: 4000
            bw: 20000 cps    bw: 30000 cps     bw: 20000 cps
            vpi: 1-199       vpi: 200-239      vpi: 240-255

10.2.1      Enable           Disable           Disable
            lcns: 10000
            bw: 10000 cps
            vpi: 200-200

11.1        Enable           Enable            Enable
            lcns: 2000       lcns: 3000        lcns: 4000
            bw: 100000 cps   bw: 50000 cps     bw: 10000 cps
            vpi: 1-199       vpi: 200-249      vpi: 250-255

11.7.1      Disable          Enable            Disable
                             lcns: 5000
                             bw: 200000 cps
                             vpi: 250-250

Three virtual switches are defined by this configuration: an AutoRoute virtual switch on interfaces 10.1, 10.2.1, and 11.1; a Partition 1 virtual switch on interfaces 10.1, 11.1, and 11.7.1; and a Partition 2 virtual switch on interfaces 10.1 and 11.1.
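The partitioning in Table 23-4 can also be written as plain data; grouping interfaces by enabled partition then yields each logical switch. This is illustrative Python only, with hypothetical structure names.

```python
# Table 23-4 as data: each interface maps a partition name to its resources
# (None means the partition is disabled on that interface).
table_23_4 = {
    "10.1":   {"AutoRoute": {"lcns": 2000,  "bw_cps": 20000,  "vpi": (1, 199)},
               "Partition 1": {"lcns": 4000, "bw_cps": 30000, "vpi": (200, 239)},
               "Partition 2": {"lcns": 4000, "bw_cps": 20000, "vpi": (240, 255)}},
    "10.2.1": {"AutoRoute": {"lcns": 10000, "bw_cps": 10000,  "vpi": (200, 200)},
               "Partition 1": None, "Partition 2": None},
    "11.1":   {"AutoRoute": {"lcns": 2000,  "bw_cps": 100000, "vpi": (1, 199)},
               "Partition 1": {"lcns": 3000, "bw_cps": 50000, "vpi": (200, 249)},
               "Partition 2": {"lcns": 4000, "bw_cps": 10000, "vpi": (250, 255)}},
    "11.7.1": {"AutoRoute": None,
               "Partition 1": {"lcns": 5000, "bw_cps": 200000, "vpi": (250, 250)},
               "Partition 2": None},
}

def logical_switch(partition_name):
    """Return the interfaces (with resources) that form one logical switch."""
    return {ifc: parts[partition_name]
            for ifc, parts in table_23_4.items()
            if parts[partition_name] is not None}
```

For example, `logical_switch("Partition 2")` yields interfaces 10.1 and 11.1, matching the virtual switches described above.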

Resource Partitioning

A logical switch is configured by enabling the partition and allocating resources to the partition. This must be done for each of the interfaces in the partition. The same procedure must be followed to define each of the logical switches. As resources are allocated to the different logical switches a partition of the switch resources is defined.

The resources that are partitioned amongst the different logical switches are LCNs, bandwidth, and the VPI range.

Resources are configured and allocated per interface, but the pool of resources may be managed at a different level. The pool of LCNs is maintained at the card level, and there are also limits at the port group level. The bandwidth is limited by the interface rate, and therefore the limitation is at the interface level. Similarly the range of VPI is also defined at the interface level.
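As an illustration of these levels, the sketch below draws LCNs from a card-level pool while the bandwidth and VPI checks are local to the interface. This is hypothetical Python; the real pools live in switch software and BXM firmware.

```python
# Hypothetical sketch: LCNs are pooled at the card level, while bandwidth
# and the VPI range are limited per interface.
class BxmCard:
    def __init__(self, total_lcns):
        self.free_lcns = total_lcns          # card-level LCN pool

class Interface:
    def __init__(self, card, line_rate_cps, vpi_range):
        self.card = card
        self.free_bw_cps = line_rate_cps     # limited by the interface rate
        self.vpi_range = vpi_range           # defined at the interface level

    def allocate(self, lcns, bw_cps, vpi):
        """Allocate partition resources on this interface, if available."""
        lo, hi = self.vpi_range
        if lcns > self.card.free_lcns:       # checked against the card pool
            return False
        if bw_cps > self.free_bw_cps or not lo <= vpi <= hi:
            return False
        self.card.free_lcns -= lcns
        self.free_bw_cps -= bw_cps
        return True
```

Two interfaces on the same card can each have spare bandwidth yet still contend for LCNs, because the LCN pool is shared at the card level.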

You configure these parameters on a VSI partition on an interface: minimum and maximum LCNs, minimum and maximum bandwidth, and the start and end VPI.

Partitioning Between AutoRoute and VSI

In addition to partitioning of resources between VSI and AutoRoute, multiple partitioning allows sub-partitioning of the VSI space among multiple VSI partitions. Multiple VSI controllers can share the switch with each other and also with AutoRoute.

The difference between the two types of partitioning is that all the VSI resources are under the control of the VSI-slave, while the management of AutoRoute resources remains the province of the switch software.


Figure 23-7: Resource Partitioning Between AutoRoute and VSI


Commands such as cnfrsrc and dsprsrc are used for multiple partitioning.

VSI Master and Slave Redundancy

The ability to have multiple VSI controllers is referred to as VSI master redundancy. Master redundancy enables multiple VSI masters to control the same partition.

You add a redundant controller by using the addshelf command, the same way you add an interface (feeder) shelf, except that you specify a partition that is already in use by another controller. The controllers can use this capability for cooperative redundancy or for exclusive redundancy.

The switch software has no knowledge of the state of the controllers. The state of the controllers is determined by the VSI entities. From the point of view of the BCC, there is no difference between cooperative redundant controllers and exclusive redundant controllers.

For illustrations of a VSI master and slave, see Figure 23-3. For an illustration of a switch with redundant controllers that support master redundancy, see Figure 23-8.

Switch software supports master redundancy in these ways:

The intercontroller communication channel is set up by the controllers. This could be an out-of-band channel, or the controllers can use the controllers interface information advertised by the VSI slaves to set up an intermaster channel through the switch.

Figure 23-8 below shows a switch with redundant controllers and the connectivity required to support master redundancy.


Figure 23-8: Switch with Redundant Controllers to Support Master Redundancy


The controller application and Master VSI reside in an external VSI controller (MPLS or PNNI), such as the Cisco 6400 or the MPLS controller in a 7200 or 7500 series router. The VSI slaves are resident in BXM cards on the BPX node.

Master Redundancy

You add a VSI controller, such as an MPLS or PNNI controller by using the addshelf command with the vsi option. The vsi option of the addshelf command identifies the VSI controllers and distinguishes them from interface shelves (feeders).

The VSI controllers are allocated a partition of the switch resources. VSI controllers manage their partition through the VSI interface.

The controllers run the VSI master. The VSI master entity interacts with the VSI slave running on the BXMs through the VSI interface to set up VSI connections using the resources in the partition assigned to the controller.

Two controllers intended to be used in a redundant configuration must specify the same partition when added to the node with the addshelf command.

When a controller is added to the node, switch software will set up the infrastructure so that the controllers can communicate with the slaves in the node. The VSI entities decide how and when to use these communication channels.

In addition, the controllers require a communication channel between them. This channel could be in-band or out-of-band. When a controller is added to the switch, switch software will send controller information to the slaves. This information will be advertised to all the controllers in the partition. The controllers may decide to use this information to set up an intermaster channel. Alternatively the controllers may use an out-of-band channel to communicate.

The maximum number of controllers that can be attached to a given node is limited by the maximum number of feeders that can be attached to a BPX hub. The total number of interface shelves (feeders) and controllers is 16.

Slave Redundancy

Prior to Release 9.2, hot standby functionality was supported only for Automatic Routing Management connections. This was accomplished by the BCC keeping both the active and standby cards in sync with respect to all configuration, including all connections set up by the BCC. However, the BCC does not participate in, nor is it aware of the VSI connections that are set up independently by the VSI controllers.

Therefore, the task of keeping the redundant card in a hot standby state (for all the VSI connections) is the responsibility of the two slaves in the redundant pair. This is accomplished by a bulk update (on the standby slave) of the existing connections at the time that (line and trunk) Y redundancy is added, as well as an incremental update of all subsequent connections.

The hot standby slave redundancy feature enables the redundant card to fully duplicate all VSI connections on the active card, and to be ready for operation on switchover. On bringup, the redundant card initiates a bulk retrieval of connections from the active card for fast sync-up. Subsequently, the active card updates the redundant card on a real-time basis.

The VSI Slave Hot Standby Redundancy feature provides the capability for the slave standby card to be preprogrammed the same as the active card so that when the active card fails, the slave card switchover operation can be done quickly (within 250 ms). Without the VSI portion, the BXM card already provided the hot standby mechanism by duplicating CommBus messages from the BCC to the standby BXM card.

The following sections describe some of the communication between the switch software and firmware to support VSI master and slave redundancy.

VSI Slave Redundancy Mismatch Checking

To provide a smooth migration of the VSI feature on the BXM card, line and trunk Y-redundancy is supported. You can pair cards with and without the VSI capability as a Y-redundant pair if the feature is not configured on the given slot. As long as the feature is not configured on a given slot, switch software will not perform "mismatch checking" if the BXM firmware does not support the VSI feature.

A maximum of two partitions are possible. The card uses a flag in the capability message to report multiple partition capability. Firmware releases that do not support multiple partitions set this flag to OFF. The multiple partitions capability is treated as a card attribute and added to the attribute list.

In a y-red pair configuration, the multiple partition capability is determined by the minimum of the two cards. A card with no multiple partition capabilities will mismatch if any of the interfaces has an active partition with ID higher than 1. Attempts to enable a partition with ID higher than 1 in a logical card that does not support multiple partitions are blocked.

What Happens When You Add a Controller

You add a controller, including Label Switch Controllers, to a node by using the addshelf command. You add a redundant controller in the same way, except that you specify a partition that may already be in use by another controller. The addshelf command allows for the addition of multiple controllers that manage the same partition.

For controllers that require Annex G capabilities in the controller interface, use the addctrlr command to attach the controller to the node. Note that you must first add the shelf by using the addshelf command.

You add VSI capabilities to the interface by using the addctrlr command. The only interface that supports this capability is an AAL5 feeder interface.

When adding a controller, you must specify a partition ID. The partition ID identifies the logical switch assigned to the controller. The valid partitions are 1 and 2. The user interface blocks the activation of partitions with ID higher than 1 if the card does not support multiple partitions.

To display the list of controllers in the node, use the command dspctrlrs.

The functionality is also available via SNMP using the switchIfTable in the switch MIB.

You can add one or more redundant MPLS controllers to one partition, and one or more redundant PNNI controllers to the other partition.

When using the addshelf command to add a VSI controller to the switch, you must specify the controller ID. This is a number between 1 and 32 that uniquely identifies the controller. Two different controllers must always be specified with different controller IDs.

The management of resources on the VSI slaves requires that each slave in the node has a communication control VC to each of the controllers attached to the node. When a controller is added to the BPX by using the addshelf command, the BCC sets up the set of master-slave connections between the new controller port and each of the active slaves in the switch. The connections are set up using a well-known VPI and VCI. The VPI is 0. The VCI is 40 + (slot - 1), where slot is the logical slot number of the slave.
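The well-known addressing scheme above can be illustrated with a short sketch; the helper function below is invented for illustration and is not part of the switch software.

```python
def control_vc(slot: int) -> tuple[int, int]:
    """Return the (VPI, VCI) pair of the master-slave control VC for the
    slave in the given logical slot (illustrative helper only)."""
    vpi = 0                # control VCs always use VPI 0
    vci = 40 + (slot - 1)  # well-known VCI derived from the logical slot
    return (vpi, vci)

# For example, the slave in logical slot 3 listens on VPI 0, VCI 42.
print(control_vc(3))
```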

Note that once the controllers have been added to the node, the connection infrastructure is always present. The controllers may decide to use it or not, depending on their state.

The addition of a controller to a node will fail if there are not enough channels available to set up the control VCs in one or more of the BXM slaves.

The BCC also informs the slaves of the new controller through a VSI configuration CommBus message (the BPX's internal messaging protocol). The message includes a list of controllers attached to the switch and their corresponding controller IDs. This internal firmware command includes the interface where the controller is attached. This information, when advertised by the slaves, can be used by the controllers to set up an inter-master communication channel.

When the first controller is added, the BCC behaves as it did in releases previous to Release 9.2. The BCC will send a VSI configuration CommBus message to each of the slaves with this controller information, and it will set up the corresponding control VCs between the controller port and each of the slaves.

When a new controller is added to drive the same partition, the BCC will send a VSI configuration CommBus message with the list of all controllers in the switch, and it will set up the corresponding set of control VCs from the new controller port to each of the slaves.

What Happens When You Delete a Controller

To delete a controller from the switch, use either delshelf or delctrlr.

Use the command delshelf to delete generic VSI controllers.

Use the command delctrlr to delete controllers that have been added to Annex G-capable interfaces.

When one of the controllers is deleted by using the delshelf command, the master-slave connections associated with this controller will be deleted. The control VCs associated with other controllers managing the same partition will not be affected.

The deletion of the controller triggers a new VSI configuration (internal) CommBus message. This message includes the list of the controllers attached to the node. The deleted controller will be removed from the list. This message will be sent to all active slaves in the shelf. In cluster configurations, the deletion of a controller will be communicated to the remote slaves by the slave directly attached through the inter-slave protocol.

While at least one controller attached to the node is controlling a given partition, the resources in use on that partition are not affected by the deletion of another controller. Only when a given partition is disabled do the slaves release all the VSI resources used on that partition.

The addshelf command allows multiple controllers on the same partition. You will be prompted to confirm the addition of a new VSI shelf with a warning message indicating that the partition is already used by a different controller.

What Happens When a Slave is Added

When a new slave is activated in the node, the BCC will send a VSI configuration CommBus (internal BPX protocol) message with the list of the controllers attached to the switch.

The BCC will also set up a master-slave connection from each controller port in the switch to the added slave.

What Happens When a Slave is Deleted

When a slave is deactivated in the node, the BCC will tear down the master-slave VCs between each of the controller ports in the shelf and the slave.

How Resources are Managed

VSI LCNs are used for setting up the following management channels:

Intershelf blind channels are used in cluster configuration for communication between slaves on both sides of a trunk between two switches in the same cluster node.

The maximum number of slaves in a switch is 12. Therefore, a maximum of 11 LCNs is necessary to connect a slave to all other slaves in the node. This set of LCNs is allocated from the reserved range of LCNs.

If a controller is attached to a shelf, master-slave connections are set up between the controller port and each of the slaves in the shelf.

For each slave that is not directly connected to the controller, the master-slave control VC consists of two legs: one leg between the controller port and the directly connected slave, and a second, interslave leg between that slave and the remote slave.

For the slave that is directly connected to the controller, the master-slave control VC consists of a single leg between the controller port and the slave. Therefore, 12 LCNs are needed in the directly-connected slave, and 1 LCN in each of the other slaves in the node for each controller attached to the shelf.

These LCNs will be allocated from the Automatic Routing Management pool. This pool is used by Automatic Routing Management to allocate LCNs for connections and networking channels.

For a given slave, the number of VSI management LCNs required from the common pool is:

n × 12 + m

where:

n is the number of controllers attached to this slave

m is the number of controllers in the switch directly attached to other slaves
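A minimal sketch of this LCN arithmetic follows; the function name and its inputs are purely illustrative.

```python
def vsi_mgmt_lcns(n: int, m: int) -> int:
    """LCNs a given slave needs from the common pool for VSI management VCs.

    n: controllers directly attached to this slave (each needs a control-VC
       leg to all of the up-to-12 slaves in the node)
    m: controllers in the switch directly attached to other slaves (each
       needs only a single leg on this slave)
    """
    return n * 12 + m

# One controller attached to this slave plus two controllers attached
# elsewhere requires 14 management LCNs on this slave.
print(vsi_mgmt_lcns(1, 2))
```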

VSI Slave Redundancy (Hot Slave Redundancy)

The function of the slave hot standby is to preprogram the standby slave card the same as the active card so that when the active card fails, the slave card switchover operation can be done quickly (within 250 ms). Without the VSI portion, the BXM card already provided the hot standby mechanism by duplicating CommBus (internal BPX protocol) messages from the BCC to the standby BXM card.

Because the master VSI controller does not recognize the standby slave card, the active slave card forwards the VSI messages it receives from the master VSI controller to the standby slave VSI card.

Also, when the standby slave VSI card first starts (either because it was inserted into the slot, or because you issued the addyred command from the CLI console), the active slave VSI card must forward all VSI messages it has received from the master VSI controller to the standby slave VSI card.

In summary, these are the hot standby operations between active and standby card:

    1. CommBus messages are duplicated to standby slave VSI card by the BCC.
    Operation 1 does not need to be implemented because it is already done by the BCC.

    2. VSI messages (from Master VSI controller or other slave VSI card) are forwarded to the standby slave VSI card by the active slave VSI card.
    Operation 2 is normal data transfer, which occurs after both cards are in sync.

    3. When the standby slave VSI card starts up, it retrieves all VSI messages from the active slave VSI card and processes these messages.
    Operation 3 is initial data transfer, which occurs when the standby card first starts up.

The data transfer from the active card to the standby card should not affect the performance of the active card. Therefore, the standby card takes most of the actions, which simplifies the operations on the active card. The standby card drives the data transfer and performs the synchronization. The active card simply forwards VSI messages and responds to the standby card's requests.

Class of Service Templates and Qbins

Class of Service Templates (COS Templates) provide a means of mapping a set of standard connection protocol parameters to "extended" platform specific parameters. Full Quality of Service (QoS) implies that each VC is served through one of a number of Class of Service buffers (Qbins) which are differentiated by their QoS characteristics.

A qbin template defines a default configuration for the set of qbins for a logical interface. When you assign a template to an interface, the corresponding default qbin configuration is copied to this interface's qbin configuration and becomes the current qbin configuration for this interface.

Qbin templates deal only with qbins that are available to VSI partitions, which are 10 through 15. Qbins 10 through 15 are used by VSI on interfaces configured as trunks or ports. The rest of the qbins are reserved and configured by Automatic Routing Management.
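The copy semantics described above — template defaults copied to the interface, after which per-interface edits affect only the copy — can be sketched as follows; the data layout and names are invented for illustration.

```python
import copy

# Invented default qbin template: per-qbin settings for VSI qbins 10-15.
QBIN_TEMPLATE = {q: {"max_threshold_usec": 300_000, "clp_hi": 100, "clp_lo": 95}
                 for q in range(10, 16)}

def assign_template(interface_cfg: dict, template: dict) -> None:
    """Copy the template's defaults into the interface's current qbin
    configuration (a deep copy, so later edits never touch the template)."""
    interface_cfg["qbins"] = copy.deepcopy(template)

iface = {}
assign_template(iface, QBIN_TEMPLATE)
iface["qbins"][14]["clp_lo"] = 60   # a per-interface change to qbin 14...
print(QBIN_TEMPLATE[14]["clp_lo"])  # ...leaves the template default intact
```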

How Service Templates Work

The service class templates provide a means of mapping a set of extended parameters, which are generally platform specific, from the set of standard ATM parameters passed to the VSI slave during connection setup.

A set of service templates is stored in each switch (such as BPX) and downloaded to the service modules (such as BXMs) as needed.

The service templates contain two classes of data:

The general types of parameters passed from a VSI Master to a Slave include:

Each VC added by a VSI master is assigned to a specific service class by means of a 32-bit service type identifier. Current identifiers are for:

When a connection setup request is received from the VSI master in the Label Switch Controller, the VSI slave (in the BXM, for example) uses the service type identifier to index into a Service Class Template database containing extended parameter settings for connections matching that index. The slave uses these values to complete the connection setup and program the hardware.

One of the parameters specified for each service type is the particular BXM class of service buffer (qbin) to use. The qbin buffers provide separation of service type to match the QoS requirements.
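Conceptually, the slave's lookup resembles indexing a keyed table with the 32-bit service type identifier. The identifier/qbin pairs below come from Table 23-6, but the dictionary itself is only an illustrative sketch, not the BXM's actual data structure.

```python
# Illustrative subset of a service class template database, keyed by the
# 32-bit service type identifier (identifier/qbin pairs from Table 23-6).
SERVICE_CLASS_DB = {
    0x0100: {"service_type": "CBR.1",     "qbin": 10},
    0x0101: {"service_type": "VBR.1-RT",  "qbin": 11},
    0x0104: {"service_type": "VBR.1-nRT", "qbin": 12},
    0x0109: {"service_type": "ABR",       "qbin": 14},
}

def lookup(service_type_id: int) -> dict:
    """Index into the template database the way the VSI slave does when it
    receives a connection setup request (sketch only)."""
    return SERVICE_CLASS_DB[service_type_id]

# A VBR.1-RT connection setup is steered to class-of-service buffer 11.
print(lookup(0x0101)["qbin"])
```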

Service templates on the BPX are maintained by the BCC and are downloaded to the BXM cards as part of the card configuration process as a result of:

The templates are non-configurable.

Structure of Service Class Templates

There are 3 types of templates:

You can assign any one of the nine templates to a virtual switch interface. (See Figure 23-9.)

Each template table row includes an entry that defines the qbin to be used for that class of service. See Figure 23-9 for an illustration of how service class databases map to qbins. This mapping defines a relationship between the template and the interface qbin's configuration.

A qbin template defines a default configuration for the set of qbins for the logical interface. When a template assignment is made to an interface, the corresponding default qbin configuration becomes the interface's qbin configuration.

Some of the parameters of the interface's qbin configuration can be changed on a per interface basis. Such changes affect only that interface's qbin configuration and no others, and do not affect the qbin templates.


Figure 23-9: Service Template Overview


Qbin templates are used only with qbins that are available to VSI partitions, specifically, qbins 10 through 15. Qbins 10 through 15 are used by the VSI on interfaces configured as trunks or ports. The rest of the qbins (0-9) are reserved for and configured by Automatic Routing Management.

Each template table row includes an entry that defines the qbin to be used for that class of service. This mapping defines a relationship between the template and the interface qbin's configuration. As a result, you need to define a default qbin configuration to be associated with the template.


Note   The default qbin configuration, although sometimes referred to as a "qbin template," behaves differently from the class of service templates.


Figure 23-10: Service Template and Associated Qbin Selection


Extended Service Types Support

The service-type parameter for a connection is specified in the connection bandwidth information parameter group. The service-type and service-category parameters determine the service class to be used from the service template.

Supported Service Categories

There are five major service categories and several sub-categories. The major service categories are shown in Table 23-5. The supported service sub-categories are listed in Table 23-6.


Table 23-5: Service Category Listing

Service Category  Service Type Identifier
----------------  -----------------------
CBR               0x0100
VBR-RT            0x0101
VBR-NRT           0x0102
UBR               0x0103
ABR               0x0104

Supported Service Types

The service type identifier is a 32-bit number.

There are three service types:

The service type identifier appears on the dspsct screen when you specify a service class template number and service type; for example:

dspsct <2> <vbrrt1>

A list of supported service templates, their service types, and the associated qbins is shown in Table 23-6.


Table 23-6: Service Category Listing

Template Type      Service Type Identifier  Service Type                           Associated Qbin
-----------------  -----------------------  -------------------------------------  ---------------
VSI Special Types  0x0000                   Null                                   -
                   0x0001                   Default                                13
                   0x0002                   Signaling                              10
ATMF Types         0x0100                   CBR.1                                  10
                   0x0101                   VBR.1-RT                               11
                   0x0102                   VBR.2-RT                               11
                   0x0103                   VBR.3-RT                               11
                   0x0104                   VBR.1-nRT                              12
                   0x0105                   VBR.2-nRT                              12
                   0x0106                   VBR.3-nRT                              12
                   0x0107                   UBR.1                                  13
                   0x0108                   UBR.2                                  13
                   0x0109                   ABR                                    14
                   0x010A                   CBR.2                                  10
                   0x010B                   CBR.3                                  10
MPLS Types         0x0200                   label cos0, per-class service          10
                   0x0201                   label cos1, per-class service          11
                   0x0202                   label cos2, per-class service          12
                   0x0203                   label cos3, per-class service          13
                   0x0204                   label cos4, per-class service          10
                   0x0205                   label cos5, per-class service          11
                   0x0206                   label cos6, per-class service          12
                   0x0207                   label cos7, per-class service          13
                   0x0210                   label ABR (Tag with ABR flow control)  14

VC Descriptors

A summary of the parameters associated with each of the service templates is provided in Table 23-7 through Table 23-10. Table 23-11 provides a description of these parameters and also the range of values that may be configured when the template does not assign a value.

Table 23-7 lists the parameters associated with Default (0x0001) and Signaling (0x0002) service template categories.


Table 23-7: VSI Special Service Types

Parameter                  VSI Default (0x0001)  VSI Signalling (0x0002)
-------------------------  --------------------  -----------------------
QBIN Number                10                    15
UPC Enable                 0                     *
UPC CLP Selection          0                     *
Policing Action (GCRA #1)  0                     *
Policing Action (GCRA #2)  0                     *
PCR                        -                     300 kbps
MCR                        -                     300 kbps
SCR                        -                     -
ICR                        -                     -
MBS                        -                     -
CoS Min BW                 0                     *
CoS Max BW                 0                     *
Scaling Class              3                     3
CAC Treatment ID           1                     1
VC Max Threshold           Q_max/4               *
VC CLPhi Threshold         75                    *
VC CLPlo Threshold         30                    *
VC EPD Threshold           90                    *
VC EFCI Threshold          60                    *
VC discard selection       0                     *

Table 23-8 and Table 23-9 list the parameters associated with the PNNI service templates.


Table 23-8: ATM Forum Service Types, CBR, UBR, and ABR

Parameter                  CBR.1    CBR.2    CBR.3    UBR.1  UBR.2  ABR
-------------------------  -------  -------  -------  -----  -----  --------
QBIN Number                10       10       10       13     13     14
UPC Enable                 1        1        1        1      1      1
UPC CLP Selection          *        *        *        *      *      *
Policing Action (GCRA #1)  *        *        *        *      *      *
Policing Action (GCRA #2)  *        *        *        *      *      *
PCR
MCR                        -        -        -        *      *      *
SCR                        -        -        -        50     50     *
ICR                        -        -        -        -      -      *
MBS                        -        -        -        -      -      *
CoS Min BW                 0        0        0        0      0      0
CoS Max BW                 100      100      100      100    100    100
Scaling Class              *        *        *        *      *      *
CAC Treatment ID           *        *        *        *      *      *
VC Max Threshold           *        *        *        *      *      *
VC CLPhi Threshold         *        *        *        *      *      *
VC CLPlo Threshold         *        *        *        *      *      *
VC EPD Threshold           *        *        *        *      *      *
VC EFCI Threshold          *        *        *        *      *      *
VC discard selection       *        *        *        *      *      *
VSVD/FCES                  -        -        -        -      -      *
ADTF                       -        -        -        -      -      500
RDF                        -        -        -        -      -      16
RIF                        -        -        -        -      -      16
NRM                        -        -        -        -      -      32
TRM                        -        -        -        -      -      0
CDF                        -        -        -        -      -      16
TBE                        -        -        -        -      -      16777215
FRTT                       -        -        -        -      -      *


Table 23-9: ATM Forum VBR Service Types

Parameter                  VBRrt.1  VBRrt.2  VBRrt.3  VBRnrt.1  VBRnrt.2  VBRnrt.3
-------------------------  -------  -------  -------  --------  --------  --------
QBIN Number                11       11       11       12        12        12
UPC Enable                 1        1        1        1         1         1
UPC CLP Selection          *        *        *        *         *         *
Policing Action (GCRA #1)  *        *        *        *         *         *
Policing Action (GCRA #2)  *        *        *        *         *         *
PCR
MCR                        *        *        *        *         *         *
SCR                        *        *        *        *         *         *
ICR                        -        -        -        -         -         -
MBS                        *        *        *        *         *         *
CoS Min BW                 0        0        0        0         0         0
CoS Max BW                 100      100      100      100       100       100
Scaling Class              *        *        *        *         *         *
CAC Treatment ID           *        *        *        *         *         *
VC Max Threshold           *        *        *        *         *         *
VC CLPhi Threshold         *        *        *        *         *         *
VC CLPlo Threshold         *        *        *        *         *         *
VC EPD Threshold           *        *        *        *         *         *
VC EFCI Threshold          *        *        *        *         *         *
VC discard selection       *        *        *        *         *         *

* indicates not applicable

Table 23-10 lists the connection parameters and their default values for tag switching service templates.


Table 23-10: MPLS Service Types

Parameter                  CoS 0/4  CoS 1/5  CoS 2/6  CoS 3/7  Tag-ABR
-------------------------  -------  -------  -------  -------  --------
Qbin #                     10       11       12       13       14
UPC Enable                 0        0        0        0        0
UPC CLP Selection          0        0        0        0        0
Policing Action (GCRA #1)  0        0        0        0        0
Policing Action (GCRA #2)  0        0        0        0        0
PCR                        -        -        -        -        cr/10
MCR                        -        -        -        -        0
SCR                        -        -        -        -        P_max
ICR                        -        -        -        -        100
MBS                        -        -        -        -        -
CoS Min BW                 0        0        0        0        0
CoS Max BW                 0        0        0        0        100
Scaling Class              3        3        2        1        2
CAC Treatment              1        1        1        1        1
VC Max                     Q_max/4  Q_max/4  Q_max/4  Q_max/4  cr/200ms
VC CLPhi                   75       75       75       75       75
VC CLPlo                   30       30       30       30       30
VC EPD                     90       90       90       90       90
VC EFCI                    60       60       60       60       30
VC discard selection       0        0        0        0        0
VSVD/FCES                  -        -        -        -        0
ADTF                       -        -        -        -        500
RDF                        -        -        -        -        16
RIF                        -        -        -        -        16
NRM                        -        -        -        -        32
TRM                        -        -        -        -        0
CDF                        -        -        -        -        16
TBE                        -        -        -        -        16777215
FRTT                       -        -        -        -        0

VC Descriptor Parameters

Table 23-11 describes the connection parameters that are listed in the preceding tables and also lists the range of values that may be configured, if not pre-configured.

Not every service class includes all parameters. For example, a CBR service type has fewer parameters than an ABR service type.


Note   Not every service class has a value defined for every parameter listed in Table 23-11 below.


Table 23-11: Connection Parameter Descriptions and Ranges

Object Name                                      Range/Values                           Template Units
-----------------------------------------------  -------------------------------------  --------------
QBIN Number                                      10 - 15                                qbin #
Scaling Class                                    0 - 3                                  enumeration
CDVT                                             0 - 5M (5 sec)                         secs
MBS                                              1 - 5M                                 cells
ICR                                              MCR - PCR                              cells
MCR                                              50 - LR                                cells
SCR                                              MCR - LineRate                         cells
UPC Enable                                       0 - Disable GCRAs                      enumeration
                                                 1 - Enable GCRAs
                                                 2 - Enable GCRA #1
                                                 3 - Enable GCRA #2
UPC CLP Selection                                0 - Bk 1: CLP (0+1), Bk 2: CLP (0)     enumeration
                                                 1 - Bk 1: CLP (0+1), Bk 2: CLP (0+1)
                                                 2 - Bk 1: CLP (0+1), Bk 2: Disabled
Policing Action (GCRA #1)                        0 - Discard                            enumeration
                                                 1 - Set CLP bit
                                                 2 - Set CLP of untagged cells,
                                                     discard tagged cells
Policing Action (GCRA #2)                        0 - Discard                            enumeration
                                                 1 - Set CLP bit
                                                 2 - Set CLP of untagged cells,
                                                     discard tagged cells
VC Max                                                                                  cells
CLP Lo                                           0 - 100                                % VC Max
CLP Hi                                           0 - 100                                % VC Max
EFCI                                             0 - 100                                % VC Max
VC Discard Threshold Selection                   0 - CLP Hysteresis                     enumeration
                                                 1 - EPD
VSVD                                             0 - None                               enumeration
                                                 1 - VSVD
                                                 2 - VSVD with external segment
Reduced Format ADTF                              0 - 7                                  enumeration
Reduced Format Rate Decrease Factor (RRDF)       1 - 15                                 enumeration
Reduced Format Rate Increase Factor (RRIF)       1 - 15                                 enumeration
Reduced Format Time Between Fwd RM cells (RTrm)  0 - 7                                  enumeration
Cut-Off Number of RM Cells (CRM)                 1 - 4095                               cells

Qbin Dependencies

Qbin templates deal only with qbins that are available to VSI partitions, namely 10 through 15. Qbins 10 through 15 are used by VSI on interfaces configured as trunks or ports. The rest of the qbins are reserved and configured by Automatic Routing Management.

When you execute the dspsct command, it displays the default service type and the qbin number.

The available qbin parameters are shown in Table 23-12.

Notice that the qbins available to VSI on an interface are restricted to qbins 10 through 15. Each of the 32 possible virtual interfaces is provided with 16 qbins.


Table 23-12: Service Template Qbin Parameters

Template Object Name     Template Units           Template Range/Values
-----------------------  -----------------------  --------------------------
QBIN Number              enumeration              0-15 (10-15 valid for VSI)
Max QBIN Threshold       usec                     1-2000000
QBIN CLP High Threshold  % of max qbin threshold  0-100
QBIN CLP Low Threshold   % of max qbin threshold  0-100
EFCI Threshold           % of max qbin threshold  0-100
Discard Selection        enumeration              1 - CLP Hysteresis
                                                  2 - Frame Discard
Weighted Fair Queueing   enable/disable           0 - Disable
                                                  1 - Enable

Qbin Default Settings

The qbin default settings are shown in Table 23-13. The Service Class Template default settings for Label Switch Controllers and PNNI controllers are shown in Table 23-14.

Note: Templates 2, 4, 6, and 8 support policing on PPD.


Table 23-13: Qbin Default Settings

LABEL — Template 1

QBIN                                    Max Qbin Threshold (usec)  CLP High  CLP Low/EPD  EFCI  Discard Selection
--------------------------------------  -------------------------  --------  -----------  ----  -----------------
10 (Null, Default, Signalling, Tag0,4)  300,000                    100%      95%          100%  EPD
11 (Tag1,5)                             300,000                    100%      95%          100%  EPD
12 (Tag2,6)                             300,000                    100%      95%          100%  EPD
13 (Tag3,7)                             300,000                    100%      95%          100%  EPD
14 (Tag Abr)                            300,000                    100%      95%          6%    EPD
15 (Tag unused)                         300,000                    100%      95%          100%  EPD

PNNI — Templates 2 (with policing) and 3

QBIN                     Max Qbin Threshold (usec)  CLP High  CLP Low/EPD  EFCI  Discard Selection
-----------------------  -------------------------  --------  -----------  ----  -----------------
10 (Null, Default, CBR)  4200                       80%       60%          100%  CLP
11 (VbrRt)               53000                      80%       60%          100%  EPD
12 (VbrNrt)              53000                      80%       60%          100%  EPD
13 (Ubr)                 105000                     80%       60%          100%  EPD
14 (Abr)                 105000                     80%       60%          20%   EPD
15 (Unused)              105000                     80%       60%          100%  EPD

Full Support for ATMF and reduced support for Tag CoS without Tag-Abr — Templates 4 (with policing) and 5

QBIN                                      Max Qbin Threshold (usec)  CLP High  CLP Low/EPD  EFCI  Discard Selection
----------------------------------------  -------------------------  --------  -----------  ----  -----------------
10 (Tag 0,4,1,5, Default, UBR, Tag-Abr*)  300,000                    100%      95%          100%  EPD
11 (VbrRt)                                53000                      80%       60%          100%  EPD
12 (VbrNrt)                               53000                      80%       60%          100%  EPD
13 (Tag 2,6,3,7)                          300,000                    100%      95%          100%  EPD
14 (Abr)                                  105000                     80%       60%          20%   EPD
15 (Cbr)                                  4200                       80%       60%          100%  CLP

Full Support for Tag ABR and ATMF without Tag CoS — Templates 6 (with policing) and 7

QBIN                                    Max Qbin Threshold (usec)  CLP High  CLP Low/EPD  EFCI  Discard Selection
--------------------------------------  -------------------------  --------  -----------  ----  -----------------
10 (Tag 0,4,1,5,2,6,3,7, Default, UBR)  300,000                    100%      95%          100%  EPD
11 (VbrRt)                              53000                      80%       60%          100%  EPD
12 (VbrNrt)                             53000                      80%       60%          100%  EPD
13 (Tag-Abr)                            300,000                    100%      95%          6%    EPD
14 (Abr)                                105000                     80%       60%          20%   EPD
15 (Cbr)                                4200                       80%       60%          100%  CLP

Full Support for Tag CoS and reduced support for ATMF — Templates 8 (with policing) and 9

QBIN                   Max Qbin Threshold (usec)  CLP High  CLP Low/EPD  EFCI  Discard Selection
---------------------  -------------------------  --------  -----------  ----  -----------------
10 (Cbr, Vbr-rt)       4200                       80%       60%          100%  CLP
11 (Vbr-nrt, Abr)      53000                      80%       60%          20%   EPD
12 (Ubr, Tag 0,4)      300,000                    100%      95%          100%  EPD
13 (Tag 1,5, Tag-Abr)  300,000                    100%      95%          6%    EPD
14 (Tag 2,6)           300,000                    100%      95%          100%  EPD
15 (Tag 3,7)           300,000                    100%      95%          100%  EPD


Table 23-14: Service Class Template Default Settings

Parameter (with Default Settings)  Label              PNNI
---------------------------------  -----------------  ------------------------------------
MCR                                Tag0-7: N/A        Abr: 0%
                                   TagAbr: 0% of PCR
AAL5 Frame Base Traffic Control    EPD                Hysteresis
(Discard Selection)
CDVT(0+1)                          250,000            250,000
VSVD                               Tag0-7: N/A        Abr: None
                                   TagAbr: None
SCR                                Tag0-7: N/A        Vbr: 100%
                                   TagAbr: 0          Abr: 0
MBS                                Tag0-7: N/A        Vbr: 1000
                                   TagAbr: 0
Policing                           Policing disabled  VbrRt1: GCRA_1_2, CLP01_CLP01,
                                                      DISCARD on both policing actions
                                                      VbrRt2: GCRA_1_2, CLP01_CLP0,
                                                      DISCARD on both policing actions
                                                      VbrRt3: GCRA_1_2, CLP01_CLP0,
                                                      CLP DISCARD for 1st policer and
                                                      CLP for 2nd policer
                                                      VbrNRt1: same as VbrRt1
                                                      VbrNRt2: same as VbrRt2
                                                      VbrNRt3: same as VbrRt3
                                                      Ubr1: GCRA_1, CLP01, DISCARD
                                                      Ubr2: GCRA_1_2, CLP01 DISCARD on
                                                      policer 1, CLP01 TAG on policer 2
                                                      Abr: same as Ubr1
                                                      Cbr1: same as Ubr1
                                                      Cbr2: GCRA_1_2, CLP01_CLP0,
                                                      DISCARD on both policing actions
                                                      Cbr3: GCRA_1_2, CLP01_CLP0,
                                                      CLP UNTAG for policer 1 and
                                                      CLP for policer 2
ICR                                Tag0-7: N/A        Abr: 0%
                                   TagAbr: NCR
ADTF                               Tag0-7: N/A        Abr: 1000 msec
                                   TagAbr: 500 msec   (ATM Forum default is 500)
Trm                                Tag0-7: N/A        Abr: 100
                                   TagAbr: 0
VC Qdepth                          61440              10,000
                                                      160 - cbr
                                                      1280 - vbr
CLP Hi                             100                80
CLP Lo / EPD                       40                 35
EFCI                               TagABR: 20         20 (not valid for non-ABR)
RIF                                Tag0-7: N/A        Abr: 16
                                   TagAbr: 16
RDF                                Tag0-7: N/A        Abr: 16
                                   TagAbr: 16
Nrm                                Tag0-7: N/A        Abr: 32
                                   TagAbr: 32
FRTT                               Tag0-7: N/A        Abr: 0
                                   TagAbr: 0
TBE                                Tag0-7: N/A        Abr: 16,777,215
                                   TagAbr: 16,777,215
IBS                                N/A                N/A
CAC Treatment                      LCN                Vbr: CAC4
                                                      Ubr: LCN
                                                      Abr: MIN BW
                                                      Cbr: CAC4
Scaling Class                      UBR - Scaled 1st   Vbr: VBR - Scaled 3rd
                                                      Ubr: UBR - Scaled 1st
                                                      Abr: ABR - Scaled 2nd
                                                      Cbr: CBR - Scaled 4th
CDF                                16                 16

Summary of VSI Commands


Table 23-15: Commands for Setting up a VSI (Virtual Switch Interface) Controller

Mnemonic        Description
--------------  ----------------------------------------------------------------------
addctrlr        Attach a controller to a node, for controllers that require Annex G
                capabilities in the controller interface; for example, add a PNNI VSI
                controller to a BPX node through an AAL5 interface shelf.
addshelf        Add a trunk between the hub node and an interface shelf or VSI-MPLS
                (Multiprotocol Label Switching) controller.
cnfqbin         Configure a qbin on a card. If you answer Yes when prompted, the
                command uses the card qbin values from the qbin templates.
cnfrsrc         Configure resources; for example, for Automatic Routing Management
                PVCs and an MPLS (Multiprotocol Label Switching) Label Switch
                Controller (LSC).
cnfvsiif        Configure a VSI interface; for example, assign a different template
                to an interface.
cnfvsipart      Configure VSI partition characteristics for VSI ILMI.
delctrlr        Delete a controller, such as a Service Expansion Shelf (SES) PNNI
                controller, from a BPX node.
delshelf        Delete a trunk between a hub node and an access shelf.
dspcd           Display the card configuration.
dspchuse        Display a summary of channel distribution in a given slot.
dspctrlrs       Display the VSI controllers, such as a PNNI controller, on a BPX node.
dspqbin         Display qbin parameters currently configured for the virtual interface.
dspqbintmt      Display the qbin template.
dsprsrc         Display LSC (Label Switching Controller) resources.
dspsct          Display the Service Class Template assigned to an interface. The
                command has three levels of operation:
                dspsct — with no arguments, lists all the service templates resident
                in the node.
                dspsct <tmplt_id> — lists all the Service Classes in the template.
                dspsct <tmplt_id> <SC> — lists all the parameters of that Service
                Class.
dspvsiif        Display the VSI interface.
dspvsipartcnf   Display information about VSI ILMI functionality.
dspvsipartinfo  Display VSI resource status for the trunk and partition.


Posted: Fri Jul 27 16:21:54 PDT 2001
All contents are Copyright © 1992--2001 Cisco Systems, Inc. All rights reserved.
Important Notices and Privacy Statement.