
Table Of Contents

Configuring BXM Virtual Switch Interface

Virtual Switch Interface

Overview: How VSI Works

Virtual Switch Interfaces and Qbins

VSI Master and Slaves

Partitioning

Multiple Partitioning

VSI Configuration Procedures

Adding a Controller

Viewing Controllers and Interfaces

Deleting a Controller

Configuring Partition Resources on Interfaces

Configuring Enhanced BXM Cards to Support 60K Connections

Soft and Dynamic Partitioning

Assigning a Service Template to an Interface

Configuring the BXM Card's Qbin

Enabling VSI ILMI Functionality for the PNNI Controller

VSIs and Virtual Trunking

VSI Master and Slave Redundancy

Master Redundancy

Slave Redundancy

VSI Slave Redundancy Mismatch Checking

What Happens When You Add a Controller

What Happens When You Delete a Controller

What Happens When a Slave Is Added

What Happens When a Slave is Deleted

Managing Resources

VSI Slave Redundancy (Hot Slave Redundancy)

Class of Service Templates and Qbins

How Service Class Templates Work

Structure of Service Class Templates

Qbin Dependencies

Qbin Default Settings

Understanding MPLS VC Merge

VC Merge Characteristics

Displaying Card Support for VC Merge

Enabling VC Merge

Disabling VC Merge

Interpreting the Messages

Displaying the Status of VC Merge

Summary of VSI Commands


Configuring BXM Virtual Switch Interface


This chapter describes the BXM Virtual Switch Interface (VSI) and provides configuration procedures.

Contents of this chapter include:

Virtual Switch Interface

Overview: How VSI Works

VSI Masters and Slaves

Partitioning

VSI Configuration Procedures

Add a controller

View controllers and interfaces

Delete a controller

Enable VSI ILMI functionality

Configure partition resources on VSI

VSI Master and Slave Redundancy

Class of Service Templates and Qbins

Tables of template default settings

Understanding MPLS VC Merge

Summary of VSI Commands

For information on configuring SES PNNI controllers to work with BPX switches, refer to the Cisco SES PNNI Controller Software Configuration Guide.

For information on configuring MPLS controllers to work with BPX switches, refer to the Cisco MPLS Controller Software Configuration Guide.

Refer to Cisco WAN Switching Command Reference for details about the commands mentioned here for both PNNI and MPLS controllers. Refer to Release Notes for supported features.

Virtual Switch Interface

The Virtual Switch Interface (VSI) is a common control interface between the BPX 8650 or the
MGX 8850 switches and an external controller that supports the VSI protocol.

VSI allows a node to be controlled by multiple controllers, such as Multiprotocol Label Switching (MPLS) and the Service Expansion Shelf Private Network-to-Network Interface (SES PNNI).

When a VSI is activated on a port, trunk, or virtual trunk so that it can be used by a master controller, such as an SES PNNI or an MPLS controller, the resources of the virtual interface associated with the port, trunk, or virtual trunk are made available to the VSI. These control planes can be external or internal to the switch. The VSI provides a mechanism for networking applications to control the switch and use a partition of the switch resources.

VSI on the BPX provides:

Class of Service templates

Virtual trunk support for VSI

Support for VSI master redundancy

Multiple VSI partitions

Soft and Dynamic Partitioning

Cisco WAN Manager support for VSI

Multiple Partitioning

VSI was first implemented on the BPX 8650 in Release 9.1, which used VSI to perform Multiprotocol Label Switching, supporting VSI on BXM cards and the partitioning of BXM resources between Automatic Routing Management and a VSI MPLS controller. BPX software uses a partition to identify and assign resources such as LCNs, VPIs, and bandwidth to a controller. Multiple VSI partitions may be defined on a single physical port.

Multiprotocol Label Switching

Multiprotocol Label Switching (MPLS, previously called Tag Switching) enables routers at the edge of a network to apply simple labels to packets (frames), allowing devices in the network core to switch packets according to these labels with minimal lookup activity. MPLS in the network core can be performed by switches, such as ATM switches, or by existing routers.

MPLS integrates virtual circuit switching with IP routing to offer scalable IP networks over ATM. MPLS supports data, voice, and multimedia services over ATM networks. MPLS summarizes routing decisions so that switches can perform IP forwarding, and it brings other benefits that apply even when MPLS is used in router-only networks.

Using MPLS techniques, it is possible to set up explicit routes for data flows that are constrained by path, resource availability, and requested Quality of Service (QoS). MPLS also facilitates highly scalable Virtual Private Networks.

MPLS assigns labels to IP flows, placing them in the IP frames. The frames can then be transported across packet or cell-based networks and switched on the labels rather than being routed using IP address look-up.

A routing protocol, such as OSPF, works with the Label Distribution Protocol (LDP) to set up MPLS virtual connections (VCs) on the switch.

MPLS Terminology

MPLS is a standardized version of Cisco's original Tag Switching proposal. MPLS and Tag Switching are identical in principle and nearly so in operation. MPLS terminology has replaced obsolete Tag Switching terminology.

An exception to the terminology is Tag Distribution Protocol (TDP). TDP and the MPLS Label Distribution Protocol (LDP) are nearly identical, but use different message formats and procedures. TDP is used in this design guide only when it is important to distinguish TDP from LDP. Otherwise, any reference to LDP in this design guide also applies to TDP.

Overview: How VSI Works

This section provides detailed reference to virtual interfaces, service templates, and Qbins.

For information on configuring SES PNNI controllers to work with BPX switches, refer to the Cisco SES PNNI Controller Software Configuration Guide.

For information on configuring MPLS controllers to work with BPX switches, refer to the Cisco MPLS Controller Software Configuration Guide.

For details about the commands mentioned here for both PNNI and MPLS controllers, refer to Cisco WAN Switching Command Reference. Refer to Release Notes for supported features.

Virtual Switch Interfaces and Qbins

The BXM supports 31 Virtual Switch Interfaces that provide a number of resources including Qbin buffering capability. One Virtual Switch Interface is assigned to each logical trunk (physical or virtual) when the trunk is enabled (see Figure 23-1).

Each virtual switch interface has 16 Qbins assigned to it. Qbins 0 to 9 are used for Automatic Routing Management. Qbins 10 through 15 are available for use by a Virtual Switch Interface. (In Release 9.1, only Qbin 10 was used.) The Qbins 10 through 15 support Class of Service (CoS) templates on the BPX.

You may enable a Virtual Switch Interface on a port, trunk, or virtual trunk. The Virtual Switch Interface is assigned the resources of the associated virtual interface.

With virtual trunking, a physical trunk can comprise a number of logical trunks called virtual trunks. Each of these virtual trunks (equivalent to a virtual interface) is assigned the resources of one of the 31 Virtual Switch Interfaces on a BXM (see Figure 23-1).

Figure 23-1 BXM Virtual Interfaces and Qbins

VSI Master and Slaves

A controller application uses a VSI master to control one or more VSI slaves. For the BPX, the controller application and master VSI reside in an external 7200 or 7500 series router and the VSI slaves are resident in BXM cards on the BPX node (see Figure 23-2).

The controller sets up the following types of connections:

Control virtual connections (VCs):

master to slave

slave to slave

User connections (that is, cross-connects)

Figure 23-2 VSI, Controller and Slave VSI

The controller establishes a link between the VSI master and every VSI slave on the associated switch. The slaves in turn establish links between each other (see Figure 23-3).

Figure 23-3 VSI Master and VSI Slave Example

With a number of switches connected together, there are links between switches with cross-connects established within the switch as shown in Figure 23-4.

Figure 23-4 Cross-Connects and Links between Switches

Connection Admission Control

When a connection request is received by the VSI slave, it is first subjected to a Connection Admission Control (CAC) process before being forwarded to the firmware layer responsible for actually programming the connection. The granting of the connection is based on the following criteria:

LCNs available in the VSI partition

Qbin

Service Class

QoS guarantees:

max CLR

max CTD

max CDV

When the VSI slave accepts (that is, after CAC) a connection setup command from the VSI master in the MPLS controller, it receives information about the connection including service type, bandwidth parameters, and QoS parameters. This information is used to determine an index into the VI's selected Service Template's VC Descriptor table thereby establishing access to the associated extended parameter set stored in the table.

Ingress traffic is managed differently and a preassigned ingress service template containing CoS Buffer links is used.

Partitioning

The Virtual Switch Interface must partition its resources between competing controllers: Automatic Routing Management, MPLS, and PNNI, for example. You partition resources by using the cnfrsrc command.


Note Release 9.3 supports up to three partitions.


The three resources that must be configured for a partition, designated ifci (interface controller i in this instance), are listed in Table 23-1.

Table 23-1 ifci Parameters (Virtual Switch Interface)

ifci parameter    Min           Max
lcns              min_lcnsi     max_lcnsi
bw                min_bwi       max_bwi
vpi               min_vpi       max_vpi


Some ranges of values available for a partition are listed in Table 23-2.

Table 23-2 Partition Criteria

Type            Range
trunks          1 to 4095 VPI range
ports           1 to 4095 VPI range
virtual trunk   Only one VPI available per virtual trunk, because a virtual trunk is currently delineated by a specific VP
virtual trunk   Each virtual trunk can be either Automatic Routing Management or VSI, not both


When a trunk is added, the entire bandwidth is allocated to Automatic Routing Management. To change the allocation in order to provide resources for a VSI, use the cnfrsrc command on the BPX switch.
A view of the resource partitioning available is shown in Figure 23-5.

Figure 23-5 Graphical View of Resource Partitioning, Automatic Routing Management, and VSI
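For example, to enable a VSI partition on a trunk and allocate resources to it, you might enter a cnfrsrc command of the following form (all values here are illustrative, and the command also prompts for each parameter interactively; see the Cisco WAN Switching Command Reference for the exact syntax):

cnfrsrc 4.1 256 26000 1 e 512 6144 2 15 26000 100500

In this hypothetical example, 256 PVC LCNs and 26000 cps are left for Automatic Routing Management on trunk 4.1, and partition 1 is enabled (e) with 512 minimum and 6144 maximum VSI LCNs, a VPI range of 2 to 15, and minimum and maximum VSI bandwidth of 26000 and 100500 cps.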

Multiple Partitioning

You can configure partition resources between Automatic Routing Management PVCs and three VSI controllers (LSC or PNNI). Up to three VSI controllers in different control planes can independently control the switch with no communication between controllers. The controllers are essentially unaware of the existence of other control planes sharing the switch. This is possible because the different control planes use different partitions of the switch resources.

You can add one or more redundant LSC controllers to one partition, and one or more redundant PNNI controllers to another partition. Release 9.2.3 added six new Service Class Templates (for a total of nine) for interfaces with multiple partitions controlled simultaneously by a PNNI controller and an LSC.

The master redundancy feature allows multiple controllers to control the same partition. In a multiple partition environment, master redundancy is independently supported on each partition.

The following limitations apply to multiple VSI partitioning:

Up to three VSI partitions are supported.

Resources cannot be redistributed amongst different VSI partitions.

The resources allocated to a partition are LCNs, bandwidth, and VPI range.

Resources are also allocated to Automatic Routing Management. The resources allocated to Automatic Routing Management can be freed from Automatic Routing Management and then allocated to VSI.

No multiple partitions on Virtual Trunks. A Virtual Trunk is managed by either Automatic Routing Management or by a single VSI partition.

Only one VSI controller can be added to a BPX interface. Other controllers must be added to different interfaces on the switch.

Compatibility

The card uses a flag in the capability message to report multiple partition capability. Firmware releases that do not support multiple partitions set this flag off. The multiple partitions capability is treated as a card attribute and added to the attribute list.

Use of a partition with an ID higher than 1 requires support for multiple VSI partitions in both switch software and BXM firmware, even if this is the only partition active on the card. In a Y-red pair configuration, the multiple partition capability is determined by the minimum of the two cards.

A card with no multiple partition capabilities will mismatch if any of the interfaces has an active partition with ID higher than 1. Attempts to enable a partition with ID higher than 1 in a logical card that does not support multiple partitions are blocked.

Resource Partitioning

A logical switch is configured by enabling the partition and allocating resources to the partition. This must be done for each of the interfaces in the partition. The same procedure must be followed to define each of the logical switches. As resources are allocated to the different logical switches, a partition of the switch resources is defined.

The resources that are partitioned amongst the different logical switches are:

LCNs

Bandwidth

VPI range

Resources are configured and allocated per interface, but the pool of resources may be managed at a different level. The pool of LCNs is maintained at the card level, and there are also limits at the port group level. The bandwidth is limited by the interface rate, and therefore the limitation is at the interface level. Similarly, the range of VPIs is defined at the interface level.

You configure the following parameters on a VSI partition on an interface:

min lcn: guaranteed LCNs for the partition on the interface.

max lcn: total number of LCNs the partition is allowed for setting up connections on the interface.

min bw: guaranteed bandwidth for the partition on the interface.

max bw: maximum bandwidth for this partition on the interface.

start vpi: the lower bound of the VPI range reserved for this partition on the interface.

end vpi: the upper bound of the VPI range reserved for this partition on the interface.

Partitioning Between Automatic Routing Management and VSI

In addition to partitioning of resources between VSI and Automatic Routing Management, multiple partitioning allows subpartitioning of the VSI space among multiple VSI partitions. Multiple VSI controllers can share the switch with each other and also with Automatic Routing Management.

The difference between the two types of partitioning is that all the VSI resources are under the control of the VSI-slave, while the management of Automatic Routing Management resources remains the province of the switch software.

Figure 23-6 Resource Partitioning Between Automatic Routing Management and VSI

The commands used for multiple partitioning are described in Table 23-3.

Table 23-3 Commands Used for Multiple Partitioning

Name             Description
dspvsipartinfo   Displays information about the current usage of partition resources.
dspchuse         Displays a summary of the channel distribution in a given slot.
dspvsiif         Displays the Service Class Template assigned to an interface, along with a summary of the resources allocated to each partition.
dspvsich         Displays the list of, and information about, the LCNs used for VSI control channels, including interslave channels and master-slave channels, for all controllers in all partitions.


Multiple Partition Example

Each logical switch can be seen as a collection of interfaces each with an associated set of resources.

Consider a BPX switch with the following four interfaces:

10.1

10.2.1

11.1

11.7.1

Also assume the resource partitioning listed in Table 23-4.

Figure 23-7 Virtual Switches

Table 23-4 Partitioning Example

Interface 10.1:
  Automatic Routing Management: Enabled; lcns: 2000; bw: 20000 cps; vpi: 1-199
  Partition 1: Enabled; lcns: 4000; bw: 30000 cps; vpi: 200-239
  Partition 2: Enabled; lcns: 4000; bw: 20000 cps; vpi: 240-255

Interface 10.2.1:
  Automatic Routing Management: Enabled; lcns: 10000; bw: 10000 cps; vpi: 200-200
  Partition 1: Disabled
  Partition 2: Disabled

Interface 11.1:
  Automatic Routing Management: Enabled; lcns: 2000; bw: 100000 cps; vpi: 1-199
  Partition 1: Enabled; lcns: 3000; bw: 50000 cps; vpi: 200-249
  Partition 2: Enabled; lcns: 4000; bw: 10000 cps; vpi: 250-255

Interface 11.7.1:
  Automatic Routing Management: Disabled
  Partition 1: Enabled; lcns: 5000; bw: 200000 cps; vpi: 250-250
  Partition 2: Disabled

Three virtual switches are defined by this configuration:

Automatic Routing Management:
10.1: 2000 lcns, 20000 cps, vpi: 1-199;
10.2.1: 10000 lcns, 10000 cps, vpi: 200;
11.1: 2000 lcns, 100000 cps, vpi: 1-199

Partition 1:
10.1: 4000 lcns, 30000 cps, vpi: 200-239;
11.1: 3000 lcns, 50000 cps, vpi: 200-249;
11.7.1: 5000 lcns, 200000 cps, vpi: 250-250

Partition 2:
10.1: 4000 lcns, 20000 cps, vpi: 240-255;
11.1: 4000 lcns, 10000 cps, vpi: 250-255

VSI Configuration Procedures

In the VSI control model, a controller sees the switch as a collection of slaves with their interfaces. The controller can establish connections between any two interfaces. The controller uses resources allocated to its partition.

Each VSI interface can be assigned a default Class of Service template upon activation. Use the switch software CLI or Cisco WAN Manager to configure a different template to an interface.

The procedure for adding a VSI-based controller such as the MPLS controller to the BPX is similar to adding an MGX 8220 interface shelf to the BPX. To attach a controller to a node to control the node, use the addshelf command.

The VSI controllers are allocated a partition of the switch resources. VSI controllers manage their partition through the VSI protocol. The controllers run the VSI master. The VSI master entity interacts with the VSI slave running on the BXMs through the VSI interface to set up VSI connections using the resources in the partition assigned to the controller.

To configure VSI resources on a given interface, use the cnfrsrc command.

This section provides the following basic procedures:

Add a controller

View controllers and interfaces

Delete a controller

Enable VSI ILMI functionality

Configure partition resources on VSI

Adding a Controller

To add an MPLS controller to any BXM trunk, use the addshelf command with the V (VSI) option.

To add an SES PNNI controller, use the addshelf command with an X option.

To identify VSI controllers and distinguish them from feeders, use the V (VSI) option of the addshelf command.

To add a SES PNNI controller to a BPX node through an AAL5 interface shelf or feeder type configured with VSI controller capabilities, use the addctrlr command.

If you are adding two controllers that are intended to be used in a redundant configuration, you must specify the same partition when you add them to the node by using the addshelf command.

To add an MPLS controller (or a generic VSI controller that does not need AnnexG protocol), use the following procedure.


Step 1 Up the trunk by using the uptrk command.

Step 2 Add an MPLS controller by using the addshelf command with feeder type set to "V".

Step 3 Display the controllers and interface shelves attached to the node by using the dspnode command.

Step 4 Display the VSI controllers on a BPX node by using the dspctrlrs command.

Note that addshelf and addtrk are mutually exclusive commands; that is, you can use either addshelf or addtrk, but not both on the same interface shelf.
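For example, a hypothetical session for adding an MPLS controller on trunk 4.1 (the slot.port and the controller ID and partition values are illustrative, and the argument order is a sketch; see the Cisco WAN Switching Command Reference for the exact addshelf syntax):

uptrk 4.1
addshelf 4.1 v 1 1 (VSI controller, controller ID 1, partition 1)
dspnode
dspctrlrs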


To add a PNNI controller, use the following procedure:


Step 1 Up a trunk interface by using the uptrk command.

Step 2 Configure resource on the trunk interface for the PNNI controller's control channels by using the cnfrsrc command.

Step 3 Add the SES PNNI to the BPX and enable AnnexG protocol to run between the BPX and the SES by using the addshelf command with feeder type set to "X".

Step 4 Enable the VSI capabilities on the trunk interface by using the addctrlr command.
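The following sketch shows the order of these operations with placeholder arguments (parameter lists are omitted here; each command prompts for, or accepts, the parameters documented in the Cisco WAN Switching Command Reference):

uptrk <slot.port>
cnfrsrc <slot.port> (enable a VSI partition and allocate resources for the control channels)
addshelf <slot.port> x (add the SES feeder and enable Annex G)
addctrlr <slot.port> (enable VSI controller capabilities on the interface)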


Viewing Controllers and Interfaces

Display commands such as dspnw and dspnode show interface shelves.

To view conditions on an interface shelf (feeder) trunk, use the dspnode command, which identifies the hub and interface shelf (feeder) nodes and shows the alarm status.

To view conditions of VSI controllers, use the dspctrlrs command to display all VSI controllers attached to the BPX. The VSI controllers are either a PNNI controller or an MPLS controller.

The designation for a Multiprotocol Label Switching (MPLS) controller serving as an interface shelf is LSC.

The external network management system can query the BPX via SNMP to discover VSI controller IDs and IP addresses.

Deleting a Controller

To delete a controller or interface (feeder) shelf, first delete it from the network. Then down the port and trunk. This applies to MPLS controllers or generic VSI controllers that do not need AnnexG protocols.

To delete an MPLS controller, use the following procedure.


Step 1 Delete an MPLS controller from a BPX node by using the delshelf command.

Step 2 Down the port by using the dnport command.

or

Step 3 Down the trunk by using the dntrk command.
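For example, to remove an MPLS controller attached to trunk 4.1 (values illustrative):

delshelf 4.1
dntrk 4.1

Use dnport instead of dntrk if the controller was attached to a port interface.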


To delete a PNNI controller, use the following procedure:


Step 1 Delete the VSI capabilities on the trunk interface by using the delctrlr command.

Step 2 Delete the SES attached to the trunk interface by using the delshelf command.

Step 3 Disable the VSI resource partition allocated for PNNI controller on the trunk interface by using the cnfrsrc command.

Step 4 Down the trunk interface (provided no other VSI partitions are active on the trunk interface) by using the dntrk command.


Configuring Partition Resources on Interfaces

This section is key for configuring VSIs.

Prior to Release 9.1, LCNs, VPI range, and bandwidth allocation were managed exclusively by the BCC. With the introduction of VSI, the switch software must also allocate a range of LCNs, a range of VPIs, and an amount of bandwidth for use by VSI.

When configuring resource partitions on a VSI interface, the following commands are typically used:

cnfrsrc

dsprsrc

dspvsipartinfo

dspvsipartcnf

uptrk

upln

upport

The next step to complete when adding a VSI-based controller such as an LSC or a PNNI controller is to configure resource partitions on BXM interfaces to allow the controller to control the BXM interfaces. To do this, first create resource partitions on these interfaces. Use the cnfrsrc command to add, delete and modify a partition on a specified interface.

You may have up to three VSI controllers on the same partition (referred to as VSI master redundancy). The master redundancy feature allows multiple VSI masters to control the same partition.

The cnfrsrc parameters, ranges and values, and descriptions are listed in Table 23-5. These descriptions are oriented to actions and behavior of the BXM firmware; in most cases, objects (messages) are sent to switch software. Most of these parameters appear on the cnfrsrc screen.

Table 23-5 cnfrsrc Parameters, Ranges/Values, and Descriptions

Parameter (Object) Name   Range/Values            Default   Description
VSI partition             1 to 3                  1         Identifies the partition.
Partition state           0 = Disable Partition   -         For Partition state = 1, the remaining objects are mandatory.
                          1 = Enable Partition
Min LCNs                  0 to 64K                -         Minimum LCNs (connections) guaranteed for this partition.
Max LCNs                  0 to 64K                -         Maximum LCNs permitted on this partition.
Start VPI                 0 to 4095               -         Partition start VPI.
End VPI                   0 to 4095               -         Partition end VPI.
Min Bw                    0 to line rate          -         Minimum partition bandwidth.
Max Bw                    0 to line rate          -         Maximum partition bandwidth.
PVC VPI Range 1           0 to 4095               -1        Dynamic partitioning.
PVC VPI Range 2           0 to 4095               -1        Dynamic partitioning.
PVC VPI Range 3           0 to 4095               -1        Dynamic partitioning.
PVC VPI Range 4           0 to 4095               -1        Dynamic partitioning.
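After configuring a partition, you can verify it with the dsprsrc command; for example (interface and partition numbers are illustrative):

dsprsrc 4.1 1

This displays the LCN, bandwidth, and VPI allocations for partition 1 on interface 4.1.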


Configuring Enhanced BXM Cards to Support 60K Connections

The Enhanced BXM models DX and EX can support up to 60K connections. BPX software releases prior to 9.3.30 support only up to 32K connections for both VSI and non-VSI connections. Current versions of the software allow attached VSI controllers to manage up to 60K connections per enhanced BXM card (model DX or EX). The maximum number of connections (PVCs) managed by the BCC remains 32K.

The software automatically configures any newly installed BXM-E DX or EX cards to support 60K connections. For existing BXM-E cards that had been configured for 32K connections, a new "super user" CLI command, upgdvsilcn, is provided to configure them to support 60K connections. Once the card has been configured for 60K connections, VSI resource partitions on the card can be changed by using the cnfrsrc command to take advantage of the added number of connections available for VSI.

The upgdvsilcn command can be executed on a standalone BXM configuration or a Y-redundancy configuration. The command is hitless and does not impact existing connections. For a detailed discussion of the upgdvsilcn and related commands, refer to the Cisco WAN Switching Command Reference.

Soft and Dynamic Partitioning

Soft and Dynamic Partitioning (new in Release 9.3.10) supports smooth introduction of another VSI controller into an existing BPX network already configured with an existing VSI controller, easier tuning of switch resources, and the migration of Automatic Routing Management to PNNI.

Soft Partitioning provides resource guarantees for LCNs and bandwidth per partition, plus a pool of resources available to all partitions in addition to the guaranteed resources. Dynamic Partitioning provides the ability to increase the allocation of a resource to a partition with relative ease.

Define and manage the number of LCNs assigned to a given VSI partition by modifying the "Minimum VSI LCNs" and "Maximum VSI LCNs" fields of the cnfrsrc CLI command.

To increase the LCNs reserved for a VSI partition (that is, to move LCNs from Automatic Routing Management to VSI), increase the "Minimum VSI LCNs" or "Maximum VSI LCNs" fields of the appropriate VSI partition. The VSI LCN boundary is moved into the Automatic Routing Management space if there are enough free Automatic Routing Management LCNs to fulfill the request.

If there are not enough free LCNs in the Automatic Routing Management (AR) space, the cnfrsrc command does not fulfill a request to increase the VSI LCN space. In such a case, the cnfrsrc command displays a failure message showing the number of currently free AR LCNs. You can reissue the cnfrsrc command specifying a smaller increase to the VSI partition. If that is not acceptable, you must first delete and reroute the necessary number of AR connections. Then you can attempt cnfrsrc again.

Moving the VSI LCN boundary into the Automatic Routing Management space might step over LCNs that are currently allocated. BPX software reprograms the necessary channels so that new channels out of the lower AR LCN space are picked instead. Before starting the process of reprogramming the necessary number of AR connections, the cnfrsrc command displays a warning message and waits for your permission to proceed. The warning message shows the number of Automatic Routing Management (AR) connections that will be reprogrammed. After the necessary channels are reprogrammed, the LCN boundary is moved into the Automatic Routing Management space.
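For example, assuming partition 1 on trunk 4.1 was configured as in the earlier illustrative cnfrsrc example, a hypothetical reissue that raises the maximum VSI LCNs from 6144 to 8192 might look like this:

cnfrsrc 4.1 256 26000 1 e 512 8192 2 15 26000 100500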


Note You can migrate Automatic Routing Management connections only if the VPI range of the recipient VSI partition is adjacent to the Automatic Routing Management space. Migrating Automatic Routing Management connections to a nonadjacent VSI partition requires using different VPIs within the recipient partition's VPI range.


Assigning a Service Template to an Interface

The ATM Class of Service templates (or Service Class Template, SCT) provide a means of mapping a set of extended parameters. These are generally platform specific, based on the set of standard ATM parameters passed to the VSI slave in a BXM port interface during initial setup of the interface.

A set of service templates is stored in each BPX 8650 switch and downloaded to the service modules (BXMs) as needed during initial configuration of the VSI interface when a trunk or line is enabled on the BXM.

Each service template type has an associated Qbin. The Qbins provide the ability to manage bandwidth by temporarily storing cells and then serving them out based on a number of factors, including bandwidth availability and the relative priority of different Classes of Service.

When ATM cells arrive from the Edge LSR at the BXM port with one of four CoS labels, they receive CoS handling based on that label. A table look-up is performed, and the cells are processed, based on their connection classification. Based on its label, a cell receives the ATM differentiated service associated with its template type and service type (for example, label cos2 bw), plus associated Qbin characteristics and other associated ATM parameters.

A default service template is automatically assigned to a logical interface (VI) when you up the interface by using the commands upport and uptrk. The corresponding Qbin template is then copied into the card's (BXM) data structure of that interface.

Following are some examples of assigning a default service template by using the commands upport and uptrk:

uptrk 1.1

uptrk 1.1.1 (virtual trunk)

upport 1.1

This default template has the identifier of 1. To change the service template from service template 1 to another service template, use the cnfvsiif command.

To assign a selected service template to an interface (VI) use the cnfvsiif command, specifying the template number. It has this syntax:

cnfvsiif <slot.port.vtrk> <tmplt_id>

For example:

cnfvsiif 1.1 2
cnfvsiif 1.1.1 2

Use the dspvsiif command to display the type of service template assigned to an interface (VI). It has the following syntax:

dspvsiif <slot.port.vtrk>

For example:

dspvsiif 1.1
dspvsiif 1.1.1

To change some of the template's Qbin parameters, use the cnfqbin command. The Qbin is now "user configured" as opposed to "template configured."

To view this information, use the command dspqbin.
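For example, a hypothetical sequence (interface and Qbin numbers are illustrative; cnfqbin prompts for each Qbin parameter):

cnfqbin 4.1 10
dspqbin 4.1 10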

SCT Commands

The Service Class Template (SCT) commands are described in Table 23-6.

Table 23-6 Commands Used for the Service Class Template 

Name        Description

dspsct      Displays the SCT number assigned to an interface. The command has three levels of operation:
            dspsct—With no arguments, lists all the service templates resident in the node.
            dspsct <tmplt_id>—Lists all the Service Classes in the template.
            dspsct <tmplt_id> <service_class>—Lists all the parameters of that Service Class.

dspqbint    Displays the Qbin templates.

cnfqbin     Configures the Qbin. You can answer yes when prompted, and the command will use the card Qbin values from the Qbin templates.

dspqbin     Displays Qbin parameters currently configured for the virtual interface.

dspcd       Displays the card configuration.


Configuring the BXM Card's Qbin

When you activate an interface by using an uptrk or upport command, a default service template (MPLS1) is automatically assigned to that interface. The corresponding Qbin templates are simultaneously set up in the BXM's data structure for that interface. This service template has an identifier of "1".

To change the service template assigned to an interface, use the cnfvsiif command. You can do this only when there are no active VSI connections on the BXM.

To display the assigned templates, use the dspvsiif command.

Each template table row includes an entry that defines the Qbin to be used for that Class of Service
(see Figure 23-10).

This mapping defines a relationship between the template and the interface Qbin's configuration.

A Qbin template defines a default configuration for the set of Qbins for the logical interface. When a template assignment is made to an interface, the corresponding default Qbin configuration becomes the interface's Qbin configuration.

Once a service template has been assigned, you can then adjust some of the parameters of this configuration on a per-interface basis. Changes you make to the Qbin configuration of an interface affect only that interface's Qbin configuration. Your changes do not affect the Qbin template assigned to that interface.

To change the template's configuration of the interface, provide new values by using the cnfqbin command. The Qbin is now "user configured" as opposed to "template configured." This information is displayed on the dspqbin screen, which indicates whether the values in the Qbin are from the template assigned to the interface, or whether the values have been changed to user-defined values.

To see the Qbin's default service type and the Qbin number, execute the dspsct command.

Use the following commands to configure Qbins:

cnfqbin

dspqbin

dspqbint

Enabling VSI ILMI Functionality for the PNNI Controller

You need to enable VSI ILMI functionality on VSI-enabled trunk interfaces when using PNNI. Note that VSI ILMI functionality cannot be enabled on trunks to which feeders or VSI controllers are attached.

To enable VSI ILMI functionality on physical trunk interfaces, use the following procedure.


Step 1 Up a physical trunk by using the uptrk command.

Step 2 Configure the trunk to enable ILMI protocol to run on the BXM card by enabling the "Protocol by the card" option of the cnftrk command.

Step 3 Configure a VSI partition on the trunk interface by using the cnfrsrc command.

Step 4 Enable VSI ILMI session for the VSI partition by using the cnfvsipart command.
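A sketch of this sequence with illustrative values (the cnfvsipart argument form shown here is an assumption; see the Cisco WAN Switching Command Reference for the exact syntax):

uptrk 4.1
cnftrk 4.1 (enable the "Protocol by the card" option at the prompt)
cnfrsrc 4.1 (enable a VSI partition and allocate resources)
cnfvsipart 4.1 1 e (enable the VSI ILMI session for partition 1)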


To enable VSI ILMI functionality on virtual trunk interfaces, use the following procedure.


Step 1 Up a virtual trunk by using the uptrk command.

Step 2 Configure the trunk VPI by using the cnftrk command. (ILMI automatically runs on the BXM card for virtual trunks; this is not configurable through the cnftrk command.)

Step 3 Configure a VSI partition on the virtual trunk interface by using the cnfrsrc command.

Step 4 Enable VSI ILMI functionality for the VSI partition by using the cnfvsipart command.


Note VSI ILMI can be enabled for only one VSI partition on the trunk interface.


To display VSI ILMI functionality on interfaces, use the dspvsipartcnf command to display the VSI ILMI status (whether enabled or not) for various VSI partitions on the interface.


VSIs and Virtual Trunking

The VSI virtual trunking feature lets you use BXM virtual trunks as VSI interfaces. Using this capability, VSI master controllers can terminate connections on virtual trunk interfaces.

Activate and configure VSI resources on a virtual trunk using the same commands you use to configure physical interfaces (for example, cnfrsrc, dsprsrc). The syntax used to identify a trunk has an optional virtual trunk identifier that you append to the slot and port information to identify virtual trunk interfaces.

A virtual trunk is a VPC that terminates at each end on the switch port. Each virtual trunk can contain up to 64,000 VCCs, but it cannot contain any VPCs.

Virtual trunk interfaces cannot be shared between VSI and Automatic Routing Management. Therefore, configuring a trunk as a VSI interface prevents you from adding the trunk as an Automatic Routing Management trunk. Similarly, a trunk that has been added to the Automatic Routing Management topology cannot be configured as a VSI interface.

Virtual trunks on the BPX use a single configurable VPI. Because virtual trunk interfaces are dedicated to VSI, the entire range of VCIs is available to the VSI controllers.

The virtual trunking feature introduces the concept of defining multiple trunks within a single trunk port interface. This creates a fan-out capability on the trunk card.

Once VSI is enabled on the virtual trunk, Automatic Routing Management does not include this trunk in its route selection process.

To configure a VSI virtual trunk, use the following procedure.


Step 1 Activate the virtual trunk by using the command
uptrk <slot.port.vtrunk>

Step 2 Set up VPI value and trunk parameters by using the command
cnftrk <slot.port.vtrunk>

Step 3 Enable VSI partition by using the command
cnfrsrc <slot.port.vtrunk>
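For example, to bring up virtual trunk 1 on port 4.1 (values illustrative):

uptrk 4.1.1
cnftrk 4.1.1 (set the VPI and trunk parameters)
cnfrsrc 4.1.1 (enable a VSI partition on the virtual trunk)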


VSI Master and Slave Redundancy

The ability to have multiple VSI controllers is referred to as VSI master redundancy. Master redundancy enables multiple VSI masters to control the same partition.

You add a redundant controller by using the addshelf command, the same way you add an interface (feeder) shelf, except that you specify a partition that is already in use by another controller.

The capability can be used by the controllers for cooperative or exclusive redundancy as follows:

Cooperative redundancy
Both controllers can be active in a partition, and can control the resources simultaneously.

Exclusive redundancy
Only one controller is active at a time. It is up to the controllers to resolve which should be active.

The switch software has no knowledge of the state of the controllers. The state of the controllers is determined by the VSI entities. From the point of view of the BCC, there is no difference between cooperative redundant controllers and exclusive redundant controllers.

For illustrations of a VSI Master and Slave, see Figure 23-3. For an illustration of a switch with redundant controllers that support master redundancy, see Figure 23-8.

Switch software supports master redundancy in the following methods:

It allows you to add multiple controllers to control the same partition.

It sets up the control master-slave VCs between each of the controller ports and the slaves in the node.

It provides controller information to the slaves. The slaves advertise this information to the controllers in the partition. The controllers can then use this information to set up an intermaster channel.

The intercontroller communication channel is set up by the controllers. This could be an out-of-band channel, or the controllers can use the controller interface information advertised by the VSI slaves to set up an intermaster channel through the switch.

Figure 23-8 shows a switch with redundant controllers and the connectivity required to support master redundancy.

Figure 23-8 Switch with Redundant Controllers to Support Master Redundancy

The controller application and Master VSI reside in an external VSI controller (MPLS or PNNI), such as the Cisco 6400 or the MPLS controller in a 7200 or 7500 series router. The VSI slaves are resident in BXM cards on the BPX node.

Master Redundancy

You add a VSI controller, such as an MPLS or PNNI controller, by using the addshelf command with the VSI option. The VSI option of the addshelf command identifies the VSI controllers and distinguishes them from interface shelves (feeders).

The VSI controllers are allocated a partition of the switch resources. VSI controllers manage their partition through the VSI interface.

The controllers run the VSI master. The VSI master entity interacts with the VSI slave running on the BXMs through the VSI interface to set up VSI connections using the resources in the partition assigned to the controller.

Two controllers intended to be used in a redundant configuration must specify the same partition when added to the node with the addshelf command.

When a controller is added to the node, switch software will set up the infrastructure so that the controllers can communicate with the slaves in the node. The VSI entities decide how and when to use these communication channels.

In addition, the controllers require a communication channel between them. This channel could be in-band or out-of-band. When a controller is added to the switch, switch software will send controller information to the slaves. This information is advertised to all the controllers in the partition. The controllers may decide to use this information to set up an intermaster channel. Alternatively, the controllers may use an out-of-band channel to communicate.

The maximum number of controllers that can be attached to a given node is limited by the maximum number of feeders that can be attached to a BPX hub. The total number of interface shelves (feeders) and controllers is 16.

Slave Redundancy

Prior to Release 9.2, hot standby functionality was supported only for Automatic Routing Management connections. This was accomplished by the BCC keeping both the active and standby cards in sync with respect to all configuration, including all connections set up by the BCC. However, the BCC does not participate in, nor is it aware of, the VSI connections that are set up independently by the VSI controllers.

Therefore, the task of keeping the redundant card in a hot standby state (for all the VSI connections) is the responsibility of the two redundant pair slaves. This is accomplished by a bulk update (on the standby slave) of the existing connections at the time that (line and trunk) Y-redundancy is added, as well as an incremental update of all subsequent connections.

The hot standby slave redundancy feature enables the redundant card to fully duplicate all VSI connections on the active card, and to be ready for operation on switchover. On bringup, the redundant card initiates a bulk retrieval of connections from the active card for fast sync-up. Subsequently, the active card updates the redundant card on a real-time basis.

The VSI Slave Hot Standby Redundancy feature provides the capability for the slave standby card to be preprogrammed the same as the active card so that when the active card fails, the slave card switchover operation can be done quickly (within 250 ms). Without the VSI portion, the BXM card already provided the hot standby mechanism by duplicating CommBus messages from the BCC to the standby BXM card.

The following sections describe some of the communication between the switch software and firmware to support VSI master and slave redundancy.

VSI Slave Redundancy Mismatch Checking

To provide a smooth migration of the VSI feature on the BXM card, line and trunk Y-redundancy is supported. You can pair cards with and without the VSI capability as a Y-redundant pair if the feature is not configured on the given slot. As long as the feature is not configured on a given slot, switch software will not perform "mismatch checking" if the BXM firmware does not support the VSI feature.

A maximum of two partitions is possible. The card uses a flag in the capability message to report multiple partition capability. Firmware releases that do not support multiple partitions set this flag to OFF. The multiple partitions capability is treated as a card attribute and added to the attribute list.

In a Y-red pair configuration, the multiple partition capability is determined by the minimum of the two cards. A card with no multiple partition capabilities will mismatch if any of the interfaces has an active partition with ID higher than 1. Attempts to enable a partition with ID higher than 1 in a logical card that does not support multiple partitions are blocked.

What Happens When You Add a Controller

You add a controller, including Label Switch Controllers, to a node by using the addshelf command. You add a redundant controller in the same way, except that you specify a partition that may already be in use by another controller. The addshelf command allows for the addition of multiple controllers that manage the same partition.

Use the addctrlr command to attach a controller to a node for the purposes of controlling the node for controllers that require Annex G capabilities in the controller interface. Note that you must first add the shelf by using the addshelf command.

You add VSI capabilities to the interface by using the addctrlr command. The only interface that supports this capability is an AAL5 feeder interface.

When adding a controller, you must specify a partition ID. The partition ID identifies the logical switch assigned to the controller. The valid partitions are 1 and 2. The user interface blocks the activation of partitions with ID higher than 1 if the card does not support multiple partitions.

To display the list of controllers in the node, use the command dspctrlrs.

The functionality is also available via SNMP using the switchIfTable in the switch MIB.

You can add one or more redundant MPLS controllers to one partition, and one or more redundant PNNI controllers to the other partition.

When using the addshelf command to add a VSI controller to the switch, you must specify the controller ID. This is a number between 1 and 32 that uniquely identifies the controller. Two different controllers must always be specified with different controller IDs.


Note The Controller ID for a PNNI controller must be 2.


The management of resources on the VSI slaves requires that each slave in the node has a communication control VC to each of the controllers attached to the node. When a controller is added to the BPX by using the addshelf command, the BCC sets up the set of master-slave connections between the new controller port and each of the active slaves in the switch.

The connections are set up using a well-known VPI.VCI. The value of the VPI is 0. The value of the VCI is (40 + (slot - 1)), where slot is the logical slot number of the slave. These are the default values; you can modify them by using the addctrlr command.
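For example, the control VC to a slave in logical slot 5 uses VPI 0 and VCI 44 (that is, 40 + (5 - 1)).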


Note Once the controllers have been added to the node, the connection infrastructure is always present. Depending on their state, the controllers may decide to use it or not.


The addition of a controller to a node will fail if there are not enough channels available to set up the control VCs in one or more of the BXM slaves.

The BCC also informs the slaves of the new controller through a VSI configuration CommBus message (the BPX's internal messaging protocol). The message includes a list of controllers attached to the switch and their corresponding controller IDs. This internal firmware command includes the interface where the controller is attached. This information, when advertised by the slaves, can be used by the controllers to set up an inter-master communication channel.

When the first controller is added, the BCC behaves as it did in releases previous to Release 9.2. The BCC will send a VSI configuration CommBus message to each of the slaves with this controller information, and it will set up the corresponding control VCs between the controller port and each of the slaves.

When a new controller is added to drive the same partition, the BCC will send a VSI configuration CommBus message with the list of all controllers in the switch, and it will set up the corresponding set of control VCs from the new controller port to each of the slaves.

What Happens When You Delete a Controller

To delete a controller from the switch, use either delshelf or delctrlr.

Use the command delshelf to delete generic VSI controllers.

Use the command delctrlr to delete controllers that have been added to Annex G-capable interfaces.

When one of the controllers is deleted by using the delshelf command, the master-slave connections associated with this controller are deleted. The control VCs associated with other controllers managing the same partition will not be affected.

The deletion of the controller triggers a new VSI configuration (internal) CommBus message. This message includes the list of the controllers attached to the node. The deleted controller is removed from the list. This message is sent to all active slaves in the shelf. In cluster configurations, the deletion of a controller is communicated to the remote slaves by the slave directly attached through the interslave protocol.

While there is at least one controller attached to the node controlling a given partition, the resources in use on this partition should not be affected by a controller having been deleted. Only when a given partition is disabled will the slaves release all the VSI resources used on that partition.

The addshelf command allows multiple controllers on the same partition. You are prompted to confirm the addition of a new VSI shelf with a warning message indicating that the partition is already used by a different controller.

What Happens When a Slave Is Added

When a new slave is activated in the node, the BCC will send a VSI configuration CommBus (internal BPX protocol) message with the list of the controllers attached to the switch.

The BCC will also set up a master-slave connection from each controller port in the switch to the added slave.

What Happens When a Slave is Deleted

When a slave is deactivated in the node, the BCC will tear down the master-slave VCs between each of the controller ports in the shelf and the slave.

Managing Resources

VSI LCNs are used for setting up the following management channels:

interslave

master-slave

intershelf blind channels

Intershelf blind channels are used in cluster configuration for communication between slaves on both sides of a trunk between two switches in the same cluster node.

The maximum number of slaves in a switch is 12. Therefore, a maximum of 11 LCNs are necessary to connect a slave to all other slaves in the node.

If a controller is attached to a shelf, master-slave connections are set up between the controller port and each of the slaves in the shelf.

For each slave that is not directly connected, the master-slave control VC consists of two legs:

One leg from the VSI master to the backplane, through the directly connected slave

A second leg from the backplane to the corresponding VSI slave

For the slave that is directly connected to the controller, the master-slave control VC consists of a single leg between the controller port and the slave. Therefore, 12 LCNs are needed in the directly connected slave, and 1 LCN in each of the other slaves in the node for each controller attached to the shelf.

These LCNs are allocated from the Automatic Routing Management pool. This pool is used by Automatic Routing Management to allocate LCNs for connections and networking channels.

For a given slave, the number of VSI management LCNs required from the common pool is:

n × 12 + m

where:

n is the number of controllers attached to this slave

m is the number of controllers in the switch directly attached to other slaves
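For example, a slave with two controllers attached directly to it (n = 2) and one more controller attached to another slave in the node (m = 1) requires 2 × 12 + 1 = 25 management LCNs from the common pool.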

VSI Slave Redundancy (Hot Slave Redundancy)

The function of the slave hot standby is to preprogram the standby slave card the same as the active card so that when the active card fails, the slave card switchover operation can be done quickly (within 250 ms). Without the VSI portion, the BXM card already provided the hot standby mechanism by duplicating CommBus (internal BPX protocol) messages from the BCC to the standby BXM card.

Because the master VSI controller does not recognize the standby slave card, the active slave card forwards VSI messages it received from the Master VSI controller to the standby Slave VSI card.

Also, when the standby slave VSI card first starts (either by having been inserted into the slot, or when you issue the addyred command from the CLI console), the active slave VSI card needs to forward all VSI messages it has received from the master VSI controller to the standby slave VSI card.

In summary, these are the hot standby operations between active and standby card:

1. CommBus messages are duplicated to the standby slave VSI card by the BCC.
Operation 1 requires no new implementation because the BCC already does this.

2. VSI messages (from the master VSI controller or another slave VSI card) are forwarded to the standby slave VSI card by the active slave VSI card.
Operation 2 is normal data transfer, which occurs after both cards are in sync.

3. When the standby slave VSI card starts up, it retrieves all VSI messages from the active slave VSI card and processes these messages.
Operation 3 is the initial data transfer, which occurs when the standby card first starts up.

The data transfer from the active card to the standby card should not affect the performance of the active card. Therefore, the standby card takes most of the actions, simplifying the operations in the active card. The standby card drives the data transfer and performs the synchronization. The active card just forwards VSI messages and responds to the standby card's requests.

Class of Service Templates and Qbins

Class of Service Templates (COS Templates) provide a means of mapping a set of standard connection protocol parameters to "extended" platform-specific parameters. Full Quality of Service (QoS) implies that each VC is served through one of a number of Class of Service buffers (Qbins), which are differentiated by their QoS characteristics.

A Qbin template defines a default configuration for the set of Qbins for a logical interface. When you assign a template to an interface, the corresponding default Qbin configuration is copied to this interface's Qbin configuration and becomes the current Qbin configuration for this interface.

Qbin templates deal only with Qbins that are available to VSI partitions, which are 10 through 15. Qbins 10 through 15 are used by VSI on interfaces configured as trunks or ports. The rest of the Qbins are reserved and configured by Automatic Routing Management.

How Service Class Templates Work

The Service Class templates provide a means of mapping a set of extended parameters, which are generally platform specific, based on the set of standard ATM parameters passed to the VSI slave during connection setup.

A set of service templates is stored in each switch (such as BPX) and downloaded to the service modules (such as BXMs) as needed.

The service templates contain two classes of data:

Parameters necessary to establish a connection (that is, per VC), including entries such as UPC actions, various bandwidth-related items, per VC thresholds, and so on.

Parameters necessary to configure the associated Class of Service buffers (Qbins) that provide QoS support.

The general types of parameters passed from a VSI Master to a Slave include:

A service type identifier

QoS parameters (CLR, CTD, CDV)

Bandwidth parameters (such as PCR, MCR)

Other ATM Forum Traffic Management 4.0 parameters

Each VC added by a VSI master is assigned to a specific Service Class by means of a 32-bit service type identifier. The following are the current identifiers:

ATM Forum service types

Automatic Routing Management

MPLS Switching

When a connection setup request is received from the VSI master in the Label Switch Controller, the VSI slave (in the BXM, for example) uses the service type identifier to index into a Service Class Template database containing extended parameter settings for connections matching that index. The slave uses these values to complete the connection setup and program the hardware.

One of the parameters specified for each service type is the particular BXM Class of Service buffer (Qbin) to use. The Qbin buffers provide separation of service type to match the QoS requirements.

Service templates on the BPX are maintained by the BCC and are downloaded to the BXM cards as part of the card configuration process for:

Y-red card additions

BCC (control card) switchovers

Cards with active interfaces that are reset (hardware reset)

BCC (control card) rebuilds

The templates are nonconfigurable.

Structure of Service Class Templates

There are three types of templates:

VSI Special Types

ATMF Types

MPLS Types

You can assign any one of the nine templates to a Virtual Switch Interface (see Figure 23-9).

Each template table row includes an entry that defines the Qbin to be used for that Class of Service. See Figure 23-9 for an illustration of how Service Class databases map to Qbins. This mapping defines a relationship between the template and the interface Qbin's configuration.

A Qbin template defines a default configuration for the set of Qbins for the logical interface. When a template assignment is made to an interface, the corresponding default Qbin configuration becomes the interface's Qbin configuration.

Some of the parameters of the interface's Qbin configuration can be changed on a per-interface basis. Such changes affect only that interface's Qbin configuration and no others, and do not affect the Qbin templates.

Figure 23-9 Service Template Overview

Qbin templates are used only with Qbins that are available to VSI partitions, specifically, Qbins 10 through 15. Qbins 10 through 15 are used by the VSI on interfaces configured as trunks or ports. The rest of the Qbins (0-9) are reserved for and configured by Automatic Routing Management.

Because each template row maps a Class of Service to a Qbin, a default Qbin configuration must be defined and associated with the template.


Note The default Qbin configuration, although sometimes referred to as a "Qbin template," behaves differently from the Class of Service templates.


Figure 23-10 Service Template and Associated Qbin Selection

Extended Service Types Support

The service-type parameter for a connection is specified in the connection bandwidth information parameter group. The service-type and service-category parameters determine the Service Class to be used from the service template.

Supported Service Categories

There are five major service categories and several subcategories. The major service categories are described in Table 23-7. The supported service subcategories are listed in Table 23-8.

Table 23-7 Service Category Listing

Service Category    Service Type Identifier
CBR                 0x0100
VBR-rt              0x0101
VBR-nrt             0x0102
UBR                 0x0103
ABR                 0x0104


Supported Service Types

The service type identifier is a 32-bit number.

There are three service types:

VSI Special Types

ATMF Types

MPLS Types

The service type identifier appears on the dspsct screen when you specify a Service Class template number and service type; for example:

dspsct <2> <vbrrt1>

The supported template types, service type identifiers, service types, and associated Qbins are listed in Table 23-8.

Table 23-8 Service Type Listing

Template Type       Service Type Identifier   Service Type                    Associated Qbin
VSI Special Types   0x0000                    Null                            -
                    0x0001                    Default                         13
                    0x0002                    Signaling                       10
ATMF Types          0x0100                    CBR.1                           10
                    0x0101                    VBR.1-RT                        11
                    0x0102                    VBR.2-RT                        11
                    0x0103                    VBR.3-RT                        11
                    0x0104                    VBR.1-nRT                       12
                    0x0105                    VBR.2-nRT                       12
                    0x0106                    VBR.3-nRT                       12
                    0x0107                    UBR.1                           13
                    0x0108                    UBR.2                           13
                    0x0109                    ABR                             14
                    0x010A                    CBR.2                           10
                    0x010B                    CBR.3                           10
MPLS Types          0x0200                    label cos0, per-class service   10
                    0x0201                    label cos1, per-class service   11
                    0x0202                    label cos2, per-class service   12
                    0x0203                    label cos3, per-class service   13
                    0x0204                    label cos4, per-class service   10
                    0x0205                    label cos5, per-class service   11
                    0x0206                    label cos6, per-class service   12
                    0x0207                    label cos7, per-class service   13
                    0x0210                    label ABR (Tag w/ ABR flow control)   14
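Judging from the identifier values in Table 23-8, the high byte of the 32-bit identifier selects the template type and the low byte selects the individual service. The following Python sketch decodes an identifier on that assumption; the split is inferred from the table, not stated by the VSI specification here.

# Assumption inferred from Table 23-8: high byte 0x00 = VSI Special,
# 0x01 = ATMF, 0x02 = MPLS; the low byte picks the individual service.
TEMPLATE_TYPE = {0x00: "VSI Special Types", 0x01: "ATMF Types", 0x02: "MPLS Types"}

def decode_service_type(identifier):
    high = (identifier >> 8) & 0xFF
    low = identifier & 0xFF
    return TEMPLATE_TYPE[high], low

print(decode_service_type(0x0102))   # ('ATMF Types', 2) -> VBR.2-RT
print(decode_service_type(0x0210))   # ('MPLS Types', 16) -> label ABR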


VC Descriptors

A summary of the parameters associated with each of the service templates is provided in Table 23-9 through Table 23-12. Table 23-13 describes these parameters and lists the range of values that can be configured when the template does not assign a fixed value.

Table 23-9 lists the parameters associated with Default (0x0001) and Signaling (0x0002) service template categories.

Table 23-9 VSI Special Service Types

Parameter                   VSI Default (0x0001)   VSI Signalling (0x0002)
Qbin Number                 10                     15
UPC Enable                  0                      *
UPC CLP Selection           0                      *
Policing Action (GCRA #1)   0                      *
Policing Action (GCRA #2)   0                      *
PCR                         -                      300 kbps
MCR                         -                      300 kbps
SCR                         -                      -
ICR                         -                      -
MBS                         -                      -
CoS Min BW                  0                      *
CoS Max BW                  0                      *
Scaling Class               3                      3
CAC Treatment ID            1                      1
VC Max Threshold            Q_max/4                *
VC CLPhi Threshold          75                     *
VC CLPlo Threshold          30                     *
VC EPD Threshold            90                     *
VC EFCI Threshold           60                     *
VC discard selection        0                      *


Table 23-10 and Table 23-11 list the parameters associated with the PNNI service templates.

Table 23-10 ATM Forum Service Types, CBR, UBR, and ABR

Parameter                   CBR.1     CBR.2     CBR.3     UBR.1     UBR.2     ABR
Qbin Number                 10        10        10        13        13        14
UPC Enable                  1         1         1         1         1         1
UPC CLP Selection           *         *         *         *         *         *
Policing Action (GCRA #1)   *         *         *         *         *         *
Policing Action (GCRA #2)   *         *         *         *         *         *
PCR
MCR                         -         -         -         *         *         *
SCR                         -         -         -         50        50        *
ICR                         -         -         -         -         -         *
MBS                         -         -         -         -         -         *
CoS Min BW                  0         0         0         0         0         0
CoS Max BW                  100       100       100       100       100       100
Scaling Class               *         *         *         *         *         *
CAC Treatment ID            *         *         *         *         *         *
VC Max Threshold            *         *         *         *         *         *
VC CLPhi Threshold          *         *         *         *         *         *
VC CLPlo Threshold          *         *         *         *         *         *
VC EPD Threshold            *         *         *         *         *         *
VC EFCI Threshold           *         *         *         *         *         *
VC discard selection        *         *         *         *         *         *
VS/VD/FCES                  -         -         -         -         -         *
ADTF                        -         -         -         -         -         500
RDF                         -         -         -         -         -         16
RIF                         -         -         -         -         -         16
NRM                         -         -         -         -         -         32
TRM                         -         -         -         -         -         0
CDF                                                                           16
TBE                         -         -         -         -         -         16777215
FRTT                        -         -         -         -         -         *


Table 23-11 ATM Forum VBR Service Types

Parameter                   Vbrrt.1   Vbrrt.2   Vbrrt.3   Vbrnrt.1   Vbrnrt.2   Vbrnrt.3
Qbin Number                 11        11        11        12         12         12
UPC Enable                  1         1         1         1          1          1
UPC CLP Selection           *         *         *         *          *          *
Policing Action (GCRA #1)   *         *         *         *          *          *
Policing Action (GCRA #2)   *         *         *         *          *          *
PCR
MCR                         *         *         *         *          *          *
SCR                         *         *         *         *          *          *
ICR                         -         -         -         -          -          -
MBS                         *         *         *         *          *          *
CoS Min BW                  0         0         0         0          0          0
CoS Max BW                  100       100       100       100        100        100
Scaling Class               *         *         *         *          *          *
CAC Treatment ID            *         *         *         *          *          *
VC Max Threshold            *         *         *         *          *          *
VC CLPhi Threshold          *         *         *         *          *          *
VC CLPlo Threshold          *         *         *         *          *          *
VC EPD Threshold            *         *         *         *          *          *
VC EFCI Threshold           *         *         *         *          *          *
VC discard selection        *         *         *         *          *          *

* indicates not applicable


The connection parameters and their default values for label switching service templates are listed in Table 23-12.

Table 23-12 MPLS Service Types

Parameter                   CoS 0/4   CoS 1/5   CoS 2/6   CoS 3/7   Tag-ABR
Qbin Number                 10        11        12        13        14
UPC Enable                  0         0         0         0         0
UPC CLP Selection           0         0         0         0         0
Policing Action (GCRA #1)   0         0         0         0         0
Policing Action (GCRA #2)   0         0         0         0         0
PCR                         -         -         -         -         cr/10
MCR                         -         -         -         -         0
SCR                         -         -         -         -         P_max
ICR                         -         -         -         -         100
MBS                         -         -         -         -         -
CoS Min BW                  0         0         0         0         0
CoS Max BW                  0         0         0         0         100
Scaling Class               3         3         2         1         2
CAC Treatment               1         1         1         1         1
VC Max                      Q_max/4   Q_max/4   Q_max/4   Q_max/4   cr/200ms
VC CLPhi                    75        75        75        75        75
VC CLPlo                    30        30        30        30        30
VC EPD                      90        90        90        90        90
VC EFCI                     60        60        60        60        30
VC discard selection        0         0         0         0         0
VS/VD/FCES                  -         -         -         -         0
ADTF                        -         -         -         -         500
RDF                         -         -         -         -         16
RIF                         -         -         -         -         16
NRM                         -         -         -         -         32
TRM                         -         -         -         -         0
CDF                         -         -         -         -         16
TBE                         -         -         -         -         16777215
FRTT                        -         -         -         -         0


VC Descriptor Parameters

Table 23-13 describes the connection parameters and, for parameters that are not preconfigured by the template, lists the range of values that can be configured.

Not every Service Class includes all parameters; for example, a CBR service type has fewer parameters than an ABR service type.


Note Not every Service Class has a value defined for every parameter listed in Table 23-13.


Table 23-13 Connection Parameter Descriptions and Ranges

Object Name                                    Range/Values                          Template Units
Qbin Number                                    10-15                                 Qbin #
Scaling Class                                  0-3                                   enumeration
CDVT                                           0-5M (5 sec)                          secs
MBS                                            1-5M                                  cells
ICR                                            MCR-PCR                               cells
MCR                                            50-LR                                 cells
SCR                                            MCR-LineRate                          cells
UPC Enable                                     0-Disable GCRAs                       enumeration
                                               1-Enable GCRAs
                                               2-Enable GCRA #1
                                               3-Enable GCRA #2
UPC CLP Selection                              0-Bk 1: CLP (0+1), Bk 2: CLP (0)      enumeration
                                               1-Bk 1: CLP (0+1), Bk 2: CLP (0+1)
                                               2-Bk 1: CLP (0+1), Bk 2: Disabled
Policing Action (GCRA #1)                      0-Discard                             enumeration
                                               1-Set CLP bit
                                               2-Set CLP of untagged cells,
                                                 discard tagged cells
Policing Action (GCRA #2)                      0-Discard                             enumeration
                                               1-Set CLP bit
                                               2-Set CLP of untagged cells,
                                                 discard tagged cells
VC Max                                                                               cells
CLP Lo                                         0 to 100                              % VC Max
CLP Hi                                         0 to 100                              % VC Max
EFCI                                           0 to 100                              % VC Max
VC Discard Threshold Selection                 0-CLP Hysteresis                      enumeration
                                               1-EPD
VS/VD                                          0: None                               enumeration
                                               1: VS/VD
                                               2: VS/VD with external segment
Reduced Format ADTF                            0 to 7                                enumeration
Reduced Format Rate Decrease Factor (RRDF)     1 to 15                               enumeration
Reduced Format Rate Increase Factor (RRIF)     1 to 15                               enumeration
Reduced Format Time Between Fwd RM Cells       0 to 7                                enumeration
(RTrm)
Cut-Off Number of RM Cells (CRM)               1 to 4095                             cells
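The UPC Enable and Policing Action enumerations above can be read as a small decision table. The following hedged Python sketch shows one way to interpret them; the function names are illustrative and are not part of the BXM firmware.

# Which GCRA policers run, per the UPC Enable enumeration in Table 23-13.
def active_gcras(upc_enable):
    return {0: (), 1: (1, 2), 2: (1,), 3: (2,)}[upc_enable]

# What a policer does to a non-conforming cell, per Policing Action.
def police(action, cell_already_tagged):
    if action == 0:
        return "discard"
    if action == 1:
        return "set CLP bit"
    # action == 2: tag untagged cells, discard already-tagged cells
    return "discard" if cell_already_tagged else "set CLP bit"

print(active_gcras(2))                        # (1,) -> only GCRA #1 polices
print(police(2, cell_already_tagged=True))    # discard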


Qbin Dependencies

Qbin templates deal only with Qbins that are available to VSI partitions, namely 10 through 15. Qbins 10 through 15 are used by VSI on interfaces configured as trunks or ports. The rest of the Qbins are reserved and configured by Automatic Routing Management.

When you execute the dspsct command, it displays the default service type and the Qbin number.

The available Qbin parameters are listed in Table 23-14.

Note that the Qbins available to VSI are restricted to Qbins 10 through 15 on each interface. All 32 possible virtual interfaces are provided with 16 Qbins.

Table 23-14 Service Template Qbin Parameters

Template Object Name      Template Units             Template Range/Values
Qbin Number               enumeration                0 to 15 (10 to 15 valid for VSI)
Max Qbin Threshold        msec                       1 to 2000000
Qbin CLP High Threshold   % of max Qbin threshold    0 to 100
Qbin CLP Low Threshold    % of max Qbin threshold    0 to 100
EFCI Threshold            % of max Qbin threshold    0 to 100
Discard Selection         enumeration                1-CLP Hysteresis
                                                     2-Frame Discard
Weighted Fair Queueing    enable/disable             0: Disable
                                                     1: Enable
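Because the Max Qbin Threshold is expressed in time while the CLP and EFCI thresholds are percentages of it, deriving absolute cell counts requires the interface cell rate. A hedged sketch of that conversion follows; the OC-3 cell rate used here (about 353,208 cells per second) is an assumption for illustration, not a template value.

# Convert a time-based Qbin threshold to approximate cell counts.
OC3_CELLS_PER_SEC = 353_208   # illustrative assumption for an OC-3 interface

def qbin_thresholds(max_thr_usec, clp_hi_pct, clp_lo_pct, efci_pct,
                    cells_per_sec=OC3_CELLS_PER_SEC):
    max_cells = max_thr_usec * cells_per_sec // 1_000_000
    return {
        "max_cells": max_cells,
        "clp_hi": max_cells * clp_hi_pct // 100,
        "clp_lo": max_cells * clp_lo_pct // 100,
        "efci": max_cells * efci_pct // 100,
    }

# Qbin 10 of Label Template 1 (300,000 usec, 100%, 95%, 100%; Table 23-15):
print(qbin_thresholds(300_000, 100, 95, 100))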


Qbin Default Settings

The Qbin default settings are listed in Table 23-15. The Service Class Template default settings for Label Switch Controllers and PNNI controllers are listed in Table 23-16.


Note Templates 2, 4, 6, and 8 support policing on PPD.



Table 23-15 Qbin Default Settings

Qbin                                       Max Qbin           CLP    CLP       EFCI   Discard
                                           Threshold (usec)   High   Low/EPD          Selection

LABEL, Template 1
10 (Null, Default, Signalling, Tag0,4)     300,000            100%   95%       100%   EPD
11 (Tag1,5)                                300,000            100%   95%       100%   EPD
12 (Tag2,6)                                300,000            100%   95%       100%   EPD
13 (Tag3,7)                                300,000            100%   95%       100%   EPD
14 (Tag ABR)                               300,000            100%   95%       6%     EPD
15 (Tag unused)                            300,000            100%   95%       100%   EPD

PNNI, Templates 2 (with policing) and 3
10 (Null, Default, CBR)                    4200               80%    60%       100%   CLP
11 (VbrRt)                                 53000              80%    60%       100%   EPD
12 (VbrNrt)                                53000              80%    60%       100%   EPD
13 (UBR)                                   105000             80%    60%       100%   EPD
14 (ABR)                                   105000             80%    60%       20%    EPD
15 (Unused)                                105000             80%    60%       100%   EPD

Full support for ATMF and reduced support for Tag CoS without Tag-ABR, Templates 4 (with policing) and 5
10 (Tag 0,4,1,5, Default, UBR, Tag-ABR*)   300,000            100%   95%       100%   EPD
11 (VbrRt)                                 53000              80%    60%       100%   EPD
12 (VbrNrt)                                53000              80%    60%       100%   EPD
13 (Tag 2,6,3,7)                           300,000            100%   95%       100%   EPD
14 (ABR)                                   105000             80%    60%       20%    EPD
15 (CBR)                                   4200               80%    60%       100%   CLP

Full support for Tag ABR and ATMF without Tag CoS, Templates 6 (with policing) and 7
10 (Tag 0,4,1,5,2,6,3,7, Default, UBR)     300,000            100%   95%       100%   EPD
11 (VbrRt)                                 53000              80%    60%       100%   EPD
12 (VbrNrt)                                53000              80%    60%       100%   EPD
13 (Tag-ABR)                               300,000            100%   95%       6%     EPD
14 (ABR)                                   105000             80%    60%       20%    EPD
15 (CBR)                                   4200               80%    60%       100%   CLP

Full support for Tag CoS and reduced support for ATMF, Templates 8 (with policing) and 9
10 (CBR, VBR-rt)                           4200               80%    60%       100%   CLP
11 (VBR-nrt, ABR)                          53000              80%    60%       20%    EPD
12 (UBR, Tag 0,4)                          300,000            100%   95%       100%   EPD
13 (Tag 1,5, Tag-ABR)                      300,000            100%   95%       6%     EPD
14 (Tag 2,6)                               300,000            100%   95%       100%   EPD
15 (Tag 3,7)                               300,000            100%   95%       100%   EPD


Table 23-16 Service Class Template Default Settings

Parameter                     Label                     PNNI
MCR                           Tag0-7: N/A               ABR: 0%
                              TagAbr: 0% of PCR
AAL5 Frame Base Traffic       EPD                       Hysteresis
Control (Discard Selection)
CDVT(0+1)                     250,000                   250,000
VS/VD                         Tag0-7: N/A               ABR: None
                              TagAbr: None
SCR                           Tag0-7: N/A               VBR: 100%
                              TagAbr: 0                 ABR: 0
MBS                           Tag0-7: N/A               VBR: 1000
                              TagAbr: 0
Policing                      Policing disabled         VbrRt1: GCRA_1_2, CLP01_CLP01,
                                                        DISCARD on both policing actions
                                                        VbrRt2: GCRA_1_2, CLP01_CLP0,
                                                        DISCARD on both policing actions
                                                        VbrRt3: GCRA_1_2, CLP01_CLP0,
                                                        CLP DISCARD for 1st policer and
                                                        CLP for 2nd policer
                                                        VbrNrt1: same as VbrRt1
                                                        VbrNrt2: same as VbrRt2
                                                        VbrNrt3: same as VbrRt3
                                                        Ubr1: GCRA_1, CLP01, DISCARD
                                                        Ubr2: GCRA_1_2, CLP01 DISCARD on
                                                        policer 1, CLP01 TAG on policer 2
                                                        ABR: same as Ubr1
                                                        Cbr1: same as Ubr1
                                                        Cbr2: GCRA_1_2, CLP01_CLP0,
                                                        DISCARD on both policing actions
                                                        Cbr3: GCRA_1_2, CLP01_CLP0,
                                                        CLP UNTAG for policer 1 and
                                                        CLP for policer 2
ICR                           Tag0-7: N/A               ABR: 0%
                              TagAbr: NCR
ADTF                          Tag0-7: N/A               ABR: 1000 msec
                              TagAbr: 500 msec          (the ATM Forum default is 500)
Trm                           Tag0-7: N/A               ABR: 100
                              TagAbr: 0
VC Qdepth                     61440                     10,000
                                                        160 (CBR)
                                                        1280 (VBR)
CLP Hi                        100                       80
CLP Lo / EPD                  40                        35
EFCI                          TagAbr: 20                20 (not valid for non-ABR)
RIF                           Tag0-7: N/A               ABR: 16
                              TagAbr: 16
RDF                           Tag0-7: N/A               ABR: 16
                              TagAbr: 16
Nrm                           Tag0-7: N/A               ABR: 32
                              TagAbr: 32
FRTT                          Tag0-7: N/A               ABR: 0
                              TagAbr: 0
TBE                           Tag0-7: N/A               ABR: 16,777,215
                              TagAbr: 16,777,215
IBS                           N/A                       N/A
CAC Treatment                 LCN                       VBR: CAC4
                                                        UBR: LCN
                                                        ABR: MIN BW
                                                        CBR: CAC4
Scaling Class                 UBR: Scaled 1st           VBR: Scaled 3rd
                                                        UBR: Scaled 1st
                                                        ABR: Scaled 2nd
                                                        CBR: Scaled 4th
CDF                           16                        16


Understanding MPLS VC Merge

Virtual Circuit (VC) Merge improves the scalability of MPLS networks by allowing multiple incoming VCs to be merged into a single outgoing VC, called the merged VC. Tag switching is supported. In the BPX 8600 series switch, VC Merge is implemented as part of the output buffering for ATM interfaces.


Note VC Merge requires the enhanced BXM (BXM-E) cards.


Only frame-based connections are supported. The key to VC Merge is to switch cells from a merging Label Virtual Circuit (LVC) onto the merged LVC that points to the destination while preserving AAL5 framing. An AAL5 frame consists of multiple cells, and the cells of a given frame must go out in an uninterrupted sequence after merging. Cells from one AAL5 frame cannot be intermingled with cells from another AAL5 frame on the same VC; several AAL5 frames can arrive on different VCs and be merged onto a single VC without interleaving the frames.
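The no-interleave rule can be pictured with a short Python sketch: cells of each incoming LVC are buffered until the AAL5 end-of-frame cell arrives, and only then is the whole frame forwarded onto the merged LVC. This is a conceptual model, not the BXM implementation.

from collections import defaultdict

class VcMerge:
    # Conceptual model: merge several incoming LVCs onto one outgoing
    # LVC without interleaving cells of different AAL5 frames.
    def __init__(self):
        self.partial = defaultdict(list)   # per-incoming-VC frame buffer
        self.merged_out = []               # cells emitted on the merged LVC

    def receive(self, vc, cell, end_of_frame):
        self.partial[vc].append(cell)
        if end_of_frame:                   # AAL5 end-of-frame: frame complete
            self.merged_out.extend(self.partial.pop(vc))

m = VcMerge()
m.receive("lvc1", "a1", False)
m.receive("lvc2", "b1", False)   # buffered; never interleaved with lvc1's frame
m.receive("lvc1", "a2", True)    # lvc1 frame complete: a1, a2 go out together
print(m.merged_out)              # ['a1', 'a2']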

The VC Merge feature processes incoming VSI messages from the MPLS controller and identifies merge requests. VC Merge can be enabled only when at least 1023 VSI channels are configured on an interface on the card. The Label Switch Controller (LSC) controls the limit on the number of merged VCs.

Figure 23-11 shows examples both with and without VC Merge.

Figure 23-11 VC Merge Example

VC Merge Characteristics

The following are the characteristics of VC Merge:

Connections are unidirectional.

Each merge is done in the egress direction.

All merged connections use the same service type.

Single-endpoint connections are not supported.

VC-merge connections can be added or deleted individually.

OAM cells are not supported for VC-merge connections.

Only the Virtual Channel Connection (VCC) is merged.

Merging Virtual Path Connections (VPC) is not supported.

Both interslave and intraslave connections are allowed.

Tag-ABR for the MPLS controller is not supported.

Displaying Card Support for VC Merge

When you add Y-redundancy (card redundancy) between a card that supports VC Merge and a card that does not, a feature mismatch is declared and the following message is displayed while executing the addyred command:

card pair is incompatible due to VC Merge support.

If VC Merge is enabled on a card and that card is replaced with a card that does not support VC Merge, the dspcd screen shows the following message:

Front card must support VC Merge

When you enter the dspcds command, the card status displays as mismatch.

For definitions of the parameters used with the dspcd command, refer to the Cisco WAN Switching Command Reference, Release 9.3.30.

To display card support for VC Merge, use the following procedure.


Step 1 Enter the dspcd command to view a detailed card summary as follows:

Last Command: dspcd <slot number>

Step 2 Enter the applicable slot number for the enhanced BXM-E cards. V indicates that VC Merge is supported.

The following example is a detailed card display summary:

m2 TN Cisco BPX 8620 9.3.a0 May 7 2001 21:32 GMT

Detailed Card Display for BXM-155 in slot 4
Status: Active
Revision: HN04 Backcard Installed
Serial Number: 760313 Type: LM-BXM
Top Asm Number: 28215802 Revision: BC
Queue Size: 524280 Serial Number: A74072
Support: 8 Pts, OC3, FST, VcShp Top Asm Number:
Supp: VT,ChStLv 1,VSI(Lv 3,ITSMV) Supp: 8 Pts,OC3,SMF,RedSlot:NO
Supp: APS(FW)
Support: LMIv 1,ILMIv 1,NbrDisc,XL
Support: OAMLp, TrfcGen
#Ch:32768,PG[1]:32736,PG[2]:32736
PG[1]:1,2,3,4,PG[2]:5,6,7,8,
#Sched_Ch:61440 #Total_Ch:61376
Type: BXME, revision DX

Last Command: dspcd 4


Enabling VC Merge

To enable VC Merge, use the following procedure.


Note Before you enable VC Merge, use the cnfrsrc command to ensure that the minimum number of channels is 1023 for the minvsilcns and maxvsilcns parameters. For more information about the cnfrsrc command, refer to the Cisco WAN Switching Command Reference, Release 9.3.30.



Step 1 Enter the cnfcdparm command to configure the card parameter as follows:

This Command: cnfcdparm <slot number> <value> <e|d>

Step 2 Enter the applicable card slot for the enhanced BXM-E cards.

Step 3 Enter parameter 2 as the designated location to implement VC Merge.

Step 4 Enter e to enable VC Merge.

The following example validates that VC Merge is enabled:


m2 TN Cisco BPX 8620 9.3.a0 May 7 2001 21:37 GMT

Card Parameters

1 Channel Statistics Level ......................................... 1
2 VC Merge State ......................................... E


Last Command: cnfcdparm 5 2 e


VC Merge enabled on this card.

The channel statistics level is not related to the VC Merge state. For a description of all four channel statistics levels, see Chapter 5, "BXM Card Sets: T3/E3, 155, and 622."

If there is no acknowledgement and the BPX switch times out, the following message appears:

Card rejected cmd. VC Merge NOT enabled!

For a description of VC Merge messages, see Table 23-17.


Disabling VC Merge

To disable VC Merge, use the following procedure.


Step 1 Enter the cnfcdparm command to configure the card parameter as follows:

This Command: cnfcdparm <slot number> <value> <e|d>


Step 2 Enter the applicable slot number for the enhanced BXM-E cards.

Step 3 Enter parameter 2 as the designated VC Merge location.

The following example appears:


m2 TN Cisco BPX 8620 9.3.a0 May 7 2001 21:47 GMT

Card Parameters

1 Channel Statistics Level ......................................... 1
2 VC Merge State ......................................... E


This Command: cnfcdparm 4 2


'E' to Enable, 'D' to Disable [E]:

Step 4 Enter d to disable VC Merge.

The following message appears as an acknowledgement:

Disabling VC Merge with active VSI partns on card may result in dropped conns Continue?

Step 5 Enter y to continue.

The following example validates that VC Merge is disabled:


TN Cisco BPX 8620 9.3.a0        May 01 2001 20:41 GMT
             Card Parameters

1 Channel Statistics Level ......................................... 1
2 VC Merge State ......................................... D


Last Command: cnfcdparm 5 2 D


VC Merge disabled on this card.

The channel statistics level is not related to the VC Merge state. For a description of all four channel statistics levels, see Chapter 5, "BXM Card Sets: T3/E3, 155, and 622."

If there is no acknowledgement and the BPX switch times out, the following message appears:

Card rejected cmd. VC Merge NOT disabled!

For a description of VC Merge messages, see Table 23-17.


Warning If the last partition on a slot is disabled while VC Merge is enabled, VC Merge is disabled on that slot.


The following message appears:

Disabling of last partn on slot has caused disabling of VC Merge.


Interpreting the Messages

Table 23-17 describes the messages that can appear after you use the cnfcdparm command to enable or disable VC Merge.

Table 23-17 Messages for VC Merge

Message                          Description
Could not send request to card   The request to enable or disable VC Merge cannot be
                                 sent to the BXM card.
Card rejected cmd.               The BXM card rejects the request to enable or disable
                                 VC Merge.
No response from card            The BXM card did not respond to the request to enable
                                 or disable VC Merge.


Displaying the Status of VC Merge

After VC Merge is either enabled or disabled, enter the dspcdparm command to view the current status of VC Merge:

Last Command: dspcdparm 5 2

When the applicable slot number is selected, the Card Parameters summary appears to indicate whether VC Merge is enabled or disabled.

If the slot specified does not support VC Merge, N/A appears as the VC Merge State in the Card Parameters summary.

Summary of VSI Commands

The commands for VSI are listed in Table 23-18.

Table 23-18 Commands for Setting Up a VSI Controller

Mnemonic         Description
addctrlr         Attaches a controller to a node; used for controllers that require
                 Annex G capabilities in the controller interface, such as adding a
                 PNNI VSI controller to a BPX node through an AAL5 interface shelf.
addshelf         Adds a trunk between the hub node and an interface shelf or VSI
                 MPLS controller.
cnfqbin          Configures the card's Qbins. If you answer Yes when prompted, the
                 command uses the Qbin values from the Qbin templates.
cnfrsrc          Configures resources; for example, for Automatic Routing Management
                 PVCs and the MPLS controller (LSC).
cnfvsiif         Configures the VSI interface; for example, assigns a different
                 service template to an interface.
cnfvsipart       Configures VSI partition characteristics, such as VSI ILMI.
delctrlr         Deletes a controller, such as a Service Expansion Shelf (SES) PNNI
                 controller, from a BPX node.
delshelf         Deletes a trunk between a hub node and an access shelf.
dspcd            Displays the card configuration.
dspchuse         Displays a summary of channel distribution in a given slot.
dspctrlrs        Displays the VSI controllers, such as a PNNI controller, on a BPX
                 node.
dspqbin          Displays the Qbin parameters currently configured for the virtual
                 interface.
dspqbintmt       Displays the Qbin template.
dsprsrc          Displays the LSC resources.
dspsct           Displays the SCT assigned to an interface. The command has the
                 following three levels of operation:
                 dspsct (no arguments)—Lists all the service templates resident in
                 the node.
                 dspsct <tmplt_id>—Lists all the Service Classes in the template.
                 dspsct <tmplt_id> <Service Class>—Lists all the parameters of that
                 Service Class.
dspvsiif         Displays the VSI interface.
dspvsipartcnf    Displays information about VSI ILMI functionality.
dspvsipartinfo   Displays the VSI resource status for the trunk and partition.


