
Table of Contents

Cisco IGX 8400 Series IP Service
IP Service—Functional Overview
MPLS Connections Supported on the IGX
IP Service Provisioning
Managing IP Services
Where to Go Next

Cisco IGX 8400 Series IP Service


IP Service—Functional Overview

The Cisco IGX 8400 series delivers in-chassis IP routing through the Universal Router Module (URM), a dual-processor card set that delivers high-density voice and data interfaces. You can also set up IP routing services by using an external router and configuring ATM PVCs on the IGX.

IP service on the IGX functions through configuration of virtual slave interfaces (VSIs), which allow a node to be managed by multiple label switch controllers (LSCs), such as a Multiprotocol Label Switching (MPLS) controller.


Note   Private Network-to-Network Interface (PNNI) is not supported on the URM.

This chapter primarily contains information related to MPLS support on the IGX using the URM. For information on configuring MPLS using an external router, such as a Cisco 7200, see the Update to the Cisco IGX 8400 Series Reference Guide for Switch Software Release 9.3.1.

For information on additional Cisco IOS features supported on the IGX, see the Cisco IOS documents listed in the "Related Documentation" section.

Required Hardware and Software

Table 10-1 contains information on the hardware and software required to provision IP services across an IGX node.


Note   Refer to the Compatibility Matrix for Cisco IOS software, switch software, and firmware compatibility requirements.

Table 10-1   Required Hardware and Software for IP Services

To configure the node for IP service with an external router, you need the following:

  • Hardware:
    • Cisco IGX 8410, 8420, or 8430 with NPM-64B and a UXM service card
    • LSC router with 32 MB RAM (64 MB recommended)
  • Service card firmware: UXM Model C firmware
  • Cisco IOS release: 12.1(3)T or later (IP-only release recommended)
  • Switch software release: 9.3.10 or later

To configure the node for IP service using the in-chassis URM, you need the following:

  • Hardware: Cisco IGX 8410, 8420, or 8430 with NPM-64B and a URM
  • Service card firmware: URM Administration Firmware Version XAA or later
    (Note: BC-URI-2FE back card support requires URM Administration Firmware Version XBA.)
  • Cisco IOS release: 12.2(2)XB or later (for VPN and voice features only); 12.2(8)T or later (for MPLS, VPN, and voice features)
  • Switch software release: 9.3.20 or later (for voice features only); 9.3.30 or later (for MPLS, VPN, and voice features)

URM


Note   Except for the differences noted in this chapter, the URM can be configured as though it were an external router and a UXM or UXM-E card. Switch software setup on the embedded UXM-E portion of the card is the same as for a UXM or UXM-E, while the embedded router is configured like any external Cisco router. For more information on the URM, see the "Universal Router Module" section on page 2-84.

The URM consists of a logically-partitioned front card connected to a universal router interface (URI) back card. The front card contains an embedded UXM-E running an administration firmware image, and an embedded router running a Cisco IOS image. The embedded UXM-E and the embedded router connect through a logical internal ATM interface, with capability equivalent to an OC3 ATM port.

The logically-defined internal ATM interface is seen as a physical interface between the embedded router and the embedded UXM-E processor. However, remote connections terminating on the URM can use the internal ATM interface as an endpoint, with the embedded UXM-E processor passing transmissions to the embedded router.

The URM supports the following types of IP service:

To configure the URM for any IP service, you must use both switch software and Cisco IOS commands. See "Functional Overview" for more information on basic URM installation and setup.

Virtual Slave Interfaces


Note   VSIs can only be configured on the UXM or UXM-E card sets. FR support for VSI controllers functions through FRF.8 service interworking on the UXM or UXM-E front card.

VSIs allow a node to be managed by multiple controllers, such as an MPLS label switch controller.

In the VSI control model, a controller sees the switch as a collection of slaves with their interfaces. The controller can establish connections between any two interfaces, using the resources allocated to its partition. For example, an MPLS controller can only access interfaces that have been configured in the MPLS controller's partition.

A VSI interface becomes available to the controller after the VSI partition is created and enabled. The controller manages its partition through the VSI protocol and runs the VSI master. The VSI master interacts with each VSI slave in the VSI partition and sets up and terminates VSI connections.

A maximum of three VSI partitions can be enabled on the IGX. These VSI partitions can function together or independently, and are in addition to AutoRoute on each interface.
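The partition model above can be sketched in a few lines of hypothetical Python (this is illustrative only, not switch software): each controller owns one partition of interfaces, may cross-connect only within it, and the node caps the number of partitions at three.

```python
# Illustrative model of VSI partitions (hypothetical names, not firmware code).

MAX_PARTITIONS = 3  # the IGX allows at most three VSI partitions

class VsiNode:
    def __init__(self):
        self.partitions = {}  # partition id -> set of interface names

    def add_partition(self, part_id, interfaces):
        if len(self.partitions) >= MAX_PARTITIONS:
            raise ValueError("IGX supports at most three VSI partitions")
        self.partitions[part_id] = set(interfaces)

    def connect(self, part_id, if_a, if_b):
        """A controller can cross-connect only interfaces in its own partition."""
        owned = self.partitions.get(part_id, set())
        if if_a not in owned or if_b not in owned:
            raise PermissionError("interface not in this controller's partition")
        return (if_a, if_b)

node = VsiNode()
node.add_partition("mpls", ["10.1", "10.2"])
node.add_partition("other", ["11.1"])
conn = node.connect("mpls", "10.1", "10.2")   # allowed: both in "mpls" partition
# node.connect("mpls", "10.1", "11.1")        # would raise PermissionError
```

The key point the sketch captures is that an MPLS controller never sees, and cannot program, interfaces outside its own partition.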

VSIs on the IGX provide the following features:

For information on configuring VSI partitions and VSIs on the IGX, see the "VSI Configuration" section.

VSI Masters and Slaves

A controller application uses a VSI master to control one or more VSI slaves. For an IGX without a URM, the controller application and the VSI master reside in an external router, and the VSI slaves exist in UXM cards on the IGX node (see Figure 10-1).

IGX nodes with an installed URM use the embedded router on the URM front card as the location for the controller application and the VSI master.


Figure 10-1   VSI, Controller, and Slave VSIs


The controller establishes a link between the VSI master and every VSI slave on the associated switch. The slaves in turn establish links between each other (see Figure 10-2).


Figure 10-2   VSI Master and VSI Slave Example


When multiple switches are connected together, cross-connects within the individual switch enable links between switches to be established (see Figure 10-3).


Figure 10-3   Cross Connects and Links Between Switches


Connection Admission Control

When a connection request is received by the VSI slave, it is first subjected to a Connection Admission Control (CAC) process before being forwarded to the firmware layer responsible for actually programming the connection. The granting of the connection is based on the following criteria:

After CAC, the VSI slave accepts a connection setup command from the VSI master in the MPLS controller and receives connection information, including service type, bandwidth parameters, and QoS parameters. The VSI slave uses this information to determine an index into the virtual interface's selected service template VC descriptor table, which establishes access to the associated extended parameter set stored in the table.

A preassigned ingress service template containing CoS Buffer links manages ingress traffic.
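The two-step flow described above (admit the request, then index the service template table by service type identifier) can be sketched as follows. This is a hypothetical Python illustration: the admission criteria, field names, and table contents are placeholders, not the firmware's actual logic; the 0x0200/0x0210 identifiers follow Table 10-2.

```python
# Hypothetical sketch of the VSI slave's setup path: CAC first, then a
# service-type-identifier lookup in the interface's template table.

def cac_admit(request, available_bw, available_lcns):
    """Admit only if bandwidth and connection (LCN) resources remain (illustrative criteria)."""
    return request["pcr"] <= available_bw and available_lcns > 0

# Per-interface service template VC descriptor table:
# service type identifier -> extended parameter set (abbreviated here)
sct_table = {
    0x0200: {"service": "Tag0", "qbin": 10},
    0x0210: {"service": "TagABR", "qbin": 14},
}

def setup_connection(request, available_bw, available_lcns):
    if not cac_admit(request, available_bw, available_lcns):
        return None  # rejected before reaching the firmware layer
    # the identifier indexes the extended parameter set used to program the VC
    return sct_table[request["service_type_id"]]

params = setup_connection({"pcr": 500, "service_type_id": 0x0210}, 1000, 5)
```

A request that fails admission never reaches the lookup step, mirroring the order of operations in the text.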

Service Class Templates


Note   Service class templates (SCTs) are primarily used with virtual circuits (VCs) and must be used when configuring the IGX to work with a VSI master in a label switch controller (LSC).

SCTs provide a way to map a set of standard connection protocol parameters to different hardware platforms. For example, SCTs for the BPX and the IGX are different, but the BPX and IGX can still deliver equivalent CoS for full QoS.

On the IGX, the NPM stores a set of SCTs. When a UXM or UXM-E is initially configured, the appropriate SCTs are downloaded to the card. Later, if you configure a new interface on the card, the appropriate SCTs for that new interface will also be downloaded to the card.

Each SCT contains the following information:

Each SCT has an associated Qbin mapping table, which manages bandwidth by temporarily storing cells and serving them to the interface based on bandwidth availability and CoS priority.


Note   The default SCT, Template 1, is automatically assigned to a virtual interface (VI) when you configure the interface.

The following nine SCTs are available for assignment to a VSI:

For more information on how SCTs work, see Figure 10-4. For information on supported SCT characteristics, see Table 10-2.


Caution   SCTs can be reassigned on an operational interface, triggering a resynchronization process between the UXM or UXM-E and the controllers. However, for a Cisco MPLS VSI controller, reassignment of an SCT on an operational interface will cause all connections on the card to be resynchronized with the controller, and can impact service.


Figure 10-4   Service Template Overview


Supported Service Types

The service type identifier is a 32-bit number.

The service types supported are:

The service type identifier appears on the dspsct screen when you specify a service class template number and service type. For example:

dspsct <1> <TagABR>

A list of supported service templates, associated Qbins, and service types is shown in Table 10-2.

Table 10-2   Service Category Listing

Template Type              Service Type Identifier   Service Type   Associated Qbin

VSI special type           0x0001                    Default        13 (MPLS1, ATMF1, and ATMF2 templates)
                           0x0002                    Signaling      10 (MPLS1 templates)

MPLS type                  0x0001                    Default        13
                           0x0002                    Signaling      10
                           0x0200                    Tag0           10
                           0x0201                    Tag1           11
                           0x0202                    Tag2           12
                           0x0203                    Tag3           13
                           0x0204                    Tag4           10
                           0x0205                    Tag5           11
                           0x0206                    Tag6           12
                           0x0207                    Tag7           13
                           0x0210                    TagABR         14

ATMF_tagcos_1*,            0x0001                    Default        10
ATMF_tagcos_2*             0x0100                    CBR.1          15
                           0x0101                    VBR.1-RT       11
                           0x0102                    VBR.2-RT       11
                           0x0103                    VBR.3-RT       11
                           0x0104                    VBR.1-nRT      12
                           0x0105                    VBR.2-nRT      12
                           0x0106                    VBR.3-nRT      12
                           0x0107                    UBR.1          10
                           0x0108                    UBR.2          10
                           0x0109                    ABR            14
                           0x010A                    CBR.2          15
                           0x010B                    CBR.3          15
                           0x0200                    Tag0           10
                           0x0201                    Tag1           10
                           0x0202                    Tag2           13
                           0x0203                    Tag3           13
                           0x0204                    Tag4           10
                           0x0205                    Tag5           10
                           0x0206                    Tag6           13
                           0x0207                    Tag7           13
                           0x0210                    TagABR         14

ATMF_TagABR_1*,            0x0001                    Default        10
ATMF_TagABR_2*             0x0100                    CBR.1          15
                           0x0101                    VBR.1-RT       11
                           0x0102                    VBR.2-RT       11
                           0x0103                    VBR.3-RT       11
                           0x0104                    VBR.1-nRT      12
                           0x0105                    VBR.2-nRT      12
                           0x0106                    VBR.3-nRT      12
                           0x0107                    UBR.1          10
                           0x0108                    UBR.2          10
                           0x0109                    ABR            14
                           0x010A                    CBR.2          15
                           0x010B                    CBR.3          15
                           0x0200                    Tag0           10
                           0x0201                    Tag1           10
                           0x0202                    Tag2           10
                           0x0203                    Tag3           10
                           0x0204                    Tag4           10
                           0x0205                    Tag5           10
                           0x0206                    Tag6           10
                           0x0207                    Tag7           10
                           0x0210                    TagABR         13

ATMF_TagCoS_TagABR_1*,     0x0001                    Default        10
ATMF_TagCoS_TagABR_2*      0x0100                    CBR.1          10
                           0x0101                    VBR.1-RT       10
                           0x0102                    VBR.2-RT       10
                           0x0103                    VBR.3-RT       10
                           0x0104                    VBR.1-nRT      11
                           0x0105                    VBR.2-nRT      11
                           0x0106                    VBR.3-nRT      11
                           0x0107                    UBR.1          12
                           0x0108                    UBR.2          12
                           0x0109                    ABR            11
                           0x010A                    CBR.2          10
                           0x010B                    CBR.3          10
                           0x0200                    Tag0           12
                           0x0201                    Tag1           13
                           0x0202                    Tag2           14
                           0x0203                    Tag3           15
                           0x0204                    Tag4           12
                           0x0205                    Tag5           13
                           0x0206                    Tag6           14
                           0x0207                    Tag7           15
                           0x0210                    TagABR         13

* Indicates ATMF types not supported in this release.

ATM CoS Service Templates and Qbins on the IGX

The service class templates provide a means of mapping a set of extended parameters. These extended parameters are generally platform specific and are derived from the set of standard ATM parameters passed to the VSI slave in a UXM port interface during initial bringup of the interface.

A set of service templates is stored in each switch and downloaded to the service modules (UXMs) as needed during initial configuration of the VSI interface when a trunk or line is enabled on the UXM.

An MPLS service template is assigned to the VSI interface when the trunk or port is initialized. The label switch controller (LSC) automatically sets up LVCs via a routing protocol (such as OSPF) and the label distribution protocol (LDP), when the CoS multiple LVC option is enabled at the edge label switch routers (LSRs).

With the multiple VC option enabled (at edge LSRs), four LVCs are configured for each IP source-destination. Each of the four LVCs is assigned a service template type. For example, one of the four cell labels might be assigned to label cos2 service type category.

Each service template type has an associated Qbin. Qbins provide the ability to manage bandwidth by temporarily storing cells, and then serving them out as bandwidth is available. This is based on factors including bandwidth availability, and the relative priority of different classes of service.

When ATM cells arrive from the edge LSR at the UXM port with one of four CoS labels, they receive CoS handling based on that label. A table lookup is performed, and the cells are processed based on their connection classification. Based on its label, a cell receives the ATM differentiated service associated with its template type (for example, the MPLS1 template) and service type (for example, label cos2 bw), plus the associated Qbin characteristics and other associated ATM parameters.

For information on setting up service class templates on the IGX, see "ATM Service—Functional Overview."

VC Descriptor Parameters

Table 10-3 describes the connection parameters and range of values that may be configured, if not already preconfigured, for ATM service classes per VC.

Not every service class includes all parameters. For example, a CBR service type has fewer parameters than an ABR service type.


Note   Not every service class has a value defined for every parameter listed in Table 10-3.

Table 10-3   Connection Parameter Descriptions and Ranges

Object Name                                           Range/Values                                  Template Units

Qbin no.                                              10 - 15                                       Qbin no.
Scaling class                                         0 - 3                                         Enumeration
CDVT                                                  0 - 5M (5 sec)                                Seconds
MBS                                                   1 - 5M                                        Cells
ICR                                                   MCR - PCR                                     Cells
MCR                                                   50 - LR                                       Cells
SCR                                                   MCR - LineRate                                Cells
UPC enable                                            0 - Disable GCRAs                             Enumeration
                                                      1 - Enable GCRAs
                                                      2 - Enable GCRA No. 1
                                                      3 - Enable GCRA No. 2
UPC CLP selection                                     0 - Bk 1: CLP (0+1), Bk 2: CLP (0)            Enumeration
                                                      1 - Bk 1: CLP (0+1), Bk 2: CLP (0+1)
                                                      2 - Bk 1: CLP (0+1), Bk 2: Disabled
Policing action (GCRA No. 1)                          0 - Discard                                   Enumeration
                                                      1 - Set CLP bit
                                                      2 - Set CLP of untagged cells,
                                                          discard tagged cells
Policing action (GCRA No. 2)                          0 - Discard                                   Enumeration
                                                      1 - Set CLP bit
                                                      2 - Set CLP of untagged cells,
                                                          discard tagged cells
VC max                                                -                                             Cells
CLP lo                                                0 - 100                                       Percent VC max
CLP hi                                                0 - 100                                       Percent VC max
EFCI                                                  0 - 100                                       Percent VC max
VC discard threshold selection                        0 - CLP hysteresis                            Enumeration
                                                      1 - EPD
VSVD                                                  0 - None                                      Enumeration
                                                      1 - VSVD
                                                      2 - VSVD w/ external segment
Reduced format ADTF                                   0 - 7                                         Enumeration
Reduced format rate decrease factor (RRDF)            1 - 15                                        Enumeration
Reduced format rate increase factor (RRIF)            1 - 15                                        Enumeration
Reduced format time between forward RM cells (RTrm)   0 - 7                                         Enumeration
Cut-off no. of RM cells (CRM)                         1 - 4095                                      Cells

SVC Descriptors

A summary of the parameters associated with each of the service templates is provided in Table 10-4.

Table 10-4   MPLS Service Categories

Parameter                        Default   Signaling    Tag 0/4   Tag 1/5   Tag 2/6   Tag 3/7   Tag-ABR

Qbin No.                         13        10           10        11        12        13        14
UPC enable                       None      None         None      None      None      None      None
Scaling class                    1         1            1         1         1         1         2
CAC treatment                    LCN       LCN          LCN       LCN       LCN       LCN       LCN
VC max                           61440     0            61440     61440     61440     61440     61440
VC discard selection             EPD       Hysteresis   EPD       EPD       EPD       EPD       EPD
VC CLPhi                         100       75           100       100       100       100       100
VC CLPlo                         -         30           -         -         -         -         -
VC EPD                           40        -            40        40        40        40        40
Cell delay variation tolerance   250000    250000       250000    250000    250000    250000    250000
UPC CLP selection                -         -            -         -         -         -         -
Policing action (GCRA No. 1)     -         -            -         -         -         -         -
Policing action (GCRA No. 2)     -         -            -         -         -         -         -
PCR                              -         -            -         -         -         -         -
MCR                              -         -            -         -         -         -         0
SCR                              -         -            -         -         -         -         0
ICR                              -         -            -         -         -         -         100
MBS                              -         -            -         -         -         -         1024
VC EFCI                          -         -            -         -         -         -         20
VSVD/FCES                        -         -            -         -         -         -         None
ADTF                             -         -            -         -         -         -         500
RDF                              -         -            -         -         -         -         16
RIF                              -         -            -         -         -         -         16
NRM                              -         -            -         -         -         -         32
TRM                              -         -            -         -         -         -         0
CDF                              -         -            -         -         -         -         16
TBE                              -         -            -         -         -         -         16777215
FRTT                             -         -            -         -         -         -         0

Qbins

Qbins store cells and serve them to an interface based on bandwidth availability and CoS priority (see Figure 10-5). For example, if CBR and ABR cells must exit the switch from the same interface, but the interface is already transmitting CBR cells from another source, the newly arrived CBR and ABR cells are held in the Qbin associated with that interface. As the interface becomes accessible, the Qbin passes CBR cells to the interface for transmission. After the CBR cells have been transmitted, the ABR cells are passed to the interface and transmitted to their destination.
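The CBR-before-ABR behavior in the example above amounts to strict priority between the two classes queued at one interface. The following hypothetical Python sketch (illustrative only; class names and structure are assumptions, not switch firmware) shows that serving order:

```python
# Illustrative Qbin: cells are held per CoS and drained in priority order.

from collections import deque

class Qbin:
    PRIORITY = ["CBR", "ABR"]  # CBR drains before ABR, per the example above

    def __init__(self):
        self.queues = {cos: deque() for cos in self.PRIORITY}

    def enqueue(self, cos, cell):
        self.queues[cos].append(cell)

    def serve(self):
        """Return the next cell to transmit when the interface frees up."""
        for cos in self.PRIORITY:
            if self.queues[cos]:
                return self.queues[cos].popleft()
        return None  # nothing buffered

qb = Qbin()
qb.enqueue("ABR", "abr-cell-1")   # ABR cell arrives first...
qb.enqueue("CBR", "cbr-cell-1")   # ...but the CBR cell still goes out first
first = qb.serve()
```

Even though the ABR cell arrived earlier, the CBR cell is served first, matching the paragraph's description.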


Figure 10-5   UXM Virtual Interfaces and Qbins


Qbins are used with VIs, in situations where the VI is a VSI with a VSI master running on a separate controller (a label switch controller or LSC). For a VSI master to handle a VSI, each virtual circuit (VC, also known as virtual channel when used in FR networks) must receive a specific service class specified through a 32-bit service type identifier. The IGX supports identifiers for the following service types:

When a connection setup request is received from the VSI master in the LSC, the VSI slave uses the service type identifier to index into an SCT database with extended parameter settings for connections matching that service type identifier. The VSI slave then uses these extended parameter settings to complete connection setup and necessary configuration for connection maintenance and termination as needed.

The VSI master normally sends the VSI slave a service type identifier (either ATM Forum or MPLS), QoS parameters (such as CLR or CDV), and bandwidth parameters (such as PCR or MCR).

Qbin Templates

A Qbin template defines a default configuration for the set of Qbins attached to an interface. When you assign an SCT to an interface, switch software copies the Qbin configuration from the Qbin template and applies the Qbin configuration to all the Qbins attached to the interface.

Qbin templates only apply to the Qbins available to VSI partitions, meaning that Qbin templates only apply to Qbins 10-15. Qbins 0-9 are reserved and configured by automatic routing management (ARM, or AutoRoute).

Some parameters on the Qbins attached to the interface can be reconfigured for each interface. These changes do not affect the Qbin templates, which are stored on the NPM, although they do affect the Qbins attached to the interface.
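The copy semantics described above (template values are copied onto the interface's Qbins at SCT assignment, and later per-interface edits never touch the stored template) can be sketched as follows. This is a hypothetical Python illustration; the parameter names and values are examples drawn from Table 10-5, not the NPM's actual data layout.

```python
# Illustrative copy-on-assign behavior for Qbin templates (Qbins 10-15).

import copy

# Template stored on the NPM: one default configuration per VSI Qbin.
qbin_template = {q: {"max_threshold_usec": 300000, "clp_high": 100}
                 for q in range(10, 16)}

def assign_sct(interface):
    # Deep copy so per-interface reconfiguration cannot leak back
    # into the template stored on the NPM.
    interface["qbins"] = copy.deepcopy(qbin_template)

trunk = {}
assign_sct(trunk)
trunk["qbins"][14]["clp_high"] = 80   # reconfigure one Qbin on this interface

assert qbin_template[14]["clp_high"] == 100   # the stored template is unchanged
```

The deep copy is the design point: interface-level changes affect only that interface's Qbins, exactly as the text states.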

For a visual description of the interaction between SCTs and Qbin templates, see Figure 10-6.


Figure 10-6   Service Template and Associated Qbin Selection


Qbin Default Settings

The Qbin and SCT default settings for LSCs are shown in Table 10-5.


Note   Templates 2, 4, 6, and 8 support policing on partial packet discard (PPD).

Table 10-5   Qbin Default Settings

Qbin                                   Max Qbin Threshold (usec)   CLP High   CLP Low/EPD   EFCI   Discard Selection

Label Template 1:
10 (Null, Signaling, Tag 0, 4)         300,000                     100%       95%           100%   EPD*
11 (Tag 1, 5)                          300,000                     100%       95%           100%   EPD
12 (Tag 2, 6)                          300,000                     100%       95%           100%   EPD
13 (Tag 3, 7), Default                 300,000                     100%       95%           100%   EPD
14 (Tag Abr)                           300,000                     100%       95%           6%     EPD
15 (Tag unused)                        300,000                     100%       95%           100%   EPD

10 (Tag 0, 2, 3, 4, 1, 5, Default,
    UBR, Tag-Abr*)                     300,000                     100%       95%           100%   EPD
11 (VbrRt)                             53000                       80%        60%           100%   EPD
12 (VbrNrt)                            53000                       80%        60%           100%   EPD
13 (Tag 2, 6, 3, 7)                    300,000                     100%       95%           100%   EPD
14 (Abr)                               105000                      80%        60%           20%    EPD
15 (Cbr)                               4200                        80%        60%           100%   CLP

10 (Tag 0, 4, 1, 5, 2, 6, 3, 7, UBR)   300,000                     100%       95%           100%   EPD
11 (VbrRt)                             53000                       80%        60%           100%   EPD
12 (VbrNrt)                            53000                       80%        60%           100%   EPD
13 (Tag-Abr), Default                  300,000                     100%       95%           6%     EPD
14 (Abr)                               105000                      80%        60%           20%    EPD
15 (Cbr)                               4200                        80%        60%           100%   CLP

10 (Cbr, Vbr-rt)                       4200                        80%        60%           100%   CLP
11 (Vbr-nrt, Abr)                      53000                       80%        60%           20%    EPD
12 (Ubr, Tag 0, 4)                     300,000                     100%       95%           100%   EPD
13 (Tag 1, 5, Tag-Abr)                 300,000                     100%       95%           6%     EPD
14 (Tag 2, 6)                          300,000                     100%       95%           100%   EPD
15 (Tag 3, 7)                          300,000                     100%       95%           100%   EPD

* Indicates early packet discard (EPD)

Qbin Dependencies

Qbins 10 through 15 are used by VSI on interfaces configured as trunks or ports. The rest of the Qbins are reserved and configured by AutoRoute.

When you execute the dspsct command, it displays the default service type and the Qbin number.

The available Qbin parameters are shown in Table 10-6.


Note   Each of the 16 possible virtual interfaces is provided with 16 Qbins, but only Qbins 10 through 15 on an interface are available for VSI.

Table 10-6   Service Template Qbin Parameters

Template Object Name      Template Units                   Template Range/Values

Qbin no.                  Enumeration                      0 - 15 (10-15 valid for VSI)
Max Qbin threshold        usec                             1 - 2000000
Qbin CLP high threshold   Percent of max Qbin threshold    0 - 100
Qbin CLP low threshold    Percent of max Qbin threshold    0 - 100
EFCI threshold            Percent of max Qbin threshold    0 - 100
Discard selection         Enumeration                      1 - CLP hysteresis, 2 - Frame discard
Weighted fair queuing     Enable/disable                   0 - Disable, 1 - Enable

MPLS Overview

MPLS enables edge routers to apply labels to packets or frames before transmission into the network. After the packets or frames are transmitted into the network, these labels allow network core devices to switch labeled packets with minimal lookup activity. This process integrates virtual circuit switching with IP routing, enabling scalable IP networks over ATM backbones. By summarizing routing decisions, MPLS enables switches to perform IP forwarding, optimizing the packet's route through the network core.

With MPLS, you can set up explicit data flow routes using path, resource availability, and requested quality of service (QoS) constraints.

You can enable MPLS on an IGX node in two ways—by connecting an external label switch controller (LSC), such as the Cisco 7204VXR, to function as an MPLS controller for all IGX nodes in the network, or by configuring an installed URM as an MPLS controller. Support for MPLS is enabled through the use of a common control interface, or VSI, between the IGX and the controller.


Note   Setting up MPLS requires one LSC for each partition on each IGX node running MPLS in the network.


Tip To save rack space, use multiple, separately-installed URMs as LSCs for multiple partitions on the same IGX node.

For more information on MPLS on the IGX, refer to MPLS Label Switch Controller and Enhancements 12.2(8)T.

MPLS Labeling Criteria

For enabling business IP services, the most significant benefit of MPLS is the ability to assign labels that have special meanings. Sets of labels distinguish destination address and application type or service class (see Figure 10-7).


Figure 10-7   Benefits of MPLS Labels


The MPLS label is compared to precomputed switching tables in core devices, such as the IGX ATM LSR, allowing each switch to automatically apply the correct IP services to each packet. Tables are precalculated, to avoid reprocessing packets at every hop. This strategy not only makes it possible to separate types of traffic, such as best-effort traffic from mission-critical traffic, it also makes an MPLS solution highly scalable.

Because MPLS uses different policy mechanisms to assign labels to packets, it decouples packet forwarding from the content of IP headers. Labels have local significance, and they are used many times in large networks. Therefore, it is almost impossible to run out of labels. This characteristic is essential to implementing advanced IP services such as QoS, large-scale VPNs, and traffic engineering.
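The precomputed-table forwarding described above reduces each hop to a single lookup keyed by the incoming label. The following hypothetical Python sketch illustrates the idea; the table layout, interface numbers, and label values are placeholders chosen for illustration, not an actual LSR data structure.

```python
# Illustrative label-table forwarding: one lookup per cell, no IP header
# processing in the core.

# (incoming interface, incoming label) -> (outgoing interface, outgoing label, CoS)
label_table = {
    (1, 51): (2, 91, "premium"),     # e.g. mission-critical traffic
    (1, 53): (2, 93, "available"),   # e.g. best-effort traffic
}

def forward(in_if, in_label):
    """Swap the label and pick the output interface and CoS treatment."""
    out_if, out_label, cos = label_table[(in_if, in_label)]
    return out_if, out_label, cos

result = forward(1, 51)
```

Because the label, not the IP header, selects both the next hop and the service class, best-effort and mission-critical traffic are separated with no per-packet route computation.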

MPLS CoS on the IGX

This section describes MPLS CoS with the use of the Cisco IGX 8410, 8420, and 8430 ATM label switch router (ATM LSR). MPLS CoS is also supported in networks using the URM as an LSC.


Note   The URM does not support MPLS CoS when configured as an LSR, and networks using URM-LSRs cannot run MPLS CoS across those network segments containing the URM-LSR.

The MPLS CoS feature enables network administrators to provide differentiated types of service across an MPLS switching network. Differentiated service satisfies a range of requirements by supplying each packet with the specific type of service specified by its CoS. Service can be specified in different ways—for example, through use of the IP precedence bit settings in IP packets or in source and destination addresses.

The MPLS CoS feature can be used optionally with MPLS virtual private networks. MPLS CoS can also be used in any MPLS switching network.

In supplying differentiated service, MPLS CoS offers packet classification, congestion avoidance, and congestion management. Table 10-7 lists these functions and how they are delivered.

Table 10-7   CoS Services and Features

Service CoS Function Description

Packet classification

Committed access rate (CAR). Packets are classified at the edge of the network before labels are assigned.

CAR uses the type of service (TOS) bits in the IP header to classify packets according to input and output transmission rates. CAR is often configured on interfaces at the edge of a network in order to control traffic into or out of the network. You can use CAR classification commands to classify or reclassify a packet.

Congestion avoidance

Weighted random early detection (WRED). Packet classes are differentiated based on drop probability.

WRED monitors network traffic, trying to anticipate and prevent congestion at common network and internetwork bottlenecks. WRED can selectively discard lower priority traffic when an interface begins to get congested. It can also provide differentiated performance characteristics for different classes of service.

Congestion management

Weighted fair queuing (WFQ). Packet classes are differentiated based on bandwidth and bounded delay.

WFQ is an automated scheduling system that provides fair bandwidth allocation to all network traffic. WFQ classifies traffic into conversations and uses weights (priorities) to determine how much bandwidth each conversation is allocated, relative to other conversations.

MPLS CoS lets you duplicate Cisco IOS IP CoS (Layer 3) features as closely as possible in MPLS switching devices, including label switching routers (LSRs), edge LSRs, and ATM label switching routers (ATM LSRs). MPLS CoS functions map nearly one-for-one to IP CoS functions on all interface types.

For additional information, refer to Cisco router and MPLS-related Cisco IOS documentation (see the "Cisco IOS Software Documentation" section).

Requirements for MPLS CoS

To use the MPLS CoS feature, your network must run these Cisco IOS features:

Also, the IGX must have:


Tip For information on switch software, Cisco IOS software, and card firmware compatibility, see the Compatibility Matrix at http://www.cisco.com/kobayashi/sw-center/sw-wan.shtml.

MPLS CoS in an IP+ATM Network

In IP+ATM networks, MPLS uses predefined sets of labels for each service class, so switches automatically know which traffic requires priority queuing. A different label is used per destination to designate each service class (see Figure 10-8).

There can be up to four labels per IP source-destination. Using these labels, core LSRs implement class-based WFQ to allocate specific amounts of bandwidth and buffer to each service class. Cells are queued by class to implement latency guarantees.

On a Cisco IP+ATM LSR, the weights assigned to each service class are relative, not absolute. The switch can therefore borrow unused bandwidth from one class and allocate it to other classes according to weight. This scenario enables very efficient bandwidth utilization. The class-based WFQ solution ensures that customer traffic is sent whenever unused bandwidth is available, whereas ordinary ATM VCs drop cells in oversubscribed classes even when bandwidth is available.
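The relative-weight behavior described above can be made concrete with a small calculation: split the link bandwidth by weight, then re-lend any share a class does not use to the remaining classes, again in proportion to their weights. The Python sketch below is a hypothetical illustration of that principle, not the switch's actual scheduler.

```python
# Illustrative weighted sharing with borrowing of unused bandwidth.

def allocate(link_bw, weights, demand):
    """Split link_bw among classes by relative weight, re-lending unused shares."""
    alloc = {c: 0.0 for c in weights}
    active = dict(weights)          # classes that still want bandwidth
    remaining = float(link_bw)
    while active and remaining > 1e-9:
        total_w = sum(active.values())
        spent = 0.0
        satisfied = []
        for c, w in active.items():
            share = remaining * w / total_w          # weighted fair share
            take = min(share, demand[c] - alloc[c])  # take no more than demanded
            alloc[c] += take
            spent += take
            if alloc[c] >= demand[c] - 1e-9:
                satisfied.append(c)
        for c in satisfied:
            del active[c]            # its unused share is re-lent next round
        if spent < 1e-9:
            break
        remaining -= spent
    return alloc

# Premium uses only 10 of its 50% share of a 100-unit link; the slack flows
# to the other classes in proportion to their weights (30:20).
a = allocate(100, {"premium": 50, "standard": 30, "available": 20},
             {"premium": 10, "standard": 80, "available": 80})
```

Here premium ends up with 10 units while standard and available absorb the unused 40 units in a 3:2 ratio (54 and 36), which is exactly the "borrowing" behavior that ordinary fixed-rate ATM VCs cannot provide.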


Figure 10-8   Multiple LVCs for IP QoS Services


Packets have precedence bits in the type of service (ToS) field of the IP header, set at either the host or an intermediate router, which could be the edge label switch router (LSR). The precedence bits define a CoS of 0 through 3, such as available, standard, premium, or control.

To establish CoS operation when the IGX and the associated LSC router are initially configured, the binding type assigned to each LVC interface on the IGX is configured as multiple LVCs.

Then, under the routing protocol (OSPF, for example), four LVCs are set up across the network for each IP source-to-destination requirement. Depending on the precedence bits set in the packets received by the edge LSR, the ATM cells sent to the ATM LSR belong to one of four classes (as determined by the cell label, that is, the VPI.VCI). Furthermore, two subclasses are distinguishable within each class through the cell loss priority (CLP) bit in the cells.

The ATM LSR then performs an MPLS data table lookup and assigns the appropriate CoS template and Qbin characteristics. The default mapping for CoS is listed in Table 10-8.

Table 10-8   Type of Service and Related CoS

Class of Service Mapping   Class of Service   IP ToS

Available                  0                  ToS 0/4
Standard                   1                  ToS 1/5
Premium                    2                  ToS 2/6
Control                    3                  ToS 3/7
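The pairing of ToS values in Table 10-8 (0 with 4, 1 with 5, and so on) follows from using only the low two bits of the three-bit IP precedence value to select the class. A hypothetical Python one-liner makes the mapping explicit; the class names come from the table, while the function name is an illustration:

```python
# Illustrative mapping from IP precedence (0-7) to the four MPLS CoS classes.

COS_NAMES = {0: "available", 1: "standard", 2: "premium", 3: "control"}

def cos_for_precedence(tos_precedence):
    """The low two bits of the precedence value pick the class (Table 10-8)."""
    return COS_NAMES[tos_precedence & 0x3]

assert cos_for_precedence(0) == cos_for_precedence(4) == "available"
assert cos_for_precedence(3) == cos_for_precedence(7) == "control"
```

This is why ToS 0 and 4 share the available class, 1 and 5 the standard class, and so on; the remaining precedence bit can then drive the CLP-based subclass distinction described above.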

Figure 10-9 shows an example of IP traffic across an ATM core consisting of IGX-ATM LSRs. The host is sending two types of traffic across the network: interactive video and nontime-critical data. Because multiple LVCs have automatically been generated for all IP source-destination paths, traffic for each source-destination pair is assigned to one of four LVCs, based on the precedence bit setting in the IP packet header.

In this case, the video traffic might be assigned to the premium CoS and transmitted across the network, starting with the cell label "51" out of Edge LSR-A and arriving at Edge LSR-C with the cell label "91." In each IGX-ATM LSR, the cells are processed with the preassigned bandwidth, queuing, and other ATM QoS functions suitable for "premium" traffic.

In a similar fashion, low-priority data traffic cells with the same IP source-destination might be assigned label "53" out of Edge LSR-A and arrive at Edge LSR-C with the label "93," receiving preassigned bandwidth, queuing, and other ATM QoS functions suitable to "available" traffic.


Figure 10-9   Example of Multiple LVCs CoS on the IGX


MPLS-Enabled VPNs

You can use MPLS to build an entirely new class of IP VPNs. MPLS-enabled IP VPNs (MPLS-VPNs) are connectionless networks with the same privacy as VPNs built using Frame Relay or ATM VCs.

Cisco MPLS solutions offer multiple IP service classes to enforce business-based policies. Providers can offer low-cost managed IP services because they can consolidate services over common infrastructure, and improve provisioning and network operations.

Although Frame Relay and multiservice ATM deliver privacy and CoS, IP delivers any-to-any connectivity, and MPLS on Cisco IP+ATM switches, such as the IGX-ATM LSR, enables providers to offer the benefits of business-quality IP services over their ATM infrastructures.

MPLS-VPNs, created in Layer 3, are connectionless, and therefore substantially more scalable and easier to build and manage than conventional VPNs.

In addition, value-added services, such as application and data hosting, network commerce, and telephony services, can easily be added to a specific MPLS-VPN because the service provider's backbone recognizes each MPLS-VPN as a separate, connectionless IP network. MPLS over IP+ATM VPN networks combine the scalability and flexibility of IP networks with the performance and QoS capabilities of ATM.

From a single access point, it is now possible to deploy multiple VPNs, each of which designates a different set of services (see Figure 10-10). This flexible way of grouping users and services makes it possible to deliver new services more quickly and cost-effectively. The ability to associate closed groups of users with specific services is critical to service provider value-added service strategies.


Figure 10-10   VPN Network


The VPN network must be able to recognize traffic by application type, such as voice, mission-critical applications, or e-mail. The network should easily separate traffic based on its associated VPN without configuring complex, point-to-point meshes.

The network must be "VPN aware" so that the service provider can easily group users and services into intranets or extranets with the services they need. In such networks, VPNs offer service providers a technology that is highly scalable and allows subscribers to quickly and securely provision extranets to new partners. MPLS brings "VPN awareness" to switched or routed networks. It enables service providers to quickly and cost-effectively deploy secure VPNs of all sizes over the same infrastructure.

VPN Quality of Service

As part of their VPN services, service providers can offer premium services defined by SLAs to expedite traffic from certain customers or applications. QoS in IP networks gives devices the intelligence to preferentially handle traffic as dictated by network policy.

The QoS mechanisms give network managers the ability to control the mix of bandwidth, delay, jitter, and packet loss in the network. QoS is not a device feature; it is an end-to-end system architecture. A robust QoS solution includes a variety of technologies that interoperate to deliver scalable, media-independent services throughout the network, with system-wide performance monitoring capabilities.


Note   VPNs can be used with the CoS feature for MPLS. MPLS-VPN does not require use of MPLS CoS. MPLS-VPNs with CoS are supported on the URM-LSC but are not supported on the URM-LSR.

MPLS-enabled IP VPN networks provide the foundation for delivering value-added IP services, such as multimedia application support, packet voice, and application hosting, all of which require specific service quality and privacy. Because QoS and privacy are an integral part of MPLS, they no longer require separate network engineering.

Cisco's comprehensive set of QoS capabilities enables providers to prioritize service classes, allocate bandwidth, avoid congestion, and link Layer 2 and Layer 3 QoS mechanisms:

MPLS makes it possible to apply scalable QoS across very large routed networks and Layer 3 IP QoS in ATM networks, because providers can designate sets of labels that correspond to service classes. In routed networks, MPLS-enabled QoS substantially reduces processing throughout the core for optimal performance. In ATM networks, MPLS makes end-to-end Layer 3-type services possible.

Traditional ATM and Frame Relay networks implement CoS with point-to-point virtual circuits, but this is not scalable because of high provisioning and management overhead. Placing traffic into service classes at the edge enables providers to engineer and manage classes throughout the network. If service providers manage networks based on service classes, rather than point-to-point connections, they can substantially reduce the amount of detail they must track, and increase efficiency without losing functionality.

Compared to per-circuit management, MPLS-enabled CoS in ATM networks provides virtually all the benefits of point-to-point meshes with far less complexity. Using MPLS to establish IP CoS in ATM networks eliminates per-VC configuration. The entire network is easier to provision and engineer.

VPN Security

Subscribers want assurance that their VPNs, applications, and communications are private and secure. Cisco offers many robust security measures to keep information confidential:

In intranet and extranet VPNs based on Cisco MPLS, packets are forwarded using a unique route distinguisher (RD). RDs are unknown to end users and uniquely assigned automatically when the VPN is provisioned. To participate in a VPN, a user must be attached to its associated logical port and have the correct RD. The RD is placed in packet headers to isolate traffic to specific VPN communities.

MPLS packets are forwarded using labels attached in front of the IP header. Because the MPLS network does not read IP addresses in the packet header, it allows the same IP address space to be shared among different customers, simplifying IP address management.

Service providers can deliver fully managed, MPLS-based VPNs with the same level of security that users are accustomed to in Frame Relay/ATM services, without the complex provisioning associated with manually establishing PVCs and performing per-VPN customer premises equipment (CPE) router configuration.

QoS addresses two fundamental requirements for applications that run on a VPN: predictable performance and policy implementation. Policies are used to assign resources to applications, project groups, or servers in a prioritized way. The increasing volume of network traffic, along with project-based requirements, results in the need for service providers to offer bandwidth control and to align their network policies with business policies in a dynamic, flexible way.

VPNs based on Cisco MPLS technology scale to support many thousands of business-quality VPNs over the same infrastructure. MPLS-based VPN services solve peer adjacency and scalability issues common to large virtual circuit (VC) and IP tunnel topologies. Complex permanent virtual circuit/switched virtual circuit (PVC/SVC) meshes are no longer needed, and providers can use new, sophisticated traffic engineering methods to select predetermined paths and deliver IP QoS to premium business applications and services.

MPLS VPNs over IP+ATM Backbones

Service providers can use MPLS to build intelligent IP VPNs across their existing ATM networks. Because all routing decisions are precomputed into switching tables, MPLS both expedites IP forwarding in large ATM networks at the provider edge, and makes it possible to apply rich Layer 3 services via Cisco IOS technologies in Layer 2 cores.

A service provider with an existing ATM core can deploy MPLS-enabled edge switches or routers (LSRs) to enable the delivery of differentiated business IP services. The service provider needs only a small number of VCs to interconnect provider edge switches or routers to deliver many secure VPNs.

Cisco IP+ATM solutions give ATM networks the ability to intelligently "see" IP application traffic as distinct from ATM/Frame Relay traffic. By harnessing the attributes of both IP and ATM, service providers can provision intranet or extranet VPNs. Cisco enables IP+ATM solutions with MPLS, merging the application of Cisco IOS software with carrier-class ATM switches (see Figure 10-11).


Figure 10-11   MPLS-VPNs in Cisco IP+ATM Network


Without MPLS, IP transport over ATM networks requires a complex hierarchy of translation protocols to map IP addressing and routing into ATM addressing and routing.

MPLS eliminates complexity by mapping IP addressing and routing information directly into ATM switching tables. The MPLS label-swapping paradigm is the same mechanism that ATM switches use to forward ATM cells. This solution has the added benefit of allowing service providers to continue offering their current Frame Relay, leased-line, and ATM services portfolio while enabling them to provide differentiated business-quality IP services.

Built-In VPN Visibility

To cost-effectively provision feature-rich IP VPNs, providers need features that distinguish between different types of application traffic and apply privacy and QoS—with far less complexity than an overlay IP tunnel, Frame Relay, or ATM "mesh."

Compared to an overlay solution, an MPLS-enabled network can separate traffic and provide privacy without tunneling or encryption. MPLS-enabled networks provide privacy on a network-by-network basis, much as Frame Relay or ATM provides it on a connection-by-connection basis. The Frame Relay or ATM VPN offers basic transport, whereas an MPLS-enabled network supports scalable VPN services and IP-based value added applications. This approach is part of the shift in service provider business from a transport-oriented model to a service-focused one.

In MPLS-enabled VPNs, whether over an IP switched core or an ATM LSR switch core, the provider assigns each VPN a unique identifier called a route distinguisher (RD) that is different for each intranet or extranet within the provider network. Forwarding tables contain unique addresses, called VPN-IP addresses (see Figure 10-12), constructed by linking the RD with the customer IP address. VPN-IP addresses are unique for each endpoint in the network, and entries are stored in forwarding tables for each node in the VPN.


Figure 10-12   VPN-IP Address Format


BGP Protocol

Border Gateway Protocol (BGP) is a routing information distribution protocol that defines who can talk to whom using MPLS extensions and community attributes. In an MPLS-enabled VPN, BGP distributes information about VPNs only to members of the same VPN, providing native security through traffic separation. Figure 10-13 shows an example of a service provider network with service provider edge label switch routers (PE) and customer edge routers (CE). The ATM backbone switches are indicated by a double-ended arrow labeled "BGP."

Additional security is assured because all traffic is forwarded using LSPs, which define a specific path through the network that cannot be altered. This label-based paradigm is the same property that assures privacy in Frame Relay and ATM connections.


Figure 10-13   VPN with Service Provider Backbone


The provider, not the customer, associates a specific VPN with each interface when the VPN is provisioned. Within the provider network, RDs are associated with every packet, so VPNs cannot be penetrated by attempting to "spoof" a flow or packet. Users can participate in an intranet or extranet only if they reside on the correct physical port and have the proper RD. This setup makes Cisco MPLS-enabled VPNs difficult to enter, and provides the same security levels users are accustomed to in a Frame Relay, leased-line, or ATM service.

VPN-IP forwarding tables contain labels that correspond to VPN-IP addresses. These labels route traffic to each site in a VPN (see Figure 10-14).

Because labels are used instead of IP addresses, customers can keep their private addressing schemes within the corporate intranet without requiring Network Address Translation (NAT) to pass traffic through the provider network. Traffic is separated between VPNs by using a logically distinct forwarding table for each VPN. Based on the incoming interface, the switch selects a specific forwarding table, which lists only the valid destinations in the VPN, as specified by BGP. To create extranets, a provider explicitly configures reachability between VPNs; NAT configurations may be required.
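The per-VPN lookup just described can be sketched in a few lines of Python. This is an illustrative model, not switch code; the interface names, VRF names, prefixes, and labels are invented:

```python
# Illustrative model of per-VPN forwarding: the incoming interface selects
# a logically distinct forwarding table, which lists only the destinations
# that BGP has declared valid for that VPN.

# Interface-to-VRF binding (hypothetical names).
interface_vrf = {"e1/0": "vpn_red", "e2/0": "vpn_blue"}

# One forwarding table per VRF; overlapping customer prefixes are fine
# because a lookup never crosses a VRF boundary.
vrf_tables = {
    "vpn_red":  {"10.0.0.0/8": "label_17"},
    "vpn_blue": {"10.0.0.0/8": "label_42"},  # same prefix, different VPN
}

def lookup(ingress_interface, prefix):
    """Select the VRF table by incoming interface, then look up the prefix."""
    vrf = interface_vrf[ingress_interface]
    return vrf_tables[vrf].get(prefix)  # None if unreachable in this VPN

print(lookup("e1/0", "10.0.0.0/8"))  # each VPN resolves its own entry
print(lookup("e2/0", "10.0.0.0/8"))
```

Note how the same prefix resolves to different labels depending on the ingress interface, which is what lets customers overlap address space without NAT.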


Figure 10-14   Using MPLS to Build VPNs


One strength of MPLS is that providers can use the same infrastructure to support many VPNs and do not need to build separate networks for each customer. VPNs loosely correspond to "subnets" of the provider network.

This solution builds IP VPN capabilities into the network itself, so providers can configure a single network for all subscribers that delivers private IP network services such as intranets and extranets without complex management, tunnels, or VC meshes. Application-aware QoS makes it possible to apply customer-specific business policies to each VPN. Adding QoS services to MPLS-based VPNs works seamlessly; the provider Edge LSR assigns correct priorities for each application within a VPN.

MPLS-enabled IP VPN networks are easier to integrate with IP-based customer networks. Subscribers can seamlessly interconnect with a provider service without changing their intranet applications, because these networks have application awareness built in, for privacy, QoS, and any-to-any networking. Customers can even transparently use their private IP addresses without NAT.

The same infrastructure can support many VPNs for many customers, removing the burden of separately engineering a new network for each customer, as with overlay VPNs.

It is also much easier to perform adds, moves, and changes. If a company wants to add a new site to a VPN, the service provider only has to tell the CPE router how to reach the network, and configure the LSR to recognize VPN membership of the CPE. BGP updates all VPN members automatically.

This scenario is easier, faster, and less expensive than building a new point-to-point VC mesh for each new site. Adding a new site to an overlay VPN entails updating the traffic matrix, provisioning point-to-point VCs from the new site to all existing sites, updating OSPF design for every site, and reconfiguring each CPE for the new topology.

Virtual Routing/Forwarding

Each VPN is associated with one or more VPN routing/forwarding instances (VRFs). A VRF table defines a VPN at a customer site attached to a PE router. A VRF table consists of the following:

  • An IP routing table
  • A Cisco Express Forwarding (CEF) table
  • A set of interfaces that use the CEF table
  • A set of rules and routing protocol parameters that control the information included in the routing table

A 1-to-1 relationship does not necessarily exist between customer sites and VPNs. A specific site can be a member of multiple VPNs. However, a site may be associated with only one VRF. A site VRF contains all the routes available to the site from the VPNs of which it is a member.

Packet forwarding information is stored in the IP routing table and the CEF table for each VRF. Together, these tables are analogous to the forwarding information base (FIB) used in Label Switching.

A logically separate set of routing and CEF tables is constructed for each VRF. These tables prevent information from being forwarded outside a VPN, and prevent packets that are outside a VPN from being forwarded to a router within the VPN.

VPN Route-Target Communities

The distribution of VPN routing information is controlled by using VPN route target communities, implemented by BGP extended communities.

When a VPN route is injected into BGP, it is associated with a list of VPN route target extended communities. Typically the list of VPN communities is set through an export list of extended community-distinguishers associated with the VRF from which the route was learned.

Associated with each VRF is an import list of route-target communities. This list defines the values to be verified by the VRF table, before a route is eligible to be imported into the VPN routing instance.

For example, if the import list for a particular VRF includes community-distinguishers of A, B, and C, then any VPN route that carries any of those extended community-distinguishers—A, B, or C—will be imported into the VRF.
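The import-list check in this example can be expressed as a one-line set intersection. The sketch below is an illustrative model of the documented behavior, not router code; the community values are the hypothetical A, B, C, D from the example:

```python
# Route-target import filtering, modeled on the example above: a VPN route
# is imported into a VRF if it carries at least one extended community
# that appears in the VRF's import list.

def eligible_for_import(route_targets, import_list):
    """True if any route-target on the route matches the VRF import list."""
    return bool(set(route_targets) & set(import_list))

vrf_import = {"A", "B", "C"}                        # import list for a VRF

assert eligible_for_import({"B"}, vrf_import)       # carries B -> imported
assert not eligible_for_import({"D"}, vrf_import)   # no match -> filtered
```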

BGP Distribution of VPN Routing Information

A service provider edge (PE) router can learn an IP prefix from a customer edge (CE) router by static configuration, through a Border Gateway Protocol (BGP) session with the CE router, or through the Routing Information Protocol (RIP) with the CE router.

After the router learns the prefix, it generates a VPN-IPv4 (vpnv4) prefix based on the IP prefix, by linking an 8-byte route distinguisher to the IP prefix. This extended VPN-IPv4 address uniquely identifies hosts within each VPN site, even if the site is using globally nonunique (unregistered private) IP addresses.

The route distinguisher (RD) used to generate the VPN-IPv4 prefix is specified by a configuration command on the PE.
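Conceptually, the vpnv4 prefix is just the configured RD prepended to the learned IP prefix. The sketch below shows that construction in a textual RD:prefix form for readability (on the wire the RD is an 8-byte value); the RD and prefix values are invented:

```python
# Sketch of VPN-IPv4 (vpnv4) prefix construction: an 8-byte route
# distinguisher (RD) is prepended to the IPv4 prefix, making otherwise
# overlapping private addresses unique within the provider network.

def make_vpnv4(rd, ipv4_prefix):
    """Combine a configured RD with a prefix learned from a CE router."""
    return f"{rd}:{ipv4_prefix}"

# Two customers using the same unregistered prefix remain distinct:
a = make_vpnv4("100:1", "10.1.0.0/16")
b = make_vpnv4("100:2", "10.1.0.0/16")
assert a != b
```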

BGP uses VPN-IPv4 addresses to distribute network reachability information for each VPN within the service provider network. BGP distributes routing information between IP domains (known as autonomous systems) using messages to build and maintain routing tables. BGP communication takes place at two levels: within the domain (interior BGP or IBGP) and between domains (external BGP or EBGP).

BGP propagates vpnv4 information using the BGP Multi-Protocol extensions for handling these extended addresses (see RFC 2283, Multi-Protocol Extensions for BGP-4). BGP propagates reachability information (expressed as VPN-IPv4 addresses) among PE routers; the reachability information for a specific VPN is propagated only to other members of that VPN. The BGP Multi-Protocol extensions identify the valid recipients for VPN routing information. All members of the VPN learn routes to other members.

MPLS Label Forwarding

Based on the routing information stored in the IP routing table and the CEF table for each VRF, Cisco label switching uses extended VPN-IPv4 addresses to forward packets to their destinations.

An MPLS label is associated with each customer route. The label is assigned by the PE router that originated the route, and is used to direct data packets to the correct CE router.

Label forwarding across the provider backbone is based on either dynamic IP paths or Traffic Engineered paths. A customer data packet has two levels of labels attached when it is forwarded across the backbone.
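The two-level stack can be pictured as a simple push/pop sequence. This is a conceptual sketch only (label values invented): core LSRs swap the outer label, and the inner VPN label is exposed only at the egress PE:

```python
# Sketch of the two-level label stack on a customer packet crossing the
# backbone: the top (IGP) label steers the packet to the egress PE; the
# bottom (VPN) label selects the customer route at that PE.

stack = []
stack.append("vpn_label_42")   # inner label: identifies the customer route
stack.append("igp_label_17")   # outer label: path across the provider core

# Core LSRs see and swap only the top label; the VPN label rides untouched.
assert stack[-1] == "igp_label_17"

# At the egress PE the outer label is popped, exposing the VPN label:
stack.pop()
assert stack[-1] == "vpn_label_42"
```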

The PE router associates each CE router with a forwarding table that contains only the set of routes that should be available to that CE router.

no auto-summary
redistribute static
exit-address-family
!
address-family ipv4 unicast vrf vrf2
neighbor 10.20.1.11 activate
no auto-summary
redistribute static
exit-address-family
!
! Define a VRF static route
ip route vrf vrf1 12.0.0.0 255.0.0.0 e5/0/1 10.20.0.60

Virtual Circuit Merge on the IGX


Note   VC merge on the IGX is not supported in releases preceding Switch Software Release 9.3.40.

Virtual circuit (VC) merge on the IGX improves the scalability of MPLS networks by combining multiple incoming VCs into a single outgoing VC (known as a merged VC). VC merge is implemented as part of the output buffering for the ATM interfaces on the UXM-E, and is performed in the egress direction for the connections.

Both interslave and intraslave connections are supported. However, neither the OAM cell format nor tagABR for the MPLS controller are supported.


Note   VC merge is not supported on the UXM card.

To use VC merge on the UXM-E, connections must meet the following criteria:


Note    Virtual path connections (VPCs) are not supported by VC merge on the IGX.

MPLS Connections Supported on the IGX

Direct MPLS connections on the IGX are only supported on the URM card. To configure MPLS connections not listed in Table 10-9, use an external label edge router (LER).


Note   For VISM connections, the URM only supports VoIP.

Table 10-9   Connections Supported on the URM

Hardware Platform  Connection Endpoint  Connection Type  Voice Connection  Data Connection

Cisco BPX          BXM                  CBR              Y                 Y
Cisco BPX          BXM                  VBRrt            Y                 Y
Cisco BPX          BXM                  VBRnt            Y                 Y
Cisco BPX          BXM                  ABR              N                 Y
Cisco BPX          BXM                  UBR              N                 Y
Cisco BPX          BXM                  FST              N                 Y
Cisco IGX          UXM                  CBR              Y                 Y
Cisco IGX          UXM                  VBRrt            Y                 Y
Cisco IGX          UXM                  VBRnt            Y                 Y
Cisco IGX          UXM                  ABR              N                 Y
Cisco IGX          UXM                  UBR              N                 Y
Cisco IGX          UXM                  FST              N                 Y
Cisco IGX          UFM                  FR               Y (FRF.8 SIW)     Y (FRF.8 SIW)
Cisco IGX          UFM                  FST              N                 Y (FRF.8 SIW)
Cisco IGX          URM                  CBR              Y                 Y
Cisco IGX          URM                  VBRrt            Y                 Y
Cisco IGX          URM                  VBRnt            Y                 Y
Cisco IGX          URM                  ABR              N                 Y
Cisco IGX          URM                  UBR              N                 Y
Cisco IGX          URM                  FST              N                 Y
Cisco IGX          CVM                  —                N                 N
Cisco IGX          HDM                  —                N                 N
Cisco IGX          LDM                  —                N                 N
Cisco MGX          VISM                 —                Y                 N
Cisco MGX          RPM                  —                N                 Y
Cisco MGX          FRSM                 FR               Y (FRF.8 SIW)     Y (FRF.8 SIW)
Cisco MGX          FRSM                 FST              N                 Y (FRF.8 SIW)
Cisco MGX          AUSM                 CBR              Y                 Y
Cisco MGX          AUSM                 VBRrt            Y                 Y
Cisco MGX          AUSM                 VBRnt            Y                 Y
Cisco MGX          AUSM                 ABR              N                 Y
Cisco MGX          AUSM                 UBR              N                 Y
Cisco MGX          AUSM                 FST              N                 Y


Note   Use FRF.8 SIW transparent mode for VoATM connections, and use FRF.8 SIW translational mode for VoIP and data connections.

IP Service Provisioning

You can provision IP services of varying complexities on the IGX using the URM card.

If you want to use the URM as an in-chassis router for VoIP or VoATM, see the "Cards" chapter for basic URM setup, and the Cisco IOS software documentation supporting the Cisco IOS software release used on the URM.

If you want to use the URM as an in-chassis router with IPsec-VPN capabilities, see the "Installing the Encryption Advanced Interface Module" section in the Cisco IGX 8400 Series Installation Guide for information on installing the correct AIM module for VPN. For information on configuring IPSec, refer to Cisco IOS documentation, as listed in the "Cisco IOS Software Documentation" section.

The following sections describe how to set up the IGX switch for use with external controllers, preparatory to configuring the IGX for MPLS. For information on configuring MPLS on the IGX, see the "MPLS Configuration on the IGX" section. For information on configuring MPLS-VPNs on the IGX, see the "MPLS VPN Sample Configuration" section.


Tip For additional Cisco IOS features supported on the IGX, see the release notes document for the Cisco IOS software release you intend to use on the URM.

Planning for Controller Resources

Controllers require at least 150 cells per second (cps) of free bandwidth to be reserved for signaling on the IGX port. If a minimum of 150 cps is not available on the port, the switch software command addctrlr fails. To calculate free bandwidth, use the following equation:

free bandwidth = port speed - PVC maximum bandwidth - VSI bandwidth

In some cases, you may need to change the bandwidth allocated to AutoRoute to obtain a free bandwidth of 150 cps. Use the switch software command, cnfrsrc, to reallocate bandwidth on a port.
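A worked example of the free-bandwidth check may help; the cell rates below are invented for illustration, and the function simply evaluates the equation above:

```python
# Worked example of the free-bandwidth check:
#   free bandwidth = port speed - PVC maximum bandwidth - VSI bandwidth
# and addctrlr requires at least 150 cps free for controller signaling.

SIGNALING_MINIMUM_CPS = 150

def free_bandwidth(port_speed_cps, pvc_max_cps, vsi_cps):
    return port_speed_cps - pvc_max_cps - vsi_cps

# Hypothetical port: 96000 cps line rate, heavily committed to PVCs and VSI.
fbw = free_bandwidth(port_speed_cps=96000, pvc_max_cps=50000, vsi_cps=45900)
print(fbw)                          # only 100 cps left on this port
assert fbw < SIGNALING_MINIMUM_CPS  # addctrlr would fail here; use cnfrsrc
                                    # to reallocate AutoRoute bandwidth first
```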

VSI Configuration


Note   While you can add a controller to a UXM interface without configuring a VSI partition on that same interface, you will not be able to use the interface for VSI connections without also configuring a VSI partition. For example, MPLS controller support for XTAG interfaces includes setup of a tag-control-VC between the hosting interface and the XTAG interface. This VC is a VSI connection, so the controller cannot configure the connection unless the hosting interface has a VSI partition.

When configuring a node for VSI, complete the following steps:


Step 1   Plan your resources (see the "Logical Switch Partitioning and Allocation of Resources" section).

Step 2   Using the switch software commands uptrk, upln, and upport, activate the desired trunk, line, and port for the configured partition.

Step 3   Using the switch software command cnfrsrc, configure partition resources on the active interface (see Table 10-10 for command parameters).


Tip The VPI range is of local significance and does not have to be the same for each port in a node. However, for tracking purposes, Cisco recommends keeping the VPI range the same for each port in the node.

Table 10-10   cnfrsrc Command Parameters

Parameter (Object) Name Range/Values Default Description

VSI partition

1-3

1

Specifies a unique partition ID.

Partition state

D = Disable Partition
E = Enable Partition

D

Enables or disables the partition. This object is mandatory.

Min LCNs

0-8000

Note 0-941 on the URM

0

Specifies the minimum LCNs (connections) guaranteed for the selected partition.

Max LCNs

0-8000

Note 0-941 on the URM

0

Specifies the maximum LCNs (connections) permitted for the selected partition.

Start VPI

0-255 (UNI)
0-4095 (NNI)

Note The URM does not support NNI.

0

Specifies the starting VPI for the selected partition.

End VPI

0-255 (UNI)
0-4095 (NNI)

Note The URM does not support NNI.

0

Specifies the ending VPI for the selected partition.

Min Bw

0-maximum line rate

0

Specifies the minimum bandwidth available for the selected partition.

Max Bw

0-maximum line rate

0

Specifies the maximum bandwidth available for the selected partition.
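The range limits in Table 10-10 can be modeled as a small validation sketch. This is an illustrative model of the documented limits only, not switch software; the function name and arguments are invented:

```python
# Model of the cnfrsrc parameter ranges from Table 10-10:
#   partition ID 1-3; LCNs 0-8000 (0-941 on the URM);
#   VPI 0-255 on UNI, 0-4095 on NNI; the URM does not support NNI.

def check_partition(partition, max_lcns, end_vpi, card="UXM", nni=False):
    if not 1 <= partition <= 3:
        return False                      # only partitions 1-3 exist
    lcn_cap = 941 if card == "URM" else 8000
    if not 0 <= max_lcns <= lcn_cap:
        return False                      # over the per-card LCN limit
    if card == "URM" and nni:
        return False                      # the URM does not support NNI
    vpi_cap = 4095 if nni else 255
    return 0 <= end_vpi <= vpi_cap

assert check_partition(1, 512, 240)                  # valid UXM UNI setup
assert not check_partition(2, 2000, 200, card="URM") # over the 941 LCN cap
assert not check_partition(4, 100, 10)               # invalid partition ID
```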

Step 4   Using the switch software command addctrlr, add a controller.


Note    The switch software command addctrlr supports only MPLS and generic VSI controllers that do not require support for the Annex G protocol.


Tip The switch software command addctrlr requires you to specify a controller ID, a unique identifier between 1 and 16. Each controller must be given a different controller ID.

Step 5   Assign an ATM CoS template to an interface (ATM services only—see "ATM Service—Functional Overview").

Step 6   Add a slave (for more information on VSI masters and slaves, see "VSI Masters and Slaves").

Step 7   Configure slave redundancy (UXM and UXM-E only).


Tip The URM does not support hot slave redundancy. For the URM, warm redundancy must be configured by setting up redundant partitions. See MPLS Label Switch Controller and Enhancements 12.2(8)T.

Step 8   Use the switch software command, dspctrlrs, to display your controller configuration.

Step 9   Manage your resources.


Tip Use dspctrlrs to display all VSI controllers attached to the IGX. Use delctrlr to delete a controller from the IGX.


Note   MPLS controllers serving as an interface shelf are designated as Label Switch Controllers (LSCs).



Logical Switch Partitioning and Allocation of Resources

A logical switch is configured by enabling and allocating resources to the partition. This must be done for each partition in the interface. The same procedure must be followed to define each logical switch.

The following resources are partitioned among the different logical switches:

  • Logical connection numbers (LCNs)
  • Bandwidth
  • VPI range

Resources are configured and allocated per interface, but the pool of resources may be managed at a different level. Bandwidth is limited by the interface rate, which places the limitation at the interface level. Similarly, the VPI range is also defined at the interface level.

Configure these parameters on a VSI partition on an interface:

Configure partitions by using the cnfrsrc command.


Note    Switch Software Release 9.3 or later supports up to three partitions.

Table 10-11 shows the three resources that must be configured for a partition designated ifc1 (interface controller 1).

Table 10-11   ifc1 Parameters (Virtual Switch Interface)

ifc1 Parameters Minimum Maximum

lcns

min_lcn

max_lcn

bw

min_bw

max_bw

vpi

min_vpi

max_vpi

The controller is supplied with a range of LCNs, VPIs, and bandwidth. Examples of available VPI values for a VPI partition are listed in Table 10-12.

Table 10-12   VPI Range for Partitioning

UXM Range

Trunks

1-4095 VPI range (UNI/NNI).

Ports

UNI: 1 - 255/NNI: 1 - 4095.

Virtual trunk

Only one VPI available per virtual trunk because a virtual trunk is currently delineated by a specific VPI.

When a trunk is activated, the entire bandwidth is allocated to AutoRoute. To change the allocation to provide resources for a VSI, use the cnfrsrc command on the IGX switch.

You can configure partition resources between AutoRoute PVCs and three VSI LSC controllers. Up to three VSI controllers in different control planes can independently control the switch without communication between controllers. The controllers are unaware of other control planes sharing the switch because different control planes use different partitions of the switch resources.

The following limitations apply to multiple VSI partitioning:

Multiple Partition Example

Each logical switch represents a collection of interfaces, each with an associated set of resources.

The following example is an IGX switch with four interfaces:

See Example 10-1 for the interface configurations for Figure 10-15. See Table 10-13 for an example with three partitions enabled.


Figure 10-15   Virtual Switches


To display the partitioning resources of an interface, use the dsprsrc command, as in Example 10-1.


Example 10-1   IGX Configuration with Multiple Partitions
sw188 TN Cisco IGX 8420 9.3.10 Aug. 16 2000
16:47 GMT
VSI Partitions on this node
Interface (slot.port) Part 1 Part 2 Part 3
Line 10.1 E E D
VTrunk 10.2.1 D D D
Trunk 11.1 E E D
VTrunk 11.7.1 E D D
Last Command:dsprsrc
Next Command:

Table 10-13   Partitioning Example

Interface AutoRoute Partition 1 Partition 2 Partition 3

4.2

lcns: 1000
bw: 20000 cps

Enable
lcns: 2000
bw:1000-2000 cps
vpi: 200-250

Enable
lcns: 2000
bw: 77840-77840 cps
vpi: 20-29

Enable
lcns: 2000
bw: 1000-2000 cps
vpi: 30-50

Slave Redundancy for the UXM and UXM-E

Slave redundancy keeps the redundant card in a hot standby state for all VSI connections. This is accomplished by a bulk update on the standby slave of the existing connections at the time that Y redundancy is added, followed by incremental updates of all subsequent connections.

The Slave Hot Standby Redundancy feature enables the redundant card to fully duplicate all VSI connections on the active card, and prepare for operation on switchover. On bringup, the redundant card initiates a bulk retrieval of connections from the active card for fast sync-up. Subsequently, the active card updates the redundant card on a real-time basis.

The VSI Slave Hot Standby Redundancy feature preprograms the standby slave card identically to the active card, so that switchover can be carried out quickly when the active card fails. Even without the VSI portion, the UXM card already provides a hot standby mechanism by duplicating CommBus messages from the NPM to the standby UXM card.
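The bulk-then-incremental sync sequence can be sketched as follows. This is a conceptual model of the behavior described above, not card firmware; the connection identifiers are invented:

```python
# Sketch of the hot-standby sync sequence: on bringup the standby slave
# pulls a bulk copy of all existing VSI connections, then mirrors each
# subsequent connection incrementally, staying preprogrammed for a fast
# switchover.

active_conns = {"vc1", "vc2", "vc3"}

# Bulk retrieval at the time Y redundancy is added:
standby_conns = set(active_conns)

# Subsequent connections are mirrored on a real-time, incremental basis:
for new_conn in ("vc4", "vc5"):
    active_conns.add(new_conn)
    standby_conns.add(new_conn)

# On switchover the standby already holds every connection:
assert standby_conns == active_conns
```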

The following sections describe types of communication between the switch software and firmware to support VSI master and slave redundancy.

VSI Slave Redundancy Mismatch Checking

To provide a smooth migration of the VSI feature on the UXM card, line and trunk Y-redundancy is supported. You can pair cards with and without VSI capability as a Y-redundant pair, provided the feature is not enabled on that slot; in that case, switch software does not perform mismatch checking when the UXM firmware does not support the VSI feature. The VSI capability is treated as a card attribute and added to the attribute list.

In a Y-redundancy pair configuration, the VSI capability is determined by the minimum of the two cards. A card without VSI capability mismatches if any of the interfaces has an active partition or controller. Attempts to enable a partition or add a controller on a logical card that does not support VSI are blocked.

Adding and Deleting Controllers and Slaves

You add an LSC to a node by using the addctrlr command. When adding a controller, you must specify a partition ID. The partition ID identifies the logical switch assigned to the controller. The valid partitions are 1, 2, and 3.


Note   You can configure partition resources between Automatic Routing Management PVCs and three VSI LSC controllers.

To display the list of controllers in the node, use the command dspctrlrs. The functionality is also available via SNMP using the switchIfTable in the switch MIB.

The management of resources on the VSI slaves requires that each slave in the node have a communication control PVC to each of the controllers attached to the node. When a controller is added to the IGX by using the addctrlr command, the NPM sets up the set of master-slave connections between the new controller port and each of the active slaves in the switch. The connections are set up using a well-known VPI and VCI. The default VPI for the master-slave connection is 0; the default VCI is 40 + (slot - 2), where slot is the logical slot number of the slave.
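The default VPI/VCI numbering for those master-slave control PVCs follows directly from the formula above, as this small check illustrates (a model of the documented defaults only):

```python
# Default master-slave control VC numbering: VPI 0, VCI 40 + (slot - 2),
# where slot is the logical slot number of the slave card.

def control_vc(slot):
    """Default (VPI, VCI) for the master-slave connection to a slave."""
    return (0, 40 + (slot - 2))

assert control_vc(2) == (0, 40)    # lowest slave slot maps to VCI 40
assert control_vc(10) == (0, 48)
```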


Note   After the controllers are added to the node, the connection infrastructure is always present. The controllers may or may not decide to use it, depending on their state. Inter-slave channels are present whether controllers are present or not.

The addition of a controller to a node fails if not enough channels are available to set up the control VCs (14 in a 16-slot switch through 30 in a 32-slot switch) in one or more of the UXM slaves.

When the slaves receive the controller configuration message from the NPM, they send a VSI message trap to the controller announcing their existence. This prompts an exchange from the controller that launches the interface discovery process with the slaves.

When the controller is added, the NPM sends a VSI configuration CommBus message to each slave with the controller information, and sets up the corresponding control VCs between the controller port and each slave.

Adding a Slave

When a new slave is activated in the node by upping the first line or trunk on a UXM card that supports VSI, the NPM sends a VSI configuration CommBus (internal IGX protocol) message with the list of the controllers attached to the switch.

The NPM sets up master-slave connections from each controller port on the switch to the added slave. It also sets up interslave connections between the new slave and the other active VSI slaves.


Note   Slaves in standby mode are not considered VSI configured and are not accounted for in the interslave connections.

Deleting a Controller

Use the command delctrlr to delete controllers that have been added to interfaces.

When one of the controllers is deleted by using the delctrlr command, the master-slave connections and connections associated with this controller on all the UXM cards in the switch are also deleted. VSI partitions remain configured on the node.

The deletion of the controller triggers a new VSI configuration (internal) message. This message includes the list of the controllers attached to the node, with the deleted controller removed from the list. This message is sent to all active slaves in the node.

As long as one controller is attached to the node with a specific partition, the resources assigned to the partition are not affected by deletion of any other controllers from the node. The slaves only release all VSI resources used on the partition when the partition itself is disabled.

Deleting a Slave

When a slave is deactivated by downing the last line or trunk on the card, the NPM tears down the master-slave connections between the slave and each of the controller ports on the node. The NPM also tears down all the interslave connections connecting the slave to other active VSI slaves.

VC Merge on the IGX


Note   Because VC merge is not supported on the UXM, y-redundancy cannot be set up using a UXM-E and a UXM without generating a feature mismatch error. If y-redundancy is set up between a UXM-E and a UXM, the VC merge feature cannot be enabled.

VC merge on the IGX is supported in Switch Software Release 9.3.40.

Before setting up y-redundancy on two UXM-E cards, make sure that VC merge feature support is enabled on both cards. Both cards must have the appropriate card firmware to support the VC merge feature.

For more information on y-redundancy on the UXM-E, see the "Card Redundancy" section in "Functional Overview."


Tip Before enabling VC merge, set the minimum number of channels to 550 using the cnfrsrc command. If this minimum number of channels is not available on the card, an error message is displayed.
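As a sketch of that tip, assuming the minimum refers to the VSI minimum LCNs on the partition and that interface 3.1, partition 1 is in use (all numeric values illustrative), the channel minimum could be raised to 550 with cnfrsrc:

```
cnfrsrc 3.1 256 26000 y 1 e 550 1500 240 255 26000 105000
```

This mirrors the cnfrsrc parameter order used later in this chapter, with the VSI minimum LCN field set to 550.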

To enable VC merge on the IGX, perform the following steps:


Step 1   Configure the card parameters for VC merge using the cnfcdparm slot 2 e command, where slot is the slot number of the card.

Step 2   If you receive the error message shown below, repeat Step 1.

Card rejected cmd. VC Merge NOT enabled!

Step 3   Continue with switch configuration or management.


Tip To display the current status of VC merge on the IGX, enter the dspcdparm slot command, where slot is the slot number of the card.



To disable VC merge on the IGX, perform the following steps:


Step 1   Configure the card parameters for VC merge using the cnfcdparm slot 2 d command, where slot is the slot number of the card.

Step 2   At the following message, enter y to continue disabling VC merge.

Disabling VC Merge with active VSI partns on card may result in dropped conns
Continue?

Step 3   If you receive the error message shown below, repeat Step 1 and Step 2.

Card rejected cmd. VC Merge NOT disabled!

If you disable the last partition on the slot while VC merge is still enabled, VC merge is disabled on the slot, and the card will display the following error message:

Disabling of last partn on slot has caused disabling of VC Merge.

Step 4   Continue with switch configuration or management.



Switch Software Commands Related to VSIs on the IGX

Table 10-14   Switch Software Commands for Setting up a VSI (Virtual Switch Interface)

Mnemonic Description

addctrlr

Attaches a controller to a node.

cnfctrlr

Configures a controller.

cnfqbin

Configures Qbin.

cnfrsrc

Configures resources, for example, for AutoRoute PVCs or an MPLS controller (LSC).

cnfvsiif

Assigns a different class template to an interface.

delctrlr

Deletes a controller, such as an MPLS controller, from an IGX node.

dspchuse

Displays a summary of channel distribution in a given slot.

dspctrlrs

Displays the VSI controllers on an IGX node.

dspqbin

Displays Qbin parameters currently configured for the Qbin.

dspqbint

Displays Qbin template.

dsprsrc

Displays partition resources.

dspsct

Displays SCTs assigned to an interface. The command has three levels of operation:

dspsct
With no arguments lists all the service templates resident in the node.

dspsct tmplt_id
Lists all the Service Classes in the template.

dspsct tmplt_id Service_Class
Lists all the parameters of that service class.

dspvsiif

Displays the service class template assigned to an interface.

dspvsipartinfo

Displays VSI resource status for the trunk and partition.

MPLS Configuration on the IGX

The following sections provide a sample MPLS configuration using the network shown in Figure 10-16.

For information on configuring Cisco IOS software for MPLS, see MPLS Label Switch Controller and Enhancements 12.2(8)T.


Figure 10-16   Simplified Example of Configuring an MPLS Network


Network Description for Figure 10-16

Figure 10-16 provides an example of configuring IGX switches as MPLS ATM label switch routers (ATM-LSRs) for MPLS switching of IP packets through an ATM network. The figure also shows the configuration of Cisco routers used as label edge routers (edge LSRs) at the edges of the network.

Figure 10-16 displays the configuration for each of the network components. The configuration of ATM LSR-3, ATM LSR-4, and ATM LSR-5 is not detailed in this guide; however, it is similar to the sample configurations shown for ATM LSR-1 and ATM LSR-2. Likewise, the configuration for Edge LSR-B is similar to those for Edge LSR-A and Edge LSR-C.

Initial Setup of LVCs

The service template contains two classes of data: parameters necessary to establish a connection (including the service type), and parameters for configuring the associated Qbins.


Note    MPLS CoS is not supported on the URM-LSR.

When a connection setup request is received from the VSI master in the LSC, the VSI slave (in the UXM, for example) uses the service type identifier to index into a SCT database that contains extended parameter settings for connections matching that index. The slave uses these values to complete the connection setup and program the hardware.

Configuring an IGX ATM-LSR for MPLS

The ATM-LSR consists of two hardware components—the IGX switch (also called the label switch slave) and a router configured as a label switch controller (LSC). The label switch controller can be either an external Cisco router, such as the Cisco 7204, or the chassis-installed URM. LSC configuration for either router option is essentially the same.

For information on configuring the Cisco IOS software running on the LSC for MPLS, see MPLS Label Switch Controller and Enhancements 12.2(8)T.


Tip When configuring an ATM-LSR on an IGX with installed URM, use two terminal sessions—one to log into the embedded UXM-E on the URM card to configure the label switch slave portion of the ATM-LSR, and one to log into the embedded router on the URM card to configure the LSC portion of the ATM-LSR.

To set up MPLS on an IGX node, complete the following tasks:

1. Configure the ATM LSR.

    a. IGX switch (label switch slave): Configure the IGX for VSI.

    b. Label switch controller (LSC): Configure the router with extended ATM interfaces on the IGX.

2. Set up label edge routers (LERs).

3. MPLS automatically sets up LVCs across the network.
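On the switch side, task 1a corresponds to the switch software command sequence used in the detailed procedures that follow; a condensed sketch for one node (interface numbers and parameter values illustrative) is:

```
upln 3.1
cnfrsrc 3.1 256 26000 y 1 e 512 1500 240 255 26000 105000
addctrlr 3.1 vsi 1 1 100 200
```

These commands activate the line to the LSC, enable VSI partition 1 on it, and attach the LSC as VSI controller 1.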

Figure 10-17 shows a high-level view of an MPLS network. The packets destined for 204.129.33.127 could be real-time video, and the packets destined for 204.133.44.129 could be data files transmitted when network bandwidth is available.

When MPLS is set up on the nodes shown in Figure 10-17 (ATM-LSR 1 through ATM-LSR 5, Edge LSR_A, Edge LSR_B, and Edge LSR_C), automatic network discovery is enabled. MPLS then automatically sets up LVCs across the network. At each ATM LSR, VCI switching (also called "label swapping") transports the cells across previously determined LVC paths.

At the edge LSRs, labels are added to incoming IP packets, and removed from outgoing packets. Figure 10-17 shows IP packets with host destination 204.129.33.127 transported as labeled ATM cells across LVC 1. The figure also displays IP packets with host destination 204.133.44.129 transported as labeled ATM cells across LVC 2.

IP addresses shown are for illustrative purposes only and are assumed to be isolated from external networks. Check with your network administrator for appropriate IP addresses for your network.


Figure 10-17   High-Level View of Configuration of an MPLS Network


Figure 10-18 shows the MPLS label swapping process. This process might take place during the transportation of the IP packets, in the form of ATM cells across the network on the LVC1 and LVC2 virtual circuits:

1. An unlabeled IP packet with destination 204.133.44.129 arrives at edge label switching router (LSR-A).

2. Edge LSR-A checks its label forwarding information base (LFIB) and matches the destination with prefix 204.133.44.0/24.

3. Edge LSR-A converts the AAL5 frame to cells and sends it out as a sequence of cells on interface 1/VCI 50.

4. ATM-LSR-1 (a Cisco IGX 8410, 8420, or 8430 label switch router), controlled by a routing engine, performs a normal switching operation by checking its LFIB and switching incoming cells on interface 2/VCI 50 to outgoing interface 0/VCI 42.

5. ATM-LSR-2 checks its LFIB and switches incoming cells on interface 2/VCI 42 to outgoing interface 0/VCI 90.

6. Edge LSR-C receives the incoming cells on interface 1/VCI 90, checks its LFIB, reassembles the ATM cells into an AAL5 frame and then an IP packet, and sends the outgoing packet to its LAN destination 204.133.44.129.


Figure 10-18   Label Swapping Detail


Configuration for IGX Switch Portions of the Cisco IGX 8410, 8420, and 8430 ATM-LSRs


Note   IGX nodes must be set up and configured in the ATM network (including links to other nodes) before beginning configuration for MPLS support on the node.

To configure the IGX nodes for operation, set up a virtual interface and associated partition by using the cnfrsrc command.

To link the Cisco router to the IGX, use the addctrlr command to add the router as a VSI controller. This allows the router label switch controller function to control the MPLS operation of a node.

For information on configuring the IGX partition, including distribution of IGX partition resources, see the "VSI Configuration" section.

In this example, assume that a single external controller per node is supported, so that the partition chosen is always 1.

Configuration for IGX 1 Portion of ATM-LSR-1

To configure the Cisco IGX 8410, 8420, and 8430 label switch routers, ATM-LSR-1 and ATM-LSR-2:

Command Description
Step 1 

Check card status:

dspcds 3

Displays the status of the UXM card. UXM cards that you are configuring should be "Standby" or "Active."

Step 2 

Enable UXM interfaces:

upln 3.1
upport 3.2

In this example, line 3.1 is the link to the LSC, and port 3.2 is set up as a cross-connect for use by LVCs.

Note A UXM interface is a trunk if it connects to another switch or MGX 8220 feeder. The VSI connection to an LSC is either a trunk or line. Other interfaces are ports, typically to service interfaces.

Step 3 

Configure VSI partitions on the UXM line interfaces:

 
cnfrsrc 3.1 256 26000 y 1 e 512 1500 240 255 26000 105000
 

or if entered individually:

cnfrsrc 3.1
256 {PVC LCNs, accept default value}
26000

Note You do not need to specify bandwidth when establishing trunks.

y {to edit VSI parameters}
1 {partition}
e {enable partition}
512 {VSI min LCNs}
1500 {VSI max LCNs}
240 {VSI starting VPI}
255 {VSI ending VPI}
26000 {VSI min bandwidth}
105000 {VSI max bandwidth}
 

PVC LCNs: [256], the default value. Reserves space on this link for 256 AutoRoute PVCs (LCNs = logical connection numbers).

 

VSI min LCNs: 512
VSI max LCNs: 1500

Guarantees that MPLS can set up 512 LVCs on this link, but is allowed to use up to 1500, subject to availability of LCNs.

 

VSI starting VPI: 240
VSI ending VPI: 255

Reserves the VPIs in the range 240-255 for MPLS. Only one VPI is strictly required, but a few more can be reserved for future use. Avoid using VPIs 0 and 1 for MPLS on the Cisco IGX 8410, 8420, and 8430.

Note VPIs are locally significant. In this example, 240 is shown as the starting VPI for each port. A different value could be used for each of the three ports shown: 3.1, 3.2, and 4.1. However, both ends of a trunk, for example port 3.2 on ATM LSR-1 and port 3.2 on ATM LSR-2, must be assigned the same VPI.

 

VSI min bandwidth: 26000
VSI maximum bandwidth: 105000

Guarantees that MPLS can use 26000 cells per second (about 10 Mbps) on this link, but allows it to use up to 105000 cells per second (about 40 Mbps) if bandwidth is available. More can be allocated if required.

 

PVC maximum bandwidth: 26000

Guarantees that PVCs can always use up to 26000 cells per second (about 10 Mbps) on this link.

Step 4 

Repeat for UXM interfaces 3.2 and 4.1

 
cnfrsrc 3.2 256 26000 y 1 e 512 1500 240 255 26000 105000
 
cnfrsrc 4.1 256 26000 y 1 e 512 1500 240 255 26000 105000
 

See description for Step 3.

Step 5 

Enable MPLS queues on UXM:

dspqbin 3.1 10

and verify that it matches the following:

 
Qbin Database 3.1 on UXM qbin 10
Qbin State: Enable
Qbin discard threshold: 65536
EPD threshold: 95%
High CLP threshold: 100%
EFCI threshold: 40%
 

If configuration is not correct, enter

 
cnfqbin 3.1 10 e n 65536 95 100 40
 

Repeat as necessary for UXM interfaces 3.2 and 4.1:

 
cnfqbin 3.2 10 e n 65536 95 100 40
cnfqbin 4.1 10 e n 65536 95 100 40

MPLS CoS uses Qbins 10-14.

Step 6 

Enable the VSI control interface:

addctrlr 3.1 vsi 1 1 100 200

The first "1" after "vsi" is the VSI controller ID, which must be set the same on both the IGX and the LSC. The default controller ID on the LSC is "1."

The second "1" after "vsi" is the partition ID; it indicates that this is a controller for partition 1.

Configuration for IGX 2 Portion of ATM-LSR-2 (URM-LSR)

Proceed with configuration as follows:

Command Description
Step 1 

Check card status:

 
dspcds 6

Displays the status of the URM card. URM cards that you are configuring should be "Standby" or "Active."

Step 2 

Enable UXM interfaces:

 
addport 6.1
uptrk 3.2
addport 4.1

In this example, port 6.1 is the internal ATM interface between the embedded UXM-E and the embedded router on the URM-LSC. Trunk 3.2 is set up as a cross-connect for use by LVCs. Port 4.1 is the internal ATM interface between the embedded UXM-E and the embedded router on the URM-LSR.

Step 3 

Configure VSI partitions on the UXM interfaces:

 
cnfrsrc 6.1 256 26000 y 1 e 512 1500 240 255 26000 105000
 

or if entered individually:

cnfrsrc 6.1
256 {PVC LCNs, accept default value}
26000
y {to edit VSI parameters}
1 {partition}
e {enable partition}
512 {VSI min LCNs}
1500 {VSI max LCNs}
240 {VSI starting VPI}
255 {VSI ending VPI}
26000 {VSI min bandwidth}
105000 {VSI max bandwidth}
 

Step 4 

Repeat for UXM interfaces 3.2 and 4.1.

 
cnfrsrc 3.2 256 26000 y 1 e 512 1500 240 255 26000 105000
 
cnfrsrc 4.1 256 26000 y 1 e 512 1500 240 255 26000 105000

Step 5 

Enable MPLS queues on UXM:

 
dspqbin 6.1 10
 

and verify that it matches the following:

 
Qbin Database 6.1 on UXM qbin 10
Qbin State: Enable
Qbin discard threshold: 65536
EPD threshold: 95%
High CLP threshold: 100%
EFCI threshold: 40%
 

If configuration is not correct, enter

 
cnfqbin 6.1 10 e n 65536 95 100 40

MPLS CoS uses Qbins 10-14.

Step 6 

Repeat as necessary for UXM interfaces 3.2 and 4.1:

 
cnfqbin 3.2 10 e n 65536 95 100 40
cnfqbin 4.1 10 e n 65536 95 100 40

See description for Step 5.

Step 7 

Enable the VSI controller interface:

addctrlr 6.1 vsi 1 1 100 200

The first "1" after "vsi" is the VSI controller ID, which must be set the same on both the IGX and the LSC. The default controller ID on the LSC is "1."

The second "1" after "vsi" is the partition ID that indicates this is a controller for partition 1.

Configuration for LSC 1 and LSC 2 Portions of the Cisco IGX 8410, 8420, and 8430

Before configuring the routers for the label switch (MPLS) controlling function, you must perform the initial router configuration. As part of this configuration, you must configure and enable the ATM adapter interface.

After configuring the ATM adapter interface, the extended ATM interface can be set up for label switching. The IGX ports can be configured by the router as extended ATM ports of the physical router ATM interface, according to the following procedures for LSC1 and LSC2.

Configuration for LSC1 Portion of ATM-LSR-1

Proceed with configuration as follows:

Command Description

 

Before you begin

 

Step 1 

Router LSC1(config)# ip routing

Enables IP routing protocol.

Step 2 

Router LSC1(config)# ip cef

Enables Cisco Express Forwarding (CEF).

Step 3 

Router LSC1(config)# interface ATM3/0

Enables physical interface link to IGX.

Step 4 

Router LSC1(config-if)# no ip address

 

Step 5 

Router LSC1(config-if)# label-control-protocol vsi [controller-id]

Enables router ATM port ATM3/0 as an MPLS controller port. The controller ID defaults to 1; optional values up to 32 are supported on the IGX.

 

Setting up the interslave control link

 

Step 6 

Router LSC1(config-if)# interface XmplsATM33

Interslave link on port 3.3 of the IGX (port 3 on the UXM in slot 3). This is an extended port of the router ATM3/0 interface.

Step 7 

Router LSC1(config-if)# extended-port ATM3/0 vsi 0x00010300

Binds extended port XmplsATM33 to IGX slave port 3.3.

Step 8 

Router LSC1(config-if)# ip address 142.4.133.13 255.255.0.0

Assigns an IP address to XmplsATM33.

Step 9 

Router LSC1(config-if)# mpls ip

Enables MPLS on the xtag interface XmplsATM33.

 

Setting up interslave port

 

Step 10 

Router LSC1(config-if)# interface XmplsATM42

Interslave link on port 4.2 of the IGX (port 2 on the UXM in slot 4). This is an extended port of the router ATM3/0 interface.

Step 11 

Router LSC1(config-if)# extended-port ATM3/0 igx 4.2

Binds extended port XmplsATM42 to IGX slave port 4.2.

Step 12 

Router LSC1(config-if)# ip address 142.6.133.22 255.255.0.0

Assigns an IP address to XmplsATM42.

Step 13 

Router LSC1(config-if)# mpls ip

Enables MPLS on the xtag interface XmplsATM42.

Step 14 

Router LSC1 (config-if)# exit

 

 

Configuring routing protocol

Configure Open Shortest Path First (OSPF) Routing Protocol or Enhanced Interior Gateway Routing Protocol (EIGRP).

Step 15 

Router LSC1(config)# router ospf 5

Sets up OSPF routing and assigns a process ID of 5, which is locally significant. The ID may be chosen from a wide range of available process IDs, up to approximately 32,000.

Step 16 

Router LSC1 (config-router)# network 142.4.0.0 0.0.255.255 area 10

 

Step 17 

Router LSC1 (config-router)# network 142.6.0.0 0.0.255.255 area 10

 

Configuration for LSC2 Portion of ATM-LSR-2 (URM-LSR)

Proceed with configuration as follows:

Command Description

 

Before you begin

 

Step 1 

Router LSC2(config)# ip routing

Enables IP routing protocol.

Step 2 

Router LSC2(config)# ip cef

Enables Cisco Express Forwarding (CEF).

Step 3 

Router LSC2(config)# interface ATM0/0

Enables the internal ATM interface between the embedded UXM-E and the embedded router on the URM card.

Step 4 

Router LSC2(config-if)# no ip address

 

Step 5 

Router LSC2(config-if)# label-control-protocol vsi [controller ID]

Enables router ATM port ATM0/0 as an MPLS controller port. The controller ID defaults to 1; optional values up to 32 are supported on the IGX.

 

Setting up interslave control link

Step 6 

Router LSC2(config-if)# interface XmplsATM33

Interslave link on port 3.2 of the IGX (port 2 on the URM in slot 3). This is an extended port of the router ATM0/0 interface.

Step 7 

Router LSC2(config-if)# extended-port ATM0/0 igx 3.2

Binds extended port XmplsATM33 to IGX slave port 3.2.

Step 8 

Router LSC2(config-if)# ip address 142.4.133.15 255.255.0.0

Assigns an IP address to XmplsATM33.

Step 9 

Router LSC2(config-if)# mpls ip

Enables MPLS on the xtag interface XmplsATM33.

 

Setting up interslave port

 

Step 10 

Router LSC2(config-if)# interface XmplsATM42

Interslave link on port 4.1 of the IGX (port 1 on the UXM in slot 4). This is an extended port of the router ATM0/0 interface.

Step 11 

Router LSC2(config-if)# extended-port ATM0/0 igx 4.1

Binds the extended port XmplsATM42 to IGX slave port 4.1.

Step 12 

Router LSC2(config-if)# ip address 142.7.133.22 255.255.0.0

Assigns an IP address to XmplsATM42.

Step 13 

Router LSC2(config-if)# mpls ip

Enables MPLS for xtag interface XmplsATM42.

Step 14 

Router LSC2 (config-if)# exit

Exits the interface configuration mode.

 

Configuring routing protocol

Configures OSPF or EIGRP.

Step 15 

Router LSC2(config)# router ospf 5

Sets up OSPF routing and assigns a process ID of 5, which is locally significant. The ID may be chosen from a wide range of available process IDs, up to approximately 32,000.

Step 16 

Router LSC2 (config-router)# network 142.4.0.0 0.0.255.255 area 10

 

Step 17 

Router LSC2 (config-router)# network 142.7.0.0 0.0.255.255 area 10

 

Configuration for Edge Label Switch Routers, LSR-A and LSR-C

Before configuring the routers for the MPLS controlling function, you must perform the initial router configuration. As part of this configuration, you must enable and configure the ATM adapter interface.

Then you can set up the extended ATM interface for MPLS. The IGX ports can be configured by the router as extended ATM ports of the physical router ATM interface, according to the following procedures for LSR-A and LSR-C.

To configure the routers performing as label edge routers, use the procedures in the following tables.

Configuration of a Cisco Router as an Edge Router, Edge LSR-A

Proceed with configuration as follows:

Command Description
Step 1 

Router LSR-A (config)# ip routing

Enables IP routing protocol.

Step 2 

Router LSR-A(config)# ip cef distributed

Enables distributed Cisco Express Forwarding (CEF).

Step 3 

Router LSR-A(config)# interface ATM4/0/0

 

Step 4 

Router LSR-A(config-if)# no ip address

 

Step 5 

Router LSR-A(config-if)# interface ATM4/0/0.9 mpls

The subinterface number can be any number within the allowed range, for example ATM4/0/0.1 or ATM4/0/0.2.

Step 6 

Router LSR-A(config-if)# ip address 142.6.133.142 255.255.0.0

 

Step 7 

Router LSR-A(config-if)# mpls ip

 

 

Configuring routing protocol

Configure OSPF or EIGRP.

Step 8 

Router LSR-A(config)# router ospf 5

Sets up OSPF routing and assigns a process ID of 5, which is locally significant. The ID may be chosen from a wide range of available process IDs, up to approximately 32,000.

Step 9 

Router LSR-A (config-router)# network 142.6.0.0 0.0.255.255 area 10

 

Configuration of a Cisco Router as an Edge Router, Edge LSR-C

Command Description
Step 1 

Router LSR-C (config)# ip routing

Enables IP routing protocol.

Step 2 

Router LSR-C(config)# ip cef

Enables Cisco Express Forwarding (CEF).

Step 3 

Router LSR-C(config)# interface ATM0/0

 

Step 4 

Router LSR-C(config-if)# no ip address

 

Step 5 

Router LSR-C(config-if)# interface ATM0/0.1 mpls

 

Step 6 

Router LSR-C(config-if)# ip address 142.7.133.23 255.255.0.0

 

Step 7 

Router LSR-C(config-if)# mpls ip

 

 

Configuring routing protocol

Configures OSPF or EIGRP.

Step 8 

Router LSR-C(config)# router ospf 5

Sets up OSPF routing and assigns a process ID of 5, which is locally significant. The ID may be chosen from a wide range of available process IDs, up to approximately 32,000.

Step 9 

Router LSR-C (config-router)# network 142.7.0.0 0.0.255.255 area 10

 

Routing Protocol Configures LVCs via MPLS

After you have completed the initial configuration procedures for the IGX and edge routers, the routing protocol (such as OSPF) sets up the LVCs via MPLS as shown in Figure 10-19.


Figure 10-19   Example of LVCs in an MPLS Switched Network


Testing the MPLS Network Configuration

Preliminary testing of the MPLS network consists of verifying the configuration on the switches and routers, and then confirming that LVCs are set up across the network.

The Cisco IOS commands shown in the following procedure are useful for monitoring and troubleshooting an MPLS network.

Checking the IGX Extended ATM Interfaces

Use the following procedure to test the label switching configuration on the IGX switch (ATM LSR-1, for example):


Step 1   Check whether the controller recognizes the interfaces correctly; on LSC1, for example, enter the following command:

Command Description
Router LSC1# show controllers vsi descriptor

Shows VSI information for extended ATM interfaces.

The sample output for ATM-LSC-1 (Cisco IGX 8410, 8420, and 8430 shelves) is:

Phys desc: 3.1
Log intf: 0x00040100 (0.4.1.0)
Interface: slave control port
IF status: N/A IFC state: ACTIVE
Min VPI: 0 Maximum cell rate: 10000
Max VPI: 10 Available channels: xxx
Min VCI: 0 Available cell rate (forward): xxxxxx
Max VCI: 65535 Available cell rate (backward): xxxxxx
Phys desc: 3.3
Log intf: 0x00040200 (0.4.2.0)
Interface: ExtTagATM13
IF status: up IFC state: ACTIVE
Min VPI: 0 Maximum cell rate: 10000
Max VPI: 10 Available channels: xxx
Min VCI: 0 Available cell rate (forward): xxxxxx
Max VCI: 65535 Available cell rate (backward): xxxxxx
Phys desc: 4.2
Log intf: 0x00040300 (0.4.3.0)
Interface: ExtTagATM22
IF status: up IFC state: ACTIVE
Min VPI: 0 Maximum cell rate: 10000
Max VPI: 10 Available channels: xxx
Min VCI: 0 Available cell rate (forward): xxxxxx
Max VCI: 65535 Available cell rate (backward): xxxxxx

Tip Check online documentation for the most current information. For information on accessing related documents, see the "Accessing User Documentation" section.

Step 2   If no interfaces are present, first check that card 3 is active and available with the switch software command dspcds. If the card is not active and available, reset it with the switch software command resetcd. If necessary, remove and reinsert the card to reset it.

Step 3   Check the line status using the switch software command, dsplns (see Example 10-2).


Example 10-2   Sample dsplns Output
sanjose TN Cisco IGX 8430 9.3.10 July 12 2000 09:38 PST
Line Type Current Line Alarm Status
6.6 T3/636 Clear - OK
7.8 T1/24 Clear - OK
Last Command: dsplns
Next Command:

Step 4   Check the trunk status using the switch software command, dsptrks (see Example 10-3).


Note    The dsptrks screen for ATM-LSR-1 should show the 3.1, 3.3 and 4.2 MPLS interfaces, with the "Other End" of 3.1 reading "VSI (VSI)".


Example 10-3   Sample dsptrks Output
n4 TN SuperUser IGX 15 9.3 March 4 2000 16:45 PST
TRK Type Current Line Alarm Status Other End
4.1 OC3 Clear - OK j4a/2.1
5.1 E3 Clear - OK j6a/5.2
5.2 E3 Clear - OK j3b/3
5.3 E3 Clear - OK j5c(IPX/AF)
6.1 T3 Clear - OK j4a/4.1
6.2 T3 Clear - OK j3b/4
3.1 OC3 Clear - OK VSI(VSI)
3.3 OC3 Clear - OK
4.2 OC3 Clear - OK
Last Command: dsptrks
Next Command:

Step 5   To see the controllers attached to a node, use the switch software command, dspctrlrs (see Example 10-4). The resulting screens should show trunks configured as links to the LSC as type VSI.


Example 10-4   Sample dspctrlrs Output
sanjose TN Cisco IGX 8430 9.3.10 July 31 2000 20:26 PST
VSI Controller Information
CtrlrId PartId ControlVC Intfc Type CtrlrIP
VPI VCIRange
1 1 0 40-70 6.6 MPLS 192.168.254.1
Last Command: dspctrlrs
Next Command:

Step 6   To view partition configurations on an interface, use the switch software command, dsprsr (see Example 10-5).


Example 10-5   Sample dsprsr Output
sanjose TN Cisco IGX 8430 9.3.10 July 31 2000 20:29 PST
Line : 6.6
Maximum PVC LCNS: 256 Maximum PVC Bandwidth: 48000
(Reserved Port Bandwidth: 150)
State MinLCN MaxLCN StartVPI EndVPI MinBW MaxBW
Partition 1: E 0 100 2 10 0 48000
Partition 2: D
Partition 3: D

Step 7   To see Qbin configuration information, use the switch software command, dspqbin (see Example 10-6).


Example 10-6   Sample dspqbin Output
n4 TN SuperUser IGX 15 9.3 March 4 2000 16:48 PST
Qbin Database 3.1 on UXM qbin 10
Qbin State: Enabled
Minimum Bandwidth: 0
Qbin Discard threshold: 65536
Low CLP threshold: 95%
High CLP threshold: 100%
EFCI threshold: 40%
Last Command: dspqbin 3.1 10
Next Command:

Step 8   If an interface is present but not enabled, perform the previous debugging steps for the interface.

Step 9   Use the Cisco IOS ping command to send a ping over the label switch connections. If the ping does not work, but all the label switching and routing configuration appear correct, check that the LSC has found the VSI interfaces correctly by entering the following Cisco IOS command on the LSC:

Command Description

Router LSC1# show mpls interfaces

Shows the label interfaces.

If the interfaces are not shown, recheck the configuration of port 3.1 on the IGX switch as described in the previous steps.

Step 10   If the VSI interfaces are shown but are down, check whether the LSRs connected to the IGX switch show that the lines are up. If not, check such items as cabling and connections.

Step 11   If the IGX switch shows that the interfaces are up but the LSC does not, enter the following command on the LSC:

Router LSC1# reload

If the show mpls interfaces command shows that the interfaces are up but the ping does not work, enter the following command on the LSC (see Example 10-7):

Router LSC1# show tag tdp disc

Example 10-7   Sample show tag tdp disc Command Output
Local TDP Identifier:
30.30.30.30:0
TDP Discovery Sources:
Interfaces:
ExtTagATM1.3: xmit/recv
ExtTagATM2.2: xmit/recv

Step 12   If the interfaces on the display show "xmit" and not "xmit/recv," the LSC is sending LDP messages but not getting responses. Enter the following command on the neighboring LSRs:

Router LSC1# sh tag tdp disc

If the resulting displays also show "xmit" and not "xmit/recv," one of two things is probable:

    a. The LSC is not able to set up VSI connections.

    b. The LSC is able to set up VSI connections, but cells are not transferred because they cannot get into a queue.

Step 13   Check the VSI configuration on the switch again, for interfaces 3.1, 3.3, and 4.2, paying attention to:

    a. Maximum bandwidths at least a few thousand cells per second

    b. Qbins enabled

    c. All Qbin thresholds nonzero


    Note   VSI partitioning and resources must be correctly set up on the interface connected to the LSC, interface 3.1 (in this example), and interfaces connected to other label switching devices.



MPLS VPN Sample Configuration

Before configuring VPN operation, your network must run MPLS and the supporting Cisco IOS services described in the previous sections.

Configuring the Cisco IGX 8410, 8420, and 8430 ATM LSR for MPLS VPN Operation

For MPLS VPN operation, you must first configure the Cisco IGX 8410, 8420, and 8430 ATM LSR, including its associated Cisco router LSC for MPLS or for MPLS QoS.

Configure network VPN operation on the edge LSRs that act as PE routers.

The Cisco IGX 8410, 8420, and 8430, including its LSC, requires no configuration beyond enabling MPLS and QoS.

Configuring VRFs for MPLS VPN Operation

To configure a VRF and associated interfaces, perform these steps on the PE router:

Command Purpose
Step 1 

Router(config)# ip vrf vrf-name

Enters VRF configuration mode and specifies the VRF name to which subsequent commands apply.

Step 2 

Router(config-vrf)# rd route-distinguisher

Defines the instance by assigning a name and an 8-byte route distinguisher.

Step 3 

Router(config-if)# ip vrf forwarding vrf-name

Associates interfaces with the VRF.

Step 4 

Router(config-router)# address-family ipv4 vrf vrf-name

Configures BGP parameters for the VRF CE session to use BGP between the PE and VRF CE.

The default setting is off for auto-summary and synchronization in the VRF address-family submode.

To ensure that addresses learned through BGP on a PE router from a CE router are properly treated as VPN IPv4 addresses, you must enter the command no bgp default ipv4-activate before configuring CE neighbors.

Step 5 

Router(config-router-af)# exit-address-family

Exits from VRF configuration mode.

Step 6 

Router(config)# ip route [vrf vrf-name]

Configures static routes for the VRF.
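Putting the steps together, a minimal VRF definition might look like the following sketch, in which the VRF name vpn1, the route distinguisher 100:1, the interface, and the addresses are all hypothetical:

```
Router(config)# ip vrf vpn1
Router(config-vrf)# rd 100:1
Router(config-vrf)# exit
Router(config)# interface ATM0/0.1
Router(config-if)# ip vrf forwarding vpn1
Router(config-if)# exit
Router(config)# ip route vrf vpn1 192.168.10.0 255.255.255.0 142.7.133.23
```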

Configuring BGPs for MPLS VPN Operation

To configure a BGP between provider routes for distribution of VPN routing information, perform these steps on the PE router:

Command Purpose
Step 1 

Router(config-router)# address-family {ipv4 | vpnv4} [unicast | multicast]

Configures BGP address families.

Step 2 

Router(config-router-af)# neighbor {address | peer-group} remote-as as-number

Defines a BGP session.

Step 3 

Router(config-router)# no bgp default ipv4-unicast

Prevents the automatic advertisement of address family IPv4 for all neighbors; each neighbor must then be activated explicitly.

Step 4 

Router(config-router)# neighbor address remote-as as-number

Configures an IBGP session to exchange VPNv4 NLRIs.

Step 5 

Router(config-router)# neighbor address update-source interface

Specifies the source interface (typically a loopback) for the IBGP session.

Step 6 

Router(config-router-af)# neighbor address activate

Activates the advertisement of VPNv4 NLRIs.
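The BGP steps above correspond to a fragment such as the following, which mirrors the IBGP portion of Example 10-10 (the AS number and neighbor addresses are illustrative):

```
router bgp 100
 no bgp default ipv4-unicast
 neighbor 15.15.15.15 remote-as 100
 neighbor 15.15.15.15 update-source Loopback0
 !
 address-family vpnv4
  neighbor 15.15.15.15 activate
  neighbor 15.15.15.15 send-community extended
 exit-address-family
```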

Configuring Import and Export Routes for MPLS VPN Operation

To configure import and export routes to control the distribution of routing information, perform these steps on the PE router:

Command Purpose
Step 1 

Router(config)# ip vrf vrf-name

Enters VRF configuration mode and specifies a VRF.

Step 2 

Router(config-vrf)# route-target import community-distinguisher

Imports routing information from the specified extended community.

Step 3 

Router(config-vrf)# route-target export community-distinguisher

Exports routing information to the specified extended community.

Step 4 

Router(config-vrf)# import map route-map

Associates the specified route map with the VRF.
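For instance, the VRF test_2 in Example 10-10 combines these commands so that routes are exchanged with the extended community of VRF test_1 as well as its own:

```
ip vrf test_2
 rd 100:2
 route-target export 100:2
 route-target export 100:1
 route-target import 100:2
 route-target import 100:1
```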

Verifying MPLS VPN Operation

To verify VPN operation, perform these steps on the PE router:

Command Purpose
Step 1 

Router# show ip vrf

Displays the set of defined VRFs and interfaces.

Step 2 

Router# show ip vrf detail

Displays VRF information including import and export community lists.

Step 3 

Router# show ip route vrf vrf-name

Displays the IP routing table for a VRF.

Step 4 

Router# show ip protocols vrf vrf-name

Displays the routing protocol information for a VRF.

Step 5 

Router# show ip cef vrf vrf-name

Displays the CEF forwarding table associated with a VRF.

Step 6 

Router# show ip interface type number

Displays the VRF table associated with an interface.

Step 7 

Router# show ip bgp vpnv4 all [tags]

Displays VPNv4 NLRI information.

Step 8 

Router# show mpls forwarding-table vrf vrf-name [prefix mask/length] [detail]

Displays label forwarding entries that correspond to VRF routes advertised by this router.
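A typical verification pass strings these commands together; the VRF name test_1 here matches the sample configuration in Example 10-10:

```
Router# show ip vrf
Router# show ip vrf detail
Router# show ip route vrf test_1
Router# show ip protocols vrf test_1
Router# show ip cef vrf test_1
Router# show ip bgp vpnv4 all
```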

Sample MPLS VPN Configuration File

See Example 10-8 for a sample MPLS-VPN configuration file from a PE router.


Example 10-8   Sample MPLS-VPN Configuration File from a PE Router Using BGP
Router1# show run
Building configuration...
Current configuration:
!
version 12.1
no service pad
service timestamps debug uptime
service timestamps log uptime
no service password-encryption
!
hostname Router1
!
boot system tftp svincent/uxmvsi/c7200-p-mz.121-3.T 255.255.255.255
boot system slot0:c7200-p-mz.121-3.T
enable password lab
!
ip subnet-zero
ip cef
!
interface Loopback0
ip address 10.10.10.10 255.255.255.255
no ip route-cache
no ip mroute-cache
!
interface FastEthernet0/0
no ip address
no ip mroute-cache
no keepalive
shutdown
full-duplex
!
interface FastEthernet1/0
ip address 30.0.0.2 255.0.0.0
no ip mroute-cache
no keepalive
full-duplex
!
interface ATM3/0
no ip address
no ip mroute-cache
shutdown
atm clock INTERNAL
no atm ilmi-keepalive
!
router bgp 101
no synchronization
bgp log-neighbor-changes
network 10.0.0.0
network 30.0.0.0
neighbor 30.0.0.1 remote-as 100
!
no ip classless
no ip http server
!
no cdp advertise-v2
!
line con 0
exec-timeout 0 0
transport input none
line aux 0
line vty 0
exec-timeout 0 0
password lab
login
line vty 1 4
password lab
login
!
end

Example 10-9   Sample MPLS-VPN Configuration from a PE Router Using RIP
Router2# show run
Building configuration...
Current configuration:
!
version 12.1
no service pad
service timestamps debug uptime
no service password-encryption
!
hostname Router2
!
boot system slot1:c7200-tsjpgen-mz.121-1.0.2
boot system tftp /tftpboot/syam/c7200-tsjpgen-mz.121-4.3.T 223.255.254.254
no logging console
enable password lab
!
ip subnet-zero
no ip finger
no ip domain-lookup
ip host PAGENT-SECURITY-V3 87.84.30.96 47.58.0.0
!
ip cef
cns event-service server
!
interface Loopback0
ip address 11.11.11.11 255.255.255.255
!
interface FastEthernet0/0
no ip address
no ip mroute-cache
no keepalive
shutdown
full-duplex
!
interface FastEthernet2/0
ip address 29.0.0.2 255.0.0.0
no ip mroute-cache
no keepalive
full-duplex
!
router rip
version 2
network 11.0.0.0
network 29.0.0.0
!
no ip classless
no ip http server
!
no cdp advertise-v2
!
!
line con 0
exec-timeout 0 0
transport input none
line aux 0
line vty 0 4
password lab
login
!
no scheduler max-task-time
end

Example 10-10   Sample MPLS-VPN Configuration for a URM-LER
URM-LER# show run
Building configuration...
Current configuration : 3830 bytes
!
version 12.2
no service single-slot-reload-enable
no service pad
service timestamps debug uptime
service timestamps log uptime
no service password-encryption
!
hostname URM-LER
!
boot system flash:urm-jk2s-mz
logging rate-limit console 10 except errors
!
ip subnet-zero
!
no ip finger
no ip domain-lookup
!
ip vrf test_1
rd 100:1
route-target export 100:1
route-target import 100:1
!
ip vrf test_2
rd 100:2
route-target export 100:2
route-target export 100:1
route-target import 100:2
route-target import 100:1
ip cef
no ip dhcp-client network-discovery
!
fax interface-type modem
mta receive maximum-recipients 0
!
interface Loopback0
ip address 12.12.12.12 255.255.255.255
no ip mroute-cache
!
interface ATM0/0
no ip address
no ip mroute-cache
no atm ilmi-keepalive
!
interface ATM0/0.1 point-to-point
no ip mroute-cache
!
interface ATM0/0.2 point-to-point
no ip mroute-cache
!
interface ATM0/0.3 tag-switching
ip unnumbered Loopback0
no ip mroute-cache
tag-switching atm vpi 2-5
tag-switching ip
!
interface FastEthernet1/0
no ip address
no ip mroute-cache
no keepalive
speed auto
full-duplex
!
interface FastEthernet1/0.1
encapsulation isl 101
ip vrf forwarding test_1
ip address 30.0.0.1 255.0.0.0
no ip redirects
no ip mroute-cache
!
interface FastEthernet1/0.2
encapsulation isl 102
ip vrf forwarding test_2
ip address 29.0.0.1 255.0.0.0
no ip redirects
no ip mroute-cache
!
interface FastEthernet1/1
ip address 1.7.64.30 255.0.0.0
no ip mroute-cache
no keepalive
shutdown
speed 100
full-duplex
!
router ospf 100
log-adjacency-changes
network 12.0.0.0 0.255.255.255 area 100
!
router rip
version 2
!
address-family ipv4 vrf test_2
version 2
redistribute bgp 100 metric 0
network 29.0.0.0
no auto-summary
exit-address-family
!
router bgp 100
no synchronization
no bgp default ipv4-unicast
bgp log-neighbor-changes
neighbor 15.15.15.15 remote-as 100
neighbor 15.15.15.15 update-source Loopback0
neighbor 17.17.17.17 remote-as 100
neighbor 17.17.17.17 update-source Loopback0
!
address-family ipv4 vrf test_2
redistribute rip
no auto-summary
no synchronization
exit-address-family
!
address-family ipv4 vrf test_1
redistribute rip
neighbor 30.0.0.2 remote-as 101
neighbor 30.0.0.2 activate
no auto-summary
no synchronization
exit-address-family
!
address-family vpnv4
neighbor 15.15.15.15 activate
neighbor 15.15.15.15 send-community extended
neighbor 17.17.17.17 activate
neighbor 17.17.17.17 send-community extended
exit-address-family
!
ip default-gateway 1.7.0.1
ip kerberos source-interface any
ip classless
ip route 223.255.254.254 255.255.255.255 1.7.0.1
no ip http server
!
no cdp advertise-v2
!
call rsvp-sync
!
mgcp modem passthrough voip mode ca
no mgcp timer receive-rtcp
!
mgcp profile default
!
dial-peer cor custom
!
line con 0
exec-timeout 0 0
transport input none
line aux 0
line vty 0 4
password lab
login
!
end

Example 10-11   Sample MPLS-VPN Configuration from an LSC
SampleLSC# show run
Building configuration...
Current configuration:
!
version 12.1
no service pad
service timestamps debug uptime
service timestamps log uptime
no service password-encryption
!
hostname SampleLSC
!
boot system slot0:c7200-p-mz.121-3.T
enable password lab
!
ip subnet-zero
ip cef
no ip finger
no ip domain-lookup
!
interface Loopback0
ip address 13.13.13.13 255.255.255.255
!
interface FastEthernet0/0
no ip address
no ip mroute-cache
shutdown
half-duplex
!
interface ATM1/0
no ip address
no ip route-cache cef
no atm ilmi-keepalive
!
interface ATM2/0
no ip address
no ip mroute-cache
tag-control-protocol vsi base-vc 0 180 slaves 16
atm clock INTERNAL
no atm ilmi-keepalive
tag-switching ip
!
interface XTagATM103
ip unnumbered Loopback0
shutdown
extended-port ATM2/0 vsi 0x000A0300
tag-switching atm vpi 2-15
!
interface XTagATM104
ip unnumbered Loopback0
extended-port ATM2/0 vsi 0x000A0400
tag-switching atm vpi 2-15
tag-switching ip
!
interface XTagATM151
ip unnumbered Loopback0
extended-port ATM2/0 vsi 0x000F0100
tag-switching atm vpi 2-15
tag-switching ip
!
router ospf 100
log-adjacency-changes
network 13.0.0.0 0.255.255.255 area 100
!
no ip classless
no ip http server
!
line con 0
exec-timeout 0 0
transport input none
line aux 0
line vty 0 4
password lab
login
!
end

Managing IP Services

Managing Slave Resources

The maximum number of slaves in a 16-slot switch is 14 and in a 32-slot switch is 30. Therefore, a maximum of 14 or 30 LCNs are necessary to connect a slave to all other slaves in the node. This set of LCNs is allocated from the AutoRoute partition.

If a controller is attached to an interface, master-slave connections are set up between the controller port and each of the slaves in the node.

These LCNs are allocated from the AutoRoute Management pool, which AutoRoute Management also uses to allocate LCNs for connections.

VSI controllers require a bandwidth of at least 150 cps to be reserved on the port for signaling. This bandwidth is allocated from the free bandwidth available on the port (free bandwidth = port speed - PVC maximum bandwidth - VSI bandwidth).
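As an illustration of the free-bandwidth calculation, assume an OC-3 port (approximately 353,208 cps) with 200,000 cps committed as PVC maximum bandwidth; both figures are hypothetical:

```
free bandwidth = port speed - PVC maximum bandwidth - VSI signaling bandwidth
               = 353,208 - 200,000 - 150
               = 153,058 cps
```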

Setting Up VSI Redundancy

With hot slave standby, the standby slave card is preprogrammed identically to the active card, so that when the active card fails, the switchover completes within 250 ms. Without the VSI portion, the UXM card already provided a hot standby mechanism by duplicating internal IGX protocol messages from the NPM to the standby UXM card.

Because the master VSI controller does not recognize the standby slave card, the active slave card forwards VSI messages that it received from the master VSI controller to the standby slave VSI card.

In summary, these are the hot standby operations between active and standby card:

1. Internal IGX protocol messages are duplicated to a hot-standby slave VSI card by the NPM.

2. VSI messages (from master VSI controller or other slave VSI card) are forwarded to the hot-standby slave VSI card by the active slave VSI card. Operation 2 is normal data transferring, which occurs after both cards are synchronized.

3. When the hot-standby slave VSI card starts up, it retrieves and processes all VSI messages from the active slave VSI card. Operation 3 is initial data transferring, which occurs when the standby card first starts up.

The data transfer from the active card to the standby card should not affect the performance of the active card. Therefore, the standby card takes on most of the work, keeping the operations on the active card simple: the standby card drives the data transfer and performs the synchronization, while the active card only forwards VSI messages and responds to standby card requests.

Qbin Statistics

Qbin statistics allow network engineers to engineer and overbook the network on a per CoS (or per Qbin) basis. Each connection has a specific CoS and hence, a corresponding Qbin associated with it.

The IGX switch software collects statistics for UXM AutoRoute Qbins 1 through 9 on trunks and Autoroute Qbins 2, 3, 7, 8, and 9 on ports. Statistics are also collected for VSI Qbins 10 through 15 on UXM trunks and ports.

Several statistics types are collected for each Qbin. Because all Qbins provide the same statistical data, the Qbin number together with its statistic forms a unique statistic type. These unique statistic types are displayed in Cisco WAN Manager and can also be viewed through the CLI.

Trunk and port counter statistics (cell discard statistics only) can also be collected through SNMP.

Qbin summary and counter statistics are collected automatically; TFTP and USER interval statistics can be enabled. The cell discard statistics on UXM trunk Qbins 1 through 9 are AUTO statistics; the cell discard statistics on Qbins 10 through 15 and on AutoRoute port Qbins are not.

Interval statistics (per Qbin) are collected through Cisco WAN Manager's Statistics Collection Manager (SCM) and through CLI.

Summary of Qbin Statistics Commands

Table 10-15   Commands for Collecting and Viewing Qbin Interval, Summary, and Counter Statistics

Command Description

clrportstats

Resets or clears the summary statistics of all statistics types on a specified port.

clrtrkstats

Resets or clears the summary statistics of all statistic types on a specified trunk.

cnfportstats

Collects USER statistics of one statistics type on a specified port.

cnfstatparms

Enables TFTP statistics from the CLI (the equivalent of using the SCM).

cnftrkstats

Collects USER statistics of one statistic type on a specified trunk.

dspcntrstats

Views all counter statistics of a specified entity in real time. These statistics cannot be cleared.

dspportstathist

Views statistics of one statistics type on a specified port.

dspqbinstats

Views all Qbin summary statistics on a specified trunk or port.

dsptrkstathist

Views interval statistics of one statistic type on a specified trunk.

Where to Go Next

For more information on MPLS on the IGX, refer to MPLS Label Switch Controller and Enhancements 12.2(8)T.

For more information on Cisco IOS configuration and commands, refer to documentation supporting Cisco IOS Release 12.2T or later (see the "Cisco IOS Software Documentation" section).

For more information on switch software commands, refer to the Cisco WAN Switching Command Reference, Chapter 1, "Command Line Fundamentals."

For installation and basic configuration information, see the Cisco IGX 8400 Series Installation Guide, Chapter 1, "Cisco IGX 8400 Series Product Overview."


Posted: Mon May 12 15:45:43 PDT 2003
All contents are Copyright © 1992-2003 Cisco Systems, Inc. All rights reserved.
Important Notices and Privacy Statement.