The Cisco IGX 8400 series delivers in-chassis IP routing through the Universal Router Module (URM), a dual-processor card set that provides high-density voice and data interfaces. You can also provide IP routing services by using an external router and configuring ATM PVCs on the IGX.
IP service on the IGX functions through configuration of virtual switch interfaces (VSIs), which allow a node to be managed by multiple label switch controllers (LSCs), such as a Multiprotocol Label Switching (MPLS) controller.
Note Private Network-to-Network Interface (PNNI) is not supported on the URM.
This chapter primarily contains information related to MPLS support on the IGX using the URM. For information on configuring MPLS using an external router, such as a Cisco 7200, see the Update to the Cisco IGX 8400 Series Reference Guide for Switch Software Release 9.3.1.
For information on additional Cisco IOS features supported on the IGX, see the Cisco IOS documents listed in the "Related Documentation" section.
Table 10-1 contains information on the hardware and software required to provision IP services across an IGX node.
Note Refer to the Compatibility Matrix for Cisco IOS software, switch software, and firmware compatibility requirements. |
Table 10-1 Required Hardware and Software for IP Services
Note Except for the differences noted in this chapter, the URM can be configured as though it were an external router and a UXM or UXM-E card. Switch software setup on the embedded UXM-E portion of the card is the same as for a UXM or UXM-E, while the embedded router is configured like any external Cisco router. For more information on the URM, see the "Universal Router Module" section on page 2-84.
The URM consists of a logically-partitioned front card connected to a universal router interface (URI) back card. The front card contains an embedded UXM-E running an administration firmware image, and an embedded router running a Cisco IOS image. The embedded UXM-E and the embedded router connect through a logical internal ATM interface, with capability equivalent to an OC3 ATM port.
The logically-defined internal ATM interface is seen as a physical interface between the embedded router and the embedded UXM-E processor. However, remote connections terminating on the URM can use the internal ATM interface as an endpoint, with the embedded UXM-E processor passing transmissions to the embedded router.
The URM supports the following types of IP service:
To configure the URM for any IP service, you must use both switch software and Cisco IOS commands. See "Functional Overview" for more information on basic URM installation and setup.
Note VSIs can only be configured on the UXM or UXM-E card sets. FR support for VSI controllers functions through FRF.8 service interworking on the UXM or UXM-E front card.
VSIs allow a node to be managed by multiple controllers, such as an MPLS controller.
In the VSI control model, a controller sees the switch as a collection of slaves with their interfaces. The controller can establish connections between any two interfaces, using the resources allocated to its partition. For example, an MPLS controller can only access interfaces that have been configured in the MPLS controller's partition.
A VSI interface becomes available to the controller after the VSI partition is created and enabled. The controller manages its partition through the VSI protocol and runs the VSI master. The VSI master interacts with each VSI slave in the VSI partition and sets up and terminates VSI connections.
A maximum of three VSI partitions can be enabled on the IGX. These VSI partitions can function together or independently, and are in addition to AutoRoute on each interface.
VSIs on the IGX provide the following features:
For information on configuring VSI partitions and VSIs on the IGX, see the "VSI Configuration" section.
A controller application uses a VSI master to control one or more VSI slaves. For an IGX without a URM, the controller application and the VSI master reside in an external router, and the VSI slaves exist in UXM cards on the IGX node (see Figure 10-1).
IGX nodes with an installed URM can instead host the controller application and the VSI master on the embedded router of the URM front card.
The controller establishes a link between the VSI master and every VSI slave on the associated switch. The slaves in turn establish links between each other (see Figure 10-2).
When multiple switches are connected together, cross-connects within the individual switch enable links between switches to be established (see Figure 10-3).
When a connection request is received by the VSI slave, it is first subjected to a Connection Admission Control (CAC) process before being forwarded to the firmware layer responsible for actually programming the connection. The granting of the connection is based on the following criteria:
After CAC, the VSI slave accepts a connection setup command from the VSI master in the MPLS controller, and receives connection information including service type, bandwidth parameters, and QoS parameters. This information is used to determine an index into the VI's selected Service Template VC Descriptor table, which provides access to the associated extended parameter set stored in the table.
A preassigned ingress service template containing CoS Buffer links manages ingress traffic.
Note Service class templates (SCTs) are primarily used with virtual circuits (VCs) and must be used when configuring the IGX to work with a VSI master in a label switch controller (LSC).
SCTs provide a way to map a set of standard connection protocol parameters to different hardware platforms. For example, SCTs for the BPX and the IGX are different, but the BPX and IGX can still deliver equivalent CoS for full QoS.
On the IGX, the NPM stores a set of SCTs. When a UXM or UXM-E is initially configured, the appropriate SCTs are downloaded to the card. Later, if you configure a new interface on the card, the appropriate SCTs for that new interface will also be downloaded to the card.
Each SCT contains the following information:
Each SCT has an associated Qbin mapping table, which manages bandwidth by temporarily storing cells and serving them to the interface based on bandwidth availability and CoS priority.
Note The default SCT, Template 1, is automatically assigned to a virtual interface (VI) when you configure the interface.
The following nine SCTs are available for assignment to a VSI:
For more information on how SCTs work, see Figure 10-4. For information on supported SCT characteristics, see Table 10-2.
Caution SCTs can be reassigned on an operational interface, triggering a resynchronization process between the UXM or UXM-E and the controllers. However, for a Cisco MPLS VSI controller, reassignment of an SCT on an operational interface will cause all connections on the card to be resynchronized with the controller, and can impact service.
The service type identifier is a 32-bit number.
The service types supported are:
The service type identifier appears on the dspsct screen when you specify a service class template number and service type.
A list of supported service templates, associated Qbins, and service types is shown in Table 10-2.
Table 10-2 Service Category Listing
Template Type | Service Type Identifier | Service Type | Associated Qbin
The service class templates provide a means of mapping a set of extended, generally platform-specific parameters to the set of standard ATM parameters passed to the VSI slave in a UXM port interface during initial bringup of the interface.
A set of service templates is stored in each switch and downloaded to the service modules (UXMs) as needed during initial configuration of the VSI interface when a trunk or line is enabled on the UXM.
An MPLS service template is assigned to the VSI interface when the trunk or port is initialized. The label switch controller (LSC) automatically sets up LVCs via a routing protocol (such as OSPF) and the label distribution protocol (LDP), when the CoS multiple LVC option is enabled at the edge label switch routers (LSRs).
With the multiple VC option enabled (at edge LSRs), four LVCs are configured for each IP source-destination pair. Each of the four LVCs is assigned a service template type; for example, one of the four cell labels might be assigned the label cos2 service type.
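On the edge LSRs, the multiple LVC option is enabled per ATM subinterface. The following is a minimal sketch only; the interface numbering is hypothetical, and the tag-switching command forms are assumed from the Cisco IOS 12.x MPLS feature set rather than taken from this guide:

    interface ATM1/0.1 tag-switching
     ip unnumbered Loopback0
     ! request one LVC per class of service (four LVCs) toward each destination
     tag-switching atm multi-vc
     ! enable MPLS (tag switching) on the subinterface
     tag-switching ip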
Each service template type has an associated Qbin. Qbins provide the ability to manage bandwidth by temporarily storing cells, and then serving them out as bandwidth is available. This is based on factors including bandwidth availability, and the relative priority of different classes of service.
When ATM cells arrive from the edge LSR at the UXM port with one of four CoS labels, they receive CoS handling based on that label. A table lookup is performed, and the cells are processed based on their connection classification. Based on its label, a cell receives the ATM differentiated service associated with its template type (for example, the MPLS1 template) and service type (for example, label cos2 bw), plus the associated Qbin characteristics and other ATM parameters.
For information on setting up service class templates on the IGX, see "ATM Service" in "Functional Overview."
Table 10-3 describes the connection parameters and range of values that may be configured, if not already preconfigured, for ATM service classes per VC.
Not every service class includes all parameters. For example, a CBR service type has fewer parameters than an ABR service type.
Note Not every service class has a value defined for every parameter listed in Table 10-3.
A summary of the parameters associated with each of the service templates is provided in Table 10-4.
Table 10-4 MPLS Service Categories
Parameter | Default | Signaling | Tag 0/4 | Tag 1/5 | Tag 2/6 | Tag 3/7 | Tag-ABR
Qbins store cells and serve them to an interface based on bandwidth availability and CoS priority (see Figure 10-5). For example, if CBR and ABR cells must exit the switch from the same interface, but the interface is already transmitting CBR cells from another source, the newly-arrived CBR and ABR cells are held in the Qbin associated with that interface. As the interface becomes accessible, the Qbin passes CBR cells to the interface for transmission. After the CBR cells have been transmitted, the ABR cells are passed to the interface and transmitted to their destination.
Qbins are used with VIs, in situations where the VI is a VSI with a VSI master running on a separate controller (a label switch controller or LSC). For a VSI master to handle a VSI, each virtual circuit (VC, also known as virtual channel when used in FR networks) must receive a specific service class specified through a 32-bit service type identifier. The IGX supports identifiers for the following service types:
When a connection setup request is received from the VSI master in the LSC, the VSI slave uses the service type identifier to index into an SCT database with extended parameter settings for connections matching that service type identifier. The VSI slave then uses these extended parameter settings to complete connection setup and necessary configuration for connection maintenance and termination as needed.
The VSI master normally sends the VSI slave a service type identifier (either ATM Forum or MPLS), QoS parameters (such as CLR or CDV), and bandwidth parameters (such as PCR or MCR).
A Qbin template defines a default configuration for the set of Qbins attached to an interface. When you assign an SCT to an interface, switch software copies the Qbin configuration from the Qbin template and applies the Qbin configuration to all the Qbins attached to the interface.
Qbin templates only apply to the Qbins available to VSI partitions, meaning that Qbin templates only apply to Qbins 10-15. Qbins 0-9 are reserved and configured by automatic routing management (ARM, or AutoRoute).
Some parameters on the Qbins attached to the interface can be reconfigured for each interface. These changes do not affect the Qbin templates, which are stored on the NPM, although they do affect the Qbins attached to the interface.
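As an illustration of such a per-interface adjustment, the following sketch assumes the cnfqbin and dspqbin switch software commands (not otherwise described in this chapter); the interface, Qbin number, and threshold values are placeholders, and the exact parameter list depends on the interface type and release:

    cnfqbin 5.1 10 <thresholds ...>    (reconfigure Qbin 10 on interface 5.1)
    dspqbin 5.1 10                     (display the resulting Qbin configuration)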
For a visual description of the interaction between SCTs and Qbin templates, see Figure 10-6.
The Qbin and SCT default settings for LSCs are shown in Table 10-5.
Note Templates 2, 4, 6, and 8 support policing on partial packet discard (PPD).
Table 10-5 Qbin Default Settings
Qbin | Max Qbin Threshold (usec) | CLP High | CLP Low/EPD | EFCI | Discard Selection
LABEL Template 1
Qbins 10 through 15 are used by VSI on interfaces configured as trunks or ports. The rest of the Qbins are reserved and configured by AutoRoute.
When you execute the dspsct command, it displays the default service type and the Qbin number.
The available Qbin parameters are shown in Table 10-6.
Note All 16 possible virtual interfaces are provided with 16 Qbins each. The Qbins available for VSI on an interface are restricted to Qbins 10 through 15.
Table 10-6 Service Template Qbin Parameters
Template Object Name | Template Units | Template Range/Values
MPLS enables edge routers to apply labels to packets or frames before transmission into the network. After the packets or frames are transmitted into the network, these labels allow network core devices to switch labeled packets with minimal lookup activity. This process integrates virtual circuit switching with IP routing, enabling scalable IP networks over ATM backbones. By summarizing routing decisions, MPLS enables switches to perform IP forwarding, optimizing the packet's route through the network core.
With MPLS, you can set up explicit data flow routes using path, resource availability, and requested quality of service (QoS) constraints.
You can enable MPLS on an IGX node in two ways: by connecting an external label switch controller (LSC), such as the Cisco 7204VXR, to function as an MPLS controller for all IGX nodes in the network, or by configuring an installed URM as an MPLS controller. Support for MPLS is enabled through the use of a common control interface, or VSI, between the IGX and the controller.
Note Setting up MPLS requires one LSC for each partition on each IGX node running MPLS in the network.
Tip To save rack space, use multiple, separately-installed URMs as LSCs for multiple partitions on the same IGX node.
For more information on MPLS on the IGX, refer to MPLS Label Switch Controller and Enhancements 12.2(8)T.
For enabling business IP services, the most significant benefit of MPLS is the ability to assign labels that have special meanings. Sets of labels distinguish destination address and application type or service class (see Figure 10-7).
The MPLS label is compared to precomputed switching tables in core devices, such as the IGX ATM LSR, allowing each switch to automatically apply the correct IP services to each packet. Tables are precalculated, to avoid reprocessing packets at every hop. This strategy not only makes it possible to separate types of traffic, such as best-effort traffic from mission-critical traffic, it also makes an MPLS solution highly scalable.
Because MPLS uses different policy mechanisms to assign labels to packets, it decouples packet forwarding from the content of IP headers. Labels have local significance, and they are used many times in large networks. Therefore, it is almost impossible to run out of labels. This characteristic is essential to implementing advanced IP services such as QoS, large-scale VPNs, and traffic engineering.
This section describes MPLS CoS with the use of the Cisco IGX 8410, 8420, and 8430 ATM label switch router (ATM LSR). MPLS CoS is also supported in networks using the URM as an LSC.
Note The URM does not support MPLS CoS when configured as an LSR, and networks using URM-LSRs cannot run MPLS CoS across those network segments containing the URM-LSR.
The MPLS CoS feature enables network administrators to provide differentiated types of service across an MPLS switching network. Differentiated service satisfies a range of requirements by supplying the particular type of service specified for each packet by its CoS. Service can be specified in different ways; for example, through use of the IP precedence bit settings in IP packets or in source and destination addresses.
The MPLS CoS feature can be used optionally with MPLS virtual private networks. MPLS CoS can also be used in any MPLS switching network.
In supplying differentiated service, MPLS CoS offers packet classification, congestion avoidance, and congestion management. Table 10-7 lists these functions and how they are delivered.
Table 10-7 CoS Services and Features
MPLS CoS lets you duplicate Cisco IOS IP CoS (Layer 3) features as closely as possible in MPLS switching devices, including label switching routers (LSRs), edge LSRs, and ATM label switching routers (ATM LSRs). MPLS CoS functions map nearly one-for-one to IP CoS functions on all interface types.
For additional information, refer to Cisco router and MPLS-related Cisco IOS documentation (see the "Cisco IOS Software Documentation" section).
To use the MPLS CoS feature, your network must run these Cisco IOS features:
Tip For information on switch software, Cisco IOS software, and card firmware compatibility, see the Compatibility Matrix at http://www.cisco.com/kobayashi/sw-center/sw-wan.shtml.
In IP+ATM networks, MPLS uses predefined sets of labels for each service class, so switches automatically know which traffic requires priority queuing. A different label is used per destination to designate each service class (see Figure 10-8).
There can be up to four labels per IP source-destination. Using these labels, core LSRs implement class-based WFQ to allocate specific amounts of bandwidth and buffer to each service class. Cells are queued by class to implement latency guarantees.
On a Cisco IP+ATM LSR, the weights assigned to each service class are relative, not absolute. The switch can therefore borrow unused bandwidth from one class and allocate it to other classes according to weight. This scenario enables very efficient bandwidth utilization. The class-based WFQ solution ensures that customer traffic is sent whenever unused bandwidth is available, whereas ordinary ATM VCs drop cells in oversubscribed classes even when bandwidth is available.
Packets have precedence bits in the type of service field of the IP header, set at either the host or an intermediate router, which could be the edge label switch router (LSR). The precedence bits define one of four classes of service (CoS 0 through 3), such as available, standard, premium, or control.
To establish CoS operation when the IGX and the associated LSC router are initially configured, the binding type assigned to each LVC interface on the IGX is configured to be multiple LVCs.
Then, under the routing protocol (OSPF, for example), four LVCs are set up across the network for each IP source-to-destination requirement. Depending on the precedence bits set in the packets received by the edge LSR, the ATM cells for a packet that are sent to the ATM LSR belong to one of four classes (as determined by the cell label, that is, the VPI.VCI). Furthermore, two subclasses are distinguishable within each class by the use of the cell loss priority (CLP) bit in the cells.
Then the ATM LSR performs an MPLS data table lookup and assigns the appropriate CoS template and Qbin characteristics. The default mapping for CoS is listed in Table 10-8.
Figure 10-9 shows an example of IP traffic across an ATM core consisting of IGX-ATM LSRs. The host is sending two types of traffic across the network: interactive video and non-time-critical data. Because multiple LVCs have automatically been generated for all IP source-destination paths, traffic for each source-destination pair is assigned to one of four LVCs, based on the precedence bit setting in the IP packet header.
In this case, the video traffic might be assigned to the premium CoS and transmitted across the network, starting with cell label "51" out of Edge LSR-A and arriving at Edge LSR-C with cell label "91." In each IGX-ATM LSR, the cells are processed with the preassigned bandwidth, queuing, and other ATM QoS functions suitable to "premium" traffic.
In a similar fashion, low-priority data traffic cells with the same IP source-destination might be assigned label "53" out of Edge LSR-A and arrive at Edge LSR-C with the label "93," receiving preassigned bandwidth, queuing, and other ATM QoS functions suitable to "available" traffic.
You can use MPLS to build an entirely new class of IP VPNs. MPLS-enabled IP VPNs (MPLS-VPNs) are connectionless networks with the same privacy as VPNs built using Frame Relay or ATM VCs.
Cisco MPLS solutions offer multiple IP service classes to enforce business-based policies. Providers can offer low-cost managed IP services because they can consolidate services over common infrastructure, and improve provisioning and network operations.
Although Frame Relay and multiservice ATM deliver privacy and CoS, IP delivers any-to-any connectivity, and MPLS on Cisco IP+ATM switches, such as the IGX-ATM LSR, enables providers to offer the benefits of business-quality IP services over their ATM infrastructures.
MPLS-VPNs, created in Layer 3, are connectionless, and therefore substantially more scalable and easier to build and manage than conventional VPNs.
In addition, value-added services, such as application and data hosting, network commerce, and telephony services, can easily be added to a specific MPLS-VPN because the service provider's backbone recognizes each MPLS-VPN as a separate, connectionless IP network. MPLS over IP+ATM VPN networks combine the scalability and flexibility of IP networks with the performance and QoS capabilities of ATM.
From a single access point, it is now possible to deploy multiple VPNs, each of which designates a different set of services (see Figure 10-10). This flexible way of grouping users and services makes it possible to deliver new services more quickly and cost-effectively. The ability to associate closed groups of users with specific services is critical to service provider value-added service strategies.
The VPN network must be able to recognize traffic by application type, such as voice, mission-critical applications, or e-mail. The network should easily separate traffic based on its associated VPN without configuring complex, point-to-point meshes.
The network must be "VPN aware" so that the service provider can easily group users and services into intranets or extranets with the services they need. In such networks, VPNs offer service providers a technology that is highly scalable and allows subscribers to quickly and securely provision extranets to new partners. MPLS brings "VPN awareness" to switched or routed networks. It enables service providers to quickly and cost-effectively deploy secure VPNs of all sizes over the same infrastructure.
As part of their VPN services, service providers can offer premium services defined by SLAs to expedite traffic from certain customers or applications. QoS in IP networks gives devices the intelligence to preferentially handle traffic as dictated by network policy.
The QoS mechanisms give network managers the ability to control the mix of bandwidth, delay, jitter, and packet loss in the network. QoS is not a device feature; it is an end-to-end system architecture. A robust QoS solution includes a variety of technologies that interoperate to deliver scalable, media-independent services throughout the network, with system-wide performance monitoring capabilities.
Note VPNs can be used with the CoS feature for MPLS. MPLS-VPN does not require use of MPLS CoS. MPLS-VPNs with CoS are supported on the URM-LSC but are not supported on the URM-LSR.
MPLS-enabled IP VPN networks provide the foundation for delivering value-added IP services, such as multimedia application support, packet voice, and application hosting, all of which require specific service quality and privacy. Because QoS and privacy are an integral part of MPLS, they no longer require separate network engineering.
Cisco's comprehensive set of QoS capabilities enables providers to prioritize service classes, allocate bandwidth, avoid congestion, and link Layer 2 and Layer 3 QoS mechanisms:
MPLS makes it possible to apply scalable QoS across very large routed networks and Layer 3 IP QoS in ATM networks, because providers can designate sets of labels that correspond to service classes. In routed networks, MPLS-enabled QoS substantially reduces processing throughout the core for optimal performance. In ATM networks, MPLS makes end-to-end Layer 3-type services possible.
Traditional ATM and Frame Relay networks implement CoS with point-to-point virtual circuits, but this is not scalable because of high provisioning and management overhead. Placing traffic into service classes at the edge enables providers to engineer and manage classes throughout the network. If service providers manage networks based on service classes, rather than point-to-point connections, they can substantially reduce the amount of detail they must track, and increase efficiency without losing functionality.
Compared to per-circuit management, MPLS-enabled CoS in ATM networks provides virtually all the benefits of point-to-point meshes with far less complexity. Using MPLS to establish IP CoS in ATM networks eliminates per-VC configuration. The entire network is easier to provision and engineer.
Subscribers want assurance that their VPNs, applications, and communications are private and secure. Cisco offers many robust security measures to keep information confidential:
In intranet and extranet VPNs based on Cisco MPLS, packets are forwarded using a unique route distinguisher (RD). RDs are unknown to end users and uniquely assigned automatically when the VPN is provisioned. To participate in a VPN, a user must be attached to its associated logical port and have the correct RD. The RD is placed in packet headers to isolate traffic to specific VPN communities.
MPLS packets are forwarded using labels attached in front of the IP header. Because the MPLS network does not read IP addresses in the packet header, it allows the same IP address space to be shared among different customers, simplifying IP address management.
Service providers can deliver fully managed, MPLS-based VPNs with the same level of security that users are accustomed to in Frame Relay/ATM services, without the complex provisioning associated with manually establishing PVCs and performing per-VPN customer premises equipment (CPE) router configuration.
QoS addresses two fundamental requirements for applications that run on a VPN: predictable performance and policy implementation. Policies are used to assign resources to applications, project groups, or servers in a prioritized way. The increasing volume of network traffic, along with project-based requirements, results in the need for service providers to offer bandwidth control and to align their network policies with business policies in a dynamic, flexible way.
VPNs based on Cisco MPLS technology scale to support many thousands of business-quality VPNs over the same infrastructure. MPLS-based VPN services solve peer adjacency and scalability issues common to large virtual circuit (VC) and IP tunnel topologies. Complex permanent virtual circuit/switched virtual circuit (PVC/SVC) meshes are no longer needed, and providers can use new, sophisticated traffic engineering methods to select predetermined paths and deliver IP QoS to premium business applications and services.
Service providers can use MPLS to build intelligent IP VPNs across their existing ATM networks. Because all routing decisions are precomputed into switching tables, MPLS both expedites IP forwarding in large ATM networks at the provider edge, and makes it possible to apply rich Layer 3 services via Cisco IOS technologies in Layer 2 cores.
A service provider with an existing ATM core can deploy MPLS-enabled edge switches or routers (LSRs) to enable the delivery of differentiated business IP services. The service provider needs only a small number of VCs to interconnect provider edge switches or routers to deliver many secure VPNs.
Cisco IP+ATM solutions give ATM networks the ability to intelligently "see" IP application traffic as distinct from ATM/Frame Relay traffic. By harnessing the attributes of both IP and ATM, service providers can provision intranet or extranet VPNs. Cisco enables IP+ATM solutions with MPLS, merging the application of Cisco IOS software with carrier-class ATM switches (see Figure 10-11).
Without MPLS, IP transport over ATM networks requires a complex hierarchy of translation protocols to map IP addressing and routing into ATM addressing and routing.
MPLS eliminates complexity by mapping IP addressing and routing information directly into ATM switching tables. The MPLS label-swapping paradigm is the same mechanism that ATM switches use to forward ATM cells. This solution has the added benefit of allowing service providers to continue offering their current Frame Relay, leased-line, and ATM services portfolio while enabling them to provide differentiated business-quality IP services.
To cost-effectively provision feature-rich IP VPNs, providers need features that distinguish between different types of application traffic and apply privacy and QoS, with far less complexity than an overlay IP tunnel, Frame Relay, or ATM "mesh."
Compared to an overlay solution, an MPLS-enabled network can separate traffic and provide privacy without tunneling or encryption. MPLS-enabled networks provide privacy on a network-by-network basis, much as Frame Relay or ATM provides it on a connection-by-connection basis. The Frame Relay or ATM VPN offers basic transport, whereas an MPLS-enabled network supports scalable VPN services and IP-based value added applications. This approach is part of the shift in service provider business from a transport-oriented model to a service-focused one.
In MPLS-enabled VPNs, whether over an IP switched core or an ATM LSR switch core, the provider assigns each VPN a unique identifier called a route distinguisher (RD) that is different for each intranet or extranet within the provider network. Forwarding tables contain unique addresses, called VPN-IP addresses (see Figure 10-12), constructed by linking the RD with the customer IP address. VPN-IP addresses are unique for each endpoint in the network, and entries are stored in forwarding tables for each node in the VPN.
Border Gateway Protocol (BGP) is a routing information distribution protocol that defines who can talk to whom using MPLS extensions and community attributes. In an MPLS-enabled VPN, BGP distributes information about VPNs only to members of the same VPN, providing native security through traffic separation. Figure 10-13 shows an example of a service provider network with service provider edge label switch routers (PE) and customer edge routers (CE). The ATM backbone switches are indicated by a double-ended arrow labeled "BGP."
Additional security is assured because all traffic is forwarded using LSPs, which define a specific path through the network that cannot be altered. This label-based paradigm is the same property that assures privacy in Frame Relay and ATM connections.
The provider, not the customer, associates a specific VPN with each interface when the VPN is provisioned. Within the provider network, RDs are associated with every packet, so VPNs cannot be penetrated by attempting to "spoof" a flow or packet. Users can participate in an intranet or extranet only if they reside on the correct physical port and have the proper RD. This setup makes Cisco MPLS-enabled VPNs difficult to enter, and provides the same security levels users are accustomed to in a Frame Relay, leased-line, or ATM service.
VPN-IP forwarding tables contain labels that correspond to VPN-IP addresses. These labels route traffic to each site in a VPN (see Figure 10-14).
Because labels are used instead of IP addresses, customers can keep their private addressing schemes, within the corporate intranet, without requiring Network Address Translation (NAT) to pass traffic through the provider network. Traffic is separated between VPNs using a logically distinct forwarding table for each VPN. Based on the incoming interface, the switch selects a specific forwarding table, which lists only valid destinations in the VPN, as specified by BGP. To create extranets, a provider explicitly configures reachability between VPNs. NAT configurations may be required.
One strength of MPLS is that providers can use the same infrastructure to support many VPNs and do not need to build separate networks for each customer. VPNs loosely correspond to "subnets" of the provider network.
This solution builds IP VPN capabilities into the network itself, so providers can configure a single network for all subscribers that delivers private IP network services such as intranets and extranets without complex management, tunnels, or VC meshes. Application-aware QoS makes it possible to apply customer-specific business policies to each VPN. Adding QoS services to MPLS-based VPNs works seamlessly; the provider Edge LSR assigns correct priorities for each application within a VPN.
MPLS-enabled IP VPN networks are easier to integrate with IP-based customer networks. Subscribers can seamlessly interconnect with a provider service without changing their intranet applications, because these networks have application awareness built in, for privacy, QoS, and any-to-any networking. Customers can even transparently use their private IP addresses without NAT.
The same infrastructure can support many VPNs for many customers, removing the burden of separately engineering a new network for each customer, as with overlay VPNs.
It is also much easier to perform adds, moves, and changes. If a company wants to add a new site to a VPN, the service provider only has to tell the CPE router how to reach the network, and configure the LSR to recognize VPN membership of the CPE. BGP updates all VPN members automatically.
This scenario is easier, faster, and less expensive than building a new point-to-point VC mesh for each new site. Adding a new site to an overlay VPN entails updating the traffic matrix, provisioning point-to-point VCs from the new site to all existing sites, updating OSPF design for every site, and reconfiguring each CPE for the new topology.
Each VPN is associated with one or more VPN routing/forwarding instances (VRFs). A VRF table defines a VPN at a customer site attached to a PE router. A VRF table consists of the following:
A 1-to-1 relationship does not necessarily exist between customer sites and VPNs. A specific site can be a member of multiple VPNs. However, a site may be associated with only one VRF. A site VRF contains all the routes available to the site from the VPNs of which it is a member.
Packet forwarding information is stored in the IP routing table and the CEF table for each VRF. Together, these tables are analogous to the forwarding information base (FIB) used in Label Switching.
A logically separate set of routing and CEF tables is constructed for each VRF. These tables prevent information from being forwarded outside a VPN, and prevent packets that are outside a VPN from being forwarded to a router within the VPN.
The distribution of VPN routing information is controlled by using VPN route target communities, implemented by BGP extended communities.
When a VPN route is injected into BGP, it is associated with a list of VPN route target extended communities. Typically the list of VPN communities is set through an export list of extended community-distinguishers associated with the VRF from which the route was learned.
Associated with each VRF is an import list of route-target communities. This list defines the values to be verified by the VRF table, before a route is eligible to be imported into the VPN routing instance.
For example, if the import list for a particular VRF includes extended community-distinguishers A, B, and C, then any VPN route that carries any of those extended community-distinguishers (A, B, or C) will be imported into the VRF.
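As a hedged Cisco IOS illustration of export and import lists (the VRF name, RD, route-target values, interface, and address are placeholders), the following VRF exports its routes with route target 100:1 and imports any route carrying 100:1 or 100:2:

    ip vrf vpn1
     rd 100:1
     route-target export 100:1
     route-target import 100:1
     route-target import 100:2
    !
    interface Ethernet0/0
     ip vrf forwarding vpn1
     ip address 192.168.10.1 255.255.255.0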
A service provider edge (PE) router can learn an IP prefix from a customer edge (CE) router by static configuration, through a Border Gateway Protocol (BGP) session with the CE router, or through the Routing Information Protocol (RIP) with the CE router.
After the router learns the prefix, it generates a VPN-IPv4 (vpnv4) prefix based on the IP prefix, by linking an 8-byte route distinguisher to the IP prefix. This extended VPN-IPv4 address uniquely identifies hosts within each VPN site, even if the site is using globally nonunique (unregistered private) IP addresses.
The route distinguisher (RD) used to generate the VPN-IPv4 prefix is specified by a configuration command on the PE.
BGP uses VPN-IPv4 addresses to distribute network reachability information for each VPN within the service provider network. BGP distributes routing information between IP domains (known as autonomous systems) using messages to build and maintain routing tables. BGP communication takes place at two levels: within the domain (interior BGP or IBGP) and between domains (external BGP or EBGP).
BGP propagates vpnv4 information using the BGP Multi-Protocol extensions for handling these extended addresses (see RFC 2283, Multi-Protocol Extensions for BGP-4). BGP propagates reachability information (expressed as VPN-IPv4 addresses) among PE routers; the reachability information for a specific VPN is propagated only to other members of that VPN. The BGP Multi-Protocol extensions identify the valid recipients for VPN routing information. All members of the VPN learn routes to other members.
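A minimal sketch of the corresponding MP-IBGP configuration between two PE routers follows; the AS number, neighbor address, and update source are assumptions, not values from this guide:

    router bgp 100
     neighbor 10.10.10.2 remote-as 100
     neighbor 10.10.10.2 update-source Loopback0
     !
     address-family vpnv4
      neighbor 10.10.10.2 activate
      neighbor 10.10.10.2 send-community extended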
Based on the routing information stored in the IP routing table and the CEF table for each VRF, Cisco label switching uses extended VPN-IPv4 addresses to forward packets to their destinations.
An MPLS label is associated with each customer route. The label is assigned by the PE router that originated the route, and it directs the data packets to the correct CE router.
Label forwarding across the provider backbone is based on either dynamic IP paths or Traffic Engineered paths. A customer data packet has two levels of labels attached when it is forwarded across the backbone.
The PE router associates each CE router with a forwarding table that contains only the set of routes that should be available to that CE router.
Note VC merge on the IGX is not supported in releases preceding Switch Software Release 9.3.40.
Virtual circuit (VC) merge on the IGX improves the scalability of MPLS networks by combining multiple incoming VCs into a single outgoing VC (known as a merged VC). VC merge is implemented as part of the output buffering for the ATM interfaces found on the UXM-E. Each VC merge is performed in the egress direction for the connections.
Both interslave and intraslave connections are supported. However, neither the OAM cell format nor tag-ABR for the MPLS controller is supported.
Note VC merge is not supported on the UXM card.
To use VC merge on the UXM-E, connections must meet the following criteria:
Direct MPLS connections on the IGX are only supported on the URM card. To configure MPLS connections not listed in Table 10-9, use an external label edge router (LER).
Note For VISM connections, the URM only supports VoIP.
Table 10-9 Connections Supported on the URM
Hardware Platform | Connection Endpoint | Connection Type | Voice Connection | Data Connection
Note Use FRF.8 SIW transparent mode for VoATM connections, and use FRF.8 SIW translational mode for VoIP and data connections.
You can provision IP services of varying complexities on the IGX using the URM card.
If you want to use the URM as an in-chassis router for VoIP or VoATM, see CARDS for basic URM setup and the Cisco IOS software documentation supporting the Cisco IOS software being used on the URM.
If you want to use the URM as an in-chassis router with IPsec-VPN capabilities, see the "Installing the Encryption Advanced Interface Module" section in the Cisco IGX 8400 Series Installation Guide for information on installing the correct AIM module for VPN. For information on configuring IPSec, refer to Cisco IOS documentation, as listed in the "Cisco IOS Software Documentation" section.
The following sections describe how to set up the IGX switch for use with external controllers, preparatory to configuring the IGX for MPLS. For information on configuring MPLS on the IGX, see the "MPLS Configuration on the IGX" section. For information on configuring MPLS-VPNs on the IGX, see the "MPLS VPN Sample Configuration" section.
Tip For additional Cisco IOS features supported on the IGX, see the release notes document for the Cisco IOS software release you intend to use on the URM.
Controllers require a free bandwidth of at least 150 cells per second (cps) to be reserved for signaling on the IGX port. If a minimum of 150 cps is not available on the port, the switch software command addctrlr is not executed. To calculate free bandwidth, use the following equation:
free bandwidth = port speed - PVC maximum bandwidth - VSI bandwidth
In some cases, you may need to change the bandwidth allocated to AutoRoute to obtain a free bandwidth of 150 cps. Use the switch software command, cnfrsrc, to reallocate bandwidth on a port.
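For example (using assumed figures), if an OC-3 port carries a maximum of 353,208 cps, with 300,000 cps committed to AutoRoute PVCs and 53,000 cps committed to existing VSI partitions, the free bandwidth is 353,208 - 300,000 - 53,000 = 208 cps, which satisfies the 150-cps requirement for adding a controller.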
Note While you can add a controller to a UXM interface without configuring a VSI partition on that same interface, you will not be able to use the interface for VSI connections without also configuring a VSI partition. For example, MPLS controller support for XTag interfaces includes setup of a tag-control VC between the hosting interface and the XTag interface. This VC is a VSI connection, so the controller cannot configure the connection unless the hosting interface has a VSI partition.
When configuring a node for VSI, complete the following steps (a sample command sequence follows the procedure):
Step 2 Using the switch software commands uptrk, upln, and upport, activate the desired trunk, line, and port for the configured partition.
Step 3 Using the switch software command cnfrsrc, configure partition resources on the active interface (see Table 10-10 for command parameters).
Tip The VPI range is of local significance and does not have to be the same for each port in a node. However, for tracking purposes, Cisco recommends keeping the VPI range the same for each port in the node.
Table 10-10 cnfrsrc Command Parameters
Step 4 Using the switch software command addctrlr, add a controller.
Note The switch software command addctrlr supports only MPLS and generic VSI controllers that do not require support for the Annex G protocol.
Tip The switch software command addctrlr requires you to specify a controller ID, which is a unique identifier between 1 and 16. Different controllers must be specified with different controller IDs.
Step 5 Assign an ATM CoS template to the interface (ATM services only; see "ATM Service" in "Functional Overview").
Step 6 Add a slave (for more information on VSI masters and slaves, see "VSI Masters and Slaves").
Step 7 Configure slave redundancy (UXM and UXM-E only).
Tip The URM does not support hot slave redundancy. For the URM, warm redundancy must be configured by setting up redundant partitions. See MPLS Label Switch Controller and Enhancements 12.2(8)T.
Step 8 Use the switch software command, dspctrlrs, to display your controller configuration.
Step 9 Manage your resources.
Tip Use dspctrlrs to display all VSI controllers attached to the IGX. Use delctrlr to delete a controller from the IGX.
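The following is a minimal sketch of this procedure for a single trunk interface. The slot.port value 5.1 is hypothetical, and the cnfrsrc and addctrlr parameters are shown as placeholders rather than literal syntax (see Table 10-10 and the command reference for the exact parameter order):

    uptrk 5.1
    cnfrsrc 5.1 <max PVC LCNs> <max PVC bandwidth> 1 e <VSI LCNs> <VPI range> <VSI bandwidth>
    addctrlr 5.1 <controller ID> <partition ID> <control VPI> <control VCI>
    dspctrlrs

Here partition 1 is enabled (e) on trunk 5.1, the controller is attached to that partition, and dspctrlrs confirms that the controller was added.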
Note MPLS controllers serving as an interface shelf are designated as Label Switch Controllers (LSCs).
A logical switch is configured by enabling and allocating resources to the partition. This must be done for each partition in the interface. The same procedure must be followed to define each logical switch.
The following resources are partitioned among the different logical switches:
Resources are configured and allocated per interface, but the pool of resources may be managed at a different level. The bandwidth is limited by the interface rate, which places the limitation at the interface level. Similarly, the range of VPIs is also defined at the interface level.
Configure these parameters on a VSI partition on an interface:
Configure partitions by using the cnfrsrc command.
Note Switch Software Release 9.3 or later supports up to three partitions.
Table 10-11 shows the three resources that must be configured for a partition designated ifc1 (interface controller 1).
The controller is supplied with a range of LCNs, VPIs, and bandwidth. Examples of available VPI values for a VPI partition are listed in Table 10-12.
Table 10-12 VPI Range for Partitioning
UXM | Range
Only one VPI is available per virtual trunk because a virtual trunk is currently delineated by a specific VPI.
When a trunk is activated, the entire bandwidth is allocated to AutoRoute. To change the allocation to provide resources for a VSI, use the cnfrsrc command on the IGX switch.
You can configure partition resources between AutoRoute PVCs and three VSI LSC controllers. Up to three VSI controllers in different control planes can independently control the switch without communication between controllers. The controllers are unaware of other control planes sharing the switch because different control planes use different partitions of the switch resources.
The following limitations apply to multiple VSI partitioning:
Each logical switch represents a collection of interfaces, each with an associated set of resources.
The following example is an IGX switch with four interfaces:
See Example 10-1 for the interface configurations for Figure 10-15. See Table 10-13 for an example with three partitions enabled.
To display the partitioning resources of an interface, use the dsprsrc command, as in Example 10-1.
The two slaves in a redundant pair keep the redundant card in a hot standby state for all VSI connections. This is accomplished by a bulk update (on the standby slave) of the existing connections at the time that Y-redundancy is added, and by an incremental update of all subsequent connections.
The Slave Hot Standby Redundancy feature enables the redundant card to fully duplicate all VSI connections on the active card, and prepare for operation on switchover. On bringup, the redundant card initiates a bulk retrieval of connections from the active card for fast sync-up. Subsequently, the active card updates the redundant card on a real-time basis.
The VSI Slave Hot Standby Redundancy feature provides the capability for the slave standby card to be preprogrammed the same as the active card. When the active card fails, the slave card switchover operation can be implemented quickly. Without the VSI portion, the UXM card has already provided the hot standby mechanism by duplicating CommBus messages from the NPM to the standby UXM card.
The following sections describe types of communication between the switch software and firmware to support VSI master and slave redundancy.
To provide a smooth migration of the VSI feature on the UXM card, line and trunk Y-redundancy is supported. You can pair cards with and without the VSI capability as a Y-redundant pair if the feature is not enabled on the specific slot; in that case, switch software does not perform "mismatch checking" even if the UXM firmware does not support the VSI feature. The VSI capability is treated as a card attribute and added to the attribute list.
In a Y-redundancy pair configuration, the VSI capability is determined by the minimum of the two cards. A card without VSI capabilities will mismatch if any of the interfaces has an active partition or controller. Attempts to enable a partition or add a controller on a logical card that does not support VSI are blocked.
You add an LSC to a node by using the addctrlr command. When adding a controller, you must specify a partition ID. The partition ID identifies the logical switch assigned to the controller. The valid partitions are 1, 2, and 3.
Note You can configure partition resources between Automatic Routing Management PVCs and three VSI LSC controllers.
To display the list of controllers in the node, use the command dspctrlrs. The functionality is also available via SNMP using the switchIfTable in the switch MIB.
The management of resources on the VSI slaves requires that each slave in the node has a communication control PVC to each of the controllers attached to the node. When a controller is added to the IGX by using the addctrlr command, the NPM sets up the set of master-slave connections between the new controller port and each of the active slaves in the switch. The connections are set up using a well-known VPI.VCI. The default value of the VPI for the master-slave connection is 0. The default value of the VCI is (40 + [slot - 2]), where slot is the logical slot number of the slave; for example, the slave in logical slot 5 uses VPI 0 and VCI 43.
Note After the controllers are added to the node, the connection infrastructure is always present. The controllers may or may not decide to use it, depending on their state. Inter-slave channels are present whether controllers are present or not.
The addition of a controller to a node will fail if enough channels are not available to set up the control VCs (from 14 in a 16-slot switch to 30 in a 32-slot switch) in one or more of the UXM slaves.
When the slaves receive the controller configuration message from the NPM, the slaves send a VSI message trap to the controller informing it of the slave's existence. This prompts an exchange from the controller that launches the interface discovery process with the slaves.
When the controller is added, the NPM will send a VSI configuration CommBus message to each slave with this controller information, and it will set up the corresponding control VCs between the controller port and each slave.
When a new slave is activated in the node by upping the first line/trunk on a UXM card which supports VSI, the NPM will send a VSI configuration CommBus (internal IGX protocol) message with the list of the controllers attached to the switch.
The NPM will set up master-slave connections from each controller port on the switch to the added slave. It will also set up interslave connections between the new slave and the other active VSI slaves.
Note Slaves in standby mode are not considered VSI configured and are not accounted for in the interslave connections.
Use the command delctrlr to delete controllers that have been added to interfaces.
When one of the controllers is deleted by using the delctrlr command, the master-slave connections and connections associated with this controller on all the UXM cards in the switch are also deleted. VSI partitions remain configured on the node.
The deletion of the controller triggers a new VSI configuration (internal) message. This message includes the list of the controllers attached to the node, with the deleted controller removed from the list. This message is sent to all active slaves in the node.
As long as one controller is attached to the node with a specific partition, the resources assigned to the partition are not affected by deletion of any other controllers from the node. The slaves only release all VSI resources used on the partition when the partition itself is disabled.
When a slave is deactivated by downing the last line or trunk on the card, the NPM tears down the master-slave connections between the slave and each of the controller ports on the node. The NPM also tears down all the interslave connections connecting the slave to other active VSI slaves.
Note Because VC merge is not supported on the UXM, Y-redundancy cannot be set up using a UXM-E and a UXM without generating a feature mismatch error. If Y-redundancy is set up between a UXM-E and a UXM, the VC merge feature cannot be enabled.
VC merge on the IGX is supported in Switch Software Release 9.3.40.
Before setting up Y-redundancy on two UXM-E cards, make sure that VC merge feature support is enabled on both cards. Both cards must have the appropriate card firmware to support the VC merge feature.
For more information on y-redundancy on the UXM-E, see the "Card Redundancy" section in "Functional Overview."
Tip Before enabling VC merge, set the minimum number of channels to 550 using the cnfrsrc command. If this minimum number of channels is not available on the card, an error message is displayed.
To enable VC merge on the IGX, perform the following steps:
Step 2 If you receive the error message shown below, repeat Step 1.
Step 3 Continue with switch configuration or management.
Tip To display the current status of VC merge on the IGX, enter the dspcdparm slot number command.
To disable VC merge on the IGX, perform the following steps:
Step 2 At the following message, enter y to continue disabling VC merge.
Step 3 If you receive the error message shown below, repeat Step 1 and Step 2.
If you disable the last partition on the slot while VC merge is still enabled, VC merge is disabled on the slot, and the card will display the following error message:
Step 4 Continue with switch configuration or management.
Table 10-14 Switch Software Commands for Setting up a VSI (Virtual Switch Interface)
The following sections provide a sample MPLS configuration using the network shown in Figure 10-16.
For information on configuring Cisco IOS software for MPLS, see MPLS Label Switch Controller and Enhancements 12.2(8)T.
Figure 10-16 provides an example of configuring the IGX as an MPLS label switch (ATM-LSRs) for MPLS switching of IP packets through an ATM network. The figure also shows configuration for Cisco routers for use as label edge routers (edge LSRs) at the edges of the network.
Figure 10-16 displays the configuration for the following components:
The configuration of ATM LSR-3, ATM LSR-4, and ATM LSR-5 is not detailed in this guide. However, it is similar to the sample configurations detailed for ATM LSR-1 and ATM LSR-2. The configuration for Edge LSR-B is similar to that of Edge LSR-A and Edge LSR-C.
The service template contains two classes of data:
When a connection setup request is received from the VSI master in the LSC, the VSI slave (in the UXM, for example) uses the service type identifier to index into a SCT database that contains extended parameter settings for connections matching that index. The slave uses these values to complete the connection setup and program the hardware.
The ATM-LSR consists of two hardware components: the IGX switch (also called the label switch slave) and a router configured as a label switch controller (LSC). The label switch controller can be either an external Cisco router, such as the Cisco 7204, or the chassis-installed URM. LSC configuration for either router option is essentially the same.
For information on configuring the Cisco IOS software running on the LSC for MPLS, see MPLS Label Switch Controller and Enhancements 12.2(8)T.
Tip When configuring an ATM-LSR on an IGX with an installed URM, use two terminal sessions: one to log into the embedded UXM-E on the URM card to configure the label switch slave portion of the ATM-LSR, and one to log into the embedded router on the URM card to configure the LSC portion of the ATM-LSR.
To set up MPLS on an IGX node, complete the following tasks:
1. Set up the ATM label switch routers (ATM-LSRs):
a. IGX switch (label switch slave): Configure the IGX for VSI.
b. Label switch controller (LSC): Configure the router with extended ATM interfaces on the IGX.
2. Set up label edge routers (LERs).
3. MPLS automatically sets up LVCs across the network.
Figure 10-17 shows a high-level view of an MPLS network. The packets destined for 204.129.33.127 could be real-time video, and the packets destined for 204.133.44.129 could be data files transmitted when network bandwidth is available.
When MPLS is set up on the nodes shown in Figure 10-17 (ATM-LSR 1 through ATM-LSR 5, Edge LSR_A, Edge LSR_B, and Edge LSR_C), automatic network discovery is enabled. Then MPLS automatically sets up LVCs across the network. At each ATM LSR, VCI switching (also called "label swapping") transports the cells across previously-determined LVC paths.
At the edge LSRs, labels are added to incoming IP packets, and removed from outgoing packets. Figure 10-17 shows IP packets with host destination 204.129.33.127 transported as labeled ATM cells across LVC 1. The figure also displays IP packets with host destination 204.133.44.129 transported as labeled ATM cells across LVC 2.
IP addresses shown are for illustrative purposes only and are assumed to be isolated from external networks. Check with your network administrator for appropriate IP addresses for your network.
Figure 10-18 shows the MPLS label swapping process. This process takes place as the IP packets are transported, in the form of ATM cells, across the network on the LVC 1 and LVC 2 virtual circuits:
1. An unlabeled IP packet with destination 204.133.44.129 arrives at the edge label switching router (Edge LSR-A).
2. Edge LSR-A checks its label forwarding information base (LFIB) and matches the destination with prefix 204.133.44.0/8.
3. Edge LSR-A converts the AAL5 frame to cells and sends it out as a sequence of cells on interface 1/VCI 50.
4. ATM-LSR-1 (a Cisco IGX 8410, 8420, or 8430 label switch router), controlled by a routing engine, performs a normal switching operation by checking its LFIB and switching incoming cells on interface 2/VCI 50 to outgoing interface 0/VCI 42.
5. ATM-LSR-2 checks its LFIB and switches incoming cells on interface 2/VCI 42 to outgoing interface 0/VCI 90.
6. Edge LSR-C receives the incoming cells on interface 1/VCI 90, checks its LFIB, reassembles the ATM cells into an AAL5 frame and then into an IP packet, and sends the packet to its LAN destination 204.133.44.129.
Note IGX nodes must be set up and configured in the ATM network (including links to other nodes) before beginning configuration for MPLS support on the node. |
To configure the IGX nodes for operation, set up a virtual interface and associated partition by using the cnfrsrc command.
To link the Cisco router to the IGX, use the addctrlr command to add the router as a VSI controller. This allows the router's label switch controller function to control the MPLS operation of the node.
For information on configuring the IGX partition, including distribution of IGX partition resources, see the "VSI Configuration" section.
In this example, assume that a single external controller per node is supported, so that the partition chosen is always 1.
To configure the Cisco IGX 8410, 8420, and 8430 label switch routers, ATM-LSR-1 and ATM-LSR-2:
Proceed with configuration as follows:
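As a minimal sketch of the switch-side steps (the interface 3.1 and partition 1 are assumptions consistent with the examples in this section, and the switch software prompts for any parameters not supplied on the command line):

    cnfrsrc 3.1
    addctrlr 3.1
    dspctrlrs

When prompted by cnfrsrc, enable partition 1 and supply the LCN, VPI range, and bandwidth values appropriate for your network; when prompted by addctrlr, supply the controller and partition identifiers and the control VPI/VCI. Then use dspctrlrs to verify that the LSC appears as an attached VSI controller.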
Before configuring the routers for the MPLS label switch controlling function, perform the initial router configuration. As part of this configuration, you must enable and configure the ATM adapter interface.
After you configure the ATM adapter interface, you can set up the extended ATM interface for label switching. The IGX ports can then be configured by the router as extended ATM ports of the physical router ATM interface, according to the following procedures for LSC1 and LSC2.
Proceed with the configuration of LSC1 and then LSC2 as follows:
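As a hedged sketch only (the ATM1/0 control interface, the XTagATM31 interface name, and the hexadecimal VSI port descriptor are illustrative assumptions, not values taken from Figure 10-16), the control interface toward the IGX is enabled for VSI, and each IGX interface in the MPLS partition is then presented to the router as an extended ATM (XTagATM) interface:

    interface ATM1/0
     no ip address
     tag-control-protocol vsi
    !
    interface XTagATM31
     ip unnumbered Loopback0
     extended-port ATM1/0 vsi 0x00030100
     tag-switching ip

One XTagATM interface is created for each IGX interface that carries MPLS traffic, and each is enabled for label switching in the same way.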
Before configuring the routers for the MPLS controlling function, it is necessary to perform the initial router configuration. As part of this configuration, you must enable and configure the ATM adapter interface.
Then you can set up the extended ATM interface for MPLS. The IGX ports can be configured by the router as extended ATM ports of the physical router ATM interface, according to the following procedures for LSR-A and LSR-C.
To configure the routers performing as label edge routers, use the procedures in the following tables.
Proceed with configuration as follows:
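As a hedged sketch of an edge LSR configuration (the interface numbers, addresses, and OSPF process are illustrative assumptions), CEF switching is enabled, a label switching subinterface is created on the ATM interface facing the ATM-LSR, and that interface is included in IP routing:

    ip cef
    !
    interface Loopback0
     ip address 10.10.10.4 255.255.255.255
    !
    interface ATM1/0.1 tag-switching
     ip unnumbered Loopback0
     tag-switching ip
    !
    router ospf 1
     network 10.10.10.4 0.0.0.0 area 0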
After you have completed the initial configuration procedures for the IGX and edge routers, the routing protocol (such as OSPF) sets up the LVCs via MPLS as shown in Figure 10-19.
Preliminary testing of the MPLS network consists of:
The following Cisco IOS commands are useful for monitoring and troubleshooting an MPLS network:
Note Cisco IOS commands must be entered at the Cisco IOS CLI in order to function. If you are logged in to the switch, start a separate terminal session to log into either the LSC router portion of the ATM-LSR or the network's edge routers. |
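As a sketch, commands along the following lines are commonly used for this purpose; exact command names vary by Cisco IOS release and by whether TDP or LDP is the label distribution protocol, so treat these as assumptions to verify against your release:

    show mpls interfaces
    show tag-switching tdp discovery
    show tag-switching atm-tdp bindings
    show xtagatm cross-connect
    ping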
Use the following procedure to test the label switching configuration on the IGX switch (ATM LSR-1, for example):
The sample output for ATM-LSR-1 (Cisco IGX 8410, 8420, and 8430 shelves) is:
Tip Check online documentation for the most current information. For information on accessing related documents, see the "Accessing User Documentation" section. |
Step 2 If there are no interfaces present, first check that card 3 is active and available with the switch software command, dspcds. If the card is not active and available, reset the card with the switch software command, resetcd. If necessary, remove and reseat the card to reset it.
Step 3 Check the line status using the switch software command, dsplns (see Example 10-2).
Step 4 Check the trunk status using the switch software command, dsptrks (see Example 10-3).
Note The dsptrks screen for ATM-LSR-1 should show the 3.1, 3.3 and 4.2 MPLS interfaces, with the "Other End" of 3.1 reading "VSI (VSI)". |
Step 5 To see the controllers attached to a node, use the switch software command, dspctrlrs (see Example 10-4). The resulting screens should show trunks configured as links to the LSC as type VSI.
Step 6 To view partition configurations on an interface, use the switch software command, dsprsr (see Example 10-5).
Step 7 To see Qbin configuration information, use the switch software command, dspqbin (see Example 10-6).
Step 8 If an interface is present but not enabled, perform the previous debugging steps for the interface.
Step 9 Use the Cisco IOS ping command to send a ping over the label switch connections. If the ping does not work, but all the label switching and routing configuration appear correct, check that the LSC has found the VSI interfaces correctly by entering the following Cisco IOS command on the LSC:
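A command along the following lines (an assumption drawn from the MPLS LSC command set, so verify it against your Cisco IOS release) lists the interfaces and descriptors that the LSC has discovered through VSI:

    show controllers vsi descriptor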
If the interfaces are not shown, recheck the configuration of port 3.1 on the IGX switch as described in the previous steps.
Step 10 If the VSI interfaces are shown but are down, check whether the LSRs connected to the IGX switch show that the lines are up. If not, check such items as cabling and connections.
Step 11 If the LSRs and IGX switches show that the interfaces are up but the LSC does not show this, enter the following command on the LSC:
If the show mpls interfaces command shows that the interfaces are up but the ping does not work, enter the following command on the LSC (see Example 10-7):
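The "xmit" and "xmit/recv" states in that display come from label distribution protocol discovery; a command along the following lines (an assumption, since the 12.2 T releases use tag switching command forms with TDP) produces this kind of display:

    show tag-switching tdp discovery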
Step 12 If the interfaces on the display show "xmit" and not "xmit/recv," then the LSC is sending LDP messages, but not getting responses. Enter this command on the neighboring LSRs.
If resulting displays also show "xmit" and not "xmit/recv," then one of two things is probable:
a. The LSC is not able to set up VSI connections.
b. The LSC is able to set up VSI connections, but cells are not transferred because they cannot get into a queue.
Step 13 Check the VSI configuration on the switch again, for interfaces 3.1, 3.3, and 4.2, paying attention to:
a. Maximum bandwidths of at least a few thousand cells per second
c. All Qbin thresholds nonzero
Note VSI partitioning and resources must be correctly set up on the interface connected to the LSC, interface 3.1 (in this example), and interfaces connected to other label switching devices. |
Before configuring VPN operation, your network must run the following Cisco IOS services:
For MPLS VPN operation, you must first configure the Cisco IGX 8410, 8420, and 8430 ATM LSR, including its associated Cisco router LSC for MPLS or for MPLS QoS.
Configure network VPN operation on the edge LSRs that act as PE routers.
The Cisco IGX 8410, 8420, and 8430, including its LSC, requires no configuration beyond enabling MPLS and QoS.
To configure a VRF and associated interfaces, perform these steps on the PE router:
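As a minimal sketch of this procedure (the VRF name vpn1, route distinguisher 100:1, and the customer-facing Ethernet interface are illustrative assumptions), the VRF is defined, given a route distinguisher, and then bound to the interface; note that assigning an interface to a VRF removes its IP address, so the address is re-entered afterward:

    ip vrf vpn1
     rd 100:1
    !
    interface Ethernet0/0
     ip vrf forwarding vpn1
     ip address 192.168.1.1 255.255.255.0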
To configure BGP between provider routers for distribution of VPN routing information, perform these steps on the PE router:
 | Command | Purpose
---|---|---
Step 1 | |
Step 2 | |
Step 3 | | Activates a BGP session. Prevents automatic advertisement of address family IPv4 for all neighbors.
Step 4 | |
Step 5 | |
Step 6 | |
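Reading the steps above together, the corresponding PE configuration might look like the following sketch (the AS number 100, the neighbor address 10.10.10.2, and the VRF name vpn1 are assumptions); automatic IPv4 advertisement is disabled, the IBGP neighbor is activated under the VPNv4 address family, and customer routes are carried in a per-VRF IPv4 address family:

    router bgp 100
     no bgp default ipv4-unicast
     neighbor 10.10.10.2 remote-as 100
     neighbor 10.10.10.2 update-source Loopback0
     !
     address-family vpnv4
      neighbor 10.10.10.2 activate
      neighbor 10.10.10.2 send-community extended
     exit-address-family
     !
     address-family ipv4 vrf vpn1
      redistribute connected
     exit-address-family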
To configure import and export routes to control the distribution of routing information, perform these steps on the PE router:
 | Command | Purpose
---|---|---
Step 1 | |
Step 2 | | Imports routing information from the specified extended community.
Step 3 | | Exports routing information to the specified extended community.
Step 4 | |
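Under the same assumptions (VRF vpn1, extended community 100:1), a sketch of the corresponding route-target configuration is:

    ip vrf vpn1
     route-target import 100:1
     route-target export 100:1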
To verify VPN operation, perform these steps on the PE router:
See Example 10-8 for a sample MPLS-VPN configuration file from a PE router.
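As a sketch (the VRF name vpn1 is an assumption), commands along these lines display the configured VRFs, the per-VRF routing table, and the VPNv4 routes learned through BGP:

    show ip vrf
    show ip route vrf vpn1
    show ip bgp vpnv4 all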
The maximum number of slaves in a 16-slot switch is 14 and in a 32-slot switch is 30. Therefore, a maximum of 14 or 30 LCNs are necessary to connect a slave to all other slaves in the node. This set of LCNs is allocated from the AutoRoute partition.
If a controller is attached to an interface, master-slave connections are set up between the controller port and each of the slaves in the node.
These LCNs will be allocated from the AutoRoute Management pool. This pool is used by AutoRoute Management to allocate LCNs for connections.
VSI controllers require a bandwidth of at least 150 cps to be reserved on the port for signaling. This bandwidth is allocated from the free bandwidth available on the port (free bandwidth = port speed - PVC maximum bandwidth - VSI bandwidth).
With hot slave standby, the standby slave card is preprogrammed the same as the active card, so that when the active card fails, switchover to the standby card is completed within 250 ms. Without the VSI portion, the UXM card already provides a hot standby mechanism by duplicating internal IGX protocol messages from the NPM to the standby UXM card.
Because the master VSI controller does not recognize the standby slave card, the active slave card forwards VSI messages that it received from the master VSI controller to the standby slave VSI card.
In summary, these are the hot standby operations between the active and standby cards:
1. Internal IGX protocol messages are duplicated to a hot-standby slave VSI card by the NPM.
2. VSI messages (from the master VSI controller or another slave VSI card) are forwarded to the hot-standby slave VSI card by the active slave VSI card. Operation 2 is normal data transfer, which occurs after both cards are synchronized.
3. When the hot-standby slave VSI card starts up, it retrieves and processes all VSI messages from the active slave VSI card. Operation 3 is initial data transfer, which occurs when the standby card first starts up.
The data transfer from the active card to the standby card should not affect the performance of the active card. Therefore, the standby card performs most of the work, simplifying the operations on the active card: the standby card drives the data transfer and performs the synchronization, while the active card only forwards VSI messages and responds to requests from the standby card.
Qbin statistics allow network engineers to engineer and overbook the network on a per CoS (or per Qbin) basis. Each connection has a specific CoS and hence, a corresponding Qbin associated with it.
The IGX switch software collects statistics for UXM AutoRoute Qbins 1 through 9 on trunks and AutoRoute Qbins 2, 3, 7, 8, and 9 on ports. Statistics are also collected for VSI Qbins 10 through 15 on UXM trunks and ports.
The following statistics types are collected per Qbin:
Since all Qbins provide the same statistical data, the Qbin number together with its statistic forms a unique statistic type. These unique statistic types are displayed in Cisco WAN Manager and may also be viewed by using the CLI.
Trunk and port counter statistics (cell discard statistics only) for the following Qbins can be collected by SNMP:
Qbin summary and counter statistics are collected automatically; TFTP and USER interval statistics can be enabled. The cell discard statistics on UXM trunk Qbins 1 through 9 are AUTO statistics. The cell discard statistics on Qbins 10 through 15 and on AutoRoute port Qbins are not AUTO statistics.
Interval statistics (per Qbin) are collected through Cisco WAN Manager's Statistics Collection Manager (SCM) and through CLI.
Table 10-15 Commands for Collecting and Viewing Qbin Interval, Summary, and Counter Statistics
For more information on MPLS on the IGX, refer to MPLS Label Switch Controller and Enhancements 12.2(8)T.
For more information on Cisco IOS configuration and commands, refer to documentation supporting Cisco IOS Release 12.2T or later (see the "Cisco IOS Software Documentation" section).
For more information on switch software commands, refer to the Cisco WAN Switching Command Reference, Chapter 1, "Command Line Fundamentals."
For installation and basic configuration information, see the Cisco IGX 8400 Series Installation Guide, Chapter 1, "Cisco IGX 8400 Series Product Overview."