Planning for Quality of Service

Effective use of Quality of Service (QoS) capabilities requires careful planning. Before you deploy QoS to your network, you should carefully consider the types of applications used in the network and which QoS techniques might improve the performance of those applications. Then, use the Cisco QoS Policy Manager (QPM) to create and deploy your QoS policies to the network.

These topics introduce you to QoS concepts and CiscoWorks2000 QoS Policy Manager, and help you get started on developing a QoS strategy:

What Is Quality of Service?

Quality of Service (QoS) is a set of capabilities that allow you to deliver differentiated services for network traffic, thereby providing better service for selected network traffic. With QoS, you can increase bandwidth for critical traffic, limit bandwidth for non-critical traffic, and provide consistent network response, among other things. This allows you to use expensive network connections more efficiently, and to establish service level agreements with customers of the network.

To implement QoS, you define QoS policies for network devices. The policies can differentiate traffic based on categories, such as user address, application type, content, and so on. Once you have identified the applications and users on your network that are bandwidth or time sensitive, as well as the applications that take more than their fair share of bandwidth, you can develop effective QoS policies to improve the overall functioning of your network.

Following are some examples of the benefits of configuring QoS on your network:

Figure 1-1 shows an example of a local and wide area network. Typically, you classify traffic in the LAN before sending it to the WAN. The devices in the WAN then use the classification to determine the service requirements for the traffic. The WAN devices can limit the bandwidth available to the traffic, or give the traffic priority, or even change the classification of the traffic. If you control the WAN as well as the LAN, you can control all aspects of the traffic's priority. However, after the traffic leaves the networks under your control, it is your service provider who decides how to service the traffic (which might be based on an explicit agreement with your enterprise).


Figure 1-1: QoS Across LAN and WAN Networks


What Is CiscoWorks2000 QoS Policy Manager?

CiscoWorks2000 QoS Policy Manager (QPM) provides a scalable platform for defining and applying QoS policy. QPM manages QoS configuration and maintenance on a system-wide basis for Cisco devices, including routers, layer-3 switches (switch routers), switches, and LocalDirector. Using QPM, you can define and deploy policies more easily than you can by using device commands directly.

These topics provide details about the capabilities of QPM:

Overview of QoS Policy Manager

QoS Policy Manager (QPM) lets you define QoS policies at a more abstract level than device commands allow. For example, with QPM you can define policies for groups of devices rather than one device at a time, and you can create policies that apply to applications or groups of hosts more easily than you could by entering device commands directly.

By giving you a high-level view of your policies, QPM makes it easier for you to define, modify, and redeploy policies. You can more easily analyze "what if" scenarios in a lab and then deploy your best solution to your live network.

By simplifying QoS policy definition and deployment, QPM makes it easier for you to create and manage differentiated services in your network, thus making more efficient and economical use of your existing network resources. For example, you can deploy policies that ensure that your mission-critical applications always get the bandwidth required to run your business.

QPM is suitable for large-scale enterprise deployments consisting of hundreds or thousands of devices, such as IP Telephony deployments. QPM facilitates management of large networks by allowing you to create multiple QoS databases, each of which manages a subset of the network devices. In this way, you can effectively partition the network (typically by region and/or types of devices) and implement phased deployment of QoS policies across the network. The number of devices managed in a single database will vary according to your needs and preferences. Each QoS database can be managed separately, and can thus be assigned to specific individuals according to areas of administrative responsibility.

QPM includes the following programs:

For information about QPM features, refer to:

QPM Features and Benefits

Table 1-1 provides a description of QPM's main features and benefits.


Table 1-1: QPM Features and Benefits
Feature | Description | Further Information

Policy abstraction from device commands

QoS Manager converts your policies to device commands, without your having to know the device commands.

QPM Abstract Actions Translated to Device Commands

Simplified policy definition

Policy Manager's policy definition interface simplifies the creation of complex policies and enables you to create filters to define exactly the traffic you are targeting.

Creating a Policy

Simplified policy prioritization

Devices analyze policies in the order in which they are entered in Policy Manager. You can easily change this order within Policy Manager. QPM reorders the policies automatically on deployment.

Changing the Priority of Policies.

Basic policy validation

Policy Manager lets you define only policies that are supported by the device, interface, and software version. For example, when you set a queuing technique for an interface, only policies supported by that technique can be defined.

Device groups

Policy Manager lets you define groups of devices or interfaces, instead of having to configure only one device at a time. If a group contains devices that use different software versions, Policy Manager ensures that you can define only policies supported by the lowest version of the software used in the group.

Working with Device Groups

Device querying

Policy Manager queries devices you add to the QoS database to determine the software version, device type, and available interfaces. Because the information is obtained directly from the device, it is reliable.

Adding a Device

CiscoWorks2000 integration

Policy Manager lets you import device inventories exported by the CiscoWorks2000 Resource Manager Essentials applications. This simplifies adding devices to the QoS database.

Importing Devices from a Device Inventory

Host groups

Policy Manager lets you define groups of hosts (specific hosts or subnets). You can then use these groups when defining policies. For example, you can define a policy for all traffic that comes from a specific subnet. Alternatively, you can define a policy for all traffic that comes from your database server, giving it high priority.

Working with Host Groups

Application services

Policy Manager lets you define application services based on port, protocol, and host or subnet. You can then use these definitions when defining policies.

Working with Application Services Aliases

DNS host name resolution

If you use host names, Policy Manager resolves them to IP addresses. You can periodically force Policy Manager to redo DNS resolution to pick up changes in your network.

Resolving the Host Names in a Policy to Their IP Addresses

Web-based reporting

Both Policy Manager and Distribution Manager produce reports in HTML format. You can store these reports on your intranet and manipulate them as you require, or print them from the browser.

Creating Policy Reports

Creating Policy Distribution Reports

Uploading Device QoS Configurations

Job and device status, logging, and history

Distribution Manager maintains logs of job and device policy distributions, and maintains a history of these logs. This ensures there is an audit trail of policy configuration actions.

Reading the Distribution Manager Logs

Ability to view device commands

Both Policy Manager and Distribution Manager let you inspect the device commands that QPM will use to configure your devices. If you are fluent in IOS software, Catalyst software, or LocalDirector commands, or if you are just learning, this feature can help you understand the device's configuration commands.

Viewing the Configuration Commands for a Device

Job control

Distribution Manager lets you halt policy distributions when you are distributing policies to several devices. You can also configure Distribution Manager to distribute policies to many devices in parallel, in which case your ability to cancel policy distributions is more limited.

Changing Distribution Manager Configuration Settings

Incremental configuration updating

When distributing policies, Distribution Manager distributes only the policies that have changed.

Distributing Policies and QoS Configurations

Hands-off configuration updating

You can use the QPM distribute_policy.exe program to distribute QoS databases from a script or program. This lets you change QoS configurations on a pre-set schedule without human intervention.

Deploying Distribution Jobs from an External Program

Voice over IP (VoIP) support

QPM supports Class-Based Weighted Fair Queuing (CBWFQ) with QoS features that ensure reliable delivery of voice, with low latency. The result is minimal delay, jitter and packet loss.

Management of Voice and Other Real-Time Traffic

Traffic Shaping or Traffic Limiting Techniques for Controlling Bandwidth

"Configuring QoS for IP Telephony"

Upload of existing device configuration

If you have already defined QoS configurations on your devices using the CLI, you can upload them into the QoS database when you add the devices to the database.

How Does QoS Policy Manager Support Existing QoS Configurations?

Uploading Device QoS Configurations

Verification of device configuration

QPM lets you check whether changes have been made on your devices by comparing the policies configured on the devices with the policies defined in your QoS database.

Does QoS Policy Manager Ensure That Policies Are Consistent with Network Configuration?

Verifying Device Configuration

Ability to restore a previous database version

You can restore a previous version of a specific database that was distributed to the network. This feature is very useful when unexpected errors occur as a result of the deployment of a database and there is an immediate need to go back to a previous version of that database.

Restoring a Database Version

Content networking support

QPM supports using NBAR or dNBAR to recognize and classify specific applications for which network services can then be invoked.

Using Network Based Application Recognition (NBAR) with CBWFQ

New Features in QPM 2.1

Table 1-2 describes the main new features in QPM 2.1. For specific information about which QoS features are supported on the devices and their software versions, refer to What Devices and Software Releases Are Supported?


Table 1-2: New Features in QPM 2.1
Feature | Description | Further Information

Predefined templates for configuring QoS for IP Telephony

QPM provides a separate database containing predefined device groups and policies for configuration of QoS for IP Telephony. All you need to do is click a button on the toolbar to open the database, add devices to the database, then add interfaces to the relevant device groups and deploy the database.

"Configuring QoS for IP Telephony"

Support for QoS on Catalyst 6000 with Supervisor IOS

  • Ability to define the IP precedence/DSCP markdown values to be used in coloring and limiting policies.

  • Ability to configure queuing settings on the interface level, for all interfaces belonging to the same ASIC group.

  • CoS/ToS to DSCP mapping capability.

  • Cross-interface aggregation for coloring and limiting on device groups.

  • Ability to color by trust.

Limiting on Catalyst 6000 Switches

Coloring by Trust

Device Groups for Catalyst 6000 Switches with Supervisor IOS

DSCP Mapping Dialog Box

DSCP Markdown Dialog Box

Additional QoS capabilities on Catalyst 6000 devices

  • Trust extension.

  • Ability to define the IP precedence/DSCP markdown values to be used in limiting policies.

  • CoS/ToS to DSCP mapping capability.

  • Cross-interface aggregation for limiting on device groups.

  • Coloring by trust.

Limiting on Catalyst 6000 Switches

Trust Boundaries

Coloring by Trust

Device Groups for Catalyst 6000 Switches with Supervisor IOS

DSCP Mapping Dialog Box

DSCP Markdown Dialog Box

QoS support for additional devices

  • Catalyst 6000 with Supervisor IOS

  • Catalyst 6000 with Supervisor IOS + DFC

  • Catalyst 6000 with Supervisor IOS + FlexWan

  • Catalyst 4000 Access Gateway Module

  • Catalyst 2900XL

  • Catalyst 3500XL and 3500-PWR-XL

  • Catalyst 4224

  • MC3810

What Devices and Software Releases Are Supported?

QPM support for additional IOS and CatOS versions

  • IOS 12.2, 12.2(2)T, 12.1(6)E

  • CatOS 6.2

What Devices and Software Releases Are Supported?

Extended content networking support

  • NBAR support extended to 2600 and 3600 routers.

  • dNBAR support on Cisco 7500 devices with VIP, and on MSFC FlexWAN

What Devices and Software Releases Are Supported?

Using Network Based Application Recognition (NBAR) with CBWFQ

Enhanced FRTS configuration capabilities

  • Ability to configure FRTS on one DLCI per point-to-point subinterface.

  • Ability to specify the minimum rate to be used during times of congestion (MinCIR).

New Interface and Properties of Interface Dialog Boxes

ATM VC support

Ability to configure Class Based QoS on one ATM PVC per point-to-point ATM subinterface.

New Interface and Properties of Interface Dialog Boxes

Database file protection

Only QPM user group members, administrators, and the QPM system have read/write access to QPM database files.

Understanding QPM User Authorization

Audit trail of user logon

The Job Log in the Distribution Manager shows the user who last saved the current database, as well as the user who last deployed the job.

Audit Trail of User Logon

Update passwords from RME file

Ability to update a group of device passwords at one time, by importing an RME file.

Importing Devices from a Device Inventory

What Devices and Software Releases Are Supported?

The tables in this section describe the devices and software releases that QoS Policy Manager supports, and the QoS techniques you can use on the supported platforms. Please note that the information in the tables is subject to change, depending on specific devices and their QoS support.


Note   QPM allows you to manage large enterprise networks by creating multiple QoS databases, each of which manages a different subset of devices. For example, you can manage core devices in one database and edge devices in another database.

The following information is provided:

Supported Devices and QoS Techniques for IOS Software Releases

Cisco IOS releases supported include 11.1, 11.2, 11.3, 11.1cc, 12.0, 12.1, 12.2, 12.2(2)T, and 12.1(6)E. In addition, a Cisco IOS mapping function is used to enable QPM to support 12.2(2)T and 12.1(6)E QoS techniques included in later releases of IOS main (T or E train) software.

The following tables describe the QoS techniques that you can use with the devices and IOS software releases that QPM supports.


Table 1-3: Supported Devices and QoS Techniques for IOS Software Releases 11.x

Quality of Service Technique | Cisco Systems Device | 11.1 | 11.2 | 11.3 | 11.1(cc)
Priority Queuing (PQ), Custom Queuing (CQ) | 7500, 7200 | Supported | Supported | Supported | Supported
| RSM (Catalyst 5000), 4700, 4500, 4000, 3600, 2500, 1600 | Supported | Supported | Supported | -
| 2600 | - | - | Supported | -
| 7100, RSM VIP, 7500 VIP | - | - | - | -
Weighted Random Early Detection (WRED) | 7500 VIP (uses DWRED) | - | - | - | Supported
| 7500, 7200, RSM (Catalyst 5000), 4700, 4500, 4000, 3600, 2500, 1600 | - | Supported | Supported | -
| 2600 | - | - | Supported | -
| 7100, RSM VIP | - | - | - | -
Weighted Fair Queuing (WFQ) | 7500, 7200 | Supported | Supported | Supported | Supported
| RSM (Catalyst 5000), 4700, 4500, 4000, 3600, 2500, 1600 | Supported | Supported | Supported | -
| 2600 | - | - | Supported | -
| 7100 | - | - | - | -
| RSM VIP (uses FQ only) | - | - | - | -
| 7500 VIP (uses FQ only) | - | - | - | Supported
Distributed Weighted Fair Queuing (DWFQ), Fair Queuing, and QoS group DWFQ | 7500 VIP | - | - | - | Supported
| 7100, RSM VIP, RSM (Catalyst 5000), 4700, 4500, 3600, 4000, 2500 | - | - | - | -
Policy-Based Routing (PBR) (also called coloring or classification) | 7500, 7200, RSM (Catalyst 5000), 4700, 4500, 4000, 3600, 2500, 1600 | - | Supported | Supported | -
| 2600 | - | - | Supported | -
| 7100, RSM VIP, 7500 VIP | - | - | - | -
| MC3810 | - | Supported | - | Supported
Generic Traffic Shaping (GTS) | 7500, 7200, RSM (Catalyst 5000), 4700, 4500, 4000, 3600, 2500, 1600 | - | Supported | Supported | -
| 2600 | - | - | Supported | -
| 7100, RSM VIP, 7500 VIP | - | - | - | -
Frame Relay Traffic Shaping (FRTS) | 7500, 7200, RSM (Catalyst 5000), 4700, 4500, 4000, 3600, 2500, 1600 | - | Supported | Supported | -
| 2600 | - | - | Supported | -
| 7100, RSM VIP, 7500 VIP | - | - | - | -
Committed Access Rate (CAR) classification (coloring) | 7500, 7500 VIP, 7200 | - | - | - | Supported
| RSM (Catalyst 5000), 4700, 4500, 4000, 3600, 2500, 1600, 2600, 7100, RSM VIP | - | - | - | -
Committed Access Rate (CAR) rate limiting | 7500, 7500 VIP, 7200, RSM (Catalyst 5000), 4700, 4500, 3600, 2600 | - | - | - | Supported
| 4000, 2500, 1600, 7100, RSM VIP | - | - | - | -
Resource Reservation Protocol (RSVP) | 7500, 7200, 4700, 4500, 4000, 3600, 2600, 2500, 1600 | - | Supported | Supported | -
| 7100, RSM VIP, 7500 VIP | - | - | - | -
Compressed Real-Time Protocol (CRTP) | 7500, 7200, 7100, RSM (Catalyst 5000), 4700, 4500, 4000, 3600, 2600, 2500, 1600 | - | Supported | Supported | -
| RSM VIP, 7500 VIP | - | - | - | -
Link Fragmentation and Interleaving (LFI) | 7500, 7200, 7100, RSM (Catalyst 5000), 4700, 4500, 4000, 3600, 2600, 2500, 1600 | - | - | Supported | -
| RSM VIP, 7500 VIP | - | - | - | -


Table 1-4: Supported Devices and QoS Techniques for IOS Software Releases 12.x

Quality of Service Technique | Cisco Systems Device | 12.0 | 12.1 | 12.2 | 12.1(6)E and later (1) | 12.2(2)T and later (1)
Priority Queuing (PQ) | 7500 | Supported | Supported | Supported | Supported | Supported
| 7200 | Supported | Supported | Supported | Supported | Supported
| 7100 | - | - | Supported | Supported | Supported
| RSM (Catalyst 5000), 4700, 4500, 3600 | Supported | Supported | Supported | - | Supported
| Cat4224, C4GWY (2) | - | - | Supported | - | Supported
| 4000, 2600, 2500, 1600 | Supported | Supported | Supported | - | Supported
| 1750, 1720 | Supported | Supported | Supported | - | Supported
| RSM VIP, 7500 VIP, MSFC FlexWAN | - | - | - | - | -
| MC3810 | - | Supported | Supported | - | Supported
Custom Queuing (CQ) | 7500 | Supported | Supported | Supported | Supported | Supported
| 7200 | Supported | Supported | Supported | Supported | Supported
| 7100 | - | - | Supported | Supported | Supported
| RSM (Catalyst 5000), 4700, 4500, 3600 | Supported | Supported | Supported | - | Supported
| 4000, 2600, 2500, 1600 | Supported | Supported | Supported | - | Supported
| 1750, 1720 | Supported | Supported | Supported | - | Supported
| RSM VIP, 7500 VIP, MSFC FlexWAN | - | - | - | - | -
| MC3810 | - | Supported | Supported | - | Supported
| C4GWY (2), Cat4224 | - | - | Supported | - | Supported
Weighted Random Early Detection (WRED) | 7500, 7500 VIP (uses DWRED) | Supported | Supported | Supported | Supported | Supported
| 7200 | Supported | Supported | Supported | Supported | Supported
| 7100 | - | - | Supported | Supported | Supported
| 1750, 1720 | Supported | Supported | Supported | - | Supported
| RSM (Catalyst 5000), RSM VIP (Catalyst 5000), 4700, 4500, 4000, 3600, 2600, 2500, 1600 | Supported | Supported | Supported | - | Supported
| MSFC FlexWAN | - | - | Supported | Supported | Supported
| MC3810 | - | Supported | Supported | - | Supported
| C4GWY (2), Cat4224 | - | - | Supported | - | Supported
Weighted Fair Queuing (WFQ) or Fair Queuing (FQ) where indicated | 7500, 7500 VIP (uses FQ only) | Supported | Supported | Supported | Supported | Supported
| 7200 | Supported | Supported | Supported | Supported | Supported
| 7100 | - | - | Supported | Supported | Supported
| 1750, 1720 | Supported | Supported | Supported | - | Supported
| RSM (Catalyst 5000), RSM VIP (Catalyst 5000; uses FQ), 4700, 4500, 4000, 3600, 2600, 2500, 1600 | Supported | Supported | Supported | - | Supported
| MSFC FlexWAN | - | - | Supported | Supported | Supported
| MC3810 | - | Supported | Supported | - | Supported
| C4GWY (2), Cat4224 | - | - | Supported | - | Supported
Distributed Weighted Fair Queuing (DWFQ), Fair Queuing, and QoS group DWFQ | 7500 VIP | - | Supported | - | - | -
| 7500, RSM VIP (Catalyst 5000), 7200, 7100, RSM (Catalyst 5000), 4700, 4500, 4000, 3600, C4GWY (2), Cat4224, 2600, 2500, 1600, MSFC FlexWAN, Catalyst 6000 (Supervisor IOS) (3), Catalyst 4000-L3, MC3810 | - | - | - | - | -
Class-Based Weighted Fair Queuing (CBWFQ) using Modular CLI (MQC) with policing | 7500, 7200, 4700, 4500, 4000, 3800, 3600, C4GWY (2), Cat4224, 2600, 2500 | - | - | Supported | - | Supported
| 1750, 1720 | - | - | - | - | -
| 7500 VIP, RSM VIP, 7100, 7200 | - | - | Supported | Supported | Supported
| MC3810 | - | - | - | - | -
Class-Based Weighted Fair Queuing (CBWFQ) using Modular CLI (MQC) with LLQ | 7500, 7200, 4700, 4500, 3600, C4GWY (2), Cat4224, 2600, 2500, 1600 | - | Supported | Supported | - | Supported
| 1750, 1720 | - | Supported | Supported | - | Supported
| MC3810 | - | Supported | Supported | - | Supported
Class-Based Weighted Fair Queuing (CBWFQ) using MQC with LLQ + set/match classification + RTP+FRTS+police+shape | 7500, 7200, 7100 | - | - | Supported | - | Supported
| 4700, 4500, 3600, C4GWY (2), Cat4224, 2600, 2500, 1600 | - | - | Supported | - | Supported
| 1750, 1720 | - | - | Supported | - | Supported
| 7500 VIP, MSFC FlexWAN | - | - | Supported | Supported | Supported
| RSM VIP | - | - | Supported | - | Supported
| MC3810 | - | - | - | - | -
Class-Based Weighted Fair Queuing (CBWFQ) using MQC with dTS and dFRF | 7500 VIP | - | - | Supported | Supported | Supported
| MSFC FlexWAN | - | - | - | Supported | -
| MC3810 | - | - | - | - | -
IP RTP Priority ("PQ+WFQ") | 7500 | - | Supported | Supported | - | Supported
| 7200 | - | Supported | Supported | - | Supported
| 7100 | - | - | Supported | - | Supported
| 4000 | - | Supported | Supported | - | Supported
| 1750 | - | - | Supported | - | Supported
| 1720 | - | - | Supported | - | Supported
| 4700, 4500, 3600, C4GWY (2), Cat4224, 2600, 2500 | - | Supported | Supported | - | Supported
| 1600 | - | Supported | - | - | -
| RSM VIP, 7500 VIP, MSFC FlexWAN | - | - | - | - | -
| MC3810 | - | - | Supported | - | Supported
Policy-Based Routing (PBR) (also called coloring or classification) | 7500, 7500 VIP, 7200, 7100, RSM (Catalyst 5000), RSM VIP (Catalyst 5000), 4700, 4500, 3600, C4GWY (2), Cat4224, 2600, Catalyst 6000 (Supervisor IOS) (3), Catalyst 4000-L3, MC3810 | - | - | - | - | -
| 4000 | Supported | Supported | Supported | - | Supported
| 2500, 1600 | Supported | Supported | - | - | -
Generic Traffic Shaping (GTS) | 7500 | Supported | Supported | Supported | Supported | Supported
| 7200 | Supported | Supported | Supported | Supported | Supported
| 7100 | - | - | Supported | Supported | Supported
| 1750, 1720 | Supported | Supported | Supported | - | Supported
| RSM (Catalyst 5000), 4700, 4500, 4000, 3600, C4GWY (2), Cat4224, 2600, 2500, 1600 | Supported | Supported | Supported | - | Supported
| 7500 VIP, RSM VIP (Catalyst 5000) | - | - | - | - | -
| MSFC FlexWAN | - | - | - | - | -
| Catalyst 4000-L3 | Supported | - | - | - | -
| MC3810 | - | Supported | Supported | - | Supported
Frame Relay Traffic Shaping (FRTS) | 7500 | Supported | Supported | Supported | Supported | Supported
| 7200 | Supported | Supported | Supported | Supported | Supported
| 7100 | - | - | Supported | Supported | Supported
| 1750, 1720 | Supported | Supported | Supported | - | Supported
| RSM (Catalyst 5000), 4700, 4500, 4000, 3600, C4GWY (2), Cat4224, 2600, 2500, 1600 | Supported | Supported | Supported | - | Supported
| 7500 VIP, RSM VIP (Catalyst 5000) | - | - | - | - | -
| MSFC FlexWAN | - | - | - | - | -
| MC3810 | - | Supported | Supported | - | Supported
Enhanced FRTS with Frame Relay Fragmentation (FRF.12), Frame Relay Fair Queue, and Frame Relay Voice Configuration | 7200 | - | Supported | Supported | - | Supported
| 3600, C4GWY (2), Cat4224, 2600 | - | Supported | Supported | - | Supported
| 7500, 7100, RSM VIP | - | - | Supported | - | Supported
| 1750, 1720 | - | - | Supported | - | Supported
| 7500 VIP, MSFC FlexWAN | - | - | Supported | - | Supported
| MC3810 | - | - | Supported | - | Supported
Committed Access Rate (CAR) classification (also called coloring) | 7500, 7500 VIP, 7200 | Supported | Supported | Supported | Supported | Supported
| 7100 | - | - | Supported | Supported | Supported
| 1750, 1720 | Supported | Supported | Supported | - | Supported
| RSM (Catalyst 5000), RSM VIP (Catalyst 5000), 4700, 4500, 3600, C4GWY (2), Cat4224, 2600 | Supported | Supported | Supported | - | Supported
| 2500, 1600 | - | Supported | Supported | - | Supported
| MSFC FlexWAN | - | - | Supported | Supported | Supported
| 4000 | - | - | - | - | -
| Catalyst 4000-L3 | Supported | - | - | - | -
| MC3810 | - | - | - | - | -
Committed Access Rate (CAR) rate limiting | 7500 VIP | Supported | Supported | Supported | Supported | Supported
| 7100, 7200, 7500 | - | - | Supported | Supported | Supported
| RSM VIP (Catalyst 5000) | Supported | Supported | Supported | - | Supported
| 1750, 1720 | Supported | Supported | Supported | - | Supported
| RSM (Catalyst 5000), 4700, 4500, 3600, C4GWY (2), Cat4224, 2600 | Supported | Supported | Supported | - | Supported
| 2500, 1600 | - | - | Supported | - | Supported
| MSFC FlexWAN | - | - | Supported | Supported | Supported
| 4000 | - | - | - | - | -
| Catalyst 4000-L3 | Supported | - | - | - | -
| MC3810 | - | - | - | - | -
Weighted Round Robin (WRR) | Catalyst 8510, Catalyst 8540 | Supported | Supported | - | - | -
| 4908GL-3, 2948GL-3 | Supported | - | - | - | -
| Catalyst 4000-L3 | Supported | - | - | - | -
Class Based QoS with Limit, Color and Trust | Catalyst 6000 (Supervisor IOS) (3) | - | - | - | Supported | -
2Q2T / 1P2Q2T | Catalyst 6000 (Supervisor IOS) (3) | - | - | - | Supported | -
COS-DSCP-COS mapping | Catalyst 6000 (Supervisor IOS) (3) | - | - | - | Supported | -
IP-precedence-DSCP mapping | Catalyst 6000 (Supervisor IOS) (3) | - | - | - | Supported | -
Resource Reservation Protocol (RSVP) | 7500 | Supported | Supported | Supported | Supported | Supported
| 7200 | Supported | Supported | Supported | Supported | Supported
| 7100 | - | - | Supported | Supported | Supported
| 4000 | Supported | Supported | Supported | - | Supported
| MSFC FlexWAN | - | - | - | Supported | -
| 4700, 4500, 3600, C4GWY (2), Cat4224, 2600, 2500 | Supported | Supported | Supported | - | Supported
| 1600 | Supported | Supported | - | - | -
| RSM VIP | - | - | - | - | -
| MC3810 | - | Supported | Supported | - | Supported
Network-Based Application Recognition (NBAR) | 7200, 7100 | - | - | Supported | Supported | Supported
| Catalyst 6000 (Supervisor IOS) (3), Catalyst 4000-L3, MC3810 | - | - | - | - | -
| 2600, 3600, C4GWY (2), Cat4224 | - | - | Supported | - | Supported
| 7500 VIP | - | - | - | Supported | Supported
| MSFC FlexWAN | - | - | - | Supported | -
Compressed Real-Time Protocol (CRTP) | RSM (Catalyst 5000), 4700, 4500, 4000, 3600, C4GWY (2), Cat4224, 2600, 2500 | Supported | Supported | Supported | - | Supported
| 7500, 7200, 7100 | Supported | Supported | Supported | Supported | Supported
| 1600 | Supported | Supported | - | - | -
| 1750, 1720 | Supported | Supported | Supported | - | Supported
| 7500 VIP, MSFC FlexWAN | - | - | Supported (dCRTP) | - | Supported (dCRTP)
| MC3810 | - | Supported | Supported | - | Supported
Link Fragmentation and Interleaving (LFI) | 7500, 7200, 7100, RSM (Catalyst 5000), 4700, 4500, 4000, 3600, C4GWY (2), Cat4224, 2600, 2500 | Supported | Supported | Supported | - | Supported
| 1600 | Supported | Supported | - | - | -
| 1750, 1720 | Supported | Supported | Supported | - | Supported
| 7500 VIP | - | - | - | - | Supported (dLFI)
| MC3810 | - | Supported | Supported | - | Supported

1. A Cisco IOS mapping function is used to enable QPM-PRO 2.1(x) to support 12.2(2)T and 12.1(6)E QoS techniques included in later releases of IOS main (T or E train) software. IOS 12.1(2)T and IOS 12.1(5)T will be mapped to IOS 12.2. IOS 12.1(2)E will be mapped to IOS 12.1(6)E.
2. Catalyst 4000 with Access Gateway Module.
3. Catalyst 6000 (Supervisor IOS) with FlexWAN supports the same features as MSFC with FlexWAN.

Supported Devices and QoS Techniques for CatOS Software Releases

Cisco CatOS releases supported include 5.5, 6.1 and 6.2. In addition, a Cisco CatOS mapping function is used to enable QPM to work with the supported QoS techniques of 5.5, 6.1, and 6.2 in later releases of CatOS.

The table below describes the QoS techniques that you can use with the devices and CatOS software releases that QPM supports.


Table 1-5: Supported Devices and QoS Techniques for Catalyst Operating System

Quality of Service Technique | Cisco Systems Device | 5.5 | 6.1 | 6.2 and later (1)
Classification (coloring) | Catalyst 5000 family with NFFC-II | Supported | Supported | Supported
| Catalyst 6000 family with PFC | Supported | Supported | Supported
Set Trust and Trust Extension | Catalyst 6000 family with PFC | Supported | Supported | Supported
Traffic policing including microflows and markdown | Catalyst 6000 family with PFC | Supported | Supported | Supported
2Q2T queuing, 1P2Q2T queuing | Catalyst 6000 family with PFC | Supported | Supported | Supported
COS-DSCP-COS mapping | Catalyst 6000 family with PFC | Supported | Supported | Supported

1. A Cisco CatOS mapping function is used to enable QPM 2.1(x) to work with the supported QoS techniques of 5.5, 6.1, and 6.2 in later releases of CatOS. CatOS 5.4 will be mapped to CatOS 5.5.

Note: Catalyst 6000 (Supervisor IOS) supports the same features as Catalyst 6000 with CatOS, except for trust extension.

Supported Device Software Releases and QoS Techniques for Other Devices

The table below describes the QoS techniques that you can use with other devices and device software releases that QPM supports.


Table 1-6: Supported Device Software Releases and QoS Techniques for Other Devices

Quality of Service Technique | Cisco Systems Device | Device Software Release
Packet classification (coloring) | LocalDirector | 3.1.1

QoS Features That Require IP CEF or dCEF

Cisco Express Forwarding (CEF) is an advanced Layer 3 switching technology inside a router. Distributed CEF (dCEF) enables distributed forwarding on interfaces with VIP.

CEF must be enabled on a device in order to configure the following class-based QoS features on the device:

dCEF must be enabled in order to configure the following class-based QoS techniques on a device with VIP:

The global CLI command to enable CEF is:

ip cef [distributed]
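For example, a minimal sketch of enabling CEF or dCEF in global configuration mode (the platform choice is illustrative):

    ! enable CEF globally
    ip cef
    ! or, on a VIP-based platform such as a 7500, enable distributed CEF
    ip cef distributed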

How Does QoS Policy Manager Deploy QoS Policies?

QoS Policy Manager translates your policies into device commands and enters the commands through the device's command line interface (CLI). Some policies require the creation of access control lists (ACLs), others do not.

You can define up to three ACL ranges for the ACLs created by QPM. This lets you control your ACL numbering and use your specific numbering convention. The ACL range is defined globally for all QoS databases in your system.

Through QPM, you can inspect the commands that will be used to configure the devices. During policy distribution, you can view device log messages as QPM configures each device, so that you can identify configuration successes and failures.

Figure 1-2 shows the relationship of QPM to the devices in the network. If you are using a remote version of QPM (B), it updates the network through the QoS Manager service in the complete version (A). QoS Manager does the actual work of translating your policies, contacting the devices, and updating the device configurations.


Figure 1-2: QoS Policy Manager's Relationship to the Network


Device configuration can be implemented through QPM in the following ways:

The configuration file can be deployed to the device via TFTP or any other application that downloads configuration files to the devices.

Using QPM, you can restore a previous version of a specific database that was distributed to the network, in order to redistribute it. This is especially important when unexpected errors occur as a result of the deployment of a database and there is an immediate need to go back to a previous version of that database.

Does QoS Policy Manager Ensure That Policies Are Consistent with Network Configuration?

QoS Policy Manager does basic checking to ensure that your policies can be implemented. For example, you cannot define a policy or select a queuing technique that is not supported on the interface or device based on its software version and device model.

QPM does not check to ensure that your policies are consistent with each other. For example, if you have two policies on an interface, and the policies use the same filter conditions (thus selecting identical traffic), the second policy will never be applied (unless the first policy specifies that the interface should consider subsequent policies, which is a feature only available in committed access rate (CAR) policies). Thus, QPM ensures that your defined policies can be implemented, but does not ensure that your policies will have the effect you desire.

You can verify the device configuration to check whether the policies configured on the devices are consistent with the policies defined in your QoS database. If CLI changes are made on the device after deployment, there might be a mismatch between the database and the device configuration. During verification a DNS resolution check is done for all DNS names that are defined in the policy filter definition.

How Does QoS Policy Manager Support Existing QoS Configurations?

If you have already defined QoS configurations on your devices using the CLI, you can upload them into the QoS database when you add the devices to the database. QoS Policy Manager translates the QoS configurations into the QoS database, and generates reports for those QoS configurations that were not successfully uploaded. Unsuccessful upload might be because of incomplete configurations on the router, configuration options that are not supported by QPM, and so on.

How Does QoS Policy Manager Support Existing ACLs?

If you have existing ACLs on a device, QPM does not change or delete them. They remain defined on the device until you change or remove them using the device's commands. For example, QPM does not modify ACLs created by Cisco ACL Manager.

Planning for QoS Deployment

These topics help you decide how and where to deploy QoS in your network:

Which Applications Benefit from QoS

Some applications can benefit more from QoS techniques than other applications. The benefits you might get from QoS are dependent not only on the applications you use, but on the networking hardware and bandwidth available to you.

In general, QoS can help you solve the problems of constricted bandwidth and time sensitivity.

If you have insufficient bandwidth, either due to the lines you are leasing or the devices you have installed, QoS can help you allocate guaranteed bandwidth to your critical applications. Alternatively, you can limit the bandwidth for non-critical applications (such as FTP file transfers), so that other applications have a greater amount of bandwidth available to them.

Some applications, such as video, require a certain amount of bandwidth for them to work in a usable manner. With QoS policies, you can guarantee the bandwidth required for these applications.

For time-sensitive applications, which are sensitive to timeouts or other delays, you can help the applications by coloring their traffic with higher priorities than your regular traffic, or by placing the traffic in a priority queue. You can also define minimum bandwidth to help ensure the applications can deliver data in a timely fashion.

Real-time applications such as voice applications tolerate minimal variation in the amount of delay affecting delivery of their voice packets. Voice traffic is also intolerant of packet loss and jitter, both of which unacceptably degrade the quality of the voice transmission delivered to the recipient end user. You can use QoS policies to provide priority service to ensure reliable delivery of packets with low latency.
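As a rough illustration (not literal QPM output), the kind of IOS modular CLI configuration that delivers such priority service looks like the following sketch; the class name, match criterion, bandwidth value, and interface are hypothetical:

    class-map match-all Voice
     match ip precedence 5
    !
    policy-map WAN-Edge
     class Voice
      ! low-latency (strict priority) queue, limited to 128 kbps
      priority 128
     class class-default
      fair-queue
    !
    interface Serial0/0
     service-policy output WAN-Edge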

As you deploy QoS, identify the applications used on your network that are bandwidth or time sensitive, and also identify the applications that take more than their fair share of bandwidth. With this information, you can develop effective policies to improve the overall functioning of your network.

Which Interfaces Benefit from QoS

Any interface that is congested, or on which congestion avoidance is required, can benefit from QoS policies. LAN-WAN links are typical points of congestion, as data is moved between lines that have differing carrying capacity. These links might be the best place to start deploying QoS policies. However, the congestion points for your network might be anywhere. Evaluate the interfaces where packets are most likely to be dropped during peak traffic periods.

Where to Deploy QoS in the Network

Deploy QoS to manage traffic congestion, and ensure the quality of real-time traffic:

What Types of Quality of Service Does QPM Handle?

QoS Policy Manager's interface makes it easier for you to create Quality of Service policies, so that you do not have to manually connect to each of your devices and use device commands to configure the policies.

QPM detects the QoS capabilities that are available on each of your devices, as defined by the device model, interface type, and the software version running on the device. With QPM, you cannot select an unsupported QoS capability for a given device or interface. You can choose different QoS techniques for different interfaces, as appropriate, to implement your overall networking policies.

QPM policies let you define the following:

The following topics cover the general way that devices and interfaces apply policies to traffic, and the types of QoS capabilities you can implement with QPM:

Understanding Policy Implementation Sequence on an Interface

Understanding the sequence in which policies are implemented by an interface can help you define meaningful policies that implement your traffic management requirements. Figure 1-3 shows the sequence a packet follows when it reaches an interface.


Figure 1-3: Sequence Used to Implement Interface Policies on a Packet


When a packet reaches an interface, the interface acts upon the packet in the following sequence:

With some IOS software versions and device models, you can define a policy so that subsequent policies are considered after a match is found. In these cases, you can color a packet in one policy at the input interface, and apply a limiting policy to the same packet, perhaps by keying on the packet's color. Refer to Table 1-3 and Table 1-4 to see which combinations support committed access rate (CAR) limiting or CAR classification. Normally, you should apply a coloring policy before applying a limiting policy. However, in some CAR cases a limiting policy can be applied at the input interface before a coloring policy is applied.
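For example, on an interface that supports CAR, a coloring policy followed by a limiting policy keyed on the new color could deploy as rate-limit statements similar to this sketch; the access lists, precedence value, rates, and interface are illustrative assumptions, not commands QPM necessarily generates:

    interface Serial0/0
     ! color web traffic with precedence 3 and continue to the next statement
     rate-limit input access-group 101 8000000 8000 8000 conform-action set-prec-continue 3 exceed-action set-prec-continue 3
     ! limit traffic already colored with precedence 3
     rate-limit input access-group rate-limit 25 512000 8000 8000 conform-action transmit exceed-action drop
    !
    access-list 101 permit tcp any any eq www
    ! rate-limit access list 25 matches packets with IP precedence 3
    access-list rate-limit 25 3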

Traffic Coloring Techniques

Some interface or device QoS properties recognize a packet's relative importance by examining the IP precedence or DiffServ Code Point (DSCP) value in the packet's header. Changing the IP precedence or DSCP value changes the packet's color or classification. Because the IP precedence or DSCP value is embedded in the packet, changing it can affect the way the packet is handled on its entire path through the network.

This topic provides the following information:

Coloring on Routers

Coloring on Catalyst 5000 Family Switches

Coloring on Catalyst 6000 Family Switches

Interface QoS Property Requirements for Colored Traffic

You can define traffic coloring policies on any type of interface.

WFQ, WRED, WRR, 1P2Q2T, and 2Q2T automatically consider the packet's color when queuing the packet. To have the packet's color affect queuing on interfaces using other queuing properties, you must define policies on the interface that specifically look at the precedence value (for example, custom or priority queuing policies, or shaping or limiting policies).

Coloring on Routers

On routers, you create coloring policies on interfaces. QPM uses policy-based routing (PBR), committed access rate (CAR), or modular CLI classification to implement the policies. If the router and IOS software version support CAR, QPM uses CAR; otherwise, QPM uses PBR. QPM uses modular CLI if you choose to create a class-based policy on a router with an IOS software version that supports modular CLI.
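As an illustration of the PBR case, a coloring policy for FTP traffic might translate into commands along these lines; the access list number, route-map name, precedence value, and interface are hypothetical:

    access-list 102 permit tcp any any eq ftp
    !
    route-map Color-FTP permit 10
     match ip address 102
     set ip precedence 3
    !
    interface Ethernet0/0
     ip policy route-map Color-FTP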

Because IOS software applies coloring policies on the inbound interface before queuing a packet, the coloring policy you define can affect how that packet is queued on the interface. Interfaces that use WFQ, WRED and WRR queuing techniques automatically recognize and use the IP precedence value. PQ, CQ and CBWFQ interfaces do not automatically consider the IP precedence of a packet. Therefore, to have coloring affect how the packet gets prioritized on an interface using these queuing techniques, you must create an additional policy on the outbound interface that recognizes the traffic and places it in the appropriate queue (in addition to creating a coloring policy on the inbound interface). Likewise, if you want to shape or limit traffic based on IP precedence, you must create traffic shaping or limiting policies on the outbound interface that specifically look for the defined precedence value.

If the interface supports CAR, you can use advanced coloring features. Your coloring policy can apply different precedence values based on whether the traffic flow is conforming to or exceeding a specific rate. You can also specify that additional policies be examined on the interface (usually, if a packet matches a policy, the policy is applied and no other policies on the interface are examined). Thus, in one policy you can color the traffic, and in the next policy, you can use the packet's color to limit the traffic to a specific rate, or place it in a custom or priority queue. CAR also allows you to color traffic whether it is entering or leaving the interface (or both), whereas PBR only lets you color traffic that is entering the interface.

QPM presents you with these advanced features only if the interface supports them.

If the software version supports modular CLI, you can define a class-based multiple-action policy that can contain a coloring action, a limiting action, and a queuing action. The limiting policy definition can also apply coloring, based on whether the traffic conforms to the rate limit or exceeds it.


Note   If you define both coloring and queuing actions on an outbound interface, the queuing action cannot use the coloring defined in the same policy on that interface. It will use the coloring defined on the inbound interface.
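On software that supports the modular CLI, a class-based policy combining a coloring action with a limiting action might be rendered roughly as follows; the class name, access list, rates, and markings are illustrative assumptions rather than actual QPM output:

    access-list 103 permit tcp any any eq ftp
    !
    class-map match-all Bulk
     match access-group 103
    !
    policy-map Edge-In
     class Bulk
      ! coloring action
      set ip precedence 1
      ! limiting action: recolor traffic exceeding the rate down to precedence 0
      police 256000 8000 8000 conform-action transmit exceed-action set-prec-transmit 0
    !
    interface Serial0/1
     service-policy input Edge-In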

Coloring on Catalyst 5000 Family Switches

Coloring policies on Catalyst 5000 family switches apply to all interfaces on the device. The switch itself does not use the packet classification to alter how it queues packets—the precedence setting affects only the packet as it travels through other network devices.

Coloring on Catalyst 6000 Family Switches

You create coloring policies in order to change the classification of packets, giving some packets priority over others across the network. Ports that use 1P2Q2T or 2Q2T queuing, or other precedence-sensitive queuing techniques, use the packet classification to determine how they queue packets.

With Catalyst 6000 switches, you can create coloring policies on the switch's ports or VLANs. For each port, you can specify whether its QoS style is port-based or VLAN-based. Policies defined on a VLAN will be deployed to its ports only if the ports are defined with VLAN-based QoS style (see Working with Device Interfaces and VLANS).

If you want to create the same policy on all ports on a device, you can create a device group containing all the ports and define the coloring policy on the device group. On deployment, only one ACL is created for the device. This ACL is mapped to each port in the device group.

You can also color packets while limiting the traffic rate by creating a limiting (traffic policing) policy instead of a coloring policy.

Trust Boundaries

Coloring a packet or flow with a specific priority establishes a trust boundary that must be enforced. The concept of trust is integral to implementing QoS on Catalyst 6000 switches. Once end devices have a set class of service (CoS) or type of service (ToS), the switch port has the option of trusting them or not. If the port trusts the settings, it does not need to do any reclassification; if it does not trust the settings, then it must perform reclassification for appropriate QoS. QPM allows ports to be configured as trusted or untrusted, on both the individual port level and the device group level. On trusted ports, the received CoS/ToS values are used. On untrusted ports, the received CoS/ToS values are replaced with the port CoS/ToS value.

Catalyst 6000 switches (but not Catalyst 6000 switches with Supervisor IOS) provide the additional capability to extend the trust boundary. This is particularly useful, for example, in a VoIP network where a PC connects through an IP phone to a Catalyst 6000. You can ensure that voice packets retain their high precedence settings by extending the trust boundary to the IP phone and setting it to "untrusted" so that the precedence of all packets received from the PC is negated.
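On a CatOS-based Catalyst 6000, the port trust and trust-extension settings that QPM manages correspond to commands roughly like the following sketch; the module/port number is illustrative, and the exact syntax can vary by CatOS version:

    set qos enable
    set port qos 3/1 trust trust-cos
    set port qos 3/1 trust-ext untrusted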

Coloring by Trust

In a coloring policy, you have the option to specify a trust setting for specific traffic. This overrides the trust setting configured on the port, for the traffic that matches the policy's filter. This is useful, for example, if a port's trust value is Untrusted, but you are interested in trusting the precedence values of specific traffic from a reliable source.

Coloring on LocalDirector

On LocalDirector, you can create coloring policies on the device. QPM uses LocalDirector packet classification to implement the policies. You can color all traffic from a virtual server, or you can limit the coloring to specific ports or bind IDs, depending on how the virtual and real servers are defined on the LocalDirector. LocalDirector itself does not use the packet classification to alter how it queues packets—the precedence setting only affects the packet as it travels through other network devices.

Related Topics

Traffic Shaping or Traffic Limiting Techniques for Controlling Bandwidth

You can create traffic shaping or limiting (traffic policing) policies on a device's interface to define how much of the interface's bandwidth should be allocated to a specific traffic flow. You can set your policies based on a variety of traffic characteristics, including the type of traffic, its source, its destination, and its IP precedence settings (traffic coloring).

Shaping differs from limiting in that shaping attempts to smooth the traffic flow to meet your rate requirements, whereas limiting (traffic policing) does not smooth the traffic flow; it only prevents the flow from exceeding the rate.

Unlike queuing techniques, which are part of an interface's characteristics, generic traffic shaping or traffic limiting is done through policies that are defined in access control lists (ACLs), or in class-based policies (modular CLI), with the exception of Frame Relay traffic shaping (FRTS) which is defined as an interface characteristic. Queuing techniques affect traffic only when an interface is congested, or in the case of WRED, when traffic exceeds a certain threshold. With traffic shaping policies, flows are affected even during times of little congestion.

You can use these types of traffic shaping policies:

Generic Traffic Shaping (GTS)—Basic Traffic Rate Control

Generic traffic shaping lets you set a target average transmission rate for specific types of traffic. For example, you can create a policy that limits web traffic to 200 kilobits/second. GTS shapes the traffic flow so that the rate does not exceed this value. This puts a cap on the bandwidth available to that traffic, ensuring that the remainder of the interface's bandwidth is available to other kinds of traffic. In this example, if web traffic does not fill 200 kilobits/second, other kinds of traffic can use the unused bandwidth.

GTS uses a buffer to hold packets while it transmits the flow at the target rate. You can also define a burst size and an exceed burst size to further model the flow. These values define how much data GTS can send from the buffer per time interval. When the buffer is full, packets are dropped.

You can define GTS properties in class-based, multiple-action policies on devices with a software version that supports modular CLI. In these policies, GTS provides two types of shape commands: average and peak. When shape average is configured, the interface sends no more than the committed burst (Bc) in each interval. When shape peak is configured, the interface sends the committed burst (Bc) plus the excess burst (Be) bits in each interval.
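In a class-based policy, the two forms correspond to shape average and shape peak statements, as in this hypothetical sketch (class name, access list, rate, burst values, and interface are illustrative); the interface-level equivalent for basic GTS is the traffic-shape rate command:

    access-list 110 permit tcp any any eq www
    !
    class-map match-all Web
     match access-group 110
    !
    policy-map Shape-Web
     class Web
      ! send at most Bc per interval
      shape average 200000 8000 8000
      ! alternatively: shape peak 200000 8000 8000 (sends Bc + Be per interval)
    !
    interface Serial0/0
     service-policy output Shape-Web
     ! interface-level alternative without MQC:
     ! traffic-shape rate 200000 8000 8000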

GTS is useful for satisfying service level agreements, or for slowing traffic on a link where the destination interface is slower than the transmission interface (where you would define the shaping policies).

Interface QoS Property Requirements for Generic Traffic Shaping

You can define generic traffic shaping policies on any type of interface except VIP interfaces and those that use Frame Relay traffic shaping (FRTS). On VIP interfaces you can use Distributed Traffic Shaping (DTS). On Frame Relay interfaces, you cannot use GTS and FRTS simultaneously, nor can you mix GTS and FRTS on subinterfaces of a single interface.

On devices with a software version that supports modular CLI, you can configure GTS with CBWFQ.

Related Topics

Frame Relay Traffic Shaping (FRTS)—Controlling Traffic on Frame Relay Interfaces and Subinterfaces

Frame Relay traffic shaping lets you specify an average bandwidth size for Frame Relay virtual circuits (VC), defining an average rate commitment for the VC. FRTS is useful for satisfying service level agreements, or for slowing traffic on a link where the destination interface is slower than the transmission interface (where you would define the FRTS rate). Because Frame Relay is a WAN protocol, part of the Frame Relay network you use is provided by a carrier. You need to negotiate rates and other FRTS settings with the carrier to ensure you get the required WAN network performance.

How Does FRTS Work?

FRTS uses a buffer to hold packets while it transmits the flow at the specified target rate (CIR). You can also define a burst size and an exceed burst size to further model the flow. These values define how much data FRTS can send from the buffer per time interval. Once the buffer is full, packets are dropped. In addition, you can control whether the circuit responds to notifications from the network that the circuit is becoming congested (adaptive shaping).

When congestion occurs, the default minimum CIR (minCIR) is used, which is half of the CIR. QPM allows you to override this default by specifying a minimum rate for times of congestion in the FRTS properties.

QPM allows you to configure FRTS on an interface, subinterface or DLCI. Unlike GTS, you enable FRTS in the interface/subinterface properties (you do not create FRTS policies on the interface/subinterface). Thus, your FRTS settings apply to all traffic on the interface/subinterface. You cannot selectively apply different rates to different types of traffic.
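The FRTS settings that QPM applies correspond to a Frame Relay map class attached to the interface, subinterface, or DLCI, roughly as in the following sketch; the map-class name, rates, DLCI number, and interface are illustrative:

    interface Serial0/0
     encapsulation frame-relay
     frame-relay traffic-shaping
    !
    map-class frame-relay Shape-256k
     frame-relay cir 256000
     frame-relay bc 8000
     frame-relay be 8000
     ! minimum rate used during congestion (minCIR)
     frame-relay mincir 128000
     frame-relay adaptive-shaping becn
    !
    interface Serial0/0.1 point-to-point
     frame-relay interface-dlci 100
      class Shape-256k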


Note   You cannot define GTS on an FRTS interface.

QPM applies your FRTS settings to all VCs defined on an interface or subinterface. You cannot treat multiple VCs on a single interface or subinterface differently. However, you can have different rate settings for an interface and its subinterfaces. To use FRTS on a subinterface, you must first enable it on the associated interface.

QPM also enables you to configure FRTS on one DLCI per point-to-point subinterface. You are given the option of configuring FRTS on the subinterface or on the DLCI. If you upload the configuration of a device with FRTS configured on multiple DLCIs, QPM uploads the first DLCI FRTS configuration only (see Uploading Device QoS Configurations).

Device Group Considerations for Frame Relay Traffic Shaping

Device groups allow you to treat selected interfaces or subinterfaces as a single unit, so that you can easily apply common policies or QoS settings to the group. FRTS has a special influence on how you can group Frame Relay interfaces.

FRTS is enabled or disabled for an interface and all of its subinterfaces. You cannot have FRTS enabled on one subinterface and not on another for the same interface. Thus, if you change the FRTS setting (enabled or disabled), the change is also made to any associated interface or subinterface.

If a subinterface is a member of a device group, you cannot change the FRTS setting on the associated interface. When you create a group for Frame Relay subinterfaces, you must specify whether the interfaces for the subinterfaces have FRTS enabled. This limits the subinterfaces you are allowed to add to the group.

Interface QoS Property Influences on Frame Relay Traffic Shaping

In the QPM user interface, you can select a scheduling mechanism to be used for a specific interface. This is the QoS property. You can use FRTS on Frame Relay interfaces using any type of QoS property (except "Do not change"), if the IOS version running on the device supports FRTS.

Consider the following:

Related Topics

Distributed Traffic Shaping (DTS)—Controlling Traffic on VIP Interfaces

Distributed traffic shaping (DTS) can be used with Class Based QoS on VIP interfaces on devices that support modular CLI. DTS supports all functionality provided by both GTS and FRTS.

DTS uses queues to buffer traffic surges that can congest a network. Data is buffered and then sent into the network at a regulated rate. This ensures that traffic will behave according to the configured descriptor, as defined by the Committed Information Rate (CIR), Committed Burst (Bc), and Excess Burst (Be). With the defined average bit rate and burst size that is acceptable on that shaped entity, you can derive a time interval value.

The Be allows more than the Bc to be sent during a time interval under certain conditions. Therefore, DTS provides two types of shape commands: average and peak:

In a link layer network such as Frame Relay, the network sends messages with the forward explicit congestion notification (FECN) or backward explicit congestion notification (BECN) bits set if there is congestion. With the DTS feature, the traffic shaping adaptive mode takes advantage of these signals and adjusts the traffic descriptors accordingly, approximating the rate to the bandwidth available along the path.
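In a class-based policy, this adaptive behavior corresponds to adding a shape adaptive statement under the shaped class, for example (the rates shown are illustrative):

    policy-map Shape-VIP
     class class-default
      shape average 512000
      ! fall back toward this minimum rate when BECNs are received
      shape adaptive 256000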

Related Topics

Limiting on Routers—Limiting Bandwidth and Optionally Coloring Traffic

Committed access rate (CAR) limiting lets you set a bandwidth limit for specific types of traffic on router interfaces. For example, you can create a policy that limits web traffic to 200 kilobits/sec. This puts a cap on the bandwidth available to that traffic, ensuring that the remainder of the interface's bandwidth is available to other kinds of traffic. In this example, if web traffic does not fill 200 kilobits/sec, other kinds of traffic can use the unused bandwidth.

Packets are dropped if traffic bursts exceed the limit. CAR limiting does not attempt to smooth or shape the traffic flow in the way that GTS or FRTS do. Because CAR does not buffer the traffic, there is no delay in sending it, unless the traffic flow exceeds your rate policy and it is dropped.

In addition to limiting the traffic rate, CAR lets you color the traffic that conforms to the rate. CAR limiting is related to CAR classification (described in Traffic Coloring Techniques). The difference between CAR classification and CAR limiting is that CAR classification allows traffic that exceeds the rate limit to be transmitted and optionally colored.

One of the main uses for limiting policies is to ensure that traffic coming into your network is not exceeding agreed-upon rates. If you define a limiting policy for inbound traffic, you can throttle misbehaving traffic before it gets into your network. Because you control the traffic's rate at the inbound interface, the traffic should be well-behaved while it is in your network.

For custom queuing or CBWFQ interfaces, you can create a limiting policy to form an upper limit for the bandwidth available to the selected traffic, and have the interface also apply a custom queuing policy to form a lower bandwidth limit for the traffic. To do this, you must ensure that the interface considers other policies as well as the limiting policy and that the limiting policy comes before the associated custom queuing or CBWFQ policy in the list of policies on the interface. QPM enables you to do this by providing a Continue check box in the limiting policy definition page.

You can also use the Continue attribute to apply more than one rate limiting policy. For example, you can have a general policy that applies a rate limit to all TCP traffic, and a subsequent policy that applies a different rate limit to web traffic.

On devices with a software version that supports modular CLI, you can create multiple-action policies including limiting.

Related Topics

Limiting on Switches—Policing Traffic by Limiting Bandwidth

Traffic limiting lets you set a bandwidth limit for specific types of traffic on Catalyst switch ports or VLANs. You can also define a burst size and an exceed burst size to further model the flow. These values define how much data can be sent from the buffer per time interval. For example, you can create a policy that limits aggregate web traffic on an interface to an average rate of 1024 kilobits/second, with a maximum burst of 2048 kilobits. This puts a cap on the bandwidth available to that traffic, ensuring that the remainder of the interface's bandwidth is available to other kinds of traffic. In this example, if web traffic does not fill 1024 kilobits/second with maximum bursts to 2048 kilobits, other kinds of traffic can use the unused bandwidth. If traffic bursts exceed the limits, packets are dropped or their precedence value is reduced (markdown).

Limiting on Catalyst 6000 Switches

QPM supports limiting on Catalyst 6000 switches running CatOS software, as well as Catalyst 6000 switches with Supervisor IOS software (identified as Cat6000(IOS) in QPM).

On both types of Catalyst 6000 switches, you can set bandwidth limits for the following types of flows:

You can select a limiting mechanism for traffic that conforms to the specified rate, either IP Precedence or DSCP.

You have two options for handling traffic that exceeds the specified limits:

Related Topics

Queuing Techniques for Congestion Management for Outbound Traffic

You can set a queuing technique on a device's interface to manage how packets are queued to be sent through the interface. The technique you choose determines whether the traffic coloring characteristics of the packet are used or ignored.

These queuing techniques are primarily used for managing traffic congestion on an interface, that is, they determine the priority in which to send packets when there is more data than can be sent immediately:

First In, First Out (FIFO) Queuing—Basic Store and Forward

FIFO queuing is the basic queuing technique. In FIFO queuing, packets are queued on a first come, first served basis: if packet A arrives at the interface before packet B, packet A leaves the interface before packet B. This is true even if packet B has a higher IP precedence than packet A since FIFO queuing ignores packet characteristics.

FIFO queuing works well on uncongested high-capacity interfaces that have minimal delay, or when you do not want to differentiate services for packets traveling through the device.

The disadvantage of FIFO queuing is that when a station starts a file transfer, it can consume all the bandwidth of a link to the detriment of interactive sessions. This phenomenon is referred to as a packet train because one source sends a "train" of packets to its destination and packets from other stations get caught behind the train.

Policy Requirements for FIFO Queuing Interfaces

There are no specific requirements for creating policies on FIFO interfaces. You do not have to define any policies on these interfaces.

However, you can create traffic shaping or traffic limiting policies on FIFO interfaces to limit the bandwidth available to selected traffic.

You can also color the traffic on a FIFO interface, but the packet's color does not affect the queuing on the interface. However, if the interface supports committed access rate (CAR) classification and limiting, you can create a coloring policy that simultaneously creates a rate limit and colors the traffic.

FIFO's Relationship to Traffic Coloring

FIFO queuing treats all packets the same, meaning that whichever packet gets to the interface first is the first to go through the interface. Traffic shaping and traffic limiting policy statements can affect the bandwidth available to a packet based on its color, but FIFO does not use the coloring value to alter the packet's queuing.

Related Topics

Priority Queuing (PQ)—Basic Traffic Prioritization

Priority queuing is a rigid traffic prioritization scheme: if packet A has a higher priority than packet B, packet A always goes through the interface before packet B.

When you define an interface's QoS property as priority queuing, four queues are automatically created on the interface: high, medium, normal, and low. Packets are placed in these queues based on priority queuing policies you define on the interface. When there is no policy for unclassified traffic, unclassified packets are placed in the normal queue.

The disadvantage of priority queuing is that higher queues are given absolute precedence over lower queues. For example, packets in the low queue are sent only when the high, medium, and normal queues are completely empty. If a higher-priority queue is always full, the lower-priority queues are never serviced; they fill up and packets are lost. Thus, one particular kind of network traffic can come to dominate a priority queuing interface.

An effective use of priority queuing would be for placing time-critical but low-bandwidth traffic in the high queue. This ensures that this traffic is transmitted immediately, but because of the low-bandwidth requirement, lower queues are unlikely to be starved.
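On a router, the four queues are fed by priority-list statements, which QPM generates from the policies you define. The following is a minimal hand-written sketch that places Telnet (time-critical, low-bandwidth) in the high queue and leaves everything else in the normal queue; the list number and interface are illustrative:

    priority-list 1 protocol ip high tcp 23
    priority-list 1 default normal
    !
    interface Serial0/0
     priority-group 1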

Policy Requirements for Priority Queuing Interfaces

In order for packets to be classified on a priority queuing interface, you must create policies on that interface. These policies need to filter traffic into one of the four priority queues. You can also create a class default policy to assign unfiltered traffic to a specific queue. When there is no class default policy for an interface, any traffic that is not filtered into a queue is placed in the normal queue.

You can also create traffic limiting policies to define an upper range on the bandwidth allocated to selected traffic. If you use limiting policies, you can specify that the interface consider other policies if the limiting policy matches the traffic. In this way, you can both limit the rate for the traffic and place it in a specific priority queue. If you do not specify Continue on the limiting policy, traffic that satisfies the limiting policy is placed in the normal priority queue.

If you use traffic shaping policies to specify a rate limit, the traffic to which the shaping policy applies is placed in the normal priority queue.

Priority Queuing's Relationship to Traffic Coloring

Priority queuing interfaces do not automatically consider the IP precedence settings of a packet. If you create traffic coloring policies on inbound interfaces (see Traffic Coloring Techniques), and you want the coloring to affect the priority queue, you must create a policy on the priority queuing outbound interface that recognizes the color value and places the packet in the desired queue.

If the interface supports committed access rate (CAR) classification, you can create a coloring policy and specify that the interface consider other policies if the coloring policy matches the traffic. In this way, you can color the traffic and place it in a specific priority queue based on the color value.

Related Topics

Custom Queuing (CQ)—Advanced Traffic Prioritization

Custom queuing is a flexible traffic prioritization scheme that allocates a minimum bandwidth to specified types of traffic. You can create up to 16 of these custom queues.

For custom queue interfaces, the device services the queues in a round-robin fashion, sending out packets from a queue until the queue's byte count is met, then moving on to the next queue. Unlike priority queuing, this ensures that no queue is starved.

The disadvantage of custom queuing is that, like priority queuing, you must create policy statements on the interface to classify traffic into the queues.

An effective use of custom queuing would be to guarantee bandwidth to a few critical applications to ensure reliable application performance.

Policy Requirements for Custom Queuing Interfaces

In order for packets to be classified on a custom queuing interface, you must create custom queuing policies on that interface. These policies need to specify a ratio, or percentage, of the bandwidth on the interface that should be allocated to the queue for the filtered traffic. A queue can be as small as 5%, or as large as 95%, in increments of 5%. The total bandwidth allocation for all policy statements defined on a custom queuing interface cannot exceed 95% (QPM ensures that you do not exceed 95%). Any bandwidth not allocated by a specific policy statement is available to the traffic that does not satisfy the filters in the policy statements.

QPM uses the ratio in these policies, along with the packet size specified when you define an interface as a custom queue, to determine the byte count of each queue.
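For example, with a 1500-byte packet size, a policy that allocates 40% of the interface bandwidth to web traffic (leaving 60% to the default queue) might translate into queue-list commands similar to the following sketch. The list and queue numbers, access list, and byte counts are illustrative; the byte counts (2 x 1500 and 3 x 1500 bytes) preserve the 40:60 ratio in whole packets:

    access-list 101 permit tcp any any eq www
    !
    queue-list 1 protocol ip 1 list 101
    queue-list 1 queue 1 byte-count 3000
    queue-list 1 default 2
    queue-list 1 queue 2 byte-count 4500
    !
    interface Serial0/0
     custom-queue-list 1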

The queues you define constitute a minimum bandwidth allocation for the specified flow. If more bandwidth is available on the interface due to a light load, a queue can use the extra bandwidth. This is handled dynamically by the device.

If you do not create custom queuing policies on the custom queuing interface, all traffic is placed in a single queue (the default queue), and is processed first-in, first-out, in the same manner as a FIFO queuing interface.

You can also create traffic limiting policies to define an upper range on the bandwidth allocated to selected traffic. Thus, the custom queue defines a minimum bandwidth, and the limiting policy defines an upper limit. When defining the bandwidth upper limit, the limiting policy must appear before the custom queue policy, and it must filter the same traffic as the custom queue (or a subset of the same traffic). It must also specify that the interface continue looking at subsequent policies after applying the limiting policy. If you do not specify Continue, traffic that satisfies the limiting policy is placed in the default custom queue.

If you use traffic shaping policies to specify a rate limit, the traffic to which the shaping policy applies is placed in the default custom queue.

Custom Queuing's Relationship to Traffic Coloring

Custom queuing interfaces do not automatically consider the IP precedence settings of a packet. If you create traffic coloring policies on inbound interfaces (see Traffic Coloring Techniques), and you want the coloring to affect the custom queue, you must create a policy on the custom queuing outbound interface that recognizes the color value and places the packet in the desired queue.

If the interface supports committed access rate (CAR) classification, you can create a coloring policy and specify that the interface consider other policies if the coloring policy matches the traffic. In this way, you can color the traffic and place it in a specific custom queue based on the color value.

Related Topics

Weighted Fair Queuing (WFQ)—Intelligent Traffic Prioritization

Weighted fair queuing acknowledges and uses a packet's priority without starving low-priority packets for bandwidth. Weighted fair queuing divides packets into two classes: interactive traffic is placed at the front of the queue to reduce response time; non-interactive traffic shares the remaining bandwidth proportionately.

Because interactive traffic is typically low-bandwidth, its higher priority does not starve the remaining traffic. A complex algorithm is used to determine the amount of bandwidth assigned to each traffic flow. IP precedence is considered when making this determination.

Weighted fair queuing is very efficient and requires little configuration.
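On a router interface, enabling WFQ is typically a single command; a minimal sketch, where the interface and the optional congestive discard threshold (64 messages) are illustrative:

    interface Serial0/0
     fair-queue 64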

With some versions of IOS software, when you select WFQ on Frame Relay interfaces where you enable FRTS, further WFQ configuration settings are available. These settings are used for Voice over Frame Relay.

For interfaces on VIP cards, you can use fair queuing rather than weighted fair queuing. In fair queuing, all queues are treated with the same weight. This feature is called distributed weighted fair queuing (DWFQ).

Policy Requirements for Weighted Fair Queuing Interfaces

Weighted fair queuing interfaces automatically create queues for each traffic flow. No specific policies are needed.

However, you can also create traffic shaping or limiting policies to affect how selected traffic is handled on the interface. A shaping policy or a limiting policy controls the bandwidth available to the selected traffic.

Weighted Fair Queuing's Relationship to Traffic Coloring

Weighted fair queuing is sensitive to the IP precedence settings in the packets. WFQ automatically prioritizes the packets without the need for you to create policies on the WFQ interfaces. However, if you do create a coloring policy on the WFQ interface, it affects how the selected traffic is queued.

WFQ can improve network performance without traffic coloring policies. However, because WFQ automatically uses the IP precedence settings in packets, consider coloring all traffic that enters the device (or color the traffic at the point where it enters your network). By coloring all traffic, you can ensure that packets receive the service level you intend. Otherwise, the originator of the traffic, or another network device along the traffic's path, determines the service level for the traffic.

Related Topics

Class-Based Weighted Fair Queuing (CBWFQ)—Customizable WFQ

Class-based weighted fair queuing (CBWFQ) combines the best characteristics of weighted fair queuing and custom queuing.

CBWFQ uses WFQ processing to give higher weight to high priority traffic, but derives that weight from classes that you create on the interface. These classes are similar to custom queues—they are policy-based, identify traffic based on the traffic's characteristics (protocol, source, destination, and so forth), and allocate a percentage of the interface's bandwidth to the traffic flow.

With CBWFQ, you can create up to 64 classes on an interface. (Unlike WFQ, queues are not automatically based on IP precedence or DSCP value.) CBWFQ also lets you control the drop mechanism used when congestion occurs on the interface. You can use WRED for the drop mechanism, and configure the WRED queues, to ensure that high-priority packets within a class are given the appropriate weight. If you use tail drop, all packets within a class are treated equally, even if the IP precedence is not equal.

The disadvantage of CBWFQ is that, like custom queuing, you must create policy statements on the interface to place the traffic in the classes.

An effective use of CBWFQ would be to guarantee bandwidth to a few critical applications to ensure reliable application performance.

If CBWFQ is available on an interface, Cisco recommends that you use CBWFQ instead of custom queuing.


Note   In an ATM network, QPM enables you to configure CBWFQ on a VC on the subinterface level. You are given the option of configuring CBWFQ on the subinterface or on the VC (see New Interface and Properties of Interface Dialog Boxes).

CBWFQ with Modular CLI

On routers with software versions that support modular CLI, you can create multiple-action, class-based policies. You should choose Class Based QoS for the QoS property on interfaces on which you want to create multiple-action policies. The queuing method for these interfaces is CBWFQ, and it can be used with additional QoS capabilities to enable efficient management of voice and other real-time traffic. Some of these features are defined as interface properties, others are defined as properties of a policy.

Many of these features enable efficient management of voice traffic.

Policy Requirements for CBWFQ Interfaces

In order for packets to be placed in a CBWFQ class on an interface, you must create CBWFQ policies on that interface. These policies need to specify a minimum percentage of the maximum allocatable bandwidth on the interface that should be allocated to the class queue for the filtered traffic.

The maximum allocatable bandwidth is variable and can be set on the device using the max-reserved-bandwidth command. Unless you change the maximum allocatable bandwidth on the interface, for interfaces on a non-VIP card, a queue can be as small as 1%, or as large as 75%, in increments of 1%. The total bandwidth allocation for all policy statements defined on a CBWFQ interface cannot exceed 75%. For interfaces on a VIP card, the upper limit is 99%. The maximum bandwidth limit includes the IP RTP Priority queue if you create one. Because you can change the maximum allocatable bandwidth, QPM does not check to ensure that you do not exceed the bandwidth limits.

The queues you define constitute a minimum bandwidth allocation for the specified flow. If more bandwidth is available on the interface due to a light load, a queue can use the extra bandwidth. This is handled dynamically by the device.

Unclassified packets that do not match any filters defined for class-based policies are processed according to the settings in the default class. The default behavior for unclassified traffic is weighted fair queuing.
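Under the modular CLI, a CBWFQ policy corresponds to class-map and policy-map commands. The following hand-written sketch reserves 20% of the allocatable bandwidth for one class and leaves unclassified traffic to weighted fair queuing in the default class; the class and policy names, access list, interface, and percentages are illustrative:

    class-map match-all CRITICAL-APP
     match access-group 110
    !
    policy-map WAN-EDGE
     class CRITICAL-APP
      bandwidth percent 20
     class class-default
      fair-queue
    !
    interface Serial0/0
     ! optionally raise the allocatable bandwidth ceiling above the default 75%
     max-reserved-bandwidth 80
     service-policy output WAN-EDGE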

You can also create traffic limiting policies to define an upper range on the bandwidth allocated to selected traffic. Thus, the class queue defines a minimum bandwidth, and the limiting policy defines an upper limit. When defining the bandwidth upper limit, the limiting policy must appear before the CBWFQ policy, and it must filter the same traffic as the class queue (or a subset of the same traffic). It must also specify that the interface continue looking at subsequent policies after applying the limiting policy. If you do not specify Continue on the limiting policy, traffic that satisfies the limiting policy is placed in the default class queue.

If you use traffic shaping policies to specify a rate limit, the traffic to which the shaping policy applies is placed in the default class queue.

Using Network Based Application Recognition (NBAR) with CBWFQ

NBAR is a classification engine that recognizes a wide variety of applications, including web-based and other difficult-to-classify protocols that utilize dynamic TCP/UDP port assignments. When an application is recognized and classified by NBAR, a network can invoke services for that specific application.

On devices with an IOS software version that supports modular CLI and NBAR, you can use NBAR to refine your CBWFQ policies. With NBAR, you can identify traffic based on application. For example, you can filter all RealAudio traffic.
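For example, filtering RealAudio traffic with NBAR replaces the access list with a protocol match in the class map; in this sketch the class and policy names and the bandwidth percentage are illustrative:

    class-map match-all REALAUDIO
     match protocol realaudio
    !
    policy-map WAN-EDGE
     class REALAUDIO
      bandwidth percent 10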


Note   QPM also supports distributed NBAR (dNBAR) on versatile interface processor (VIP) cards on the Cisco 7500 series of routers and on the Catalyst 6000 FlexWAN module.

CBWFQ's Relationship to Traffic Coloring

CBWFQ interfaces do not automatically consider the IP precedence settings of a packet. If you create traffic coloring policies on inbound interfaces (see Traffic Coloring Techniques), and you want the coloring to affect the class queue, you must create a policy on the CBWFQ outbound interface that recognizes the color value and places the packet in the desired queue.

If the interface supports committed access rate (CAR) classification, you can create a coloring policy and specify that the interface consider other policies if the coloring policy matches the traffic. In this way, you can color the traffic and place it in a specific class queue based on the color value.

If you use WRED as the drop mechanism for a class, WRED automatically considers the packet's color when determining which packet to drop. Tail drop does not consider a packet's color.

If you use WFQ on the default class policy, WFQ automatically considers the packet's color when queuing, dropping, and sending packets in the default queues.

Related Topics

Weighted Round Robin (WRR)—Managing Layer 3 Switch Congestion

Weighted round-robin (WRR) scheduling is used on layer 3 switches on egress ports to manage the queuing and sending of packets. WRR queuing is handled differently on the Catalyst 8500 family than on other layer 3 switches.

WRR derives a delay priority from a packet's IP precedence and places the packet in one of four queues accordingly. Table 1-7 shows the queue assignments based on the IP precedence value and derived delay priority of the packet, and the weight of each queue if you do not change it.


Table 1-7: WRR Queue Packet Assignments
IP Precedence   Delay Priority   Queue Assignment   Default Queue Weight   Default Queue Weight
                                                    (Catalyst 8500)        (Other Layer 3 Switches)
0, 1            0                0                  1                      1
2, 3            1                1                  2                      2
4, 5            2                2 (1)              4                      3
6, 7            3                3                  8                      4

(1) Queue 2 is the queue typically used for voice traffic.

The Catalyst 8500 devices automatically use WRR on egress ports. Unlike other queuing properties, you do not configure WRR through the device's interface properties (QPM does not list switch router interfaces). Instead, you configure WRR through policies defined on the device level.

On other layer 3 switches, you configure WRR through policies defined on the interface level for destination ports.

With WRR, each queue is given a weight. This weight is used when congestion occurs on the port to give weighted priority to high-priority traffic without starving low priority traffic. The weights provide the queues with an implied bandwidth for the traffic on the queue. The higher the weight, the greater the implied bandwidth. The queues are not assigned specific bandwidth, however, and when the port is not congested, all queues are treated equally.

Policy Requirements for Weighted Round-Robin Devices

Devices that use WRR automatically create the four queues with default weights for each interface. You need only define policies if you want to change the queue weights for an interface. For the Catalyst 8500, these policies are defined at the device level, and QPM does not display the device interfaces. For other layer 3 switches, policies are defined on the interface level for the destination ports.

Weighted Round-Robin's Relationship to Traffic Coloring

WRR is sensitive to the IP precedence settings in the packets. WRR automatically places the packets in queues based on precedence. Although you cannot change the color of a packet on a layer 3 switch, if you change the packet's color on another device before it reaches the layer 3 switch, that change affects the WRR queuing.

Related Topics

2 Queues, 2 Thresholds (2Q2T)—Managing Congestion on Switch Ports

2Q2T queuing on Catalyst 6000 family switches uses a packet's precedence setting to determine how that packet is serviced on the port.

2Q2T queuing uses two queues, one high priority, the other low priority, with two thresholds for each queue, to determine the bandwidth allowed for traffic based on each IP precedence value. 2Q2T assigns each precedence to a specific queue and threshold on that queue.

For example, packets with 0 for precedence (the lowest priority) are placed in the low priority queue and use the lower threshold by default. This ensures that the least important traffic gets less service than any other traffic.

These queues and thresholds are serviced using weighted round robin (WRR) techniques to ensure a fair chance of transmission to each class of traffic. 2Q2T favors high-priority traffic without starving low-priority traffic.

2Q2T queuing comes with a default configuration for the queues, thresholds, and traffic assignments based on IP precedence settings. You only need to change this configuration if it does not suit your requirements.

If you decide to change the 2Q2T configuration, you can change the size of the queues, their relative WRR weights, the sizes of their thresholds, and the assignment of precedence values to the appropriate queue and threshold.


Note   For Catalyst 6000 switches, queuing configuration is done on the device level and applies to all the device's interfaces. For Catalyst 6000 switches with Supervisor IOS, queuing configuration is done on the interface level, for all interfaces belonging to the same ASIC group. Any changes you make to the queuing configuration on an interface will be applied to all the other interfaces on the ASIC as well.

Policy Requirements for 2Q2T Queuing Interfaces

2Q2T queuing ports are not configured with policies. You can change the 2Q2T configuration through the switch's device properties (or through the interface properties for the Catalyst 6000 device with Supervisor IOS). See Viewing or Changing Device Properties for information on changing device properties.

However, you can create traffic limiting policies (called traffic policing on switches) to affect how selected traffic is handled on the interface. A limiting policy controls the bandwidth available to the selected traffic.

2Q2T Queuing's Relationship to Traffic Coloring

2Q2T queuing is sensitive to the IP precedence settings in the packets. The queues and thresholds selected for the traffic are based on the precedence value.

If you use coloring policies on the interface to change a packet's precedence, that change affects the queue and threshold assignment for the packet.

Related Topics

1 Priority Queue, and 2 Queues 2 Thresholds (1P2Q2T)—Managing Voice Traffic on Switches

Like 2Q2T, 1P2Q2T queuing on Catalyst 6000 family switches uses a packet's precedence setting to determine how that packet is serviced on the port.

1P2Q2T queuing uses three queues: one strict-priority queue, a high-priority standard queue, and a low-priority standard queue. Each standard queue has two drop thresholds.

1P2Q2T assigns each precedence to a specific queue and threshold on that queue.

You can color voice traffic so that it will be assigned to the strict priority queue. On 1P2Q2T interfaces, the switch services traffic in the strict-priority queue before servicing the standard queues. When the switch is servicing a standard queue, after transmitting a packet, it checks for traffic in the strict-priority queue. If the switch detects traffic in the strict-priority queue, it suspends its service of the standard queue and completes service of all traffic in the strict-priority queue before returning to the standard queue.

On 1P2Q2T interfaces, the default QoS configuration allocates 90 percent of the total transmit queue size to the low-priority standard queue, 5 percent to the high-priority standard queue, and 5 percent to the strict-priority queue.

For 1P2Q2T interfaces, the default QoS configuration assigns all traffic with IP Precedence 5 to the strict priority queue, traffic with IP Precedence 4, 6, and 7 to the high-priority standard queue, and traffic with IP Precedence 0, 1, 2, and 3 to the low-priority standard queue.


Note   For Catalyst 6000 switches, queuing configuration is done on the device level and applies to all the device's interfaces. For Catalyst 6000 switches with Supervisor IOS, queuing configuration is done on the interface level, for all interfaces belonging to the same ASIC group. Any changes you make to the queuing configuration on an interface will be applied to all the other interfaces on the ASIC as well.

Policy Requirements for 1P2Q2T Queuing Interfaces

1P2Q2T queuing interfaces are not configured using policies, but through the switch's device properties (or through the interface properties for the Catalyst 6000 device with Supervisor IOS). See Viewing or Changing Device Properties, for information on changing device properties.

However, you can create traffic limiting policies (called traffic policing on switches) to affect how selected traffic is handled on the interface. A limiting policy controls the bandwidth available to the selected traffic.

1P2Q2T Queuing's Relationship to Traffic Coloring

1P2Q2T queuing is sensitive to the IP precedence settings in the packets. The queues and thresholds selected for the traffic are based on the precedence value.

If you use coloring policies on the interface to change a packet's precedence, that change affects the queue and threshold assignment for the packet.

Related Topics

Queuing Techniques for Congestion Avoidance on Outbound Traffic

You can set a queuing technique on a device's interface to manage how packets are handled when the interface starts to be congested. The queuing technique available for congestion avoidance is weighted random early detection (WRED).

With WRED, when traffic begins to exceed the interface's traffic thresholds, but before congestion occurs, the interface starts dropping packets from selected flows. If the dropped packets are TCP, the TCP source recognizes that packets are getting dropped, and lowers its transmission rate. The lowered transmission rate then reduces the traffic to the interface, thus avoiding congestion. Because TCP retransmits dropped packets, no actual data loss occurs.

To determine which packets to drop, WRED takes these things into account: the packet's IP precedence (lower-precedence packets are more likely to be dropped) and the amount of bandwidth used by the packet's traffic flow (packets from higher-bandwidth flows are more likely to be dropped).

WRED chooses the packets to drop after considering these factors in combination, the net result being that the highest priority and lowest bandwidth traffic is preserved.

WRED differs from standard random early detection (RED) in that RED ignores IP precedence and drops packets from all traffic flows, rather than selecting lower-precedence or higher-bandwidth flows.

By selectively dropping packets before congestion occurs, WRED prevents an interface from becoming flooded, which would otherwise force a large number of packets to be dropped at once. This improves the overall bandwidth usage of the interface.

If you are using IOS software version 12.0 on a device with a versatile interface processor (VIP), when you configure an interface to use WRED, it automatically uses distributed WRED. Distributed WRED takes advantage of the VIP.

The disadvantage of WRED is that only predominantly TCP/IP networks can benefit. Other protocols, such as UDP or NetWare (IPX), do not respond to dropped packets by lowering their transmission rates. Instead, they retransmit the packets at the same rate. WRED treats all non-TCP/IP packets as having precedence 0. If you have a mixed network, WRED might not be the best choice for queuing traffic.

An effective use of weighted random early detection would be to avoid congestion on a predominantly TCP/IP network, one that has minimal UDP traffic and no significant traffic from other networking protocols. It is especially effective on core devices rather than edge devices, because the traffic coloring you perform on edge devices can then affect the WRED interfaces throughout the network.
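On a router interface, WRED is enabled with the random-detect command, and the drop thresholds can optionally be tuned per IP precedence; in this sketch the interface and threshold values are illustrative:

    interface Serial0/0
     random-detect
     ! precedence 0: begin random drops at a mean queue depth of 20 packets, drop all above 40
     random-detect precedence 0 20 40 10
     ! precedence 5: begin dropping later, preserving higher-priority traffic
     random-detect precedence 5 34 40 10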

Policy Requirements for Weighted Random Early Detection Interfaces

Weighted random early detection interfaces automatically favor high priority, low bandwidth traffic flows. No specific policies are needed.

However, you can also create traffic shaping policies or traffic limiting policies to affect how selected traffic is handled on the interface. A shaping policy or a limiting policy can control the bandwidth available to the selected traffic.

You can also create CBWFQ policies that use WRED as the drop mechanism for the class-based queues.

Weighted Random Early Detection's Relationship to Traffic Coloring

WRED is sensitive to the IP precedence settings in the packets. Therefore, you can create policies on inbound interfaces on the device and have those policies implemented on the outbound interfaces that use WRED. WRED automatically prioritizes the packets without the need for you to create policies on the WRED queuing interfaces, dropping packets with low priority before dropping high-priority packets.

You do not have to create coloring policies on the inbound interfaces; if packets have the same IP precedence, WRED drops packets from the highest-bandwidth flows first. However, because WRED automatically uses the IP precedence settings in packets, consider coloring all traffic that enters the device (or coloring the traffic at the point where it enters your network). By coloring all traffic, you can ensure that packets receive the service level you intend. Otherwise, the originator of the traffic, or another network device along the traffic's path, determines the service level for the traffic.

If you create a coloring policy on the WRED interface, it also affects how the selected traffic is queued.

Related Topics

Management of Voice and Other Real-Time Traffic

Real-time applications, such as voice applications, have different characteristics and requirements from those of other data applications. Voice applications tolerate only minimal variation in the delay affecting delivery of their packets. Voice traffic is also intolerant of packet loss and jitter, both of which degrade the quality of the voice transmission delivered to the recipient end user. To transport voice traffic effectively over IP, mechanisms are required that ensure reliable delivery of packets with low latency.

Specific QoS features can be used to manage voice traffic, such as Low Latency Queuing (LLQ), Frame Relay Fragmentation (FRF), Link Fragmentation and Interleaving (LFI), and so on. Some of these features are defined on the interface level and some are included in the policy definition.

To simplify the process of defining end-to-end QoS for voice traffic, QPM provides you with a voice database containing pre-defined IP telephony templates. These templates contain the QoS configurations and policies required at each relevant point in the network. All you need to do is add your devices to the database, assign their interfaces to the relevant templates, and deploy the database. For detailed information about QPM IP telephony templates, refer to "Configuring QoS for IP Telephony".

The following features are generally used to ensure QoS for Voice over IP:

On Catalyst switches, the following feature is available for management of voice traffic:

The devices and software versions that support these features are shown in Supported Devices and QoS Techniques for IOS Software Releases.

Low Latency Queuing (LLQ)—Strict Priority Queuing

Low latency queuing (LLQ) is used with CBWFQ to bring strict priority queuing to CBWFQ. Strict priority queuing allows delay-sensitive data such as voice to be dequeued and sent first (before packets in other queues are dequeued), giving delay-sensitive data preferential treatment over other traffic. LLQ is not limited to UDP port numbers, as is IP RTP priority.

Using LLQ reduces delay and jitter in voice conversations. LLQ is enabled when you configure the priority status within the CBWFQ queuing properties. When several types of traffic on an interface are configured as priority classes, all these types of traffic are queued in the same, single, strict priority queue.
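LLQ is configured by marking a class as a priority class within the CBWFQ policy; a minimal sketch that gives voice traffic (matched here by IP precedence 5) a 128 kbps strict-priority queue, where the names, match criterion, and bandwidth are illustrative:

    class-map match-all VOICE
     match ip precedence 5
    !
    policy-map WAN-EDGE
     class VOICE
      ! strict-priority (LLQ) class limited to 128 kbps
      priority 128
     class class-default
      fair-queue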

Related Topics

IP RTP Priority—Providing Absolute Priority to Voice Traffic

IP RTP Priority creates a strict-priority queue for real-time transport protocol (RTP) traffic. The IP RTP Priority queue is emptied before other queues are serviced. This is typically used to provide absolute priority to voice traffic, which uses RTP ports. Because voice traffic is delay-sensitive and low bandwidth, you can typically give it absolute priority without starving other data traffic. This ensures that voice quality is adequate.

IP RTP Priority is especially useful on slow-speed WAN links, including Frame Relay, Multilink PPP (MLP), and T1 ATM links. It works with WFQ and CBWFQ.

In QPM, you generally enable IP RTP Priority in the interface or device group properties. You select the range of RTP port traffic to place in the queue, and the percentage of the interface's bandwidth to allocate to the queue. Any allocated bandwidth that is not used is available to other queues on the interface. When creating multiple-action policies on interfaces that support modular CLI, you define the range of RTP port traffic in the filter, and then assign the traffic to the priority queue in the queuing policy.

Do not set the bandwidth too low. Any traffic for the queue that exceeds the bandwidth is dropped. Although voice traffic typically uses 24 kbps, there is occasional overhead requiring 25 kbps service. If you select a bandwidth percentage that equates to 24 kbps, the interface is likely to drop voice packets occasionally, which will give you poor voice quality.

Also, IP RTP Priority ignores compression, treating a compressed 12 kbps flow as a 24 kbps flow.
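The underlying interface command takes the starting RTP port, the size of the port range, and the bandwidth (in kbps) reserved for the strict-priority queue; in this sketch the interface, port range, and bandwidth (sized above the 25 kbps per-call figure mentioned above) are illustrative:

    interface Serial0/0
     ! RTP ports 16384-32767, 75 kbps reserved (roughly three calls at 25 kbps each)
     ip rtp priority 16384 16383 75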

Policy Requirements for IP RTP Priority Interfaces

IP RTP Priority is not defined as a policy action. The IP RTP Priority configuration defined on the interface or device group, or in the filter with modular CLI, determines which traffic is placed in the priority queue.

Interface QoS Property Requirements for IP RTP Priority

You can use IP RTP Priority on WFQ and CBWFQ interfaces. On CBWFQ interfaces, you can configure custom class-based queues for other types of traffic. The bandwidth allocated to the IP RTP Priority queue counts as part of the total allocated CBWFQ queue bandwidth. IP RTP priority cannot be configured on the interface when FRTS is enabled. IP RTP priority is not available on VIP cards.

Related Topics

Link Fragmentation and Interleaving (LFI)—Reducing Delay and Jitter on Lower Speed Links

Voice over IP is susceptible to increased latency and jitter when the network processes large packets, such as LAN-to-LAN FTP transfers traversing a WAN link. This susceptibility increases as the traffic is queued on slower links. LFI reduces delay and jitter on slower speed links by breaking up large data packets so that they are small enough to satisfy the delay requirements of real-time traffic. The low-delay traffic packets, such as voice packets, are interleaved with the fragmented packets. LFI also provides a special transmit queue for the smaller, delay-sensitive packets, enabling them to be sent earlier than other flows.

LFI was designed especially for lower-speed links in which serialization delay is significant.

Interface QoS Property Requirements for LFI

You can configure LFI on PPP interfaces when Multilink Point-to-Point Protocol (MLP) is configured on the interface. You can use LFI on virtual templates, dialer interfaces, multilink, and ISDN BRI or PRI interfaces. QPM cannot detect or implement MLP and will assume that the multilink PPP command is enabled on the interface. QPM will configure only the interleave and fragmentation commands. When LFI is defined on an interface group, it is only deployed to the interfaces that support it.
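On an MLP bundle, LFI corresponds to the multilink interleave and fragment-delay commands. Because QPM assumes MLP is already enabled, it configures only the interleaving and fragmentation; in this hand-written sketch the interface number and the 10-millisecond fragment delay are illustrative:

    interface Multilink1
     ppp multilink
     ! the interleave and fragmentation commands QPM configures
     ppp multilink interleave
     ppp multilink fragment-delay 10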


Note   QPM also supports distributed LFI (dLFI) on versatile interface processor (VIP) cards on Cisco 7500 routers and on the Catalyst 6000 FlexWAN module.

Related Topics

Frame Relay Fragmentation (FRF)—Preventing Delay on Frame Relay Links

Frame Relay fragmentation ensures predictability for voice traffic. It aims to provide better throughput on low-speed Frame Relay links by interleaving delay-sensitive voice traffic on one virtual circuit (VC) with fragments of long frames on another VC that uses the same interface.

VoIP packets should not be fragmented. However, VoIP packets can be interleaved with fragmented packets.

If some PVCs are carrying voice traffic, you can enable fragmentation on all PVCs. The fragmentation header is included only for frames that are greater than the fragment size configured.

Interface QoS Property Requirements for FRF

You can use FRF on Frame Relay interfaces where WFQ or Class Based QoS is defined as the QoS property (depending on the device and the IOS version). FRTS must be enabled in order to use FRF.


Note   FRF12 is supported on:
2600, 3600, 7200 with IOS version 12.1, 12.1(6)E, with WFQ as the QoS property.
2600, 3600, 7200 with IOS version 12.2, 12.2(2)T, with WFQ or CBWFQ as the QoS property.
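FRF.12 fragmentation is configured within a Frame Relay map class applied to the PVC, with FRTS enabled on the main interface. In this hand-written sketch the DLCI, CIR, and the 160-byte fragment size are illustrative:

    interface Serial0/0
     encapsulation frame-relay
     frame-relay traffic-shaping
    !
    interface Serial0/0.1 point-to-point
     frame-relay interface-dlci 100
      class VOICE-PVC
    !
    map-class frame-relay VOICE-PVC
     frame-relay cir 128000
     ! fragment data frames larger than 160 bytes; voice packets are interleaved, not fragmented
     frame-relay fragment 160
     frame-relay fair-queue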

Related Topics

Distributed Frame Relay Fragmentation (DFRF)—FRF for VIP Interfaces

DFRF allows long data frames to be fragmented into smaller pieces and interleaved with real-time frames. In this way, real-time voice and non-real-time data frames can be carried together on lower-speed links without causing excessive delay to the real-time traffic.


Note   VoIP packets should not be fragmented. However, VoIP packets can be interleaved with fragmented packets.

Interface QoS Property Requirements for DFRF

You can use DFRF on Frame Relay VIP interfaces, where Class Based QoS is defined as the QoS property. FRTS must be enabled in order to use DFRF.

Related Topics

Compressed Real-Time Protocol (CRTP)—RTP Header Compression to Reduce Delay

Real-Time Protocol (RTP) is a host-to-host protocol used for carrying multimedia application traffic, including packetized audio and video, over an IP network. RTP provides end-to-end network transport functions for applications with real-time requirements, such as audio and video.

To avoid unnecessary consumption of available bandwidth, the RTP header compression feature, referred to as CRTP, is used on a link-by-link basis. RTP header compression reduces the bandwidth consumed by voice traffic, and a corresponding reduction in delay is realized.

RTP header compression is supported on serial lines using Frame Relay, High-Level Data Link Control (HDLC), or PPP encapsulation. CRTP compresses the IP/UDP/RTP header in an RTP data packet from 40 bytes to approximately 2 to 5 bytes.
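Enabling CRTP is a per-link command; a minimal sketch for a PPP serial link and for a Frame Relay interface, where the interface numbers are illustrative:

    interface Serial0/0
     encapsulation ppp
     ip rtp header-compression
    !
    interface Serial0/1
     encapsulation frame-relay
     frame-relay ip rtp header-compression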

Interface QoS Property Requirements for CRTP

You can define CRTP on WFQ and CBWFQ interfaces with later IOS versions. DCRTP is supported on VIP interfaces on devices running IOS 12.1(5)T and later.

Related Topics

Managing Traffic Through Access Control

You can control traffic access by permitting or denying transport of packets into or out of interfaces.

You can define access control policies, which will deny or permit traffic that matches the filter definition in the specified direction. You can also define a filter condition to deny specific types of traffic as part of a QoS policy definition.

Access control can be used as a security mechanism, and can be enabled or disabled globally for all databases in your system. You can override the global configuration on a per-device basis.

You cannot create Access Control policies on Catalyst switches.
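On a router, an access control policy deploys as an access list applied to the interface in the chosen direction; a minimal sketch that denies inbound Telnet and permits everything else, where the list number and interface are illustrative:

    access-list 102 deny   tcp any any eq telnet
    access-list 102 permit ip any any
    !
    interface FastEthernet0/0
     ip access-group 102 in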

Related Topics

Signaling Techniques

In order to implement end-to-end quality of service, a traffic flow must contain or use some type of signal to identify the requirements of the traffic. With QPM, you can control these types of signaling techniques:

IP Precedence and DSCP Values—Differentiated Services

The simplest form of signal is the IP precedence or DSCP setting in data packets: the packet's color or classification.

This signal is carried with the packet, and can affect the packet's handling at each node in the network. Queuing techniques such as WFQ and WRED automatically use this signal to provide differentiated services to high-priority traffic.

To use the IP precedence or DSCP setting effectively, ensure that you color traffic at the edges of your network so that the color affects the packet's handling throughout the network. See Traffic Coloring Techniques, for information on how to change a packet's IP precedence or DSCP setting.
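One way to color traffic at the edge is a marking policy on the inbound interface; a minimal modular CLI sketch that sets IP precedence 5 on traffic matching an access list, where the class and policy names, list number, and interface are illustrative:

    class-map match-all VOICE-IN
     match access-group 120
    !
    policy-map MARK-EDGE
     class VOICE-IN
      set ip precedence 5
    !
    interface FastEthernet0/0
     service-policy input MARK-EDGE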

Interface QoS Property Requirements for IP Precedence and DSCP Signaling

IP precedence and DSCP can only provide differentiated services on interfaces that use a queuing technique that is sensitive to the precedence setting in the packet. For example, WFQ, WRED, WRR, 1P2Q2T, and 2Q2T automatically consider the precedence settings.

Other QoS properties can use the precedence settings, but you must create policies that specifically filter on the precedence. For example, custom queuing, priority queuing, and CBWFQ interfaces can use the precedence setting if you create the appropriate policies.

Related Topics

Resource Reservation Protocol (RSVP)—Guaranteed Services

A more sophisticated form of signaling than IP precedence is the resource reservation protocol (RSVP). RSVP is used by applications to dynamically request specific bandwidth resources from each device along the traffic flow's route to its destinations. Once the reservations are made, the application can start the traffic flow with the assurance that the required resources are available.

RSVP is mainly used by applications that produce real-time traffic, such as voice, video, and audio. Unlike standard data traffic, such as HTTP, FTP, or Telnet, real-time applications are delay sensitive, and can become unusable if too many packets are dropped from a traffic flow. RSVP helps the application ensure there is sufficient bandwidth so that jitter, delay, and packet drop can be avoided.

RSVP is typically used by multicast applications. With multicasting, an application sends a stream of traffic to several destinations. For example, the Cisco IP/TV application can provide several audio-video programs to users. If a user accesses one of the provided programs, IP/TV sends a stream of video and audio to the user's computer.

Network devices consolidate multicast traffic to reduce bandwidth usage. Thus, if there are 10 users for a traffic flow behind a router, the router sees one traffic flow, not 10. In unicast traffic, the router sees 10 traffic flows. Although RSVP can work with unicast traffic (one sender, one destination), RSVP unicast flows can quickly use up RSVP resources on the network devices if a lot of users access unicast applications. In other words, unicast traffic scales poorly.

To configure RSVP on network devices, you must determine the bandwidth requirements of the RSVP-enabled applications on your network. If you do not configure the devices to allow RSVP to reserve enough bandwidth, the applications will perform poorly. See the documentation for the applications to determine their bandwidth requirements.

In QPM, you enable RSVP in the interface's properties (where you also set the QoS property) so that the device will act on received RSVP signals. You can determine the maximum percentage of the interface's bandwidth that can be reserved (default is 75%), and the maximum percentage of the bandwidth that any one flow can use. You can also configure RSVP on device groups.
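On the device, the reservable bandwidth and the per-flow maximum correspond to the ip rsvp bandwidth interface command; a minimal sketch for a T1 interface reserving up to about 75% (1158 kbps) with at most 100 kbps per flow, where the interface and values are illustrative:

    interface Serial0/0
     fair-queue
     ! up to 1158 kbps reservable in total, at most 100 kbps per reservation
     ip rsvp bandwidth 1158 100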

When an RSVP request is made, RSVP calculates the bandwidth request by considering the mean data rate, the amount of data the interface can hold in the queue, and the minimum QoS requirement for the traffic flow. The interface determines if it can meet the request, and replies to the requesting application.

When the traffic flow begins, RSVP can dynamically respond to changes in routes, switching reservations to new devices and releasing reservations for devices no longer on the path. When the flow is complete, all reservations are removed and the bandwidth on the interfaces released.

Interface QoS Property Requirements for RSVP Signaling

You can use RSVP on only WFQ, WRED, and CBWFQ interfaces:

Related Topics

More Information About Quality of Service

This publication cannot cover everything you might want to know about quality of service. This section provides pointers to more information available on the web.


Note   For pages that require a Cisco Connection Online (CCO) login, you can register at the CCO web site at http://www.cisco.com/register/.

The references are broken down into these categories:

QPM Information
http://www.cisco.com/cgi-bin/tablebuild.pl/qos-patches

Voice over IP Information
http://www.cisco.com/univercd/cc/td/doc/product/voice/ip_tele/avvidqos/index.htm

http://www.cisco.com/warp/public/793/voip/

IOS Software Release 12.x Documentation
http://www.cisco.com/univercd/cc/td/doc/product/software/ios121/121cgcr/qos_c/index.htm

http://www.cisco.com/univercd/cc/td/doc/product/software/ios121/121cgcr/qos_r/index.htm

http://www.cisco.com/univercd/cc/td/doc/product/software/ios120/120newft/120limit/120xe/120xe5/mqc/index.htm

http://www.cisco.com/univercd/cc/td/doc/product/software/ios121/121cgcr/wan_c/index.htm

http://www.cisco.com/univercd/cc/td/doc/product/software/ios121/121cgcr/wan_r/index.htm

http://www.cisco.com/univercd/cc/td/doc/product/software/ios120/120newft/120t/120t4/120tvofr/index.htm

http://www.cisco.com/univercd/cc/td/doc/product/software/ios121/121newft/121limit/121e/121e2/nbar2e.htm

http://www.cisco.com/univercd/cc/td/doc/product/software/ios120/120newft/120t/120t5/iprtp.htm

IOS Software Release 11.3 Documentation
http://www.cisco.com/univercd/cc/td/doc/product/software/ios113ed/113ed_cr/fun_c/fcprt4/fcperfrm.htm

http://www.cisco.com/univercd/cc/td/doc/product/software/ios113ed/113ed_cr/fun_r/frprt4/frperfrm.htm

http://www.cisco.com/univercd/cc/td/doc/product/software/ios113ed/113ed_cr/wan_c/index.htm

http://www.cisco.com/univercd/cc/td/doc/product/software/ios113ed/113ed_cr/wan_r/index.htm

IOS Software Release 11.2 Documentation
http://www.cisco.com/univercd/cc/td/doc/product/software/ios112/112cg_cr/1cbook/1csysmgt.htm

http://www.cisco.com/univercd/cc/td/doc/product/software/ios112/112cg_cr/1rbook/1rsysmgt.htm

http://www.cisco.com/univercd/cc/td/doc/product/software/ios112/112cg_cr/4cbook/index.htm

http://www.cisco.com/univercd/cc/td/doc/product/software/ios112/112cg_cr/4rbook/index.htm

IOS Software Release 11.1 and 11.1cc Documentation
http://www.cisco.com/univercd/cc/td/doc/product/software/ios111/mods/1mod/1cbook/1csysmgt.htm

http://www.cisco.com/univercd/cc/td/doc/product/software/ios111/mods/1mod/1rbook/1rsysmgt.htm

http://www.cisco.com/univercd/cc/td/doc/product/software/ios111/cc111/car.htm

http://www.cisco.com/univercd/cc/td/doc/product/software/ios111/cc111/wred.htm

http://www.cisco.com/univercd/cc/td/doc/product/software/ios111/cc111/dwfq.htm

IOS Software White Papers
http://www.cisco.com/warp/customer/779/largeent/learn/technologies/qos/index.html

http://www.cisco.com/warp/customer/732/qos/index.html

http://www.cisco.com/warp/customer/732/net_enabled/qos.html

http://www.cisco.com/warp/customer/779/largeent/design/index.html

http://www.cisco.com/warp/customer/cc/sol/mkt/ent/multi/dvvi4/qosfs_ds.htm

http://www.cisco.com/univercd/cc/td/doc/cisintwk/ito_doc/qos.htm

http://www.cisco.com/warp/customer/cc/sol/mkt/ent/cmps/gcnd_wp.htm

Catalyst Documentation
http://www.cisco.com/univercd/cc/td/doc/product/l3sw/8540/rel_12_0/w5_6f/softcnfg/5cfg8500.htm

http://www.cisco.com/univercd/cc/td/doc/product/l3sw/4908g_l3/ios_12/7w515d/config/qos_cnfg.htm

http://www.cisco.com/univercd/cc/td/doc/product/lan/cat5000/rel_5_5/sw_cfg/qos.htm

http://www.cisco.com/univercd/cc/td/doc/product/lan/cat6000/sw_5_5/cnfg_gd/qos.htm

LocalDirector Documentation
http://www.cisco.com/univercd/cc/td/doc/product/iaabu/localdir/ld31rns/ldicgd/index.htm


Posted: Tue Nov 12 12:22:54 PST 2002
All contents are Copyright © 1992–2002 Cisco Systems, Inc. All rights reserved.
Important Notices and Privacy Statement.