
Table of Contents

Planning for Quality of Service
What Is Quality of Service?
What Is Provisioned QoS Policy Manager?
Planning for QoS Deployment
What Types of Quality of Service Does QPM-PRO Handle?
More Information About Quality of Service

Planning for Quality of Service


Effective use of Quality of Service (QoS) capabilities requires careful planning. Before you deploy QoS to your network, carefully consider the types of applications used in the network and which QoS techniques might improve the performance of those applications. Then, use the Cisco Provisioned QoS Policy Manager (QPM-PRO) to create and deploy your QoS policies to the network.

These topics introduce you to QoS concepts and Provisioned QoS Policy Manager, and get you started on developing a QoS strategy.

What Is Quality of Service?

Quality of Service (QoS) is a set of capabilities that allow you to deliver differentiated services for network traffic, thereby providing better service for selected network traffic. For example, with QoS, you can increase bandwidth for critical traffic, limit bandwidth for non-critical traffic, and provide consistent network response, among other things. This allows you to use expensive network connections more efficiently, and to establish service level agreements with customers of the network.

To implement QoS, you define QoS policies for network devices. The policies can differentiate traffic based on categories, such as user address, application type, content, and so on.

On outbound device interfaces, packets can be queued according to their IP precedence. Using QoS, you can control how the queues are serviced, thus determining the priority of the traffic. As you deploy QoS, identify the applications and users on your network that are bandwidth or time sensitive, and also identify the applications that take more than their fair share of bandwidth. With this information, you can develop effective policies to improve the overall functioning of your network.

With Cisco's QoS Policy Management solutions you can define QoS policies for all types of network devices:

Figure 1-1 shows an example of a local and wide area network. Typically, you classify traffic in the LAN before sending it to the WAN. The devices on the WAN then use the classification to determine the service requirements for the traffic. The WAN devices can limit the bandwidth available to the traffic, or give the traffic priority, or even change the classification of the traffic. If you control the WAN as well as the LAN, you can control all aspects of the traffic's priority. However, after the traffic leaves the networks under your control, it is your service provider who decides how to service the traffic (which might be based on an explicit agreement with your enterprise).


Figure 1-1   QoS Across LAN and WAN Networks


What Is Provisioned QoS Policy Manager?

Provisioned QoS Policy Manager (QPM-PRO) provides a scalable platform for defining and applying QoS policy. QPM-PRO manages QoS configuration and maintenance on a system-wide basis for Cisco devices, including routers, layer-3 switches (switch routers), switches, and LocalDirector. Using QPM-PRO, you can define and deploy policies more easily than you can by using device commands directly.

These topics provide details about the capabilities of QPM:

Overview of Provisioned QoS Policy Manager

Provisioned QoS Policy Manager (QPM-PRO) lets you define QoS policies at a more abstract level than you can achieve with device commands. For example, with QPM-PRO you can define policies for groups of devices rather than one device at a time, and you can create policies that apply to applications or groups of hosts more easily than you could with device commands alone.

By giving you a higher level view of your policies, QPM-PRO makes it easier for you to define, modify, and redeploy policies. You can more easily analyze "what if" scenarios in a lab and then deploy your best solution to your live network.

By simplifying QoS policy definition and deployment, QPM-PRO can make it easier for you to create and manage differentiated services in your network, thus making more efficient and economical use of your existing network resources. For example, you can deploy policies that ensure that your mission-critical applications always get the bandwidth required to run your business.

QPM-PRO includes these programs:

When you install the QPM-PRO remote version, only Policy Manager and Distribution Manager are installed. The remote version must communicate with a QoS Manager service on another machine in order to distribute policies. To install QoS Manager, you must install the QPM-PRO complete version on a machine (the complete version installs Policy Manager and Distribution Manager along with QoS Manager).

QPM-PRO Features and Benefits

Table 1-1 describes the benefits of QPM-PRO policy definition and deployment over device commands.

Table 1-1   QPM-PRO Features and Benefits

Feature: Policy abstraction from device commands
Description: QoS Manager converts your policies to device commands.
Benefit over device commands: You do not have to know the device commands in order to create policies. Because QPM-PRO supports a wide variety of devices, this can save you a lot of time and effort.

Feature: Simplified policy definition
Description: Policy Manager's policy definition interface simplifies the creation of complex policies. You can create complex filters to define exactly the traffic you are targeting by choosing possible values in a table format. For more information on creating policy filters, see Creating a Policy.
Benefit over device commands: Using device commands, you must carefully enter the correct parameters in order to get the results you want, and the more complex your filter, the harder it is to develop (and type) the correct command stream.

Feature: Simplified policy prioritization
Description: Devices analyze policies in the order in which they are entered. With Policy Manager, you can easily change this order by selecting the policy and moving it up or down the list of policies. For more information on changing the priority of policies, see Changing the Priority of Policies.
Benefit over device commands: Using device commands, you must manually change the order of policies. QPM-PRO reorders the policies automatically the next time you deploy your policies.

Feature: Basic policy validation
Description: Policy Manager lets you define only policies that are supported by the device, interface, and software version. When you set a queuing technique for an interface, only policies supported by that technique are available.
Benefit over device commands: Using device commands, you are responsible for knowing what types of policies are supported by the combination of device, interface, software version, and queuing technique.

Feature: Device groups
Description: Policy Manager lets you define groups of devices or interfaces. These groups can have the same or different software versions. If the group contains devices that use different software versions, Policy Manager ensures that you can define only policies supported by the lowest version of the software used in the group. For more information about device groups, see Working with Device Groups.
Benefit over device commands: Using device commands, you can configure only one device at a time. You also must be aware of the software versions on each device, and modify your policies accordingly. QPM-PRO lets you treat devices as a group, which is beneficial when you want to maintain consistent policies for your network.

Feature: Device querying
Description: Policy Manager queries devices you add to the QoS database to determine the software version, device type, and available interfaces. Because the information is obtained directly from the device, it is reliable. For more information about querying a device's characteristics, see Adding a Device.
Benefit over device commands: Using device commands, you must manually query the device to determine this information.

Feature: CiscoWorks2000 integration
Description: Policy Manager lets you import device inventories exported by the CiscoWorks2000 Resource Manager Essentials applications. This simplifies adding devices to the QoS database. For more information about importing device inventories, see Importing Devices from a Device Inventory.
Benefit over device commands: There is no equivalent capability when using device commands.

Feature: Host groups
Description: Policy Manager lets you define groups of hosts (specific hosts or subnets). You can then use these groups when defining policies. For example, you can define a policy for all traffic that comes from a specific subnet, or you can define a policy for all traffic that comes from your database server, giving it high priority. For more information on host groups, see Working with Host Groups.
Benefit over device commands: Using device commands, you must enter the list of IP addresses each time you define the policy, and you must define the policy separately for each interface or device. QPM-PRO helps you eliminate this redundancy and its potential for errors.

Feature: Application services
Description: Policy Manager lets you define application services based on port, protocol, and host or subnet. You can then use these definitions when defining policies. For more information on application services, see Working with Application Services Aliases.
Benefit over device commands: Using device commands, you must enter this information each time you define the policy, and you must define the policy separately for each interface or device. QPM-PRO helps you eliminate this redundancy and its potential for errors.

Feature: DNS host name resolution
Description: If you use host names, Policy Manager resolves them to IP addresses. You can periodically force Policy Manager to redo DNS resolution to pick up changes in your network. For more information about DNS resolution, see Resolving the Host Names in a Policy to Their IP Addresses.
Benefit over device commands: Using device commands, you must manually convert host names to IP addresses.

Feature: Web-based reporting
Description: Both Policy Manager and Distribution Manager produce reports in HTML format. You can store these reports on your intranet and manipulate them as you require, or print them from the browser.
Benefit over device commands: With device commands, reporting is limited to show commands.

Feature: Job and device status, logging, and history
Description: Distribution Manager maintains logs of job and device policy distributions, and maintains a history of these logs. This ensures there is an audit trail of policy configuration actions. For more information on job and device logging, see Reading the Distribution Manager Logs.
Benefit over device commands: There is no equivalent feature when using device commands. You must maintain your own audit history manually.

Feature: Ability to view device commands
Description: Both Policy Manager and Distribution Manager let you inspect the device commands that QPM-PRO will use to configure your devices. If you are fluent in IOS software, Catalyst software, or LocalDirector commands, or if you are just learning, you might find this feature helpful in understanding the device's configuration commands. For more information about viewing software commands, see Viewing the Configuration Commands for a Device.
Benefit over device commands: This feature is beneficial mainly for users who like to know exactly what is going to happen to their device's configuration before the changes are made.

Feature: Job control
Description: Distribution Manager lets you halt policy distributions when you are distributing policies to several devices. You can also configure Distribution Manager to distribute policies to many devices in parallel, in which case your ability to cancel policy distributions is more limited. For more information about job control, see Changing Distribution Manager Configuration Settings.
Benefit over device commands: With device commands, you are always entering one command at a time. Unless you develop your own scripts, or have someone helping you, you cannot configure more than one device at a time.

Feature: Incremental configuration updating
Description: When distributing policies, Distribution Manager distributes only the policies that have changed.
Benefit over device commands: When using device commands, policy configuration is always incremental, because you are entering one command at a time.

Feature: Hands-off configuration updating
Description: You can use the QPM-PRO distribute_policy.exe program to distribute QoS databases from a script or program. This lets you change QoS configurations on a pre-set schedule without human intervention.
Benefit over device commands: With device commands, you must enter all commands in a script and handle device responses if you want to automate configuration changes.

New Functions and Features in QPM-PRO 2.0

Table 1-2 describes the main new features in QPM-PRO 2.0.

Table 1-2   New Functions and Features in QPM-PRO 2.0

Function: Voice over IP (VoIP) support
Features: Class-based weighted fair queuing (CBWFQ) with the following QoS features:
  • Low latency queuing (LLQ); see Low Latency Queuing (LLQ)—Strict Priority Queuing
  • IP RTP Priority; see IP RTP Priority—Providing Absolute Priority to Voice Traffic
  • Generic Traffic Shaping (GTS); see Generic Traffic Shaping (GTS)—Basic Traffic Rate Control
  • Frame Relay Traffic Shaping (FRTS); see Frame Relay Traffic Shaping (FRTS)—Controlling Traffic on Frame Relay Interfaces and Subinterfaces
  • Distributed Traffic Shaping (DTS); see Distributed Traffic Shaping (DTS)—Controlling Traffic on VIP Interfaces
  • Limiting; see Limiting on Routers—Limiting Bandwidth and Optionally Coloring Traffic
  • Frame Relay Fragmentation (FRF); see Frame Relay Fragmentation (FRF)—Preventing Delay on Frame Relay Links
  • Distributed Frame Relay Fragmentation (dFRF); see Distributed Frame Relay Fragmentation (DFRF)—FRF for VIP Interfaces
  • Compressed RTP header (CRTP); see Compressed Real-Time Protocol (CRTP)—RTP Header Compression to Reduce Delay
  • Link Fragmentation and Interleaving (LFI); see Link Fragmentation and Interleaving (LFI)—Reducing Delay and Jitter on Lower Speed Links

Function: QoS LAN edge support
Features: Queueing methods on Catalyst devices:
  • 1 Priority Queue, and 2 Queues 2 Thresholds (1P2Q2T); see 1 Priority Queue, and 2 Queues 2 Thresholds (1P2Q2T)—Managing Voice Traffic on Switches
  • Weighted Round Robin (WRR); see Weighted Round Robin (WRR)—Managing Layer 3 Switch Congestion

Function: Access Control
Feature: You can permit or deny access to specified types of traffic; see Managing Traffic Through Access Control.

Function: Policy definition extensions
Features:
  • Modular CLI (MQC) support for class-based QoS
  • Classification by VLAN on switches
  • Filtering and classification by DSCP
  • Option to exclude specific traffic from classes of service.
For more information, see CBWFQ with Modular CLI, Limiting on Switches—Policing Traffic by Limiting Bandwidth, and Traffic Coloring Techniques.

Function: Upload of Existing Device Configuration
Feature:
  • Enables uploading of QoS commands defined using the CLI.
For more information, see How Does Provisioned QoS Policy Manager Support Existing QoS Configurations?

Function: Deployment control
Features:
  • Extended ACL range ability
  • Output configuration file
  • Restore a previous database version
For more information, see How Does Provisioned QoS Policy Manager Deploy QoS Policies?

Function: Database and Device Configuration Verification
Feature:
  • Verification of current device configuration against database definitions to detect if changes have been made.
For more information, see Does Provisioned QoS Policy Manager Ensure That Policies Are Consistent with Network Configuration?

Function: Extended Content Networking Support
Features:
  • NBAR coloring and limiting
  • Additional print protocols and host name matching for NBAR
  • Mapping NBAR protocol numbers
For more information, see Using Network Based Application Recognition (NBAR) with CBWFQ.

What Devices and Software Releases Are Supported?

The tables in this section describe the devices and software releases that Provisioned QoS Policy Manager supports, and the QoS techniques you can use on the supported platforms. Please note that the information in the tables is subject to change, depending on specific devices and their QoS support.


Note   QPM-PRO supports approximately 200 devices in a single QoS database. You can create multiple databases using Policy Manager, to manage an unlimited number of devices. (For example, you can manage core devices in one database and edge devices in another database).

Supported Devices and QoS Techniques for IOS Software Releases

Cisco IOS releases supported include 11.1, 11.2, 11.3, 11.1cc, 12.0, 12.1, 12.1(2)T, and 12.1(2)E. In addition, a Cisco IOS mapping function is used to enable QPM-PRO to support 12.1(2)T and 12.1(2)E QoS techniques included in later releases of IOS main (T or E train) software.

The tables below describe the QoS techniques that you can use with the devices and IOS software releases that QPM-PRO supports. In each table, device rows are grouped under the QoS technique they apply to, the columns are software releases, an "X" means the technique is supported for that device and release, and a "-" means it is not.

Table 1-3, Part 1   Supported Devices and QoS Techniques for IOS Software Releases 11.x

Cisco Systems Device | 11.1 | 11.2 | 11.3 | 11.1(cc)

Priority queuing (PQ), custom queuing (CQ)
  7500, 7200 | X | X | X | X
  RSM (Catalyst 5000), 4700, 4500, 4000, 3600, 2500, 1600 | X | X | X | -
  2600 | - | - | X | -
  7100, RSM VIP, 7500 VIP | - | - | - | -

Weighted random early detection (WRED)
  7500 VIP (uses DWRED) | - | - | - | X
  7500, 7200, RSM (Catalyst 5000), 4700, 4500, 4000, 3600, 2500, 1600 | - | X | X | -
  2600 | - | - | X | -
  7100, RSM VIP | - | - | - | -

Weighted fair queuing (WFQ)
  7500, 7200 | X | X | X | X
  RSM (Catalyst 5000), 4700, 4500, 4000, 3600, 2500, 1600 | X | X | X | -
  2600 | - | - | X | -
  7100 | - | - | - | -
  RSM VIP (uses FQ only) | - | - | - | -
  7500 VIP (uses FQ only) | - | - | - | X

Distributed weighted fair queuing (DWFQ), fair queuing, and QoS group DWFQ
  7500 VIP | - | - | - | X
  7100, RSM VIP, RSM (Catalyst 5000), 4700, 4500, 3600, 4000, 2500 | - | - | - | -

Policy-based routing (PBR) (also called coloring or classification)
  7500, 7200, RSM (Catalyst 5000), 4700, 4500, 4000, 3600, 2500, 1600 | - | X | X | -
  2600 | - | - | X | -
  7100, RSM VIP, 7500 VIP | - | - | - | -

Generic traffic shaping (GTS)
  7500, 7200, RSM (Catalyst 5000), 4700, 4500, 4000, 3600, 2500, 1600 | - | X | X | -
  2600 | - | - | X | -
  7100, RSM VIP, 7500 VIP | - | - | - | -

Frame Relay traffic shaping (FRTS)
  7500, 7200, RSM (Catalyst 5000), 4700, 4500, 4000, 3600, 2500, 1600 | - | X | X | -
  2600 | - | - | X | -
  7100, RSM VIP, 7500 VIP | - | - | - | -

Committed access rate (CAR) classification (coloring)
  7500, 7500 VIP, 7200 | - | - | - | X
  RSM (Catalyst 5000), 4700, 4500, 4000, 3600, 2500, 1600, 2600, 7100, RSM VIP | - | - | - | -

Committed access rate (CAR) rate limiting
  7500, 7500 VIP, 7200, RSM (Catalyst 5000), 4700, 4500, 3600, 2600 | - | - | - | X
  4000, 2500, 1600, 7100, RSM VIP | - | - | - | -

Resource reservation protocol (RSVP)
  7500, 7200, 4700, 4500, 3600, 2600, 2500, 1600 | - | X | X | -
  7100, RSM VIP, 7500 VIP | - | - | - | -

Compressed real-time protocol (CRTP)
  7500, 7200, 7100, RSM (Catalyst 5000), 4700, 4500, 4000, 3600, 2600, 2500, 1600 | - | X | X | -
  RSM VIP, 7500 VIP | - | - | - | -

Link fragmentation and interleaving (LFI)
  7500, 7200, 7100, RSM (Catalyst 5000), 4700, 4500, 4000, 3600, 2600, 2500, 1600 | - | - | X | -
  RSM VIP, 7500 VIP | - | - | - | -

Access Control
  All IOS devices (except 8500) | X | X | X | X

Table 1-3, Part 2   Supported Devices and QoS Techniques for IOS Software Releases 12.x

Cisco Systems Device | 12.0 | 12.1 | 12.1(2)E and later (1) | 12.1(2)T and later (1)

Priority queuing (PQ)
  7500 | X | X | X | X
  7200 | X | X | X | X
  7100 | - | - | X | X
  RSM (Catalyst 5000), 4700, 4500, 3600 | X | X | X | X
  4000, 2600, 2500, 1600 | X | X | - | X
  1750, 1720 | - | - | - | X
  RSM VIP, 7500 VIP, MSFC FlexWAN | - | - | - | -

Custom queuing (CQ)
  7500 | X | X | X | X
  7200 | X | X | X | X
  7100 | - | - | X | X
  RSM (Catalyst 5000), 4700, 4500, 3600 | X | X | X | X
  4000, 2600, 2500, 1600 | X | X | X | X
  1750, 1720 | - | - | - | X
  RSM VIP, 7500 VIP, MSFC FlexWAN | - | - | - | -

Weighted random early detection (WRED)
  7500, 7500 VIP (uses DWRED) | X | X | X | X
  7200 | X | X | X | X
  7100 | - | - | X | X
  1750, 1720 | - | - | - | X
  RSM (Catalyst 5000), RSM VIP (Catalyst 5000), 4700, 4500, 4000, 3600, 2600, 2500, 1600 | X | X | - | X
  MSFC FlexWAN | - | - | X | X

Weighted fair queuing (WFQ) (or fair queuing (FQ) where indicated)
  7500, 7500 VIP (uses FQ only) | X | X | X | X
  7200 | X | X | X | X
  7100 | - | - | X | X
  1750, 1720 | - | - | - | X
  RSM (Catalyst 5000), RSM VIP (Catalyst 5000; uses FQ), 4700, 4500, 4000, 3600, 2600, 2500, 1600 | X | X | - | X
  MSFC FlexWAN | - | - | X | X

Distributed weighted fair queuing (DWFQ), fair queuing, and QoS group DWFQ
  7500, 7500 VIP, RSM VIP (Catalyst 5000), 7200, 7100, RSM (Catalyst 5000), 4700, 4500, 4000, 3600, 2600, 2500, 1600, MSFC FlexWAN | - | - | - | -

Class-based weighted fair queuing (CBWFQ) using Modular CLI (MQC) with LLQ
  7500, 7200, 4700, 4500, 3600, 2600, 2500, 1600 | - | X | - | -
  7500 VIP, RSM VIP, 7100 | - | - | - | -

Class-based weighted fair queuing (CBWFQ) using MQC with LLQ + set/match classification
  7500, 7200, 7100 | - | - | X | X
  4700, 4500, 3600, 2600, 2500, 1600 | - | - | - | X
  1750, 1720 | - | - | - | X
  7500 VIP, MSFC FlexWAN | - | - | X | -
  RSM VIP | - | - | - | -

Class-based weighted fair queuing (CBWFQ) using MQC with LLQ + RTP
  7500, 7200, 7100 | - | - | - | X
  4700, 4500, 3600, 2600, 2500, 1600 | - | - | - | X
  1750, 1720 | - | - | - | X

Class-based weighted fair queuing (CBWFQ) using MQC with FRTS
  7500, 7200, 7100, 4700, 4500, 3600, 2600, 2500, 1600 | - | - | - | X
  1750 | - | - | - | X
  1720 | - | - | - | -
  RSM VIP, 7500 VIP, MSFC FlexWAN | - | - | - | -

Class-based weighted fair queuing (CBWFQ) using MQC with GTS
  7500, 7200, 7100, 4700, 4500, 3600, 2600, 2500, 1600 | - | - | - | X
  RSM VIP, 7500 VIP, MSFC FlexWAN | - | - | - | -

Class-based weighted fair queuing (CBWFQ) using MQC with dTS and dFRF
  7500 VIP, MSFC FlexWAN | - | - | X | -

IP RTP Priority ("PQ+WFQ")
  7500 | - | X | X | X
  7200 | - | X | X | X
  7100 | - | - | X | X
  1750 | - | - | - | X
  4700, 4500, 3600, 2600, 2500 | - | X | - | X
  1600 | - | X | - | -
  1720, RSM VIP, 7500 VIP, MSFC FlexWAN | - | - | - | -

Policy-based routing (PBR) (also called coloring or classification)
  7500, 7500 VIP, 7200, 7100, RSM (Catalyst 5000), RSM VIP (Catalyst 5000), 4700, 4500, 3600, 2600 | - | - | - | -
  4000 | X | X | - | X
  2500, 1600 | X | X | - | -

Generic traffic shaping (GTS)
  7500 | X | X | X | X
  7200 | X | X | X | X
  7100 | - | - | X | X
  1750, 1720 | - | - | - | X
  RSM (Catalyst 5000), 4700, 4500, 4000, 3600, 2600, 2500, 1600 | X | X | - | X
  7500 VIP, RSM VIP (Catalyst 5000) | - | - | - | -
  MSFC FlexWAN | - | - | - | -

Frame Relay traffic shaping (FRTS)
  7500 | X | X | X | X
  7200 | X | X | X | X
  7100 | - | - | X | X
  1750, 1720 | - | - | - | X
  RSM (Catalyst 5000), 4700, 4500, 4000, 3600, 2600, 2500, 1600 | X | X | - | X
  7500 VIP, RSM VIP (Catalyst 5000) | - | - | - | -
  MSFC FlexWAN | - | - | - | -

Enhanced FRTS with Frame Relay fragmentation (FRF.12), Frame Relay fair queue, and Frame Relay voice configuration
  7200 | - | X | - | X
  3600, 2600 | - | X | - | X
  7500, 7100, RSM VIP | - | - | - | -
  1750, 1720 | - | - | - | -
  7500 VIP, MSFC FlexWAN | - | - | - | -

Committed access rate (CAR) classification (also called coloring)
  7500, 7500 VIP, 7200 | X | X | X | X
  7100 | - | - | X | X
  1750 | - | - | - | X
  RSM (Catalyst 5000), RSM VIP (Catalyst 5000), 4700, 4500, 3600, 2600 | X | X | - | X
  2500, 1600 | - | X | - | X
  MSFC FlexWAN | - | - | X | X
  4000 | - | - | - | -

Committed access rate (CAR) rate limiting
  7500, 7500 VIP, 7200 | X | X | X | X
  7100 | - | - | - | -
  RSM VIP (Catalyst 5000) | X | X | - | X
  1750, 1720 | - | - | - | X
  RSM (Catalyst 5000), 4700, 4500, 3600, 2600 | X | X | - | -
  2500, 1600 | - | - | - | X
  MSFC FlexWAN | - | - | X | X
  4000 | - | - | - | -

Weighted round robin (WRR)
  Catalyst 8510, Catalyst 8540 | X | X | - | -
  Catalyst 4003 with L3, Catalyst 4006 with L3, 4908GL-3, 2948GL-3 | X | X | - | -

Resource reservation protocol (RSVP)
  7500 | X | X | X | X
  7200 | X | X | X | X
  7100, 7500 VIP, MSFC FlexWAN | - | - | X | X
  4700, 4500, 3600, 2600, 2500 | X | X | - | X
  1600 | X | X | - | -
  RSM VIP | - | - | - | -

Network-based application recognition (NBAR)
  7200, 7100 | - | - | X | -
  7500 VIP | - | - | - | -

Compressed real-time protocol (CRTP)
  7500, 7200, 7100, RSM (Catalyst 5000), 4700, 4500, 4000, 3600, 2600, 2500 | X | X | X | X
  1600 | X | X | - | -
  1750, 1720 | - | - | - | X
  7500 VIP, RSM VIP, MSFC FlexWAN | - | - | - | -

Link Fragmentation and Interleaving (LFI)
  7500, 7200, 7100, RSM (Catalyst 5000), 4700, 4500, 4000, 3600, 2600, 2500 | X | X | X | X
  1600 | X | X | - | -
  1750, 1720 | - | - | - | X
  7500 VIP, RSM VIP, MSFC FlexWAN | - | - | - | -

Access control
  All IOS devices | X | X | X | X

(1) A Cisco IOS mapping function is used to enable QPM-PRO to support 12.1(2)T and 12.1(2)E QoS techniques included in later releases of IOS main (T or E train) software.

Supported Devices and QoS Techniques for CATOS Software Releases

Cisco CATOS releases supported include 5.4, 5.5, and 6.1. In addition, a Cisco CATOS mapping function is used to enable QPM-PRO to work with the supported QoS techniques of 5.4, 5.5, and 6.1 in later releases of CATOS.

The table below describes the QoS techniques that you can use with the devices and CATOS software releases that QPM-PRO supports; it uses the same layout as Table 1-3.

Table 1-4   Supported Devices and QoS Techniques for Catalyst Operating System

Cisco Systems Device | 5.4 | 5.5 | 6.1 and later (1)

Classification (coloring)
  Catalyst 5000 family with NFFC-II | X | X | X
  Catalyst 6000 family with PFC | X | X | X

Traffic policing including microflows
  Catalyst 6000 family with PFC | X | X | X

2Q2T queuing, 1P2Q2T queuing
  Catalyst 6000 family with PFC | X | X | X

(1) A Cisco CATOS mapping function is used to enable QPM-PRO to work with the supported QoS techniques of 5.4, 5.5, and 6.1 in later releases of CATOS.

Supported Device Software Releases and QoS Techniques for Other Devices

The table below describes the QoS techniques that you can use with other devices and device software releases that QPM-PRO supports.

Table 1-5   Supported Device Software Releases and QoS Techniques for Other Devices

Quality of Service Technique | Cisco Systems Device | Device Software Release

Packet classification (coloring) | LocalDirector | 3.1.1

How Does Provisioned QoS Policy Manager Deploy QoS Policies?

Provisioned QoS Policy Manager translates your policies into device commands and enters the commands through the device's command line interface (CLI). Some policies require the creation of access control lists (ACLs), others do not.

You can define up to three ACL ranges for the ACLs created by QPM-PRO. This lets you control your ACL numbering and use your specific numbering convention. The ACL range is defined globally for all QoS databases in your system.

Through QPM, you can inspect the commands that will be used to configure the devices. During policy distribution, you can view device log messages as QPM-PRO configures each device, so that you can identify configuration successes and failures.

Figure 1-2 shows the relationship of QPM-PRO to the devices in the network. If you are using a remote version of QPM-PRO (B), it updates the network through the QoS Manager service in the complete version (A). QoS Manager does the actual work of translating your policies, contacting the devices, and updating the device configurations.


Figure 1-2   Provisioned QoS Policy Manager's Relationship to the Network


Device configuration can be implemented through QPM-PRO in the following ways:

The configuration file can be deployed to the device via TFTP or any other application that downloads configuration files to the devices.

Using QPM-PRO, you can restore a previous version of a specific database that was distributed to the network, in order to redistribute it. This is especially important when unexpected errors occur as a result of the deployment of a database and there is an immediate need to go back to a previous version of that database.

Does Provisioned QoS Policy Manager Ensure That Policies Are Consistent with Network Configuration?

Provisioned QoS Policy Manager does basic checking to ensure that your policies can be implemented. For example, you cannot define a policy or select a queuing technique that is not supported on the interface or device based on its software version and device model.

QPM-PRO does not check to ensure that your policies are consistent with each other. For example, if you have two policies on an interface, and the policies use the same filter conditions (thus selecting identical traffic), the second policy will never be applied (unless the first policy specifies that the interface should consider subsequent policies, which is a feature only available in committed access rate (CAR) policies). Thus, QPM-PRO ensures that your defined policies can be implemented, but does not ensure that your policies will have the effect you desire.

You can verify the device configuration to check whether the policies configured on the devices are consistent with the policies defined in your QoS database. If CLI changes are made on the device after deployment, there might be a mismatch between the database and the device configuration. During verification a DNS resolution check is done for all DNS names that are defined in the policy filter definition.

How Does Provisioned QoS Policy Manager Support Existing QoS Configurations?

If you have already defined QoS configurations on your devices using the CLI, you can upload them into the QoS database when you add the devices to the database. Provisioned QoS Policy Manager translates the QoS configurations into the QoS database and generates reports for any QoS configurations that could not be uploaded. An upload can fail because of incomplete configurations on the router, configuration options that QPM-PRO does not support, and so on.

How Does Provisioned QoS Policy Manager Support Existing ACLs?

If you have existing ACLs on a device, QPM-PRO does not change or delete them. They remain defined on the device until you change or remove them using the device's commands. For example, QPM-PRO does not modify ACLs created by Cisco ACL Manager.

Planning for QoS Deployment

These topics help you decide how and where to deploy QoS in your network:

Which Applications Benefit from QoS

Some applications can benefit more from QoS techniques than other applications. The benefits you might get from QoS are dependent not only on the applications you use, but on the networking hardware and bandwidth available to you.

In general, QoS can help you solve the following problems: constricted bandwidth and time sensitivity.

If you have insufficient bandwidth, either due to the lines you are leasing or the devices you have installed, QoS can help you allocate guaranteed bandwidth to your critical applications. Alternatively, you can limit the bandwidth for non-critical applications (such as FTP file transfers), so that other applications have a greater amount of bandwidth available to them.

Some applications, such as video, require a certain amount of bandwidth for them to work in a usable manner. With QoS policies, you can guarantee the bandwidth required for these applications.

For time-sensitive applications, which are sensitive to timeouts or other delays, you can help the applications by coloring their traffic with higher priorities than your regular traffic, or by placing the traffic in a priority queue. You can also define minimum bandwidth to help ensure the applications can deliver data in a timely fashion.

Real-time applications, such as voice applications, tolerate little variation in the delay affecting delivery of their voice packets. Voice traffic is also intolerant of packet loss and jitter, both of which unacceptably degrade the quality of the voice transmission delivered to the recipient end user. You can use QoS policies to provide priority service and ensure reliable delivery of packets with low latency.

As you deploy QoS, identify the applications used on your network that are bandwidth or time sensitive, and also identify the applications that take more than their fair share of bandwidth. With this information, you can develop effective policies to improve the overall functioning of your network.

Which Interfaces Benefit from QoS

Any interface that is congested, or on which congestion avoidance is required, can benefit from QoS policies. LAN-WAN links are typical points of congestion because data moves between lines with differing carrying capacity, so these links might be the best place to start deploying QoS policies. However, the congestion points in your network might be anywhere; evaluate the interfaces where packets are most likely to be dropped during peak traffic periods.

Where to Deploy QoS in the Network

Deploy QoS to manage traffic congestion, and ensure the quality of real-time traffic:

What Types of Quality of Service Does QPM-PRO Handle?

Provisioned QoS Policy Manager's interface makes it easier for you to create Quality of Service policies, so that you do not have to manually connect to each of your devices and use device commands to configure the policies.

QPM-PRO detects the QoS capabilities that are available on each of your devices, as defined by the device model, interface type, and the software version running on the device. With QPM, you cannot select an unsupported QoS capability for a given device or interface. You can choose different QoS techniques for different interfaces, as appropriate, to implement your overall networking policies.

QPM-PRO policies let you define the following:

The following topics cover the general way that devices and interfaces apply policies to traffic, and the types of QoS capabilities you can implement with QPM:

Understanding Policy Implementation Sequence on an Interface

Understanding the sequence in which policies are implemented by an interface can help you define meaningful policies that implement your traffic management requirements. Figure 1-3 shows the sequence a packet follows when it reaches an interface.


Figure 1-3   Sequence Used to Implement Interface Policies on a Packet


When a packet reaches an interface, the interface acts upon the packet in the following sequence:

With some IOS software versions and device models, you can define a policy so that subsequent policies are considered after a match is found. In these cases, you can color a packet in one policy at the input interface and apply a limiting policy to the same packet, perhaps by keying on the packet's color. Refer to Table 1-3 to see which device and software combinations support committed access rate (CAR) limiting or CAR classification. Normally, you should apply a coloring policy before applying a limiting policy. However, in some CAR cases a limiting policy can be applied at the input interface before applying a coloring policy.
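As a rough illustration of this chaining, the following sketch shows the kind of CAR commands a router would need; the interface name, access list numbers, rates, and burst sizes are all hypothetical values, not taken from this guide. The first rate-limit statement colors web traffic and continues evaluation, and the second statement limits traffic that now carries the new color:

! Hypothetical example: color inbound web traffic to precedence 5, then
! limit traffic that carries precedence 5; continue actions keep the router
! evaluating the next rate-limit statement after a match.
access-list 101 permit tcp any any eq www
access-list rate-limit 25 5
interface Serial0/0
 rate-limit input access-group 101 1000000 8000 8000 conform-action set-prec-continue 5 exceed-action set-prec-continue 5
 rate-limit input access-group rate-limit 25 500000 4000 4000 conform-action transmit exceed-action drop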

Traffic Coloring Techniques

Some interface or device QoS properties recognize a packet's relative importance by examining the IP precedence field or DiffServ Code Point (DSCP) value in the packet's header. Changing the IP precedence or DSCP value changes the packet's color, or classification. Because the IP precedence or DSCP value is embedded in the packet, changing it can affect how the packet is handled along its entire path through the network.

You can color traffic on several types of devices:

Because IOS software applies coloring policies on the input interface before queuing a packet, the coloring policy you set can affect how that packet is queued on the interface. Interfaces that use WFQ, WRED, and some other queuing techniques automatically recognize and use the IP precedence value. Priority queuing and custom queuing interfaces do not automatically consider the IP precedence of a packet. Therefore, to have coloring affect how the packet gets prioritized on a priority queuing or custom queuing interface, you must create an additional policy on the outbound interface that recognizes the traffic and places it in the appropriate queue (in addition to creating a coloring policy on the inbound interface). Likewise, if you want to shape or limit traffic based on IP precedence, you must create traffic shaping or limiting policies on the outbound interface that specifically look for the selected precedence value.
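The following sketch shows one way this combination can look in device commands, with hypothetical interface names and access list numbers: PBR colors web traffic on the inbound LAN interface, and a priority queuing policy on the outbound WAN interface matches the precedence value and queues the traffic accordingly:

! Hypothetical example: color inbound web traffic with PBR.
access-list 110 permit tcp any any eq www
route-map color-web permit 10
 match ip address 110
 set ip precedence flash
interface FastEthernet0/0
 ip policy route-map color-web
! The priority queuing interface does not read IP precedence by itself, so an
! access list that matches the precedence value feeds the priority list.
access-list 111 permit ip any any precedence flash
priority-list 1 protocol ip medium list 111
interface Serial0/0
 priority-group 1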

If the interface supports CAR, you can use advanced coloring features. Your coloring policy can apply different precedence values based on whether the traffic flow is conforming to or exceeding a specific rate. You can also specify that additional policies be examined on the interface (usually, if a packet matches a policy, the policy is applied and no other policies on the interface are examined). Thus, in one policy you can color the traffic, and in the next policy, you can use the packet's color to limit the traffic to a specific rate, or place it in a custom or priority queue. CAR also allows you to color traffic whether it is entering or leaving the interface (or both), whereas PBR only lets you color traffic that is entering the interface.

If the software version supports modular CLI, you can define a class-based multiple-action policy that can contain a coloring action, a limiting action and queuing. The limiting policy definition can also apply coloring, based on whether the traffic conforms to the rate limit or exceeds it.
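For software that supports the modular CLI, such a class-based multiple-action policy might translate into commands similar to the following sketch; the class name, access list, rates, and interface are hypothetical, and whether all three actions can be combined in a single class depends on the IOS version:

! Hypothetical example: one class that colors, limits, and reserves bandwidth.
access-list 120 permit ip any host 10.1.1.10
class-map match-all critical-app
 match access-group 120
policy-map wan-edge
 class critical-app
  set ip precedence 5
  police 256000 8000 8000 conform-action transmit exceed-action drop
  bandwidth 128
interface Serial0/1
 service-policy output wan-edge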


Note    If you define coloring and queueing actions on an outbound interface, the queueing cannot use the coloring on that interface.

QPM-PRO only presents you with these advanced features if the interface supports them.

Ports that use 1P2Q2T or 2Q2T queuing, or other precedence-sensitive queuing techniques, use the packet classification to determine how they queue packets.

You can also color the packets while limiting the traffic rate by creating a limiting (traffic policing) policy instead of a coloring policy.

Interface QoS Property Requirements for Colored Traffic

You can define traffic coloring policies on any type of interface.

WFQ, WRED, WRR, 1P2Q2T, and 2Q2T automatically consider the packet's color when queuing the packet. To have the packet's color affect queuing on interfaces using other queuing properties, you must define policies on the interface that specifically look at the precedence value (for example, custom or priority queuing policies, or shaping or limiting policies).

Related Topics

Traffic Shaping or Traffic Limiting Techniques for Controlling Bandwidth

You can create traffic shaping or limiting (traffic policing) policies on a device's interface to manage how much of the interface's bandwidth should be allocated to a specific traffic flow. You can set your policies based on a variety of traffic characteristics, including the type of traffic, its source, its destination, and its IP precedence settings (traffic coloring).

Shaping differs from limiting in that shaping attempts to smooth the traffic flow to meet your rate requirements, whereas limiting (traffic policing) does not smooth the traffic flow; it only prevents the flow from exceeding the rate.

Unlike queuing techniques, which are part of an interface's characteristics, generic traffic shaping or traffic limiting is done through policies that are defined in access control lists (ACLs), or in class-based policies (modular CLI). (However, Frame Relay traffic shaping (FRTS) is defined as an interface characteristic.) Queuing techniques affect traffic only when an interface is congested, or in the case of WRED, when traffic exceeds a certain threshold. With traffic shaping policies, flows are affected even during times of little congestion.

You can use these types of traffic shaping policies:

Generic Traffic Shaping (GTS)—Basic Traffic Rate Control

Generic traffic shaping lets you set a target average transmission rate for specific types of traffic. For example, you can create a policy that limits web traffic to 200 kilobits/second. GTS shapes the traffic flow so that the rate does not exceed this value. This puts a cap on the bandwidth available to that traffic, ensuring that the remainder of the interface's bandwidth is available to other kinds of traffic. In this example, if web traffic does not fill 200 kilobits/second, other kinds of traffic can use the unused bandwidth.

GTS uses a buffer to hold packets while it transmits the flow at the target rate. You can also define a burst size and an exceed burst size to further model the flow. These values define how much data GTS can send from the buffer per time interval. Once the buffer is full, packets are dropped.

You can define GTS properties in class-based, multiple-action policies on devices with a software version that supports modular CLI. In these policies, GTS provides two types of shape commands: average and peak. When shape average is configured, the interface sends no more than the committed burst (Bc) in each interval. When shape peak is configured, the interface sends the committed burst (Bc) plus the excess burst (Be) bits in each interval.

GTS is useful for satisfying service level agreements, or for slowing traffic on a link where the destination interface is slower than the transmission interface (where you would define the shaping policies).
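For the 200 kilobit/second web example above, the underlying device commands might resemble the following sketch (interface and access list numbers are hypothetical); the legacy traffic-shape form and the modular CLI form are alternatives, so only one would be applied to a given interface:

! Legacy GTS applied per access group on the interface
access-list 130 permit tcp any any eq www
interface Serial0/0
 traffic-shape group 130 200000 8000 8000
! Alternative: the same idea with the modular CLI (shape average), where supported
class-map match-all web-traffic
 match access-group 130
policy-map shape-web
 class web-traffic
  shape average 200000
interface Serial0/0
 service-policy output shape-web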

Interface QoS Property Requirements for Generic Traffic Shaping

You can define generic traffic shaping policies on any type of interface except VIP interfaces and those that use Frame Relay traffic shaping (FRTS). On VIP interfaces you can use Distributed Traffic Shaping (DTS). On Frame Relay interfaces, you cannot use GTS and FRTS simultaneously, nor can you mix GTS and FRTS on subinterfaces of a single interface.

On devices with a software version that supports modular CLI, you can configure GTS with CBWFQ.

Related Topics

Frame Relay Traffic Shaping (FRTS)—Controlling Traffic on Frame Relay Interfaces and Subinterfaces

Frame Relay traffic shaping lets you specify an average bandwidth size for Frame Relay virtual circuits (VC), defining an average rate commitment for the VC. FRTS is useful for satisfying service level agreements, or for slowing traffic on a link where the destination interface is slower than the transmission interface (where you would define the FRTS rate).

Unlike GTS, you enable FRTS in the interface's properties. You do not create FRTS policies on the interface. Thus, your FRTS settings apply to all traffic on the interface. You cannot selectively apply different rates to different types of traffic. FRTS can be configured with CBWFQ on devices with a software version that supports modular CLI.

FRTS uses a buffer to hold packets while it transmits the flow at the target rate. You can also define a burst size and an exceed burst size to further model the flow. These values define how much data FRTS can send from the buffer per time interval. Once the buffer is full, packets are dropped.

You can also control whether the circuit responds to notifications from the network that the circuit is becoming congested (adaptive shaping).

QPM-PRO applies your FRTS settings to all VCs defined on an interface or subinterface. You cannot treat multiple VCs on a single interface or subinterface differently. However, you can have different rate settings for an interface and its subinterfaces. To use FRTS on a subinterface, you must first enable it on the associated interface.

If you are using Voice over Frame Relay, you can also define the percentage of the interface's bandwidth to use for voice.

Because Frame Relay is a WAN protocol, part of the Frame Relay network you use is provided by a carrier. You need to negotiate rates and other FRTS settings with the carrier to ensure you get the required WAN network performance.
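On the router, FRTS settings of this kind typically correspond to a Frame Relay map class applied to the interface or subinterface; the following sketch uses hypothetical interface, DLCI, and rate values:

! Hypothetical example: enable FRTS and shape a VC to a 256 kbps CIR.
interface Serial0/0
 encapsulation frame-relay
 frame-relay traffic-shaping
interface Serial0/0.101 point-to-point
 frame-relay interface-dlci 101
  class shape-256k
map-class frame-relay shape-256k
 frame-relay cir 256000
 frame-relay bc 2560
 frame-relay be 0
 frame-relay adaptive-shaping becn
! adaptive-shaping makes the VC respond to BECN congestion notifications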

Device Group Considerations for Frame Relay Traffic Shaping

Device groups allow you to treat selected interfaces or subinterfaces as a single unit, so that you can easily apply common policies or QoS settings to the group. FRTS has a special influence on how you can group frame relay interfaces.

FRTS is enabled or disabled for an interface and all of its subinterfaces. You cannot have FRTS enabled on one subinterface and not on another for the same interface. Thus, if you change the FRTS setting (enabled or disabled), the change is also made to any associated interface or subinterface.

If a subinterface is a member of a device group, you cannot change the FRTS setting on the associated interface. When you create a group for Frame Relay subinterfaces, you must specify whether the interfaces for the subinterfaces have FRTS enabled. This limits the subinterfaces you are allowed to add to the group.

Interface QoS Property Requirements for Frame Relay Traffic Shaping

You can use FRTS on Frame Relay interfaces using any type of QoS property (except "Do not change").

If you use priority queuing or custom queuing, you must create policies on the interfaces or subinterfaces that create the required queues. By creating custom or priority queues, you can further modify the rate-limiting features of FRTS.

On some versions of IOS software, when you select WFQ or Class-Based QoS for QoS Property, there are additional configuration settings available. These settings are used for Voice over Frame Relay.

You cannot create generic traffic shaping policies on an FRTS interface. However, you can create traffic coloring policies on the interface.

Related Topics

Distributed Traffic Shaping (DTS)—Controlling Traffic on VIP Interfaces

Distributed traffic shaping (DTS) can be used with Class-Based QoS on VIP interfaces on devices that support modular CLI. DTS supports all functionality provided by both GTS and FRTS.

DTS uses queues to buffer traffic surges that can congest a network. Data is buffered and then sent into the network at a regulated rate. This ensures that traffic behaves according to the configured descriptor, as defined by the committed information rate (CIR), committed burst (Bc), and excess burst (Be). From the defined average bit rate and the burst size that is acceptable on the shaped entity, a time interval value is derived.

The Be allows more than the Bc to be sent during a time interval under certain conditions. Therefore, DTS provides two types of shape commands: average and peak:

In a link layer network such as Frame Relay, the network signals congestion with forward explicit congestion notification (FECN) and backward explicit congestion notification (BECN) messages. With the DTS feature, the traffic shaping adaptive mode takes advantage of these signals and adjusts the traffic descriptors, approximating the rate to the bandwidth available along the path.
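With the modular CLI, the shaping descriptor and the adaptive rate described above are expressed with shape commands under a class; the following sketch uses hypothetical class, access list, rate, and interface values:

! Hypothetical example: average shaping with an adaptive fallback rate.
access-list 140 permit tcp any any eq ftp
class-map match-all bulk-data
 match access-group 140
policy-map dts-example
 class bulk-data
  shape average 512000 16000 16000
  shape adaptive 256000
interface Serial1/0/0
 service-policy output dts-example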

Interface QoS Property Requirements for Distributed Traffic Shaping

You can use DTS on VIP interfaces using the Class-Based QoS property.

Related Topics

Limiting on Routers—Limiting Bandwidth and Optionally Coloring Traffic

Committed access rate (CAR) limiting lets you set a bandwidth limit for specific types of traffic on router interfaces. For example, you can create a policy that limits web traffic to 200 kilobits/sec. This puts a cap on the bandwidth available to that traffic, ensuring that the remainder of the interface's bandwidth is available to other kinds of traffic. In this example, if web traffic does not fill 200 kilobits/sec, other kinds of traffic can use the unused bandwidth.

Packets are dropped if traffic bursts exceed the limit. CAR limiting does not attempt to smooth or shape the traffic flow in the way that GTS or FRTS attempt to do. Because CAR does not buffer the traffic, there is no delay in sending it, unless the traffic flow exceeds your rate policy and it is dropped.

In addition to limiting the traffic rate, CAR lets you color the traffic that conforms to the rate. CAR limiting is related to CAR classification (described in the "Traffic Coloring Techniques" section). The difference between CAR classification and CAR limiting is that CAR classification allows traffic that exceeds the rate limit to be transmitted and optionally colored.

One of the main uses for limiting policies is to ensure that traffic coming into your network is not exceeding agreed-upon rates. If you define a limiting policy for inbound traffic, you can throttle misbehaving traffic before it gets into your network. Because you control the traffic's rate at the inbound interface, the traffic should be well-behaved while it is in your network.
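A limiting policy like the 200 kilobit/second web example above generally corresponds to a rate-limit statement that transmits conforming traffic and drops the excess; the interface, access list number, rate, and burst sizes below are hypothetical:

! Hypothetical example: limit inbound web traffic to 200 kbps, dropping the excess.
access-list 150 permit tcp any any eq www
interface Serial0/0
 rate-limit input access-group 150 200000 8000 8000 conform-action transmit exceed-action drop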

Interface QoS Property Requirements for Rate Limited Traffic

You can define limiting policies on any type of interface.

For custom queuing or CBWFQ interfaces, you can create a limiting policy to form an upper limit for the bandwidth available to the selected traffic, and have the interface also apply a custom queuing policy to form a lower bandwidth limit for the traffic. To do this, you must specify Continue on the limiting policy to ensure that the policy comes before the associated custom queuing or CBWFQ policy in the list of policies on the interface.

You can also use the Continue attribute to apply more than one rate limiting policy. For example, you can have a general policy that applies a rate limit to all TCP traffic, and a subsequent policy that applies a different rate limit to web traffic.

On devices with a software version that supports modular CLI, you can create multiple-action policies including limiting.

Related Topics

Limiting on Switches—Policing Traffic by Limiting Bandwidth

Traffic limiting lets you set a bandwidth limit for specific types of traffic on Catalyst switch ports or VLANs. You can set bandwidth limits for individual flows (microflow limiting) or for aggregate flows. For example, you can create a policy that limits aggregate web traffic to an average rate of 1024 kilobits/second, with a maximum burst of 2048 kilobits. This puts a cap on the bandwidth available to that traffic, ensuring that the remainder of the interface's bandwidth is available to other kinds of traffic. In this example, if web traffic does not fill 1024 kilobits/second with maximum bursts to 2048 kilobits, other kinds of traffic can use the unused bandwidth.

Packets are dropped if traffic bursts exceed the limits.

By setting the rate to 0, you can effectively prevent the selected traffic from being transmitted through the port.

In addition to limiting the traffic rate, traffic policing changes the color of traffic that does not conform to the rate.

Interface QoS Property Requirements for Limiting Traffic

You can define limiting policies on any type of port, whether the QoS style is defined as port-based or VLAN-based.

Related Topics

Queuing Techniques for Congestion Management for Outbound Traffic

You can set a queuing technique on a device's interface to manage how packets are queued to be sent through the interface. The technique you choose determines whether the traffic coloring characteristics of the packet are used or ignored.

These queuing techniques are primarily used for managing traffic congestion on an interface, that is, they determine the priority in which to send packets when there is more data than can be sent immediately:

First In, First Out (FIFO) Queuing—Basic Store and Forward

FIFO queuing is the basic queuing technique. In FIFO queuing, packets are queued on a first come, first served basis: if packet A arrives at the interface before packet B, packet A leaves the interface before packet B. This is true even if packet B has a higher IP precedence than packet A, because FIFO queuing ignores packet characteristics.

FIFO queuing works well on uncongested high-capacity interfaces that have minimal delay, or when you do not want to differentiate services for packets traveling through the device.

The disadvantage of FIFO queuing is that when a station starts a file transfer, it can consume all the bandwidth of a link to the detriment of interactive sessions. This phenomenon is referred to as a packet train because one source sends a "train" of packets to its destination and packets from other stations get caught behind the train.

Policy Requirements for FIFO Queuing Interfaces

There are no specific requirements for creating policies on FIFO interfaces. You do not have to define any policies on these interfaces.

However, you can create traffic shaping or traffic limiting policies on FIFO interfaces to limit the bandwidth available to selected traffic.

You can also color the traffic on a FIFO interface, but the packet's color does not affect the queuing on the interface. However, if the interface supports committed access rate (CAR) classification and limiting, you can create a coloring policy that simultaneously creates a rate limit and colors the traffic.

FIFO's Relationship to Traffic Coloring

FIFO queuing treats all packets the same, meaning that whichever packet gets to the interface first is the first to go through the interface. Traffic shaping and traffic limiting policy statements can affect the bandwidth available to a packet based on its color, but FIFO does not use the coloring value to alter the packet's queuing.

Related Topics

Priority Queuing (PQ)—Basic Traffic Prioritization

Priority queuing is a rigid traffic prioritization scheme: if packet A has a higher priority than packet B, packet A always goes through the interface before packet B.

When you define an interface's QoS property as priority queuing, four queues are automatically created on the interface: high, medium, normal, and low. Packets are placed in these queues based on priority queuing policies you define on the interface. When there is no policy for unclassified traffic, unclassified packets are placed in the normal queue.

The disadvantage of priority queuing is that the higher queues are given absolute precedence over lower queues. For example, packets in the low queue are sent only when the high, medium, and normal queues are completely empty. If a higher-priority queue is always full, the lower-priority queues are never serviced; they fill up and packets are lost. Thus, one particular kind of network traffic can come to dominate a priority queuing interface.

An effective use of priority queuing is to place time-critical but low-bandwidth traffic in the high queue. This ensures that such traffic is transmitted immediately, but because of its low bandwidth requirement, the lower queues are unlikely to be starved.
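Such a policy might correspond to commands like the following sketch; the UDP port range (used here as a stand-in for low-bandwidth, time-critical traffic), the list numbers, and the interface are hypothetical:

! Hypothetical example: time-critical traffic goes to the high queue,
! everything else defaults to the normal queue.
access-list 160 permit udp any any range 16384 32767
priority-list 2 protocol ip high list 160
priority-list 2 default normal
interface Serial0/1
 priority-group 2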

Policy Requirements for Priority Queuing Interfaces

In order for packets to be classified on a priority queuing interface, you must create policies on that interface. These policies filter traffic into one of the four priority queues. You can also create a class default policy to assign unfiltered traffic to a specific queue. When there is no class default policy for an interface, any traffic that is not filtered into a queue is placed in the normal queue.

You can also create traffic limiting policies to define an upper range on the bandwidth allocated to selected traffic. If you use limiting policies, you can specify that the interface consider other policies if the limiting policy matches the traffic. In this way, you can both limit the rate for the traffic and place it in a specific priority queue. If you do not specify Continue on the limiting policy, traffic that satisfies the limiting policy is placed in the normal priority queue.

If you use traffic shaping policies to specify a rate limit, the traffic to which the shaping policy applies is placed in the normal priority queue.

Priority Queuing's Relationship to Traffic Coloring

Priority queuing interfaces do not automatically consider the IP precedence settings of a packet. If you create traffic coloring policies on inbound interfaces (see the "Traffic Coloring Techniques" section), and you want the coloring to affect the priority queue, you must create a policy on the priority queuing outbound interface that recognizes the color value and places the packet in the desired queue.

If the interface supports committed access rate (CAR) classification, you can create a coloring policy and specify that the interface consider other policies if the coloring policy matches the traffic. In this way, you can color the traffic and place it in a specific priority queue based on the color value.

Related Topics

Custom Queuing (CQ)—Advanced Traffic Prioritization

Custom queuing is a flexible traffic prioritization scheme that allocates a minimum bandwidth to specified types of traffic. You can create up to 16 of these custom queues.

On custom queuing interfaces, the device services the queues in round-robin fashion, sending packets from a queue until the queue's byte count is met and then moving on to the next queue. In contrast to priority queuing, this ensures that no queue is starved.

The disadvantage of custom queuing is that, like priority queuing, you must create policy statements on the interface to classify the traffic to the queues.

An effective use of custom queuing would be to guarantee bandwidth to a few critical applications to ensure reliable application performance.
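A custom queuing policy that guarantees bandwidth to a critical application might correspond to commands like the following sketch; the access list, application port, byte count, and interface are hypothetical, and QPM-PRO derives byte counts for you from the ratio and packet size you specify:

! Hypothetical example: reserve a custom queue for a critical application,
! with all other traffic falling into the default queue.
access-list 170 permit tcp any any eq 1521
queue-list 1 protocol ip 1 list 170
queue-list 1 queue 1 byte-count 4500
queue-list 1 default 2
interface Serial0/1
 custom-queue-list 1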

Policy Requirements for Custom Queuing Interfaces

In order for packets to be classified on a custom queuing interface, you must create custom queuing policies on that interface. These policies need to specify a ratio, or percentage, of the bandwidth on the interface that should be allocated to the queue for the filtered traffic. A queue can be as small as 5%, or as large as 95%, in increments of 5%. The total bandwidth allocation for all policy statements defined on a custom queuing interface cannot exceed 95% (QPM-PRO ensures that you do not exceed 95%). Any bandwidth not allocated by a specific policy statement is available to the traffic that does not satisfy the filters in the policy statements.

QPM-PRO uses the ratio in these policies, along with the packet size specified when you define an interface as a custom queue, to determine the byte count of each queue.

The queues you define constitute a minimum bandwidth allocation for the specified flow. If more bandwidth is available on the interface due to a light load, a queue can use the extra bandwidth. This is handled dynamically by the device.

If you do not create custom queuing policies on the custom queuing interface, all traffic is placed in a single queue (the default queue), and is processed first-in, first-out, in the same manner as a FIFO queuing interface.

You can also create traffic limiting policies to define an upper limit on the bandwidth allocated to selected traffic. Thus, the custom queue defines a minimum bandwidth, and the limiting policy defines an upper limit. When defining the bandwidth upper limit, the limiting policy must appear before the custom queue policy, and it must filter the same traffic as the custom queue (or a subset of the same traffic). It must also specify that the interface continue looking at subsequent policies after applying the limiting policy. If you do not specify Continue, traffic that satisfies the limiting policy is placed in the default custom queue.

If you use traffic shaping policies to specify a rate limit, the traffic to which the shaping policy applies is placed in the default custom queue.
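The corresponding device configuration uses IOS queue lists. The following minimal sketch illustrates the idea; the access list, queue numbers, and byte counts are assumptions, and QPM-PRO calculates the actual byte counts from the ratio and packet size you specify.

    ! Classify FTP into custom queue 1; all other traffic uses queue 2.
    access-list 102 permit tcp any any eq ftp
    queue-list 1 protocol ip 1 list 102
    queue-list 1 default 2
    ! Byte counts set each queue's share of a round-robin cycle
    ! (roughly 45% and 55% in this example).
    queue-list 1 queue 1 byte-count 4500
    queue-list 1 queue 2 byte-count 5500
    interface Serial0
     custom-queue-list 1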

Custom Queuing's Relationship to Traffic Coloring

Custom queuing interfaces do not automatically consider the IP precedence settings of a packet. If you create traffic coloring policies on inbound interfaces (see the "Traffic Coloring Techniques" section), and you want the coloring to affect the custom queue, you must create a policy on the custom queuing outbound interface that recognizes the color value and places the packet in the desired queue.

If the interface supports committed access rate (CAR) classification, you can create a coloring policy and specify that the interface consider other policies if the coloring policy matches the traffic. In this way, you can color the traffic and place it in a specific custom queue based on the color value.

Related Topics

Weighted Fair Queuing (WFQ)—Intelligent Traffic Prioritization

Weighted fair queuing acknowledges and uses a packet's priority without starving low-priority packets for bandwidth. Weighted fair queuing divides packets into two classes: interactive traffic is placed at the front of the queue to reduce response time; non-interactive traffic shares the remaining bandwidth proportionately.

Because interactive traffic is typically low-bandwidth, its higher priority does not starve the remaining traffic. A complex algorithm is used to determine the amount of bandwidth assigned to each traffic flow. IP precedence is considered when making this determination.

Weighted fair queuing is very efficient and requires little configuration.

With some versions of IOS software, when you select WFQ on Frame Relay interfaces where you enable FRTS, there are WFQ configuration settings available. These settings are used for Voice over Frame Relay.

For interfaces on VIP cards, you can use fair queuing, but not weighted fair queuing. In fair queuing, the queues are treated with the same weight. This is also called distributed weighted fair queuing (DWFQ).

Policy Requirements for Weighted Fair Queuing Interfaces

Weighted fair queuing interfaces automatically create queues for each traffic flow. No specific policies are needed.

However, you can also create traffic shaping or limiting policies to affect how selected traffic is handled on the interface. A shaping policy or a limiting policy controls the bandwidth available to the selected traffic.
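Because WFQ creates its flow queues automatically, enabling it on a device typically requires only a single interface command, as in this minimal sketch (the interface name is an assumption):

    interface Serial0
     ! Enable weighted fair queuing; per-flow queues are created automatically.
     fair-queue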

Weighted Fair Queuing's Relationship to Traffic Coloring

Weighted fair queuing is sensitive to the IP precedence settings in the packets. WFQ automatically prioritizes the packets without the need for you to create policies on the WFQ interfaces. However, if you do create a coloring policy on the WFQ interface, it affects how the selected traffic is queued.

WFQ can improve network performance without traffic coloring policies. However, because WFQ automatically uses the IP precedence settings in packets, consider coloring all traffic that enters the device (or color the traffic at the point where it enters your network). By coloring all traffic, you can ensure that packets receive the service level you intend. Otherwise, the originator of the traffic, or another network device along the traffic's path, determines the service level for the traffic.

Related Topics

Class-Based Weighted Fair Queuing (CBWFQ)—Customizable WFQ

Class-based weighted fair queuing (CBWFQ) combines the best characteristics of weighted fair queuing and custom queuing.

CBWFQ uses WFQ processing to give higher weight to high priority traffic, but derives that weight from classes that you create on the interface. These classes are similar to custom queues—they are policy-based, identify traffic based on the traffic's characteristics (protocol, source, destination, and so forth), and allocate a percentage of the interface's bandwidth to the traffic flow.

With CBWFQ, you can create up to 64 classes on an interface. (Unlike WFQ, queues are not automatically based on IP precedence or DSCP value.) CBWFQ also lets you control the drop mechanism used when congestion occurs on the interface. You can use WRED for the drop mechanism, and configure the WRED queues, to ensure that high-priority packets within a class are given the appropriate weight. If you use tail drop, all packets within a class are treated equally, even if the IP precedence is not equal.

The disadvantage of CBWFQ is that, like custom queuing, you must create policy statements on the interface to place the traffic in the classes.

An effective use of CBWFQ would be to guarantee bandwidth to a few critical applications to ensure reliable application performance.

If CBWFQ is available on an interface, Cisco recommends that you use CBWFQ instead of custom queuing.

CBWFQ with Modular CLI

On routers with software versions that support modular CLI, you can create multiple-action, class-based policies. You should choose Class-Based QoS for the QoS property on interfaces on which you want to create multiple-action policies. The queuing method for these interfaces is CBWFQ, and it can be used with additional QoS capabilities to enable efficient management of voice and other real-time traffic. Some of these features are defined as interface properties, others are defined as properties of a policy.

Many of these features enable efficient management of voice traffic.

Policy Requirements for CBWFQ Interfaces

In order for packets to be placed in a CBWFQ class on an interface, you must create CBWFQ policies on that interface. These policies need to specify a minimum percentage of the bandwidth on the interface that should be allocated to the class queue for the filtered traffic.

Unless you change the maximum allocatable bandwidth on the interface, for interfaces on a non-VIP card, a queue can be as small as 1%, or as large as 75%, in increments of 1%. The total bandwidth allocation for all policy statements defined on a CBWFQ interface cannot exceed 75%. For interfaces on a VIP card, the upper limit is 99%. The maximum bandwidth limit includes the IP RTP Priority queue if you create one. Because you can change the maximum allocatable bandwidth, QPM-PRO does not check to ensure that you do not exceed the bandwidth limits.

The queues you define constitute a minimum bandwidth allocation for the specified flow. If more bandwidth is available on the interface due to a light load, a queue can use the extra bandwidth. This is handled dynamically by the device.

Unclassified packets that do not match any filters defined for class-based policies are processed according to the settings in the default class. The default behavior for unclassified traffic is weighted fair queuing.

You can also create traffic limiting policies to define an upper limit on the bandwidth allocated to selected traffic. Thus, the class queue defines a minimum bandwidth, and the limiting policy defines an upper limit. When defining the bandwidth upper limit, the limiting policy must appear before the CBWFQ policy, and it must filter the same traffic as the class queue (or a subset of the same traffic). It must also specify that the interface continue looking at subsequent policies after applying the limiting policy. If you do not specify Continue on the limiting policy, traffic that satisfies the limiting policy is placed in the default class queue.

If you use traffic shaping policies to specify a rate limit, the traffic to which the shaping policy applies is placed in the default class queue.
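The following minimal sketch shows the general shape of the modular CLI (class map and policy map) configuration that a CBWFQ policy translates to. The class and policy names, access list, and percentages are illustrative assumptions.

    access-list 110 permit tcp any any eq 1521
    class-map match-all CRITICAL-APP
     match access-group 110
    policy-map WAN-EDGE
     class CRITICAL-APP
      ! Minimum bandwidth guarantee during congestion, using WRED rather
      ! than tail drop within the class.
      bandwidth percent 20
      random-detect
     class class-default
      ! Unclassified traffic is handled by weighted fair queuing.
      fair-queue
    interface Serial0
     service-policy output WAN-EDGE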

Using Network Based Application Recognition (NBAR) with CBWFQ

NBAR is a classification engine that recognizes a wide variety of applications, including web-based and other difficult-to-classify protocols that utilize dynamic TCP/UDP port assignments. When an application is recognized and classified by NBAR, a network can invoke services for that specific application.

On devices with an IOS software version that supports modular CLI and NBAR, you can use NBAR to refine your CBWFQ policies. With NBAR, you can identify traffic based on application. For example, you can filter all RealAudio traffic.
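For example, an NBAR-based match replaces the access list in the class map, as in this sketch (the class name and bandwidth percentage are assumptions):

    class-map match-all REALAUDIO
     ! NBAR recognizes RealAudio even though it negotiates ports dynamically.
     match protocol realaudio
    policy-map WAN-EDGE
     class REALAUDIO
      bandwidth percent 10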

CBWFQ's Relationship to Traffic Coloring

CBWFQ interfaces do not automatically consider the IP precedence settings of a packet. If you create traffic coloring policies on inbound interfaces (see the "Traffic Coloring Techniques" section), and you want the coloring to affect the class queue, you must create a policy on the CBWFQ outbound interface that recognizes the color value and places the packet in the desired queue.

If the interface supports committed access rate (CAR) classification, you can create a coloring policy and specify that the interface consider other policies if the coloring policy matches the traffic. In this way, you can color the traffic and place it in a specific class queue based on the color value.

If you use WRED as the drop mechanism for a class, WRED automatically considers the packet's color when determining which packet to drop. Tail drop does not consider a packet's color.

If you use WFQ on the default class policy, WFQ automatically considers the packet's color when queuing, dropping, and sending packets in the default queues.

Related Topics

Weighted Round Robin (WRR)—Managing Layer 3 Switch Congestion

Weighted round-robin (WRR) scheduling is used on layer 3 switches. WRR queuing is handled differently on the Catalyst 8500 family and on other layer 3 switches.

Weighted round-robin (WRR) scheduling is used on layer 3 switches on egress ports to manage the queuing and sending of packets. WRR places a packet in one of four queues based on IP precedence, from which it derives a delay priority. Table 1-6 shows the queue assignments based on the IP precedence value and derived delay priority of the packet, and the weight of the queue if you do not change it.

Table 1-6   WRR Queue Packet Assignments

    IP Precedence   Delay Priority   Queue Assignment   Default Queue Weight   Default Queue Weight
                                                         (Catalyst 8500)        (Other Layer 3 Switches)
    0, 1            0                0                   1                      1
    2, 3            1                1                   2                      2
    4, 5            2                2                   4                      3
    6, 7            3                3                   8                      4

Queue 2 is the queue typically used for voice traffic.

The Catalyst 8500 devices automatically use WRR on egress ports. Unlike other queuing properties, you do not configure WRR through the device's interface properties (QPM-PRO does not list switch router interfaces). Instead, you configure WRR through policies defined on the device level.

On other layer 3 switches, you configure WRR through policies defined on the interface level for destination ports.

With WRR, each queue is given a weight. This weight is used when congestion occurs on the port to give weighted priority to high-priority traffic without starving low priority traffic. The weights provide the queues with an implied bandwidth for the traffic on the queue. The higher the weight, the greater the implied bandwidth. The queues are not assigned specific bandwidth, however, and when the port is not congested, all queues are treated equally.

Policy Requirements for Weighted Round-Robin Devices

Devices that use WRR automatically create the four queues with default weights for each interface. You need only define policies if you want to change the queue weights for an interface. For the Catalyst 8500, these policies are defined at the device level, and QPM-PRO does not display the device interfaces. For other layer 3 switches, policies are defined on the interface level for the destination ports.

Weighted Round-Robin's Relationship to Traffic Coloring

WRR is sensitive to the IP precedence settings in the packets. WRR automatically places the packets in queues based on precedence. Although you cannot change the color of a packet on a layer 3 switch, if you change the packet's color on another device before it reaches the layer 3 switch, that change affects the WRR queuing.

Related Topics

2 Queues, 2 Thresholds (2Q2T)—Managing Congestion on Switch Ports

2Q2T queuing on Catalyst 6000 family switches uses a packet's precedence setting to determine how that packet is serviced on the port.

2Q2T queuing uses two queues, one high priority, the other low priority, with two thresholds for each queue, to determine the bandwidth allowed for traffic based on each IP precedence value. 2Q2T assigns each precedence to a specific queue and threshold on that queue.

For example, packets with 0 for precedence (the lowest priority) are placed in the low priority queue and use the lower threshold by default. This ensures that the least important traffic gets less service than any other traffic.

These queues and thresholds are serviced using weighted round robin (WRR) techniques to ensure a fair chance of transmission to each class of traffic. 2Q2T favors high-priority traffic without starving low-priority traffic.

2Q2T queuing comes with a default configuration for the queues, thresholds, and traffic assignments based on IP precedence settings. You only need to change this configuration if it does not suit your requirements.

If you decide to change the 2Q2T configuration, you can change the size of the queues, their relative WRR weights, the sizes of their thresholds, and the assignment of precedence values to the appropriate queue and threshold.

Policy Requirements for 2Q2T Queuing Interfaces

2Q2T queuing ports are not configured with policies. You can change the 2Q2T configuration through the switch's device properties. See Viewing or Changing Device Properties for information on changing device properties.

However, you can create traffic limiting policies (called traffic policing on switches) to affect how selected traffic is handled on the interface. A limiting policy controls the bandwidth available to the selected traffic.

2Q2T Queuing's Relationship to Traffic Coloring

2Q2T queuing is sensitive to the IP precedence settings in the packets. The queues and thresholds selected for the traffic are based on the precedence value.

If you use coloring policies on the interface to change a packet's precedence, that change affects the queue and threshold assignment for the packet.

Related Topics

1 Priority Queue, and 2 Queues 2 Thresholds (1P2Q2T)—Managing Voice Traffic on Switches

Like 2Q2T, 1P2Q2T queuing on Catalyst 6000 family switches uses a packet's precedence setting to determine how that packet is serviced on the port.

1P2Q2T queuing uses three queues: a strict-priority queue, a high-priority standard queue, and a low-priority standard queue.

1P2Q2T assigns each precedence to a specific queue and threshold on that queue.

You can color voice traffic so that it will be assigned to the strict priority queue. On 1P2Q2T interfaces, the switch services traffic in the strict-priority queue before servicing the standard queues. When the switch is servicing a standard queue, after transmitting a packet, it checks for traffic in the strict-priority queue. If the switch detects traffic in the strict-priority queue, it suspends its service of the standard queue and completes service of all traffic in the strict-priority queue before returning to the standard queue.

On 1P2Q2T interfaces, the default QoS configuration allocates 90 percent of the total transmit queue size to the low-priority standard queue, 5 percent to the high-priority standard queue, and 5 percent to the strict-priority queue.

For 1P2Q2T interfaces, the default QoS configuration assigns all traffic with IP Precedence 5 to the strict priority queue, traffic with IP Precedence 4, 6, and 7 to the high-priority standard queue, and traffic with IP Precedence 0, 1, 2, and 3 to the low-priority standard queue.

Policy Requirements for 1P2Q2T Queuing Interfaces

1P2Q2T queuing interfaces are not configured using policies, but through the switch's device properties. See Viewing or Changing Device Properties, for information on changing device properties.

However, you can create traffic limiting policies (called traffic policing on switches) to affect how selected traffic is handled on the interface. A limiting policy controls the bandwidth available to the selected traffic.

1P2Q2T Queuing's Relationship to Traffic Coloring

1P2Q2T queuing is sensitive to the IP precedence settings in the packets. The queues and thresholds selected for the traffic are based on the precedence value.

If you use coloring policies on the interface to change a packet's precedence, that change affects the queue and threshold assignment for the packet.

Related Topics

Queuing Techniques for Congestion Avoidance on Outbound Traffic

You can set a queuing technique on a device's interface to manage how packets are handled when the interface starts to be congested. The queuing technique available for congestion avoidance is weighted random early detection (WRED).

With WRED, when traffic begins to exceed the interface's traffic thresholds, but before congestion occurs, the interface starts dropping packets from selected flows. If the dropped packets are TCP, the TCP source recognizes that packets are getting dropped, and lowers its transmission rate. The lowered transmission rate then reduces the traffic to the interface, thus avoiding congestion. Because TCP retransmits dropped packets, no actual data loss occurs.

To determine which packets to drop, WRED takes into account the IP precedence of the packets and the amount of bandwidth used by each traffic flow.

WRED chooses the packets to drop after considering these factors in combination, the net result being that the highest-priority and lowest-bandwidth traffic is preserved.

WRED differs from standard random early detection (RED) in that RED ignores IP precedence, and instead drops packets from all traffic flows, not selecting low precedence or high bandwidth flows.

By selectively dropping packets before congestion occurs, WRED prevents an interface from becoming flooded and then having to drop large numbers of packets at once. This improves the overall bandwidth utilization of the interface.

If you are using IOS software version 12.0 on a device with a versatile interface processor (VIP), when you configure an interface to use WRED, it automatically uses distributed WRED. Distributed WRED takes advantage of the VIP.

The disadvantage of WRED is that only predominantly TCP/IP networks can benefit. Other protocols, such as UDP or NetWare (IPX), do not respond to dropped packets by lowering their transmission rates, instead retransmitting the packets at the same rate. WRED treats all non-TCP/IP packets as having precedence 0. If you have a mixed network, WRED might not be the best choice for queuing traffic.

An effective use of weighted random early detection would be to avoid congestion on a predominantly TCP/IP network, one that has minimal UDP traffic and no significant traffic from other networking protocols. It is especially effective on core devices rather than edge devices, because the traffic coloring you perform on edge devices can then affect the WRED interfaces throughout the network.

Policy Requirements for Weighted Random Early Detection Interfaces

Weighted random early detection interfaces automatically favor high priority, low bandwidth traffic flows. No specific policies are needed.

However, you can also create traffic shaping policies or traffic limiting policies to affect how selected traffic is handled on the interface. A shaping policy or a limiting policy can control the bandwidth available to the selected traffic.

You can also create CBWFQ policies that use WRED as the drop mechanism for the class-based queues.
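Enabling WRED on an interface is typically a single command; the per-precedence thresholds can optionally be tuned. In this minimal sketch, the interface name and the precedence 0 threshold values (minimum threshold, maximum threshold, and mark probability denominator) are assumptions.

    interface Serial0
     random-detect
     ! Optional tuning for precedence 0 traffic.
     random-detect precedence 0 20 40 10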

Weighted Random Early Detection's Relationship to Traffic Coloring

WRED is sensitive to the IP precedence settings in the packets. Therefore, you can create policies on inbound interfaces on the device and have those policies implemented on the outbound interfaces that use WRED. WRED automatically prioritizes the packets without the need for you to create policies on the WRED queuing interfaces, dropping packets with low priority before dropping high-priority packets.

However, you do not need to create policies on the inbound interfaces that color traffic. If packets have the same IP precedence, WRED drops packets from the highest-bandwidth flows first. However, because WRED automatically uses the IP precedence settings in packets, consider coloring all traffic that enters the device (or color the traffic at the point where it enters your network). By coloring all traffic, you can ensure that packets receive the service level you intend. Otherwise, the originator of the traffic, or another network device along the traffic's path, determines the service level for the traffic.

If you create a coloring policy on the WRED interface, it also affects how the selected traffic is queued.

Related Topics

Management of Voice and Other Real-Time Traffic

Real-time applications, such as voice applications, have characteristics and requirements that differ from those of other data applications. Voice applications tolerate only minimal variation in the delay affecting delivery of their voice packets. Voice traffic is also intolerant of packet loss and jitter, both of which degrade the quality of the voice transmission delivered to the recipient end user. To transport voice traffic effectively over IP, mechanisms are required that ensure reliable delivery of packets with low latency.

The following features can be used to manage voice traffic. Some of these features are defined on the interface and some can be included in the policy definition.

The devices and software versions that support these features are shown in Supported Devices and QoS Techniques for IOS Software Releases.

On Catalyst switches, 1P2Q2T queuing is available for management of voice traffic (see the "1 Priority Queue, and 2 Queues 2 Thresholds (1P2Q2T)" section).

Low Latency Queuing (LLQ)—Strict Priority Queuing

Low latency queuing (LLQ) is used with CBWFQ to bring strict priority queuing to CBWFQ. Strict priority queuing allows delay-sensitive data such as voice to be dequeued and sent first (before packets in other queues are dequeued), giving delay-sensitive data preferential treatment over other traffic. LLQ is not limited to UDP port numbers, as is IP RTP Priority.

Using LLQ reduces delay and jitter in voice conversations. LLQ is enabled when you configure the priority status within the CBWFQ queuing properties. When several types of traffic on an interface are configured as priority classes, all these types of traffic are enqueued to the same, single, strict priority queue.
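In modular CLI terms, LLQ is the priority keyword on a CBWFQ class, as in this minimal sketch (the class name, precedence value, and bandwidth are assumptions):

    class-map match-all VOICE
     match ip precedence 5
    policy-map WAN-EDGE
     class VOICE
      ! The priority keyword creates the single strict-priority (LLQ) queue,
      ! here with 128 kbps of guaranteed bandwidth.
      priority 128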

Related Topics

IP RTP Priority—Providing Absolute Priority to Voice Traffic

IP RTP Priority creates a strict-priority queue for real-time transport protocol (RTP) traffic. The IP RTP Priority queue is emptied before other queues are serviced. This is typically used to provide absolute priority to voice traffic, which uses RTP ports. Because voice traffic is delay-sensitive and low bandwidth, you can typically give it absolute priority without starving other data traffic. This ensures that voice quality is adequate.

IP RTP Priority is especially useful on slow-speed WAN links, including Frame Relay, Multilink PPP (MLP), and T1 ATM links. It works with WFQ and CBWFQ.

In QPM-PRO, you generally enable IP RTP Priority in the interface or device group properties. You select the range of RTP port traffic to place in the queue, and the percentage of the interface's bandwidth to allocate to the queue. Any allocated bandwidth that is not used is available to other queues on the interface. When creating multiple-action policies on interfaces that support modular CLI, you define the range of RTP port traffic in the filter, and then assign the traffic to the priority queue in the queuing policy.

Do not set the bandwidth too low. Any traffic for the queue that exceeds the bandwidth is dropped. Although voice traffic typically uses 24 kbps, there is occasional overhead requiring 25 kbps service. If you select a bandwidth percentage that equates to 24 kbps, the interface is likely to drop voice packets occasionally, which will give you poor voice quality.

Also, IP RTP Priority ignores compression, treating a compressed 12 kbps flow as a 24 kbps flow.

Policy Requirements for IP RTP Priority Interfaces

IP RTP Priority is not defined as a policy action. The IP RTP Priority configuration defined on the interface or device group, or in the filter with modular CLI, determines which traffic is placed in the priority queue.

Interface QoS Property Requirements for IP RTP Priority

You can use IP RTP Priority on WFQ and CBWFQ interfaces. On CBWFQ interfaces, you can configure custom class-based queues for other types of traffic. The bandwidth allocated to the IP RTP Priority queue counts as part of the total allocated CBWFQ queue bandwidth. IP RTP priority cannot be configured on the interface when FRTS is enabled. IP RTP priority is not available on VIP cards.
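On the device, the corresponding interface command takes the starting RTP port, the size of the port range, and the bandwidth for the strict-priority queue in kbps. The interface, port range, and bandwidth in this sketch are illustrative assumptions.

    interface Serial0
     ! Strict-priority queue for RTP ports 16384 through 32767, 64 kbps.
     ip rtp priority 16384 16383 64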

Related Topics

Link Fragmentation and Interleaving (LFI)—Reducing Delay and Jitter on Lower Speed Links

Voice over IP is susceptible to increased latency and jitter when the network processes large packets, such as LAN-to-LAN FTP transfers traversing a WAN link. This susceptibility increases as the traffic is queued on slower links. LFI reduces delay and jitter on slower speed links by breaking up large data packets so that they are small enough to satisfy the delay requirements of real-time traffic. The low-delay traffic packets, such as voice packets, are interleaved with the fragmented packets. LFI also provides a special transmit queue for the smaller, delay-sensitive packets, enabling them to be sent earlier than other flows.

LFI was designed especially for lower-speed links in which serialization delay is significant.

Interface QoS Property Requirements for LFI

You can configure LFI on PPP interfaces when Multilink Point-to-Point Protocol (MLP) is configured on the interface. You can use LFI on virtual templates, dialer interfaces, and ISDN BRI or PRI interfaces. QPM-PRO cannot detect or implement MLP and will assume that the multilink PPP command is enabled on the interface. QPM-PRO will configure only the interleave and fragmentation commands. When LFI is defined on an interface group, it is only deployed to the interfaces that support it.
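A minimal sketch of the resulting interface configuration follows; the multilink interface and the 10 ms fragment delay are assumptions. The ppp multilink command itself is assumed to be configured already, since QPM-PRO deploys only the fragmentation and interleave commands.

    interface Multilink1
     ppp multilink
     ! Fragment large packets so that no fragment delays others by more
     ! than about 10 ms, and interleave small delay-sensitive packets.
     ppp multilink fragment-delay 10
     ppp multilink interleave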

Related Topics

Frame Relay Fragmentation (FRF)—Preventing Delay on Frame Relay Links

Frame Relay fragmentation ensures predictability for voice traffic, aiming to provide better throughput on low-speed Frame Relay links by interleaving delay-sensitive voice traffic on one virtual circuit (VC) with fragments of a long frame on another VC utilizing the same interface.

VoIP packets should not be fragmented. However, VoIP packets can be interleaved with fragmented packets.

If some PVCs are carrying voice traffic, you can enable fragmentation on all PVCs. The fragmentation header is only included for frames that are greater than the fragment size configured.

Interface QoS Property Requirements for FRF

You can use FRF on Frame Relay interfaces where the device software version supports WFQ, or where Class-Based QoS is defined as the QoS property (depending on the device and the IOS version). FRTS must be enabled in order to use FRF.


Note   FRF12 is supported on:
2600, 3600, 7200 with IOS version 12.1, 12.1(2)E, with WFQ as the QoS property.
2600, 3600, 7200 with IOS version 12.1(2)T, with WFQ or CBWFQ as the QoS property.
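A minimal FRF.12 sketch follows; the interface, DLCI, map-class name, and 80-byte fragment size are illustrative assumptions. FRTS must already be enabled on the interface.

    interface Serial0
     encapsulation frame-relay
     frame-relay traffic-shaping
    interface Serial0.1 point-to-point
     frame-relay interface-dlci 100
      class VOICE-VC
    map-class frame-relay VOICE-VC
     ! Fragment frames larger than 80 bytes; small voice frames are interleaved.
     frame-relay fragment 80
     frame-relay fair-queue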

Related Topics

Distributed Frame Relay Fragmentation (DFRF)—FRF for VIP Interfaces

DFRF allows long data frames to be fragmented into smaller pieces and interleaved with real-time frames. In this way, real-time voice and non-real-time data frames can be carried together on lower-speed links without causing excessive delay to the real-time traffic.


Note   VoIP packets should not be fragmented. However, VoIP packets can be interleaved with fragmented packets.

Interface QoS Property Requirements for DFRF

You can use DFRF on frame relay VIP interfaces, where Class-Based QoS is defined as the QoS property. FRTS must be enabled in order to use DFRF.

Related Topics

Compressed Real-Time Protocol (CRTP)—RTP Header Compression to Reduce Delay

Real-Time Protocol (RTP) is a host-to-host protocol used for carrying multimedia application traffic, including packetized audio and video, over an IP network. RTP provides end-to-end network transport functions for applications transmitting real-time data, such as audio and video.

To avoid the unnecessary consumption of available bandwidth, the RTP header compression feature, referred to as CRTP, is used on a link-by-link basis. RTP header compression results in decreased consumption of available bandwidth for voice traffic. A corresponding reduction in delay is realized.

RTP header compression is supported on serial lines using Frame Relay, High-Level Data Link Control (HDLC), or PPP encapsulation. CRTP compresses the IP/UDP/RTP header in an RTP data packet from 40 bytes to approximately 2 to 5 bytes.

Interface QoS Property Requirements for CRTP

You can define CRTP on WFQ and CBWFQ interfaces with later IOS versions. CRTP is not available for VIP cards.
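Enabling CRTP is typically a single interface command, as in this minimal sketch (the interface and encapsulation are assumptions):

    interface Serial0
     encapsulation ppp
     ! Compress the 40-byte IP/UDP/RTP header to approximately 2 to 5 bytes.
     ip rtp header-compression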

Related Topics

Managing Traffic Through Access Control

You can control traffic access by permitting or denying transport of packets into or out of interfaces.

You can define access control policies, which will deny or permit traffic that matches the filter definition in the specified direction. You can also define a filter condition to deny specific types of traffic as part of a QoS policy definition.

The access control feature can be used as a security feature, and can be enabled or disabled globally for all databases in your system. You can override the global configuration on a per-device basis.

You cannot create Access Control policies on Catalyst switches.

Related Topics

Signaling Techniques

In order to implement end-to-end quality of service, a traffic flow must contain or use some type of signal to identify the requirements of the traffic. With QPM, you can control these types of signaling techniques:

IP Precedence and DSCP Values—Differentiated Services

The simplest form of signal is the IP precedence or DSCP setting in data packets: the packet's color or classification.

This signal is carried with the packet, and can affect the packet's handling at each node in the network. Queuing techniques such as WFQ and WRED automatically use this signal to provide differentiated services to high-priority traffic.

To use the IP precedence or DSCP setting effectively, ensure that you color traffic at the edges of your network so that the color affects the packet's handling throughout the network. See the "Traffic Coloring Techniques" section, for information on how to change a packet's IP precedence or DSCP setting.

Interface QoS Property Requirements for IP Precedence and DSCP Signaling

IP precedence and DSCP can only provide differentiated services on interfaces that use a queuing technique that is sensitive to the precedence setting in the packet. For example, WFQ, WRED, WRR, 1P2Q2T, and 2Q2T automatically consider the precedence settings.

Other QoS properties can use the precedence settings, but you must create policies that specifically filter on the precedence. For example, custom queuing, priority queuing, and CBWFQ interfaces can use the precedence setting if you create the appropriate policies.
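For example, a coloring policy applied at the network edge might take the following form in modular CLI; the access list, class and policy names, and interface are illustrative assumptions. Once marked, the traffic is handled automatically by precedence-sensitive techniques downstream.

    access-list 115 permit udp any any range 16384 32767
    class-map match-all VOICE-IN
     match access-group 115
    policy-map MARK-EDGE
     class VOICE-IN
      ! Color the traffic at the edge of the network.
      set ip precedence 5
    interface FastEthernet0/0
     service-policy input MARK-EDGE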

Related Topics

Resource Reservation Protocol (RSVP)—Guaranteed Services

A more sophisticated form of signaling than IP precedence is the resource reservation protocol (RSVP). RSVP is used by applications to dynamically request specific bandwidth resources from each device along the traffic flow's route to its destinations. Once the reservations are made, the application can start the traffic flow with the assurance that the required resources are available.

RSVP is mainly used by applications that produce real-time traffic, such as voice, video, and audio. Unlike standard data traffic, such as HTTP, FTP, or Telnet, real-time applications are delay sensitive, and can become unusable if too many packets are dropped from a traffic flow. RSVP helps the application ensure there is sufficient bandwidth so that jitter, delay, and packet drop can be avoided.

RSVP is typically used by multicast applications. With multicasting, an application sends a stream of traffic to several destinations. For example, the Cisco IP/TV application can provide several audio-video programs to users. If a user accesses one of the provided programs, IP/TV sends a stream of video and audio to the user's computer.

Network devices consolidate multicast traffic to reduce bandwidth usage. Thus, if there are 10 users for a traffic flow behind a router, the router sees one traffic flow, not 10. In unicast traffic, the router sees 10 traffic flows. Although RSVP can work with unicast traffic (one sender, one destination), RSVP unicast flows can quickly use up RSVP resources on the network devices if a lot of users access unicast applications. In other words, unicast traffic scales poorly.

To configure RSVP on network devices, you must determine the bandwidth requirements of the RSVP-enabled applications on your network. If you do not configure the devices to allow RSVP to reserve enough bandwidth, the applications will perform poorly. See the documentation for the applications to determine their bandwidth requirements.

In QPM-PRO, you enable RSVP in the interface's properties (where you also set the QoS property) so that the device will act on received RSVP signals. You can determine the maximum percentage of the interface's bandwidth that can be reserved (default is 75%), and the maximum percentage of the bandwidth that any one flow can use. You can also configure RSVP on device groups.

When an RSVP request is made, RSVP calculates the bandwidth request by considering the mean data rate, the amount of data the interface can hold in the queue, and the minimum QoS requirement for the traffic flow. The interface determines if it can meet the request, and replies to the requesting application.

When the traffic flow begins, RSVP can dynamically respond to changes in routes, switching reservations to new devices and releasing reservations for devices no longer on the path. Once the flow is complete, all reservations are removed and the bandwidth on the interfaces released.
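On the device, enabling RSVP on an interface amounts to specifying the total reservable bandwidth and the largest single-flow reservation, as in this minimal sketch (the interface and kbps values are assumptions):

    interface Serial0
     fair-queue
     ! Allow RSVP to reserve up to 1158 kbps in total, with no single
     ! reservation larger than 100 kbps.
     ip rsvp bandwidth 1158 100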

Interface QoS Property Requirements for RSVP Signaling

You can use RSVP only on WFQ, WRED, and CBWFQ interfaces.

Related Topics

More Information About Quality of Service

This publication cannot cover everything you might want to know about quality of service. This section provides pointers to more information available on the web.


Note   For pages that require a Cisco Connection Online (CCO) login, you can register at the CCO web site at http://www.cisco.com/register/.

The references are broken down into these categories:

QPM Information

http://www.cisco.com/kobayashi/sw-center/netmgmt/nr/qos.shtml

Voice over IP Information

http://www.cisco.com/warp/public/793/voip/

IOS Software Release 12.x Documentation

http://www.cisco.com/univercd/cc/td/doc/product/software/ios121/121cgcr/qos_c/index.htm

http://www.cisco.com/univercd/cc/td/doc/product/software/ios121/121cgcr/qos_r/index.htm

http://www.cisco.com/univercd/cc/td/doc/product/software/ios120/120newft/120limit/120xe/120xe5/mqc/index.htm

http://www.cisco.com/univercd/cc/td/doc/product/software/ios121/121cgcr/wan_c/index.htm

http://www.cisco.com/univercd/cc/td/doc/product/software/ios121/121cgcr/wan_r/index.htm

http://www.cisco.com/univercd/cc/td/doc/product/software/ios120/120newft/120t/120t4/120tvofr/index.htm

http://www.cisco.com/univercd/cc/td/doc/product/software/ios121/121newft/121limit/121e/121e2/nbar2e.htm

http://www.cisco.com/univercd/cc/td/doc/product/software/ios120/120newft/120t/120t5/iprtp.htm

IOS Software Release 11.3 Documentation

http://www.cisco.com/univercd/cc/td/doc/product/software/ios113ed/113ed_cr/fun_c/fcprt4/fcperfrm.htm

http://www.cisco.com/univercd/cc/td/doc/product/software/ios113ed/113ed_cr/fun_r/frprt4/frperfrm.htm

http://www.cisco.com/univercd/cc/td/doc/product/software/ios113ed/113ed_cr/wan_c/index.htm

http://www.cisco.com/univercd/cc/td/doc/product/software/ios113ed/113ed_cr/wan_r/index.htm

IOS Software Release 11.2 Documentation

http://www.cisco.com/univercd/cc/td/doc/product/software/ios112/112cg_cr/1cbook/1csysmgt.htm

http://www.cisco.com/univercd/cc/td/doc/product/software/ios112/112cg_cr/1rbook/1rsysmgt.htm

http://www.cisco.com/univercd/cc/td/doc/product/software/ios112/112cg_cr/4cbook/index.htm

http://www.cisco.com/univercd/cc/td/doc/product/software/ios112/112cg_cr/4rbook/index.htm

IOS Software Release 11.1 and 11.1cc Documentation

http://www.cisco.com/univercd/cc/td/doc/product/software/ios111/mods/1mod/1cbook/1csysmgt.htm

http://www.cisco.com/univercd/cc/td/doc/product/software/ios111/mods/1mod/1rbook/1rsysmgt.htm

http://www.cisco.com/univercd/cc/td/doc/product/software/ios111/cc111/car.htm

http://www.cisco.com/univercd/cc/td/doc/product/software/ios111/cc111/wred.htm

http://www.cisco.com/univercd/cc/td/doc/product/software/ios111/cc111/dwfq.htm

IOS Software White Papers

http://www.cisco.com/warp/customer/779/largeent/learn/technologies/qos/index.html

http://www.cisco.com/warp/customer/732/qos/index.html

http://www.cisco.com/warp/customer/732/net_enabled/qos.html

http://www.cisco.com/warp/customer/779/largeent/design/index.html

http://www.cisco.com/warp/customer/cc/sol/mkt/ent/multi/dvvi4/qosfs_ds.htm

http://www.cisco.com/univercd/cc/td/doc/cisintwk/ito_doc/qos.htm

http://www.cisco.com/warp/customer/cc/sol/mkt/ent/cmps/gcnd_wp.htm

Catalyst Documentation

http://www.cisco.com/univercd/cc/td/doc/product/l3sw/8540/rel_12_0/w5_6f/softcnfg/5cfg8500.htm

http://www.cisco.com/univercd/cc/td/doc/product/l3sw/4908g_l3/ios_12/7w515d/config/qos_cnfg.htm

http://www.cisco.com/univercd/cc/td/doc/product/lan/cat5000/rel_5_5/sw_cfg/qos.htm

http://www.cisco.com/univercd/cc/td/doc/product/lan/cat6000/sw_5_5/cnfg_gd/qos.htm

LocalDirector Documentation

http://www.cisco.com/univercd/cc/td/doc/product/iaabu/localdir/ld31rns/ldicgd/index.htm

