Cisco AVVID Network Infrastructure Enterprise Quality of Service Design
Chapter 1 Overview
What is the Quality of Service Toolset?
Tip
For more information, see Congestion Avoidance in the Cisco IOS Quality of Service Solutions Configuration Guide, Release 12.2.
Scheduling Recommendations
This section provides some recommendations for queue scheduling and the number of queues to use.
Queue Scheduling
Once you have marked your traffic appropriately and have allowed the traffic admission into the
appropriate queue, you need to control how the queues are serviced. The scheduler process can use a
variety of methods to service each of the transmit queues (voice, video, mission critical data, and general
access data). The easiest method is a round-robin algorithm, which services queue 1 through queue N in
a sequential manner. While not robust, this is an extremely simple and efficient method that can be used
for branch office and wiring closet switches. Distribution layer switches use a WRR algorithm in which
higher priority traffic is given a scheduling "weight."
Today's QoS-enabled switches can use hybrid scheduling algorithms, in which an exhaustive priority
queue is serviced until it is empty and WRR is used for the remaining queues on a given interface.
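As an illustration, the hybrid approach might look like the following on a Catalyst switch that supports a strict-priority transmit queue. This is a sketch only; the exact commands, queue numbering, and CoS mappings vary by platform, line card, and software release, and the interface name is hypothetical.

```
! Map voice (CoS 5) to the strict-priority queue, which is serviced
! exhaustively before any WRR queue is considered.
interface GigabitEthernet1/1
 priority-queue cos-map 1 5
 ! Assign the remaining CoS values to the two WRR queues and give
 ! those queues relative scheduling weights (here roughly 30:70).
 wrr-queue cos-map 1 1 0 1
 wrr-queue cos-map 2 1 2 3 4
 wrr-queue bandwidth 30 70
```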
Where supported, it is best to use a priority queue for voice and video bearer traffic. If a priority queue
is not supported, use WRR together with weighted random early detection (WRED) to service the queue
that contains the voice and video traffic. Set the WRR weights so that queue is serviced most frequently,
and set the WRED thresholds so that the queue does not participate in WRED congestion avoidance.
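The fallback recommendation might be expressed as follows on a platform with per-queue WRED thresholds. Again, this is an illustrative sketch: command syntax is platform-dependent, and the queue numbers and weights shown are assumptions, not prescriptions.

```
! No strict-priority queue available: give the voice/video queue
! (queue 2 here) the largest WRR weight, and raise both WRED
! thresholds to 100 percent so that queue never drops packets
! through congestion avoidance.
interface GigabitEthernet1/1
 wrr-queue bandwidth 20 80
 wrr-queue random-detect min-threshold 2 100 100
 wrr-queue random-detect max-threshold 2 100 100
```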
Number of Queues
There has been much discussion about how many queues are actually needed on transmit interfaces in
the campus. Should you add a queue to the wiring closet switches for each CoS value? Should you add
eight queues to the distribution layer switches? Should you add a queue for each of the 64 DSCP values?
This section presents some guidelines that address these questions.
First, it is important to remember that each port has a finite amount of buffer memory. A single queue
has access to all the memory addresses in the buffer. As soon as a second queue is added, the finite buffer
amount is split into two portions, one for each queue. At this point, all packets entering the switch must
contend for a much smaller portion of buffer memory. During periods of high traffic, the buffer fills and
packets are dropped at the ingress interface. Because the majority of network traffic today is TCP-based,
a dropped packet results in a resend, which further increases network congestion. Therefore, queuing
should be used cautiously and only when particular priority traffic is sensitive to packet delays and
drops.
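To make the buffer trade-off concrete: once the transmit buffer is partitioned, each queue should be sized in proportion to its expected load rather than split evenly. A hedged sketch, using a per-queue buffer allocation command whose syntax varies by platform (the 70/30 split is an assumed example, not a recommendation):

```
! Partition the transmit buffer between two queues: 70 percent for
! the general data queue and 30 percent for the smaller, delay- and
! drop-sensitive priority queue.
interface GigabitEthernet1/1
 wrr-queue queue-limit 70 30
```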
Provisioning Tools
The category of provisioning tools includes:
· Policing and Shaping Tools
· Link-Efficiency Mechanisms