Cisco AVVID Network Infrastructure Enterprise Quality of Service Design
Chapter 3 QoS in an AVVID-Enabled Campus Network
QoS Toolset
Scheduling
In access-layer switches, the number of queues is not as important as how those queues and their various
drop thresholds are configured and serviced. As few as two queues might be adequate for wiring closet
access switches, where buffer management is less critical than at other layers. How these queues are
serviced (round robin (RR), weighted round robin (WRR), Priority Queuing, or Priority Queuing combined
with WRR or WRED) is less critical than the number of buffers because the scheduler runs extremely fast
relative to the aggregate traffic rate.
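The WRR servicing mentioned above can be illustrated with a short sketch. This is not a model of any particular Catalyst scheduler; the two-queue setup, the `WrrScheduler` class, and the 4:1 weights are illustrative assumptions chosen to show how weights translate into per-round service opportunities.

```python
# Minimal sketch of weighted round-robin (WRR) scheduling across two
# egress queues. The class name and the 4:1 weights are hypothetical,
# chosen only to illustrate the servicing order.
from collections import deque

class WrrScheduler:
    """Serve each queue up to its weight in packets per round."""
    def __init__(self, weights):
        self.weights = weights
        self.queues = [deque() for _ in weights]

    def enqueue(self, queue_index, packet):
        self.queues[queue_index].append(packet)

    def dequeue_round(self):
        """One full WRR round: returns packets in transmit order."""
        sent = []
        for q, weight in zip(self.queues, self.weights):
            for _ in range(weight):
                if not q:
                    break          # queue exhausted; move to the next one
                sent.append(q.popleft())
        return sent

sched = WrrScheduler(weights=[4, 1])   # hypothetical 4:1 service ratio
for i in range(6):
    sched.enqueue(0, f"voip-{i}")      # queue 0: VoIP
    sched.enqueue(1, f"data-{i}")      # queue 1: default data
first_round = sched.dequeue_round()
print(first_round)  # ['voip-0', 'voip-1', 'voip-2', 'voip-3', 'data-0']
```

In each round the higher-weighted queue gets four transmit opportunities for every one given to the default queue, which is why the choice of weights matters more than the raw queue count at this layer.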
Distribution-layer switches require much more complex buffer management due to the flow aggregation
occurring at that layer. Not only are priority queues needed, but thresholds within the standard queues
should also be specified. It is important to note that Cisco has chosen to use multiple thresholds within
queues instead of continually increasing the number of interface queues. As discussed earlier, each time
a queue is configured and allocated, all of the memory buffers associated with that queue can be used
only by frames meeting the queue entrance criteria.
For example, assume that a Catalyst 4000 10/100 Ethernet port has two queues configured: one for VoIP
(VoIP bearer and control traffic) and the default queue, which is used for Hypertext Transfer Protocol
(HTTP), email, FTP, logins, Windows NT shares, and Network File System (NFS). The port's 128 KB
buffer is split in a 7:1 transmit-to-receive ratio. The transmit buffer memory is then further separated
into high-priority and low-priority partitions in a 4:1 ratio. If the default traffic (web, email, and file shares)
begins to congest the default queue, which is only 24 KB, then packets are dropped at the ingress
interfaces. This happens regardless of whether or not the VoIP control traffic is using any of its queue
buffers. The dropped packets of the TCP-oriented applications cause each of these applications to
retransmit the data, aggravating the congested condition of the network. If this same scenario were configured
with a single queue, but with multiple thresholds used for congestion avoidance, then the default traffic
would share the entire buffer space with the VoIP control traffic. Only during periods of congestion,
when the entire buffer memory approached saturation, would the lower-priority traffic (HTTP and email)
be dropped.
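The buffer split described above can be checked with simple arithmetic. The calculation below assumes the figures given in the text (128 KB total, 7:1 transmit:receive, 4:1 high:low within the transmit partition); the low-priority transmit partition works out to roughly 22 KB, close to the 24 KB default-queue figure cited above, with the small difference presumably due to rounding in the platform's actual allocation.

```python
# Back-of-the-envelope check of the buffer split described in the text.
# All figures come from the example above; nothing here is measured.
TOTAL_KB = 128
tx_kb = TOTAL_KB * 7 / 8        # 7:1 tx:rx  -> 112 KB transmit
rx_kb = TOTAL_KB * 1 / 8        #            ->  16 KB receive
tx_high_kb = tx_kb * 4 / 5      # 4:1 high:low -> 89.6 KB high priority
tx_low_kb = tx_kb * 1 / 5       #              -> 22.4 KB low priority (default)
print(tx_kb, rx_kb, tx_high_kb, tx_low_kb)
```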
This discussion does not imply that multiple queues are to be avoided in Cisco AVVID networks. As
discussed earlier, the VoIP bearer streams must use a separate queue to eliminate the adverse effects that
packet drops and delays have on voice quality. However, not every CoS or DSCP value should
get its own queue, because the small size of the resulting default queue would cause many TCP
retransmissions and would actually increase network congestion.
In addition, the VoIP and video bearer channels are poor candidates for queue congestion avoidance
algorithms such as WRED. Queue thresholding uses the WRED algorithm to manage queue congestion
when a preset threshold value is specified. Random Early Detection (RED) works by monitoring buffer
congestion and discarding TCP packets before they are admitted to the queue if the congestion begins
to increase. The result of the drop is that the sending endpoint detects the dropped traffic and slows the
TCP sending rate by adjusting the window size. A WRED drop threshold is the percentage of buffer
utilization at which traffic with a specified CoS value is dropped, leaving the buffer available for traffic
with higher-priority CoS values. The key is the word "Random" in the algorithm name. Even with
weighting configured, WRED can still discard packets in any flow; it is just statistically more likely to
drop them from the lower CoS thresholds.
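The per-CoS drop behavior described above can be sketched as follows. This is a simplified illustration, not Cisco's implementation: the `COS_THRESHOLDS` mapping and the linear drop-probability ramp are assumptions, and real WRED operates on average (not instantaneous) queue depth with separate minimum and maximum thresholds.

```python
# Illustrative sketch of a WRED-style drop decision. The thresholds
# and the linear probability ramp are hypothetical; instantaneous
# buffer utilization stands in for WRED's averaged queue depth.
import random

# Hypothetical drop thresholds: the fraction of buffer utilization at
# which traffic with a given CoS value becomes eligible for discard.
COS_THRESHOLDS = {0: 0.50, 1: 0.50, 2: 0.70, 3: 0.70, 4: 0.90, 5: 0.90}

def wred_drop(cos, utilization, rng=random.random):
    """Return True if the arriving packet should be dropped."""
    threshold = COS_THRESHOLDS[cos]
    if utilization < threshold:
        return False           # below this CoS threshold: always admit
    if utilization >= 1.0:
        return True            # buffer saturated: tail drop
    # Drop probability ramps from 0 at the threshold to 1 at saturation,
    # so the decision remains random for any individual packet.
    drop_prob = (utilization - threshold) / (1.0 - threshold)
    return rng() < drop_prob
```

At 80% utilization, for example, a CoS 0 packet (50% threshold) faces a substantial drop probability while a CoS 5 packet (90% threshold) is still always admitted; yet whenever a CoS is past its threshold, individual packets survive or drop at random, which is exactly why the bearer streams belong in a queue that WRED never touches.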