Buffering
Buffering simply means that the computers reserve enough buffer space so that bursts of incoming
data can be held until they are processed. No attempt is made to actually slow the sender's
transmission rate. In fact, buffering is such a common method of dealing with changes in the
rate at which data arrives that most of us would probably just assume that it is happening.
However, some older documentation refers to "three methods of flow control," of which
buffering is one, so be sure to remember it as a separate function.
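To make the idea concrete, the short Python sketch below (not from the book; the buffer size and function names are assumptions) shows a receiver that simply holds bursts of arriving PDUs in a fixed-size buffer and processes them at its own pace, without ever signaling the sender.

from collections import deque

BUFFER_SIZE = 8      # assumed capacity, in PDUs, sized to absorb expected bursts
buffer = deque()

def on_pdu_arrival(pdu):
    # Hold each arriving PDU until the application gets around to processing it.
    # Pure buffering never asks the sender to slow down; if the buffer were
    # ever to fill, the excess data would simply be lost.
    if len(buffer) < BUFFER_SIZE:
        buffer.append(pdu)

def process_one():
    # The application drains the buffer at its own, possibly slower, rate.
    return buffer.popleft() if buffer else None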
Congestion Avoidance
Congestion avoidance is the second method of flow control covered here. The computer
receiving the data notices that its buffers are filling. This causes either a separate PDU or a field
in a header to be sent toward the sender, signaling the sender to stop transmitting. Figure 3-9
shows an example.
Figure 3-9 Congestion Avoidance Flow Control (the sender transmits data units 1 through 6; the receiver signals Stop and, later, Go)
"Hurry up and wait" is a popular expression used to describe the process used in this congestion
avoidance example. This process is used by Synchronous Data Link Control (SDLC) and Link
Access Procedure, Balanced (LAPB) serial data link protocols.
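The Stop/Go behavior in Figure 3-9 can be sketched as follows. This is an illustrative Python example, not SDLC or LAPB code; the buffer size, the high- and low-water thresholds, and the signal_sender function are assumptions made for the sketch.

from collections import deque

BUFFER_SIZE = 8       # assumed capacity, in PDUs
HIGH_WATER = 6        # assumed: signal Stop once 6 PDUs are queued
LOW_WATER = 2         # assumed: signal Go once the queue drains to 2

buffer = deque()
sender_stopped = False

def signal_sender(message):
    # Stand-in for sending a separate PDU (or setting a header field) back
    # toward the sender; the real mechanism is protocol specific.
    print(f"receiver -> sender: {message}")

def on_pdu_arrival(pdu):
    global sender_stopped
    buffer.append(pdu)
    if len(buffer) >= HIGH_WATER and not sender_stopped:
        signal_sender("Stop")      # buffers are filling: tell the sender to wait
        sender_stopped = True

def process_one():
    global sender_stopped
    if not buffer:
        return None
    pdu = buffer.popleft()
    if len(buffer) <= LOW_WATER and sender_stopped:
        signal_sender("Go")        # buffers have drained: the sender may resume
        sender_stopped = False
    return pdu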
A preferred method might be to get the sender to simply slow down instead of stopping
altogether. This method would still be considered congestion avoidance, but instead of
signaling the sender to stop, the signal would mean to slow down. One example is the TCP/IP
Internet Control Message Protocol (ICMP) Source Quench message. This message is sent by the
receiver, or by some intermediate router, to slow the sender. The sender can then slow down
gradually until Source Quench messages are no longer received.