

The multicast routing protocols are designed primarily to avoid forwarding loops. Consider a generic rule that states that all multicasts are forwarded out all interfaces except the source interface. This method works fine for simple linear topologies, but a loop occurs as soon as the topology provides additional paths. For example, router A forwards to B, which forwards to C, which returns the packet to A.

DVMRP and PIM both prevent looping by understanding the network topology and using Reverse Path Forwarding (RPF). With RPF, a router accepts a multicast packet only if it arrives on the interface the router would use to reach the source; this check effectively creates a spanning tree that controls the flow of the multicast packets. RPF is not part of the 802.1D specification but operates as part of a Layer 3 process on the routers participating in the multicast.
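The RPF check is applied automatically once multicast routing is enabled. A minimal IOS sketch follows; the interface name is illustrative, and the PIM mode should match the actual design:

```
! Enable multicast routing globally; RPF checks then apply
! to every multicast packet the router receives.
ip multicast-routing
!
interface Ethernet0
 ! Run PIM on this interface; sparse-dense mode is one option.
 ip pim sparse-dense-mode
```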


Catalyst switches offer the ability to suppress broadcasts and multicasts at a predefined threshold based on per-second monitoring. Designers should use this feature with care, as a multicast deployment may legitimately exceed this setting.
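On a Catalyst switch running CatOS, for example, the threshold is applied per port. The module/port number and percentage shown here are illustrative, and the exact syntax varies by platform and software version:

```
set port broadcast 2/1 20%
```

This sketch suppresses broadcast traffic on port 2/1 once it exceeds 20 percent of the port's bandwidth within the one-second monitoring interval.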

Multicast will be an important component in the deployment of voice, video, and data services. As part of its Architecture for Voice, Video and Integrated Data (AVVID) initiative, Cisco announced new multimedia applications that will provide video services to the desktop. In addition to the IP/TV offering, Cisco will release IP/VC in early 2000. These products build upon H.323 and other standards to provide compression and encoding of the data stream. Administrators may wish to evaluate the IP/VC product, which promises to operate with the Windows Media Player, a benefit that may eliminate the need to install new software at every workstation.

Design Considerations for Quality of Service

Most vendors champion their QoS, or quality of service, offerings as a value added to the network. This set of features was a primary motivator for the migration to ATM in the mid-1990s. A second method, RSVP, or the Resource Reservation Protocol, was also developed to provide guaranteed bandwidth.

At present, QoS simply refers to the ability of the network to reserve bandwidth for an application data stream. For example, a program may ask the network for 1Mbps of bandwidth that other traffic cannot impact. Doing so provides the application with quality service: performance is not degraded for the requesting application because it has reserved all the bandwidth it should need, and all other applications are left to contend for the remaining bandwidth.
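In Cisco IOS, such a reservation becomes possible when RSVP is enabled on each interface along the path. A sketch of the interface configuration follows; the bandwidth figures are illustrative:

```
interface Serial0
 ! Allow RSVP to reserve up to 1152Kbps in total on this interface,
 ! with no single flow reserving more than 1000Kbps (roughly 1Mbps).
 ip rsvp bandwidth 1152 1000
```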

Under most circumstances, QoS is a factor only when bandwidth is limited. Most designers opt to provide additional bandwidth under the premise that no single application warrants more access to the network than another. Of course, in some instances, it is appropriate to reserve bandwidth. A typical situation would be for real-time data, including voice and video.

It is important for designers to note that RSVP is not a routing protocol. Rather, RSVP operates at the transport layer and is used to establish QoS over an existing routed path.

Designers planning to implement enterprise-wide multicast services or real-time data services should review and evaluate the QoS features available to them.

Redundancy and Load Balancing

One of the simplest redundancy options available to network designers is the Cisco proprietary HSRP, or Hot Standby Router Protocol. An HSRP configuration places two router interfaces on the local subnet and has them share a virtual MAC address and a virtual IP address. This sharing is permitted because only one HSRP interface is active at any time. Each interface also retains its original IP address and MAC address. This configuration is illustrated in Figure 13.2, which shows the left router as the HSRP primary and the right router as the HSRP secondary.


FIGURE 13.2  The Hot Standby Router Protocol


The non-proprietary counterpart to HSRP, the Virtual Router Redundancy Protocol (VRRP), is discussed later in this section.
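A minimal sketch of the primary router's HSRP configuration might look like the following; the addresses, group number, and priority are illustrative. The standby router would carry the same standby 1 ip statement with a lower (or default) priority:

```
interface Ethernet0
 ip address 10.1.1.2 255.255.255.0
 ! 10.1.1.1 is the shared (virtual) address that hosts
 ! use as their default gateway.
 standby 1 ip 10.1.1.1
 ! A priority above the default of 100 makes this router active.
 standby 1 priority 110
 ! Reclaim the active role after recovering from a failure.
 standby 1 preempt
```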

One of the keys to a redundant design is the use of monitoring tools and automatic failover. The term "failover" defines the actions necessary to provide comparable service in the event of a failure; the network fails over to another router, for example. In ATM installations, many designers opt to configure OAM (Operation, Administration, and Maintenance) cells. These cells provide connectivity information regarding the entire virtual circuit, as opposed to the physical connection. Because OAM cells can detect a failure faster than the routing protocol can, they are used to trigger an update.
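On an IOS router terminating the PVC, OAM management can be enabled per virtual circuit. The interface, VPI/VCI values, and retry timers below are illustrative, and command availability depends on the IOS release:

```
interface ATM0.1 point-to-point
 pvc 0/100
  ! Send OAM F5 loopback cells on this PVC and declare it
  ! down when responses stop arriving.
  oam-pvc manage
  oam retry 3 5 1
```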

Some network configurations use the backup interface function in the IOS to activate a standby link in the event of primary failure. This is an excellent solution for low-bandwidth requirements where circuit costs are high.
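A sketch of the backup interface configuration follows; the interface names and timers are illustrative:

```
interface Serial0
 ! Bring up Serial1 after Serial0 has been down for 10 seconds;
 ! deactivate it 60 seconds after Serial0 recovers.
 backup interface Serial1
 backup delay 10 60
```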

Perhaps the most redundant solution is to install multiple paths through the network. The majority of this book has focused on single paths through the network, in part because this concept is easier to understand. In fact, most examples in Cisco's extensive library of information and configurations fail to consider redundant paths through the network unless the specific topic demands this level of detail.

The best counsel regarding multiple circuit designs is to use a high-end routing protocol and the hierarchical model. In addition, it is advisable to consider more than just link failure when mapping circuits.

From a physical layer perspective, the network can fail at one of three points. These are illustrated in Figure 13.3.


FIGURE 13.3  Physical layer failure points

As shown in Figure 13.3, the middle failure point incorporates the WAN cloud. This encompasses failures in switching equipment and provider networks. Unfortunately, the only viable method for addressing this failure scenario is to select diverse providers. This solution can add to the cost of the network, and its diversity can be lost when the telecommunications vendors themselves merge.

The two end failure points actually encompass two different solution sets. The first is the physical entry into the building. For critical locations, designers should consider diverse entry paths into the building—possibly terminating in two different demarks, or demarcation points. A demarcation point is the point at which the telephone company turns over the cabling to the business. While this solution adds significantly to the costs, it can prevent a multitude of failures.

The second solution set incorporates the distribution layer destination. Consider an access layer site with two circuits from different providers that terminate into the same distribution layer building. Perhaps the designer improved on this design by terminating each circuit on a different router. While this design may be the only one available, the scenario of complete building failure quickly ruins such a design. Building failure may occur from an earthquake, a hurricane, a flood, a tornado, or non-nature-driven events, including civil unrest and power failure. Whenever possible, designers should opt for two physically separate termination points.

