

Redundancy as a Design Consideration

The critical nature of mainframes in modern networks mandates the use of redundant links for connectivity. In IP-based mainframe installations, this redundancy is frequently provided by VIPA, or virtual IP addressing. In SNA environments, other techniques are used.

One of the most fundamental redundancy techniques in SNA designs is to install dual front-end processors (FEPs). Given the critical nature of the FEP in the network, this is a reasonable precaution.

When dual FEPs are configured, they share the same locally administered address (LAA). A client sends an explorer packet and connects to the first FEP that responds, using the RIF returned in that response.


The RIF provides a hop-by-hop path through the Layer 2 network. The path consists of ring numbers and bridge numbers.
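
As a hypothetical illustration, consider a client on ring 100 that reaches an FEP TIC on ring 200 through bridge 1. The RIF carried in the frame might appear as follows (the values are illustrative, shown in hexadecimal):

   0630.0641.0C80

Here, 0630 is the routing control field (indicating a six-byte RIF), 0641 is a route descriptor for ring 0x064 (100 decimal) via bridge 1, and 0C80 is the final descriptor for ring 0x0C8 (200 decimal), with the bridge number set to zero to mark the end of the path.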

The SNA session will not recover automatically from a failure of the host FEP. However, clients can reattach to the surviving FEP simply by sending a new explorer packet. These installations work best if each FEP has at least two TICs (Token Ring interface couplers) and two routers are deployed. Each TIC is configured with a presence on each ring serviced by the routers. This configuration is illustrated in Figure 10.3. Ring 100 is shown with thicker lines, whereas ring 200 is shown with thinner lines. The connections to the mainframe are omitted for clarity. Note that routers appear in the diagram, but SNA is not routable; the frames are actually bridged.

Redundant SNA designs may also make use of dual backbone rings. Under this design, connections to the FEPs remain available during partial ring failures, and bridge failures are addressed as well. This design is illustrated at a high level in Figure 10.4.


FIGURE 10.3  Redundant dual front-end processors


FIGURE 10.4  Redundant dual backbones

Experienced designers will be quick to note that explorer packets could be problematic under this design. The problem is best controlled by restricting the hop count of explorer packets. Presuming that the FEPs and servers are connected directly to the backbones (a common, albeit suboptimal, configuration), the maximum explorer hop count could be set to one. Connectivity between each user ring and the backbone would remain available, but client-to-client traffic would be blocked from leaving the local ring. These installations typically place servers directly on the user rings.
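
On Cisco routers, such a restriction can be applied at the interface. The following is a minimal sketch, assuming classic IOS source-route bridging; the ring, bridge, and virtual ring numbers are illustrative:

   interface TokenRing 0
    source-bridge 100 1 10
    source-bridge max-hops 1

The source-bridge max-hops command caps the number of bridges an explorer may cross; with a value of one, explorers can reach the backbone from a user ring but can travel no farther.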

A variation on the dual backbone design is the dual, collapsed-backbone design. This configuration establishes a virtual ring within each router to bridge the physical user rings and the rings that connect to the FEPs. The failure of either router, or its virtual ring, is covered by the other router and its connections.
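
In classic IOS, the virtual ring is defined with the source-bridge ring-group command, and each physical ring is then bridged into it. A minimal sketch, with illustrative ring numbers:

   source-bridge ring-group 10
   !
   interface TokenRing 0
    source-bridge 100 1 10
   !
   interface TokenRing 1
    source-bridge 200 1 10

Rings 100 and 200 are both bridged to virtual ring 10 within the router. The second router would carry an equivalent configuration, using a distinct bridge number wherever parallel paths join the same pair of rings.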

Queuing as a Design Consideration

Many designers find that the time-sensitive nature of SNA is problematic when the protocol must interoperate with other protocols on a shared network. This is one of the reasons that local acknowledgement and encapsulation are beneficial to the designer.
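
By way of context for the queuing examples that follow, local acknowledgement is typically enabled as part of RSRB TCP encapsulation. A minimal sketch, with illustrative ring-group numbers and peer addresses:

   source-bridge ring-group 10
   source-bridge remote-peer 10 tcp 10.1.1.1
   source-bridge remote-peer 10 tcp 10.1.1.2 local-ack

The first remote-peer statement identifies the local router (10.1.1.1); the second points to the far-end peer, and the local-ack keyword terminates LLC2 acknowledgements locally. The remote router carries the mirror-image configuration.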

There are times and installations when the designer does not wish to use these techniques to control SNA traffic. In these instances, the designer may employ queuing to give SNA traffic a higher priority, reducing the delay it experiences in the router’s buffer. Both of the queuing types discussed below are best suited to lower-bandwidth serial connections.

Priority queuing is a process-switched solution to queuing. Four output interface queues are established, and the processor always services the highest-priority queue that contains frames; lower queues are serviced only when the higher queues are empty. The queues are named, in descending order of service, high, medium, normal, and low.

This type of queuing is best suited to installations where SNA traffic is of the greatest importance to the company, because other traffic will be discarded in order to accommodate the higher-priority queue. Should the designer find that packets are consistently dropped, the real solution is to install more bandwidth. Even then, the benefit may remain: SNA traffic would, all things being equal, experience less latency than other protocols.

It is important to note that priority queuing is very CPU-intensive and requires frames to be process-switched, which is the slowest switching method available on the router. It is also possible that protocols in the lower-priority queues will never be serviced and their frames will be dropped.
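
As a sketch of the configuration in classic IOS, the following places RSRB traffic (TCP port 1996) in the high queue and all remaining traffic in the normal queue; the list and interface numbers are illustrative:

   priority-list 1 protocol ip high tcp 1996
   priority-list 1 default normal
   !
   interface Serial 0
    priority-group 1

Because locally acknowledged RSRB rides inside TCP, matching on port 1996 is a common way to single out the SNA traffic.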

Figure 10.5 illustrates priority queuing. Note that SNA traffic has been given high priority and, as a result, all of its packets are transmitted before the IP and IPX packets.


FIGURE 10.5  Priority queuing

Custom queuing is also available to prioritize SNA traffic and, like priority queuing, it is processor-intensive. However, it is much less likely to completely block traffic from lower-priority protocols. Rather than allocate all of the available bandwidth to a single high-priority queue, custom queuing defines up to 16 output interface queues that are serviced in sequence. The number of bytes permitted per pass through the sequence provides the prioritization. Suppose, for example, that the administrator wishes to provide roughly 75 percent of the circuit to SNA (RSRB) and the remainder to IP.

Under these objectives, the byte count for the SNA queue could be defined as 4,500 bytes, while 1,500 bytes are allocated to the IP queue. Individual installations and experience will help to refine the final parameters, but this configuration ensures that SNA receives roughly 75 percent of the bandwidth whenever both queues hold traffic.
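
Continuing the example in classic IOS, the allocation might be sketched as follows, again matching RSRB on TCP port 1996 with illustrative list and interface numbers:

   queue-list 1 protocol ip 1 tcp 1996
   queue-list 1 default 2
   queue-list 1 queue 1 byte-count 4500
   queue-list 1 queue 2 byte-count 1500
   !
   interface Serial 0
    custom-queue-list 1

Queue 1 may drain up to 4,500 bytes per pass and queue 2 up to 1,500, so SNA receives roughly three-quarters of the bandwidth whenever both queues remain full.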

Figure 10.6 demonstrates custom queuing. Note that SNA has been allocated 50 percent of the queue service, while IP and IPX each receive 25 percent. As a result, the last SNA packet must wait until the IP and IPX packets in the queue have been processed. Note that the right side of Figure 10.6 is read from right to left; the rightmost side shows the first packet exiting the router. Assuming full queues, this results in an SNA packet, an SNA packet, an IP packet, and an IPX packet, consistent with the percentages above. This process continues so long as all queues remain filled.


FIGURE 10.6  Custom queuing

Designers are apt to place queuing at the access layer of the network. This placement typically results in the least performance degradation and is consistent with the hierarchical model. However, in practice, queuing is configured when and where it makes the most sense to do so, perhaps ahead of a slow serial link or at an aggregation point. Because queuing is not free (there is a significant processing cost associated with it), most designers and administrators avoid using either type of queue unless there is a specific reason to do so.

It is also noteworthy that priority queuing should be regarded as a last-resort option and that queuing affects only outbound traffic. In priority queuing, high volumes of high-priority traffic will block all other traffic; custom queuing is generally preferable because every queue is serviced.

