The final advantage of using this distribution layer design in the three-tier model is that it greatly simplifies OSPF configuration. The network core becomes a natural area 0, while each distribution router becomes an area border router (ABR) between area 0 and the other areas.
FIGURE 1.10 The hierarchical model with addressing
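As a minimal sketch, a distribution router acting as an ABR might be configured as follows. The process ID, network statements, and area number are illustrative assumptions, not values taken from the figure:

    router ospf 1
     ! Links facing the core belong to the backbone, area 0
     network 10.0.0.0 0.0.0.255 area 0
     ! Links facing the attached access layers belong to a nonbackbone area
     network 10.1.0.0 0.0.255.255 area 1

Because only the distribution routers touch area 0, area design and summarization remain confined to a single, predictable point in the topology.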
Designers should use the distribution layer with an eye toward failure scenarios as well. Ideally, each distribution layer and its attached access layers should include its own DHCP (Dynamic Host Configuration Protocol) and WINS (Windows Internet Naming Service) servers, for example. Other critical network devices, such as e-mail and file servers, are also best placed in the distribution layer. This design provides two significant benefits. First, the distribution layer can continue to function in the event of a core failure. While the core should be designed to be fault-tolerant, in reality, network changes, service failures, and other issues demand that the designer develop a contingency plan for its unavailability. Second, most administrators prefer to have a number of servers for WINS and DHCP, for example. Placing these services at the distribution layer keeps the number of devices fairly low while establishing logical divisions, both of which simplify administration.
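For example, if the DHCP server sits at the distribution layer, each access layer router interface can relay client broadcasts to it with a helper address. The interface and server addresses below are hypothetical:

    interface Ethernet0
     ip address 10.1.2.1 255.255.255.0
     ! Relay DHCP client broadcasts to the distribution-layer server
     ip helper-address 10.1.1.10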
The network's ultimate purpose is to interconnect users, which is how the access layer completes the three-tier model. The access layer is responsible for connecting workgroups to backbones, blocking broadcasts, and grouping users based on common functions and services. Logical divisions are also maintained at the access layer. For example, dial-in services would be connected at an access layer point, making those users part of a single logical group. Depending on the network's overall size, it would likely be appropriate to place an authentication server for remote users at this point, although a single centrally located server may also be appropriate if fault tolerance is not required. It is helpful to think of the access layer as a leaf on a tree. Being farthest from the trunk and attached only via a branch, the path between any two access layers (leaves) is almost always the longest. The access layer is also the primary location for access lists and other security implementations. However, as noted previously, this is a textbook answer; many designers use the distribution layer as an aggregation point for security implementations.
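As a simple sketch of an access layer security implementation, the following access list permits only traffic sourced from the local workgroup's subnet; the subnet and interface are assumed for illustration:

    access-list 10 permit 10.1.2.0 0.0.0.255
    !
    interface Ethernet0
     ip address 10.1.2.1 255.255.255.0
     ! Accept only packets sourced from the local workgroup subnet
     ip access-group 10 in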
The three-tier model can greatly facilitate the network design process, so designers should follow its guidelines closely. Failure to do so may result in a suboptimal design. There may be good cause to deviate from these guidelines, but doing so is not recommended and usually forces additional compromises. The most common reason these rules are broken is financial.
Interconnect Layers via the Layer Just Above
There will be a great temptation to connect two access layers directly in order to address a change in the network. Figure 1.11 illustrates this implementation with the bold line between Routers A and B.
There are many arguments in favor of this approach, although all of them are in error. The contention will be made that the interconnection will reduce hop count, latency, cost, and other factors. In reality, however, connecting the two access groups eliminates the benefits of the three-tier model and will ultimately cost more. Most of the hop count and other concerns are moot in modern networks, and if they are legitimate issues, the designer should address those problems before deploying a work-around. Connecting access layers, or distribution layers, without using the core complicates troubleshooting, routing, and redundancy, and it undermines economies of scale. It can be done, and the arguments may be quite persuasive, but avoid doing it.
FIGURE 1.11 Interconnection of access layers
Connect End Stations to Access Layers
Ideally, the backbone should be reserved for controlled data flow. This includes making as few changes as possible in the core and, to a lesser degree, the distribution layer. While an exception might be made for a global service, such as DHCP, it is usually best to keep the core and distribution layers as clean as possible. Reliability, traffic management, capacity planning, and troubleshooting all benefit from this policy.
Design around the 80/20 Rule When Possible
Historically, networks were designed around the 80/20 rule, which states that 80 percent of the traffic should remain on the local segment and the remaining 20 percent could leave it. This was primarily due to the performance limitations of routers at the time.
Today, the 80/20 rule remains valid, but the designer will need to factor cost, security, and other considerations into this decision. New features, including route once/switch many technologies and server farms, have altered the 80/20 rule in many designs. The Internet and other remote services have also impacted these criteria. While it is preferable to keep traffic locally bound, in modern networks it is much more difficult to do so, and the benefits are not as great as before.
While the 80/20 rule does remain a good guideline, it is important to note that most modern networks are confronted with traffic models that follow the corollary of the 80/20 rule. The 20/80 rule acknowledges that 80 percent of the traffic is off the local subnet in most modern networks. This is the result of centralized server farms, database servers, and the Internet. Designers should keep this fact in mind when designing the network; some installations are already bordering on a 5/95 ratio. It is conceivable that less than five percent of the traffic will remain on the local subnet in the near term as bandwidth availability increases.
Network Design in the Real World: Outsourcing
In 1998 and 1999, the networking industry saw an explosion of outsourcing efforts to move responsibility for the data center away from the enterprise. The intent was to reduce costs and allow organizations to focus on their core business. While some of these efforts were less than successful, there is little doubt that contracting and outsourcing will remain acceptable strategies for many companies. The need for high-speed connections is one consequence of off-site data centers. A number of companies place their file servers in a remote, outsourced location, moving all of their data away from the user community. Should this trend continue, it is likely to take data off not only the user subnet (the origin of the 80/20 rule) but the local campus network as well.
Make Each Layer Represent a Layer 3 Boundary
This is possibly one of the easier guidelines to understand, as routers are included at each layer in the model and these routers define the Layer 3 boundaries. Therefore, this rule takes on a default status. It also relates to the policy of not linking various layers without using the layer just above, in that switches (Layer 2 devices) should not be used to interconnect access layer groups. Later in this text, the issues of spanning tree and Layer 3 designs will be presented; they relate well to this policy.
Note that this guideline also incorporates a separation of the broadcast and collision domains. Network design model layers cannot be isolated by collision domains alone, a function of Layer 2 devices, including bridges and switches. The layers must also be isolated via routers, which define the borders of the broadcast domain.
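A minimal sketch makes the point: each router interface below defines its own subnet and, therefore, its own broadcast domain. The addressing is purely illustrative:

    interface Ethernet0
     ! One access group, one subnet, one broadcast domain
     ip address 10.1.2.1 255.255.255.0
    !
    interface Ethernet1
     ! A second group behind a separate Layer 3 boundary
     ip address 10.1.3.1 255.255.255.0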