17.3. Network infrastructure
Partitioning a low-bandwidth network should
ease the
constraints imposed by the network on
attribute-intensive applications, but may not necessarily address the
limitations encountered by data-intensive applications.
Data-intensive applications require high bandwidth, and may require
the hosts to be migrated onto higher bandwidth networks, such as Fast
Ethernet, FDDI, ATM, or Gigabit Ethernet. Recent advances in
networking as well as economies of scale have made high bandwidth and
switched networks more accessible. We explore their effects on NIS
and NFS in the remaining sections of this chapter.
17.3.1. Switched networks
Switched Ethernets have become
affordable and extremely popular in
the last few years, with configurations ranging from enterprise-class
switching networks with hundreds of ports, to the small 8- and
16-port Fast Ethernet switched networks used in small businesses.
Switched Ethernets are commonly found in configurations that use a
high-bandwidth interface into the server (such as Gigabit Ethernet)
and a switching hub that distributes the single fast network into a
large number of slower branches (such as Fast Ethernet ports). This
topology isolates a client's traffic to the server from the
other clients on the network, since each client is on a different
branch of the network. This reduces the collision rate, allowing each
client to use more of the available bandwidth when communicating with the server.
Although switched networks alleviate the impact of collisions, you
still have to watch for "impedance mismatches" between a
large number of client network segments and only a few server
segments. A typical problem in a switched network environment occurs
when many NFS clients, each capable of saturating its
own network segment, together overload the server's "narrow"
network segment.
Consider the case where 100 NFS clients and a single NFS server are
all connected to a switched Fast Ethernet. The server and each of its
clients have their own 100 Mbit/sec port on the switch. In this
configuration, the server can easily become bandwidth starved when
multiple concurrent requests from the NFS clients arrive over its
single network segment. To address this problem, you should provide
multiple network interfaces on the server, each connected to its own
100 Mbit/sec port on the switch. You can either turn on IP interface
groups on the server, so that the server can have more than one IP
address on the same subnet, or use the additional interfaces as
outbound networks for multiplexing out the NFS read replies. The
clients should use all of the server's IP addresses so that the
inbound requests arrive over the various network interfaces. You can
configure BIND
round-robin
[52]
if you don't want to hardcode the destination addresses.
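For example, a BIND zone file can simply list one address record per
server interface under the same name; the name server then rotates the
order of the addresses in successive answers, spreading clients across
the interfaces. The hostname and addresses below are hypothetical,
chosen for a server whose four interfaces sit on the same subnet:

    ; Hypothetical zone file entries for an NFS server "bigserver" with
    ; four Fast Ethernet interfaces. BIND rotates the order of these
    ; records in its answers (round-robin).
    bigserver    IN  A    192.168.5.10
    bigserver    IN  A    192.168.5.11
    bigserver    IN  A    192.168.5.12
    bigserver    IN  A    192.168.5.13

Clients that mount filesystems from bigserver by name are then
distributed across the four addresses, so their requests arrive over
different server interfaces.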
Alternatively, you can enable interface trunking on the server to
present the multiple network interfaces as a single IP address,
avoiding the need to mess with IP addressing and client naming
conventions.
Trunking
also
offers a measure of fault tolerance, since the trunked interface
keeps working even if one of the network interfaces fails. Finally,
trunking scales as you add more network interfaces to the server,
providing additional network bandwidth. Many switches provide a
combination of Fast Ethernet and Gigabit Ethernet channels as well.
They can also support the aggregation of these channels to provide
high bandwidth either to data center servers or to the backbone
network.
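As a sketch only: trunking products and their configuration commands
differ, but on Solaris 10 and later a similar effect is available
through the built-in dladm(1M) link aggregation. The device names,
aggregation key, and address below are hypothetical, and the exact
syntax varies by release and by trunking product:

    # Hypothetical aggregation of two Gigabit Ethernet devices into one
    # logical link; the key "1" names the aggregation aggr1.
    dladm create-aggr -d bge0 -d bge1 1
    ifconfig aggr1 plumb 192.168.5.10 netmask 255.255.255.0 up

If either bge0 or bge1 fails, traffic continues over the remaining
device, which is the fault-tolerance behavior described above.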
Heavily used NFS servers will benefit from their own
"fast" branch, but try to keep NFS clients and servers
logically close in the network topology. Try to minimize the number
of switches and routers that traffic must cross. A good rule of thumb
is to keep 80% of the traffic within the local network and let only
20% of it reach the backbone.
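A quick way to check how far apart a client and server are is
traceroute, which lists the routers (layer-3 hops) on the path;
switches operate at layer 2 and do not show up. The hostnames,
addresses, and timings in this hypothetical output are made up:

    % traceroute bigserver
    traceroute to bigserver (192.168.5.10), 30 hops max, 40 byte packets
     1  building-gw (192.168.10.1)  0.712 ms  0.524 ms  0.507 ms
     2  bigserver (192.168.5.10)    0.891 ms  0.823 ms  0.810 ms

Two lines of output mean the traffic crosses a single router; if NFS
traffic regularly shows several hops, consider moving the client or
server so that they share a branch of the network.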
17.3.2. ATM and FDDI networks
ATM (Asynchronous Transfer Mode) and FDDI (Fiber Distributed Data
Interface) networks are two other forms
of high-bandwidth networks that can sustain multiple high-speed
concurrent data exchanges with minimal degradation. ATM and FDDI are
somewhat more efficient than Fast Ethernet in data-intensive
environments because they use a larger MTU (Maximum Transmission Unit),
and therefore require fewer packets than Fast Ethernet to transmit the
same amount of information. Note that this does not necessarily
present an advantage in attribute-intensive environments, where the
requests are small and always fit in a single Fast Ethernet packet.
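As a rough illustration, ignoring RPC headers, an 8-KB NFS read reply
sent as a single UDP datagram (8192 bytes of data plus an 8-byte UDP
header) fragments as follows; the per-fragment payload is the MTU
minus the 20-byte IP header, rounded down to a multiple of 8:

    Network            MTU (bytes)    Fragments for an ~8-KB UDP datagram
    Ethernet               1500       8200 / 1480  ->  6 fragments
    FDDI                   4352       8200 / 4328  ->  2 fragments
    ATM (RFC 1626)         9180       fits in a single IP packet

Fewer fragments mean fewer interrupts and less per-packet processing
on both ends, which is where the efficiency advantage for large
transfers comes from.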
Although ATM promises scalable and seamless bandwidth, guaranteed QoS
(Quality of Service), integrated services (voice, video, and data),
and virtual networking, Ethernet technologies are not likely to be
displaced. Today, ATM has not been widely deployed outside backbone
networks. Many network administrators prefer to deploy Fast Ethernet
and Gigabit Ethernet because they are familiar with the protocol and
because it requires no changes to the packet format; existing analysis
and network management tools that operate at the network and transport
layers, and higher, continue to work as before. It is unlikely that
ATM will see significant deployment outside the backbone.