Chapter 9. Local Network Services

Now our attention turns to configuring local network servers. As with name service, these servers are not strictly required for the network to operate, but they provide services that are central to the network's purpose. There are many network services -- many more than can be covered in this chapter. Here we concentrate on servers that provide essential services for local clients. The services covered in this chapter are:
All of these software packages are designed to provide service to systems within your organization and are not intended to service outsiders. Essential services that are as important to external users as they are to in-house users, such as email, web service, and name service, are covered in separate chapters. We begin our discussion of local network services with NFS, which is the server that provides file sharing on Unix networks.

9.1. The Network File System

The Network File System (NFS) allows directories and files to be shared across a network. It was originally developed by Sun Microsystems but is now supported by virtually all Unix and many non-Unix operating systems. Through NFS, users and programs can access files located on remote systems as if they were local files. In a perfect NFS environment, the user neither knows nor cares where files are actually stored. NFS has several benefits:
There are two sides to NFS: a client side and a server side. The client is the system that uses the remote directories as if they were part of its local filesystem. The server is the system that makes the directories available for use. Attaching a remote directory to the local filesystem (a client function) is called mounting a directory. Offering a directory for remote access (a server function) is called sharing or exporting a directory.[98] Frequently, a system runs both the client and the server NFS software. In this section we'll look at how to configure a system to export and mount directories using NFS.
If you're responsible for an NFS server for a large site, you should take care in planning and implementing the NFS environment. This chapter describes how NFS is configured to run on a client and a server, but you may want more details to design an optimal NFS environment. For a comprehensive treatment, see Managing NFS and NIS by Hal Stern (O'Reilly & Associates).

9.1.1. NFS Daemons

The Network File System is run by several daemons, some performing client functions and some performing server functions. Before we discuss the NFS configuration, let's look at the function of the daemons that run NFS on a Solaris 8 system:
On a Solaris 8 system, the daemons necessary to run NFS are found in the /usr/lib/nfs directory. Most of these daemons are started at boot time by two scripts located in the /etc/init.d directory, nfs.client and nfs.server. The nfs.client script starts the statd and lockd programs.[99]
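If the client-side daemons don't appear to be running, you can invoke the same script the boot process uses and then check the process list. This is just a quick sanity check, assuming the standard Solaris 8 script location described above:

# /etc/init.d/nfs.client start
# ps -ef | grep statd
# ps -ef | grep lockd

The ps commands simply confirm that statd and lockd are now present in the process table.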
NFS server systems run those two daemons, plus the NFS server daemon (nfsd), the NFS logging daemon (nfslogd), and the mount server daemon (mountd). On Solaris systems, the nfs.server script starts mountd, nfslogd, and 16 copies of nfsd. Solaris systems do not normally start rquotad at boot time. Instead, rquotad is started by inetd, as this grep of the /etc/inetd.conf file shows:

$ grep rquotad /etc/inetd.conf
rquotad/1  tli  rpc/datagram_v  wait  root  /usr/lib/nfs/rquotad  rquotad

Each system has its own technique for starting these daemons. If some of the daemons aren't starting, ensure your startup scripts and your inetd.conf file are correct.

9.1.2. Sharing Unix Filesystems

The first step in configuring a server is deciding which filesystems will be shared and what restrictions will be placed on them. Only filesystems that provide a benefit to the client should be shared. Before you share a filesystem, think about what purpose it will serve. Some common reasons for sharing filesystems are:
Once you've selected the filesystems you'll share, you must configure them for sharing using the appropriate commands for your system. The following section emphasizes the way this is done on Solaris systems. It is very different on Linux systems, which are covered later. Check your system's documentation to find out exactly how it implements NFS file sharing.

9.1.2.1. The share command

On Solaris systems, directories are exported using the share command. A simplified syntax for the share command is:

share -F nfs [-o options] pathname

where pathname is the path of the directory the server is offering to share with its clients, and options are the access controls for that directory. The options are:
A few of the options contain an access list. The access list is a colon-separated list that identifies computers by individual hostnames, individual IP addresses, or by the domain, network, or NIS netgroup to which the hosts belong. The syntax of these list elements is:
The rw and ro options can be combined to grant different levels of access to different clients. For example:

share -F nfs -o rw=crab:horseshoe,ro /usr/man
share -F nfs -o rw=rodent:crab:horseshoe:jerboas /export/home/research

The first share command grants read and write access to crab and horseshoe, and read-only access to all other clients. On the other hand, the second share command grants read/write access to rodent, crab, horseshoe, and jerboas, and no access of any kind to any other client. The share command does not survive a boot. Put the share commands in the /etc/dfs/dfstab file to make sure that the filesystems continue to be offered to your clients even if the system reboots. Here is a sample dfstab file containing our two share commands:

% cat /etc/dfs/dfstab
#       place share(1M) commands here for automatic execution
#       on entering init state 3.
#
#       share [-F fstype] [ -o options] [-d "<text>"] <pathname> [resource]
#       e.g.,
#       share -F nfs -o rw=engineering -d "home dirs" /export/home2
share -F nfs -o rw=crab:horseshoe,ro /usr/man
share -F nfs -o rw=rodent:crab:horseshoe:jerboas /export/home/research

The share command, the dfstab file, and even the terminology "share" are Solaris-specific. Most Unix systems say that they are exporting files, instead of sharing files, when they are offering files to NFS clients. Furthermore, they do not use the share command or the dfstab file; instead, they offer filesystems through the /etc/exports file. Linux is an example of such a system.

9.1.2.2. The /etc/exports file

The /etc/exports file is the NFS server configuration file for Linux systems. It controls which files and directories are exported, which hosts can access them, and what kinds of access are allowed. A sample /etc/exports file might contain these entries:

/usr/man        crab(rw) horseshoe(rw) (ro)
/usr/local      (ro)
/home/research  rodent(rw) crab(rw) horseshoe(rw) jerboas(rw)

This sample file says that:
The options used in each of the entries in the /etc/exports file determine what kinds of access are allowed. The information derived from the sample file is based on the options specified on each line in the file. The general format of the entries is as follows:

directory [host(option)]...

directory names the directory or file that is available for export. The host is the name of the client granted access to the exported directory, while the option specifies the type of access being granted. In the sample /etc/exports file shown above, the host value is either the name of a single client or it is blank. When a single hostname is used, access is granted to the individual client. If no host value is specified, the directory is exported to everyone. Like Solaris, Linux also accepts values for domains, networks, and netgroups, although the syntax is slightly different. Valid host values are:
Notice that in Linux, domain names begin with an asterisk (*), instead of the dot used in Solaris. Also note that the at-sign begins a netgroup name, whereas in Solaris the at-sign is used at the beginning of a network address. The options used in the sample /etc/exports file are:
Although specific hosts are granted read/write access to some of these directories, the access granted to individual users of those systems is controlled by standard Unix user, group, and world file permissions based on the user's user ID (UID) and group ID (GID). NFS trusts that a remote host has authenticated its users and assigned them valid UIDs and GIDs. Exporting files grants the client system's users the same access to the files they would have if they logged directly into the server. This assumes, of course, that both the client and the server have assigned exactly the same UIDs and GIDs to the same users, which is not always the case. If both the client and the server assign the same UID to a given user, for example, if Craig is assigned 501 on both systems, then both systems properly identify Craig and grant him appropriate access to his files. On the other hand, if the client assigns Craig a UID of 501 and the server has assigned that UID to Michael, the server will grant Craig access to Michael's files as if Craig owned those files. NFS provides several tools to deal with the problems that arise because of mismatched UIDs and GIDs. One obvious problem is dealing with the root account. It is very unlikely that you want people with root access to your clients to also have root access to your server. By default, NFS prevents this with the root_squash setting, which maps requests that contain the root UID and GID to the nobody UID and GID. Thus if someone is logged into a client as root, they are only granted world permissions on the server. You can undo this with the no_root_squash setting, but no_root_squash opens a potential security hole. Map other UIDs and GIDs to nobody with the squash_uids, squash_gids, and all_squash options. all_squash maps every user of a client system to the user nobody. squash_uids and squash_gids map specific UIDs and GIDs. For example:

/pub            (ro,all_squash)
/usr/local/pub  (squash_uids=0-50,squash_gids=0-50)

The first entry exports the /pub directory with read-only access to every client. It limits every user of those clients to the world permissions granted to nobody, meaning that the only files the users can read are those that have world read permission. The second entry exports /usr/local/pub to every client with default read/write permission. The squash_uids and squash_gids options in the example show that a range of UIDs and GIDs can be specified in some options.[100] A single UID or GID can be defined with these options, but it is frequently useful to affect a range of values with a single command. In the example we prevent users from accessing the directory with a UID or GID of 50 or less. These low numbers are usually assigned to non-user accounts. For example, on our Linux system, UID 10 is assigned to uucp. Attempting to write a file as uucp would cause the file to be written with the owner mapped to nobody. Thus the user uucp would be able to write to the /usr/local/pub directory only if that directory had world write permission.
It is also possible to map every user from a client to a specific user ID or group ID. The anonuid and anongid options provide this capability. These options are most useful when the client has only one user and does not assign that user a UID or GID, for example, in the case of a Microsoft Windows PC running NFS. PCs generally have only one user and they don't use UIDs or GIDs. To map the user of a PC to a valid user ID and group ID, enter a line like this in the /etc/exports file:

/home/alana     giant(all_squash,anonuid=1001,anongid=1001)

In this example, the hostname of Alana's PC is giant. The entry grants that client read/write access to the directory /home/alana. The all_squash option maps every request from that client to a specific UID, but this time, instead of nobody, it maps to the UID and the GID defined by the anonuid and anongid options. Of course, for this to work correctly, 1001:1001 should be the UID and GID pair assigned to alana in the /etc/passwd file. A single mapping is sufficient for a PC, but it might not handle all of the mapping needed for a Unix client. Unix clients assign their users UIDs and GIDs. Problems occur if those differ from the UIDs and GIDs assigned to those same users on the NFS server. Use the map_static option to point to a file that maps the UIDs and GIDs for a specific client. For example:

/export/oscon   oscon(map_static=/etc/nfs/oscon.map)

This entry says that the /export/oscon directory is exported to the client oscon with read/write permission. The map_static option points to a file on the server named /etc/nfs/oscon.map that maps the UIDs and GIDs used on oscon to those used on the server. The oscon.map file might contain the following entries:

# UID/GID mapping for client oscon
# remote        local           comment
uid     0-50    -               #squash these
gid     0-50    -               #squash these
uid     100-200 1000            #map 100-200 to 1000-1100
gid     100-200 1000            #map 100-200 to 1000-1100
uid     501     2001            #map individual user
gid     501     2001            #map individual user

The first two lines map the UIDs and GIDs from 0 to 50 to the user nobody. The next two lines map all of the client UIDs and GIDs in the range of 100 to 200 to corresponding numbers in the range of 1000 to 1100 on the server. In other words, 105 on the client maps to 1005 on the server. This is the most common type of entry. On most systems, existing UIDs and GIDs have been assigned sequentially. Often several systems have assigned UIDs and GIDs sequentially starting at 101, but to different users, in a completely uncoordinated manner. This entry maps the users on oscon to UIDs and GIDs starting at 1000. Another file might map the 100 to 200 entries of another client to UIDs and GIDs starting at 2000. A third file might map yet another client to 3000. This type of entry allows the server to coordinate UIDs and GIDs where no coordination exists. The last two lines map an individual user's UID and GID. This is less commonly required, but it is possible.

9.1.2.3. The exportfs command

After defining the directories in the /etc/exports file, run the exportfs command to process the exports file and to build /var/lib/nfs/xtab. The xtab file contains information about the currently exported directories, and it is the file that mountd reads when processing client mount requests. To process all of the entries in the /etc/exports file, run exportfs with the -a command-line option:

# exportfs -a

This command, which exports everything in the exports file, is normally run during the boot from a startup script.
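Once exportfs has run, it is worth confirming what the server is actually offering. A quick check, assuming a Linux server with the standard nfs-utils tools installed:

# exportfs -v
# showmount -e localhost

exportfs -v lists the currently exported directories along with the options in effect for each one, and showmount -e queries mountd directly, which also confirms that the daemon is answering requests.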
To force changes in the /etc/exports file to take effect without rebooting the system, use the -r argument:

# exportfs -r

The -r switch synchronizes the contents of the exports file and the xtab file. Items that have been added to the exports file are added to the xtab file, and items that have been deleted are removed from xtab. The exportfs command can export a directory that is not listed in the /etc/exports file. For example, to temporarily export /usr/local to the client fox with read/write permission, enter this command:

# exportfs fox:/usr/local -o rw

After the client has completed its work with the temporarily exported filesystem, the directory is removed from the export list with the -u option, as shown:

# exportfs -u fox:/usr/local

The -u option can be combined with the -a option to completely shut down all exports without terminating the NFS daemon:

# exportfs -ua

Once the server exports or shares the appropriate filesystems, the clients can mount and use those filesystems. The next section looks at how an NFS client system is configured.

9.1.3. Mounting Remote Filesystems

Some basic information is required before you can decide which NFS directories to mount on your system. You need to know which servers are connected to your network and which directories are available from those servers. A directory cannot be mounted unless it is first exported by a server. Your network administrator is a good source for this information. The administrator can tell you what systems are providing NFS service, what directories they are exporting, and what these directories contain. If you are the administrator of an NFS server, you should develop this type of information for your users. See Chapter 4, "Getting Started" for advice on planning and distributing network information. On Solaris and Linux systems, you can also obtain information about the shared directories directly from the servers by using the showmount command. The NFS servers are usually the same centrally supported systems that provide other services such as mail and DNS. Select a likely server and query it with the command showmount -e hostname. In response to this command, the server lists the directories that it exports and the conditions applied to their export. For example, a showmount -e query to jerboas produces the following output:

% showmount -e jerboas
export list for jerboas:
/usr/man        (everyone)
/home/research  rodent,crab,limulus,horseshoe
/usr/local      (everyone)

The export list shows the NFS directories exported by jerboas as well as who is allowed to access those directories. From this list, rodent's administrator may decide to mount any of the directories offered by jerboas. Our imaginary administrator decides to:
These selections represent some of the most common motivations for mounting NFS directories:
The extent to which you use NFS is a personal choice. Some people prefer the greater personal control you get from keeping files locally, while others prefer the convenience offered by NFS. Your site may have guidelines for how NFS should be used, which directories should be mounted, and which files should be centrally maintained. Check with your network administrator if you're unsure about how NFS is used at your site.

9.1.3.1. The mount command

A client must mount a shared directory before using it. "Mounting" the directory attaches it to the client's filesystem hierarchy. Only directories offered by the servers can be mounted, but any part of the offered directory, such as a subdirectory or a file, can be mounted. NFS directories are mounted using the mount command. The general structure of the mount command is:

mount hostname:remote-directory local-directory

The hostname identifies an NFS server, and the remote-directory identifies all or part of a directory offered by that server. The mount command attaches that remote directory to the client's filesystem using the directory name provided for local-directory. The client's local directory, called the mount point, must be created before mount is executed. Once the mount is completed, files located in the remote directory can be accessed through the local directory exactly as if they were local files. For example, assume that jerboas.wrotethebook.com is an NFS server and that it shares the files shown in the previous section. Further assume that the administrator of rodent wants to access the /home/research directory. The administrator simply creates a local /home/research directory and mounts the remote /home/research directory offered by jerboas on this newly created mount point:

# mkdir /home/research
# mount jerboas:/home/research /home/research

In this example, the local system knows to mount an NFS filesystem because the remote directory is preceded by a hostname and NFS is the default network filesystem for this client. NFS is the most common default network filesystem. If your client system does not default to NFS, specify NFS directly on the mount command line. On a Solaris 8 system, the -F switch is used to identify the filesystem type:

# mount -F nfs jerboas:/home/research /home/research

On a Linux system the -t switch is used:

# mount -t nfs jerboas:/home/research /home/research

Once a remote directory is mounted, it stays attached to the local filesystem until it is explicitly dismounted or the local system reboots. To dismount a directory, use the umount command. On the umount command line, specify either the local or remote name of the directory that is to be dismounted. For example, the administrator of rodent can dismount the remote jerboas:/home/research filesystem from the local /home/research mount point with either:

# umount /home/research

or:

# umount jerboas:/home/research

Booting also dismounts NFS directories. Because systems frequently wish to mount the same filesystems every time they boot, Unix provides a system for automatically remounting after a boot.

9.1.3.2. The vfstab and fstab files

Unix systems use the information provided in a special table to remount all types of filesystems, including NFS directories, after a system reboot. The table is a critical part of providing users consistent access to software and files, so care should be taken whenever it is modified. Two different files with two different formats are used for this purpose by the different flavors of Unix.
Linux and BSD systems use the /etc/fstab file, and Solaris, our System V example, uses the /etc/vfstab file. The format of the NFS entries in the Solaris vfstab file is:

filesystem - mountpoint nfs - yes options

The various fields in the entry must appear in the order shown and be separated by whitespace. The items not in italics (both dashes and the words nfs and yes) are keywords that must appear exactly as shown. filesystem is the name of the directory offered by the server, mountpoint is the pathname of the local mount point, and options are the mount options discussed below. A sample NFS vfstab entry is:

jerboas:/home/research - /home/research nfs - yes rw,soft

This entry mounts the NFS filesystem jerboas:/home/research on the local mount point /home/research. The filesystem is mounted with the rw and soft options set. We previously discussed the commonly used read/write (rw) and read-only (ro) options, and there are many more NFS options. The NFS mount options available on Solaris systems are:
On the Solaris system, the filesystems defined in the vfstab file are mounted by a mountall command located in a startup file. On Linux systems, the startup file contains a mount command with the -a flag set, which causes Linux to mount all filesystems listed in fstab.[101]
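You can run the same command by hand on a Linux system after editing the fstab file, rather than waiting for a reboot. A minimal sketch, restricting the mount to NFS entries so that already-mounted local filesystems are left alone:

# mount -a -t nfs

The -a flag tells mount to work through every entry in /etc/fstab, and -t nfs limits it to the entries of type nfs.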
The format of NFS entries in the /etc/fstab file is:

filesystem mountpoint nfs options

The fields must appear in the order shown and must be separated by whitespace. The keyword nfs is required for NFS filesystems. filesystem is the name of the directory being mounted. mountpoint is the pathname of the local mount point. options are any of the Linux mount options. Linux uses most of the same NFS mount options as Solaris. rsize, wsize, timeo, retrans, acregmin, acregmax, acdirmin, acdirmax, actimeo, retry, port, bg, fg, soft, hard, intr, nointr, ac, noac, and posix are all options that Linux has in common with Solaris. In addition to these, Linux uses:
Finally, there are several options that are not specific to NFS and can be used on the mount command for any type of filesystem. Table 9-1 lists the common mount options used on Linux systems. Table 9-1. Common mount options
A grep of fstab shows sample NFS entries.[102]
% grep nfs /etc/fstab
jerboas:/usr/spool/mail  /usr/spool/mail  nfs  rw  0  0
jerboas:/usr/man         /usr/man         nfs  rw  0  0
jerboas:/home/research   /home/research   nfs  rw  0  0

The grep shows that there are three NFS filesystems contained in the /etc/fstab file. The mount -a command in the boot script remounts these three directories every time the system boots. The vfstab and fstab files are the most common methods used for mounting filesystems at boot time. There is another technique that automatically mounts NFS filesystems, but only when they are actually needed. It is called automounter.

9.1.4. NFS Automounter

An automounter is a feature available on most NFS clients. Two varieties of automounters are in widespread use: autofs and amd. The Automounter Filesystem (autofs) is the automounter implementation that comes with Solaris and Linux, and it is the implementation we cover in this section. Automounter Daemon (amd) is available for many Unix versions and is included with Linux but not with Solaris. To find out more about amd, see Linux NFS and Automounter Administration written by Erez Zadok, the amd maintainer. In this section, automounter and automounter daemon refer to the version of autofs that comes with Solaris 8. The automounter configuration files are called maps. Three basic map types are used to define the automounter filesystem:
On Solaris systems the automounter daemon (automountd) and the automount command are started by the /etc/init.d/autofs script. The script is run with the start option to start automounter, i.e., autofs start. It is run with the stop option to shut down automounter. automount and automountd are two distinct, separate programs. automountd runs as a daemon and dynamically mounts filesystems when they are needed. automount processes the auto_master file to determine the filesystems that can be dynamically mounted. To use automounter, first configure the /etc/auto_master file. Entries in the auto_master file have this format:

mount-point map-name options

The Solaris system comes with a default auto_master file preconfigured. Customize the file for your configuration. Comment out the +auto_master entry unless you run NIS+ or NIS and your servers offer a centrally maintained auto_master map. Also ignore the /xfn entry, which is for creating a federated (composite) name service. Add an entry for your direct map. In the example, this is called auto_direct. Here is /etc/auto_master after our modifications:

# Master map for automounter
#
#+auto_master
#/xfn   -xfn
/net    -hosts  -nosuid
/home   auto_home
/-      auto_direct

All lines that begin with a sharp sign (#) are comments, including the +auto_master and /xfn lines we commented out. The first real entry in the file specifies that the shared filesystems offered by every NFS server listed in the /etc/hosts file are automatically mounted under the /net directory. A subdirectory is created for each server under /net using the server's hostname. For example, assume that jerboas is listed in the hosts file and that it exports the /usr/local directory. This auto_master entry automatically makes that remote directory available on the local host as /net/jerboas/usr/local. The second entry automatically mounts the home directories listed in the /etc/auto_home map under the /home directory. A default /etc/auto_home file is provided with the Solaris system. Comment out the +auto_home entry found in the default file. It is used only if you run NIS+ or NIS and your servers offer a centrally maintained auto_home map. Add entries for individual user home directories or for all home directories from specific servers. Here is a modified auto_home map:

# Home directory map for automounter
#
#+auto_home
craig   crab:/export/home/craig
*       horseshoe:/export/home/&

The first entry mounts the /export/home/craig filesystem shared by crab on the local mount point /home/craig. The auto_home map is an indirect map, so the mount point specified in the map (craig) is relative to the /home mount point defined in the auto_master map. The second entry mounts every home directory found in the /export/home filesystem offered by horseshoe to a "like-named" mount point on the local host. For example, assume that horseshoe has two home directories, /export/home/daniel and /export/home/kristin. Automounter makes them both available on the local host as /home/daniel and /home/kristin. The asterisk (*) and the ampersand (&) are wildcard characters used specifically for this purpose in autofs maps. That's it for the auto_home map. Refer back to the auto_master map. The third and final entry in the /etc/auto_master file is:

/-      auto_direct

We added this entry for our direct map. The special mount point /- means that the map name refers to a direct map. Therefore the real mount points are found in the direct map file. We named our direct map file /etc/auto_direct.
There is no default direct map file; you must create it from scratch. The file we created is:

# Direct map for automounter
#
/home/research  -rw             jerboas:/home/research
/usr/man        -ro,soft        horseshoe,crab,jerboas:/usr/share/man

The format of entries in a direct map file is:

mount-point options remote filesystem

Our sample file contains two typical entries. The first entry mounts the remote filesystem /home/research offered by the server jerboas on the local mount point /home/research. It is mounted read/write. The second entry mounts the manpages read-only with a "soft" timeout.[103] Note that three servers are specified for the manpages in a comma-separated list. If a server is unavailable or fails to respond within the soft timeout period, the client asks the next server in the list. This is one of the nice features of automounter.
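After you create or change maps, automounter has to reread them before the new entries take effect. On a Solaris 8 system this is normally done either by rerunning the automount command or by restarting autofs with the script mentioned earlier. A minimal sketch, assuming the script location described above:

# automount -v

or:

# /etc/init.d/autofs stop
# /etc/init.d/autofs start

The -v option simply makes automount report the mount points it adds or removes as it processes the maps.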
Automounter has four key features: the -hosts map, wildcarding, automounting, and multiple servers. The -hosts map makes every exported filesystem from every server listed in the /etc/hosts file available to the local user. The wildcard characters make it very easy to mount every directory from a remote server to a like-named directory on the local system. Automounting goes hand-in-glove with these two features because only the filesystems that are actually used are mounted. While -hosts and wildcards make a very large number of filesystems available to the local host, automounting limits the filesystems that are actually mounted to those that are needed. The last feature, multiple servers, improves the reliability of NFS by removing the dependence on a single server.