6.3. Mounting filesystems
This section uses filenames and command names
specific to Solaris. Note that you are better off using the
automounter (see
Chapter 9, "The Automounter")
to
mount
filesystems, rather than using the
mount utility
described in this section. However, understanding the automounter,
and why it is better than
mount, requires
understanding
mount. Thus, we will discuss the
concept of NFS filesystem mounting in the context of
mount.
Solaris has different component names from non-Solaris systems.
Table 6-3 shows the rough equivalents to non-Solaris
systems.
Table 6-3. Correspondence of Solaris and non-Solaris mount components
Description                                                  Solaris       Non-Solaris
List of filesystems                                          /etc/vfstab   /etc/fstab
List of mounted filesystems                                  /etc/mnttab   /etc/mtab
RPC program number to network address mapper (portmapper)    rpcbind       portmap
MOUNT daemon                                                 mountd        rpc.mountd
NFS clients can mount any filesystem, or part of a filesystem, that
has been exported from an NFS server. The filesystem can be
listed in the client's
/etc/vfstab file, or it can be mounted
explicitly using the mount(1M)
command.
(Also, in Solaris, see the mount_nfs(1M)
manpage, which explains NFS-specific details of filesystem mounting.)
NFS filesystems appear to be "normal" filesystems on the
client, which means that they can be mounted on any directory on the
client. It's possible to mount an NFS filesystem over all or
part of another filesystem, since the directories used as mount
points appear the same no matter where they actually reside. When you
mount a filesystem on top of another one, you obscure whatever is
"under" the mount point. NFS clients see the most recent
view of the filesystem. These potentially confusing issues will be
the foundation for the discussion of NFS naming schemes later in this
chapter.
6.3.1. Using /etc/vfstab
Adding entries to
/etc/vfstab is one way to
mount
NFS filesystems. Once the entry has been added to the
vfstab file, the client mounts it on every
reboot. There are several features that distinguish NFS filesystems
in the vfstab file:
-
The "device name" field is replaced with a
server:filesystem specification, where the
filesystem name is a pathname (not a device name) on the server.
-
The "raw device name" field that is checked with
fsck, is replaced with a -.
-
The filesystem type is nfs, not
ufs as for local filesystems.
-
The fsck pass is set to -.
-
The options field can contain a variety of NFS-specific mount
options, covered in Section 6.3.2, "Using mount".
Some typical
vfstab entries for NFS filesystems
are:
ono:/export/ono      -  /hosts/ono    nfs  -  yes  rw,bg,hard
onaga:/export/onaga  -  /hosts/onaga  nfs  -  yes  rw,bg,hard
wahoo:/var/mail      -  /var/mail     nfs  -  yes  rw,bg,hard
The yes in the above
entries says to mount the filesystems whenever the system boots up.
This field can be yes or
no, and has the same effect for NFS and non-NFS
filesystems.
Of course, each vendor is free to vary the server and filesystem name
syntax, and your manual set should provide the best
sample
vfstab entries.
6.3.2. Using mount
While entries in the vfstab file are
useful
for creating a long-lived NFS
environment, sometimes you need to mount a filesystem right away or
mount it temporarily while you copy files from it. The
mount command allows you to perform an NFS
filesystem mount that remains active until you explicitly unmount the
filesystem using
umount, or until the client is
rebooted.
As an example of using
mount, consider building
and testing a new
/usr/local directory. On an
NFS client, you already have the "old"
/usr/local, either on a local or NFS-mounted
filesystem. Let's say you have built a new version of
/usr/local on the NFS server
wahoo and want to test it on this NFS client.
Mount the new filesystem on top of the existing
/usr/local:
# mount wahoo:/usr/local /usr/local
Anything in the old
/usr/local is hidden by the
new mount point, so you can debug your new
/usr/local as if it were mounted at boot time.
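When you are done testing, unmounting the new filesystem exposes the
old /usr/local again:
# umount /usr/local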
From the command line,
mount uses a server name
and filesystem name syntax similar to that of the
vfstab file. The
mount
command assumes that the type is
nfs if a
hostname appears in the device specification. The server filesystem
name must be an absolute pathname (usually starting with a leading
/), but it need not exactly match the name of a filesystem exported
from the server. Barring the use of the
nosub
option on the server (see
Section 6.2.2, "Exporting options" earlier in this chapter), the only
restriction on server filesystem names is that they must contain a
valid, exported server filesystem name as a prefix. This means that
you can mount a subdirectory of an exported filesystem, as long as
you specify the entire pathname to the subdirectory in either the
vfstab file or on the
mount
command line. Note that the
rw and
hard suboptions are redundant since they are the
defaults (in Solaris at least). This book often specifies them in
examples to make it clear what the semantics will be.
For example, to mount a particular home directory from
/export/home of server ono, you do not have to
mount the entire filesystem. Picking up only the subdirectory
that's needed may make the local filesystem hierarchy simpler
and less cluttered. To mount a subdirectory of a server's
exported filesystem, just specify the pathname to that directory in
the vfstab file:
ono:/export/home/stern - /users/stern nfs - yes rw,bg,hard
Even though server
ono exports all of
/export/home, you can choose to handle some
smaller portion of the
entire filesystem.
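The same subdirectory mount can be performed explicitly from the
command line, using the server and pathname from the vfstab entry
above:
# mount ono:/export/home/stern /users/stern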
6.3.3. Mount options
NFS mount options are as varied as the vendors themselves. There are
a few well-known and widely supported options, and others that are
added to support additional NFS features or to integrate secure
remote procedure call systems. As with everything else that is
vendor-specific, your system's manual set provides a complete
list of supported mount options. Check the manual pages for
mount(1M), mount_nfs(1M), and
vfstab(4).
TIP:
For the most part, the default set of mount options will serve you
fine. However, pay particular attention to the
nosuid suboption, which is described in Chapter 12, "Network Security". The nosuid suboption is
not the default in Solaris, but perhaps it ought to be.
The Solaris
mount command syntax for
mounting
NFS filesystems is:
mount [ -F nfs ] [-mrO] [ -o suboptions ] server:pathname
mount [ -F nfs ] [-mrO] [ -o suboptions ] mount_point
mount [ -F nfs ] [-mrO] [ -o suboptions ] server:pathname mount_point
mount [ -F nfs ] [-mrO] [ -o suboptions ] server1:pathname1,server2:pathname2,...serverN:pathnameN mount_point
mount [ -F nfs ] [-mrO] [ -o suboptions ] server1,server2,...serverN:pathname mount_point
The first two forms are used when mounting a filesystem listed in the
vfstab file. Note that
server is the hostname of the NFS server. The
last two forms are used when mounting replicas. See
Section 6.6, "Naming schemes" later in this chapter.
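As a sketch of the replica forms, a read-only filesystem available
from two servers (the hostnames here are hypothetical) could be
mounted as:
# mount -o ro serverA:/export/man,serverB:/export/man /usr/share/man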
The
-F nfs option is used to specify that the
filesystem being mounted is of type NFS. The option is not necessary
because the filesystem type can be discerned from the presence of
host:
pathname on the
command line.
The
-r option says to mount the filesystem as
read-only. The
preferred
way to specify read-only is the
ro suboption to
the
-o option.
The
-m option says to not record the entry in
the
/etc/mnttab file.
The
-O option says to permit the filesystem to
be mounted over an existing mount point. Normally if
mount_point already has a filesystem mounted on
it, the
mount command will fail with a
filesystem busy error.
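For example, to mount over the /usr/local mount point from the
earlier example without unmounting it first:
# mount -O wahoo:/usr/local /usr/local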
In addition, you can use
-o to specify
suboptions. Suboptions can also be specified (without
-o) in the mount options field in
/etc/vfstab. The common
NFS mount suboptions are:
- rw/ro
-
rw mounts a filesystem as read-write; this is
the default. If ro is specified, the filesystem
is mounted as read-only. Use the ro option if
the server enforces write protection for various filesystems.
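For example, to mount a manual page filesystem read-only (the server
and pathnames here are hypothetical):
# mount -o ro wahoo:/export/man /usr/share/man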
- bg/fg
-
The bg option tells mount
to retry a failed mount attempt in the background, allowing the
foreground mount process to continue. By
default, NFS mounts are not performed in the background, so
fg is the default. We'll discuss the
bg option further in the next section. Note that
the bg option does not apply to the automounter
(see Chapter 9, "The Automounter").
- grpid
-
Since Solaris is a derivative of Unix System V, it will by default
obey System V semantics. One area in which System V differs from 4.x
BSD systems is in the group identifier of newly created files. System
V will set the group identifier to the effective group identifier of
the calling process. If the grpid option is set,
BSD semantics are used, and so the group identifier is always
inherited from the file's directory. You can control this
behavior on a per-directory basis by not specifying
grpid, and instead setting the set group id bit
on the directory with the chmod command:
% chmod g+s /export/home/dir
If the set group id bit is set, then even if
grpid is absent, the group identifier of a
created file is inherited from the group identifier of the
file's directory. So for example:
% chmod g+s /export/home/dir
% ls -ld /export/home/dir
drwxr-sr-x 6 mre writers 3584 May 24 09:17 /export/home/dir/
% touch /export/home/dir/test
% ls -l /export/home/dir/test
-rw-r--r-- 1 mre writers 0 May 27 06:07 /export/home/dir/test
- quota/noquota
-
Enables/prevents the quota command from checking quotas on the
filesystem.
- port=n
-
Specify the port number of the NFS server. The default is to use the
port number returned by rpcbind. This
option is typically used to support pseudo NFS servers that run on
the same machine as the NFS client. The Solaris removable media
(CD-ROMs and floppy disks) manager (vold) is an
example of such a server.
- public
-
This option is useful for environments that have to cope with
firewalls. We will discuss it in more detail in Chapter 12, "Network Security".
- suid/nosuid
-
Under some situations, the nosuid option
prevents security exposures. The default is
suid. We will go into more detail in Chapter 12, "Network Security".
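For example, a vfstab entry that disables setuid execution on a
mounted filesystem (the server and pathnames here are hypothetical)
would be:
wahoo:/export/tools - /tools nfs - yes rw,hard,nosuid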
- sec=mode
-
This option lets you set the security mode used
on the filesystem. Valid security modes are as specified in Section 6.2.2, "Exporting options" earlier in this chapter.
If you're using NFS Version 3, normally you need not be
concerned with security modes in vfstab or the
mount command, because Version 3 has a way to
negotiate the security mode. We will go into more detail in Chapter 12, "Network Security".
- hard/soft
-
By default, NFS filesystems are hard mounted,
and operations on them are retried until they are acknowledged by the
server. If the soft option is specified, an NFS
RPC call returns a timeout error if it fails the number of times
specified by the retrans option.
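For example, to soft-mount a read-only filesystem so that a crashed
server produces errors rather than hangs (the hostname is
hypothetical; see Section 6.3.5, "Hard and soft mounts" for why soft
mounts should be read-only):
# mount -o ro,soft,retrans=5 wahoo:/export/archive /archive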
- vers=version
-
The NFS protocol supports two versions: 2 and 3.
By default, the mount command will attempt to
use Version 3 if the server also supports Version 3; otherwise, the
mount will use Version 2. Once the protocol
version is negotiated, the version is bound to the filesystem until
it is unmounted and remounted. If you are mounting multiple
filesystems from the same server, you can use different versions of
NFS. The binding of the NFS protocol versions is per mount point and
not per NFS client/server pair. Note the NFS protocol version is
independent of the transport protocol used. See the discussion of the
proto option later in this section.
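For example, to force a Version 2 mount even when the server supports
Version 3 (the hostname is hypothetical):
# mount -o vers=2 wahoo:/export/home /mnt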
- proto=protocol
-
The NFS protocol supports arbitrary transport protocols, both
connection-oriented and connectionless. TCP is the commonly used
connection-oriented protocol for NFS, and UDP is the commonly used
connectionless protocol. The protocol specified
in the proto option is the
netid field (the first field) in the
/etc/netconfig file. While the
/etc/netconfig file supports several different
netids, practically speaking, the only ones NFS supports today are
tcp and udp. By default,
the mount command will select TCP over UDP if
the server supports TCP. Otherwise UDP will be used.
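For example, to explicitly select UDP as the transport (the hostname
is hypothetical):
# mount -o proto=udp wahoo:/export/home /mnt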
TIP:
It is a popular misconception that NFS Version 3 and NFS over TCP are
synonymous. As noted previously, the NFS protocol version is
independent of the transport protocol used. You can have NFS Version
2 clients and servers that support TCP and UDP (or just TCP, or just
UDP). Similarly, you can have NFS Version 3 clients that support TCP
and UDP (or just TCP, or just UDP). This misconception arose because
Solaris 2.5 introduced both NFS Version 3 and NFS over TCP at the
same time, and so NFS mounts that previously used NFS Version 2 over
UDP now use NFS Version 3 over TCP.
- retrans/timeo
-
The retrans option specifies the number of times
to repeat an RPC request before returning a timeout error on a
soft-mounted filesystem. The retrans option is
ignored if the filesystem is using TCP. This is because it is assumed
that the system's TCP protocol driver will do a better job
than the user of the mount command of judging
the necessary TCP level retransmissions. Thus when using TCP, the RPC
is sent just once before returning an error on a
soft mounted filesystem. The
timeo parameter varies the RPC timeout period
and is given in tenths of a second. For example, in
/etc/vfstab, you could have:
onaga:/export/home/mre - /users/mre nfs - yes rw,proto=udp,retrans=6,timeo=11
- retry=n
-
This option specifies the number of times to retry the mount attempt.
The default is 10000. (The default is only 1 when using the
automounter. See Chapter 9, "The Automounter".) See Section 6.3.4, "Backgrounding mounts" later in this chapter.
- rsize=n/wsize=n
-
This option controls the maximum transfer size of read
(rsize) and write (wsize)
operations. For NFS Version 2, the maximum transfer size is 8192
bytes, which is the default. For NFS Version 3, the client and server
negotiate the maximum. Solaris systems will by default negotiate a
maximum transfer size of 32768 bytes.
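For example, to hold a Version 3 mount to 8192-byte transfers,
perhaps to accommodate a slow network segment (the hostname is
hypothetical):
# mount -o rsize=8192,wsize=8192 wahoo:/export/home /mnt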
- intr/nointr
-
Normally, an NFS operation will continue until an RPC error occurs
(and if mounted hard, most RPC errors will not
prevent the operation from continuing) or until it has completed
successfully. If a server is down and a client is waiting for an RPC
call to complete, the process making the RPC call hangs until the
server responds (unless mounted soft). With the
intr option, the user can use Unix signals (see
the manpage for kill(1)) to interrupt NFS RPC
calls and force the RPC layer to return an error. The
intr option is the default. The nointr
option will cause the NFS client to ignore Unix signals.
- noac
-
This option suppresses attribute caching and forces writes to be
synchronously written to the NFS server. The purpose behind this
option is to let each client that mounts with
noac be guaranteed that when it reads a file
from the server it will always have the most recent copy of the data
at the time of the read. We will discuss attribute caching and
asynchronous/synchronous NFS input/output in more detail in Chapter 7, "Network File System Design and Operation".
- actimeo=n
-
The options that have the prefix ac (collectively referred to as the
ac* options) affect the length of time that attributes are cached on
NFS clients before the client will get new attributes from the
server. The quantity n is specified in seconds.
The two options prefixed with acdir affect the cache times of directory
attributes. The two options prefixed with acreg
affect the cache times of regular file attributes. The
actimeo option simply sets the minimum and
maximum cache times of regular files and directory files to be the
same. We will discuss attribute caching in more detail in Chapter 7, "Network File System Design and Operation".
TIP:
It is a popular misconception that if the minimum
attribute timeout is set to 30 seconds, the NFS client will
issue a request to get new attributes for each open file every 30
seconds. Marketing managers for products that compete with NFS use
this misconception to claim that NFS is therefore a network bandwidth
hog because of all the attribute requests that are sent around. The
reality is that the attribute timeouts are checked only whenever a
process on the NFS client tries to access the file. If the attribute
timeout is 30 seconds and the client has not accessed the file in
five hours, then during that five-hour period, there will be no NFS
requests to get new attributes. Indeed, there will be no NFS requests
at all. For files that are being continuously accessed, with an
attribute timeout of 30 seconds, you can expect new attribute
requests no more often than every 30 seconds. Given that in
NFS Version 2, and to an even higher degree in NFS Version 3,
attributes are piggy-backed onto the NFS responses, attribute
requests would tend to be seen far less often than every 30 seconds.
For the most part, attribute requests will be seen most often when
the NFS client opens a file. This is to guarantee cache consistency.
See Section 7.4.1, "File attribute caching" for more details.
- acdirmax=n
-
This option is like actimeo, but it affects the
maximum attribute timeout on directories; it defaults to 60 seconds.
It can't be higher than 10 hours (36000 seconds).
- acdirmin=n
-
This option is like actimeo, but it affects the
minimum attribute timeout on directories; it defaults to 30 seconds.
It can't be higher than one hour (3600 seconds).
- acregmax=n
-
This option is like actimeo, but it affects the
maximum attribute timeout on regular files; it defaults to 60
seconds. It can't be higher than 10 hours (36000 seconds).
- acregmin=n
-
This option is like actimeo, but it affects the
minimum attribute timeout on regular files; it defaults to three
seconds. It can't be higher than one hour (3600 seconds).
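As an example of the ac* options, this vfstab entry holds cached
attributes for exactly 30 seconds, for both regular files and
directories (using the server and pathnames from earlier examples):
onaga:/export/home/mre - /users/mre nfs - yes rw,hard,actimeo=30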
The
nointr,
intr,
retrans,
rsize,
wsize,
timeo,
hard,
soft, and ac* options
will be discussed in more detail in
Chapter 18, "Client-Side Performance Tuning",
since they are directly responsible for altering clients'
performance in periods of peak server loading.
6.3.4. Backgrounding mounts
The mount protocol used by clients is subject to
the same RPC timeouts as individual NFS
RPC calls. When a client cannot mount an NFS filesystem during the
allotted RPC execution time, it retries the RPC operation up to the
count specified by the
retry mount option. If
the
bg mount option is used,
mount starts
another process that continues trying to
mount the filesystem in the background, allowing the
mount command to consider that request complete
and to attempt the next mount operation. If
bg
is not specified,
mount blocks waiting for the
remote fileserver to recover, or until the mount retry count has been
reached. The default value of 10,000 may cause a single mount to hang
for several hours before
mount gives up on the
fileserver.
You cannot background the mount of any system-critical
filesystem, such as the root (/) or
/usr
filesystem on a diskless client. If you need the filesystem to run
the system, you must allow the mount to complete in the foreground.
Similarly, if you require some applications from an NFS-mounted
partition during the boot process -- let's say you start
up a license server via a script in
/etc/rc2.d
-- you should hard-mount the filesystem with these executables
so that you are not left with a half-functioning machine. Any
filesystem that is not critical to the system's operation can
be mounted with the
bg option. Use of background
mounts allows your network to recover more gracefully from widespread
problems such as power failures.
When two servers are clients of each other, the
bg option must be used in at least one of the
server's
/etc/vfstab files. When both
servers boot at the same time, for example as the result of a power
failure, one usually tries to mount the other's filesystems
before they have been exported and before NFS is started. If both
servers use foreground mounts only, then a deadlock is possible when
they wait on each other to recover as NFS servers. Using
bg allows the first mount attempt to fail and be
put into the background. When both servers finally complete booting,
the backgrounded mounts complete successfully. So what if you have
critical mounts on each client, such that backgrounding one is not
appropriate? To cope, you will need to use the automounter (see
Chapter 9, "The Automounter") instead of
vfstab to
mount NFS filesystems.
The default value of the
retry option was chosen
to be large enough to guarantee that a client makes a sufficiently
good effort to mount a filesystem from a crashed or hung server.
However, if some event causes the client and the server to reboot at
the same time, and the client cannot complete the mount before the
retry count is exhausted, the client will not mount the filesystem
even when the remote server comes back online. If you have a power
failure early in the weekend, and all the clients come up but a
server is down, you may have to manually remount filesystems on
clients that have reached their
limit of mount retries.
6.3.5. Hard and soft mounts
The
hard and
soft mount
options determine
how a client behaves when the server is
excessively loaded for a long period or when it crashes. By default,
all NFS filesystems are mounted
hard, which
means that an RPC call that times out will be retried indefinitely
until a response is received from the server. This makes the NFS
server look as much like a local disk as possible -- the request
that needs to go to disk completes at some point in the future. An
NFS server that crashes looks like a disk that is very, very slow.
A side effect of hard-mounting NFS filesystems is that processes
block (or "hang") in a high-priority disk wait state
until their NFS RPC calls complete. If an NFS server goes down, the
clients using its filesystems hang if they reference these
filesystems before the server recovers. Using
intr in conjunction with the
hard mount option allows users to interrupt
system calls that are blocked waiting on a crashed server. The system
call is interrupted when the process making the call receives a
signal, usually sent by the user typing
CTRL-C
(interrupt) or using
the
kill command.
CTRL-\
(quit) is another way to generate a signal, as is logging out of the
NFS client host. When using
kill, only
SIGINT,
SIGQUIT, and
SIGHUP will
interrupt NFS operations.
When an NFS filesystem is
soft-mounted, repeated
RPC call failures eventually cause the NFS operation to fail as well.
Instead of emulating a painfully slow disk, a server exporting a
soft-mounted filesystem looks like a failing disk when it crashes:
system calls referencing the soft-mounted NFS filesystem return
errors. Sometimes the errors can be ignored or are preferable to
blocking at high priority; for example, if you were doing an
ls -l when the NFS server crashed, you
wouldn't really care if the
ls command
returned an error as long as your system didn't hang.
The other side to this "failing disk" analogy is that you
never want to write data to an unreliable
device, nor do you want to try to load executables from it. You
should not use the
soft option on any filesystem
that is writable, nor on any filesystem from which you load
executables. Furthermore, because many applications do not check
the return value of the read(2) system call when
read(2) system call when
reading regular files (because those programs were written in the
days before networking was ubiquitous, and disks were reliable enough
that reads from disks virtually never failed), you should not use the
soft option on any filesystem that is supplying
input to applications that are in turn using the data for a
mission-critical purpose. NFS only guarantees the consistency of data
after a server crash if the NFS filesystem was hard-mounted by the
client. Unless you really know what you are doing, never use the
soft option.
We'll come back to
hard- and
soft-mount issues when
we discuss modifying
client behavior in the face of slow NFS servers in
Chapter 18, "Client-Side Performance Tuning".
6.3.6. Resolving mount problems
There are several things that can go wrong
when attempting to mount an NFS
filesystem. The most obvious failure of
mount is
when it
cannot find the server, remote filesystem,
or local mount point. You get the usual assortment of errors such as
"No such host" and "No such file or
directory." However, you may also get more cryptic messages
like:
client# mount orion:/export/orion /hosts/orion
mount: orion:/export/orion on /hosts/orion: No such device.
If either the local or remote filesystem was specified incorrectly,
you would expect a message about a nonexistent file or directory. The
device hint in
this
error indicates that NFS is not configured into the client's
kernel. The
device in question is more of a
pseudo-device -- it's the interface to the NFS vnode
operations. If the NFS client code is not in the kernel, this
interface does not exist and any attempts to use it return invalid
device messages. We won't discuss how to build a kernel; check
your documentation for the proper procedures and options that need to
be included to support NFS.
Another cryptic message is "Permission denied." Often
this is because the filesystem has been exported with the options
rw=client_list or
ro=client_list and your client is not in
client_list. But sometimes it means that the
filesystem on the server is not exported at all.
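A quick way to check both possibilities is to list the server's
exported filesystems, and the clients allowed to mount them, with
showmount (the server name here is hypothetical, and the output
format varies from system to system):
client% showmount -e wahoo
export list for wahoo:
/export/home (everyone)
/export/tools client1,client2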
Probably the most common message on NFS clients is "NFS server
not responding." An NFS client will attempt to complete an RPC
call up to the number of times specified by the
retrans option. Once the retransmission limit
has been reached, the "not responding" message appears on
the system's console (or in the console window):
NFS server bitatron not responding, still trying
followed by a message indicating that the server has responded to the
client's RPC requests:
NFS server bitatron OK
These "not responding" messages may mean that the server
is heavily loaded and cannot respond to NFS requests before the
client has had numerous RPC timeouts, or they may indicate that the
server has crashed. The NFS client cannot tell the difference between
the two, because it has no knowledge of why its NFS RPC calls are not
being handled. If NFS clients begin printing "not
responding" messages, a server have may have crashed, or you
may be experiencing a burst of activity causing poor server
performance.
A less common but more confusing error message is "stale
filehandle." Because NFS allows multiple clients to share the
same directory, it opens up a window in which one client can delete
files or directories that are being referenced by another NFS client
of the same server. When the second client goes to reference the
deleted directory, the NFS server can no longer find it on disk, and
marks the handle, or pointer, to this directory
"invalid." The exact causes of stale filehandles and
suggestions for avoiding them are described in
Section 18.8, "Stale filehandles".
If there is a problem with the server's NFS configuration, your
attempt to mount filesystems from it will result in RPC errors when
mount cannot reach the portmapper
(
rpcbind) on the server. If you get RPC
timeouts, then the remote host may have lost its portmapper service
or the
mountd daemon may have exited
prematurely. Use
ps to locate these processes:
server% ps -e | grep -w mountd
274 ? 0:00 mountd
server% ps -e | grep -w rpcbind
106 ? 0:00 rpcbind
You should see both the
mountd and the
rpcbind processes running on the NFS server.
If
mount promptly reports "Program not
registered," this means
that
the
mountd daemon never started up and
registered itself. In this case, make sure that
mountd is getting started at boot time on the
NFS server, by checking the
/etc/dfs/dfstab file. See
Section 6.1, "Setting up NFS" earlier in this chapter.
Another
mountd-related problem is two
mountd daemons competing for the same RPC
service number. On some systems (not Solaris), one mount daemon
may be started in the boot script and another configured in
/etc/inet/inetd.conf; the
second instance of the server daemon will not be able to register its
RPC service number with the portmapper. Since the
inetd-spawned process is usually the second to
appear, it repeatedly exits and restarts until
inetd realizes that the server cannot be started
and disables the service. The NFS RPC daemons should be started from
the boot scripts and not from
inetd, due to the
overhead of spawning processes from the
inetd
server (see
Section 1.5.3, "Internet and RPC server configuration").
There is also a detection mechanism for attempts to
make
"transitive," or multihop, NFS mounts. You can only use
NFS to mount another system's local filesystem as one of your
NFS filesystems. You can't mount another system's
NFS-mounted filesystems. That is, if
/export/home/bob is local on
serverb, then all machines on the network must
mount
/export/home/bob from
serverb. If a client attempts to mount a
remotely mounted directory on the server, the mount fails with a
multihop error message. Let's say NFS client marble has done:
# mount serverb:/export/home/bob /export/home/bob
and
marble is also an NFS server that exports
/export/home. If a third system tries to mount
marble:/export/home/bob, then the mount fails
with the error:
mount: marble:/export/home/bob on /users/bob: Too many levels of remote in path
"Too many levels" means more than one -- the
filesystem on the server is itself NFS-mounted. You cannot nest NFS
mounts by mounting through an intermediate fileserver. There are two
practical sides to this restriction:
-
Allowing multihop mounts would defeat the host-based permission
checking used by NFS. If a server limits access
to a filesystem to a few clients, then one of these clients should not
be allowed to NFS-mount the filesystem and make it available to
other, non-trusted systems. Preventing multihop mounts makes the
server owning the filesystem the single authority governing its use
-- no other machine can circumvent the access policies set by
the NFS server owning a filesystem.
-
Any machine used as an intermediate server in a multihop mount
becomes a very inefficient "gateway" between the NFS
client and the server owning the filesystem.
We've seen how to export NFS filesystems on a network and how
NFS clients mount them. With this basic explanation of NFS usage,
we'll look at how NFS mounts are combined with symbolic links
to create more complex -- and
sometimes
confusing -- client filesystem
structures.