Appendix C. Tunable Parameters
NFS client and server implementations tend to have lots of tunable
parameters. This appendix summarizes some of the more important ones.
Except as noted, the parameters are kernel variables, tuned by setting
a value in a file such as /etc/system on Solaris 8. Note that while
many NFS implementations share these parameters, their names and the
methods for setting them vary between implementations. Table C-1 and
Table C-2 summarize the client and server tunables.
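On Solaris, a kernel tunable of this kind is set by adding a
set module:variable = value line to /etc/system and rebooting. The sketch
below shows the general form; the value is illustrative only, and the module
a parameter belongs to varies (several entries below note that they live in
rpcmod rather than nfs):

    * General form: set <module>:<variable> = <value>
    * Entries take effect at the next reboot. The value below is illustrative.
    set nfs:nfs3_max_threads = 16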
Table C-1. Client parameters
clnt_max_conns

Description: This parameter controls the number of connections the client
will create between the client and a given server. In Solaris, the default
is one. The rationale is that a single TCP connection ought to be sufficient
to use the available bandwidth of the network channel between the client and
server. You may find this not to be the case for network media faster than
traditional 10BaseT (10 Mb per second). Note that this parameter is not in
the Solaris nfs module, but in the kernel RPC module rpcmod.

Caveats: At the time of this writing, the algorithm used to assign traffic to
each connection was a simple round-robin approach. You may find diminishing
returns if you set this parameter higher than 2. This parameter is highly
experimental.
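Because this tunable lives in rpcmod rather than nfs, the /etc/system entry
names that module. A minimal sketch, assuming you want two connections per
server (the value is an assumption, not a recommendation):

    * clnt_max_conns is an rpcmod variable, not an nfs variable.
    set rpcmod:clnt_max_conns = 2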
clnt_idle_timeout

Description: This parameter sets the number of milliseconds the NFS client
will let a connection go idle before closing it. This parameter applies to
NFS/TCP connections and is set in the Solaris kernel RPC module called
rpcmod.

Caveats: Normally this parameter should be a minute below the lowest
server-side idle timeout among all the servers that you connect your client
to. Otherwise, you may observe clients sending requests simultaneously with
the server tearing down connections. This will result in an unnecessary
sequence of connection teardown, followed immediately by connection setup.
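As a worked example of the one-minute rule, suppose the lowest
svc_idle_timeout among your servers is 360000 milliseconds (6 minutes); the
client timeout would then be set a minute lower, to 300000 milliseconds
(5 minutes). These figures are only an illustration, not your servers'
actual settings:

    * Client-side idle timeout: 6 minutes minus 1 minute = 300000 ms.
    set rpcmod:clnt_idle_timeout = 300000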
nfs_max_threads (NFS Version 2)
nfs3_max_threads (NFS Version 3)

Description: Sets the number of background read-ahead and write-behind
threads on a per NFS-mounted filesystem basis, for NFS Version 2 and
Version 3. Read-ahead is a performance win when applications do mostly
sequential reads. The NFS filesystem can thus anticipate what the
application wants, so that when the application performs its next read()
system call, the required data will already be in the client's cache.
Write-behind is a performance win because the NFS client must synchronize
dirty data to the server before the application closes the file. A
sequential write pattern is not necessary to leverage the benefits of
multiple write-behind threads.

Caveats: Setting too many of these threads has the following risks:
- If there are lots of mounted filesystems, consuming kernel memory for
lots of threads could degrade system performance.
- If the network link or the NFS server is slow, the network can become
saturated.
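The first risk above is multiplicative, because the limit applies per
mounted filesystem: a client with 30 NFS-mounted filesystems and
nfs3_max_threads raised to 16 could create up to 480 asynchronous threads.
The numbers are illustrative only:

    * Up to 16 read-ahead/write-behind threads per NFS Version 3 mount.
    set nfs:nfs3_max_threads = 16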
nfs3_max_transfer_size

Description: Controls the default I/O transfer size for NFS Version 3 mounts.

Caveats: Given that UDP datagrams are limited to a maximum of 64 KB,
adjusting this value beyond its default is dangerous. If you do raise it
from its default (32 KB for Solaris, at the time of this writing), make sure
that you specify the use of the TCP protocol for all NFS mounts.
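If you raise the transfer size, the safe companion step is to mount with TCP
explicitly. A sketch of a Solaris mount; the server name, export path, and
mount point are placeholders:

    # Force NFS Version 3 over TCP so transfers larger than a UDP
    # datagram are not an issue.
    mount -F nfs -o vers=3,proto=tcp server:/export/home /mnt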
nfs_nra (NFS Version 2)
nfs3_nra (NFS Version 3)

Description: Controls the number of blocks the NFS filesystem will read
ahead at a time once it detects a sequential read pattern.

Caveats: This is a parameter that can have diminishing returns if set too
high. Not only will sequential read performance not improve, but the
increased memory use by the client will ultimately degrade overall
performance of the system. If the read pattern is dominated by random rather
than sequential reads (as might be the case when reading indexed files),
setting this tunable to 0 (zero) might be a win.
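For a workload known to be dominated by random reads, the entry below
disables Version 3 read-ahead entirely; treat the value as workload-specific
rather than a general recommendation:

    * Disable NFS Version 3 read-ahead for random-access workloads.
    set nfs:nfs3_nra = 0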
nfs_shrinkreaddir

Description: This parameter enhances interoperability. Many NFS
implementations were based on early source code from Sun Microsystems. This
code reads directories in buffers that were much smaller (1038 bytes) than
the maximum transfer size. Later, when Sun changed Solaris NFS clients to
read directories using maximum transfer sizes, it was found that some
servers could not cope. Set this parameter to 1 to force 1038-byte directory
read transfers.
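The workaround amounts to a single /etc/system line, assuming the parameter
lives in the nfs module like the other client-side NFS tunables (an
assumption, not stated in the text):

    * Force 1038-byte directory read requests for old servers.
    set nfs:nfs_shrinkreaddir = 1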
nfs_write_error_to_cons_only

Description: Controls whether NFS write errors are logged to the system
console only, or to the console and syslog. By default, errors are logged to
both the console and syslog.

Caveats: This is a security issue. The syslog setup usually logs errors to a
globally readable file in the /var/adm directory. Write errors often include
the file handle of the file on which the error was encountered. If file
handles can be easily obtained, it is easier for attackers to attack the NFS
server, since they can bypass the NFS filesystem to mount such attacks.
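To keep file handles out of world-readable log files, the parameter can be
turned on so that write errors go to the console alone. The module name is
an assumption here (the text does not state it):

    * Log NFS write errors to the console only, not to syslog.
    set nfs:nfs_write_error_to_cons_only = 1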
rsize
wsize

Description: These are suboptions to the NFS mount command that change the
read and write transfer block sizes, respectively.

Caveats: For NFS Version 2 mounts, the maximum is limited to 8 KB, per the
NFS Version 2 protocol definition. For NFS Version 3 mounts, the same
caveats as for the nfs3_max_transfer_size parameter apply.
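A sketch of both suboptions on a Solaris mount command line; the 8192-byte
figures respect the Version 2 ceiling, and the names server, /export/home,
and /mnt are placeholders:

    # NFS Version 2 mount capped at the protocol's 8 KB transfer size.
    mount -F nfs -o vers=2,rsize=8192,wsize=8192 server:/export/home /mnt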
-t timeout

Description: This is an option to the automount command that sets the number
of seconds the automounter will wait before attempting to unmount a
filesystem. Since unmounting a filesystem often forces the premature flushing
of buffers and the release of performance-enhancing caches, higher values of
this parameter can have very beneficial effects. If your NFS server performs
additional functions, such as electronic mail, or allows users to log in to
run applications, then it is likely your NFS server will be a heavy client
of the automounter, even if the filesystems are local to the NFS server.
While you are better off making your NFS servers do only NFS service, if you
must allow the NFS server to do non-NFS things, you are strongly encouraged
to increase the automounter timeout.

Caveats: Lowering the timeout from its default value is almost always a bad
idea, except when you have lots of unreliable servers or networks. In that
case, more frequent unmounting of automounted filesystems might be a net win.
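Since this is a command-line option rather than a kernel variable, it is set
where automount is invoked, typically in the autofs startup script. A sketch,
assuming you want mounts to linger for an hour of inactivity (the value is
illustrative):

    # Keep automounted filesystems mounted for 3600 seconds of idle time.
    automount -t 3600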
Table C-2. Server parameters
nfs_portmon

Description: This parameter controls whether the NFS server will allow
requests with a source port less than 1024. Many operating systems use the
nonstandard notion of privileged port numbers, which says that only the
superuser can create network endpoints bound to a port less than 1024. Many
NFS client implementations will bind to ports less than 1024, and many NFS
server implementations will refuse NFS accesses if the port is greater than
or equal to 1024. By default, Solaris NFS servers do not care if the
client's source port is less than 1024. This is because the security
benefits are minimal (given that it is trivial to bind to ports less than
1024 on many non-Unix operating systems).

Caveats: If you set this parameter to 1 to enable NFS port checking, you may
find that some NFS clients cannot access your server.
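A sketch of enabling the check anyway, for sites that want it; the module
name is an assumption (it has moved between Solaris releases), so confirm it
against your release's documentation:

    * Reject NFS requests from non-privileged source ports.
    * The nfssrv module name is an assumption; older releases used nfs.
    set nfssrv:nfs_portmon = 1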
svc_idle_timeout

Description: This parameter sets the number of milliseconds the NFS server
will let a connection go idle before closing it. This parameter applies to
NFS/TCP connections and is set in the Solaris kernel RPC module called
rpcmod.

Caveats: Normally this parameter should be a minute beyond the highest
client-side idle timeout among all the clients that connect to your server.
Otherwise, you may observe clients sending requests simultaneously with the
server tearing down connections. This will result in an unnecessary sequence
of connection teardown, followed immediately by connection setup.
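This is the server-side half of the pairing illustrated under
clnt_idle_timeout: with clients idling out at 300000 milliseconds, the server
stays a minute above them. The figure is illustrative only:

    * Server-side idle timeout: one minute above the clients' 300000 ms.
    set rpcmod:svc_idle_timeout = 360000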
nservers

Description: This is an integer argument to the nfsd command. It defines the
number of NFS server threads or processes that will be available to service
NFS requests.

Caveats: On some non-Solaris implementations, setting nservers too high can
result in bad performance due to three effects:
- The number of server threads or processes is allocated up front, taking up
lots of precious kernel memory that might not be needed if the server load
is minimal. This is not a problem on Solaris, since threads are allocated on
demand and released when demand ebbs.
- The thundering herd problem, which results when there are lots of threads
and, every time a request arrives, all the idle threads are dispatched
instead of just one. If the load is moderate, many CPU cycles can be wasted,
as the majority of the threads wake up, find there is nothing to do, and go
back to sleep. This is not a problem under Solaris, because only one thread
at a time is dispatched when a request arrives.
- The Robinson Factor[63] is the final effect. Consider the situation when
there are threads doing NFS work, but some are idle. By the time an idle
thread is dispatched, an active thread has picked up the request, thus
wasting a dispatch of the idle thread. This is not a problem with Solaris.
[63] The Robinson Factor is named after David Robinson, the engineer at Sun
Microsystems who observed the issue in Sun's NFS server, and fixed it.
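On Solaris, nservers is set where nfsd is started, normally in the NFS
server startup script rather than in /etc/system. A sketch, assuming 16
threads; the count and the -a flag (listen on all transports) should be
checked against your release's nfsd manual page:

    # Start the NFS server with 16 kernel service threads.
    /usr/lib/nfs/nfsd -a 16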