12.4. NFS security
Filesystem security has two aspects: controlling
access
to and operations on files, and limiting exposure of the contents of
the files. Controlling access to remote files involves mapping Unix
file operation semantics into the NFS system, so that certain
operations are disallowed if the remote user fails to provide the
proper credentials. To avoid giving superuser permissions across the
network, additional constraints are put in place for access to files
by
root. Even more stringent NFS security
requires proving that the Unix-style credentials contained in each
NFS request are valid; that is, the server must know that the NFS
client's request was made by a valid user and not an imposter
on the network.
Limiting disclosure of data in a file is more difficult, as it
usually
involves encrypting the contents of the file. The client application
may choose to enforce its own data encryption and store the file on
the server in encrypted form. In this case, the client's NFS
requests going over the network contain blocks of encrypted data.
However, if the file is stored and used in clear text form, NFS
requests to read or write the file will contain clear text as well.
Sending parts of files over a network is subject to some data
exposure concerns. In general, if security would be compromised by
any part of a file being disclosed, then either the file should not
be placed on an NFS-mounted filesystem, or you should use a security
mechanism for RPC that encrypts NFS remote procedure calls and
responses over the network. We will cover one such mechanism later in
this section.
You can prevent damage to files by restricting write permissions
and enforcing user authentication.
With NFS you have a choice between deploying simple security
mechanisms and more complex but stronger RPC security mechanisms.
The latter also make user authentication secure, and are
described later in this section. This section
presents ways of restricting access based on the user credentials
presented in NFS requests, and then looks at validating the
credentials themselves using stronger RPC security.
12.4.1. RPC security
Under the default RPC security
mechanism,
AUTH_SYS, every NFS request, including mount requests, contains a set
of user credentials with a UID and a list of group IDs (GIDs) to
which the UID belongs. NFS credentials are the same as those used for
accessing local files, that is, if you belong to five groups, your
NFS credentials contain your UID and five GIDs. On the NFS server,
these credentials are used to perform the permission checks that are
part of Unix file accesses -- verifying write permission to
remove a file, or execute permission to search directories. There are
three areas in which NFS credentials may not match the user's
local credential structure: the user is the superuser, the user is in
too many groups, or no credentials were supplied (an
"anonymous" request). Mapping of root and anonymous users
is covered in the next section.
Problems with too many groups
depend upon the implementation of NFS
used by the client and the server, and may be an issue only if they
are different (including different revisions of the same operating
system). Every NFS implementation has a limit on the number of groups
that can be passed in a credentials structure for an NFS RPC. This
number usually agrees with the maximum number of groups to which a
user may belong, but it may be smaller. On Solaris 8, the default and
maximum number of groups are 16 and 32, respectively. However, under
the AUTH_SYS RPC security mechanism, the maximum is 16. If the
client's group limit is larger than the server's, and a
user is in more groups than the server allows, then the
server's attempt to parse and verify the credential structure
will fail, yielding error messages like:
RPC: Authentication error
Authentication errors may occur when trying to mount a filesystem, in
which case it is the superuser that is in too many groups. Errors may also occur
when a particular user tries to access files on the NFS server; these
errors result from any NFS RPC operation. Pay particular attention to
the
group file in a heterogeneous environment,
where the NIS-managed
group map may be appended
to a local file with several entries for common users like
root and
bin. The only
solution is to restrict the number
of groups to the smallest value allowed
by all systems that are running NFS.
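A quick way to spot users who may run afoul of a server's limit is
to count their group memberships from the command line; this is a
minimal sketch, assuming a hypothetical user jan:
% groups jan | wc -w
If the count exceeds 16, jan's AUTH_SYS credential cannot carry all
of those groups.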
12.4.2. Superuser mapping
The superuser is not given normal file
access permissions to NFS-mounted
files. The motivation behind this restriction is that root access
should be granted on a per-machine basis. A user who is capable of
becoming root on one machine should not necessarily have permission
to modify files on a file server. Similarly, a
setuid program that assumes root privileges may
not function properly or
as
expected if it is
allowed to operate on remote files.
To enforce restrictions on
superuser
access, root's UID is mapped to the anonymous user
nobody in the NFS RPC credential structure. The
superuser frequently has fewer permissions than a nonprivileged user
for NFS-mounted filesystems, since
nobody's group usually includes no other users. In the
password file,
nobody has a UID of 60001, and
the group
nobody also has a GID of 60001. When
an executable that is owned by root and has the
setuid bit set runs, its
effective user ID is root, which gets mapped to
nobody. The executable still has permissions on
the local system, but it cannot get to remote files unless they have
been explicitly exported with root access enabled.
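A hypothetical session on an NFS client illustrates the mapping (the
mount point and file are made up, and the exact error text varies by
system):
# ls -l /mnt/secret
-rw-------   1 root     staff        512 Jan  1 12:00 /mnt/secret
# cat /mnt/secret
cat: cannot open /mnt/secret
Even though the user is root on the client, the server checks the
request with nobody's credentials and denies access.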
Most implementations of NFS allow the root UID mapping to be
defeated. Some do this by letting you change the UID used for
nobody in the server's kernel. Others do
this by letting you specify the UID for the anonymous user at the
time you export the filesystem. For example, consider this line in
/etc/dfs/dfstab:
share -o ro,anon=0 /export/home/stuff
Changing the UID for
nobody from 60001 to 0
allows the superuser to access all files exported from the server,
which may be less restrictive than desired.
Most NFS servers let you grant root permission on an exported
filesystem on a per-host basis using the
root=
export option. The server exporting a filesystem grants root access
to a host or list of hosts by including them in the
/etc/dfs/dfstab file:
share -o rw,root=bitatron:corvette /export/home/work
The superuser on hosts
bitatron and
corvette is given normal root filesystem
privileges on the server's
/export/home/work directory. The name of a
netgroup may be substituted for a hostname; all of the hosts in the
netgroup are granted root access.
Root permissions on a remote filesystem
should be extended only when
absolutely necessary. While privileged users may find it annoying to
have to log into the server owning a filesystem in order to modify
something owned by root, this restriction also eliminates many common
mistakes. If a system administrator wants to purge
/usr/local on one host (to rebuild it, for
example), executing
rm -rf * will have
disastrous consequences if there is an NFS-mounted filesystem with
root permission under
/usr/local. If
/usr/local/bin is NFS-mounted, then it is
possible to wipe out the server's copy of this directory from a
client when root permissions are extended over the network.
One clear-cut case where root permissions should be extended on an
NFS filesystem is for the root and swap partitions of a diskless
client, where they are mandatory. One other possible scenario in
which root permissions are useful is for cross-server mounted
filesystems. Assuming that only the system administration staff is
given superuser privileges on the file servers, extending these
permissions across NFS mounts may make software distribution and
maintenance a little easier. Again, the pitfalls await, but hopefully
the community with networked root permissions is small and
experienced enough to use these sharp instruments safely.
On the client side, you may want to protect the NFS client from
foreign
setuid executables of unknown origin.
NFS-mounted
setuid executables
should not be trusted unless you control
superuser access to the server from which they are mounted. If
security on the NFS server is compromised, it's possible for
the attacker to create
setuid executables which
will be found -- and executed -- by users who NFS mount the
filesystem. The
setuid process will have root
permission on the host on which it is running, which means it can
damage files on the local host. Execution of NFS-mounted
setuid executables can be disabled with the
nosuid mount option. This option may be
specified as a suboption to the
-o command-line
flag, the automounter map entry, or in the
/etc/vfstab entry:
automounter auto_local entry:
bin -ro,nosuid toolbox:/usr/local/bin
vfstab entry:
toolbox:/usr/local/bin - /usr/local/bin nfs - no ro,nosuid
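For completeness, the equivalent mount command invocation, using the
same server and options as the entries above, would be:
mount command:
mount -o ro,nosuid toolbox:/usr/local/bin /usr/local/bin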
A bonus is that on many systems, such as Solaris, the
nosuid option also disables access to block and
character device nodes (if not, check your system's
documentation for a
nodev option). NFS is a file
access protocol and it doesn't allow remote device access.
However, it allows device nodes to be stored on file servers, and they
are interpreted by the NFS client's operating system. So here
is another problem with mounting without
nosuid.
Suppose under your NFS client's
/dev
directory you have a device node with permissions restricted to root
or a select group of users. The device node might be protecting a
sensitive resource, like an unmounted disk partition containing, say,
personal information of every employee. Let's say the major
device number is 100, and the minor is 0. If you mount an NFS
filesystem without
nosuid, and if that
filesystem has a device node with wide open permissions, a major
number of 100, and a minor number of 0, then there is nothing
stopping unauthorized users from using the remote device node to
access your sensitive local device.
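As a hypothetical sketch of the attack, an attacker who has gained
superuser access on the server could plant a wide open copy of the
device node (the path and the major and minor numbers follow the
example above):
# mknod /export/home/stuff/innocent c 100 0
# chmod 666 /export/home/stuff/innocent
Any nonprivileged user on a client that mounted the filesystem
without nosuid could then open innocent and read or write the
sensitive local device.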
The only clear-cut case where NFS filesystems should be mounted
without the
nosuid option is when the filesystem
is a root partition of a diskless client. Here you have no choice,
since diskless operation requires setuid execution and device access.
We've discussed problems with
setuid and
device nodes from the NFS client's perspective. There is also a
server perspective. Solaris and other NFS server implementations have
a
nosuid option that applies to the exported
filesystem:
share -o rw,nosuid /export/home/stuff
This option is highly recommended. Otherwise, malicious or careless
users on your NFS clients could create setuid executables and device
nodes that would allow a careless or cooperating user logged into the
server to commit a security breach, such as gaining superuser access.
Once again, the only clear-cut case where NFS filesystems should be
exported without the
nosuid (and
nodev if your system supports it, and decouples
nosuid from
nodev
semantics) option is when the filesystem is a root partition of a
diskless client, because there is no choice if diskless operation is
desired. You should ensure that any users logged into the diskless
NFS server can't access the root partitions, lest the superuser
on the diskless client be careless. Let's say the root
partitions are all under
/export/root. Then you
should change the permissions of directory
/export/root so that no one but the superuser can access it:
# chown root /export/root
# chmod 700 /export/root
12.4.3. Unknown user mapping
NFS handles requests that do not have valid
credentials in them by mapping
them to the
anonymous user. There are several
cases in which an NFS request has no valid credential structure in
it:
-
The NFS client and server are using a more secure form of RPC like
RPC/DH, but the user on the client has not provided the proper
authentication information. RPC/DH will be discussed later in this
chapter.
-
The client is a PC running PC/NFS, but the PC user has not supplied a
valid username and password. The PC/NFS mechanisms used to establish
user credentials are described in Section 10.3, "Configuring PC/NFS".
-
The client is not a Unix machine and cannot produce Unix-style
credentials.
-
The request was fabricated (not sent by a real NFS client), and is
simply missing the credentials structure.
Note that this is somewhat different from the behavior of Solaris 8
NFS servers. In Solaris 8 the default is that invalid credentials are
rejected. The philosophy is that allowing an NFS user with an invalid
credential is no different than allowing a user who has forgotten his
password to log in as user nobody.
However, there is a way to override the default behavior:
share -o sec=sys:none,rw /export/home/engin
This says to export the filesystem, permitting AUTH_SYS credentials.
However, if a user's NFS request comes in with invalid
credentials or non-AUTH_SYS security, the user is accepted and
treated as anonymous. You can also map all users to anonymous, whether they have
valid credentials or not:
share -o sec=none,rw /export/home/engin
By default, the anonymous user is
nobody, so
unknown users (making the credential-less requests) and superuser can
access only files with world permissions set. The
anon export option allows a server to change the
mapping of anonymous requests. By setting the anonymous user ID in
/etc/dfs/dfstab, the unknown user in an
anonymous request is mapped to a well-known local user:
share -o rw,anon=100 /export/home/engin
In this example, any request that arrives without user credentials
will be executed with UID 100. If
/export/home/engin is owned by UID 100, this
ensures that unknown users can access the directory once it is
mounted. The user ID mapping does not affect the real or effective
user ID of the process accessing the NFS-mounted file. The anonymous
user mapping just changes the user credentials used by the NFS server
for determining file access permissions.
The anonymous user mapping is valid only for the filesystem that is
exported with the
anon option. It is possible to
set up different mappings for each filesystem exported by specifying
a different anonymous user ID value in each line of the
/etc/dfs/dfstab file:
share -o rw,anon=100 /export/home/engin
share -o rw,anon=200 /export/home/admin
share -o rw,anon=300 /export/home/marketing
Anonymous users should almost
never be mapped to
root, as this would grant superuser access to filesystems to any user
without a valid password file entry on the server. An exception would
be when you are exporting read-only, and the data is not sensitive.
One application of this is exporting directories containing the
operating system installation. Since operating systems like Solaris
are often installed over the network, and superuser on the client
drives the installation, it would be tedious to list every possible
client that you want to install the operating system on.
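For example, a hypothetical install-server export might look like:
share -o ro,anon=0 /export/install
Any client, including one whose installation boot runs as root, can
then read the installation media, but nothing can be modified.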
Anonymous users should be thought of as transient or even unwanted
users, and should be given as few file access permissions as
possible. RPC calls with missing UIDs in the credential structures
are rejected out of hand on the server if the server exports its
filesystems with
anon=-1. Rather than mapping
anonymous users to
nobody, filesystems that
specify
anon=-1 return authentication errors for
RPC calls with no credentials in them.
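For example (the pathname is illustrative):
share -o rw,anon=-1 /export/home/private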
Normally, with the anonymous user mapped to
nobody, anonymous requests are accepted but have
few, if any, permissions to access files on the server. Mapping
unknown users is a risky venture. Requests that are missing UIDs in
their credentials may be appearing from outside the local network, or
they may originate from machines on which security has been
compromised. Thus, if you must export filesystems with the anonymous
user mapped to a UID other than nobody, you should limit it to a
smaller set of hosts:
share -o rw=engineering,anon=100 /export/home/engin # a netgroup
share -o rw=admin1:admin2,anon=200 /export/home/admin # a pair of hosts
share -o rw=.marketing.widget.com,anon=300 /export/home/marketing # a domain
We discuss limiting exports to certain hosts in the
next section.
12.4.4. Access to filesystems
In addition to being protected from root
access, some
filesystems require protection from
certain hosts. A machine containing source code is a good example;
the source code may be made available only to a selected set of
machines and not to the network at large. The list of hosts to which
access is restricted is included in the server's
/etc/dfs/dfstab file with the
rw= option:
share -o rw=noreast,root=noreast /export/root/noreast
This specification is typical of that for the root filesystem of a
diskless client. The client machine is given root access to the
filesystem, and access is further restricted to host
noreast only. No user can look at
noreast's root filesystem unless he or
she can log into
noreast and look locally. The
hosts listed in a
rw= list can be individual
hostnames or netgroup names, separated by colons. On Solaris 8, the
hosts can also be DNS domain names, if prefixed by a leading dot (.),
or a network number if preceded by a leading at sign (@). Solaris 8
also has the capability to deny specific hosts (individual hostnames,
netgroups, domains, or network numbers) access. For example:
share -o rw=-marketing /source
Restricting host access ensures that NFS is not used to circumvent
login restrictions. If a user cannot log into a host to restrict
access to one or more filesystems, the user should not be able to
recreate that host's environment
by mounting all of its NFS-mounted
filesystems on another system.
12.4.5. Read-only access
By default, NFS filesystems are exported with
write access enabled for any host that
mounts them. Using the
ro or ro= option in the
/etc/dfs/dfstab file, you can specify whether
the filesystem is exported read-only, and to what hosts:
share -o ro=system-engineering /source
In this example, the machines in the
system-engineering
netgroup are authorized only to browse the source code;
they get read-only access. Of course, this prevents users on machines
authorized to modify the source from doing their job. So you might
instead use:
share -o rw=source-group,ro=system-engineering /source
In this example, the machines in
source-group,
which are authorized to modify the source code, get read and write access,
whereas the machines in the
system-engineering
netgroup, which are authorized to only browse the source
code, get read-only access.
12.4.6. Port monitoring
Port monitoring is used to frustrate "spoofing" --
hand-crafted
imitations of valid NFS requests
that are sent from unauthorized user processes. A clever user could
build an NFS request and send it to the
nfsd
daemon port
on a server, hoping to grab all or
part of a file on the server. If the request came from a valid NFS
client kernel, it would originate from a privileged UDP or TCP port
(a port less than 1024) on the client. Because all UDP and TCP
packets contain both source and destination port numbers, the NFS
server can check the originating port number to be sure it came from
a privileged port.
NFS port monitoring may or may not be enabled by default. It is
usually governed by a kernel variable that is modified at boot time.
Solaris 8 lets you modify this via the
/etc/system file, which is read at boot
time. You would add this entry to
/etc/system to
enable port monitoring:
set nfssrv:nfs_portmon = 1
If you don't want to reboot your server for this
to take effect, you can change it on the fly:
echo "nfs_portmon/W1" | adb -k -w
This command sets the value of
nfs_portmon to 1
in the kernel's memory image, enabling port monitoring. Any
request that is received from a nonprivileged port is rejected.
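To inspect the current value without changing it, a similar adb
invocation should work (the /D verb prints the variable as a decimal
integer):
echo "nfs_portmon/D" | adb -k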
By default, some
mountd daemons perform port
checking, to be sure that mount requests are coming from processes
running with root privileges. They reject requests that are received
from nonprivileged ports. To turn off port monitoring in the mount
daemon, add the
-n flag to its invocation in the
boot script:
mountd -n
Not all NFS clients send requests from privileged ports; in
particular, some PC implementations of the NFS client code will not
work with port monitoring enabled. In addition, some older NFS
implementations on Unix workstations use nonprivileged ports and
require port monitoring to be disabled. This is one reason why, by
default, the Solaris 8
nfs_portmon tunable is
set to zero. Another reason is that on operating systems like
Windows, with no concept of privileged users, anyone can write a
program that binds to a port less than 1024. The Solaris 8
mountd also does not monitor ports, nor is there
any way to turn on mount request port monitoring. The reason is that
as of Solaris 2.6 and onward, each NFS request is checked against the
rw=,
ro=, and
root= lists. With that much checking,
filehandles given out at mount time are no longer magic keys granting
access to an exported filesystem, as they were in previous versions of
Solaris and in other NFS server implementations, past and current.
Check your system's documentation and boot scripts
to determine under
what conditions, if any, port monitoring is enabled.
12.4.7. Using NFS through firewalls
If you are behind a firewall that has the
purpose
of keeping intruders out of your network, you may find your firewall
also prevents you from accessing services on the greater Internet.
One of these services is NFS. It is true there aren't nearly as
many public NFS servers on the Internet as FTP or HTTP servers. This
is a pity, because for downloading large files over wide area
networks, NFS is the best of the three protocols, since it copes with
dropped connections. It is very annoying to have an FTP or HTTP
connection time out halfway into a 10 MB download. From a security
risk perspective, there is no difference between surfing NFS servers
and surfing Web servers.
You, or an organization that is collaborating with you, might have an
NFS server outside your firewall that you wish to access. Configuring
a firewall to allow this can be daunting if you consider what an NFS
client does to access an NFS server:
-
The NFS client first contacts the NFS server's portmapper or
rpcbind daemon to find the port of the mount daemon. While the
portmapper and rpcbind daemons listen on a well-known port,
mountd typically does not. Since:
-
Firewalls typically filter based on ports.
-
Firewalls typically block all incoming UDP traffic except for some
DNS traffic to specific DNS servers.
-
Portmapper requests and responses often use UDP.
reaching mountd alone can frustrate your aim.
-
The NFS client then contacts the mountd daemon
to get the root filehandle for the mounted filesystem.
-
The NFS client then contacts the portmapper or rpcbind daemon to find
the port that the NFS server typically listens on. The NFS server is
all but certainly listening on port 2049, so changing the firewall
filters to allow requests to 2049 is not hard to do. But again we
have the issue of the portmapper requests themselves going over UDP.
-
After the NFS client mounts the filesystem, if it does any file or
record locking, the lock requests will require a consultation with
the portmapper or rpcbind daemon to find the lock manager's
port. Some lock managers listen on a fixed port, so this would seem
to be a surmountable issue. However, the lock manager makes callbacks
to the client's lock manager, and the source port of the
callbacks is not fixed.
-
Then there is the status monitor, which is also not on a fixed port.
The status monitor is needed every time a client makes first contact
with a lock manager, and also for recovery.
To deal with this, you can pass the following options to the
mount command, the automounter map entry, or the
vfstab:
mount command:
mount -o proto=tcp,public nfs.eisler.com:/export/home/mre /mre
automounter auto_home entry:
mre -proto=tcp,public nfs.eisler.com:/export/home/&
vfstab entry:
nfs.eisler.com:/export/home/mre - /mre nfs - no proto=tcp,public
The
proto=tcp option forces the mount to use the
TCP/IP protocol. Firewalls prefer to deal with TCP because it
establishes state that the firewall can use to know if a TCP segment
from the outside is a response from an external server, or a call
from an external client. The former is not usually deemed risky,
whereas the latter usually is.
The
public option does the following:
-
Bypasses the portmapper entirely and always contacts the NFS server
on port 2049 (or a different port if the port=
option is specified to the mount command). It
sends a NULL ping to the NFS Version 3 server first, and if that
fails, tries the NFS Version 2 server next.
-
Makes the NFS client contact the NFS server directly to get the
initial filehandle. How is this possible? The NFS client sends a
LOOKUP request using a null filehandle (the public
filehandle) and a pathname to the server (in the preceding
example, the pathname would be /export/home/mre).
Null filehandles are extremely unlikely to map to a real file or
directory, so this tells the server that understands public
filehandles that this is really a mount request. The name is
interpreted as a multicomponent pathname, with each component
separated by slashes (/). A filehandle is returned from LOOKUP.
-
Marks the NFS mounts with the llock option. This
is an undocumented mount option that says to handle all locking
requests for files on the NFS filesystem locally. This is somewhat
dangerous in that if there is real contention for the filesystem from
multiple NFS clients, file corruption can result. But as long as you
know what you are doing (and you can share the filesystem to a single
host, or share it read-only to be sure), this is safe to do.
If your firewall uses Network Address Translation,
which
translates private IP addresses behind the
firewall to public IP addresses in front of the firewall, you
shouldn't have problems. However, if you are using any of the
security schemes discussed in the section
Section 12.5, "Stronger security for NFS", be advised that they are designed for
Intranets, and require collateral network services like a directory
service (NIS for example), or a key service (a Kerberos Key
Distribution Center for example). So it is not likely you'll be
able to use these schemes through a firewall until the LIPKEY scheme,
discussed in
Section 12.5.7, "NFS security futures", becomes available.
Some NFS servers require the
public option in
the
dfstab or the equivalent when exporting the
filesystem in order for the server to accept the public filehandle.
This is not the case for Solaris 8 NFS servers.
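On servers that do require it, the export would look something like
this (the pathname is illustrative):
share -o ro,public /export/home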
What about allowing NFS clients from the greater Internet to access
NFS servers located behind your firewall? This is a reasonable thing to
do as well, provided you take some care. The NFS clients will be
required to mount the servers' filesystems with the
public option. You will then configure your
firewall to allow TCP connections to originate from outside your
Intranet to a specific list of NFS servers behind the firewall.
Unless Network Address Translation gets in the way, you'll want
to use the
rw= or
ro=
options to export the filesystems only to specific NFS clients
outside your Intranet. Of course, you should export with the
nosuid option, too.
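Putting these recommendations together, a hypothetical dfstab entry
for such an Internet-facing export might be:
share -o rw=nfsclient.example.com,nosuid /export/collab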
If you are going to use NFS through firewalls to access critical data, be
sure to
read
Section 12.5.3, "NFS and IPSec" later in this
chapter.
12.4.8. Access control lists
Some NFS servers exist in an operating environment
that
supports Access Control Lists (ACLs). An ACL extends the basic set of
read, write, execute permissions beyond those of file owner, group
owner, and other. Let's say we have a set of users called
linus,
charlie,
lucy, and
sally, and these
users comprise the group
peanuts. Suppose
lucy owns a file called
blockhead, with group ownership assigned to
peanuts. The permissions of this file are 0660
(in octal). Thus
lucy can read and write to the
file, as can all the members of her group. However,
lucy decides she doesn't want
charlie to read the file, but still wants to
allow the other
peanuts group members to access
the file. What
lucy can do is change the
permissions to 0600, and then create an ACL that explicitly lists
only
linus and
sally as
being authorized to read and write the file, in addition to herself.
Most Unix systems, including Solaris 2.5 and higher, support a draft
standard of ACLs from the POSIX standards body. Under Solaris,
lucy would prevent
charlie
from accessing her file by doing:
% chmod 0600 blockhead
% setfacl -m mask:rw-,user:linus:rw-,user:sally:rw- blockhead
To understand what
setfacl did, let's read
back the ACL for
blockhead:
% getfacl blockhead
# file: blockhead
# owner: lucy
# group: peanuts
user::rw-
user:linus:rw- #effective:rw-
user:sally:rw- #effective:rw-
group::--- #effective:---
mask:rw-
other:---
The
user: entries for
sally
and
linus correspond to the
rw permissions
lucy
requested. The
user:: entry simply points out
that the owner of the file,
lucy, has
rw permissions. The
group::
entry simply says that the group owner,
peanuts,
has no access. The
mask: entry says what the
maximum permissions are for any users (other than the file owner) and
groups. If
lucy had not included mask
permissions in the
setfacl command, then
linus and
sally would be
denied access. The
getfacl command would instead
have shown:
% getfacl blockhead
# file: blockhead
# owner: lucy
# group: peanuts
user::rw-
user:linus:rw- #effective:---
user:sally:rw- #effective:---
group::--- #effective:---
mask:---
other:---
Note the difference between the two sets of
getfacl
output: the effective permissions granted to
linus and
sally.
Once you have the ACL on a file the way you want it, you can take the
output of
getfacl on one file and apply it to
another file:
% touch patty
% getfacl blockhead | setfacl -f /dev/stdin patty
% getfacl patty
# file: patty
# owner: lucy
# group: peanuts
user::rw-
user:linus:rw- #effective:rw-
user:sally:rw- #effective:rw-
group::--- #effective:---
mask:rw-
other:---
It would be hard to disagree if you think this is a pretty arcane way
to accomplish something that should be fairly simple. Nonetheless,
ACLs can be leveraged to solve the "too many groups"
problem described earlier in this chapter in
Section 12.4.1, "RPC security". Rather than put users into lots of
groups, you can put lots of users into ACLs. The previous example
showed how to copy an ACL from one file to another. You can also set
a default ACL on a directory, such that any files or directories
created under that directory inherit the default ACL, as do files
and directories created in its subdirectories. It is
easier to hand edit a file containing the ACL description than to
create one on the command line. User
lucy
creates the following file:
user::rwx
user:linus:rwx
user:sally:rwx
group::---
mask:rwx
other:---
default:user::rwx
default:user:linus:rwx
default:user:sally:rwx
default:group::---
default:mask:rwx
default:other:---
It is the
default: entries that result in
inherited ACLs. The reason why we add execution permissions is so
that directories have search permissions, i.e., so
lucy and her cohorts can change their current
working directories to her protected directories.
Once you've got default ACLs set up for various groups of
users, you then apply it to each top-level directory that you create:
% mkdir lucystuff
% setfacl -f /home/lucy/acl.default lucystuff
Note that you cannot apply an ACL file with
default: entries in it to nondirectories.
You'll have to create another file without the
default: entries to use
setfacl -f
on nondirectories:
% grep -v '^default:' /home/lucy/acl.default > /home/lucy/acl.files
The preceding example strips out the
default:
entries. However, it leaves the execute bit set in the entries:
% cat /home/lucy/acl.files
user::rwx
user:linus:rwx
user:sally:rwx
group::---
mask:rwx
other:---
This might not be desirable for setting an ACL on existing regular
files that don't have the executable bit. So we create a third
ACL file:
% sed 's/x$/-/' /home/lucy/acl.files | sed 's/^mask.*$/mask:rwx/' \
> /home/lucy/acl.noxfiles
This first turns off every execute permission bit, but then sets the
mask to allow execute permission should we later decide to enable
execute permission on a file:
% cat /home/lucy/acl.noxfiles
user::rw-
user:linus:rw-
user:sally:rw-
group::---
mask:rwx
other:---
With an ACL file with
default: entries, and the
two ACL files without
default: entries,
lucy can add protection to existing trees of
files. In the following example,
oldstuff is an
existing directory containing a hierarchy of files and
subdirectories:
fix the directories:
% find oldstuff -type d -exec setfacl -f /home/lucy/acl.default {} \;
fix the nonexecutable files:
% find oldstuff ! -type d ! \( -perm -u=x -o -perm -g=x -o -perm -o=x \) \
-exec setfacl -f /home/lucy/acl.noxfiles {} \;
fix the executable files:
% find oldstuff ! -type d \( -perm -u=x -o -perm -g=x -o -perm -o=x \) \
-exec setfacl -f /home/lucy/acl.files {} \;
In addition to solving the "too many groups in NFS"
problem, another advantage of ACLs versus groups is potential
decentralization. As the system administrator, you are called on
constantly to add groups, or to modify existing groups (add or delete
users from groups). With ACLs, users can effectively administer their
own groups. It is a shame that constructing ACLs is so arcane,
because it effectively eliminates a way to decentralize a security
access control for logical groups of users. You might want to create
template ACL files and scripts for setting them to make it easier for
your users to use them as a way to wean them off of groups. If you
succeed, you'll reduce your workload and deal with fewer issues
of "too many groups in NFS."
TIP:
In Solaris, ACLs are not preserved when copying a file from the local
ufs filesystem to a file in the
tmpfs (/tmp) filesystem.
This can be a problem if you later copy the file back from
/tmp to a ufs filesystem.
Also, in Solaris, ACLs are not, by default, preserved when generating
tar or cpio archives. You need to use the -p
option to tar to preserve ACLs when creating and
restoring a tar archive. You need to use the
-P option to cpio when
creating and restoring cpio archives. Be aware
that non-Solaris systems probably will not be able to read archives
with ACLs in them.
12.4.8.1. ACLs that deny access
We showed how we can prevent
charlie from
getting access to
lucy's files by creating an ACL that
included only
linus and
sally. Another way
lucy
could have denied
charlie access is to set a deny
entry for charlie:
% setfacl -m user:charlie:--- blockhead
No matter what the group ownership of
blockhead
is, and no matter what the other permissions on
blockhead are,
charlie will
not be able to read or write the file.
12.4.8.2. ACLs and NFS
ACLs are ultimately enforced by
the local filesystem on the NFS server.
However, the NFS protocol has no way to pass ACLs back to the client.
This is a problem for NFS Version 2 clients, because they use the
nine basic permission bits (read, write, execute for user, group,
and other) and the file owner and group to decide if a user should
have access to the file. For this reason, the Solaris NFS Version 2
server reports the minimum possible permissions in the nine
permission bits whenever an ACL is set on a file. For example,
let's suppose the permissions on a file are 0666 or
rw-rw-rw-. Now let's say an ACL is added
for user
charlie that gives him permissions of
---, i.e., he is denied access. When an ACL
is set on a file, the Solaris NFS Version 2 server will see that
there is a user that has no access to the file. As a result, it will
report to most NFS Version 2 clients permissions of 0600, thereby
denying nearly everyone (those accessing from NFS clients) but
lucy access to the file. If it did not, then
what would happen is that the NFS client would see permissions of
0666 and allow
charlie to access the file.
Usually
charlie's application would
succeed in opening the file, but attempts to read or write the file
would fail in odd ways. This isn't desirable. Even less
desirable is that if the file were cached on the NFS client,
charlie would be allowed to read the
file.
This is not the case for the NFS Version 3 server though. With the
NFS Version 3 protocol, there is an
ACCESS
operation that the client sends to the server to see if the indicated
user has access to the file. Thus the exact, unmapped permissions are
rendered back to the NFS Version 3 client.
We said that the Solaris NFS server will report to most NFS Version 2
clients permissions of 0600. However, starting with Solaris 2.5 and
higher, a sideband protocol to NFS was added, such that if the
protocol exists, the client can not only get the exact permissions,
but also use the sideband protocol's ACCESS procedure to let
the server perform the access checks. This then
prevents
charlie or the superuser from gaining
unauthorized access to files.
What if you have NFS clients that are not running Solaris 2.5 or
higher, or are not running Solaris at all? In that situation you have
two choices: live with the fact that some users will be denied access
due to the minimal permissions behavior, or you can use the
aclok option of the Solaris
share command to allow maximal access. If the
filesystem is shared with
aclok, then if anyone
has read access to the files, everyone does. So,
charlie would then be allowed to access file
blockhead.
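For example, a hypothetical share entry using aclok:
share -o rw,aclok /export/home/peanuts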
Another issue with NFS and ACLs is that the NFS protocol has no way
to set or retrieve ACLs, i.e., there is no protocol support for the
setfacl or
getfacl command.
Once again, the sideband protocol in Solaris 2.5 and higher comes to
the rescue. The sideband protocol allows ACLs to be set and
retrieved, so
setfacl and
getfacl work
across NFS.
TIP:
IBM's AIX and Compaq's Tru64 Unix have sideband ACL
protocols for manipulating ACLs over NFS. Unfortunately, none of the
three protocols are compatible with each other.
12.4.8.3. Are ACLs worth it?
With all the arcane details, caveats, and limitations we've
seen, you as the system administrator may decide that ACLs are more
pain than benefit. Nonetheless, ACLs are a feature that is available
to users. Even if you don't want to actively support them, your
users might attempt to use them, so it is a good idea to become
familiar with ACLs.