24.3 Overload Attacks
In an
overload attack, a shared resource or service is overloaded with
requests to such a point that it is unable to satisfy requests from
other users. For example, if one user spawns enough processes, other
users won't be able to run processes of their own.
If one user fills up the disks, other users won't be
able to create new files. You can partially protect against overload
attacks through the use of quotas and other techniques that limit the
amount of resources that a single user can consume. You can use
physical limitations as a kind of quota—for example, you can
partition your computer's resources, and then limit
each user to a single partition. Finally, you can set up systems for
automatically detecting overloads and restarting your
computer—although giving an attacker the capability to restart
your computer at will can create other problems.
24.3.1 Process and CPU Overload Problems
One of the simplest denial of
service attacks is a process attack. In a
process attack, one user makes a computer unusable for others who
happen to be using the computer at the same time. Process attacks are
generally of concern only with shared computers: the fact that a user
incapacitates her own workstation is of no interest if nobody else is
using the machine.
24.3.1.1 Too many processes
The following
program will paralyze or crash many older versions of Unix:
main()
{
    while (1)
        fork();
}
When this program is run, the process executes the fork(
) instruction, creating a second process identical to the
first. Both processes then execute the fork( )
instruction, creating four processes. The growth continues until the
system can no longer support any new processes. This is a total
attack because every child process is itself looping and trying to
spawn new processes: even if you were somehow able to kill
one process, another would come along to take its place.
This attack will not disable most current versions of Unix because of
limits on the number of processes that can be run under any UID
(except for root). This limit, called MAXUPROC,
is usually configured into the kernel when the system is built. Some
Unix systems allow this value to be set at boot time; for instance,
Solaris allows you to put the following
in your /etc/system file:
set maxuprc=100
With this restriction in place, a user employing a process-overload
attack will use up his quota of processes, but no more. However, note
that if you set the limit too high, a runaway process or an actual
attack can still slow your machine to the point where it is nearly
unusable!
24.3.1.2 Recovering from too many processes
In many cases, the superuser can recover a system on which a single
user is running too many processes. To do this, however, you must be
able to run the ps command to determine the
process numbers of the offending processes. Once you have the
numbers, use the kill command to kill them.
You cannot kill the processes one by one because the remaining
processes will simply create more. A better approach is to use the
kill command to first stop each process, and
then kill them all at once:
# kill -STOP 1009 1110 1921
# kill -STOP 3219 3220
...
# kill -KILL 1009 1110 1921 3219 3220...
PAM Resource Limits
Linux systems typically include the
Pluggable Authentication Modules (PAM) package. In addition to
providing a set of common mechanisms for authenticating users and
authorizing their access to network services, PAM offers runtime
control of resource limits for user sessions started under a
PAM-controlled service (such as login or
sshd).
The file /etc/security/limits.conf defines the
resource limits. Each line has the following format:
<username | @groupname | *> <hard | soft> <resource> <limit>
Limits can be set per-user, per-group, and as defaults (*);
individual limits override group limits, which override defaults.
Users can relax soft resource limits, but hard limits can be
overridden only by the superuser. Among the resources that PAM can
limit are the following:
- core
-
Maximum size of core dump files in kilobytes. If you
don't expect users to need to debug programs with
core dumps, this can be set to 0 to prevent an attacker from crashing
a program and producing very large core dump files.
- fsize
-
Maximum size of files in kilobytes.
- nofile
-
Maximum number of files that can be open at once.
- rss
-
Maximum resident set size (amount of resident memory in use by this
session's processes) in kilobytes.
- cpu
-
Maximum minutes of CPU time that can be clocked during this session.
- nproc
-
Maximum number of processes that can be run under this session.
- maxlogins
-
Maximum number of logins for the user. This limits the total number
of sessions that can be active.
Here's an example of an
/etc/security/limits.conf file illustrating
some limits:
* soft core 0
* soft rss 16384
* hard nproc 20
@staff hard nproc 50
* soft maxlogins 5
* hard maxlogins 15
PAM is also available for other Unix systems, including Solaris and
HP-UX, as of early 2003, but Linux has the best PAM tools and the
most up-to-date support. On BSD-based systems, several of the same
measures are available through /etc/login.conf.
Because the stopped processes still come out of the
user's NPROC quota, the forking program will
not be able to spawn more. You can then deal with the author.
Alternatively, you can kill all the processes in a process group at
the same time; in many cases of a user spawning too many processes,
the processes will all be in the same process group. To discover the
process group, run the ps command with the
-j option. Identify the process group, and then
kill all processes in one fell swoop:
# kill -9 -1009
Yet another alternative is the
killall command on those systems that have it.
killall can kill all processes that match a
given name or that are executing from a given file, but
it's generally less portable and less reliable than
determining the process IDs and killing them by hand. It also
won't work when the process table is so full that
even root can't start a new process. On some
systems, the pkill command is available for the
same purpose.
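For example, on a system that has pkill, you could stop and then kill every process belonging to the offending user with two commands (the username baduser here is only a placeholder):
# pkill -STOP -u baduser
# pkill -KILL -u baduser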
24.3.1.3 "No more processes"
There is a possibility that your system may reach the total number of
allowable processes because many users are logged on, even though
none of them has reached their individual limits.
Another possibility is that your system has been configured
incorrectly. Your per-user process limit may be equal to or greater
than the limit for all processes on the system. In this case, a
single user can swamp the machine.
It is also possible that a root-UID process is
the one that has developed a bug or is being used in an attack. If
that is the case, the limits on the number of processes do not apply,
and all available processes are in use.
If you are ever presented with an error message from the shell that
says "No more processes," then
either you've created too many child processes or
there are simply too many processes running on the system: the system
won't allow you to create any more processes.
For example:
% ps -efj
No more processes
%
If you run out of processes, wait a moment and try again. The
situation may have been temporary. If the process problem does not
correct itself, you have an interesting situation on your hands.
Having too many processes running can be very difficult to
correct without rebooting the computer. There are two reasons why:
- You cannot run the ps command to determine the process numbers of
the processes to kill, because ps requires a new process for the
fork/exec.
- If you are not currently the superuser, you cannot use the
su or login command, because both of these commands require the
creation of a new process.
One way around the second problem is
to use the shell's exec
built-in command to run the su command
without creating a new process:
% exec /bin/su
password: foobar
#
Be careful, however, that you do not mistype your password or
exec the ps program: the
program will execute, but you will then be automatically logged out
of your computer!
Although the superuser is not encumbered by the per-user process
limit, each Unix system has a maximum number of processes that it can
support. If root is running a program that is
buggy (or booby-trapped), the machine will be overwhelmed to the
point where it will not be possible to manually kill the processes.
24.3.1.4 Safely halting the system
If you have a problem with too many processes saturating the system,
you may be forced to reboot the system. The simplest way might seem
to be to power-cycle the machine. However, this may damage the
computer's filesystems because the computer will not
have a chance to flush active buffers to disk—few systems are
designed to undergo an orderly shutdown when powered off suddenly.
It's better to use the kill
command to kill the errant processes or bring the system to
single-user mode. (See Appendix B for information
about kill, ps, Unix
processes, and signals.)
If you get the error
"No more processes" when you
attempt to execute the kill command,
exec a version of ksh or
csh—these shells have the
kill command built into them and therefore
don't need to spawn an extra process to run the
command.
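For example, using the process IDs from the earlier example (the numbers are purely illustrative), a recovery session might look something like this:
% exec /bin/ksh
$ kill -STOP 1009 1110 1921
$ kill -KILL 1009 1110 1921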
On most modern versions of Unix, the superuser can send a
SIGTERM
signal to all processes except system processes and your own process
by typing:
# kill -TERM -1
#
If your Unix system does not have this feature, you can execute the
following command to send a SIGTERM to the init
process:
# kill -TERM 1
#
Unix automatically kills all processes and goes to single-user mode
when init dies. You can then execute the
sync command from the console and reboot the
operating system.
24.3.1.5 CPU overload attacks
Another common process-based denial
of service occurs when a user spawns many processes that consume
large amounts of CPU or disk bandwidth. As most Unix systems use a
form of simple round-robin scheduling, these overloads reduce the
total amount of CPU processing time available for all other users.
For example, someone who dispatches 10 find
commands with grep components throughout your
web server's directories, or spawns a dozen large
troff jobs, can slow the system significantly.
If your system is exceptionally loaded, log in as
root and set your own priority as high as you
can right away with the
renice command, if it is available on your
system:
# renice -19 $$
#
Then, use the ps command to see
what's running, followed by the
kill command to remove the processes
monopolizing the system, or the renice command
to slow down these processes. On Linux and other modern Unix systems,
the kernel may dynamically reduce the priority of processes that run
for long periods of time or use substantial CPU time, which helps
prevent this problem.
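For example, to slow a pair of runaway processes to the lowest priority rather than killing them outright, you could use the traditional renice syntax (the process IDs and username are hypothetical; on some newer systems the syntax is renice -n 19 -p pid):
# renice +19 3219 3220
# renice +19 -u baduser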
The best way to deal with overload problems is to educate your users
about how to share the system fairly. Encourage them to use the
nice command to reduce the priorities of
their background tasks, and not to run too many such tasks at once.
They can also use the at or
batch command to defer execution of lengthy
tasks to a time when the system is less crowded.
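For example, a user could run a lengthy formatting job at reduced priority, or defer it until the early morning (the filenames here are only illustrations):
% nice -n 19 troff -ms bigreport.ms > bigreport.out &
% echo "troff -ms bigreport.ms > bigreport.out" | at 2am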
You'll need to be more forceful with users who
intentionally or repeatedly abuse the system. If CPU-intensive jobs
are common and you have a network of similar machines, you may wish
to investigate a distributed task scheduling system such as Condor
(http://www.cs.wisc.edu/condor/)
or GNQS (http://www.gnqs.org/).
24.3.2 Swap Space Problems
Most Unix systems are configured with some
disk space for holding process memory images when they are paged or
swapped out of main memory. If your system is not
configured with enough swap space, then new processes, especially
large ones, will not be run because there is no swap space for them.
There are many symptoms that you may observe if your system runs out
of swap space, depending on the kind of system involved:
- Some programs may inexplicably freeze, while others may fail.
- You may see the error "No space" when you attempt to execute a
command from the command line.
- Network servers may accept TCP/IP connections, then close the
connections without providing any service.
- Users may be unable to log in.
As with the maximum number of processes, most Unix systems provide
quotas on the maximum amount of memory that each user process can
allocate. Nevertheless, running out of swap space is considerably
more common than running out of processes because, invariably, Unix
systems are configured so that each user's process
can allocate a significant amount of memory. If a few dozen processes
each allocate a few hundred megabytes of memory, most Unix
systems' swap space will be quickly exhausted.
For example, the destructive program that was demonstrated in
Section 24.3.1.1 can be
trivially modified to be an effective memory attacker:
#include <stdlib.h>
#include <unistd.h>

main()
{
    while (1) {
        malloc(256*1024*1024);   /* grab another 256 MB */
        fork();
    }
}
This variant of the attack allocates an additional 256 MB of
memory each time through the loop. All of the child processes
allocate memory as well, and the power of multiplication quickly rears
its head. Most Unix systems are configured so that a user can create
at least 50 processes. Likewise, most Unix systems are configured so
that each user's process can allocate at least 256
MB of memory—memory-hungry programs such as Emacs and web servers
seem to demand that much these days. But
with this attack, each of those 50 processes would shortly require at
least 256 MB, for a total of 12.8 GB of memory. Few Unix systems
are configured with swap spaces this large. The result is that no
swap space is available for any new processes.
Swap space can also be overwhelmed if you are using
tmpfs or a similar filesystem that stores files
in RAM (backed by swap), rather than on a physical device. If you use
tmpfs, you should be sure that it is configured
so that the maximum amount of space it will use is less than your
available swap space.
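As an illustration, on a Linux system you might cap tmpfs with a size option in /etc/fstab (the 512 MB figure is arbitrary); Solaris accepts a similar size= option in /etc/vfstab:
tmpfs   /tmp   tmpfs   size=512m   0   0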
If you run out of swap space because processes have accidentally
filled up the available space, you can increase the space you
allocated to backing store. The obvious way to do this is by
attaching another disk to your computer and swapping on a raw
partition. Unfortunately, such actions frequently require shutting
down the system. Fortunately, there is another approach: you can swap
to a file!
24.3.2.1 Swapping to files
While Unix is normally configured to swap
to a raw partition, many versions of Unix can also swap to a file.
Swapping to a file is somewhat slower than swapping to a raw
partition because all read and write operations need to go through
the Unix filesystem. The advantage of swapping to files is that you
do not need to preallocate a raw device for swapping, and you can
trivially add more files to your system's swap space
without rebooting.
For example, if you are on a
Solaris system that is running low on
swap space, you could remedy the situation without rebooting by
following several steps. First, find a partition with some spare
storage:
# /bin/df -ltk
Filesystem kbytes used avail capacity Mounted on
/dev/dsk/c0t3d0s0 95359 82089 8505 91% /
/proc 0 0 0 0% /proc
/dev/dsk/c0t1d0s2 963249 280376 634713 31% /user2
/dev/dsk/c0t2d0s0 1964982 1048379 720113 59% /user3
/dev/dsk/c0t2d0s6 1446222 162515 1139087 12% /user4
#
In this case, partition /user4 appears to have
lots of spare room. You can create an additional 500 MB of swap space
on this partition with this command sequence on Solaris systems:
# mkfile 500m /user4/junkfile
# swap -a /user4/junkfile
On Linux systems, you first create a
file of the desired size, and then format it as swap space:
# dd if=/dev/zero of=/user4/junkfile bs=1048576 count=500
# mkswap /user4/junkfile
# swapon /user4/junkfile
Correcting a shortage of swap space on systems that do not support
swapping to files usually involves shutting down your computer and
adding another hard disk.
If a malicious user has filled up your swap space, a short-term
approach is to identify the offending process(es) and kill them. The
ps command shows you the size of every executing
process and helps you determine the cause of the problem. The
vmstat command, if you have it, can also provide
valuable process state information.
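One quick way to spot the largest processes, assuming your ps supports the -o option, is to sort them by virtual size:
# ps -eo pid,user,vsz,args | sort -n -k 3 | tail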
24.3.3 Disk Attacks
Another way of overwhelming a system
is to fill a disk partition. If one user fills up the disk, other
users won't be able to create files or do other
useful work.
24.3.3.1 Disk-full attacks
A disk can store only a certain amount of information. If your disk
is full, you must delete some information before more can be stored.
Sometimes disks fill up suddenly when an application program or a
user erroneously creates too many files (or a few files that are too
large). Other times, disks fill up because many users are slowly
increasing their disk usage.
The du command lets you find the directories
on your system that contain the most data. du
searches recursively through a tree of directories and lists how many
blocks are used by each one. For example, to check the entire
/usr partition, you could type:
# du /usr
29 /usr/dict/papers
3875 /usr/dict
8 /usr/pub
4032 /usr
...
#
By finding the larger directories, you can decide where to focus your
cleanup efforts.
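A convenient variation on most systems is to sort the du output numerically so that the largest directories appear last:
# du -k /usr | sort -n | tail -20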
You can also search for individual large files by using the
find command with the
-size option, which lists only the files larger than
a certain size. Additionally, you can use the
-xdev or -local options to avoid
searching NFS-mounted directories. This method is about as fast as doing a
du and can be even more useful when you are trying to
find a few large files that are taking up space. For example:
# find /usr -size +1000 -exec ls -l {} \;
-rw-r--r-- 1 root 1819832 Jan 9 10:45 /usr/lib/libtext.a
-rw-r--r-- 1 root 2486813 Aug 10 1995 /usr/dict/web2
-rw-r--r-- 1 root 1012730 Aug 10 1995 /usr/dict/web2a
-rwxr-xr-x 1 root 589824 Oct 22 21:27 /usr/bin/emacs
-rw-r--r-- 1 root 7323231 Oct 31 2000 /usr/tex/TeXdist.tar.Z
-rw-rw-rw- 1 root 772092 Mar 10 22:12 /var/spool/mqueue/syslog
-rw-r--r-- 1 uucp 1084519 Mar 10 2000 /var/spool/uucp/LOGFILE
-r--r--r-- 1 root 703420 Nov 21 15:49 /usr/tftpboot/mach
...
#
In this example, the file /usr/tex/TeXdist.tar.Z
is probably a candidate for deletion—especially if you have
already unpacked the TeX distribution. The files
/var/spool/mqueue/syslog and
/var/spool/uucp/LOGFILE are also good candidates
to compress or delete, considering their ages.
24.3.3.2 quot command
The quot command lets you summarize
filesystem usage by user; this program is available on some System V
systems and on most Berkeley-derived systems. With the
-f option,
quot prints the number of files and the number
of blocks used by each user:
# quot -f /dev/sd0a
/dev/sd0a (/):
53698 4434 root
4487 294 bin
681 155 hilda
319 121 daemon
123 25 uucp
24 1 audit
16 1 mailcmd
16 1 news
6 7 operator
#
You do not need to have disk quotas enabled to run the quot
-f command.
The quot -f command may lock the device while it
is running. All other programs that need to access the device will be
blocked until the quot -f command completes.
24.3.3.3 inode problems
The Unix filesystem uses
inodes to store information about files, directories, and devices.
One way to make the disk unusable is to consume all of the free
inodes on a disk so no new files can be created. A person might
inadvertently do this by creating thousands of empty files. This can
be a perplexing problem to diagnose if you're not
aware of the potential because the df command
might show lots of available space, but attempts to create a file
will result in a "no space" error.
In general, each new file, directory, pipe, device, symbolic link,
FIFO, or socket requires an inode on disk to describe it. If the
supply of available inodes is exhausted, the system
can't allocate a new file even if disk space is
available.
You can tell how many inodes are free on a disk by issuing the
df command as follows:
% df -o i /usr                    (may be df -i on some systems)
Filesystem iused ifree %iused Mounted on
/dev/dsk/c0t3d0s5 20100 89404 18% /usr
%
The output shows that this disk has lots of inodes available for new
files.
The number of inodes in a filesystem is fixed at the time you
initially format the disk for use. The default created for the
partition is usually appropriate for normal use, but you can override
it to provide more or fewer inodes, as you wish. You may wish to
increase this number for partitions in which you have many small
files—for example, a partition to hold mail directories (e.g.,
/var/mail or /var/imap on a
system running an IMAP mail server). If you run out of inodes on a
filesystem, about the only recourse is to save the disk to tape,
reformat with more inodes, and then restore the contents.
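The exact option depends on your filesystem-creation command; as an illustration, Linux's mke2fs accepts a bytes-per-inode ratio, and BSD-derived newfs commands take a similar -i option (the device name here is only an example):
# mke2fs -i 2048 /dev/sdb1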
24.3.3.4 Using partitions to protect your users
You
can protect your system from disk attacks and accidents by dividing
your hard disk into several smaller partitions. Place different
users' home directories on different partitions. In
this manner, if one user fills up one partition, users on other
partitions won't be affected. (Drawbacks to this
approach include needing to move directories to different partitions
if they require more space, and an inability to hard-link files
between some user directories.)
If you run network services that have the potential to allow
outsiders to use up significant disk space (e.g., incoming mail or an
anonymous FTP site that allows uploads), consider isolating them on
separate partitions to protect your other partitions from overflows.
Temporarily losing the ability to receive mail or files is an
annoyance, but losing access to the entire server is much more
frustrating.
24.3.3.5 Using quotas
A
more effective way to protect your system from disk attacks is to use
the quota system that is available on most modern versions of Unix.
(Quotas are usually available as a build-time or runtime option on
POSIX systems.)
With disk quotas, each user can be assigned a limit for how many
inodes and disk blocks that user can use. There are two basic kinds
of quotas:
- Hard quotas
-
These are absolute limits on how many inodes and how much space the
user may consume.
- Soft quotas
-
These are advisory. Users are allowed to exceed soft quotas for a
grace period of several days. During this time, the user is issued a
warning whenever he logs into the system. After the final day, the
user is not allowed to create any more files (or use any more space)
without first reducing current usage.
A few systems, including Linux, also support a
group quota, which allows you to set a
limit on the total space used by a whole group of users. This can
result in cases where one user can deny another the ability to store
a file if they are in the same group, so it is an option you may not
wish to use. On the other hand, if a single person or project
involves multiple users and a single group for file sharing, group
quotas can be an effective protection.
To enable quotas on your system, you first need to create the quota
summary file. This is usually named quotas, and
is located in the top-level directory of the disk. Thus, to set
quotas on the /home partition, you would issue
the following commands:
# cp /dev/null /home/quotas
# chmod 600 /home/quotas
# chown root /home/quotas
You also need to mark the partition as having quotas enabled. You do
this by changing the filesystem file in your
/etc directory; depending on the system, this
may be /etc/fstab,
/etc/vfstab,
/etc/checklist, or
/etc/filesystems. If the option field is
currently rw, you should change it to
rq; on other systems (Linux, for example), you should add the
appropriate quota mount option (usrquota) instead. Then, you need to
build the quota tables
on every disk. This process is done with the
quotacheck -a
command. (If your version of quotacheck takes
the -p option, you may wish to use it to make
the checks faster.) Note that if there are any active users on the
system, this check may result in improper values. Thus, we advise you
to reboot; the quotacheck command should run as
part of the standard boot sequence and will check all of the
filesystems you enabled.
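As an illustration, a quota-enabled entry in a Solaris /etc/vfstab might look like the following (the device names are placeholders); on Linux, the equivalent change is to add usrquota to the options field of the /etc/fstab entry:
/dev/dsk/c0t1d0s7  /dev/rdsk/c0t1d0s7  /home  ufs  2  yes  rq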
Last of all, you can edit an individual user's
quotas with the
edquota command:
# edquota spaf
If you want to "clone" the same set
of quotas to multiple users, and your version of the command supports
the -p option, you may do so by using one
user's quotas as a
"prototype":
# edquota -p spaf simsong beth kathy
You and your users can view quotas with the
quota command; see your documentation for
particular details.
24.3.3.6 Reserved space
Versions of Unix that use a
filesystem derived from the BSD Fast File System (FFS) have an
additional protection against filling up the disk: the filesystem
reserves approximately 10% of the disk and makes it unusable by
regular users. The reason for reserving this space is performance:
the BSD Fast File System does not perform as well if less than 10% of
the disk is free. However, this restriction also prevents ordinary
users from overwhelming the disk. The restriction does not apply to
processes running with superuser privileges.
This "minfree" value (10%) can be
set to other values when the partition is created. It can also be
changed afterwards using the
tunefs command, but setting it to less than
10% is probably not a good idea.
The Linux ext2fs filesystem also
allows you to reserve space on your filesystem. The amount of space
that is reserved, 10% by default, can be changed with the
tune2fs command.
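For example, to restore a 10% reserve you might run one of the following, depending on your system (the device names are examples only):
# tune2fs -m 10 /dev/hda2
# tunefs -m 10 /dev/rdsk/c0t2d0s6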
One way to reserve space for emergency use at a later point in time
is to create a large file on the disk; when you need the space, just
delete the file.
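A minimal sketch of this trick, assuming you can spare 100 MB on the partition and choosing an arbitrary filename:
# dd if=/dev/zero of=/home/.reserve bs=1024k count=100
...later, when the disk fills...
# rm /home/.reserve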
24.3.3.7 Hidden space
Open files that are unlinked continue to
take up space until they are closed. The space that these files take
up will not appear with the du or
find commands because they are not in the
directory tree; nevertheless, they will take up
space because they are in the filesystem. For example:
#include <fcntl.h>
#include <unistd.h>

main()
{
    int ifd;
    char buf[8192];

    ifd = open("./attack", O_WRONLY|O_CREAT, 0777);
    unlink("./attack");          /* the file is now invisible, but keeps growing */
    while (1)
        write(ifd, buf, sizeof(buf));
}
Files created in this way can't be found with the
ls or du commands because
the files have no directory entries. (However, the space will still
be reported by the quota system because the file still has an inode.)
To recover from this situation and reclaim the space, you must kill
the process that is holding the file open. If you cannot identify the
culprit immediately, you may have luck using the
lsof utility. This program will identify the
processes that have open files, and the file position of each open
file. By identifying a process with an open file that has a huge
current offset, you can terminate that single process to regain the
disk space. After the process dies and the file is closed, all the
storage it occupied is reclaimed.
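If your copy of lsof supports it, the +L1 option narrows the listing to open files with a link count of less than one—that is, files that have been unlinked but are still held open:
# lsof +L1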
If you still cannot determine which process is to blame, it may be
necessary to kill all processes—most
easily done by simply rebooting the system. When the system reboots,
it will run the filesystem consistency checker (i.e.,
fsck) if it was not able to shut down the
filesystem cleanly.
24.3.3.8 Tree structure attacks
It is also possible to attack a system
by building a tree structure that is too deep to be deleted with the
rm command; nested directories are deleted
by removing the deepest nodes first, so the path to that directory
may be too long to construct. Such an attack could be caused by
something like the following shell file:
#!/bin/ksh
#
# Don't try this at home!
#
while mkdir anotherdir
do
    cd ./anotherdir
    cp /bin/cc fillitup
done
On some systems, rm -r cannot delete this tree
structure because the directory tree overflows either the buffer
limits used inside the rm program to represent
filenames or the number of open directories allowed at one time.
You can almost always delete a very deep set of directories by
manually using the chdir command from the shell
and going to the bottom of the tree, then deleting the files and
directories one at a time. This process can be very tedious. On some
systems, it may not even be possible; some Unix systems do not let
you chdir to a directory described by a path
that contains more than a certain number of characters.
Another approach is to use a script similar to the one in Example 24-1.
Example 24-1. Removing nested directories
#!/bin/ksh
if (( $# != 1 ))
then
    print -u2 "usage: $0 <dir>"
    exit 1
fi

typeset -i index=1 dindex=0
typeset t_prefix="unlikely_fname_prefix" fname=$(basename "$1")

cd "$(dirname "$1")"    # Go to the directory containing the problem.

while (( dindex < index ))
do
    for entry in $(ls -1a "$fname")
    do
        [[ "$entry" == @(.|..) ]] && continue
        if [[ -d "$fname/$entry" ]]
        then
            # Remove empty subdirectories outright; move non-empty ones
            # up a level under a temporary name to be handled later.
            rmdir "$fname/$entry" 2>/dev/null && continue
            mv "$fname/$entry" "./$t_prefix.$index"
            let index+=1
        else
            rm -f "$fname/$entry"
        fi
    done
    rmdir "$fname"
    let dindex+=1
    fname="$t_prefix.$dindex"
done
What this method does is delete the nested directories starting at
the top. It deletes any files at the top level, and moves any nested
directories up one level to a temporary name. It then deletes the
(now empty) top-level directory and begins anew with one of the
former descendant directories. This process is slow, but it will work
on almost any version of Unix with little or no modification.
The only other way to delete such a directory on one of these systems
is to remove the inode for the top-level directory manually, and then
use the fsck command to erase the remaining
directories. To delete these kinds of troubling directory structures
this way, follow these steps:
- Take the system to single-user mode.
- Find the inode number of the root of the offending directory:
# ls -i anotherdir
1491 anotherdir
#
- Use the df command to determine the device of the offending directory:
# /usr/bin/df anotherdir
/g17 (/dev/dsk/c0t2d0s2 ): 377822 blocks 722559 files
#
- Clear the inode associated with that directory using the clri program:
# /usr/sbin/clri /dev/dsk/c0t2d0s2 1491
#
(Remember to replace /dev/dsk/c0t2d0s2 with the name of the actual
device reported by the df command.)
- Run your filesystem consistency checker (for example, fsck
/dev/dsk/c0t2d0s2) until it reports no errors. When the
program tells you that there is an unconnected directory with inode
number 1491 and asks if you want to reconnect it, answer
"no." The fsck
program will reclaim all the disk blocks and inodes used by the
directory tree.
If you are using the
Linux
ext2 filesystem, you can delete an inode using
the debugfs command. It is important that the
filesystem be unmounted before using the debugfs
command.
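A sketch of the procedure, assuming the offending directory is inode 1491 on the (hypothetical) device /dev/hda6 and the filesystem has already been unmounted:
# debugfs -w /dev/hda6
debugfs:  clri <1491>
debugfs:  quit
# e2fsck /dev/hda6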
24.3.4 /tmp Problems
Most Unix systems are configured so
that any user can create files of any size in the
/tmp directory. Normally, there is no quota
checking enabled in the /tmp directory.
Consequently, a single user can fill up the partition on which the
/tmp directory is mounted so that it will be
impossible for other users (and possibly the superuser) to create new
files.
Unfortunately, many programs depend on being able to store files
in the /tmp directory. For
example, the vi and mail
programs both store temporary files in /tmp.
These programs will unexpectedly fail if they cannot create their
temporary files. Many locally written system administration scripts
rely on the ability to create files in the /tmp
directory, and do not check to make sure that sufficient space is
available.
Problems with the /tmp directory are almost
always accidental. A user will copy a number of large files there and
then forget them. Perhaps many users will do this.
In the early days of Unix, filling up the /tmp
directory was not a problem. The /tmp directory
is automatically cleared when the system boots, and early Unix
computers crashed a lot. These days, Unix systems stay up much
longer, and the /tmp directory often does not
get cleaned out for days, weeks, or months.
There are a number of ways to minimize the danger of
/tmp attacks:
- Enable quota checking on /tmp so that no single user can fill it
up. A good quota plan is to allow each user to take up at most 30% of
the space in /tmp. Thus, filling up /tmp will, under the best
circumstances, require collusion among more than three users.
- Have a process that monitors the /tmp directory on a regular basis
and alerts the system administrator if it is nearly filled.
- As the superuser, you might also want to sweep through the
/tmp directory on a periodic basis and delete any files that are more
than five days old. This line can also be added to your crontab so
that the same is done each night (a sample crontab entry appears
after this list):
# find /tmp -type f -mtime +5 -exec rm {} \;
Note the use of the -type f option on this
command; this prevents named sockets from being inadvertently
deleted. However, this won't clean out directories
that are no longer being used.
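For example, the following crontab entry (3:00 a.m. is an arbitrary choice) performs the sweep nightly:
0 3 * * * find /tmp -type f -mtime +5 -exec rm -f {} \;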
24.3.5 Soft Process Limits: Preventing Accidental Denial of Service
Most
modern versions of Unix allow you to set limits on the maximum amount
of memory or CPU time a process can consume, as well as the maximum
file size it can create (see the earlier sidebar PAM Resource Limits for an example of this kind
of resource limiting). These limits are handy if you are developing a
new program and do not want to accidentally make the machine very
slow or unusable for other people with whom you're
sharing.
The Korn shell ulimit and C shell
limit commands display the current process
limits:
$ ulimit -Sa                    (-H for hard limits, -S for soft limits)
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) 2097148 kbytes
stack(kbytes) 8192 kbytes
coredump(blocks) unlimited
nofiles(descriptors) 64
vmemory(kbytes) unlimited
$
These limits have the following meanings:
- time
-
Maximum number of CPU seconds that your process can consume.
- file
-
Maximum file size that your process can create, reported in 512-byte
blocks.
- data
-
Maximum amount of memory for data space that your process can
reference.
- stack
-
Maximum stack that your process can consume.
- coredump
-
Maximum size of a core file that your process will write. Setting
this value to 0 prevents you from writing core
files.
- nofiles
-
Number of file descriptors (open files) that your process can have.
- vmemory
-
Total amount of virtual memory that your process can consume.
You can also use the ulimit command to change a
limit. For example, to prevent any future process you create from
writing a datafile longer than 5,000 KB, execute the following
command:
$ ulimit -Sf 10000
$ ulimit -Sa
time(seconds) unlimited
file(blocks) 10000
data(kbytes) 2097148 kbytes
stack(kbytes) 8192 kbytes
coredump(blocks) unlimited
nofiles(descriptors) 64
vmemory(kbytes) unlimited
$
To reset the limit, execute this command:
$ ulimit -Sf unlimited
$ ulimit -Sa
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) 2097148 kbytes
stack(kbytes) 8192 kbytes
coredump(blocks) unlimited
nofiles(descriptors) 64
vmemory(kbytes) unlimited
$
Note that if you set the hard limit, you cannot increase it again
unless you are currently the superuser. This limit may be handy to
use in a system-wide profile to limit all your users.
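A sketch of what such a profile entry might look like, with example values and assuming /etc/profile is read by your users' login shells:
ulimit -Hc 0           # no core dumps
ulimit -Ht 3600        # at most one hour of CPU time per process
ulimit -Hf 1000000     # no files larger than about 500 MB (512-byte blocks)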
On many systems, system-wide limits can also be specified in the file
/etc/login.conf, as shown in Example 24-2.
Example 24-2. /etc/login.conf
# login.conf - login class capabilities database.
# Remember to rebuild the database after each change to this file:
# cap_mkdb /etc/login.conf
#
# Default settings effectively disable resource limits. See the
# examples below for a starting point to enable them.
# Defaults
# These settings are used by login(1) by default for classless users.
# Note that entries like "cputime" set both "cputime-cur" and "cputime-max"
default:\
:passwd_format=md5:\
:copyright=/etc/COPYRIGHT:\
:welcome=/etc/motd:\
:setenv=MAIL=/var/mail/$,BLOCKSIZE=K,FTP_PASSIVE_MODE=YES:\
:path=/sbin /bin /usr/sbin /usr/bin /usr/local/bin /usr/X11R6/bin ~/bin:\
:nologin=/var/run/nologin:\
:cputime=unlimited:\
:datasize=unlimited:\
:stacksize=unlimited:\
:memorylocked=unlimited:\
:memoryuse=unlimited:\
:filesize=unlimited:\
:coredumpsize=unlimited:\
:openfiles=unlimited:\
:maxproc=unlimited:\
:sbsize=unlimited:\
:priority=0:\
:ignoretime@:\
:umask=022:
# root can always log in.
root:ignorenologin:tc=default:
#
# Russian Users Accounts. Set up proper environment variables.
#
russian:Russian Users Accounts:charset=KOI8-R:lang=ru_RU.KOI8-R:\
:tc=default:
# Users in the "limited" class get less memory.
limited:datasize-cur=22M:stacksize-cur=8M:coredumpsize=0:tc=default: