

Practical UNIX & Internet Security

Chapter 25
Denial of Service Attacks and Solutions

25.2 Overload Attacks

In an overload attack, a shared resource or service is overloaded with requests to such a point that it's unable to satisfy requests from other users. For example, if one user spawns enough processes, other users won't be able to run processes of their own. If one user fills up the disks, other users won't be able to create new files. You can partially protect against overload attacks by partitioning your computer's resources, and limiting each user to one partition. Alternatively, you can establish quotas to limit each user. Finally, you can set up systems for automatically detecting overloads and restarting your computer.

25.2.1 Process-Overload Problems

One of the simplest denial of service attacks is a process attack. In a process attack, one user makes a computer unusable for others who happen to be using the computer at the same time. Process attacks are generally of concern only with shared computers: the fact that a user incapacitates his or her own workstation is of no interest if nobody else is using the machine.

25.2.1.1 Too many processes

The following program will paralyze or crash many older versions of UNIX:

main()
{
	while (1)
		fork();		/* each pass through the loop doubles the process count */
}

When this program is run, the process executes the fork() instruction, creating a second process identical to the first. Both processes then execute the fork() instruction, creating four processes. The growth continues until the system can no longer support any new processes. This is a total attack: every child process loops, waiting to create new processes, so even if you were somehow able to kill one process, another would come along to take its place.

This attack will not disable most current versions of UNIX, because of limits on the number of processes that can be run under any UID (except for root). This limit, called MAXUPROC (maxuprc on Solaris), is usually configured into the kernel when the system is built. Some UNIX systems allow this value to be set at boot time; for instance, Solaris allows you to put the following in your /etc/system file:

set maxuprc=100

A user employing this attack will use up his quota of processes, but no more. As superuser, you can then use the ps command to determine the process numbers of the offending processes and the kill command to kill them. You cannot kill the processes one by one, because the remaining processes will simply create more to take their place. A better approach is to use the kill command to first stop every offending process, and then kill them all at once:

# kill -STOP 1009 1110 1921
# kill -STOP 3219 3220
    .
    .
    .
# kill -KILL 1009 1110 1921 3219 3220...

Because the stopped processes still count against the user's NPROC quota, the forking program will be unable to spawn any more. You can then deal with the author.

Alternatively, you can kill all the processes in a process group at the same time; when a user spawns too many processes, they will often all be in the same process group. To discover the process group, run the ps command with the -j option. The output layout varies from system to system, but it might look something like this (the command name and numbers here are illustrative):
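
% ps -j
  PID  PGID   SID TTY      TIME CMD
 1110  1009  1009 pts/2    0:01 forkbomb
 1921  1009  1009 pts/2    0:01 forkbomb
%

Identify the process group from the PGID column, and then kill all of its processes with one fell swoop: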

# kill -9 -1009

Note that many older AT&T-derived systems support neither process groups nor this enhanced version of the kill command, although both are present in SVR4. The enhanced kill interprets a second argument preceded by a "-" as a process group: the absolute value of the argument is used as the process group ID, and the indicated signal is sent to every process in that group.

Under modern versions of UNIX , the root user can still halt the system with a process attack because there is no limit to the number of processes that the superuser can spawn. However, the superuser can also shut down the machine or perform almost any other act, so this is not a major concern - except when root is running a program that is buggy (or booby-trapped). In these cases, it's possible to encounter a situation in which the machine is overwhelmed to the point where no one else can get a free process even to do a login.

There is also a possibility that your system may reach the total number of allowable processes because so many users are logged on, even though none of them has reached her individual limit.

One other possibility is that your system has been configured incorrectly. Your per-user process limit may be equal to or greater than the limit for all processes on the system. In this case, a single user can swamp the machine.

If you are ever presented with an error message from the shell that says "No more processes," then either you've created too many child processes or there are simply too many processes running on the system; the system won't allow you to create any more processes.

For example:

% ps -efj
No more processes
%

If you run out of processes, wait a moment and try again. The situation may have been temporary. If the process problem does not correct itself, you have an interesting situation on your hands.

Having too many running processes can be very difficult to correct without rebooting the computer, for two reasons:

  • You cannot run the ps command to determine the process numbers of the processes to kill.

  • If you are not currently the superuser, you cannot use the su or login command, because both of these functions require the creation of a new process.

One way around the second problem is to use the shell's exec [1] built-in command to run the su command without creating a new process:

[1] The shell's exec function causes a program to be run (with the exec() system call) without a fork() system call being executed first; the user-visible result is that the shell runs the program and then exits.

% exec /bin/su
password: foobar
#

Be careful, however, that you do not mistype your password or exec a program such as ps: the program will execute, but because it has replaced your shell, you will be automatically logged out of your computer when it exits!

If you have a problem with too many processes saturating the system, you may be forced to reboot the system. The simplest way might seem to be to power-cycle the machine. However, this may damage blocks on disk, because it will probably not flush active buffers to disk  - few systems are designed to undergo an orderly shutdown when powered off suddenly. It's better to use the kill command to kill the errant processes or to bring the system to single-user mode. (See Appendix C, UNIX Processes for information about kill , ps , UNIX processes, and signals.)

On most modern versions of UNIX, the superuser can send a SIGTERM signal to all processes except system processes and your own process by typing:

# kill -TERM -1
#

If your UNIX system does not have this feature, you can execute the command:

# kill -TERM 1
#

to send a SIGTERM to the init process. UNIX automatically kills all processes and goes to single-user mode when init dies. You can then execute the sync command from the console and reboot the operating system.

If you get the error "No more processes" when you attempt to execute the kill command, exec a version of ksh or csh - they have the kill command built into them and therefore don't need to spawn an extra process to run it.
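
For example (a sketch; the shell's built-in kill then signals your runaway processes without requiring a new process):

% exec /bin/ksh
$ kill -TERM -1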

25.2.1.2 System overload attacks

Another common process-based denial of service occurs when a user spawns many processes that consume large amounts of CPU time. As most UNIX systems use a form of simple round-robin scheduling, these overloads reduce the total amount of CPU time available to all other users. For example, someone who dispatches ten find commands with grep components throughout your Usenet directories, or who spawns a dozen large troff jobs, can slow the system to a crawl.[2]

[2] We resist using the phrase commonly found on the net of "bringing the system to its knees." UNIX systems have many interesting features, but knees are not among them. How the systems manage to crawl, then, is left as an exercise to the reader.

The best way to deal with these problems is to educate your users about how to share the system fairly. Encourage them to use the nice command to reduce the priority of their background tasks, and to do them a few at a time. They can also use the at or batch command to defer execution of lengthy tasks to a time when the system is less crowded. You'll need to be more forceful with users who intentionally or repeatedly abuse the system.
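
For example, a considerate user might run one long typesetting job at reduced priority, or defer it until early morning (the file names here are hypothetical):

% nice /bin/troff -ms bigpaper.ms > bigpaper.out &
% at 2am
troff -ms another.ms > another.out
<CTRL-D>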

If your system is exceptionally loaded, log in as root and set your own priority as high as you can right away with the renice command, if it is available on your system:[3]

[3] In this case, your login may require a lot of time; renice is described in more detail in Appendix C .

# renice -19 $$
#

Then, use the ps command to see what's running, followed by the kill command to remove the processes monopolizing the system, or the renice command to slow down these processes.

25.2.2 Disk Attacks

Another way of overwhelming a system is to fill a disk partition. If one user fills up the disk, other users won't be able to create files or do other useful work.

25.2.2.1 Disk-full attacks

A disk can store only a certain amount of information. If your disk is full, you must delete some information before more can be stored.

Sometimes disks fill up suddenly when an application program or a user erroneously creates too many files (or a few files that are too large). Other times, disks fill up because many users are slowly increasing their disk usage.

The du command lets you find the directories on your system that contain the most data. du searches recursively through a tree of directories and lists how many blocks are used by each one. For example, to check the entire /usr partition, you could type:

# du /usr
29      /usr/dict/papers
3875    /usr/dict
8       /usr/pub
4032    /usr
...
#

By finding the larger directories, you can decide where to focus your cleanup efforts.

You can also use the find command with the -size option to search for and list only those files that are larger than a certain size. Additionally, you can use the -xdev or -local option to avoid searching NFS-mounted directories (although you will want to run find on each NFS server). This method is about as fast as running du and can be even more useful when you are trying to find a few large files that are taking up space. For example:

# find /usr -size +1000 -exec ls -l {} \;
-rw-r--r-- 1 root 1819832 Jan  9 10:45 /usr/lib/libtext.a
-rw-r--r-- 1 root 2486813 Aug 10  1985 /usr/dict/web2
-rw-r--r-- 1 root 1012730 Aug 10  1985 /usr/dict/web2a
-rwxr-xr-x 1 root  589824 Oct 22 21:27 /usr/bin/emacs
-rw-r--r-- 1 root 7323231 Oct 31  1990 /usr/tex/TeXdist.tar.Z
-rw-rw-rw- 1 root  772092 Mar 10 22:12 /var/spool/mqueue/syslog
-rw-r--r-- 1 uucp 1084519 Mar 10 22:12 /var/spool/uucp/LOGFILE
-r--r--r-- 1 root  703420 Nov 21 15:49 /usr/tftpboot/mach
... 
#

In this example, the file /usr/tex/TeXdist.tar.Z is probably a candidate for deletion - especially if you have already unpacked the TeX distribution. The files /var/spool/mqueue/syslog and /var/spool/uucp/LOGFILE are also good candidates to delete, after saving them to tape or another disk.

25.2.2.2 quot command

The quot command lets you summarize filesystem usage by user; this program is available on some System V and on most Berkeley-derived systems. With the -f option, quot prints the number of files and the number of blocks used by each user:

# quot -f /dev/sd0a
/dev/sd0a (/):
53698  4434 root
 4487   294 bin
  681   155 hilda
  319   121 daemon
  123    25 uucp
   24     1 audit
   16     1 mailcmd
   16     1 news
    6     7 operator
#

You do not need to have disk quotas enabled to run the quot -f command.

NOTE: The quot -f command may lock the device while it is running. All other programs that need to access the device will be blocked until the quot -f command completes.

25.2.2.3 Inode problems

The UNIX filesystem uses inodes to store information about files. One way to make a disk unusable is to consume all of its free inodes, so that no new files can be created. A person might inadvertently do this by creating thousands of empty files. This can be a perplexing problem to diagnose if you're not aware of the potential, because the df command might show lots of available space even though attempts to create a file result in a "no space" error. In general, each new file, directory, pipe, FIFO, or socket requires an inode on disk to describe it. If the supply of available inodes is exhausted, the system can't allocate a new file even if disk space is available.

You can tell how many inodes are free on a disk by issuing the df command with the -i option:

% df -o i /usr          (may be df -i on some systems)
Filesystem             iused   ifree  %iused  Mounted on
/dev/dsk/c0t3d0s5      20100   89404    18%   /usr
%

The output shows that this disk has lots of inodes available for new files.

The number of inodes in a filesystem is usually fixed at the time you initially format the disk for use. The default chosen for the partition is usually appropriate for normal use, but you can override it to provide more or fewer inodes, as you wish. You may wish to increase this number for partitions that hold many small files - for example, a partition that holds Usenet files (e.g., /var/spool/news). If you run out of inodes on a filesystem, about the only recourse is to save the contents to tape, reformat the disk with more inodes, and then restore the contents.
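
For example, on systems with a BSD-derived newfs you can lower the bytes-per-inode ratio when you build the partition, which yields more inodes (the device name here is hypothetical):

# newfs -i 2048 /dev/rdsk/c0t3d0s7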

25.2.2.4 Using partitions to protect your users

You can protect your system from disk attacks by dividing your hard disk into several smaller partitions. Place different users' home directories on different partitions. In this manner, if one user fills up one partition, users on other partitions won't be affected. (Drawbacks of this approach include needing to move directories to different partitions if they require more space, and an inability to hard-link files between some user directories.)

25.2.2.5 Using quotas

A more effective way to protect your system from disk attacks is to use the quota system that is available on most modern versions of UNIX . (Quotas are usually available as a build-time or run-time option on POSIX systems.)

With disk quotas, each user can be assigned a limit for how many inodes and how many disk blocks that user can use. There are two basic kinds of quotas:

  • Hard quotas are absolute limits on how many inodes and how much space the user may consume.

  • Soft quotas are advisory. Users are allowed to exceed soft quotas for a grace period of several days. During this time, the user is issued a warning whenever he or she logs into the system. After the final day, the user is not allowed to create any more files (or use any more space) without first reducing current usage.

A few systems also support a group quota , which allows you to set a limit on the total space used by a whole group of users. This can result in cases where one user can deny another the ability to store a file if they are in the same group, so it is an option you may not wish to use.

To enable quotas on your system, you first need to create the quota summary file. This file is usually named quotas, and is located in the top-level directory of the disk. Thus, to set quotas on the /home partition, you would issue the following commands:[4]

[4] If your system supports group quotas, the file will be named something else, such as quotas.user or quotas.group.

# cp /dev/null /home/quotas
# chmod 600 /home/quotas
# chown root /home/quotas

You also need to mark the partition as having quotas enabled. You do this by changing the filesystem table in your /etc directory: depending on the system, this may be /etc/fstab, /etc/vfstab, /etc/checklist, or /etc/filesystems. If the option field is currently rw, change it to rq; otherwise, you will probably need to add a quota option to that field.[5] Then, you need to build the quota tables on every disk. This is done with the quotacheck -a command. (If your version of quotacheck takes the -p option, you may wish to use it to make the checks faster.) Note that if there are any active users on the system, this check may produce improper values. Thus, we advise that you reboot; the quotacheck command should run as part of the standard boot sequence and will check all the filesystems you enabled.

[5] This is yet another example of how non-standard UNIX has become, and why we have not given more examples of how to set up each and every system for each option we have explained. It is also a good illustration of why you should consult your vendor documentation to see how to interpret our suggestions appropriately for your release of the operating system.
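
As a sketch, on a system with a BSD-style /etc/fstab, the edited entry for /home might look like this (the field layout and option names vary widely between systems):

/dev/sd0g  /home  4.2  rq  1  2

On many systems you can then run quotacheck -a followed by quotaon -a by hand instead of rebooting, but rebooting remains the safest course.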

Last of all, you can edit an individual user's quotas with the edquota command:

# edquota spaf
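
The edquota command places you in an editor on a small temporary file describing the user's limits. On one common SVR4-style implementation, the buffer looks something like this (the numbers here are illustrative); edit the soft and hard values, then save and exit:

fs /home blocks (soft = 8000, hard = 10000) inodes (soft = 1000, hard = 1100)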

If you want to "clone" the same set of quotas to multiple users and your version of the command supports the -p option, you may do so by using one user's quotas as a "prototype":

# edquota -p spaf simsong beth kathy

You and your users can view quotas with the quota command; see your documentation for particular details.
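
For example, a user might check his or her own usage and limits like this (a sketch; the output format varies considerably between systems, and the numbers here are illustrative):

% quota -v
Disk quotas for spaf (uid 512):
Filesystem   usage   quota   limit   timeleft   files   quota   limit
/home         9362   20000   25000              1234    4000    5000
%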

25.2.2.6 Reserved space

Versions of UNIX that use a filesystem derived from the BSD Fast Filesystem (FFS) have an additional protection against filling up the disk: the filesystem reserves approximately 10% of the disk and makes it unusable by regular users. The reason for reserving this space is performance: the BSD Fast Filesystem does not perform as well when less than 10% of the disk is free. However, this restriction also prevents ordinary users from overwhelming the disk. The restriction does not apply to processes running with superuser privileges.

This "minfree" value (10%) can be set to other values when the partition is created. It can also be changed afterwards using the tunefs command, but setting it to less than 10% is probably not a good idea.

The Linux ext2 filesystem also allows you to reserve space on your filesystem. The amount of space that is reserved, 10% by default, can be changed with the tune2fs command.
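
For example (the device names here are hypothetical, and on many systems tunefs should only be run on an unmounted filesystem):

# tunefs -m 5 /dev/rdsk/c0t2d0s6        (BSD FFS: set minfree to 5%)
# tune2fs -m 5 /dev/hda2                (Linux ext2: reserve 5%)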

25.2.2.7 Hidden space

Open files that are unlinked continue to take up space until they are closed. The space that these files take up will not appear with the du or find commands, because they are not in the directory tree; however, they will nevertheless take up space, because they are in the filesystem.

For example:

#include <fcntl.h>
#include <unistd.h>

main()
{
	int ifd;
	char buf[8192];

	ifd = open("./attack", O_WRONLY|O_CREAT, 0777);
	unlink("./attack");	/* removes the name, but the open file remains */
	while (1)
		write(ifd, buf, sizeof(buf));	/* keeps consuming disk space */
}

Files created in this way can't be found with the ls or du commands because the files have no directory entries.

To recover from this situation and reclaim the space, you must kill the process that is holding the file open. You may have to take the system into single-user mode and kill all processes if you cannot determine which process is to blame. After you've done this, run the filesystem consistency checker (e.g., fsck) to verify that the free list was not damaged during the shutdown operation.

You can more easily identify the program at fault by downloading a copy of the freeware lsof program from the net. This program will identify the processes that have open files, and the file position of each open file.[6] By identifying a process with an open file that has a huge current offset, you can terminate that single process to regain the disk space. After the process dies and the file is closed, all the storage it occupied is reclaimed.

[6] Actually, you should consider getting a copy of lsof for other reasons, too. It has an incredible number of other uses, such as determining which processes have open network connections and which processes have their current directories on a particular disk.
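
For example, recent versions of lsof can directly list open files whose link count is zero; the output here is illustrative:

# lsof +L1
COMMAND  PID USER   FD   TYPE DEVICE  SIZE/OFF NLINK NODE NAME
attack  1234 eve     3w  VREG    7,0  81920000     0 4056 / (deleted)
#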

25.2.2.8 Tree-structure attacks

It is also possible to attack a system by building a tree structure that is made too deep to be deleted with the rm command. Such an attack could be caused by something like the following shell file:

#!/bin/ksh
#
# Don't try this at home!
while mkdir anotherdir
do
	cd ./anotherdir
	cp /bin/cc fillitup
done

On some systems, rm -r cannot delete this tree structure because the directory tree overflows either the buffer limits used inside the rm program to represent filenames or the number of open directories allowed at one time.

You can almost always delete a very deep set of directories by manually using the cd command from the shell to descend to the bottom of the tree, and then deleting the files and directories one at a time. This process can be very tedious. Unfortunately, some UNIX systems do not let you chdir to a directory described by a path that contains more than a certain number of characters.

Another approach is to use a script similar to the one in Example 25-1:

Example 25-1: Removing Nested Directories

#!/bin/ksh 

if (( $# != 1 ))
then
    print -u2 "usage: $0 <dir>"
    exit 1
fi

typeset -i index=1 dindex=0
typeset t_prefix="unlikely_fname_prefix" fname=$(basename $1)

cd "$(dirname "$1")"       # go to the directory containing the problem

while (( dindex < index ))
do
    for entry in $(ls -1a "$fname")
    do
      [[ "$entry" == @(.|..) ]] && continue
      if [[ -d "$fname/$entry" ]]
      then
          rmdir -- "$fname/$entry" 2>/dev/null && continue
          mv "$fname/$entry" ./$t_prefix.$index
          let index+=1
      else
          rm -f -- "$fname/$entry"
      fi
    done
    rmdir "$fname"
    let dindex+=1
    fname="$t_prefix.$dindex"
done

What this method does is delete the nested directories starting at the top. It deletes any files at the top level, and moves any nested directories up one level to a temporary name. It then deletes the (now empty) top-level directory and begins anew with one of the former descendent directories. This process is slow, but it will work on almost any version of UNIX .
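
For example, if you saved the script as deepdel (the name is arbitrary), you would point it at the top of the offending tree:

# deepdel ./anotherdir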

The only other way to delete such a directory on one of these systems is to remove the inode for the top-level directory manually, and then use the fsck command to erase the remaining directories. To delete these kinds of troubling directory structures this way, follow these steps:

  1. Take the system to single-user mode.

  2. Find the inode number of the root of the offending directory.

    # ls -i anotherdir
    1491 anotherdir
    #
  3. Use the df command to determine the device of the offending directory:

    # /usr/bin/df anotherdir
    /g17            (/dev/dsk/c0t2d0s2):  377822 blocks   722559 files
    #
  4. Clear the inode associated with that directory using the clri program:[7]

    [7] The clri command can be found in /usr/sbin/clri on Solaris systems. If you are using SunOS, use the unlink command instead.

    # clri /dev/dsk/c0t2d0s2 1491
    #

    (Remember to replace /dev/dsk/c0t2d0s2 with the name of the actual device reported by the df command.)

  5. Run your filesystem consistency checker (for example, fsck /dev/dsk/c0t2d0s2) until it reports no errors. When the program tells you that there is an unconnected directory with inode number 1491 and asks you if you want to reconnect it, answer "no." The fsck program will reclaim all the disk blocks and inodes used by the directory tree.

If you are using the Linux ext2 filesystem, you can delete an inode using the debugfs command. It is important that the filesystem be unmounted before using the debugfs command.
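
A sketch of the same recovery using debugfs follows; the device name here is hypothetical, and you should run e2fsck afterwards to reclaim the orphaned blocks:

# umount /dev/hda2
# debugfs -w /dev/hda2
debugfs:  clri <1491>
debugfs:  quit
# e2fsck /dev/hda2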

25.2.3 Swap Space Problems

Most UNIX systems are configured with some disk space for holding process memory images when they are paged or swapped out of main memory.[8] If your system is not configured with enough swap space, then new processes, especially large ones, cannot be run because there is no swap space for them. This failure often results in the error message "No space" when you attempt to execute a command.

[8] Swapping and paging are technically two different activities. Older systems swapped entire process memory images out to secondary storage; paging removes only portions of programs at a time. The use of the word "swap" has become so commonplace that most UNIX users use the word "swap" for both swapping and paging, so we will too.

If you run out of swap space because processes have accidentally filled up the available space, you can increase the space you've allocated to backing store. On SVR4 or the SunOS system, this increase is relatively simple to do, although you must give up some of your user filesystem. First, find a partition with some spare storage:

# /bin/df -ltk
Filesystem            kbytes     used    avail  capacity  Mounted on
/dev/dsk/c0t3d0s0      95359    82089     8505     91%    /
/proc                      0        0        0      0%    /proc
/dev/dsk/c0t1d0s2     963249   280376   634713     31%    /user2
/dev/dsk/c0t2d0s0    1964982  1048379   720113     59%    /user3
/dev/dsk/c0t2d0s6    1446222   162515  1139087     12%    /user4
#

In this case, partition /user4 appears to have lots of spare room. You can create an additional 50 MB of swap space on this partition with the following command sequence on Solaris systems:

# mkfile 50m /user4/junkfile
# swap -a /user4/junkfile

On SunOS systems, type:

# mkfile 50m /user4/junkfile
# swapon /user4/junkfile

You can add an entry to the vfstab file if you want the swap space to be available across reboots. Otherwise, when the space is no longer needed, remove the file as a swap device (swap -d /user4/junkfile) and then delete it.
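
On Solaris, the vfstab entry for such a swap file might look something like this (a sketch; check the vfstab manual page for the exact field layout on your release):

/user4/junkfile  -  -  swap  -  no  -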

Correcting a shortage of swap space on systems that do not support swapping to files (such as most older versions of UNIX ) usually involves shutting down your computer and repartitioning your hard disk.

If a malicious user has filled up your swap space, a short-term approach is to identify the offending process or processes and kill them. The ps command shows you the size of every executing process and helps you determine the cause of the problem. The vmstat command, if you have it, can also provide valuable process state information.
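
For example, on an SVR4 system you might list the largest processes first by sorting on the SZ column (here assumed to be the tenth field of ps -elf output; adjust the field number for your system):

# ps -elf | sort -nr -k 10,10 | head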

25.2.4 /tmp Problems

Most UNIX systems are configured so that any user can create files of any size in the /tmp directory. Normally, there is no quota checking enabled in the /tmp directory. Consequently, a single user can fill up the partition on which the /tmp directory is mounted, so that it will be impossible for other users (and possibly the superuser) to create new files.

Unfortunately, many programs require the ability to store files in the /tmp directory to function properly. For example, the vi and mail programs both store temporary files in /tmp . These programs will unexpectedly fail if they cannot create their temporary files. Many locally written system administration scripts rely on the ability to create files in the /tmp directory, and do not check to make sure that sufficient space is available.

Problems with the /tmp directory are almost always accidental. A user will copy a number of large files there, and then forget them. Perhaps many users will do this.

In the early days of UNIX , filling up the /tmp directory was not a problem. The /tmp directory is automatically cleared when the system boots, and early UNIX computers crashed a lot. These days, UNIX systems stay up much longer, and the /tmp directory often does not get cleaned out for days, weeks, or months.

There are a number of ways to minimize the danger of /tmp attacks:

  • Enable quota checking on /tmp, so that no single user can fill it up. A good quota is to allow each user at most 40% of the space in /tmp. Thus, filling up /tmp will, even in the best of circumstances, require collusion among at least three users.

  • Have a process that monitors the /tmp directory on a regular basis and alerts the system administrator if it is nearly filled.

As the superuser, you might also want to sweep through the /tmp directory on a periodic basis and delete any files that are more than three or five days old:[9]

[9] Beware that this command may be vulnerable to the filename attacks described in Chapter 11 .

# find /tmp -mtime +5 -print | xargs rm -rf

This line is a simple addition to your crontab for nightly execution.
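
For example, the following crontab entry (a sketch; the hour is arbitrary) performs the sweep every night at 3:00 a.m.:

0 3 * * * find /tmp -mtime +5 -print | xargs rm -rf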

25.2.5 Soft Process Limits: Preventing Accidental Denial of Service

Most modern versions of UNIX allow you to set limits on the maximum amount of memory or CPU time a process can consume, as well as the maximum file size it can create. These limits are handy if you are developing a new program and do not want to accidentally make the machine very slow or unusable for the other people with whom you're sharing it.

The Korn shell ulimit and C shell limit commands display the current process limits:

$ ulimit -Sa          (use -H for hard limits, -S for soft limits)
time(seconds)         unlimited
file(blocks)          unlimited
data(kbytes)          2097148 kbytes
stack(kbytes)         8192 kbytes
coredump(blocks)      unlimited
nofiles(descriptors)  64
vmemory(kbytes)       unlimited
$

These limits have the following meanings:

time

Maximum number of CPU seconds your process can consume.

file

Maximum file size that your process can create, reported in 512-byte blocks.

data

Maximum amount of memory for data space that your process can reference.

stack

Maximum stack your process can consume.

coredump

Maximum size of a core file that your process will write; setting this value to 0 prevents you from writing core files.

nofiles

Number of file descriptors (open files) that your process can have.

vmemory

Total amount of virtual memory your process can consume.

You can also use the ulimit command to change a limit. For example, to prevent any future process you create from writing a data file larger than 5000 kilobytes (10,000 512-byte blocks), execute the following command:

$ ulimit -Sf 10000
$ ulimit -Sa
time(seconds)         unlimited
file(blocks)          10000
data(kbytes)          2097148 kbytes
stack(kbytes)         8192 kbytes
coredump(blocks)      unlimited
nofiles(descriptors)  64
vmemory(kbytes)       unlimited
$

To reset the limit, execute this command:

$ ulimit -Sf unlimited
$ ulimit -Sa
time(seconds)         unlimited
file(blocks)          unlimited
data(kbytes)          2097148 kbytes
stack(kbytes)         8192 kbytes
coredump(blocks)      unlimited
nofiles(descriptors)  64
vmemory(kbytes)       unlimited
$

Note that if you lower a hard limit, you cannot raise it again unless you are currently the superuser. Setting hard limits in a system-wide profile can thus be a handy way to limit all your users.
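
For example, a system-wide shell startup file such as /etc/profile might contain lines like these (a sketch; the specific values are arbitrary):

ulimit -Hc 0          # forbid core dumps
ulimit -Ht 3600       # allow at most one hour of CPU time per process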

