24.3 Unlinking Open Files Isn't a Good Idea

[Some programmers write programs that make temporary files, open them, then unlink (remove) each file before they're done reading it. (45.10) This keeps other people from deleting, reading, or overwriting the file. Because the file is held open by a process, UNIX removes the file's directory entry (its link) but doesn't actually free the disk space until the process is done with the file. Here's why you shouldn't do that. (By the way, the point Chris makes about system administrators cleaning up full filesystems by emptying open files is a good one.) -JP]

To give people another reason not to unlink open files (besides that it does, er, "interesting" things under NFS (1.33)), consider the following:
% multi 1000 </usr/dict/words >/tmp/file1

(multi is a program that makes n copies of its input; here n is 1000.) Now suppose /tmp (21.2) runs out of space. You can:
% rm /tmp/file1     # oops, file didn't actually go away
% ps ax             # find the "multi" process
% kill

or you can:

% cp /dev/null /tmp/file1
Bending the example a bit, suppose that /tmp runs out of file space and there are a bunch of unlinked but open (45.20) files. To get rid of the space these occupy, you must kill the processes holding them open. However, if they are ordinary files, you can just trim them down to zero bytes.

There is one good reason to unlink open temporary files: if anything goes wrong, the temporary files will vanish. There is no other way to guarantee this absolutely. You must balance this advantage against the disadvantages.

- in net.unix-wizards on Usenet, 9 September 1985