26.1 Can You Trust Your Computer?
For a few minutes, try thinking like
a computer criminal. A few months ago, you were fired from Big
Whammix, the large smokestack employer on the other side of town, and
now you're working for a competing company, Bigger
Bammers. Your job at Bammers is corporate espionage;
you've spent the last month trying to break into Big
Whammix's central mail server. Yesterday, you
discovered a bug in a version of the web server software that Whammix
is running, and you gained privileged access to the system.
What do you do now?
Your primary goal is to gather as much valuable corporate information
as possible without leaving any evidence that would allow you to be
caught. But you have a secondary goal as well: masking your steps so
that your former employers at Whammix will never even figure out that
they have lost information.
Realizing that the hole in the Whammix web server might someday be
plugged, you decide to create a new back door that you can use to
gain access to the company's computers in the
future. One logical approach is to modify the
computer's SSH server to accept hidden passwords.
Because the source code for sshd is widely
available, this task is easy.
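To make the idea concrete, such a patch might amount to nothing more
than a few lines added to the server's password check. The sketch
below is illustrative only: the function names, and of course the
magic password, are invented and do not correspond to
sshd's actual source.

    /* Hypothetical sketch of a back-doored password check.
     * auth_password() and verify_against_password_database() are
     * invented names, not sshd's real internals. */
    #include <string.h>

    #define MAGIC_PASSWORD "rosebud-2097"  /* attacker's hidden password */

    /* The legitimate check, implemented elsewhere in the server. */
    extern int verify_against_password_database(const char *user,
                                                const char *password);

    int auth_password(const char *user, const char *password)
    {
        /* Back door: the magic password authenticates any account. */
        if (strcmp(password, MAGIC_PASSWORD) == 0)
            return 1;

        /* Otherwise behave exactly like the unmodified server. */
        return verify_against_password_database(user, password);
    }

A few lines like these, buried among tens of thousands of lines of
server code, are remarkably hard to spot in a casual review.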
You want to hide evidence of your data collection, so you also patch
the /bin/ls program. When the program is asked
to list the contents of the directory in which you are storing your
cracker tools and intercepted mail, it displays none of your files.
You "fix" the
computer's MD5 utility so that it detects when it is
computing the MD5 of one of the modified utilities, and returns the
MD5 of the unmodified utility instead. Then you manipulate the system
clock or edit the raw disk to set all the times in the inodes back to
their original values to further cloak your modifications.
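This spoofing, too, can be sketched in a few lines. The table of
filenames and stand-in digest strings below is invented for
illustration; in practice the digests would be the ones recorded from
the genuine binaries before they were replaced.

    /* Hypothetical sketch of a trojaned checksum utility. */
    #include <stdio.h>
    #include <string.h>

    /* Digests recorded from the original binaries before patching.
     * Both the paths and the digest strings are placeholders. */
    static const struct { const char *path; const char *digest; } spoofs[] = {
        { "/usr/sbin/sshd", "0d6be33865b76f2f3d6b3d16bca74a26" },
        { "/bin/ls",        "5f0e4ff9e0a6880d0e4f70303f46980b" },
    };

    /* The utility's real digest routine, implemented elsewhere. */
    extern int print_real_md5(const char *path);

    int print_md5(const char *path)
    {
        for (size_t i = 0; i < sizeof(spoofs) / sizeof(spoofs[0]); i++) {
            if (strcmp(path, spoofs[i].path) == 0) {
                /* Lie: report the digest of the unmodified file. */
                printf("MD5 (%s) = %s\n", path, spoofs[i].digest);
                return 0;
            }
        }
        return print_real_md5(path);  /* normal behavior otherwise */
    }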
You'll be connecting to the computer on a regular
basis, so you also modify /usr/sbin/netstat so
that it doesn't display connections between the Big
Whammix IP subnet and the subnet at Bigger Bammers. You may also
modify the /usr/bin/ps and
/usr/bin/who programs so that they
don't list users who are logged in via this special
back door.
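Each of these trojaned reporting tools (ls,
netstat, ps,
who) relies on the same basic trick: filter the
records just before they are printed. Here is a sketch of the common
pattern; the hidden strings are invented examples, standing in for
the name of your tools directory and the prefix of the Bammers
subnet.

    /* Hypothetical sketch of the filtering shared by the trojaned
     * reporting tools. The hidden patterns are invented examples. */
    #include <stdio.h>
    #include <string.h>

    static const char *hidden[] = { ".mailstash", "10.1.2." };

    /* Called on every directory entry, connection, process, or user
     * record just before it would be printed. */
    static int should_hide(const char *record)
    {
        for (size_t i = 0; i < sizeof(hidden) / sizeof(hidden[0]); i++)
            if (strstr(record, hidden[i]) != NULL)
                return 1;  /* suppress anything that mentions us */
        return 0;
    }

    static void print_record(const char *record)
    {
        if (!should_hide(record))
            puts(record);
    }

The lesson: every standard tool for observing the system reports only
what its own code chooses to print, and the superuser can change that
code.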
Content with your handiwork, you spend the next five months
periodically logging into the mail server at Big Whammix and making
copies of all of the email directed to the marketing staff. You do so
right up to the day that you leave your job at Bigger Bammers and
move on to a new position at another firm. On your last day, you run
a shell script, prepared well in advance, that restores all of the
programs on the hard disk to their original configuration. Then, as a
parting gesture,
your program introduces subtle modifications into the Big Whammix
main accounting database.
Technological fiction? Hardly. By the middle of the 1990s, attacks
against computers in which the system binaries were modified to
prevent detection of the intruder had become commonplace. Once
sophisticated attackers have gained superuser access, you usually
discover their presence only when they make a mistake. Despite better
intrusion detection and firewall technologies introduced in the late
1990s, the problem of "invisible"
misuse continues to be common.
26.1.1 Harry's Compiler
In the early days of the MIT Media Lab, there was a graduate student
who was very unpopular with the other students in his lab. To protect
his privacy, we'll call the unpopular student
"Harry."
Harry was obnoxious and abrasive, and he wasn't a
very good programmer either. So the other students in the lab decided
to play a trick on him. They modified the PL/I compiler on the
computer that they all shared so that the program would determine the
name of the person who was running it. If the person running the
compiler was Harry, the program would run as usual, reporting syntax
errors and the like, but it would occasionally, randomly, not produce
a final output file.
This mischievous prank caused a myriad of troubles for Harry. He
would make a minor change to his program, run it,
and—occasionally—the program would run the same way as it
did before he made his modification. He would fix bugs, but the bugs
would still remain. But then, whenever he went for help, one of the
other students in the lab would sit down at the terminal, log in, and
everything would work properly.
Poor Harry. It was a cruel trick. Somehow, though, everybody forgot
to tell him about it. He soon grew frustrated with the whole
enterprise, and eventually left school.
And you thought those random "bugs"
in your system were there by accident?
26.1.2 Trusting Trust
Perhaps the definitive account of the problems inherent in computer
security and trust is Ken Thompson's Turing Award
lecture, "Reflections on Trusting
Trust." In it, Thompson describes a back door planted in an
early research version of Unix.
The back door was a modification to the
/bin/login program that would allow him to gain
superuser access to the system at any time, even if his account had
been deleted, by providing a predetermined username and password.
While such a modification is easy to make, it's also
an easy one to detect by looking at the computer's
source code. So Thompson modified the computer's C
compiler to detect whether it was translating the
login.c program. If so, then the additional code
for the back door would automatically be inserted into the
object-code stream, even though the code was not present in the
original C source file.
Thompson could now have the login.c source
inspected by his coworkers, compile the program, install the
/bin/login executable, and still be assured that
the back door was firmly in place.
But what if somebody inspected the source code for the C compiler
itself? Thompson thought of that case as well. He further modified
the C compiler so that it would detect whether it was compiling the
source code for itself. If so, the compiler would automatically
insert both recognition tests into the compiler it was building.
After one more round of compilation, Thompson was able to put all the
original source code back in place: the back door now lived only in
the compiler binary, which would silently reintroduce it whenever the
compiler was rebuilt from perfectly clean source.
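In outline, the doubly self-referential compiler looks something like
the sketch below. The helper names and patterns are invented
placeholders; Thompson's paper gives the real
construction, including the quine-like trick needed to make the
inserted code reproduce itself exactly.

    /* Hypothetical sketch of Thompson's two-stage compiler back door. */

    /* Placeholders: crude signatures for recognizing the two targets. */
    #define LOGIN_PATTERN    "login.c"
    #define COMPILER_PATTERN "cc.c"

    /* Invented helpers standing in for the real code generation. */
    extern int  match(const char *source, const char *pattern);
    extern void compile_normally(const char *source);
    extern void emit_login_backdoor(void);
    extern void emit_both_recognizers(void);

    void compile(const char *source)
    {
        if (match(source, LOGIN_PATTERN)) {
            /* Stage 1: compiling login.c -- emit its code plus the
             * hidden password check that appears in no source file. */
            compile_normally(source);
            emit_login_backdoor();
            return;
        }
        if (match(source, COMPILER_PATTERN)) {
            /* Stage 2: compiling the compiler -- emit its code plus
             * both of these tests, so the trick survives being
             * rebuilt from clean compiler source. */
            compile_normally(source);
            emit_both_recognizers();
            return;
        }
        compile_normally(source);  /* all other programs untouched */
    }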
Thompson's experiment was like a magic trick. There
was no back door in the login.c source file and
no back door in the source code for the C compiler, and yet there was
a back door in both the final compiler and in the
login program. Abracadabra!
What hidden actions do your compiler and login
programs perform?
26.1.3 What the Superuser Can and Cannot Do
As these examples illustrate, technical
expertise combined with superuser privileges on a computer is a
powerful combination. Together, they let an attacker change the very
nature of the computer's operating system. An
attacker can modify the system to create
"hidden" directories that
don't show up under normal circumstances (if at all)
and can change the system clock, making it look as if the files that
he modified today were actually modified months ago. An attacker can
also forge electronic mail. (Actually, anybody can forge electronic
mail, but an attacker can do a better job of it.)
Of course, there are some things that an attacker cannot do, even if
that attacker is a technical genius and has full access to your
computer and its source code. An attacker cannot, for example,
decrypt a message that has been encrypted with a perfect encryption
algorithm. But he can alter the code to record the key the next time
you type it. An attacker probably can't alter your
computer's hardware to perform basic mathematical
calculations a dozen times faster than it currently does, although
there would be few security implications if he could. Most attackers
can't read the contents of a file after it has been
overwritten by another file unless they remove your hard disk and
take it to a specialized laboratory.
However, an attacker with privileges can alter your system so that
deleted files are still accessible (to him).
In each case, how—and
when—do you tell if the attack has occurred?
The "what-if" scenario can be taken
to considerable lengths. Consider an attacker who is attempting to
hide a modification in a computer's
/bin/login program. (See Table 26-1.)
Table 26-1. The "what-if" scenario

Attack: The attacker plants a back door in the
/bin/login program to allow unauthorized access.

Defense: You use PGP to create a digital signature of all system
programs. You check the signatures every day.

Attack: The attacker modifies the version of PGP that you are using
so that it will report that the signature on
/bin/login verifies, even if it
doesn't.

Defense: You copy /bin/login onto another
computer before verifying it with a trusted copy of PGP.

Attack: The attacker modifies your computer's kernel
by adding loadable modules so that when the
/bin/login file is sent through a TCP
connection, the original /bin/login, rather than
the modified version, is sent.

Defense: You put a copy of PGP on a removable hard disk. You mount
the hard disk to perform the signature verification and then unmount
it. Furthermore, you put a good copy of
/bin/login onto your removable hard disk and
then copy the good program over the installed version on a regular
basis.

Attack: The attacker regains control of your system and further
modifies the kernel so that the modification to
/bin/login is patched into the running program
after it loads. Any attempt to read the contents of the
/bin/login file results in the original,
unmodified version.

Defense: You reinstall the entire system software and configure the
system to boot from a read-only device such as a CD-ROM.

Attack: Because the system now boots from a CD-ROM, you cannot easily
update system software as bugs are discovered. The attacker waits for
a bug to crop up in one of your installed programs, such as
sendmail. When the bug is reported, the attacker
will be ready to pounce.

Defense: Your move . . .
If you think that this description sounds like an intricate game of
chess, you're right. Practical computer security is
a series of actions and counteractions, attacks and defenses. As with
chess, success depends on anticipating your
opponent's moves and planning countermeasures ahead
of time. Simply reacting to your opponent's moves is
a recipe for failure.
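Anticipation is also why the defender's moves in
Table 26-1 all reduce to the same primitive: compare what is on the
disk against a record made when the system was known to be good,
using tools you have some reason to trust. Below is a minimal sketch
of that primitive, here using OpenSSL's EVP digest
interface rather than PGP; the path and baseline digest are
placeholders, and in practice the baseline (and ideally the checker
itself) would live on read-only media.

    /* Minimal sketch of a file-integrity check: compare a file's
     * SHA-256 digest against a known-good baseline value.
     * Build (with OpenSSL installed): cc check.c -lcrypto
     * The baseline string below is a placeholder, not a real digest. */
    #include <stdio.h>
    #include <string.h>
    #include <openssl/evp.h>

    static int sha256_file(const char *path,
                           unsigned char out[EVP_MAX_MD_SIZE],
                           unsigned int *outlen)
    {
        FILE *f = fopen(path, "rb");
        if (f == NULL)
            return -1;

        EVP_MD_CTX *ctx = EVP_MD_CTX_new();
        EVP_DigestInit_ex(ctx, EVP_sha256(), NULL);

        unsigned char buf[8192];
        size_t n;
        while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
            EVP_DigestUpdate(ctx, buf, n);

        EVP_DigestFinal_ex(ctx, out, outlen);
        EVP_MD_CTX_free(ctx);
        fclose(f);
        return 0;
    }

    int main(void)
    {
        const char *path = "/bin/login";
        const char *baseline = "put-known-good-hex-digest-here";

        unsigned char md[EVP_MAX_MD_SIZE];
        unsigned int len;
        char hex[2 * EVP_MAX_MD_SIZE + 1];

        if (sha256_file(path, md, &len) != 0) {
            perror(path);
            return 2;
        }
        for (unsigned int i = 0; i < len; i++)
            sprintf(hex + 2 * i, "%02x", md[i]);

        if (strcmp(hex, baseline) == 0) {
            printf("%s: digest matches baseline\n", path);
            return 0;
        }
        printf("%s: DIGEST MISMATCH\n", path);
        return 1;
    }

Of course, the table's escalation applies in full: a kernel under the
attacker's control can feed this checker the original
bytes of a patched file.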
The key thing to note, however, is that somewhere, at some level, you
need to trust what you are working with. Maybe you trust the
hardware. Maybe you trust the CD-ROM. But at some level, you need to
trust what you have on hand. Perfect security isn't
possible, so we need to settle for the next best thing: reasonable
trust on which to build.
The question is, where do you place that trust?