26.2 Can You Trust Your Suppliers?
Your
computer does something suspicious. You discover that the
modification dates on your system software have changed. It appears
that an attacker has broken in, or that some kind of virus is
spreading. So what do you do? You save your files to backup tapes,
format your hard disks, and reinstall your
computer's operating system and programs from the
original distribution media.
Is this really the right plan? You can never know. Perhaps your
problems were the result of a break-in. But sometimes, the worst is
brought to you by the people who sold you your hardware and software
in the first place.
26.2.1 Hardware Bugs
In 1994, the public learned that
Intel Pentium processors had a floating-point problem that
infrequently resulted in a significant loss of precision when
performing some division operations. Not only had Intel officials
known about the flaw, but apparently they had decided not to tell
their customers, acknowledging the problem only after significant
negative public reaction.
Several vendors of disk drives have had problems with their products
failing suddenly and catastrophically, sometimes within days of being
initially used. Other disk drives failed when they were used with
Unix, but not with the vendor's own proprietary
operating system. The reason: Unix did not run the necessary command
to map out bad blocks on the media. Yet these drives were widely
bought for use with the Unix operating system.
Furthermore, there are many cases of effective
self-destruct
sequences in various kinds of terminals and
computers. For example, Digital's original VT100
terminal had an escape sequence that switched the terminal from a 60
Hz refresh rate to a 50 Hz refresh rate, and another escape sequence
that switched it back. By repeatedly sending the two escape sequences
to a VT100 terminal, a malicious programmer could cause the
terminal's flyback transformer to burn
out—sometimes spectacularly!
A similar sequence of instructions could be used to break the
monochrome monitor on the original IBM PC video display.
26.2.2 Viruses on the Distribution Disk
A
few years ago, there was a presumption in the field of computer
security that manufacturers who distributed computer software
exercised due diligence to ensure that their computer programs, if
they were not free of bugs and defects, were at least free of
computer viruses and glaring computer security holes. Users were
warned not to run shareware and not to download programs from
bulletin board systems because such programs were likely to contain
viruses or Trojan horses. Indeed, at least one company that
manufactured a shareware virus-scanning program made a small fortune
telling the world that everybody else's shareware
programs were potentially unsafe.
Time and experience have taught us otherwise.
In recent years, a few viruses have been distributed with shareware,
but we have also seen many viruses distributed in shrink-wrapped
programs. The viruses come from small companies, and from the makers
of major computer systems. Several times in the last decade Microsoft
has distributed CD-ROMs with viruses on them, including one with the
first in-the-wild macro virus (the
"Concept" virus for Microsoft
Word). The Bureau of the Census has distributed a CD-ROM with a virus
on it. One of the problems posed by viruses on distribution disks is
that many installation procedures require that the user disable any
antiviral software that is running.
In the last few years, email has become a major vector for macro
viruses. Several security companies that have standardized on
Microsoft products have unwittingly distributed viruses to their
customers, clients, and the press. In almost all cases, these viruses
have been distributed because the companies had an insufficiently
patched version of Outlook.
The mass-market software industry also has problems with
logic
bombs and Trojan horses. For example, in 1994, Adobe distributed a
version of the new Photoshop 3.0 for the Macintosh with a
"time bomb" designed to make the
program stop working at some point; the time bomb had inadvertently
been left in the program from the beta-testing cycle. In 2001 a
popular form-filling program named Gator started displaying its own
advertisements on top of other banner advertisements on web pages.
Later, the program started popping up windows with advertisements on
the user's desktop when the user was running other
programs. In 2002, we are seeing products, including
Microsoft's media player, shipped with end user
license agreements suggesting that they may automatically download
and install digital rights management and surveillance code without
user knowledge. Because commercial software is not distributed in
source code form, you cannot inspect a program to tell whether these
kinds of intentional "bugs" are present.
As with shrink-wrapped programs, shareware is also a mixed bag. Some
shareware sites have system administrators who are very conscientious
and go to great pains to scan their software libraries with viral
scanners before making them available for download. Other sites have
no controls, and allow users to place files directly in the download
libraries. In the spring of 1995, a program named
PKZIP30.EXE made its way around a variety of FTP
sites on the Internet and through America Online. This program
appeared to be the 3.0 beta release of PKZIP, a
popular DOS compression utility. But when the program was run, it
erased the user's hard disk. This kind of attack
seems to recur every 3-4 years.
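One partial defense, then and now, is to compare a downloaded file
against a digest that the author published through a separate,
trusted channel before ever running it. The sketch below illustrates
the idea; the expected digest shown is a placeholder, not a real
published value.

    import hashlib
    import sys

    def file_digest(path):
        """Return the SHA-256 digest of a file, read in chunks."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    # Placeholder for illustration only; obtain the real digest from
    # the software's author through a channel you trust.
    EXPECTED = "0" * 64

    if file_digest(sys.argv[1]) != EXPECTED:
        print("Digest mismatch: do NOT run this file.")
        sys.exit(1)
    print("Digest matches the published value.")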
26.2.3 Buggy Software
Consider the following, rather typical,
disclaimer on a piece of distributed software:
NO WARRANTY OF PERFORMANCE. THE PROGRAM AND ITS ASSOCIATED
DOCUMENTATION ARE LICENSED "AS IS"
WITHOUT WARRANTY AS TO THEIR PERFORMANCE, MERCHANTABILITY, OR FITNESS
FOR ANY PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE RESULTS AND
PERFORMANCE OF THE PROGRAM IS ASSUMED BY YOU AND YOUR DISTRIBUTEES.
SHOULD THE PROGRAM PROVE DEFECTIVE, YOU AND YOUR DISTRIBUTEES (AND
NOT THE VENDOR) ASSUME THE ENTIRE COST OF ALL NECESSARY SERVICING,
REPAIR, OR CORRECTION.
Software always has bugs. You install it on your disk, and under
certain circumstances, it damages your files or returns incorrect
results. The examples are legion. You may think that the software is
infected with a virus—it is certainly behaving as if it is
infected with a virus—but the problem is merely the result of
poor programming.
If the creators and vendors of the software don't
have confidence in their own software, why should you? If the vendors
disclaim " . . . warranty as to [its] performance,
merchantability, or fitness for any particular
purpose," then why are you paying them money and
using their software as a base for your business?
For too many software vendors, quality is not a priority. In most
cases, they license the software to you with a broad disclaimer of
warranty (similar to the one above) so there is little incentive for
them to be sure that every bug has been eradicated before they go to
market. The attitude is often one of
"We'll fix it in the next release,
after the customers have found all the major bugs."
Then they introduce new features with new flaws. Yet people wait in
line at midnight to be the first to buy software that is full of
errors and may erase their disks when they try to install it.
(Vendors counter by saying that users are not willing to pay for
quality, and that users value time-to-market far more than they do
quality or security.)
Other problems abound. Recall that the first study by Professor
Barton
Miller (cited in Chapter 16) found that more than
one-third of common programs supplied by several Unix vendors crashed
or hung when they were tested with a trivial program that generated
random input. Five years later, he reran the tests. The results?
Although most vendors had improved to where
"only" one-fourth of the programs
crashed, one vendor's software exhibited a 46%
failure rate! This failure rate occurred despite wide circulation and
publication of the report, and despite the fact that
Miller's team made the test code available to
vendors for free. Although we don't know of the same tests having
been run again recently, anecdotal experience suggests that similarly
dismal results should be expected from many of today's vendors.
Most frightening, the testing performed by Miller's
group is one of the simplest, least effective forms of testing that
can be performed (random, black-box testing). Do vendors do any
reasonable testing at all?
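For comparison, here is a minimal sketch of the sort of random
black-box test Miller's group performed: feed a program random bytes
on its standard input and see whether it crashes or hangs. (The
target program and trial counts are arbitrary choices of ours;
Miller's actual fuzz tools were more elaborate.)

    import random
    import subprocess

    TARGET = ["/usr/bin/sort"]   # any filter that reads stdin will do
    crashes = 0

    for trial in range(100):
        junk = bytes(random.randrange(256) for _ in range(4096))
        try:
            result = subprocess.run(TARGET, input=junk,
                                    capture_output=True, timeout=5)
            # A negative return code means the process died on a
            # signal (e.g., -11 for SIGSEGV) -- a "crash" in Miller's
            # terms.
            if result.returncode < 0:
                crashes += 1
        except subprocess.TimeoutExpired:
            crashes += 1         # count hangs as failures, too

    print("%d of 100 runs crashed or hung" % crashes)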
Consider the case of a software engineer from a major PC software
vendor who came to Purdue to recruit in 1995. During his
presentation, students reported that he stated that 2 of the top 10
reasons to work for his company were "You
don't need to bother with that software engineering
stuff—you simply need to love to code" and
"You'd rather write assembly code
than test software." As you might expect, the
company has developed a reputation for serious software quality
problems. What is somewhat surprising is that it continues to be a
market leader, year after year, and that people continue to buy its
software.
What are your vendor's policies about testing and
good software-engineering practices?
Or consider the case of someone who implements security features
without really understanding the "big
picture." As we noted in Section 16.7.2 in Chapter 16, a sophisticated encryption algorithm was
built into Netscape Navigator to protect credit card numbers in
transit on the network. Unfortunately, the implementation used a
weak, easily guessed initialization of the random number generator
used to create the session key. The result?
Someone with an account on a client machine could easily obtain
enough information to crack the key in a matter of seconds, using
only a small program.
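The lesson generalizes: when a key is derived from a pseudorandom
generator seeded with guessable values, an attacker need only
enumerate the plausible seeds. The toy sketch below (ours, not
Netscape's actual code) seeds a generator with the time of day and a
process ID, then recovers the "key" by brute force in seconds.

    import os
    import random
    import time

    def make_key(seed):
        """Toy key derivation: expand a seed into a 128-bit value."""
        return random.Random(seed).getrandbits(128)

    # The "server" seeds its generator with guessable values. (The
    # mask keeps this demo within the classic 15-bit Unix PID range.)
    key = make_key(int(time.time()) ^ (os.getpid() & 0x7fff))

    # The attacker knows roughly when the key was made and that PIDs
    # fall in a small range, so the search space is tiny.
    def find_seed(key, now):
        for t in range(now - 60, now + 1):
            for pid in range(1, 32768):
                if make_key(t ^ pid) == key:
                    return t ^ pid
        return None

    print("Recovered seed:", find_seed(key, int(time.time())))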
26.2.4 Hacker Challenges
Over the past dozen years, several vendors
have issued public announcements stating that their systems are
secure because they haven't been broken during
"hacker challenges." Usually, these
challenges involve some vendor putting its system on the Internet and
inviting all comers to take a whack in return for some token prize.
Then, after a few weeks or months, the vendor shuts down the site,
proclaims its product invulnerable, and advertises the results as
if they were a badge of honor.
But consider the following:
Few such "challenges" are conducted
using established testing techniques. They are ad hoc, random tests.
The fact that no problems are found does not mean that no problems
exist. The testers might not have recognized or exposed them yet.
(Consider how often software is released with bugs, even after
careful scrutiny.) Furthermore, how do you know that the testers will
report what they find? In some cases, the information may be more
valuable to the attackers later on, after the product has been sold
to many customers—because at that time,
they'll have more profitable targets to pursue.
Simply because the vendor does not report a successful penetration
does not mean that one did not occur—the vendor may choose not
to report it because it would reflect poorly on the product. Or the
vendor may not have recognized the penetration.
Challenges give potential miscreants a period to try to break into
the system without penalty. Challenges also give miscreants an excuse
if they are caught trying to break into the system later (e.g.,
"We thought the contest was still going
on").
Seldom do the really good experts, on either side of the fence,
participate in such exercises. Thus, anything done is usually done by
amateurs. (The "honor" of having
won the challenge is not sufficient to lure the good ones into the
challenge. Think about it. Good consultants can command fees of
several thousand dollars per day. Why should they effectively donate
their time and names for free advertising?)
Intruders will be reluctant to use their latest techniques on
challenges because the challenge machines are closely monitored. Why
reveal a technique in a challenge if that same technique could be
used to break into many other systems first?
Furthermore, the whole process sends the wrong message—that we
should build things and then try to break them (rather than building
them right in the first place), or that there is some prestige or
glory in breaking systems. We don't test the
strengths of bridges by driving over them with a variety of cars and
trucks to see if they fail, and pronounce them safe if no collapse
occurs during the test.
Some software designers could learn a lot from civil engineers. So
might the rest of us. In ancient times, if a house fell or a bridge
collapsed and injured someone, the engineer who designed it was
crushed to death in the rubble as punishment! Roman engineers were
required to stand under their bridges when they were used the first
few times. Some of those bridges are still standing—and in
use—2,000 years later.
Next time you see an advertiser using a challenge to sell a product,
you should ask if the challenge is really giving you more confidence
in the product . . . or convincing you that the vendor
doesn't have a clue as to how to really design and
test security.
If you think that a security challenge builds the right kind of
trust, then get in touch with us. We have these magic pendants. No
one wearing one has ever had a system broken into, despite challenges
to all the computer users who happened to be around when the systems
were developed. Thus, the pendants must be effective at keeping out
attackers. We'll be happy to sell some to you. After
all, we employ the same rigorous testing methodology as your security
software vendors, so our product must be reliable, right?
26.2.5 Security Bugs That Never Get Fixed
There is also the question of legitimate
software distributed by computer manufacturers that contains glaring
security holes. More than a year after the release of
sendmail Version 8, nearly every major Unix
vendor was still distributing its computers equipped with
sendmail Version 5. (Versions 6 and 7 were
interim versions that were never publicly released.) While Version 8 had many
improvements over Version 5, it also had many critical security
patches. Was the unwillingness of Unix vendors to adopt Version 8
negligence—a demonstration of their laissez-faire attitude
towards computer security—or merely a reflection of pressing
market conditions? Are the two really different?
How about the case in which many vendors released versions of TFTP
that, by default, allowed remote users to obtain copies of the
password file? What about versions of RPC that allow users to spoof
NFS by using proxy calls through the RPC system? What about software
that ships with a world-writable utmp file, enabling
a user to overwrite arbitrary system files? Each of these cases is a
well-known security flaw. In each case, the vendors did not provide
fixes for years—even now, they may not be fixed everywhere,
more than a decade after some of these were first identified as
problems.
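The first of these holes is easy to audit for. The sketch below
(ours; it assumes the classic Unix tftp client is on your PATH)
simply asks a host's TFTP server for /etc/passwd and reports whether
anything came back:

    import os
    import subprocess
    import sys
    import tempfile

    host = sys.argv[1]
    dest = os.path.join(tempfile.mkdtemp(), "passwd")

    # The classic Unix tftp client takes commands on standard input.
    commands = "get /etc/passwd %s\nquit\n" % dest
    try:
        subprocess.run(["tftp", host], input=commands.encode(),
                       capture_output=True, timeout=30)
    except subprocess.TimeoutExpired:
        pass

    if os.path.exists(dest) and os.path.getsize(dest) > 0:
        print("VULNERABLE: %s served its password file over TFTP." % host)
    else:
        print("%s did not serve /etc/passwd over TFTP." % host)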
Many vendors say that computer security is not a high priority
because they are not convinced that spending more money on computer
security will pay off for them. Computer companies are rightly
concerned with the amount of money that they spend on computer
security. Developing a more secure computer is an expensive
proposition that not every customer may be willing to pay for. The
same level of computer security may not be necessary for a server on
the Internet as for a server behind a corporate firewall, or on a
disconnected network. Furthermore, increased computer security will
not automatically increase sales. Firms that want security generally
hire staff who are responsible for keeping systems secure; users who
do not want (or do not understand) security are usually unwilling to
pay for it at any price, and frequently disable security when it is
provided.
On the other hand, a computer company is far better equipped to
safeguard the security of its operating system than an individual
user is. One reason is that a computer company has access to the
system's source code. A second reason is that most
large companies can easily devote two or three people to ensuring the
security of their operating system, whereas most businesses are
hard-pressed to devote even a single full-time employee to the job of
computer security.
This can be a place where open source software shines. By providing
the source code freely to hundreds or thousands of users (or more),
any security flaws present may be found more quickly in open source
operating systems and applications, and are more likely to be
disclosed and fixed. The patch that fixes the flaw is itself open to
review by the users, which tends to ensure that few security patches
are released that cause more damage than they prevent.
We believe that more and more computer users are beginning to see
system security and software quality as distinguishing features, much
in the way that they see usability, performance, and new
functionality as features. When a person breaks into a computer, over
the Internet or otherwise, the act reflects poorly on the maker of
the software. We hope that more computer companies will make software
quality at least as important as new features. Vendors that rush
fault-ridden code to the market should be penalized by their
customers, not rewarded for giving customers access to
"beta software."
26.2.6 Network Providers That Network Too Well
Network providers pose special
challenges for businesses and individuals. By their nature, network
providers have computers that connect directly to your computer
network, placing the provider (or perhaps a rogue employee at the
providing company) in an ideal position to launch an attack against
your installation. Also, providers are usually in possession of
confidential billing information belonging to the users. Some
providers even have the ability to directly make charges to a
user's credit card or deduct funds from a
user's bank account.
Dan Geer, a
well-known computer security professional, tells an interesting story
about an investment brokerage firm that set up a series of direct IP
connections between its clients' computers and the
computers at the brokerage firm. The purpose of the links was to
allow the clients to trade directly on the brokerage
firm's computer system. But as the client firms were
also competitors, the brokerage house equipped the link with a
variety of sophisticated firewall systems.
It turns out, says Geer, that although the firm had protected itself
from its clients, it did not invest the time or money to protect the
clients from each other. One of the firm's clients
used the direct connection to break into the system operated by
another client. A significant amount of proprietary information was
stolen before the intrusion was discovered.
In another case, a series of articles appearing in The New
York Times during the first few months of 1995 revealed
how hacker Kevin Mitnick allegedly broke into a computer
system operated by Netcom Communications. One of the things that
Mitnick is alleged to have stolen was a complete copy of
Netcom's client database, including the credit card
numbers for more than 30,000 of Netcom's customers.
Certainly, Netcom needed the credit card numbers to bill its
customers for service. But why were they placed on a computer system
that could be reached from the Internet? Why were they not encrypted?
Five years later, history repeated itself on a much larger scale, as
web sites belonging to numerous companies, including EggHead Software
and CD Universe, were broken into, and hundreds of thousands of
credit card numbers were covertly downloaded.
Think about all those services on the Web. They claim to use all
kinds of super encryption protocols to safeguard your credit card
number as it is sent across the network. But remember—you can
reach their machines via the Internet to make the transaction. What
kinds of safeguards do they have in place at their sites to protect
all the card numbers after
they're collected? Simply encrypting the connection
is not sufficient. As Spafford originally said in a conference
address in 1995:
Secure web servers are the equivalent of heavy armored cars. The
problem is, they are being used to transfer rolls of coins and checks
written in crayon by people on park benches to merchants doing
business in cardboard boxes from beneath highway bridges.