3.7 The Problem with Security Through Obscurity
We'd like to close this
chapter on policy formation with a few words about knowledge. In
traditional security, derived largely from military intelligence,
there is the concept of "need to
know." Information is partitioned, and you are given
only as much as you need to do your job. In environments where
specific items of information are sensitive or where inferential
security is a concern, this policy makes considerable sense. If three
pieces of information together can form a damaging conclusion and no
one has access to more than two, you can ensure confidentiality.
In a computer operations environment, applying the same need-to-know
concept is usually not appropriate. This is especially true if you
find yourself basing your security on the fact that something
technical is unknown to your attackers. Far from helping, such
obscurity can actively hurt your security.
Consider
an environment where management decides to keep the manuals away from
the users to prevent them from
learning about commands and options that might be used to crack the
system. Under such circumstances, the managers might believe they
have increased their security, but they probably have not. A
determined attacker will find the same documentation
elsewhere—from other users or from other sites. Extensive
amounts of Unix documentation are as close as the nearest bookstore!
Management cannot close down all possible avenues for learning about
the system.
In the meantime, the local users are likely to make less efficient
use of the machine because they are unable to view the documentation
and learn about more efficient options. They are also likely to have
a poorer attitude because the implicit message from management is
"We don't completely trust you to
be a responsible user." Furthermore, if someone does
start abusing commands and features of the system, management may not
have a pool of talent to recognize or deal with the problem. And if
something should happen to the one or two users authorized to access
the documentation, there is no one with the requisite experience or
knowledge to step in or help out.
3.7.1 Keeping Secrets
Keeping bugs or features secret to
protect them is also a poor approach to security. System developers
often insert back doors in
their programs to let them gain privileges without supplying
passwords (see Chapter 19). Other times, system
bugs with profound security implications are allowed to persist
because management assumes that nobody knows of them. The problem
with these approaches is that features and problems in the code have
a tendency to be discovered by accident or by determined attackers.
The fact that the bugs and features are kept secret means that they
are unwatched, and probably unpatched. Once such a problem is
discovered, it leaves every similar system open to attack by those
who know of it.
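As a purely hypothetical sketch (the routine, its name, and the
hard-coded string below are invented for illustration, not drawn from
any real system), a back door of this kind often amounts to nothing
more than a few extra lines in an authentication check:

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical illustration only: an authentication routine
     * with a developer back door.  All names and strings invented. */
    static int check_password(const char *supplied, const char *correct)
    {
        /* The back door: a hard-coded maintenance password that
         * bypasses the real check entirely. */
        if (strcmp(supplied, "letmein-dev") == 0)
            return 1;                  /* access granted, no password */

        return strcmp(supplied, correct) == 0;
    }

    int main(void)
    {
        /* Prints 1: the back door grants access without the password. */
        printf("%d\n", check_password("letmein-dev", "real-password"));
        return 0;
    }

Anyone who runs strings(1) over the binary, or who simply watches a
developer log in, can stumble across the magic password; the moment
that happens, every copy of the program in the field is compromised
at once.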
Keeping
algorithms, such as a locally developed encryption algorithm, secret
is also of questionable value. Unless you are an expert in
cryptography, you most likely can't analyze the
strength of your algorithm. The result may be a mechanism that has a
serious flaw in it. An algorithm that is kept secret
isn't scrutinized by others, and thus someone who
does discover the hole may have free access to your data without your
knowledge.
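To see how badly this can go wrong, consider a hypothetical homegrown
cipher, invented here purely for illustration, that
"encrypts" by XORing every byte of a message with a
single secret key byte. A known-plaintext attack recovers the key
from one byte of ciphertext:

    #include <stdio.h>

    /* Hypothetical homegrown "encryption," invented for illustration:
     * XOR every byte of the message with one secret key byte. */
    static void xor_crypt(unsigned char *buf, size_t len, unsigned char key)
    {
        for (size_t i = 0; i < len; i++)
            buf[i] ^= key;
    }

    int main(void)
    {
        unsigned char msg[] = "PAY TO: alice";
        size_t len = sizeof(msg) - 1;   /* exclude the trailing NUL */

        xor_crypt(msg, len, 0x5A);      /* "encrypt" with the secret key */

        /* An attacker who guesses that the message begins with 'P'
         * recovers the key from a single byte of ciphertext... */
        unsigned char key = msg[0] ^ 'P';

        xor_crypt(msg, len, key);       /* ...and decrypts everything */
        printf("recovered: %s\n", msg);
        return 0;
    }

The algorithm's secrecy is its only protection, and a single correct
guess about the plaintext destroys it. A published, well-analyzed
cipher with a secret key does not fall to so trivial an attack.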
Likewise, keeping the source code of
your operating system or application secret is no guarantee of
security. Those who are determined to break into your system will
occasionally find security holes, with or without source
code. But without the source code, users
cannot carry out a systematic examination of a program for problems.
Thus, there may be some small benefit to keeping the code hidden, but
it shouldn't be depended on.
The key is attitude. Defensive measures that are based primarily on
secrecy lose their value if their secrecy is breached. Even worse,
when maintaining secrecy restricts or prevents auditing and
monitoring, it can be impossible to determine whether secrecy has
been breached. You are better served by algorithms and mechanisms
that are inherently strong, even if they're known to
an attacker. The very fact that you are using strong, known
mechanisms may discourage an attacker and cause the idly curious to
seek excitement elsewhere. Putting your money in a wall safe is
better protection than depending on the fact that no one knows that
you hide your money in a mayonnaise jar in your refrigerator.
3.7.2 Responsible Disclosure
Despite
our objection to "security through
obscurity," we do not advocate that you widely
publicize new security holes the moment that you find them. There is
a difference between secrecy and prudence! If you discover a security
hole in distributed or widely available software, you should
quietly report it to the vendor as soon as
possible. We also recommend that you report it to one of the FIRST
teams (described in Appendix E). These
organizations can take action to help vendors develop patches and see
that they are distributed in an appropriate manner.
If you "go public" with a security
hole, you endanger all of the people who are running that software
but who don't have the ability to apply fixes. In
the Unix environment, many users are accustomed to having the source
code available to make local modifications to correct flaws.
Unfortunately, not everyone is so lucky, and many people have to wait
weeks or months for updated software from their vendors. Some sites
may not even be able to upgrade their software because
they're running a turnkey application, or one that
has been certified in some way based on the current configuration.
Other systems are being run by individuals who don't
have the necessary expertise to apply patches. Still others are no
longer in production, or are at least out of maintenance. Always act
responsibly. It may be better to circulate a patch without explaining
or implying the underlying vulnerability than to give attackers
details on how to break into unpatched systems.
We have seen many instances in which a well-intentioned person
reported a significant security problem in a very public forum.
Although the person's intention was to elicit a
rapid fix from the affected vendors, the result was a wave of
break-ins to systems where the administrators did not have access to
the same public forum, or were unable to apply a fix appropriate for
their environment.
Posting details of the latest security vulnerability in your system
to a mailing list when no patch is available will not only endanger
many other sites; it may also open you to civil action for damages if
that flaw is used to break into those sites. If you are concerned
with your security,
realize that you're a part of a community. Seek to
reinforce the security of everyone else in that community as
well—and remember that you may need the assistance of others
one day.
Some
security-related information is rightfully confidential. For
instance, keeping your passwords from becoming public knowledge makes
sense. This is not an example of security through obscurity. Unlike a
bug or a back door in an operating system that gives an attacker
superuser powers, passwords are designed to be kept secret and should
be routinely changed to remain so.