25.3. Getting Strategic and Policy Decisions Made
Strategic decisions need to be understood and made by top-level
management, or they will never be successfully implemented. If you
don't have top-level management support for security, you
aren't going to have security; it's that simple. Why
wouldn't you have support from top-level managers? Probably
because you haven't addressed their concerns in ways they
understand. Here are some things to consider in making your case.
25.3.1. Enlist Allies
You don't need to do all of this alone. In fact, you probably
can't. You will need an executive sponsor (somebody high up in
the organization -- a vice president or the equivalent, at least).
If you don't talk to people that high up on a regular basis,
you need somebody at an intermediate level who can help you figure
out who to talk to and how. It need not be anybody in your management
chain; anybody you can trust who passionately wants a security policy
will do. The most likely people are people in security (yes, the same
people who do locks on doors), legal, accounting, internal audit, and
quality certification. If anybody has suffered real pain from a
security incident recently, try that department -- it may include
sales or marketing if you had a publicized incident.
It's also often highly effective to bring in consultants. A
consultant has the authority of a certified Expert and has both more
patience and more skill at the necessary meetings than most technical
staff. The consultant also doesn't have the complicated
existing relationships with people and can afford to be a cheerful
scapegoat, forever after, for things people don't like. In this
situation, you want a consultant whose strengths are politics and
authority, not necessarily the most technical person you can find.
25.3.2. Involve Everybody Who's Affected
You may be the person with the best understanding of the technical
issues, but you aren't necessarily the person with the best
understanding of the institution's needs as a whole. Strategic
and policy decisions must be made by people working together. You
can't just come up with a policy you like, take it around to a
lot of people, and have them rubber-stamp it. Even if you manage to
get them to do it -- which may well be more difficult than
getting them to help make intelligent decisions -- they
won't actually follow it.
One major computer manufacturer had a policy forbidding dial-in
modems. Unfortunately, the company's centralized dial-in access
didn't satisfy all of its programmers. Although the
programmers couldn't request modem lines, some of them figured
out that they could unplug the telephone lines from fax machines,
connect them to modems, go home at night, and dial up their work
computers. Even more unfortunately, a programmer in one of the groups
with this habit was fired and proceeded to break into the site. He
systematically tried all the phone numbers in the range the company
had assigned to fax machines until he connected to one of the
redirected ones and got a login prompt from an unsecured machine
inside the corporate firewall. The former employee did significant
damage before he was detected and shut out. He was able to gain a lot
of time because the people trying to shut him out didn't know
the modems existed. When they did figure out that modems were
involved, the process of getting rid of them all proved to be tedious
and prolonged, because lines were diverted only when people planned
to use them.
That whole incident was the result of the fact that management and
system administrators had a policy that ignored some genuine needs of
the people using the computer facility. The official policy required
dial-in access to be so secure it was almost completely unusable, and
the unofficial policy required dial-in access to be so usable that it
was almost completely insecure. If a policy that allowed moderately
insecure dial-in access had been in place, the break-in might have
been avoided, and it certainly would have been easier to detect and
stop. It would also have been avoided if the programmers had agreed
that security was more important than dial-in access, but that kind
of agreement is much harder to achieve than a compromise.
In fact, there wasn't much actual disagreement between the
parties involved in this case. If the managers had been asked, they
would have said that letting people work from home was important to
them; they didn't understand that the existing dial-in system
was not providing acceptable service. If the programmers had been
asked, they would have said that preventing people from maliciously
deleting their work was important to them; they didn't
understand the risks of what they were doing. But nobody thought
about security and usability at the same time, and the result was
pure disaster.
25.3.3. Accept "Wrong" Decisions
You may find that the security policy you come up with is one you
don't particularly like. If this happens because the people who
made it don't understand what they've done, then you
should fight strongly to get it fixed. If, on the other hand, people
understand the risks, but they don't share your priorities, put
your objections down in writing and go ahead with the policies. Yes,
this will sometimes lead to disasters. Nonetheless, if you ask a
group to make a decision, you can't insist that it be your
decision. You also can't be sure that your way is the only
right way.
Sometimes managers have a genuine willingness to accept risks that
seem overwhelming to system administrators. For example, one computer
manufacturer chose to put one of their large and powerful machines on
an unprotected network and to give accounts on the machine to
customers and prospective customers upon request. The system
administrator thought it was a terrible idea and pointed out that the
machine was fundamentally impossible to secure; there were a large
number of accounts, changing rapidly, with no pattern, and they
belonged to people the company couldn't control. Furthermore,
the reason the company was giving out test accounts was that the
machine was a fast parallel processor, which also meant that it might
as well have been designed as the ultimate password-cracking machine.
To the system administrator, it seemed extremely likely that once
this machine was broken into (which was probably inevitable), it was
going to be used as a tool to break into other machines.
A battle ensued, and eventually, a compromise was reached. The
machine was made available, but extra security was employed to
protect internal networks from it. (It was a compromise because it
interfered with employees' abilities to use the machine, which
they needed to do to assist the outsiders who were using it.)
Management chose to accept the remaining risk that the machine would
be used as a platform to attack other sites, knowing that there was a
potential for extremely bad publicity as a result.
What happened? Sure enough, the machine was
compromised and was used to attack at least the internal networks.
The attacks on the internal networks were extremely annoying and cost
the company money in system administrators' time, but the
attacks didn't produce significant damage, and there was little
or no bad publicity. Management considered this expense to be
acceptable, however, given the sales generated by letting people
test-drive the machine. In this case, conflicting security policies
were resolved explicitly -- by discussion and compromise --
and the result was a policy that seemed less strong than the
original, but that provided sufficient protection. By openly and
intentionally choosing to accept a risk, the company brought it
within acceptable limits.
25.3.4. Present Risks and Benefits in Different Ways for Different People
You need to recognize that different people have different concerns.
Mostly, these concerns are predictable from their positions, but some
are personal. For example, suppose that:
- Your chief financial officer is concerned about the cost of security,
or the cost of not having enough security.
- Your chief executive officer is concerned about the negative
publicity a security incident involving your site could bring, or
about potential loss or theft of intellectual property via the
Internet or other network connectivity.
- A department chair is concerned that tenure reviews will be revealed.
- A mid-level manager is concerned employees are squandering all their
time reading Usenet news or surfing the Web.
- Another mid-level manager is concerned employees are importing
virus-infected PC software from the Internet.
- Still another mid-level manager is concerned about how best to provide
technical support to customers over the Internet.
- A professor on sabbatical is concerned his or her data won't be
accessible from other institutions.
- An instructor is concerned that students are stealing answers from
each other or tests from instructors.
- Users are concerned about the availability of Internet services they
feel are vital for their jobs.
- Users are concerned they won't be able to work together if
there are too many security issues.
- Students are concerned they won't be able to play with the
computers, which is a part of how they learn.
- Graduate students and project managers are concerned that security
measures are going to slow down projects with strict time lines.
You need to take the time to discover all of these different,
legitimate concerns and address them. You may also decide that these
various people should be worried about some
things, but aren't because they don't know any better;
you have to educate them about those issues. This means you need to
take the time to understand their jobs, what they want to accomplish
with the network, and how well they appreciate the security issues.
Talk to each of these people in terms they care about. This requires
a lot of listening, and probably some research, before you ever start
talking. To managers, talk about things like probable costs and
potential losses; to executives, talk about risk versus benefit; and
to technical staff, talk about capabilities. Before you present a
proposal, be prepared with an explanation that suits your
audience's point of view and technical level. If you have
trouble understanding or communicating with a particular group, you
may find it helps to build a relationship with someone who
understands that group and can translate for you.
Be prepared to think about other people's issues in other
people's terms, which means that you're going to give
different explanations to different people. You're not trying
to deceive anybody. The basic information is the same, no matter who
you're talking to. On the other hand, if a particular decision
saves money and makes for a more enjoyable working environment, you
don't go to the chief financial officer and say "We want
to do it this way because it's more fun", and then go to the
programmers and say "We want to do it this way because
it's cheaper".
If you are a technical person, you may initially despair at the idea
that you need to discuss security in terms of money. In particular,
you may feel that you can't possibly come up with the
"right" answer. You don't need to come up with the
right answer. Nobody can say precisely how much a given security
policy costs -- hardware and software costs are usually
easy, but then you have the time to set it up, the management
meetings to argue about it, the maintenance, the extra five minutes a
day for every programmer to log in, the changes to other systems.
Saying how much money it saves is even worse; generally, the
worst-case possibility is utter disaster, costing any amount of money
your imagination can dream up or your organization can dredge up from
the bottom of its pockets. However, that's so implausible you
can't use it, so you have to guess how much more mundane
incidents will cost you and how. Will people sue you? Will you lose
customers? Will you lose control of a valuable asset? This process is
not going to come up with answers that will make a technical person
happy. That's OK. Come up with a method of estimating that you
find plausible and that gives the results you want, attach equally
plausible numbers to it, chart them, and present them. You can be
perfectly honest about the fact that they're imprecise; the
important thing is that you have numbers, and that you believe the
justification for those numbers, no matter how accurate (or
inaccurate) you think the result is. In general, you're not
expected to produce absolute truth.
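As a purely illustrative sketch, the arithmetic might look something
like the following bit of Python. Every figure in it (rates, hours,
head counts) is an invented assumption, not a recommendation; the
point is only that a rough, explicit model you can chart is better
than no model at all.

    # Back-of-the-envelope estimate of what a security measure costs per year.
    # Every number is an assumption invented for illustration; substitute your
    # own guesses and be open about how rough they are.

    hardware_and_software = 12_000     # one-time purchase, amortized over 3 years
    setup_hours = 80                   # staff time to install and configure
    meeting_hours = 40                 # management meetings to argue about it
    maintenance_hours_per_year = 100   # patching, log review, and so on
    staff_rate = 75                    # loaded hourly cost of technical staff

    programmers = 50
    extra_minutes_per_day = 5          # the slower login everyone complains about
    working_days = 230
    programmer_rate = 60               # loaded hourly cost of a programmer

    one_time = hardware_and_software + (setup_hours + meeting_hours) * staff_rate
    yearly_overhead = (maintenance_hours_per_year * staff_rate
                       + programmers * (extra_minutes_per_day / 60)
                       * working_days * programmer_rate)

    annual_cost = one_time / 3 + yearly_overhead
    print(f"Estimated annual cost: ${annual_cost:,.0f}")

With these made-up numbers, the five minutes a day across fifty
programmers dwarfs the hardware purchase; that is exactly the kind of
result a simple chart makes vivid to a nonfinancial audience.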
25.3.5. Avoid Surprises
When it comes to security, nobody likes surprises. That's why
you need to make sure that the relevant people understand the
relevant issues and are aware of, and agree with (or at least agree
to abide by), the decisions made concerning those issues.
In particular, people need to know about the consequences of their
decisions, including best, worst, and probable outcomes. Consequences
that are obvious to you may not be obvious to other people. For
example, people who are not knowledgeable about Unix may be quite
willing to give out root passwords. They don't realize what the
implications are, and they may be very upset when they find out.
People who have been surprised often overreact. They may go from
completely unconcerned to demanding the impossible. One good
break-in, or even a prank, can convert people from not understanding
all the fuss about passwords to inquiring about the availability of
voiceprint identification and machine gun turrets. (It's
preferable to get them to make decisions while they are mildly
worried, instead of blindly panicked!)
25.3.6. Condense to Important Decisions, with Implications
When you're asking a top manager to decide issues of policy,
present only the decision to be made and the pros, cons, and
implications of the various options -- not a lot of extraneous
decisions. For example, you shouldn't waste your CEO's
time by asking him or her to decide whether you should run Sendmail
or Microsoft Exchange as your mailer, or whether you should use
NetBEUI or TCP/IP as the primary transport on internal networks;
those decisions are primarily technical and should be resolved by the
relevant technical staff and managers. On the other hand, you may
need to call upon your CEO to decide strategic issues regarding mail,
such as whether or not everyone in the organization is to have email
access, or only certain people (and if it's to be limited, to
whom).
Don't offer people decisions unless they have both the
authority and the information with which to make those decisions. You
don't want somebody to get attached to a decision, only to have
it overruled from higher up (or worse yet, from somebody at their
level but with the appropriate span of control). Always make it clear
why they're being asked to decide (instead of having the
decision made somewhere else).
In most cases, you want to avoid open-ended questions. It's
better to ask "Should we invest in a single defensive system, or
should we try to protect each machine
individually?" than "What do you think we should do about
Internet security?" (The open question gives the replier the
option of saying "nothing", which is probably not an
answer you're going to be happy with.) In most cases,
it's better yet to say "Should we spend about $5,000 on a
single defensive system, or $15,000 on protecting each machine
individually?"
25.3.7. Justify Everything Else in Terms of Those Decisions
All of the technical and implementation decisions you make should
follow from the high-level guidance you've obtained from your
top managers and executives. If you don't see which way you
should go with a technical issue because it depends on nontechnical
issues, you may need to request more guidance on that issue. Again,
explain clearly the problem; the options; and the pros, cons, and
implications of each option.
When you explain policies or procedures, explain them in terms of the
original decisions. Show people the reasoning process. If you find
that you can't do so, either the original decisions
didn't cover some issues that are important to you (maybe so
important you didn't think they needed to be mentioned), or the
policies and procedures are unfounded and possibly unreasonable.
25.3.8. Emphasize that Many Issues Are Management and Personnel Issues, not Technical Issues
Certain problems, which some people try to characterize or solve as
technical problems, are really management or personnel problems. For
example, some managers worry that their employees will spend all
their time at work reading Usenet news or surfing the Web. However,
this is not a technical problem but a personnel problem -- the
online equivalent of employees spending the day at their desks
reading the newspaper or doing crossword puzzles.
Another common example of misdirected concern involves managers
worrying that employees will distribute confidential information over
the Internet. Again, this usually isn't a technical problem;
it's a management problem. The same employee who could email
your source code to a competitor could also carry it out the door in
his pocket on a Zip disk (generally far more conveniently and with
less chance of being caught). It is irrational to place technological
restrictions on information that can be sent out by email unless you
also check everybody's bags and pockets as they leave the
premises.
25.3.9. Don't Assume That Anything Is Obvious
Certain things that seem obvious to a technical person who is
interested in security may not be at all obvious to nontechnical
managers and executives. As we've mentioned, it's obvious
to anyone who understands IP that packet filtering will allow you to
restrict access to services by IP addresses, but not by user (unless
you can tie specific users to specific IP addresses). Why? Because
"user" is not a concept in IP, and nothing in the IP
packet reflects what "user" is responsible for that
packet. Conversely, certain things that seem obvious to managers and
executives are not at all obvious to technical staff -- for
example, that the public's perception (which is often
incomplete or simply incorrect) of a problem at your company is often
more important than the technical "truth" of the matter.
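If it helps to make the packet filtering point concrete for a
nontechnical audience, the toy filter below (sketched in Python, with
example addresses and names invented for illustration rather than
taken from any real product) shows that an address is all such a
filter ever gets to look at:

    import ipaddress

    # A toy packet filter: rules can only match on addresses, because that
    # is all an IP header carries. There is no "user" field for a rule to
    # test.
    ALLOWED_SOURCES = [
        ipaddress.ip_network("192.0.2.0/24"),      # an internal subnet
        ipaddress.ip_network("198.51.100.17/32"),  # one trusted outside host
    ]

    def permit(source_ip):
        """Return True if a packet from source_ip should be let through."""
        addr = ipaddress.ip_address(source_ip)
        return any(addr in net for net in ALLOWED_SOURCES)

    print(permit("192.0.2.42"))   # True: the address is on the list
    print(permit("203.0.113.9"))  # False: we know where it came from, not who

The decision is made entirely on where the packet claims to come
from; nothing in it says which user is responsible for it.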