


2.2 Security and Unix

Many years ago, Dennis Ritchie said this about the security of Unix: "It was not designed from the start to be secure. It was designed with the necessary characteristics to make security serviceable." In other words, Unix can be secured, but any particular Unix system may not be secure when it is distributed.

Unix is a multiuser, multitasking operating system. Multiuser means that the operating system allows many different people to use the same computer at the same time. Multitasking means that each user can run many different programs simultaneously.

One of the natural functions of such operating systems is to prevent different people (or programs) using the same computer from interfering with each other. Without such protection, a wayward program (perhaps written by a student in an introductory computer science course) could affect other programs or other users, could accidentally delete files, or could even crash (halt) the entire computer system. To keep such disasters from happening, some form of computer security has always had a place in the Unix design philosophy.

But Unix security provides more than mere memory protection. Unix has a sophisticated security system that controls the ways users access files, modify system databases, and use system resources. Unfortunately, those mechanisms don't help much when the systems are misconfigured, are used carelessly, or contain buggy software. Nearly all of the security holes that have been found in Unix over the years have resulted from these kinds of problems rather than from shortcomings in the intrinsic design of the system. Thus, nearly all Unix vendors believe that they can (and perhaps do) provide a reasonably secure Unix operating system. We believe that Unix systems can be fundamentally more secure than other common operating systems. However, there are influences that work against better security in the Unix environment.

2.2.1 Expectations

The biggest problem with improving Unix security is arguably one of expectations. Many users have grown to expect Unix to be configured in a particular way. Their experience with Unix in academic, hobbyist, and research settings has always been that they have access to most of the directories on the system and that they have access to most commands. Users are accustomed to making their files world-readable by default. Users are also often accustomed to being able to build and install their own software, frequently requiring system privileges to do so. The trend in "free" versions of Unix for personal computer systems has amplified these expectations.

Unfortunately, all of these expectations are contrary to good security practice in the business world. To have stronger security, system administrators must often curtail access to files and commands that are not required for users to do their jobs. Thus, someone who needs email and a text processor for his work should not also expect to be able to run the network diagnostic programs and the C compiler. Likewise, to heighten security, users should not be able to install software that has not been examined and approved by a trained and authorized individual.

The tradition of open access is strong, and is one of the reasons that Unix has been attractive to so many people. Some users argue that to restrict these kinds of access would make the systems something other than Unix. Although these arguments may be valid, restrictive measures are needed in instances where strong security is required.

At the same time, administrators can strengthen security by applying some general security principles, in moderation. For instance, rather than removing all compilers and libraries from each machine, these tools can be protected so that only users in a certain user group can access them. Users with a need for such access, and who can be trusted to take due care, can be added to this group. Similar methods can be used with other classes of tools, too, such as network monitoring software.

The most critical aspect of enhancing Unix security is to get users themselves to participate in the alteration of their expectations. The best way to meet this goal is not by decree, but through education and motivation. Technical security measures are crucial, but experience has proven repeatedly that people problems are not amenable to technological solutions.

Many users started using Unix in an environment that was less threatening than the one they face today. By educating users about the dangers of lax security, and how their cooperation can help to thwart those dangers, the security of the system is increased. By properly motivating users to participate in good security practice, you make them part of the security mechanism. Better education and motivation work well only when applied together, however; education without motivation may mean that security measures are not actually applied, and motivation without education leaves gaping holes in what is done.

2.2.2 Software Quality

Large portions of the Unix operating system and utilities that people take for granted were written as student projects, or as quick "hacks" by software developers inside research labs or by home hobbyists experimenting with Linux. These programs were not formally designed and tested: they were put together and debugged on the fly.[7] The result is a large collection of tools and OS code that usually works, but sometimes fails in unexpected and spectacular ways. Utilities were not the only things written by non-experts. Much of BSD Unix, including the networking code, was written by students as research projects of one sort or another—and these efforts sometimes ignored existing standards and conventions. Many of the drivers and extensions to Linux have also been written and tested under varying levels of rigor, and often by programmers with less training and experience than Berkeley graduate students.

[7] As one of this book's technical reviewers suggests, developers today may be even less likely to spend time in the careful design of code than in the past. In the days when computers ran slowly and compile time was a scarce and valuable resource, time spent ensuring that the program would behave properly when compiled was a good investment. Today, software compilation is so fast that the temptation to repeatedly compile, test, debug, and recompile may lead to a greater reliance on discovering bugs in testing, rather than preventing them in design.

This analysis is not intended to cast aspersions on the abilities of those who wrote all this code; we wish only to point out that most of today's versions of Unix were not created as carefully designed and tested systems. Indeed, a considerable amount of the development of Unix and its utilities occurred at a time when good software engineering tools and techniques were not yet developed or readily available.[8] The fact that occasional bugs are discovered that result in compromises of the security of some systems should be no surprise! (However, we do note that there is a very large range between, for example, the frequency of security flaws announced for OpenBSD and Red Hat Linux.)

[8] Some would argue that they are still not available. Few academic environments currently have access to modern software engineering tools because of their cost, and few vendors are willing to provide copies at prices that academic institutions can afford. It is certainly the case that typical home contributors to a *BSD or Linux system code base do not have access to advanced software engineering tools (even if they know how to use them).

Unfortunately, two things are not occurring as a result of the discovery of faults in the existing code. The first is that software designers do not seem to be learning from past mistakes. Consider that buffer overruns (mostly resulting from fixed-length buffers and functions that do not check their arguments) have been recognized as a major problem area for decades, yet critical software containing such bugs continues to be written—and exposed. For instance, a fixed-length buffer overrun in the gets( ) library call was one of the major propagation modes of the Internet worm of 1988. Yet as we were working on the second edition of this book in late 1995, news of yet another buffer overrun security flaw surfaced—this time in the BSD-derived syslog( ) library call. During preparation of the third edition in 2002, a series of security advisories were issued for the Apache web server, the ssh secure login server, and various Microsoft programs, all because of buffer overflows. It is inexcusable that software continues to be formally released with these kinds of problems in place.

A more serious problem than any particular flaw is the fact that few, if any, vendors are performing an organized, well-designed program of design and testing on the software they provide. Although many vendors test their software for compliance with industry "standards," few apparently test their software to see what it does when presented with unexpected data or conditions. According to one study, as much as 40% of the utilities on some machines may have significant problems.[9] One might think that vendors would be eager to test their new versions of the software to correct lurking bugs. However, as more than one vendor's software engineer has told us, "The customers want their Unix—including the flaws—exactly like every other implementation. Furthermore, it's not good business: customers will pay extra for performance, but not for better testing."

[9] See the reference to the papers by Barton Miller, et al., given in Appendix C. Note that they found similar problems in Windows, so the problems are clearly not limited to Unix-like systems.

As long as users demand strict conformance of behavior to existing versions of the programs, and as long as software quality is not made a fundamental acquisition criterion by those same users, vendors and producers will most likely do very little to systematically test and fix their software. Formal standards, such as the ANSI C and POSIX standards, help perpetuate and formalize these weaknesses, too. For instance, the ANSI C standard[10] perpetuates the gets( ) library call, forcing Unix vendors either to support the call or to ship systems at a competitive disadvantage because they do not comply with the standard.

[10] ANSI X3J11.

We should note that these problems are not confined to the commercial versions of Unix. Many of the open software versions of Unix also incorporate shoddy software. In part, this is because contributors have variable levels of skill and training. Furthermore, these contributors are generally more interested in providing new functionality than they are in testing and fixing flaws in existing code. There are some exceptions, such as the careful code review conducted on OpenBSD, but, paradoxically, the code that is more carefully tested and developed in the open software community also seems to be the code that is least used.

2.2.3 Add-on Functionality Breeds Problems

One final influence on Unix security involves the way that new functionality has been added over the years. Unix is often cited for its flexibility and reuse characteristics; therefore, new functions are constantly built on top of Unix platforms and are eventually integrated into released versions. Unfortunately, the addition of new features is often done without understanding the assumptions that were made with the underlying mechanisms and without concern for the added complexity presented to the system operators and maintainers. Applying the same features and code in a heterogeneous computing environment can also lead to problems.

As a special case, consider how large-scale computer networks such as the Internet have dramatically changed the security ground rules from those under which Unix was developed. Unix was originally developed in an environment where computers did not connect to each other outside of the confines of a small room or research lab. Networks today interconnect hundreds of thousands of machines, and millions of users, on every continent in the world. For this reason, each of us confronts issues of computer security directly: a doctor in a major hospital might never imagine that a postal clerk on the other side of the world could pick the lock on her desk drawer to rummage through her files, yet this sort of thing happens on a regular basis to "virtual desk drawers" on the Internet.

Most colleges and many high schools now grant network access to all of their students as a matter of course. The number of primary schools with network access is also increasing, with initiatives in many U.S. states to put a networked computer in every classroom. Granting telephone network access to a larger number of people increases the chances of telephone abuse and fraud, just as granting widespread computer network access increases the chances that the access will be used for illegitimate purposes. Unfortunately, the alternative of withholding access is equally unappealing. Imagine operating without a telephone because of the risk of receiving prank calls!

The foundations and traditions of Unix network security were profoundly shaped by the earlier, more restricted view of networks, and not by our more recent experiences. For instance, the concept of user IDs and group IDs controlling access to files was developed at a time when the typical Unix machine was in a physically secure environment. On top of this were added remote manipulation commands such as rlogin and rcp, designed to reuse the user-ID/group-ID paradigm along with the concept of "trusted ports" for network connections. Within a local network in a closed lab, using only relatively slow computers, this design (usually) worked well. But now, with the proliferation of workstations and non-Unix machines on international networks, this design, with its implicit assumptions about restricted access to the network, leads to major weaknesses in security.[11]

[11] Internet pioneer Bob Metcalfe warned of these dangers in 1973, in RFC 602. That warning, and others like it, went largely unheeded.

Not all of these insecure foundations were laid by Unix developers. The IP protocol suite on which the Internet is based was initially developed outside of Unix, and it was developed without sufficient concern for authentication and confidentiality. This lack of concern has enabled cases of password sniffing and IP sequence spoofing to occur, and these make news as "sophisticated" attacks.[12] (These attacks are discussed in Chapter 11.)

[12] To be fair, the designers of TCP/IP were aware of many of the problems. However, they were more concerned about making everything work so they did not address many of the problems in their design. The problems are really more the fault of people trying to build critical applications on an experimental set of protocols before the protocols were properly refined—a familiar problem.

Another facet of the problem has to do with the "improvements" made by each vendor. Rather than providing a unified, simple interface to system administration across platforms, each vendor has created its own set of commands and functions. In many cases, these changes are genuine improvements for the administrator. However, there are now hundreds (perhaps thousands) of new commands, options, shells, permissions, and settings that the administrator of a heterogeneous computing environment must understand and remember. Many of the commands and options resemble each other but have different meanings depending on the environment in which they are used. The result can often be disaster when the poor administrator suffers momentary confusion about the system or has a small lapse in memory. This complexity further complicates the development of tools that are intended to provide cross-platform support and control. For a "standard" operating system, Unix is one of the most nonstandard systems to administer.

That such difficulties arise is both a tribute to Unix and a condemnation. The robust nature of Unix enables it to accept and support new applications by building on the old. However, existing mechanisms are sometimes completely inappropriate for the tasks assigned to them. Rather than being a condemnation of Unix itself, such shortcomings are actually an indictment of the developers for failing to give more consideration to the human and functional ramifications of building on the existing foundation.

Here, then, is a conundrum: to rewrite large portions of Unix and the protocols underlying its environment, or to fundamentally change its structure, would be to attack the very reasons that Unix has become so widely used. Furthermore, such restructuring would be contrary to the spirit of standardization that has been a major factor in the wide acceptance of Unix. At the same time, without re-evaluation and some restructuring, there is serious doubt about the level of trust that can be placed in the system. Ironically, the same spirit of development and change is what has led Unix to its current niche.

2.2.4 The Failed P1003.1e/2c Unix Security Standard

In 1994, work was started within the Unix community on developing a set of security extensions to the POSIX Unix standard. This standardization effort was known as POSIX P1003.1e/2c.

The ambitious project hoped to create a single Unix security standard composed of the key security building blocks missing from the underlying Unix design. These included:

  • Access control lists (ACLs), so that specific individuals or groups of individuals could be given (or denied) access to specific files

  • Data labeling, allowing classified and confidential data to be labeled as such

  • Mandatory access control, so that individuals would be unable to override certain security decisions made by the system management

  • Capabilities that could be used to place restrictions on processes running as the superuser

  • Standardized auditing and logging

Work on this project continued until October 1997 when, despite good intentions on the part of the participants and the sponsoring vendors, the draft standard was officially withdrawn and the P1003.1e and P1003.2c committees were disbanded. The final drafts of the documents can be downloaded from http://wt.xpilot.org/publications/posix.1e/.

Many factors were responsible for the failure of the P1003.1e/2c standards effort. Because the standards group sought to create a single standard, areas of disagreement prevented the committee from publishing and adopting smaller standards covering the areas of consensus. Then a year's worth of work vanished when the "source document" for the standard was lost.

Today, most vendors that sell trusted versions of Unix implement some aspects of the P1003.1e/2c draft standard. Furthermore, the draft has been used as the basis of the Linux capabilities system and the BSD filesystem ACLs. So even though the standards effort was not adopted, it has had a lasting impact.
