
2.1 History of Unix

The roots of Unix[1] go back to the mid-1960s, when American Telephone and Telegraph, Honeywell, General Electric, and the Massachusetts Institute of Technology embarked on a massive project to develop an information utility. The goal was to provide computer service 24 hours a day, 365 days a year—a computer that could be made faster by adding more parts, much in the same way that a power plant can be made bigger by adding more furnaces, boilers, and turbines. The project, heavily funded by the Department of Defense Advanced Research Projects Agency (ARPA, also known as DARPA), was called Multics.

[1] A more comprehensive history of Unix, from which some of this chapter is derived, is Peter Salus's book, A Quarter Century of UNIX (Addison-Wesley).

2.1.1 Multics: The Unix Prototype

Multics (which stands for Multiplexed Information and Computing Service) was designed to be a modular system built from banks of high-speed processors, memory, and communications equipment. By design, parts of the computer could be shut down for service without affecting other parts or the users. Although this level of processing is assumed for many systems today, such a capability was not available when Multics was begun.

Multics was also designed with military security in mind, both to be resistant to external attacks and to protect the users on the system from each other. By design, Top Secret, Secret, Confidential, and Unclassified information could all coexist on the same computer: the Multics system was designed to prevent information that had been classified at one level from finding its way into the hands of someone who had not been cleared to see that information. Multics eventually provided a level of security and service that is still unequaled by many of today's computer systems—including, perhaps, Unix.

Great plans, but in 1969 the Multics project was far behind schedule. Its creators had promised far more than they could deliver within their projected time frame. Already at a disadvantage because of the distance between its New Jersey laboratories and MIT, AT&T decided to pull out of the Multics Project.

That year, Ken Thompson, an AT&T researcher who had worked on Multics, took over an unused PDP-7 computer to pursue some of the ideas on his own. Thompson was soon joined by Dennis Ritchie, who had also worked on Multics. Peter Neumann suggested the name Unix for the new system. The name was a pun on the name Multics and a backhanded slap at the project that was continuing in Cambridge (which did, in fact, continue for another decade and a half). Whereas Multics tried to do many things, Unix tried to do one thing well: run programs. Strong security was not part of this goal.

2.1.2 The Birth of Unix

The smaller scope was all the impetus that the researchers needed; an early version of Unix was operational several months before Multics. Within a year, Thompson, Ritchie, and others rewrote Unix for Digital's new PDP-11 computer.

As AT&T's scientists added features to their system throughout the 1970s, Unix evolved into a programmer's dream. The system was based on compact programs, called tools, each of which performed a single function. By putting tools together, programmers could do complicated things. Unix mimicked the way programmers thought. To get the full functionality of the system, users needed access to all of these tools—and in many cases, to the source code for the tools as well. Thus, as the system evolved, nearly everyone with access to the machines aided in the creation of new tools and in the debugging of existing ones.
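For example, a question such as "which users have the most login sessions right now?" could be answered by chaining four small tools together, each one reading the previous tool's output. The commands in this sketch are standard Unix tools, but the session and its output are invented for illustration:

    % who | awk '{print $1}' | sort | uniq -c | sort -rn
       4 karen
       2 dave
       1 root

Here, who lists the current sessions, awk extracts the username column, sort groups identical names together, uniq -c counts each group, and the final sort -rn orders the counts from largest to smallest. None of these programs knows anything about the others; each simply reads a stream of text and writes a stream of text.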

In 1973, Thompson rewrote most of Unix in Ritchie's newly invented C programming language. C was designed to be a simple, portable language. Programs written in C could be moved easily from one kind of computer to another—as was the case with programs written in other high-level languages like FORTRAN—yet they ran nearly as fast as programs coded directly in a computer's native machine language.

At least, that was the theory. In practice, every different kind of computer at Bell Labs had its own operating system. C programs written on the PDP-11 could be recompiled on the lab's other machines, but they didn't always run properly, because every operating system performed input and output in slightly different ways. Mike Lesk developed a "portable I/O library" to overcome some of the incompatibilities, but many remained. Then, in 1977, the group realized that it might be easier to port the entire Unix operating system itself rather than trying to port all of the libraries.

The first Unix port was to the Interdata 8/32, a 32-bit minicomputer. In 1978, the operating system was ported to Digital's new VAX minicomputer. Although Unix remained very much an experimental operating system, it had become popular at many universities and was already being marketed by several companies. Unix was suddenly more than just a research curiosity.

2.1.2.1 Unix escapes AT&T

Indeed, as early as 1973, there were more than 16 different AT&T or Western Electric sites outside Bell Labs running the operating system. Unix soon spread even further. Thompson and Ritchie presented a paper on the operating system at the ACM Symposium on Operating System Principles (SOSP) at Purdue University in November 1973. Within a matter of months, sites around the world had obtained and installed copies of the system. Even though AT&T was forbidden under the terms of its 1956 Consent Decree with the U.S. federal government from advertising, marketing, or supporting computer software, demand for Unix steadily rose. By 1977, more than 500 sites were running the operating system, 125 of them at universities in the U.S. and in more than 10 foreign countries. That year also saw the first commercial support for Unix, then at Version 6.

At most sites, and especially at universities, the typical Unix environment was much like that inside Bell Labs: the machines were in well-equipped labs with restricted physical access. The people who made extensive use of the machines typically had long-term access and usually made significant modifications to the operating system and its utilities to provide additional functionality. They did not need to worry about security on the system because only authorized individuals had access to the machines. In fact, implementing security mechanisms often hindered the development of utilities and customization of the software. One of the authors worked in two such labs in the early 1980s; at one location, a password on the root account was viewed as an annoyance because everyone who could get to the machine was authorized to use it as the superuser!

This environment was perhaps best typified by the development at the University of California at Berkeley. Like other schools, Berkeley had paid $400 for a tape that included the complete source code to the operating system. Instead of merely running Unix, two of Berkeley's bright graduate students, Bill Joy and Chuck Haley, started making significant modifications. In 1978, Joy sent out 30 copies of the "Berkeley Software Distribution (BSD)," a collection of programs and modifications to the Unix system. The charge: $50 for media and postage.[2]

[2] For more information about the history of Berkeley Unix, see "Twenty Years of Berkeley Unix: From AT&T-Owned to Freely Redistributable," by Marshall Kirk McKusick, in Open Sources: Voices from the Open Source Revolution (O'Reilly & Associates, January 1999). McKusick's essay is also available online at http://www.oreilly.com/catalog/opensources/book/kirkmck.html.

Over the next six years, in an effort funded by ARPA, the so-called BSD Unix grew into an operating system of its own that offered significant improvements over AT&T's. For example, a programmer using BSD Unix could switch between multiple programs running at the same time. AT&T's Unix limited file names to 14 characters, but Berkeley's allowed names of up to 255 characters. Perhaps the most important of the Berkeley improvements, however, was in the area of networking software, which made it easy to connect Unix computers to local area networks (LANs). For all of these reasons, the Berkeley version of Unix became very popular with the research and academic communities.
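The ability to switch between running programs survives today as shell job control, which BSD systems provided through Berkeley's C shell. In a hypothetical session (the commands shown are standard, but the session itself is invented), a user might suspend the editor with Ctrl-Z, start a compile, suspend it too, and then hop between the two:

    % vi chapter2.txt
    ^Z
    Stopped
    % make
    ^Z
    Stopped
    % jobs
    [1]  Stopped    vi chapter2.txt
    [2]  Stopped    make
    % fg %1

The jobs command lists the suspended programs, and fg brings a chosen one back to the foreground.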

2.1.2.2 Unix goes commercial

At about the same time, AT&T was freed from the restrictions that had kept it from developing and marketing computer products, as a result of the court-ordered divestiture of the phone company. Executives realized that they had a strong potential product in Unix, and they set about developing it into a more polished commercial product. This led to an interesting change in the numbering of the BSD releases.

The version of Berkeley Unix that followed the 4.1 release had so many changes from the original AT&T operating system that it should have been numbered 5.0. However, by the time that the next Berkeley Software Distribution was ready to be released, friction was growing between the developers at Berkeley and the management of AT&T—the company that owned the Unix trademark and rights to the operating system. As Unix grew in popularity, AT&T executives became increasingly concerned that the popularity of Berkeley Unix would soon result in AT&T's losing control of a valuable property right. To retain control of Unix, AT&T formed the Unix Support Group (USG) to continue development and marketing of the Unix operating system. USG proceeded to christen a new version of Unix as AT&T System V and to declare it the new "standard"; AT&T representatives referred to BSD Unix as nonstandard and incompatible.

Under Berkeley's license with AT&T, the university was free to release updates to existing AT&T Unix customers. But if Berkeley had decided to call its new version of Unix "5.0," it would have needed to renegotiate its licensing agreement to distribute the software to other universities and companies. Thus, Berkeley released BSD 4.2. By calling the new release of the operating system "4.2," they pretended that the system was simply a minor update.

2.1.2.3 The Unix Wars: Why Berkeley 4.2 over System V

As interest in Unix grew, the industry was beset by two competing versions of Unix: AT&T's "standard" System V and the technically superior Berkeley 4.2. The biggest non-university proponent of Berkeley Unix was Sun Microsystems. Founded in part by graduates from Berkeley's computer science program, Sun's SunOS operating system was, for all practical purposes, a patched-up version of BSD 4.1c. Many people believe that Sun's adoption of Berkeley Unix was one of the factors responsible for the early success of the company.

Two other companies that based their version of Unix on BSD 4.2 were Digital Equipment Corporation, which sold a Unix variant called Ultrix, and NeXT Computer, which developed and sold a Unix workstation based on the BSD 4.2 utilities and the "Mach" kernel developed at Carnegie Mellon University.[3]

[3] Mach was a Department of Defense-funded project that developed a message-passing microkernel that could support various higher-level operating systems. It was a success in that regard; NeXT's operating system was one of the most visible applications of the kernel (but not the only one).

As other companies entered the Unix marketplace, they faced the question of which version of Unix to adopt. On the one hand, there was Berkeley Unix, which was preferred by academics and developers, but which was "unsupported" and was frighteningly similar to the operating system used by Sun, soon to become the market leader. On the other hand, there was AT&T System V Unix, which AT&T, the owner of Unix, was proclaiming as the operating system "standard." As a result, most computer manufacturers that developed Unix in the mid- to late-1980s—including Data General, IBM, Hewlett-Packard, and Silicon Graphics—adopted System V as their standard.[4] A few tried to do both, coming out with systems that had dual "universes." A third version of Unix, called Xenix, was developed by Microsoft in the early 1980s and licensed to the Santa Cruz Operation (SCO). Xenix was based on AT&T's older System III operating system, although Microsoft and SCO had updated it throughout the 1980s, adopting some of the newer Unix features but not others.

[4] It has been speculated that the adoption of System V as a base had more to do with these companies' attempts to differentiate their products from Sun Microsystems' offerings than anything to do with technical superiority. The resulting confusion of different versions of Unix is arguably one of the major reasons that Windows was able to gain such a large market share in the late 1980s and early 1990s: there was only one version of Windows, while the world of Unix was a mess.

As Unix started to move from the technical to the commercial markets in the late 1980s, this conflict of operating system versions was beginning to cause problems for all vendors. Commercial customers wanted a standard version of Unix, hoping that it could cut training costs and guarantee software portability across computers made by different vendors. And the nascent Unix applications market wanted a standard version, believing that this would make it easier for them to support multiple platforms, as well as compete with the growing PC-based market.

The first two versions of Unix to merge were Xenix and AT&T's System V. The resulting version, Unix System V/386 Release 3.2, incorporated all the functionality of traditional Unix System V and Xenix. It was released in August 1988 for 80386-based computers.

2.1.2.4 Unix Wars 2: SVR4 versus OSF/1

In the spring of 1988, AT&T and Sun Microsystems signed a joint development agreement to merge the two versions of Unix. The new version of Unix, System V Release 4 (SVR4), was to have the best features of System V and Berkeley Unix and be compatible with programs written for both. Sun proclaimed that it would abandon its SunOS operating system and move its entire user base over to its own version of the new operating system, which it would call Solaris.[5]

[5] Some documentation labeled the combined versions of SunOS and AT&T System V as SunOS 5.0, and used the name Solaris to designate SunOS 5.0 with the addition of OpenWindows and other applications.

The rest of the Unix industry felt left out and threatened by the Sun/AT&T announcement. Companies including IBM and Hewlett-Packard worried that because they were not a part of the SVR4 development effort, they would be at a disadvantage when the new operating system was finally released. In May 1988, seven of the industry's Unix leaders—Apollo Computer, Digital Equipment Corporation, Hewlett-Packard, IBM, and three major European computer manufacturers—announced the formation of the Open Software Foundation (OSF).

The goal of OSF was to wrest control of Unix away from AT&T and put it in the hands of a not-for-profit industry coalition, which would be chartered with shepherding the future development of Unix and making it available to all under uniform licensing terms. OSF decided to base its version of Unix on IBM's implementation, then moved to the Mach kernel from Carnegie Mellon University, and an assortment of Unix libraries and utilities from HP, IBM, and Digital. The result of that effort was neither widely adopted nor embraced by all of the participants. The OSF operating system (OSF/1) was late in coming, so some companies built their own (e.g., IBM's AIX). Others adopted SVR4 after it was released, in part because it was available, and in part because AT&T and Sun went their separate ways—thus ending the threat against which OSF had rallied. Eventually, after producing the Distributed Computing Environment (DCE) standard, OSF merged with a standards group named X/Open to form "The Open Group"—an industry standards organization that includes Sun and HP as members.

As a result of the large number of Unix variants, many organizations took part in a standardization effort to create a unified Unix standard. The resulting set of standards, named POSIX, was originally initiated by the IEEE and was later also adopted as ISO/IEC 9945. POSIX created a standard interface for Unix programmers and users. Eventually, the same interface was adopted for the VMS and Windows operating systems, which made it easier to port programs among these three systems.
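POSIX covers not only the C programming interface but also the behavior of the shell and the core utilities. As a small sketch of what conformance means in practice, the following script uses only constructs and utilities specified by the standard, so it should behave the same on any conforming system (the *.c files it processes are simply whatever happens to be in the current directory):

    #!/bin/sh
    # Report the length of each C source file in the current directory,
    # using only POSIX shell constructs and utilities (test, wc, printf).
    for f in *.c; do
        [ -f "$f" ] || continue    # skip the literal "*.c" when nothing matches
        printf '%s: %s lines\n' "$f" "$(wc -l < "$f")"
    done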

As of 1996, when the second edition of this book was published, the Unix wars were far from settled, but they were much less important than they had seemed in the early 1990s. In 1993, AT&T sold Unix System Laboratories (USL) to Novell, having succeeded in making SVR4 an industry standard, but having failed to make significant inroads against Microsoft's Windows operating system on the corporate desktop. Novell then transferred the Unix trademark to the X/Open Consortium, which grants use of the name to systems that pass its "Spec 1170" test suite. Novell subsequently sold ownership of the Unix source code to SCO in 1995, effectively disbanding USL.

2.1.3 Free Unix

Although Unix was a powerful force in academia in the 1980s, there was a catch: the underlying operating system belonged to AT&T, and AT&T did not want the source code to be freely distributed. Access to the source code had to be controlled. Universities that wished to use the operating system as a "textbook example" for their operating system courses needed to have their students sign Non-Disclosure Agreements. Many universities were unable to distribute the results of their research to other institutions because they did not own the copyright on their underlying system.

2.1.3.1 FSF and GNU

Richard Stallman was a master programmer who had come to MIT in the early 1970s and never left. He had been a programmer with the MIT Artificial Intelligence Laboratory's Lisp Machine Project and had been tremendously upset when the companies that were founded to commercialize the research adopted rules prohibiting the free sharing of software. Stallman devoted the better part of five years to "punishing" one of the companies by re-implementing their code and giving it to the other. In 1983 he decided to give up on that project and instead create a new community of people who shared software.

Stallman realized that if he wanted a large community of people sharing software, he couldn't base it on specialty hardware that ran only Lisp and was manufactured by only a few companies. So instead, he decided to base his new software community on Unix, a powerful operating system that looked like it had a future. He called his project GNU, a recursive acronym meaning "GNU's Not Unix!"

The first program that Stallman wrote for GNU was Emacs, a powerful programmer's text editor. Stallman had written the first version of Emacs in the mid-1970s. Back then, it was a set of Editing MACroS that ran on the Incompatible Timesharing System (ITS) at the MIT AI Lab. When Stallman started work on GNU Emacs, there were already several versions of Emacs available for Unix, but all of them were commercial products—none of them were "free." To Stallman, being "free" wasn't simply a measure of price, it was also a measure of freedom. Being free meant that he was free to inspect and make changes to the source code, and that he was free to share copies of the program with his friends. He wanted free software—as in free speech, not free beer.

By 1985 GNU Emacs had grown to the point that it could be used readily by people other than Stallman. Stallman next started working on a free C compiler, gcc. Both of these programs were distributed under Stallman's GNU General Public License (GPL). This license gave developers the right to distribute the source code and to make their own modifications, provided that all future modifications were released in source code form and under the same license restrictions. That same year, Stallman founded the Free Software Foundation, a non-profit foundation that solicited donations, and used them to hire programmers who would write freely redistributable software.

2.1.3.2 Minix

At roughly the same time that Stallman started the GNU project, professor Andrew S. Tanenbaum decided to create his own implementation of the Unix operating system to be used in teaching and research. Because all of the code would be original, he would be free to publish the source code in his textbook and distribute working operating systems without paying royalties to AT&T. The system, Minix, was designed around the Intel processors of the IBM PC AT clones on which it ran. The project resulted in a stable, well-documented software platform and an excellent operating system textbook. However, efficiency was not a design criterion for Minix; that limitation, coupled with the copyright issues associated with the textbook, meant that Minix did not turn out to be a good choice for widespread, everyday use.

2.1.3.3 Xinu

Another free operating system, Xinu, was developed by Purdue professor Douglas E. Comer while at Bell Labs in 1978. Xinu was specifically designed for embedded systems and was not intended to be similar to Unix; rather, it was intended for research and teaching, goals similar to those of Minix (although it predated Minix by several years).

Xinu doesn't have the same system call interface as Unix, nor does it have the same internal structure or even the same abstractions (e.g., the notion of process). Professor Comer had not read Unix source code before writing Xinu. The only things he used from Unix were a C compiler and a few names (e.g., Xinu has read, write, putc, and getc calls, but the semantics and implementations differ from those of their Unix counterparts).

Xinu resulted in a well-documented code base and a series of operating system textbooks that have been widely used in academia. Xinu itself has been used in a number of research projects, and several companies (e.g., Lexmark) have adopted Xinu as the embedded operating system in their printers.

However, Xinu Is Not Unix.

2.1.3.4 Linux

In 1991, a Finnish computer science student named Linus Torvalds decided to create a free version of the Unix operating system that would be better suited to everyday use. Starting with the Minix code set, Torvalds gradually re-implemented the kernel and filesystem piece by piece until he had a new system that contained none of Tanenbaum's original code. Torvalds named the resulting system "Linux" and decided to license and distribute it under Stallman's GPL. By combining his system with other freely available tools, notably the C compiler and editor developed by the Free Software Foundation's GNU project and the X Consortium's window server, Torvalds was able to create an entire working operating system. A multitude of contributors continue to work on Linux to this day.

2.1.3.5 NetBSD, FreeBSD, and OpenBSD

The Berkeley programmers were not unaware of the problems caused by the AT&T code within their operating system. For years they had been in the position of only being able to share the results of their research with companies and universities that had AT&T source code licenses. In the early 1980s such licenses were fairly easy to get, but as the 1980s progressed the cost of the licenses became prohibitive.

In 1988 the Berkeley Computer Systems Research Group (CSRG) started on a project to eliminate all AT&T code from their operating system. The first result of their effort was called Networking Release 1. First available in June 1989, the release consisted of Berkeley's TCP/IP implementation and the related utilities. It was distributed on tape for a cost of $1,000, although anyone who purchased it could do anything that he wanted with the code, provided that the original copyright notice was preserved. Several large sites put the code up for anonymous FTP; the Berkeley code rapidly became the base of many TCP/IP implementations throughout the industry.

Work at CSRG continued on the creation of an entire operating system that was free of AT&T code. The group asked for volunteers who would reimplement existing Unix utilities by referencing only the manual pages. An interim release named 4.3BSD-Reno occurred in early 1990; a second interim release, Networking Release 2, occurred in June 1991. This release was a complete operating system except for six kernel files that contained AT&T code and thus could not be included. In the fall of 1991, Bill Jolitz wrote those files for the Intel processor and created a working operating system. Jolitz called the system 386/BSD. Within a few months, a group of volunteers committed to maintaining and expanding the system formed and christened their effort "NetBSD."

The NetBSD project soon splintered. Some of the members decided that the project's primary goal should be to support as many different platforms as possible and should continue to do operating system research. But another group of developers thought that they should devote their resources to making the system run as well as possible on the Intel 386 platform and making the system easier to use. This second group split off from the first and started the FreeBSD project.

A few years later, a second splinter group broke off from the NetBSD project. This group decided that security and reliability were not getting the attention they deserved, and focused on careful review of the source code to identify potential problems. New code and drivers were not adopted until they had been thoroughly vetted for quality. This third group adopted the name "OpenBSD."

2.1.3.6 Businesses adopt Unix

As a result of monopolistic pricing on the part of Microsoft and the security and elegance of the Unix operating systems, many businesses developed an interest in adopting a Unix base for some commercial products. A number of network appliance vendors found the stability and security of the OpenBSD platform to be appealing, and they adopted it for their projects. Other commercial users, especially many early web-hosting firms, found the stability and support options offered by BSDI (described in the next section) to be attractive, and they adopted BSD/OS. Several universities also adopted BSD/OS because of favorable licensing terms for students and faculty when coupled with the support options.

Meanwhile, individual hobbyists and students were coming onto the scene. For a variety of reasons (not all of which we pretend to understand), Linux became extremely popular among individuals seeking an alternative OS for their PCs. Perhaps the GPL was appealing, or perhaps it was the vast array of supported platforms. The personae of Linus Torvalds and "Tux," Linux's emblematic penguin, may also have had something to do with it. Interestingly, OpenBSD and BSD/OS were both more secure and more stable, all of the BSDs were better documented than Linux at the time, and most of the BSD implementations performed better under heavy loads.[6]

[6] As of mid-2002, there was still some truth to these statements, although by virtue of its larger user base, Linux has generated considerably more user-contributed documentation, particularly of the "HOWTO" variety. The earlier adoption of Linux has also led to much greater availability of drivers for both common and arcane hardware.

One key influence emerged in the mid-to-late 1990s, when researchers at various national laboratories, universities, and NASA began to experiment with cluster computing. High-end supercomputers were getting more and more expensive to produce and run, so an alternative was needed. With cluster computing, scores (or hundreds) of commodity PCs were purchased, placed in racks, and connected with high-speed networks. Instead of running one program really fast on one computer, big problems were broken into manageable chunks that were run in parallel on the racked PCs. This approach, although not appropriate for all problems, often worked better than using high-end supercomputers. Furthermore, it was often several orders of magnitude less costly. One of the first working systems of this type, named Beowulf, was based on Linux. Because of the code sharing and mutual development of the supercomputing community, Linux quickly spread to other groups around the world wishing to do similar work.

All of this interest, coupled with growing unease with Microsoft's de facto monopoly of the desktop OS market, caught the attention of two companies—IBM and Dell—both of which announced commercial support for Linux. Around the same time, two companies devoted to the Linux operating system—Red Hat and VA Linux—had two of the most successful Initial Public Offerings in the history of the U.S. stock market. Shortly thereafter, HP announced a supported version of Linux for their systems.

Today, many businesses and research laboratories run on Linux. They use Linux to run web servers, mail servers, and, to a lesser extent, as a general desktop computing platform. Instead of purchasing supercomputers, businesses create large Linux clusters that can solve large computing problems via parallel execution. FreeBSD, NetBSD, and OpenBSD are similarly well-suited to these applications, and are also widely used. However, based on anecdotal evidence, Linux appears to have (as of early 2003) a larger installed base of users than any of the other systems. Based on announced commercial support, including ventures by Sun Microsystems, Linux seems better poised to grow in the marketplace. Nonetheless, because of issues of security and performance (at least), we do not expect the *BSD variants to fade from the scene; as long as the *BSD camps continue in their separate existences, however, it does seem unlikely that they will gain on Linux's market share.

2.1.4 Second-Generation Commercial Unix Systems

Shortly after the release of Networking Release 2, a number of highly regarded Unix developers, including some of the members of CSRG, formed a company named Berkeley Software Design, Inc. (BSDI). Following the lead of Jolitz, they took the Networking Release 2 tape, wrote their own version of the "six missing files," and started to sell a commercial version of the BSD system that they called BSD/OS. They, and the University of California, were soon sued by Unix System Laboratories for theft of trade secrets (the inclusion or derivation of AT&T Unix code). The legal proceedings dragged on, in various forms, until January 1994, when the court dismissed the lawsuit, finding that the wide availability of Unix meant that AT&T could no longer claim it was a trade secret.

Following the settlement of the lawsuit, the University of California at Berkeley released two new operating systems: 4.4BSD-Encumbered, a version of the Unix operating system that required the recipient to have a full USL source-code license, and 4.4BSD-Lite, a version that was free of all AT&T code. Many parts of 4.4BSD-Lite were incorporated into BSD/OS, NetBSD, and FreeBSD (OpenBSD had not yet split from the NetBSD project when 4.4BSD was released). This release was in June 1994, and a final bug fix edition, 4.4BSD-Lite Release 2, was released in June 1995. Following Release 2, the CSRG was disbanded.

BSD/OS was widely used by organizations that wanted a high-performance version of the BSD operating system that was commercially supported. The system ended up at many ISPs and in a significant number of network firewall systems, VAR systems, and academic research labs. But BSDI was never able to achieve the growth that its business model required. Following an abortive attempt to sell bundled hardware and software systems, the company was sold to Wind River Systems.

In 1996, Apple Computer bought the remains of NeXT Computer, by then renamed NeXT Software, for $400 million. Apple purchased NeXT to acquire ownership of the NeXTSTEP operating system, which Apple decided to use as a replacement for the company's aging "System 7" Macintosh operating system. In 2001, Apple completed its integration of NeXTSTEP and introduced Mac OS X. This operating system was based on a combination of NeXT's Mach implementation, BSD 4.3, and Apple's Mac OS 9. The result was a stable OS for PowerPC platforms with all of the advantages of Unix and a beautiful Apple-designed interface. Because of the installed base of Apple machines, and the success of Apple's new products such as the iMac and the Titanium PowerBook, Apple's Mac OS X was probably the most widely installed version of Unix in the world by the middle of 2002.

2.1.5 What the Future Holds

Despite the lack of unification, the number of Unix systems continues to grow. As of early 2003, Unix runs on tens of millions of computers throughout the world. Versions of Unix run on nearly every kind of computer in existence, from handheld computers to large supercomputers and superclusters. Because it is easily adapted to new kinds of computers, Unix is an operating system of choice for many of today's high-performance microprocessors. Because versions of the operating system's source code are readily available to educational institutions, Unix has also become an operating system of choice for high-end educational computing at many universities and colleges. It is also popular in the computer science research community because computer scientists like the ability to modify the tools they use to suit their own needs.

Unix is also being rapidly adopted on new kinds of computing platforms. Versions of Linux are available for handheld computers such as the Compaq iPAQ, and Sharp's Zaurus uses Linux as its only operating system.

There are several versions of the Linux and BSD operating systems that will boot off a single floppy. These versions, including Trinux, PicoBSD, and ClosedBSD, are designed for applications in which tight security is required, such as forensics, system recovery, and network appliances.

Finally, a growing number of countries seem to be adopting the Linux operating system. These countries, including China and Germany, see Linux as a potentially lower-cost and more secure alternative to software sold by Microsoft Corporation.

Now that you've seen a snapshot of the history of these systems, we'd like to standardize some terminology. In the rest of the book, we'll use "Unix" to mean "the extended family of Unix and Unix-like systems including Linux." We'll also use the term "vendors" as a shorthand for "all the commercial firms providing some version of Unix, plus all the groups and organizations providing coordinated releases of some version of *BSD or Linux."
