Survey of Operating Systems:
§ 2: Overview

Instructor: M.S. Schmalz


This section reviews the history and underlying concepts of operating system development.

Discussion is based on information compiled from texts by Tanenbaum, Peterson, and Crowley, several of which are listed in the Optional Class Materials section of the main Web page for this course. We begin our historical discussion of operating systems with a perspective on generations of computer technology and early efforts at OS development in Section 2.1.

Reading Assignments and Exercises

Computer hardware technology has been classified by Tanenbaum [1] into the following generational categories:

Generation 1: Vacuum Tubes and Plugboards (1945-1955)

Generation 2: Transistors and Batch Systems (1955-1965)

Generation 3: Integrated Circuits and Multiprogramming (1965-1980)

Generation 4: Personal Computers (1980-1990)

To this taxonomy we would also add:

Generation 5: Networks and Distributed OS (1985-present)

Instead of exactly following Tanenbaum's classification scheme, we present a simpler organization of OS development into early, mid-level, and more sophisticated efforts, as follows.

2.1. Early Operating Systems (1940s).

Recall from the in-class discussion of the history of computers that Charles Babbage and others who developed purely mechanical computers did not have to worry about an operating system. The computational hardware had fixed functionality, and only one computation could run at any given time. Furthermore, programming as we know it did not exist. Thus, there was no operating system to interface the user to the bare machine.

The early electronic computers were developed by John Atanasoff at Iowa State, Howard Aiken at Harvard, Mauchly and Eckert at the University of Pennsylvania, and Zuse in Germany. The Harvard and Penn machines, which were based on vacuum tubes and relays, were complete working machines during the WWII period. In contrast, Atanasoff's machine was used prior to WWII to solve systems of linear equations, and Zuse's machine only worked in part. These machines were programmed with plugboards that encoded a form of machine language. High-level programming languages and operating systems had not yet been invented.

On these early vacuum-tube machines, a programmer would run a program by inserting his plugboard(s) into a rack, then turning on the machine and hoping that none of the vacuum tubes would die during the computation. Most of the computations were numerical, for example, tables of sine, cosine, or logarithm functions. In WWII, these machines were used to calculate artillery trajectories. It is interesting to note that the Generation 1 computers produced results at a slower rate than today's hand-held calculators.

2.2. Growth of Operating System Concepts (1950s)

By the early 1950s, punch-card input had been developed, and there was a wide variety of machinery that supported the punching (both manual and automatic) and reading of card decks. The programmer would take his or her deck to the operator, who would insert it into the reader, and the program would be loaded into the machine. Computation would ensue, without other tasks being performed. This was more flexible than using plugboards, but most programming was done in machine language (ones and zeroes). Otherwise, the procedure was identical to the plugboard machines of the 1940s.

In 1947, the transistor was invented at Bell Laboratories. Within several years, transistorized electronic equipment was being produced in commercial quantities. This led computer designers to speculate on the use of transistor-based circuitry for computation. By 1955, computers were using transistor circuits, which were more reliable and smaller than vacuum tubes, and consumed less power. As a result, a small number of computers were produced and sold commercially, but at enormous cost per machine.

Definition. A job is a program or collection of programs to be run on a computing machine.

Definition. A job queue is a list of waiting jobs.
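To make these two definitions concrete, the following minimal Python sketch implements a job queue as a first-in, first-out list. (The class and field names are our own invention for illustration; they are not drawn from any historical system.)

    from collections import deque

    class Job:
        """A program (or collection of programs) to be run, per the definition above."""
        def __init__(self, name, account, max_runtime_s):
            self.name = name                        # programmer-supplied job name
            self.account = account                  # account to be charged
            self.max_runtime_s = max_runtime_s      # maximum permitted runtime

    job_queue = deque()                             # the list of waiting jobs
    job_queue.append(Job("payroll", "ACCT-042", 300))
    job_queue.append(Job("inventory", "ACCT-117", 600))

    while job_queue:                                # serve jobs in arrival order
        job = job_queue.popleft()
        print("running", job.name, "charged to", job.account)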

The early transistor machines required that a job be submitted well in advance of being run. The operator would run the card deck through the reader, load the job on tape, load the tape into the computer, run the computer, get the printout from the printer, and put the printout in the user's or programmer's mailbox. All this human effort took time that was large in relation to the time spent running each job on the computer. If ancillary software (e.g., a FORTRAN or COBOL compiler) was required, the operator would have to spend more time retrieving the compiler's card deck from a filing cabinet and reading it into the machine, prior to actual computation.

At this time, a few businesses were using computers to track inventory and payroll, as well as to conduct research in process optimization. The business leaders who had paid great sums of money for computers were chagrined that so much human effort was required to initiate and complete jobs, while the expensive machine sat idle. The first solution to this largely economic problem was batch processing.

Definition. A batch processing system is one that processes collections of jobs, one collection (batch) at a time. Processing does not occur in real time; rather, jobs are collected for a period of time before being processed together.

Definition. Off-line processing consists of tasks performed by an ancillary machine not connected to the main computer.

Definition. A scratch tape or scratch disk is a separate tape/disk device or disk partition used for temporary storage, i.e., like a physical scratchpad.

Batch processing used a small, inexpensive machine to input and collect jobs. The jobs were read onto magnetic tape, which implemented a primitive type of job queue. When the reel of tape was full of input, an operator would take the reel to the main computer and load it onto the main computer's tape drive. The main computer would compute the jobs, and write the results to another tape drive. After the batch was computed, the output tape would be taken to another small, inexpensive computer that was connected to the printer or to a card punch. The output would be produced by this third computer. The first and third computers performed off-line processing in input and output mode, respectively. This approach greatly decreased the time operators dedicated to each job.

However, the main computer needed to know what jobs were arriving on its input tape, and what those jobs required. So, a system of punched-card identifiers was developed that served as input to a primitive operating system, which in turn replaced the operator's actions in loading programs, compilers, and data into the computer.

For example, a COBOL program with some input data would have cards arranged in the following partitions (a toy interpreter for this card sequence is sketched after the list):

  1. $JOB Card: Specified maximum runtime, account to be charged, and programmer's name

  2. $COBOL Card: Instructed computer to load the COBOL compiler from a system tape drive connected to the main computer

  3. Program: A stack of punched cards that contained COBOL instructions comprising a program, which was compiled and written in the form of object code to a scratch tape

  4. $LOAD Card: Instructed computer to load the compiled program into memory from scratch tape

  5. $RUN Card: Instructed computer to run the program, with data (if any) that followed the $RUN card

  6. Data: If present, data were encoded on one or more punched cards and were read in on an as-needed basis. The primitive operating system also controlled the tape drive that read in the data.

  7. $END Card: Specified the end of a job. This was required to keep the computer from running the scratch tape(s) and output tape(s) when there was no computation being performed.
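To illustrate how the control cards partition a deck, here is a toy Python interpreter that walks a deck represented as a list of strings. The card names follow the list above; the card formats, messages, and function names are invented for illustration only.

    # A toy interpreter for the control-card sequence above.
    deck = [
        "$JOB time=60,acct=ACCT-042,name=SMITH",
        "$COBOL",
        "  ... COBOL source cards ...",
        "$LOAD",
        "$RUN",
        "  ... data cards ...",
        "$END",
    ]

    def run_deck(deck):
        source, data, mode = [], [], None
        for card in deck:
            if card.startswith("$JOB"):
                print("job accepted:", card[5:])
            elif card.startswith("$COBOL"):
                mode = "source"          # subsequent cards are program text
            elif card.startswith("$LOAD"):
                print("compiling", len(source), "source cards to scratch tape")
                mode = None
            elif card.startswith("$RUN"):
                mode = "data"            # subsequent cards are input data
            elif card.startswith("$END"):
                print("running program with", len(data), "data cards")
            elif mode == "source":
                source.append(card)
            elif mode == "data":
                data.append(card)

    run_deck(deck)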

These sequences of job control cards were forerunners of Job Control Language (JCL) and command-based operating systems. Since large second-generation computers were used primarily for scientific calculations, they were largely programmed in FORTRAN. Examples of early operating systems were the FORTRAN Monitor System (FMS) and IBSYS, which was IBM's operating system for its scientific computer, the Model 7094.

2.3. IBM System 360/OS and 370 Operating Systems (1960s, 1970s)

Not only was IBM firmly into the computer business by the early 1960s, but IBM had many competitors since numerous large and small companies had discovered there was money to be made in computing machinery. Unfortunately, computer manufacturers had developed a divided product line, with one set of computers for business (e.g., tabulating, inventory, payroll, etc.) and another collection of machines for scientific work (e.g., matrix inversion, solution of differential equations). The scientific machines had high-performance arithmetic logic units (ALUs), whereas business machines were better suited to processing large amounts of data using integer arithmetic.

Maintenance of two separate but incompatible product lines was prohibitively expensive. Also, manufacturers did not usually support customers who initially bought a small computer then wanted to move their programs to a bigger machine. Instead, the users had to rewrite all their programs to work on the larger machine. Both of these practices (called application incompatibility and version incompatibility) were terribly inefficient and contributed to the relatively slow adoption of computers by medium-sized businesses.

Definition. A multi-purpose computer can perform many different types of computations, for example, business and scientific applications.

Definition. An upward compatible software and hardware paradigm ensures that a program written for a smaller (or earlier) machine in a product line will run on a larger (subsequently developed) computer in the same product line.

In the early 1960s, IBM proposed to solve such problems with one family of computers that were to be multi-purpose and upward compatible (i.e., a user could in principle run his program on any model). This revolutionary concept was instantiated in the System/360 series of computers, which ranged from small business computers to large scientific machines (eventually embracing a primitive type of parallel, multiprocessor computer). The computing hardware varied only in price, features, and performance, and was amenable to either business or scientific computing needs. The 360 was among the first computers to package transistors in small hybrid modules, forerunners of the integrated circuit (pioneered at Fairchild Semiconductor and Texas Instruments in the late 1950s). The idea of a family of software-compatible machines was quickly adopted by all the major computer manufacturers. System/360 machines were so well designed and ruggedly built that many of them are still in use today, mainly in countries outside the United States.

Unfortunately, System/360 attempted to be all things to all users, and its OS thus grew into millions of lines of assembly code that was difficult to maintain, enhance, or customize for application-specific needs. Not only did the OS have to run well on all the different types of machines and for different programs, but it had to be efficient for each of these applications. Although noble in concept, the System/360 OS (unsurprisingly called OS/360) was constantly being modified to suit customer demands and accommodate new features. These new releases introduced further bugs. In response to customer complaints arising from the effects of these bugs, IBM produced a steady stream of software patches, each designed to correct a given bug (often without much regard for the patch's other effects). Since the patches were hastily written, new bugs were often introduced, in addition to the undesirable side effects of each new system release or patch.

Given its size, complexity, and problems, it is a wonder that OS/360 survived as long as it did (in the mainstream of American computing, until the mid 1970s). The IBM System 370 family of computers capitalized on the new technology of large-scale integration, and took IBM's large-scale operating system strategies well into the late 1970s, when several new lines of machines (4300 and 3090 series) were developed, the latter of which persists to the present day.

Despite its shortcomings, OS/360 satisfied most of IBM's customers, produced vast amounts of revenue for IBM, and introduced a revolutionary new concept called multiprogramming. OS/370 greatly extended the functionality of OS/360 and continued to grow into an ever-larger body of software that facilitated enhanced efficiency, multiprogramming, virtual memory, file sharing, computer security, etc. We next discuss the key innovation of multiprogramming.

2.3.1. Multiprogramming

Recall our discussion of early operating systems in Section 2.2. There, when the main computer was loading a job from its input tape or loading a compiled program from scratch tape, the CPU and arithmetic unit sat idle. Following the same line of reasoning as in the development of the first operating systems, computer designers produced an operating system concept that could make better use of the CPU while I/O functions were being performed by the remainder of the machine. This was especially important in business computing, where 80 to 90 percent of the operations in a given program are typically I/O operations.

Multiprogramming provided a solution to the idle-CPU problem by casting computation in terms of a process cycle, which has the following process states (a small state-machine sketch in code follows the list):

Load: A given process P is loaded into memory from secondary storage such as tape or disk.

Run: All or a part of P executes, producing intermediate or final results.

Wait (or Block): The execution of P is suspended and P is placed on a wait queue, which is a job queue in memory where waiting jobs are temporarily stored (similar to being put on hold during a telephone conversation).

Resume: P is taken out of the wait queue and resumes its execution on the CPU exactly where it left off when it was put into the wait queue (also called the wait state).

End: Execution of P terminates (normally or abnormally) and the results of P are written to an output device. The memory occupied by P is cleared for another process.
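The process cycle can be viewed as a small state machine. The following Python sketch encodes the states listed above; the transition table itself is our own illustration (a real OS would attach code and data structures to each state):

    # The process cycle as a simple state machine.
    TRANSITIONS = {
        "LOAD":   {"RUN"},            # a loaded process is dispatched to the CPU
        "RUN":    {"WAIT", "END"},    # a running process blocks or finishes
        "WAIT":   {"RESUME"},         # a blocked process is eventually resumed
        "RESUME": {"RUN"},            # resumption puts it back on the CPU
        "END":    set(),              # terminal state: memory is reclaimed
    }

    def advance(state, new_state):
        if new_state not in TRANSITIONS[state]:
            raise ValueError("illegal transition %s -> %s" % (state, new_state))
        return new_state

    s = "LOAD"
    for nxt in ["RUN", "WAIT", "RESUME", "RUN", "END"]:
        s = advance(s, nxt)
        print("process is now in state", s)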

In implementation, memory was partitioned into several pieces, each holding a different job. While one job was running on the CPU, another might be using the I/O processor to output results. This ensured that the CPU was utilized nearly all the time. Special hardware and software were required to protect each job from having its memory overwritten or read by other jobs, but these problems were worked out in later versions of OS/360.

Definition. Spooling (an acronym for Simultaneous Peripheral Operation On-Line) is the concurrent loading of jobs from an input device onto disk or into memory while other jobs execute.
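A minimal sketch of the spooling idea, using Python threads as stand-ins for the card reader and the CPU (the job names and timings are invented for illustration):

    import queue
    import threading
    import time

    spool = queue.Queue()                  # stands in for the system disk

    def card_reader():                     # slow peripheral input proceeds...
        for i in range(3):
            time.sleep(0.1)                # reading a card deck takes time
            spool.put("job-%d" % i)

    def cpu():                             # ...concurrently with computation
        for _ in range(3):
            job = spool.get()              # block until the next job is spooled
            print("running", job)

    reader = threading.Thread(target=card_reader)
    worker = threading.Thread(target=cpu)
    reader.start(); worker.start()
    reader.join(); worker.join()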

Another technological development that made computers more efficient was the introduction of magnetic disks as storage media. Originally the size of a washing machine and storing a whopping 10MB of data, disks were seen as a way to eliminate the costly overhead (as well as the mistakes) associated with human operators who had to carry and mount/demount magnetic tapes on prespecified tape drives. The availability of disk storage made tape-based batch queues obsolete, by allowing a program to be transferred directly to the system disk while it was being read in from the card reader. This application of spooling at the input, and the simultaneous introduction of spooling for printers, further increased computing efficiency by reducing the need for operators to mount tapes. Similarly, the need for separate computers to assemble job queues from cards onto tape also disappeared, further reducing the cost of computing. Less operator time was spent on each job, thereby improving response time.

2.3.2. Time Sharing

The economic benefits of faster response time, together with the development of faster CPUs and individual computer terminals that had near-real-time response, introduced the possibility of a variation on multiprogramming that allowed multiple users to access a given machine. This practice, called time sharing, is based upon the following principle: each user program or task is divided into partitions (slices) of fixed duration or fixed memory size.

For example, let two programs P1 and P2 be divided into slices P1(1), P1(2), ..., P1(N1), and P2(1), P2(2), ..., P2(N2). Given a fast CPU, a time-sharing system can interleave the process segments so that P1 and P2 have alternate access to the CPU, for example:

P1(1),P2(1),P1(2),P2(2),...

until the shorter of the two processes completes. It is easy to see how this could be combined with the I/O blocking strategy in multiprogramming (Section 2.3.1) to yield a highly efficient computer system that gives the illusion of multiple concurrent processes.
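The interleaving above is easy to simulate. In the following Python sketch, slices are just labels (a real system would execute code during each slice); note how the surviving process runs back-to-back once the shorter one completes:

    from itertools import zip_longest

    P1 = ["P1(%d)" % i for i in range(1, 4)]    # N1 = 3 slices
    P2 = ["P2(%d)" % i for i in range(1, 6)]    # N2 = 5 slices

    schedule = []
    for s1, s2 in zip_longest(P1, P2):          # alternate access to the CPU
        if s1:
            schedule.append(s1)
        if s2:
            schedule.append(s2)

    print(", ".join(schedule))
    # -> P1(1), P2(1), P1(2), P2(2), P1(3), P2(3), P2(4), P2(5)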

The chief problem with time sharing is the underlying assumption that only a few of the users will be demanding CPU resources at any given time. If many users demand CPU resources, then the time between successive segments of a given program or process increases (e.g., the time between P1(1) and P1(2) in the preceding example), and CPU delays can become prohibitive. A solution to this problem, called parallel processing, employs a multiprocessor computer to run each user program or segment of a program on one or more processors, thereby achieving true concurrency. This approach will be discussed briefly in Section 2.4.

Another problem with time-sharing is the assumption that most interactive users will request jobs with short runtimes, such as syntax debugging or compilation of a 600-line program, versus sorting of a 10-billion record database. As a result, when the users of a time-sharing system all request large jobs, the system bogs down.

The first realistic time-sharing system was CTSS, developed at MIT on a customized IBM 7094 [2]. This project was undertaken with the best of intentions, but did not become practical until memory protection hardware was included later in the third generation of computer hardware development.

However, the success of CTSS on the modified 7094 motivated development of a next-generation time-sharing system called MULTICS (from MULTiplexed Information and Computing Service), a general-purpose operating system developed by the Computer System Research group at M.I.T. Project MAC, in cooperation with Honeywell Information Systems (formerly the General Electric Company Computer Department) and Bell Telephone Laboratories. This system was designed to be a "computer utility", where users could request computer services on an as-needed basis, within a primitive interactive environment. MULTICS was implemented initially on the Honeywell 645 computer system, an enhanced relative of the Honeywell 635, and was then ported to a variety of Honeywell machines, including the Honeywell 6180. It embodied many capabilities substantially in advance of those provided by other operating systems, particularly in the areas of security, continuous operation, virtual memory, shareability of programs and data, reliability, and control. It also allowed different programming and human interfaces to co-exist within a single system.

In summary, the revolutionary concepts that MULTICS attempted to implement [3,4] were:

Security: A program could have its code or data stored in memory made invisible (and inaccessible) to other programs being run by the same user or by other users. This led to the development of modern computer security systems that protect executable and data partitions from unauthorized intrusion.

Continuous Operation: Instead of being started or restarted every time a new batch or job was run, a machine supporting MULTICS would run continuously, similar to an electric generating plant (electric utility) providing power service to its customers. This idea has been extended in subsequent development to include fault tolerance, where a computer keeps running even if certain key components fail or are swapped out.

Virtual Memory: Prior to MULTICS, computer memory was confined to physical memory, i.e., the memory that was physically installed in the machine. Thus, if a computer had 256KB of memory (a large machine, at that time!), then it could only hold 256KB of programs and data in its memory. MULTICS introduced an implementation of paging programs and data from memory to disk and back. For example, if a program was not running in memory, valuable memory could be freed up by writing the memory partition (page) in which the program was stored to disk, then using that page for another program. When the original program was needed again (e.g., taken out of wait state), it would be paged back from disk into memory and executed. This allowed a slower disk unit to serve as a sort of extended (or virtual) memory, while all the work was still done in the computer's physical memory. This technique is commonplace today, and is even used in some personal computers; a toy sketch of paging appears after this list of concepts.

Shareability of Programs and Data: Before MULTICS, each computer programmer or user had his or her own data and programs, usually stored on one or more magnetic tapes. There was no way to share data with other users online, other than to fill out a signed permission slip (piece of paper) that authorized the computer operator to mount your data tape for another user to access. This was time-consuming and unreliable. MULTICS exploited the relatively new technology of magnetic disks to make data or programs stored on one user's disk readable to other users, without computer operator intervention. This in part motivated the collaborative culture of computing that we know and enjoy today. MULTICS also provided a primitive type of data security mechanism that allowed a user to specify which other users could read his data. This was the beginning of computer security at the user level.

Reliability and Control: Prior to MULTICS, software ran rather haphazardly on computers, and there was little in the way of reliable commercial products. For example, in my FORTRAN class in 1971, we ran small card decks on an IBM 360/44 (scientific machine), and when an erroneous job control card was read in, the entire machine would crash, along with all programs running on that machine. MULTICS attempted (not too successfully) to remedy this situation by introducing a strategy for crash detection and process rollback that could be thought of as forming the foundation for robust client-server systems today. Also, MULTICS attempted to place more control of the computing process in the user's hands, by automating functions that had previously been the domain of computer operators. MULTICS also tried to support multiple user interfaces, including (in its later years) text and primitive line-drawing graphics, which was a revolutionary concept at the time.
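Returning to the virtual memory concept above: the following toy Python sketch illustrates paging, with an invented two-frame physical memory and FIFO eviction (real systems use more sophisticated replacement policies and rely on hardware support):

    PHYSICAL_FRAMES = 2                    # pretend physical memory holds 2 pages

    resident = []                          # pages currently in physical memory

    def touch(page):
        """Reference `page`, paging it in from disk if necessary."""
        if page in resident:
            print("page %d: hit (already in memory)" % page)
            return
        if len(resident) >= PHYSICAL_FRAMES:
            victim = resident.pop(0)       # FIFO eviction: write victim to disk
            print("page fault: evict page %d, load page %d" % (victim, page))
        else:
            print("page fault: load page %d from disk" % page)
        resident.append(page)

    for p in [0, 1, 0, 2, 3, 0]:           # a sample reference string
        touch(p)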

Although MULTICS introduced many important new ideas into the computer literature, building a reliable MULTICS implementation was much more difficult than anyone had anticipated. General Electric dropped out early on and sold its computer division to Honeywell, and Bell Labs dropped out later in the effort. MULTICS implementations eventually were developed that ran more or less reliably, but were installed at only a few dozen sites.

2.3.3. Minicomputers and UNIX

Part of the reason for the slow adoption of MULTICS was the emergence of the minicomputer, a scaled-down single-user version of a mainframe machine. Digital Equipment Corporation (DEC) introduced the PDP-1 in 1961, which had only 4K of 18-bit words. However, at less than 1/20 the price of a 7094, the PDP-1 sold well. A series of PDP models (all incompatible) followed, culminating in the PDP-11, which was a workhorse machine at many laboratories well into the 1980s.

A computer scientist at Bell Labs by the name of Ken Thompson, who had been on the MULTICS project, developed a single-user version of MULTICS for the PDP-7. This was called the UNiplexed Information and Computing Service (UNICS), which name was quickly changed to UNIX™. Bell Labs then supported its implementation on a PDP-11. Dennis Ritchie then worked with Thompson to rewrite UNIX in a new high-level language called "C", and Bell Labs licensed the result to universities for a very small fee.

The availability of UNIX source code (at that time, a revolutionary development) led to many improvements and variations. The best-known version was developed at University of California at Berkeley, and was called the Berkeley Software Distribution (UNIX/BSD). This version featured improvements such as virtual memory, a faster file system, TCP/IP support and sockets (for network communication), etc. AT&T Bell Labs also developed a version called UNIX System V, and various commercial workstation vendors have their own versions (e.g., Sun Solaris, HP UNIX, Digital UNIX, SGI IRIX, and IBM AIX), which have largely been derived from UNIX/BSD or System V.

UNIX has formed the basis for many operating systems, and improvements in other systems in recent years still often originate in UNIX-based systems. The UNIX system call interface and implementation have been borrowed, to some degree, by most operating systems developed in the last 25 years.

2.4. Concurrency and Parallelism

Digressing briefly before we resume the discussion of personal computer operating systems in Section 2.5, we note that multiprogramming and time sharing led naturally to the issue of concurrency. That is, if segments of different programs could be run adjacent in time and interleaved so that users were deceived into believing that a computer was running all their programs at the same time, why couldn't this be done using multiple processors?

Definition. Concurrent processing involves computing different parts of processes or programs at the same time.

For example, given processes P1, P2, ..., PN with average runtime T seconds, instead of interleaving them to execute in slightly greater time than N · T seconds, why not use N processors to have an average total runtime slightly greater than T seconds? (Here, we say "slightly greater than" because there is some operating system overhead in both the sequential and parallel cases.)

This technique of using multiple processors concurrently is called parallel processing, and has led to many advances in computer science in the late 1980s and 1990s. Briefly stated, parallel processing operates on the assumption that "many hands make light work". Parallel algorithms have been discovered that can sort N data in order log(N) time using order N processors. Similar algorithms have been employed to drastically reduce the computation time of large matrix operations that were previously infeasible on sequential machines.
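The N-processors-versus-N·T argument above can be demonstrated directly. The following Python sketch runs N independent jobs on N worker processes; the sleep call is merely a stand-in for T seconds of real work, and the worker count is an assumption about the host machine:

    import time
    from concurrent.futures import ProcessPoolExecutor

    N = 4                                   # number of jobs and of processors

    def job(i):
        time.sleep(1.0)                     # stand-in for T = 1 second of work
        return i

    if __name__ == "__main__":
        start = time.time()
        with ProcessPoolExecutor(max_workers=N) as pool:
            results = list(pool.map(job, range(N)))
        # Elapsed time is roughly T, not N*T (plus scheduling overhead,
        # as noted above).
        print(results, "elapsed: %.1f s" % (time.time() - start))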

Parallel programming is quite difficult, since one has the added problems of concurrent control of execution, security, and reliability across both space and time. (In sequential computers, time is the primary variable, and there is no real concurrency.) As a result, one has a multi-variable optimization problem that is difficult to solve in the static case (no fluctuations in parameters). In the dynamic case (i.e., fluctuating system parameters), this problem is extremely difficult, and is usually solved only by approximation techniques.

Due to the difficulty of parallel programming, we will not in this course cover computers or operating systems related to this technique in high-performance dedicated machines. Instead, we will concentrate on operating systems for sequential machines and workstations that are connected to networks (e.g., an office network of personal computers). We note in passing that network operating systems (e.g., Linda and MVL) have been developed that use a large array of networked computers (e.g., in a large office building) to support a type of parallel distributed processing. These systems are currently topics of keen research interest but have encountered practical difficulties, not the least of which is getting users' permission to have their computers taken over by a foreign host when the users are away from their respective offices.

2.5. Impact of Personal Computers on OS Technology

Third-generation computer technology was based on integrated circuits that were relatively simple and used small-scale integration (SSI, tens of transistors per chip) or medium-scale integration (MSI, hundreds of transistors per chip). The advent of large-scale integration (LSI), with thousands of transistors per chip, made possible the construction of small machines that became affordable for personal or hobby use. These early computers were, in a sense, throwbacks to early plugboard-programmable devices. They did not have operating systems as such, and were usually programmed in machine code using front-panel toggle switches. Although fun for electronics hobbyists, it was difficult to get useful work done with such machines.

2.5.1. Emergence of PC Operating Systems

In the mid-1970s, the Altair computer dominated the personal market, and the IBM-PC was yet to be built. A young man from a suburb of Seattle (Bill Gates) and his high-school friend (Paul Allen) founded Microsoft, which began by writing language software (notably a BASIC interpreter) for the Altair and other personal computers. Microsoft later acquired a simple disk operating system (86-DOS, from Seattle Computer Products), itself influenced by the earlier CP/M, and adapted it into MS-DOS (MicroSoft Disk Operating System). Early versions of MS-DOS were developed for very small computers and thus had to run in very small memory partitions (approximately 16KB at the outset). MS-DOS was subsequently licensed or adapted by a variety of computer manufacturers.

In 1981, IBM released the IBM-PC (personal computer), which happened also to be their first computing product not to be designated by a number (as opposed to the 1401, 7094, 360, 370, 4300, 3090, ad nauseam). Microsoft was a key player in the release of the PC, having adapted MS-DOS to run on the Intel 8088 processor (a lower-cost variant of the 8086). For many years after its initial release, MS-DOS still lacked features of mainframe operating systems, such as security, virtual memory, concurrency, multiprogramming, etc. MS-DOS was (and remains) a text-only operating system and thus did not support the graphics displays and protocols that later emerged from Apple Computer Company's development of the Lisa and Macintosh machines (Section 2.5.2).

Other personal computer operating systems of note in the early 1980s included CP/M and VM, which were developed for a broad range of systems, but did not survive the competition with Microsoft in the IBM-PC arena and are basically footnotes in the history of OS development. A number of idiosyncratic PCs, notably the Osborne and Kaypro, which sold large numbers of units initially, were equipped with CP/M. Kaypro later converted to MS-DOS and sold IBM-PC compatible computers.

2.5.2. Apple and Macintosh/OS, with a Taste of Linux

The Apple computer company was selling their Apple II machines at a brisk pace when the IBM-PC was introduced in 1981. Apple II platforms ran Apple DOS, a text-based OS with graphics capabilities and a simple programming language (BASIC) that was developed in part by Microsoft.

As the IBM-PC emerged from the shadows, Apple was putting the finishing touches on its prototype graphics-based machine, called Lisa. The Lisa was an unfortunately clumsy and rather slow (not to mention expensive) computer that was based on research on graphical user interfaces (GUIs) and hypertext performed at the Xerox Palo Alto Research Center (Xerox PARC) in the 1970s. Fortunately, Apple very quickly remedied the Lisa's defects and produced the Macintosh (or Mac), the first commercially successful graphics-based personal computer. Macs were easier to use than IBM-PCs and posed a challenge to IBM's growing domination of the PC market. Macintosh computers initially used the Motorola 68k series of processors, and have lately embraced the PowerPC chip, developed by a consortium of computer manufacturers. In contrast, machines running Microsoft operating systems have largely used the Intel line of 80x86 chips, which has dominated the PC marketplace.

The Macintosh operating system, called MacOS, provided a fully functional graphics interface and was more complete and significantly easier to use than MS-DOS. MacOS did not have all the features of UNIX, but provided substantial graphics support, a sophisticated file system, and early support for networking and virtual memory. It is interesting to note that, as of January 2000, Apple remains Microsoft's sole viable competitor in the alternative PC operating system market, and Microsoft now holds a minority stake in Apple.

Another PC-based operating system, called Linux, is a freely available UNIX clone that runs on Intel platforms and is rapidly gaining acceptance among technically oriented computer users. Unfortunately, Linux has proven less tractable for non-technical people, who constitute the vast majority of computer users. As a result, it is reasonable to suggest that Microsoft is likely to dominate the PC OS market for the foreseeable future.

2.5.3. Mach

As operating systems develop, whether for personal computers or mainframes, they tend to grow larger and more cumbersome. Problems associated with this trend include increased errors, difficulty in making the OS more reliable, and difficulty in adapting it to new features. In response to these problems, Carnegie-Mellon University researchers developed Mach in the mid- to late-1980s. Mach is based on the idea of a small, central partition of the OS that is active at all times, called a microkernel. The remainder of the usual operating system functionality is provided by processes running outside the microkernel, which can be invoked on an as-needed basis.
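A cartoon of the microkernel idea, in Python: the kernel's essential job reduces to routing messages, while OS services live outside it. The "servers" here are ordinary functions standing in for user-space server processes, and all names are invented for illustration (this is not Mach's actual interface):

    servers = {}                           # service name -> handler ("server")

    def register(name, handler):
        """A user-space server announces the service it provides."""
        servers[name] = handler

    def send(service, message):
        """The microkernel's essential job: route a message to a server."""
        return servers[service](message)

    # Stand-ins for servers that would run outside the microkernel.
    register("fs", lambda msg: "file server handled %r" % msg)
    register("unix", lambda msg: "UNIX emulation server handled %r" % msg)

    print(send("fs", "open /etc/motd"))    # a "system call" becomes a message
    print(send("unix", "fork()"))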

Mach provides processes for UNIX emulation, so UNIX programs can be supported in addition to a wide variety of Mach-compatible programs. Mach was used in the NeXT machine, developed by Steve Jobs after he left Apple Computer in 1985. Although this machine was ultimately unsuccessful commercially, it introduced many important new ideas in software design and use, and capitalized upon the synergy between Mach and UNIX.

A consortium of computer vendors formed the Open Software Foundation (OSF), which developed an operating system called OSF/1 based on Mach. Additionally, the ideas developed in Mach have been highly influential in operating system development since the late 1980s.

2.5.4. Microsoft Windows and NT

Recall that early IBM PCs ran early versions of MS-DOS, which supported graphics poorly and had a text-only interface. As Macintosh computers and their more sophisticated MacOS gained popularity, Microsoft felt commercial pressure to develop the Windows operating system. Initially designed as a graphical interface for MS-DOS, and built directly upon MS-DOS (with all its inherent deficiencies), MS-Windows went through several revisions before including more standard OS functionality and becoming commercially attractive. MS-Windows 3.1 was the first commercially successful version, and introduced the important concept of interoperability to PCs.

Definition. Interoperability means that a given application can be run on many different computers (hardware platforms) or operating systems (software platforms).

Interoperability was a key buzzword at Microsoft for several years, and remains a keystone of the company's aggressive, commercially successful plan to unify business and home computing environments worldwide. In short, MS-Windows can (in principle) run on a variety of processors. Additionally, other companies (such as Citrix) have constructed software emulators that give the user the look-and-feel of MS-Windows on machines for which Windows is not natively available (e.g., Sun workstations). These OS emulators allow Windows-compatible software to run in nearly the same mode that it runs in on Windows-compatible machines, and thus extend interoperability to further unify computing practice.

MS-Windows adds so much functionality to MS-DOS that it could itself be thought of as an operating system. However, MS-DOS deficiencies became obvious as PCs gained more power and flexibility. In particular, MS-DOS was developed for a small PC with small memory, no hard disk, no protection, no memory management, and no network (all capabilities taken for granted today). Although later versions of MS-DOS addressed these problems, the OS has severe drawbacks that render it obsolete in a network-intensive environment. Unfortunately, MS-Windows also suffers from these problems, since it is built on MS-DOS.

As a result, Microsoft was motivated to develop a modern operating system for PCs, which it called Windows/New Technology (Windows NT). This system was first released in 1993 and incorporates many features of UNIX and Mach. In later sections of these notes, NT and UNIX will be compared and contrasted. Suffice it to say that NT is now the choice of most network administrators for a Microsoft-compatible networking environment. It is interesting to note that Microsoft, for all its promotion of the Windows family of OS products, once sold its own version of UNIX under the trade name XENIX.

Reading Assignments and Exercises


This concludes our overview of operating system development. We next overview the UNIX operating system and basic UNIX commands.


References

[1] Tanenbaum, A.S. Operating Systems: Design and Implementation, Prentice-Hall (1987).

[2] Corbato, F.J., M. Merwin-Daggett, and R.C. Daley. "An experimental time-sharing system", Proceedings of the AFIPS Fall Joint Computer Conference, pp. 335-344 (1962).

[3] Corbato, F.J., J.H. Saltzer, and C.T. Clingen. "MULTICS - The first seven years", Proceedings of the AFIPS Spring Joint Computer Conference, pp. 571-583 (1972).

[4] Corbato, F.J. and V.A. Vyssotsky. "Introduction and overview of the MULTICS system", Proceedings of the AFIPS Fall Joint Computer Conference, pp. 185-196 (1965).

[5] Jarvis, J.E. "The many faces of MULTICS", Computing Journal 18(1):2-6 (1975).