Dr. Oliver Selfridge

Former Chief Scientist
GTE Laboratories
Boston, MA
172 Lexington Ave., Apt. 1
Cambridge, MA 02138-3688

Artificial Intelligence and the Future of Software Technology


Dr. Selfridge is a fellow of both the AAAS and the AAAI. He delivered the Distinguished Lecture at the U. Mass. CIS Department in May 1990, and has been publishing in the fields of artificial intelligence, communications, and computer science for forty years. He served as a member of the NSA Science Advisory Board for 20 years, chairing its Data Processing Panel for the last 15 of those. He has served on various advisory panels to the White House, as well as on the peer review committee for the NIH. He joined Lincoln Laboratory at MIT in 1951 after a stint in the Signal Corps. Also at MIT, he was Associate Director of Project MAC (the large-scale time-sharing effort) and later of the Cambridge Project. In 1975 he went to BBN as a Senior Scientist, and in 1983 he became Chief Scientist of the Computer and Information Systems Laboratory at GTE Laboratories. He retired in 1993 but has continued his interest in machine learning and AI, especially self-improving systems.

Artificial Intelligence and the Future of Software Technology

I trace the overall history of computing and the parallel rise of the field called Artificial Intelligence. We watched the development of the magnetic core and its later integration with solid-state hardware; these were the practical triggers for modern practices of computers and computation. At the same time, a deeper philosophical question was being tackled by certain computer scientists: namely, the nature of the mind of man. These days we deal with computation primarily as a matter of software technology. It is a strange technology indeed. Ask a manager of programmers what a programmer does, and he or she will talk of logical design, ordered structures, and writing programs, especially, these days, in object-oriented languages. In fact, the ecology of programming is such that programmers spend over 80% of their time modifying code, not writing it. Yet the magazines and books on software technology do not even acknowledge that fact; there are no adequate categorizations, and there is not even a rich vocabulary for dealing with and describing the nature of the changes and modifications that software always needs.

I present a program for rationalizing the handling, construction, and management of software and of the people who deal with it. The aim is eventually to hand much of the responsibility for certain kinds of maintenance to the computer itself, which must adapt its software to changing conditions and changing requirements. Change and adaptation are at the foundation of these goals, and they are the essential subject of Artificial Intelligence. What is needed is a good deal of abstract and applied research on these problems, and I will suggest places to start.