Personal computers, now as ubiquitous as typewriters, are direct descendants of the LINC, an invention made some two decades ago, at the close of the paleo-computing era. In the early 1960’s, digital computers were accorded the reverence of religious totems. Massive machines engendered a mystique that daunted even the most adventurous biomedical researchers. It was an age when computers were untouchable, except through the ministrations of computer acolytes; a time when programs, once relinquished to the computer center, would be run and returned in about as long as it took a suit to be cleaned (special rush jobs — 24-hour service); when computers resided in large glassed-in sanctuaries, with signs warning “Do Not Enter.” In short, a time of hands off. Only the privileged priestly caste — designers, operators, system programmers — could gain direct access to a digital computer, and they understood what the others were missing. Computer economics encouraged the status quo. An hour on the IBM 7090, a common large commercial machine, circa 1960 (some of today’s micros are as powerful), cost $100 or more. The image of programmers pondering an elusive bug at such rates could turn any cost-conscious administrator green.
The computer landscape was almost completely dominated by IBM megaliths. With about 85 percent of the computer market, IBM ignored what came to be called “minicomputers.” With its investment in large computers such as the 7094, and with the introduction of the 360 series, IBM had decided to push large “systems” that were wildly expensive for most laboratory applications. Perhaps most importantly, these systems were simply inappropriate for most laboratory uses. Small companies like Digital Equipment Corporation (DEC) sold digital equipment for the lab, but their products were building blocks, not computers.
In this environment, two groups at MIT came up with different approaches to encourage direct access. The first, called “time-sharing,” was intended to create the illusion for a number of users that each was in control of a large machine — in the first experiment, an IBM 7090. The basic idea was to slice a second into, say, 20 slivers, so that each of 20 users would be served in turn for 50 milliseconds of every second. In such a system, each user consumed one-twentieth of the machine, so a programmer would be charged for only three minutes of computer time every hour. This approach allowed programmers to find design errors and to debug their programs much more efficiently than with the old “batch” mode of access.
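The slicing arithmetic behind that billing scheme can be sketched in a few lines (a minimal illustration using the article’s figures: 20 users sharing a one-second scheduling cycle):

```python
# Round-robin time-sharing arithmetic, using the illustrative values
# from the text: 20 users, slices carved out of each one-second cycle.
USERS = 20
MS_PER_CYCLE = 1000               # one second, in milliseconds

slice_ms = MS_PER_CYCLE // USERS  # each user's turn: 50 ms per cycle
share = 1 / USERS                 # each user "owns" 1/20 of the machine
billed_minutes = 60 // USERS      # per wall-clock hour: 3 minutes billed

print(slice_ms, share, billed_minutes)  # → 50 0.05 3
```

An hour at the console thus cost a twentieth of an hour of machine time — the economic trick that made interactive debugging affordable on a $100-per-hour machine.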
About 10 miles away from the main campus, at MIT’s Lincoln Laboratory, in Lexington, Massachusetts, a different approach emerged. While participating in the design of two highly advanced computers, TX-0 and TX-2, under Navy sponsorship, Wesley Clark realized that time-sharing was not the only solution to the problem of direct access. Having designed a special-purpose machine for brain research, Clark understood how computers could transform the biomedical laboratory. Direct access was crucial, but for Clark, that also meant complete “ownership” of the machine. As he put it, “a computer should be just another piece of lab equipment.” In 1961 this notion was heretical. Computers were too expensive and too veiled in mystery for most biomedical researchers. Even at MIT’s prestigious neighbor, Harvard, advanced experiment control meant a rack of clicking relays, clumsy to change and seriously limited in interpretive power.
Clark, who had contributed substantially to the development of the large TX-0 and TX-2 computers, had seen small but inconvenient CDC-160’s used in a few labs. He proposed building a relatively inexpensive, general-purpose computer that could be controlled easily by biomedical researchers, but his suggestions were met with indifference by Lincoln Lab’s management. The Air Force was paying most of the bills for operational support, and biomedical applications were not high on the agenda. However, with encouragement and support from William Papian, the group leader, Clark continued to work on his idea for a small computer. Papian, one of the developers of ferrite core memories, a landmark in computer design, understood the implications of Clark’s ideas. Despite a lukewarm reaction from Fred Frick, director of Division 6, Clark disappeared from the lab for about three weeks in 1961, and returned with a complete design for a small computer, with characteristics that marketing representatives would later call “user friendly.”
Clark’s computer was designed to satisfy four basic criteria: easy to program, easy to communicate with while in operation, easy to maintain, and able to process biotechnical signals directly. No computer in the early 1960’s could come close to fulfilling those objectives. Later, Clark added two shrewd criteria: the machine could not be too high to see over, and it could cost no more than $25,000, the amount a lab director could spend without higher-level approval. The guideline for the height of the machine, which at first seems only whimsical, reflected Clark’s belief that a machine should not intimidate its owner — no awe-inspiring Golem for Clark.
Despite increasing pressure from Frick to support defense-related work at Lincoln Lab, Bill Papian and Wes Clark were determined to build a prototype machine — a concrete test of Clark’s design. Rather than designing new circuits, Clark decided to use modules manufactured by DEC, a company founded by Kenneth Olsen, who had once worked for Bill Papian at Lincoln Lab. DEC modules, components of a kind of electronic erector set, were in fact based on the digital circuits of the TX-2. Although the packaging was new, the circuits were familiar to Clark and his associates. So, in 1962, using off-the-shelf DEC modules, Clark and his associates put together a working computer. With a bow to Lincoln Lab and a pun on the feature linking the user closely to the machine, they dubbed the computer “LINC.”