Doing Windows, Part 3: A Pair of Strike-Outs

Come August of 1984, Microsoft Windows had missed its originally announced release date by four months and was still nowhere near ready to go. That month, IBM released the PC/AT, a new model of their personal computer based around the more powerful Intel 80286 processor. Amidst the hoopla over that event, they invited Microsoft and other prominent industry players to a sneak preview of something called TopView, and Bill Gates got an answer at last to the fraught question of why IBM had been so uninterested in his own company’s Windows operating environment.

TopView had much in common with Windows and the many other attempts around the industry, whether already on the market or still in the works, to build a more flexible and user-friendly operating environment upon the foundation of MS-DOS. Like Windows and so many of its peers, it would offer multitasking, along with a system of device drivers to isolate applications from the underlying hardware and a toolkit for application developers that would allow them to craft software with a consistent look and feel. Yet one difference made TopView stand out from the pack — and not necessarily in a good way. While it did allow the use of a mouse and offered windows of a sort, it ran in text rather than graphics mode. The end result was a long, long way from the Macintosh-inspired ideal of intuitiveness and attractiveness which Microsoft dreamed of reaching with their own GUI environment.

TopView at the interface level resembled something IBM might have produced for the mainframe market back in the day more than it did Windows and the other microcomputer GUI environments that were its ostensible competitors. Like IBM’s mainframe system software, it was a little stodgy, not terribly pretty, and not notably forgiving toward users who hadn’t done their homework, yet had a lot to offer underneath the hood to anyone who could accept its way of doing business. It was a tool that seemed designed to court power users and office IT administrators, even as its competitors advertised their ease of use to executives and secretaries.

Within its paradigm, though, TopView was a more impressive product than it’s generally given credit for being even today. It sported, for example, true preemptive multitasking[1] months before the arrival of the Commodore Amiga, the first personal computer to ship with such a feature right out of the box. Even ill-behaved vanilla MS-DOS applications could be coerced into multitasking under TopView. Indeed, while IBM hoped, like everyone else making extended operating environments, to tempt third-party programmers into making native applications just for them, they were willing to go to heroic lengths to get existing MS-DOS applications working inside TopView in the meantime. They provided special specification files — known as “Program Information Files,” or PIFs — for virtually all popular MS-DOS software. These told TopView exactly how and when their subjects would try to access the computer’s hardware, whereupon TopView would step in to process those calls itself, transparently to the ill-behaved application. It was an admittedly brittle solution to a problem which seemed to have no unadulteratedly good ones; it required IBM to research the technical underpinnings of every major new piece of MS-DOS software that entered the market in order to keep up with an endless game of whack-a-mole that was exhausting just to think about. Still, it was presumably better than punting on the whole problem of MS-DOS compatibility, as Visi On had done. Whatever else one could say about IBM’s approach to extending MS-DOS, they had thus apparently learned at least a little something from the travails of their competitors. Even the decision to run in character mode sounds far more defensible when you consider that up to two-thirds of MS-DOS computers at the time of TopView’s release were equipped only with a monochrome screen capable of no other mode.

Unfortunately, TopView failed to overcome many of the other issues that dogged its competitors. Having been so self-consciously paired with the pricey PC/AT, it was still a bit out in front of the sweet spot of hardware requirements, requiring a 512 K machine to do much of anything at all. And it was still dogged by the 640 K barrier, that most troublesome of all aspects of MS-DOS’s primitiveness. With hacks to get around the barrier still in their relative infancy, TopView didn’t even try to support more memory, and this inevitably limited the appeal of its multitasking capability. With applications continuing to grow in complexity and continuing to gobble up ever more memory, it wouldn’t be long before 640 K wouldn’t be enough to run even two pieces of heavyweight business software at the same time, especially after one had factored in the overhead of the operating environment itself.

A Quick Tour of TopView


While it isn’t technically a graphical user interface, TopView shares many features with contemporaneous products like Visi On and Microsoft Windows. Here we’re choosing an application to launch from a list of those that are installed. The little bullet to the left of each name on the list is important; it indicates that we have enough memory free to run that particular application. With no more than 640 K available in this multitasking environment and no virtual-memory capability, memory usage is a constant concern.

Here we see TopView’s multitasking capabilities. We’re running the WordStar word processor and the dBase database, two of the most popular MS-DOS business applications, at the same time. Note the “windows” drawn purely out of text characters. Preemptive multitasking like TopView is doing here wouldn’t come to Microsoft Windows until Windows 95, and wouldn’t reach the Macintosh until OS X was released in 2001.

We bring up a TopView context window by hitting the third — yes, third — button on IBM’s official mouse. Here we can switch between tasks, adjust window sizes and positions (albeit somewhat awkwardly, given the limitations of pure text), and even cut and paste between many MS-DOS applications that never anticipated the need for such a function. No other operating environment would ever jump through more hoops to make MS-DOS applications work like they had been designed for a multitasking windowed paradigm from the start.

Some of those hoops are seen above. Users make MS-DOS applications run inside TopView by defining a range of parameters explaining just what the application in question tries to do and how it does it. Thankfully, pre-made definition files for a huge range of popular software shipped with the environment. Brittle as heck though this solution might be, you certainly can’t fault IBM’s determination. Microsoft would adopt TopView’s “Program Information File,” or PIF, for use in Windows as well. It would thereby become the one enduring technical legacy of TopView, persisting in Windows for years after the IBM product was discontinued in 1988.

One of the hidden innovations of TopView is its “Window Design Aid,” which lets programmers of native applications define their interface visually, then generates the appropriate code to create it. Such visually-oriented time-savers wouldn’t become commonplace programming aids for another decade at least. It all speaks to a product that’s more visionary than its reputation — and its complete lack of graphics — might suggest.

TopView shipped in March of 1985 — later than planned, but nowhere near as late as Microsoft Windows, which was now almost a full year behind schedule. It met a fractious reception. Some pundits called it the most important product to come out of IBM since the release of the original IBM PC, while others dismissed it as a bloated white elephant that hadn’t a prayer of winning mainstream acceptance — not even with the IBM logo on its box and a surprisingly cheap suggested list price of just $149. For many IBM watchers — not least those watching with concern inside Microsoft — TopView was most interesting not so much as a piece of technology as a sign of IBM’s strategic direction. “TopView is the subject of fevered whispers throughout the computer industry not because of what it does but because of what it means,” wrote PC Magazine. It had “sent shivers through the PC universe and generated watchfulness” and “possibly even paranoia. Many experts think, and some fear, that TopView is the first step in IBM’s lowering of the skirt over the PC — the beginning of a closed, proprietary operating system.”

Many did indeed see TopView as a sign that IBM was hoping to return to the old System/360 model of computing, seizing complete control of the personal-computing market by cutting Microsoft out of the system-software side. According to this point of view, the MS-DOS compatibility IBM had bent over backward to build into TopView needed to last only as long as it took third-party developers to write native TopView applications. Once a critical mass of same had been built up, it shouldn’t be that difficult to decouple TopView from MS-DOS entirely, turning it into a complete, self-standing operating system in its own right. For Bill Gates, this was a true nightmare scenario, one that could mean the end of his business.

But such worries about a TopView-dominated future, to whatever extent he had them, proved unfounded. A power-user product with mostly hacker appeal in a market that revolved around the business user just trying to get her work done, TopView quickly fizzled into irrelevance, providing in the process an early warning sign to IBM, should they choose to heed it, that their omnipotence in the microcomputer market wasn’t as complete as it had been for so long in the mainframe market. IBM, a company that didn’t abandon products easily, wouldn’t officially discontinue TopView until 1988. By that time, though, the most common reaction to the news would be either “Geez, that old thing was still around?” or, more likely, “What’s TopView?”

Of course, all of this was the best possible news from Microsoft’s perspective. IBM still needed the MS-DOS they provided as much as ever — and, whatever else happened, TopView wasn’t going to be the as-yet-unreleased Windows’s undoing.

In the meantime, Bill Gates had Windows itself to worry about, and that was becoming more than enough to contend with. Beginning in February of 1984, when the planned Windows release date was given a modest push from April to May of that year, Microsoft announced delay after delay after delay. The constant postponements made the project an industry laughingstock. It became the most prominent target for a derisive new buzzword that had been coined by a software developer named Ann Winblad in 1983: “vaporware.”

Inside Microsoft, Windows’s reputation was little better. As 1984 wore on, the project seemed to be regressing rather than progressing, becoming a more and more ramshackle affair that ran more and more poorly. Microsoft’s own application developers kicked and screamed when asked to consider writing something for Windows; they all wanted to write for the sexy Macintosh.

Neil Konzen, a Microsoft programmer who had been working with the Macintosh since almost two years before that machine’s release, was asked to take a hard look at the state of Windows in mid-1984. He told Bill Gates that it was “a piece of crap,” “a total disaster.” Partially in response to that verdict, Gates pushed through a corporate reorganization, placing Steve Ballmer, his most trusted lieutenant, in charge of system software and thus of Windows. He reportedly told Ballmer to get Windows done or else find himself a job at another company. And in corporate America, of course, shit rolls downhill; Ballmer started burning through Windows project managers at a prodigious pace. The project acquired a reputation inside Microsoft as an assignment to be avoided at all costs, a place where promising careers went to die. Observers inside and outside the project’s orbit were all left with the same question: just what the hell was preventing all these smart people from just getting Windows done?

The fact was that Windows was by far the biggest thing Microsoft had ever attempted from the standpoint of software engineering, and it exposed the limitations of the development methodology that had gotten them this far. Ever since the days when Gates himself had cranked out their very first product, a version of BASIC to be distributed on paper tape for the Altair kit computer, Microsoft had functioned as a nested set of cults of personality, each project driven by if not belonging solely to a single smart hacker who called all the shots. For some time now, the cracks in this edifice had been peeking through; even when working on the original IBM PC, Gates was reportedly shocked and nonplussed at the more structured approach to project management that was the norm at IBM, a company that had already brought to fruition some of the most ambitious projects in the history of the computer industry. And IBM’s project managers felt the same way upon encountering Microsoft. “They were just a bunch of nerds, just kids,” remembers one. “They had no test suites, nothing.” Or, as another puts it:

They had a model where they just totally forgot about being efficient. That blew our minds. There we were watching all of these software tools that were supposed to work together being built by totally independent units, and nobody was talking to each other. They didn’t use any of each other’s code and they didn’t share anything.

With Windows, the freelancing approach to software development finally revealed itself to be clearly, undeniably untenable. Scott MacGregor, the recent arrival from Xerox who was Windows’s chief technical architect in 1984, remembers his frustration with this hugely successful young company — one on whose products many of the Fortune 500 elite of the business world were now dependent — that persisted in making important technical decisions on the basis of its employees’ individual whims:

I don’t think Bill understood the magnitude of doing a project such as Windows. All the projects Bill had ever worked on could be done in a week or a weekend by one or two different people. That’s a very different kind of project than one which takes multiple people more than a year to do.

I don’t think of Bill as having a lot of formal management skills, not in those days. He was kind of weak on managing people, so there was a certain kind of person who would do well in the environment. There were a lot of people at that time with no people skills whatsoever, people who were absolutely incompetent at managing people. It was the Peter Principle: very successful technical people would get promoted to management roles. You’d get thirty people reporting to one guy who was not on speaking terms with the rest of the group, which is inconceivable.

One has to suspect that MacGregor had one particular bête noire in mind when talking about his “certain kind of person.” In the eyes of MacGregor and many others inside Microsoft, Steve Ballmer combined most of Bill Gates’s bad qualities with none of his good ones. Like Gates, he had a management style that often relied on browbeating, but he lacked the technical chops to back it up. He was a yes man in a culture that didn’t suffer fools gladly, a would-be motivational speaker who too often failed to motivate, the kind of fellow who constantly talked at you rather than with you. One telling anecdote has him visiting the beleaguered Windows team to deliver the sort of pep talk one might give to a football team at halftime, complete with shouts and fist pumps. He was greeted by… laughter. “You don’t believe in this?” Ballmer asked, more than a little taken aback. The team just stood there uncomfortably, uncertain how to respond to a man whom MacGregor and many of the rest of them considered almost a buffoon, a “non-tech cheerleader.”

And yet MacGregor had problems of his own in herding the programmers who were expected to implement his grand technical vision. Many of them saw said vision as an overly slavish imitation of the Xerox Star office system, whose windowing system he had previously designed. He seemed willfully determined to ignore the further GUI innovations of the Macintosh, a machine with which many at Microsoft — not least among them Bill Gates — were deeply enamored. The most irritating aspect of his stubbornness was his insistence that Windows should employ only “tiled windows,” which always stretched across the full width of the screen and couldn’t overlap one another or be dragged about freely in the way of their equivalents on the Macintosh.

All of this created a great deal of discord inside the project, especially given that much of MacGregor’s own code allegedly didn’t work all that well. Eventually Gates and Ballmer brought in Neil Konzen to rework much of MacGregor’s code, usurping much of his authority in the process. As Windows began to slip through MacGregor’s fingers, it began to resemble the Macintosh more and more; Konzen was so intimately familiar with Apple’s dream machine that Steve Jobs had once personally tried to recruit him. According to Bob Belleville, another programmer on the Windows team, Konzen gave to Windows “the same internal structure” as the Macintosh operating system; “in fact, some of the same errors were carried across.” Unfortunately, the tiled-windows scheme was judged to be too deeply embedded by this point to change.

In October of 1984, Microsoft announced that Windows wouldn’t ship until June of 1985. Gates sent Ballmer on an “apology tour” of the technology press, prostrating himself before journalist after journalist. It didn’t seem to help much; the press continued to pile on with glee. Stewart Alsop II, the well-respected editor of InfoWorld magazine, wrote that “buyers probably believe the new delivery date for Windows with the same fervor that they believe in Santa Claus.” Then, he got downright nasty: “If you’ve got something to sell, deliver. Otherwise, see to the business of creating the product instead of hawking vaporware.”

If the technology press was annoyed with Microsoft’s constant delays and prevarications, the third parties who had decided or been pressured into supporting Windows were getting even more impatient. One by one, the clone makers who had agreed to ship Windows with their machines backed out of their deals. Third-party software developers, meanwhile, kept getting different versions of the same letter from Microsoft: “We’ve taken the wrong approach, so everything you’ve done you need to trash and start over.” They too started dropping one by one off the Windows bandwagon. The most painful defection of all was that of Lotus, who now reneged on their promise of a Windows version of Lotus 1-2-3. The latter was the most ubiquitous single software product in corporate America, excepting only MS-DOS, and Microsoft had believed that a Windows Lotus 1-2-3 would almost guarantee their new GUI environment’s success. The question now was whether the lack of same would have the opposite effect.

In January of 1985, Steve Ballmer brought in Microsoft’s fifth Windows project manager: Tandy Trower, a three-year veteran with the company who had recently been managing Microsoft BASIC. Trower was keenly aware of Bill Gates’s displeasure at recent inroads being made into Microsoft’s traditional BASIC-using demographic by a new product called Turbo Pascal, from a new industry player called Borland. The Windows project’s reputation inside Microsoft was such that he initially assumed he was being set up to fail, thereby giving Gates an excuse to fire him. “Nobody wanted to touch Windows,” remembers Trower. “It was like the death project.”

Trower came in just as Scott MacGregor, the Xerox golden boy who had arrived amidst such high expectations a year and a half before, was leaving amidst the ongoing discord and frustration. Ballmer elected to replace MacGregor with… himself as Windows’s chief technical architect. Not only was he eminently unqualified for such a role, but he thus placed Trower in the awkward position of having the same person as both boss and underling.

As it happened, though, there wasn’t a lot of need for new technical architecting. In that respect at least, Trower’s brief was simple. There were to be no new technical or philosophical directions explored, no more debates over the merits of tiled versus overlapping windows or any of the rest. The decisions that had already been made would remain made, for better or for worse. Trower was just to get ‘er done, thereby stemming the deluge of mocking press and keeping Ballmer from having to go on any more humiliating apology tours. He did an admirable job, all things considered, of bringing some sort of coherent project-management methodology to a group of people who desperately needed one.

What could get all too easily lost amidst all the mockery and all the very real faults with the Windows project as a functioning business unit was the sheer difficulty of the task of building a GUI environment without abandoning the legacy of MS-DOS. Unlike Apple, Microsoft didn’t enjoy the luxury of starting with a clean slate; they had to keep one foot in the past as well as one in the future. Nor did they enjoy their competitor’s advantage of controlling the hardware on which their GUI environment must run. The open architecture of the IBM PC, combined with a market for clones that was by now absolutely exploding, meant that Microsoft was forced to contend with a crazy quilt of different hardware configurations. All those different video cards, printers, and memory configurations that could go into an MS-DOS machine required Microsoft to provide drivers for them, while all of the popular existing MS-DOS applications had to at the very least be launchable from Windows. Apple, by contrast, had been able to build the GUI environment of their dreams with no need to compromise with what had come before, and had released exactly two Macintosh models to date — models with an architecture so closed that opening their cases required a special screwdriver only available to Authorized Apple Service Providers.

In the face of all the challenges, some thirty programmers under Trower “sweated blood trying to get this thing done,” as one of them later put it. It soon became clear that they weren’t going to make the June 1985 deadline (thus presumably disappointing those among Stewart Alsop’s readers who still believed in Santa Claus). Yet they did manage to move forward in a far more orderly fashion than had been seen during all of the previous year. Microsoft was able to bring to the Comdex trade show in May of 1985 a version of Windows which looked far more complete and polished than anything they had shown before, and on June 28, 1985, a feature-complete “Preview Edition” was sent to many of the outside developers who Microsoft hoped would write applications for the new environment. But the official first commercial release of Windows, known as Windows 1.01, didn’t ship until November of 1985, timed to coincide with that fall’s Comdex show.

In marked contrast to the inescapable presence Windows had been at its first Comdex of two years before, the premiere of an actual shipping version of Windows that November was a strangely subdued affair. But then, the spirit of the times as well was now radically different. In the view of many pundits, the bloom was rather off the rose for GUIs in general. Certainly the GUI-mania of the Fall 1983 Comdex and Apple’s “1984” advertisement now seemed like the distant past. IBM’s pseudo-GUI TopView had already failed, as had Visi On, while the various other GUI products on offer for MS-DOS machines were at best struggling for marketplace acceptance. Even the Macintosh had fallen on hard times, such that many were questioning its very survival. Steve Jobs, the GUI’s foremost evangelist, had been ignominiously booted from Apple the previous summer — rendered, as the conventional wisdom would have it, a has-been at age thirty. Was the GUI itself doomed to suffer the same fate? What, asked the conventional-wisdom spouters, was really so bad about MS-DOS’s blinking command prompt? It was good enough to let corporate America get work done, and that was the important thing. Surely it wouldn’t be Windows, an industry laughingstock for the better part of two years now, that turned all this GUI hostility back in the market’s face. Windows was launching into a headwind fit to sink the Queen Mary.

It was a Microsoft public-relations specialist named Pam Edstrom who devised the perfect way of subverting the skepticism and even ridicule that was bound to accompany the belated launch of the computer industry’s most infamous example of vaporware to date. She did so by stealing a well-worn page from the playbook of media-savvy politicians and celebrities who have found themselves embroiled in controversy. How do you stop people making fun of you? Why, you beat them to the punch by making fun of yourself first.

Edstrom invited everybody who was anybody in technology to a “Microsoft Roast” that Comdex. The columnist John C. Dvorak became master of ceremonies, doing a credible job with a comedic monologue to open the affair. (Sample joke about the prematurely bald Ballmer: “When Windows was first announced, Ballmer still had hair!”) Gates and Ballmer themselves then took the stage, where Stewart Alsop presented them with an InfoWorld “Golden Vaporware Award.” The two main men of Microsoft then launched into a comedy routine of their own that was only occasionally cringe-worthy, playing on their established reputations as the software industry’s enfant terrible and his toothy-but-not-overly-bright guard dog. Gates said that Ballmer had wanted to cut features: “He came up with this idea that we could rename this thing Microsoft Window; we would have shipped that a long time ago.” Ballmer told how Gates had ordered him to “ship this thing before the snow falls, or you’ll end your career here doing Windows!”; the joke here was that in Seattle, where the two lived and worked, snow almost never falls. Come the finale, they sang “The Impossible Dream” together as a giant shopping cart containing the first 500 boxed copies of Windows rolled onto the stage amidst billows of dry ice.

All told, it was a rare display of self-deprecating humanity and showmanship from two people not much known for either. From a PR perspective, it was about the best lemonade Microsoft could possibly have made out of a lemon of a situation. The press was charmed enough to start writing about Windows in more cautiously positive terms than they had in a long, long time. “The future of integration [can] be perceived through Windows,” wrote PC World. Meanwhile Jim Seymour, another respected pundit, wrote a column for the next issue of PC Week that perfectly parroted the message Microsoft was trying to get across:

I am a Windows fan, not because of what it is today but what it almost certainly will become. I think developers who don’t build Windows compatibility into new products and new releases of successful products are crazy. The secret of Windows in its present state is how much it offers program developers. They don’t have to write screen drivers [or] printer drivers; they can offer their customers a kind of two-bit concurrency and data exchange.

The most telling aspect of even the most sympathetic early reviews is their future orientation; they emphasize always what Windows will become, not what it is. Because what Windows actually was in November of 1985 was something highly problematic if not utterly superfluous.

The litany of problems began with that same old GUI bugaboo: performance. Two years before, Bill Gates had promised an environment that would run on any IBM PC or clone with at least 192 K of memory. Technically speaking, Microsoft had come very close to meeting that target: Windows 1.01 would run even on the original IBM PC from 1981, as long as it had at least 256 K of memory. It didn’t even absolutely require a hard drive. But running and running well — or, perhaps better put, running usably — were two very different matters. Windows could run on a floppy-based system, noted PC Magazine dryly, “in the same sense that you can bail a swimming pool dry with a teaspoon.” To have a system that wasn’t so excruciatingly slow as to outweigh any possible benefit it might deliver, you really needed a hard drive, 640 K or more of memory, and an 80286 processor like that found in the IBM PC/AT. Even on a hot-rod machine like this, Windows was far from snappy. “Most people will say that any screen refresh that can be watched takes too long,” wrote PC Magazine. “Very little happens too quickly to see in Windows.” One of Microsoft’s own Windows programmers would later offer a still more candid assessment: even at this late date, he would say, “Windows was a pig,” the result of a project that had passed through too many hands and had too many square chunks of code hammered into too many round holes.

Subjectively, Windows felt like it had been designed and programmed by a group of people who had read a whole lot about the Macintosh but never actually seen or used one. “I use a Macintosh enough to know what a mouse-based point-and-click interface should feel like,” wrote John C. Dvorak after the goodwill engendered by the Microsoft Roast had faded. “Go play with a Mac and you’ll see what I mean. Windows is clunky by comparison. Very clunky.” This reputation for endemic clunkiness — for being a Chrysler minivan pitted against Apple’s fine-tuned Porsche of a GUI — would continue to dog Windows for decades to come. In this first release, it was driven home most of all by the weird and unsatisfying system of “tiled” windows.

All of which was a shame because in certain ways Windows was actually far more technically ambitious than the contemporary Macintosh. It offered a cooperative-multitasking system that, if not quite the preemptive multitasking of TopView or the new Commodore Amiga, was more than the single-tasking Mac could boast. And it also offered a virtual-memory scheme which let the user run more applications than would fit inside 640 K. Additional RAM beyond the 640 K barrier or a hard drive, if either or both were extant, could be used as a swap space when the user tried to open more applications than there was room for in conventional memory. Windows would then automatically copy data back and forth between main memory and the swap space as needed in order to keep things running. The user was thus freed from having to constantly worry about her memory usage, as she did in TopView — although performance problems quickly started to rear their head if she went too crazy. In that circumstance, “the thrashing as Windows alternately loads one application and then the other brings the machine to its knees,” wrote PC Magazine, describing another trait destined to remain a Windows totem for years to come.
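To make the mechanic concrete, here is a toy sketch in Python of application-level swapping against a 640 K budget. The application names and sizes are invented for illustration, and this is in no way Microsoft’s actual scheme; it merely shows why alternating between two large applications degenerates into a swap on nearly every switch.

```python
# A toy sketch of application-level swapping against a 640 K budget. Names and
# sizes are invented for illustration; this is not Microsoft's code, just a way
# to see why bouncing between two big applications causes "thrashing."
from collections import OrderedDict

CONVENTIONAL_KB = 640                        # the infamous barrier

class ToySwapper:
    def __init__(self):
        self.in_memory = OrderedDict()       # app -> size in K, oldest first
        self.swap_area = {}                  # apps pushed out to the swap space
        self.swaps = 0

    def switch_to(self, app, size_kb):
        if app in self.in_memory:
            self.in_memory.move_to_end(app)  # already resident; nothing to do
            return
        self.swap_area.pop(app, None)        # (re)load it from the swap area
        # Evict the least recently used apps until the newcomer fits.
        while sum(self.in_memory.values()) + size_kb > CONVENTIONAL_KB:
            victim, victim_size = self.in_memory.popitem(last=False)
            self.swap_area[victim] = victim_size
            self.swaps += 1
        self.in_memory[app] = size_kb

swapper = ToySwapper()
for _ in range(5):                           # bounce between two 400 K apps
    swapper.switch_to("spreadsheet", 400)
    swapper.switch_to("word processor", 400)
print(swapper.swaps)                         # 9: nearly every switch forced a swap
```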

A Quick Tour of Windows 1.01


Windows 1.01 boots into what it calls the “MS-DOS Executive,” which resembles one of the many popular aftermarket file managers of the MS-DOS era, such as Norton Commander. Applications are started from here by double-clicking on their actual .exe files. This version of Windows does nothing to insulate the users from the file-level contents of their hard drives; it has no icons representing installed applications and, indeed, no concept of installation at all. Using Windows 1.01 is thus akin to using Windows 10 if the Start Menu, Taskbar, Quick-Launch Toolbar, etc. didn’t exist, and all interactions happened at the level of File Explorer windows.

In a sense, the MS-DOS Executive is Windows. Closing it serves as the shutdown command.

Under Microsoft’s “tiled windows” approach, windows always fill the width of the screen but can be tiled vertically. They’re never allowed to overlap one another under any circumstances, and taken as a group will always fill the screen. One window, the MS-DOS Executive, will always be open and thus filling the screen even if nothing else is running. There is no concept of a desktop “beneath” the windows.

Windows can be sized to suit in vertical terms by grabbing the widget at their top right and dragging. Here we’re making the MS-DOS Executive window larger. When we release the mouse button, the Clock window will automatically be made smaller in proportion to its companion’s growth. Remember, overlapping windows aren’t allowed, no matter how hard you try to trick the software…

…with one exception. Sub-windows opened by applications can be dragged freely around the screen and can overlay other windows. Go figure!
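Sub-windows aside, the tiling arithmetic governing the main windows is simple enough to sketch. The fragment below is only a guess at the general rule as described in these captions, not Windows 1.01’s actual layout code: window heights always sum to the screen height, and growing one window takes exactly that many rows from its neighbor.

```python
# A guess at the vertical-tiling arithmetic described in the captions above,
# not Windows 1.01's actual layout code. Every window spans the full screen
# width, the heights always sum to the screen height, and growing one window
# takes the same number of rows away from its neighbor.

SCREEN_ROWS = 25                    # a text-screen-sized example

def grow(heights, index, rows, minimum=3):
    """Grow window `index` by `rows`, shrinking the window below it (or above
    it, if `index` is the bottom window) by the same amount."""
    neighbor = index + 1 if index + 1 < len(heights) else index - 1
    rows = min(rows, heights[neighbor] - minimum)   # never squeeze a window away
    heights[index] += rows
    heights[neighbor] -= rows
    assert sum(heights) == SCREEN_ROWS              # the tiling invariant holds
    return heights

layout = [13, 12]                   # MS-DOS Executive on top, Clock below
print(grow(layout, 0, 5))           # [18, 7]: the Executive grows, the Clock shrinks
```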

If we try to drag a window around by its title bar, an interesting philosophical distinction is revealed between Windows 1.01 and more recent versions. We wind up swapping the contents of one window with those of another. Applications, in other words, aren’t intrinsically bound to their windows, but can be moved among them. In the screenshot above, the disk icon is actually our mouse cursor, representing the MS-DOS Executive window’s contents, which we’re about to swap with the contents of what is currently the Clock window.

Windows 1.01 shipped with Write, a fairly impressive minimalist word processor — arguably the most impressive application ever made for the little-used operating environment.

In contrast to the weirdness of other aspects of Windows 1.01, working within an application like Write feels reassuringly familiar, what with its scroll bars and Macintosh-like pull-down menus. Interestingly, the latter use the click-and-hold approach of the Mac rather than the click-once approach of later versions of Windows.

Windows 1.01 doesn’t have a great way of getting around the 640 K barrier, but it does implement a virtual-memory scheme — no mean feat in itself on a processor without built-in memory protection — which uses any memory beyond 640 K as essentially a RAM disk — or, as Microsoft called it, a “Smart Drive.” In the absence of extra memory, or if it too is filled up, the hard disk becomes the swap area.

By the time Windows was ready, all of the clone makers whom Bill Gates had cajoled and threatened into shipping it with their computers had jumped off the bandwagon, telling him that it had simply taken him too long to deliver, and that the product which he had finally delivered was simply too slow on most hardware for them to foist it on their customers in good conscience. With that path to acceptance closed to them, Microsoft was forced to push Windows as a boxed add-on sold through retail channels, a first for them in the context of a piece of system software. In a measure of just how badly Gates wanted Windows to succeed, Microsoft elected to price it at only $99 — one-tenth of what VisiCorp had tried to ask for Visi On two years before — despite its huge development cost.

Unfortunately, the performance problems, the awkwardness of the tiled windows, and the almost complete lack of native Windows applications beyond those that shipped with the environment outweighed the low price; almost nobody bought the thing. Microsoft was trapped by the old chicken-or-the-egg conundrum that comes with the launch of any new computing platform — a problem that is solved only with difficulty in even the best circumstances. Buyers wanted to see Windows applications before they bought the operating environment, while software developers wanted to see a market full of eager buyers before they invested in the platform. The fact that Windows could run most vanilla MS-DOS applications with some degree or another of felicity only helped the software developers make the decision to stay away unless and until the market started screaming for Windows-native versions of their products. Thus, the MS-DOS compatibility Microsoft had built into Windows, which had been intended as a mere bridge to the Windows-native world of the future, proved something of a double-edged sword.

When you add up all of the hard realities, it comes as little surprise that Microsoft’s first GUI sparked a brief run of favorable press notices, a somewhat longer run of more skeptical commentary, and then disappeared without a trace. Already by the spring of 1986, it was a non-factor, appearing for all the world to be just one more gravestone in the GUI graveyard, likely to be remembered only as a pundit’s punch line. Bill Gates could comfort himself only with the fact that IBM’s own big system-software innovation had landed with a similar splat.

IBM and Microsoft had each tried to go it alone, had each tried to build something better upon the foundation of MS-DOS, and had each struck out swinging. What now? Perhaps the odd couple still needed one another, loath though either was to admit it. In fact, by that spring of 1986 a gradual rapprochement had already been underway for a year, despite deep misgivings from both parties. TopView and Windows 1 had both been a bust, but neither company had gotten where they were by giving up easily. If they pooled their forces once again, who knew what they might achieve. After all, it had worked out pretty well the first time around.

(Sources: the books The Making of Microsoft: How Bill Gates and His Team Created the World’s Most Successful Software Company by Daniel Ichbiah and Susan L. Knepper, Hard Drive: Bill Gates and the Making of the Microsoft Empire by James Wallace and Jim Erickson, Gates: How Microsoft’s Mogul Reinvented an Industry and Made Himself the Richest Man in America by Stephen Manes and Paul Andrews, Computer Wars: The Fall of IBM and the Future of Global Technology by Charles H. Ferguson and Charles R. Morris, and Apple Confidential 2.0: The Definitive History of the World’s Most Colorful Company by Owen W. Linzmayer; PC Magazine of April 30 1985, February 25 1986, April 18 1987, and April 12 1988; Byte of February 1985, May 1988, and the special issue of Fall 1985; InfoWorld of May 7 1984 and November 19 1984; PC World of December 1985; Tandy Trower’s “The Secret Origins of Windows” on the Technologizer website. Finally, I owe a lot to Nathan Lineback for the histories, insights, comparisons, and images found at his wonderful online “GUI Gallery.”)

Footnotes
1 This is perhaps a good point to introduce a quick primer on multitasking techniques to those of you who may not be familiar with its vagaries. The first thing to understand is that multitasking during this period was fundamentally an illusion. The CPUs in the computers of this era were actually only capable of doing one task at a time. Multitasking was the art of switching the CPU’s attention between tasks quickly enough that several things seemed to be happening at once — that several applications seemed to be running at once. There are two basic approaches to creating this illusionary but hugely useful form of multitasking.

Cooperative multitasking — found in systems like the Apple Lisa, the Apple Macintosh between 1987’s System 5 and the introduction of OS X in 2001, and early versions of Microsoft Windows — is so named because it relies on the cooperation of the applications themselves. A well-behaved, well-programmed application is expected to periodically relinquish its control of the computer voluntarily to the operating system, which can then see if any of its own tasks need to be completed or any other applications have something to do. A cooperative-multitasking operating system is easier to program and less resource-intensive than the alternative, but its most important drawback is made clear to the user as soon as she tries to use an application that isn’t terribly well-behaved or well-programmed. In particular, an application that goes into an infinite loop of some sort — a very common sort of bug — will lock up the whole computer, bringing the whole operating system down with it.

Preemptive multitasking — found in the Commodore Amiga, Mac OS X, Unix and Linux, and later versions of Microsoft Windows — is so named because it gives the operating system the authority to wrest control from — to preempt — individual applications. Thus even a looping program can only slow down the system as a whole, not kill it entirely. For this reason, it’s by far the more desirable approach to multitasking, but also the more complicated to implement.
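To make the distinction concrete, here is a minimal sketch in Python of a cooperative round-robin scheduler, purely illustrative and far removed from any real operating system of the era. It also shows the failure mode described above: a task that never yields takes the whole system down with it, because nothing can preempt it.

```python
# A toy cooperative scheduler, purely for illustration. Each task is a Python
# generator that must voluntarily yield control back to the scheduler, just as
# a well-behaved application under cooperative multitasking must.

def well_behaved(name, steps):
    for i in range(steps):
        print(f"{name}: doing step {i}")
        yield                       # hand control back to the scheduler

def ill_behaved():
    yield                           # takes its first turn politely...
    while True:                     # ...then spins forever without yielding.
        pass                        # In a cooperative system this hangs everything.

def cooperative_scheduler(tasks):
    # Round-robin: let each task run until it yields, then move on to the next.
    queue = list(tasks)
    while queue:
        task = queue.pop(0)
        try:
            next(task)              # runs until the task's next yield...
            queue.append(task)      # ...then put it at the back of the line
        except StopIteration:
            pass                    # the task finished; drop it

cooperative_scheduler([well_behaved("WordStar", 3), well_behaved("dBase", 3)])
# Adding ill_behaved() to that list would freeze the whole "machine": the call
# to next() would never return, and no preemption exists to rescue it.
```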

 


Doing Windows, Part 2: From Interface Manager to Windows

Bill Gates was as aware as everyone else of the abundant deficiencies of his own company’s hastily procured operating system for the IBM PC. So, in September of 1981, before the PC had even shipped and just a handful of months after VisiCorp had started their own similar project, he initiated work at Microsoft on a remedy for MS-DOS’s shortcomings. Initially called the “Interface Manager,” it marks the start of a long, fraught tale of struggle and woe that would finally culminate in the operating system still found on hundreds of millions of computers today.

As the name would imply, the Interface Manager was envisioned first and foremost as a way to make computing easier for ordinary people, a graphical layer to sit atop MS-DOS and insulate them from the vagaries of the command line. As such, it was the logical follow-on to an even older project inside Microsoft with similar goals, another whose distant descendant is still ubiquitous today: Microsoft Multiplan, the forefather of Excel.

In those days, people who had worked at the already legendary Xerox Palo Alto Research Center were traded around the computer industry like the scarce and precious commodity they were, markers of status for anyone who could get their hands on one of them. Thus it could only be regarded as something of a coup when Charles Simonyi came to work for Microsoft on February 6, 1981, after almost a decade spent at PARC. There he had been responsible for a word processor known as Bravo, the very first in history to implement the “what you see is what you get” philosophy — meaning that the text you saw on the monitor screen looked exactly like what would be produced by the printer. When the 32-year-old Hungarian immigrant, debonair and refined, showed his secretary at PARC a snapshot of his soon-to-be boss Bill Gates, 25-going-on-15 and looking like he could really use a shower and a haircut, she nearly fell out of her chair laughing: “Charles, what are you doing? Here you are at the best research lab in the world!” What could he say? A rapidly changing industry could make for strange bedfellows. Simonyi became Microsoft’s First Director of Applications Development.

At Microsoft, he found the Multiplan project, an attempt to make a competitor to VisiCalc, already underway. He pushed hard to turn it into not just another spreadsheet but a different kind of spreadsheet, placing a premium on ease of use in a field of business software already becoming known for its crypticness. For him, ease of use meant augmenting the long lists of command keystrokes with a menu of possibilities that would always be at the user’s fingertips. Simonyi:

I like the obvious analogy of a restaurant. Let’s say I go to a French restaurant and I don’t speak the language. It’s a strange environment and I’m apprehensive. I’m afraid of making a fool of myself, so I’m kind of tense. Then a very imposing waiter comes over and starts addressing me in French. Suddenly, I’ve got clammy hands. What’s the way out?

The way out is that I get the menu and point at something on the menu. I cannot go wrong. I may not get what I want — I might end up with snails — but at least I won’t be embarrassed.

But imagine if you had a French restaurant without a menu. That would be terrible.

It’s the same thing with computer programs. You’ve got to have a menu. Menus are friendly because people know what their options are, and they can select an option just by pointing. They do not have to look for something that they will not be able to find, and they don’t have to type some command that might be wrong.

It’s true that Multiplan’s implementation of menus was a long way from what a modern GUI user might expect to see. For one thing, they were lined up at the bottom rather than the top of the screen. (It would take software makers a surprisingly long time to settle on the topside placement we know today, as evidenced by the menus we saw at the bottom of Visi On’s windows in my previous article.) More generally, much of what Simonyi had been able to implement in Bravo on the graphical terminals at Xerox PARC way back in the mid-1970s was impossible on an IBM PC running Multiplan in the early 1980s, thanks to the lack of a mouse and a restriction to text-only display modes. One could only do what one could with the tools to hand — and by that standard, it must be said, Microsoft Multiplan was a pretty good first effort.

Multiplan was released in 1982. Designed to run inside as little as 64 K of memory and ported to several platforms (including even the humble Commodore 64), it struggled to compete with Lotus 1-2-3, which was designed from the start for an IBM PC with at least 256 K. The Lotus product would come to monopolize the spreadsheet market to the tune of an 80-percent share and sales of 5 million copies by the end of the 1980s, while Multiplan would do… rather less well. Still, the general philosophy that would guide Microsoft’s future efforts was there. Their software would distinguish itself by being approachable for the average person. Sometimes this would yield great results, other times it would come off more as a condescending caricature of user-friendliness, but it’s the philosophy that still guides Microsoft’s consumer software to this day.

Here we see Microsoft Multiplan in action. Note the two rows of menus along the bottom of the screen; this counted as hugely user-friendly circa 1982.

Charles Simonyi left an even bigger mark upon Microsoft’s next important application. Like Multiplan, Multi-Tool Word attempted to compete with the leading application of its type primarily on the basis of ease of use. This time, however, the application type in question was the word processor, and the specific application in question was WordStar, a product which was so successful that its publisher, MicroPro International, had gross sales that exceeded Microsoft’s as late as 1983. Determined to recreate what he had wrought at Xerox PARC more exactly than had been possible with Multiplan, a project he had come into in the middle, Simonyi convinced Microsoft to make a mouse just for the new word processor. (“The mouse,” InfoWorld magazine had to explain, “is a pointing device that is designed to roll on the desktop next to the keyboard of a personal computer.”)

The very first Microsoft mouse, which retailed for $195 in 1983.

Debuting in May of 1983, in many ways Multi-Tool Word was the forerunner of the operating environment that would come to be known as Microsoft Windows, albeit in the form of one self-contained application. Certainly most of the touted advantages to a GUI environment were in place. It implemented windows, allowing multiple documents to be open simultaneously within them; it utilized the mouse if anything more elegantly than the full-blown GUI environment Visi On would upon its debut six months later; it could run in graphical mode, allowing it to display documents just as they would later appear on the printer; it did its best to duplicate the interface of Multiplan, on the assumption that a user shouldn’t be expected to relearn the most basic interface concepts every time she needs to use a new application; it had an undo command to let the user walk back her mistakes. Unfortunately, it was also, like most early GUI experiments, slow in comparison to more traditional software, and it lacked such essential features as a spell checker and a mailing-list manager. Like Multiplan, it would have a hard time breaking through in one of the most competitive segments of the business-software market, one which was dominated first by the more powerful WordStar and then by the still more power-user-friendly WordPerfect. But, once again, it gave a glimpse of the future of computing as Microsoft envisioned it.

Multi-Tool Word. Here someone is using the mouse to create a text style. Note the WYSIWYG text displayed above.

Even as these applications were being developed at Microsoft, work on the Interface Manager, the software designed to integrate all of their interface enhancements and more into a non-application-specific operating environment, was continuing at its own pace. As usual with such projects, the Interface Manager wound up encompassing far more than just a new interface. Among other requirements, Gates had stated that it had to introduce a system of drivers to insulate applications from the hardware, and that it had to expose a toolkit to application programmers that was far larger and richer than MS-DOS’s 27 bare-bones function calls. Such a toolkit would allow programmers to make diverse applications with a uniform look and feel, thus delivering on another of the GUI’s most welcome promises.

This is one of a series of screenshots, published in the December 1983 issue of Byte Magazine, which together may represent the oldest extant evidence of Microsoft Windows’s early appearance. Note in particular the menus at the bottom of the screen. Oddly, a much more mature version of Windows, with menus at the top of the individual windows, was demonstrated at the Comdex trade show which began on November 23, 1983. Despite the magazine’s cover date, one therefore has to assume that these screenshots are older — probably considerably older, given how dramatic the differences between the Windows demonstrated at Comdex and the one we see here really are.

In early 1983, Bill Gates and a few colleagues met with IBM to show them their Interface Manager in progress. They had expected a thrilled reception, expected IBM to immediately embrace it as the logical next stage in the two companies’ partnership. What they got was something much different. “They thought it was neat stuff,” recalls Gates’s right-hand man Steve Ballmer, “but they said, ‘We have this other thing we are pretty excited about.'” IBM, it seemed, must be working on an extension to MS-DOS of their own. This unsatisfying and, from Microsoft’s perspective, vaguely alarming meeting heralded the beginning of a new, far less trusting phase in the two companies’ relationship. The unlikely friendship between the young and freewheeling Microsoft and the middle-aged and staid IBM had spawned the IBM PC, a defining success for both companies. Now, though, it was entering a much more prickly phase.

IBM had been happy to treat this scruffy kid named Bill Gates almost as an equal partner as long as their first general-purpose microcomputer remained nothing more than a marketplace experiment. Now, though, with the IBM PC the first bullet item on their stock reports, the one exploding part of an otherwise fairly stagnant business, they were beginning to wonder what they had wrought when they signed that generous deal to merely license MS-DOS from Microsoft rather than buy it outright. Gates had already made it clear that he would happily license the same operating system to others; this, combined with the open architecture and easy-to-duplicate commodity hardware of the IBM PC itself, was allowing the first of what would soon be known as the “PC clones” to enter the market, undercutting IBM’s prices. IBM saw this development, for understandable reasons, as a potential existential threat to the one truly exciting part of their business, and they weren’t at all sure whose side Microsoft was really on. The two partners were bound together in a hopeless tangle of contracts and mutual dependencies that could quite possibly never be fully severed. Still, there wasn’t, thought IBM, any point in getting themselves yet more entangled. From here on, then, IBM and Microsoft’s relationship would live in an uncertain no man’s land somewhere between partners and competitors — a situation destined to have major implications for the quest to replace MS-DOS with something better.

IBM’s suspicions about Microsoft were probably at least partly justified — Bill Gates’s reputation as a shark whom you trusted at your peril was by no means unearned — but undoubtedly became something of a self-fulfilling prophecy as well. Suddenly aware of the prospect of a showdown between their Interface Manager and whatever IBM was playing so close to the vest, Microsoft began reaching out to the emerging clone makers — to names like Compaq, Zenith, and Tandy — in a far more concerted way. If matters should indeed end in a showdown, these could be the bridges which would allow their system software rather than IBM’s to remain the standard in corporate America.

As if all this wasn’t creating concern enough inside Microsoft and IBM alike, there was also the question of what to make of the Apple Lisa, which had been announced in January of 1983 and would ship in June. The much-heralded first personal computer designed from the ground up for the GUI paradigm had a lot of problems when you looked below the surface. For one thing, it was far too expensive for even the everyday corporate market, what with its price tag of over $10,000. And it suffered from a bad case of over-ambition on the part of its software architects, who had decided to ask its 5 MHz Motorola 68000 processor to run a highly sophisticated operating system sporting virtual memory and cooperative multitasking. The inevitable result was that the thing was slow. A popular knock-knock joke inside the computer industry followed the “Who’s there?” with a fifteen-second pause before a “Lisa” finally came forth. If someone was going to pay over $10,000 for a personal computer, everyone agreed, she was justified in expecting it to run like a Ferrari rather than a Volkswagen bus.

The Lisa GUI, looking and working pretty much the way we still expect such things to look and work today.

When you looked beyond the pricing and performance problems, however, the Lisa was… well, the Lisa was amazing. Apple’s engineering team had figured this whole GUI thing out in a way that no one, not even the demigods at Xerox PARC, had managed to do before. The greatest testament to Apple’s genius today is just how normal the Lisa interface still looks, how easily one can imagine oneself just sitting down and getting to work using it. (Try saying that about any other unfamiliar operating system of this period!) All the stuff we expect is present, working as we expect it to: draggable windows with scroll bars on the side and sizing widgets attached to the corners; pull-down menus up there at the top of the screen; a desktop to function as the user’s immediate workspace; icons representing disks, files, and applications which can be dragged from place to place or even thrown in the trash can; drag-and-drop and copy-and-paste. Parts of all this had appeared before in other products, such as the Xerox Star, but never before had it all come together like this. After the Lisa, further refinements of the GUI would all be details; the really essential, really important pieces were all in place. It instantly made all of the industry’s many other GUI projects, including Microsoft’s, look hopelessly clunky.

Thanks not least to that $10,000 price tag, the Lisa itself was doomed to be a commercial failure. But Apple was promising a new machine for 1984, one which would be cheaper and would employ the same interface without the speed-sapping virtual memory and multitasking. For obvious reasons, the prospect of this next Apple computer, to be called the Macintosh, made plenty of people in the MS-DOS world, among them Bill Gates, very nervous.

One can view much of the history of the personal computer in the United States through the shifting profiles of Bill Gates and Steve Jobs, those two personalities who will always be most identified with a whole era of technology in the public imagination. Just a few years on from 1983, Jobs would be widely viewed as a has-been in his early thirties, a flighty hippie whom the adults who were now running Apple had wisely jettisoned; Gates, on the other hand, would be a darling of Wall Street well on the way to his reputation as the computer industry’s all-powerful Darth Vader. In 1983, however, the picture was very different. Jobs was still basking in the glory of having been one half — and by far the most charismatic half at that — of the pair of dreamers who had supposedly invented the personal computer in that famous California garage of theirs, while Gates was the obscure head of a rather faceless company whose importance was understood only by industry insiders. None could foresee the utter domination of virtually all personal computing that would soon be within Gates’s grasp. He was still balanced on the divide between his old way of doing business, as the head of an equal-opportunity purveyor of programming languages and other software to whoever cared to pay for them, and his new, as the supreme leader in the cause of one platform to rule them all under the banner of Microsoft.

This list of the top software companies of 1983 provides a fascinating snapshot of an industry in rapid transition. VisiCorp, which would easily have topped the list in any of the three previous years, has fallen back to number 5, already a spent force. Lotus, the spreadsheet-making rival responsible for their downfall, will take over the top spot in 1984 and remain there through 1986. The biggest company of all this year is the now-forgotten MicroPro, maker of WordStar, the most popular early word processor; they will be wiped out by WordPerfect, which doesn’t even make this list yet, within a year or two. Finally, note the number of home- and entertainment-software publishers which manage to sneak onto the bottom half of this list. In years to come, the business-software market will continue to explode so dramatically in contrast to a comparatively slow-growing home-computing software market as to make that a thing of the past.

So, Jobs still had the edge on Gates in lots of ways in 1983, and he wasn’t afraid to let him know. He expected Microsoft to support the Macintosh in the form of application software. Specifically, he expected them to provide a spreadsheet, a business-graphics application, and a database; they’d signed a contract to do so, and been provided with their first extremely crude prototype of the new machine in return, back in January of 1982. According to Mike Murray, the Mac’s first marketing manager, Jobs would call Gates up and hector him in a way that no one would have dared talk to the Bill Gates of just a few years later: “You get down here right now. I don’t care what your schedule says. I want you down here tomorrow morning at 8:30 and I want you to tell me exactly what you’re doing [for the Macintosh] at Microsoft.”

For his part, Gates was willing to play the role of Jobs’s good junior partner, just as he had played the same role so dutifully for IBM, but he never lost sight of the big picture. The fact was that when it came to business sense, the young Bill Gates was miles ahead of the young Steve Jobs. One can’t help but imagine him smiling to himself when Jobs lectured him on how he should forget about MS-DOS and the rest of the system-software business, how application software was where the money was really at. Gates knew something which Jobs had apparently yet to realize: if you control the operating system on people’s computers, you can potentially control everything.

Still, Jobs was aware enough of business realities to see an environment like the Interface Manager, available on commodity clone hardware much cheaper than the Macintosh, as a significant threat. He reminded Gates pointedly of language in the January 1982 contract between the two companies which prohibited Microsoft from using knowledge gained of the Macintosh in competing products for other platforms. Gates respectfully but firmly held his ground, not even precisely denying that insights gained from the Macintosh might find their way into the Interface Manager but rather saying that the “competing products” mentioned in the contract would naturally have to mean other spreadsheets, business-graphic applications, or databases — not full-fledged operating environments. Further, he pointed out, the restrictions only applied until January 1, 1984, or the first shipment of the Macintosh, whichever came first. By the time the Interface Manager was actually ready to sell, it would all be a moot point anyway.

It was at about this time that the Interface Manager became suddenly no longer the Interface Manager. The almost aggressively generic name of “Windows” was the brainchild of a new marketing manager named Rowland Hanson, who was just 31 years old when he came to Microsoft but had already left his stamp on such brands as Betty Crocker, Contadina, and Neutrogena. At his first interview with Bill Gates, the latter’s words immediately impressed him:

You know, the only difference between a dollar-an-ounce moisturizer and a forty-dollar-an-ounce moisturizer is in the consumer’s mind. There is no technical difference between moisturizers. We will technically be the best software. But if people don’t believe it or people don’t recognize it, it won’t matter. While we’re on the leading edge of technology, we also have to be creating the right perception about our products and our company, the right image.

Who would have thought that this schlubby-looking nerd understood so much about marketing? Having taken the interview on a lark, Hanson walked out of Gates’s office ready to help him create a new, slicker image for Microsoft. He knew nothing whatsoever about computers, but that didn’t matter. He hadn’t known anything about moisturizers either when he went to work for Neutrogena.

Hanson devised the approach to product branding that persists at Microsoft to this day. Each product’s name would be stripped down to its essence, creating the impression that it was the definitive — or the only — product of its type. The only ornamentation would be the Microsoft name, to make sure no one forgot who made it. Thus Multi-Tool Word, after just a few months on the market under that unwieldy name, now became simply Microsoft Word. If he had arrived just a little earlier, Hanson grumbled, he would have been able to make sure that Multiplan shipped as Microsoft Spreadsheet, and MS-DOS — the software that “tells the IBM PC how to think” in his new marketing line — would have had the first part of the abbreviation spelled out every single time: Microsoft DOS. Luckily, there was still time to save the next generation of Microsoft system software from the horrid name of Interface Manager. It should rather be known simply as Microsoft Windows. “It appeared there were going to be multiple systems like this on the market,” remembers Hanson. “Well, we wanted our name basically to define the generic.” Gates agreed, and one of the most enduring brands in the history of computing was born.

The Windows project had run hot and cold inside Microsoft over the past couple of years in the face of other pressing priorities. Now, though, Gates started pushing hard under the prompting of external developments. The Macintosh was scheduled to make its debut in January of 1984. Just as worryingly, VisiCorp planned to ship Visi On at last before 1983 was up, and had scheduled a big, much-anticipated unveiling of the final product for the Comdex business-computing trade show which would begin on November 23. Determined to avoid the impression that Microsoft was being left behind by the GUI arms race, and even more determined to steal VisiCorp’s thunder, Gates wanted a Windows unveiling before Comdex. To help accomplish that, he hired another refugee from Xerox named Scott MacGregor and put him in charge of the project’s technical architecture. At 26 years old, MacGregor was a little too young even by the early-blooming standards of hacker culture to have made a major contribution during the glory days of Xerox PARC, but he had done the next best thing: he had designed the windowing system for the Star office workstation, the only tangible commercial product Xerox themselves ever developed out of all the work done with mice and menus at PARC. Other Xerox veterans would soon join MacGregor on the Windows project, which spent the late summer and early autumn of 1983 in a mad scramble to upstage its various competitors.

On November 10, at a lavish event inside New York City’s posh Helmsley Palace Hotel, Microsoft officially announced Windows, saying it would be available for purchase by April of 1984 and that it would run on a computer without a hard drive and with as little as 192 K of memory — a stark contrast to Visi On’s minimum specification of a hard-drive-equipped 512 K machine. And, unlike under Visi On, all applications, even those not specifically written for Windows, would be able to run in the environment, at least after a fashion. “Misbehaved” programs, as Microsoft referred to what was actually the entirety of the MS-DOS application market at the time of the unveiling, could be started through Windows but would run in full-screen mode and not have access to its features; Windows would effectively shut down when the time came to run such an application, then start itself back up when the user exited. It wasn’t ideal, but it struck most people as an improvement on Visi On’s our-way-or-the-highway approach.

The dirty little secret hiding behind this very first demonstration of Windows was that the only actual Windows applications in existence at the time were a little paint program Microsoft’s programmers had put together and a few applets like a calendar, a calculator, and an extremely basic text editor. Microsoft had, they claimed, “commitments” from such big players as Lotus, Ashton-Tate, and Peachtree to port their vanilla MS-DOS applications to Windows, but the reality was that none of these took the form of much more than a vague promise and a handshake.

The work Bill Gates had been doing to line up support from the emerging community of clone makers was in plainer evidence. Microsoft could announce that no fewer than 23 of their current MS-DOS licensees had signed up to ship Windows on their machines as well, including names like Compaq, Data General, Hewlett-Packard, Radio Shack/Tandy, Wang, and Zenith. The only important licensee absent from the list was the biggest of them all, IBM — a fact which the business and technology press could hardly fail to notice. Yet the plan was, as Gates didn’t hesitate to declare, to have Windows on 90 percent of all MS-DOS machines by the end of 1984. Where did that leave IBM? Among the trailing 10 percent?

As it happened, Microsoft was still trying to get IBM onboard the Windows train. The day after the big rollout, Gates flew from New York to Boca Raton, Florida, where the division of IBM responsible for their microcomputers was located, and made another pitch. Perhaps he believed that the good press stemming from the previous day’s festivities, which was to be found in the business and technology sections of this day’s newspapers all over the country, would sway them. If so, he was disappointed. Once again, IBM was noncommittal in all senses of the adjective, alluding vaguely to a potential similar product of their own. Then, a few days after Gates left them, IBM announced that they would distribute Visi On through their dealer network. This move was several steps short of anointing it the only or the official GUI of the IBM PC, but it was nevertheless a blessing of a certain type, and far more than IBM had yet agreed to do for Windows. It was becoming abundantly clear that IBM was, at the very least, hedging their bets.

A week later, the Comdex show opened in Las Vegas, with the finished Visi On on public display for the first time. Just a few booths down from that spectacle, Microsoft, still determined to undermine Visi On’s debut, showed Windows as well. Indeed, Windows was everywhere at Comdex; “You couldn’t take a leak in Vegas without seeing a Windows sticker,” remembers one Microsoft executive. Yet the actual product behind all the hype was presented only in the most circumscribed way. Microsoft employees ran through a carefully scripted spiel inside the Windows booth, making sure the public got nowhere close to the controls of the half-finished (at best) piece of software.

Still, Microsoft had some clear advantages to point out when it came to Windows, and point them out they did. For one, there was the aforementioned ability to run — or at least to start — non-Windows applications within the environment. For another, true multitasking would be possible under Windows, claimed Microsoft, not just the concurrently open applications of Visi On. And it would be possible, they said, to write Windows programs on the selfsame Windows computer on which they would run, in contrast to the $20,000 minicomputer one had to buy to develop for Visi On. This led Microsoft to refer to Windows as the open GUI, a system carrying forward the original promise of the personal computer as an anything tool for ordinary users.

In the nuts and bolts of their interfaces as well, the two systems presented contrasting approaches. The Visi On interface strongly resembled something that might have been seen at Xerox PARC in the 1970s, but Windows betrayed the undeniable influence of Apple’s recent work on the Lisa and, as would later become clear, the Macintosh — not hugely surprising, given that Microsoft had been able to follow the step-by-step evolution of the latter since January of 1982, thanks to their privileged position as contracted application developers for the new machine. Windows already seemed to operate a bit more intuitively than the rather awkward Visi On; Microsoft already understood, as their competitor so plainly did not, that a mouse could be used for things other than single clicks.

In other ways, though, Windows was less impressive than Visi On, not to mention the Lisa and Macintosh. And one of these ways was, ironically given the new product’s name, the windows themselves. They weren’t allowed to overlap one another — at all. In what Microsoft spun as the “automatic window layout” feature, sizing one window would cause all of the others to resize and reposition themselves in response. Nor could you freely drag windows around the screen like you could on the Lisa and Macintosh. “It’s the metaphor of the neat desktop,” said Steve Ballmer, spinning like mad. Neat or not, this wasn’t quite the way most people expected a window manager to work — and yet Microsoft would stick with it for a well-nigh absurdly long time to come.

A Quick Tour of Windows as Shown at the 1983 Comdex Show


None other than Dan Bricklin of VisiCalc fame visited the November 1983 Comdex show with a camcorder. The footage he took is a precious historical document, not least in showing Windows in action as it existed at the time of these first public demonstrations. Much must still be surmised thanks to the shaky camerawork and the fact that the public was kept at arm’s length from a far-from-complete piece of software, but we’re very lucky Bricklin and his camcorder were there that day. We learn from his footage that Windows had progressed hugely since the screenshot shown earlier in this article, showing the clear influence of Apple’s Lisa and Macintosh interfaces.

Windows apparently boots up to a blank screen with a row of (non-draggable) icons at the bottom, each representing an installed application.

Here a text editor, a clock applet, and a paint program have been opened. Unlike in Visi On and Apple’s GUIs, windows cannot overlap one another. On the other hand, note that the menu bar has been moved to the top of the window, where we expect it to be today. On the other other hand, it appears that the menu still provides single-click options only, not drop-down lists of choices. Note how cluttered the two-line text-editor menu at the top is.

At the bottom of each window (just to the left of the mouse pointer in the photograph) is a set of widgets. From left, these are: minimize the window; maximize the window (minimizing all of the others in the process, since windows are not allowed to overlap one another); automatically “tile” the window with the others that are open (it’s not entirely clear how this worked); initiate a resize operation; close the window. Despite the presence of a resizing widget in this odd location, other video evidence suggests that it was already possible to size a window by dragging on its border. Whether one first had to click the resizing widget to initiate such an operation is, once again, unclear.

A scroll bar is in place, but it’s at the left rather than the right side of the window.

A few weeks after Comdex closed up shop, VisiCorp shipped Visi On, to cautiously positive press notices behind which lurked all of the concerns that would prove the product’s undoing: its high price; its high system requirements and slow performance even on a hot-rod machine; its lack of compatibility with vanilla MS-DOS applications; the huge hardware hurdle developers had to leap to make applications for the system. Bill Gates, in other words, needn’t worry himself overmuch on that front.

But a month after Visi On made its underwhelming debut, the Apple Macintosh made its overwhelming version of same in the form of that famous “1984” television advertisement, which aired to an audience of 96 million viewers during the third quarter of the Super Bowl. Two days later, when the new computer was introduced in a slightly more orderly way at De Anza College’s Flint Auditorium, Bill Gates was there to support his sometime friend, sometime rival Steve Jobs in the biggest moment of his busy life to date. Versions of Microsoft Multiplan and BASIC for the Macintosh, Gates could announce there, would be available from the day the new computer shipped.

The announcement of the Mac version of Microsoft BASIC at the ceremony marked one of the last gasps of the old Microsoft business model which dated back to the days of the Altair kit computer, when they would supply a BASIC as a matter of course for every new microcomputer to come down the pipe. [1]That said, it wasn’t quite the last gasp: Microsoft would also supply a BASIC for the Commodore Amiga, constituting the only piece of software they would ever develop for that machine, shortly after its release in 1985. But more important than the BASIC or even the Mac Multiplan was the mere fact that Microsoft was there at all in Flint Auditorium, getting their piece of the action. Bill Gates was doing what he always did, seeking to control those parts of the industry which he could and exploit those parts which he couldn’t. He didn’t know whether the Macintosh was destined to take over business computing entirely, as some were claiming, or whether its flaws, all too easily overlooked under the auditorium’s bright lights, would undermine its prospects in the end. Certainly those flaws were legion when you dug below the surface, including but not limited to a price which was, if vastly less than that of the Lisa, still far more than a typical MS-DOS machine; the lack of a hard drive; the straitened memory of just 128 K; the lack of amenability to expansion, which only exacerbated the previous three flaws; the lack of multitasking or even the ability to open concurrent programs; and an interface which corporate America might read as too friendly, crossing past the friend zone into cutesy and unbusinesslike. But what Bill Gates did know, with every bit as much certainty as Steve Jobs, was that the GUI in the abstract was the future of computing.

In June of 1984, with Windows having missed its release target of two months previous but still hopefully listed in Microsoft’s catalog as “coming soon,” Gates and Steve Ballmer wrote an internal memo which described in explicit, unvarnished detail their future strategy of playing the Macintosh and Windows off against one another:

Microsoft believes in the mouse and graphics as invaluable to the man-machine interface. We will bet on that belief by focusing new development on the two new environments with the mouse and graphics: Macintosh and Windows.

This also makes sense from a marketing perspective. Our focus will be on the business user, a customer who can afford the extra hardware expense of a mouse and high-resolution screen, and who will pay premium prices for quality easy-to-use software.

Microsoft will not invest significant development resources in new Apple II, MSX, CP/M-80, or character-based IBM PC applications. We will finish development and do a few enhancements to existing products.

Over the foreseeable future, our plan is to implement products first for the Mac and then to port them to Windows. We are taking care in the design of the Windows user interface to make this as easy as possible.

In his more unguarded moments, Gates would refer to Windows as “Mac on the [IBM] PC.”

Just one worrisome unknown continued to nag at him: what role would IBM play in his GUI-driven world of the future?

(Sources: the books The Making of Microsoft: How Bill Gates and His Team Created the World’s Most Successful Software Company by Daniel Ichbiah and Susan L. Knepper, Hard Drive: Bill Gates and the Making of the Microsoft Empire by James Wallace and Jim Erickson, Gates: How Microsoft’s Mogul Reinvented an Industry and Made Himself the Richest Man in America by Stephen Manes and Paul Andrews, and Apple Confidential 2.0: The Definitive History of the World’s Most Colorful Company by Owen W. Linzmayer; PC World of September 1983; InfoWorld of May 30 1983, November 21 1983, April 2 1984, October 21 1991, and November 20 1995; MacWorld of September 1991; Byte of December 1983. Finally, I owe a lot to Nathan Lineback for the histories, insights, comparisons, and images found at his wonderful online “GUI Gallery.”)



Doing Windows, Part 1: MS-DOS and Its Discontents

Has any successful piece of software ever deserved its success less than the benighted, unloved exercise in minimalism that was MS-DOS? The program that started its life as a stopgap under the name of “The Quick and Dirty Operating System” at a tiny, long-forgotten hardware maker called Seattle Computer Products remained a stopgap when it was purchased by Bill Gates of Microsoft and hastily licensed to IBM for their new personal computer. Archaic even when the IBM PC shipped in October of 1981, MS-DOS immediately sent half the software industry scurrying to come up with something better. Yet actually arriving at a viable replacement would absorb a decade’s worth of disappointment and disillusion, conflict and compromise — and even then the “replacement” would still have to be built on top of the quick-and-dirty operating system that just wouldn’t die.

This, then, is the story of that decade, and of how Microsoft at the end of it finally broke Windows into the mainstream.


When IBM belatedly turned their attention to the emerging microcomputer market in 1980, it was both a case of bold new approaches and business-as-usual. In the willingness they showed to work together with outside partners on the hardware and especially the software front, the IBM PC was a departure for them. In other ways, though, it was a continuation of a longstanding design philosophy.

With the introduction of the System/360 line of mainframes back in 1964, IBM had in many ways invented the notion of a computing platform: a nexus of computer models that could share hardware peripherals and that could all run the same software. To buy an IBM system thereafter wasn’t so much to buy a single computer as it was to buy into a rich computing ecosystem. Long before the saying went around corporate America that “no one ever got fired for buying Microsoft,” the same was said of IBM. When you contacted them, they sent a salesman or two out to discuss your needs, desires, and budget. Then, they tailored an installation to suit and set it up for you. You paid a bit more for an IBM, but you knew it was safe. System/360 models were available at prices ranging from $2500 per month to $115,000 per month, with the latter machine a thousand times more powerful than the former. Their systems were thus designed, as all their sales literature emphasized, to grow with you. When you needed more computer, you just contacted the mother ship again, and another dark-suited fellow came out to help you decide what your latest needs really were. With IBM, no sharp breaks ever came in the form of new models which were incompatible with the old, requiring you to remake from scratch all of the processes on which your business depended. Progress in terms of IBM computing was a gradual evolution, not a series of major, disruptive revolutions. Many a corporate purchasing manager loved them for the warm blanket of safety, security, and compatibility they provided. “Once a customer entered the circle of 360 users,” noted IBM’s president Thomas Watson Jr., “we knew we could keep him there a very long time.”

The same philosophy could be seen all over the IBM PC. Indeed, it would, as much as the IBM name itself, make the first general-purpose IBM microcomputer the accepted standard for business computing on the desktop, just as were their mainframe lines in the big corporate data centers. You could tell right away that the IBM PC was both built to last and built to grow along with you. Opening its big metal case revealed a long row of slots just waiting to be filled, thereby transforming it into exactly the computer you needed. You could buy an IBM PC with one or two floppy drives, or more, or none; with a color or a monochrome display card; with anywhere from 16 K to 256 K of RAM.

But the machine you configured at time of purchase was only the beginning. Both IBM and a thriving aftermarket industry would come to offer heaps more possibilities in the months and years that followed the release of the first IBM PC: hard drives, optical drives, better display cards, sound cards, ever larger RAM cards. And even when you finally did bite the bullet and buy a whole new machine with a faster processor, such as 1984’s PC/AT, said machine would still be able to run the same software as the old, just as its slots would still be able to accommodate hardware peripherals scavenged from the old.

Evolution rather than revolution. It worked out so well that the computer you have on your desk or in your carry-on bag today, whether you prefer Windows, OS X, or Linux, is a direct, lineal descendant of the microcomputer IBM released more than 35 years ago. Long after IBM themselves got out of the PC game, and long after sexier competitors like the Commodore Amiga and the first- and second-generation Apple Macintosh have fallen by the wayside, the beast they created shambles on. Its long life is not, as zealots of those other models don’t hesitate to point out, down to any intrinsic technical brilliance. It’s rather all down to the slow, steady virtues of openness, expandability, and continuity. The timeline of what’s become known as the “Wintel” architecture in personal computing contains not a single sharp break with the past, only incremental change that’s been carefully managed — sometimes even technologically compromised in comparison to what it might have been — so as not to break compatibility from one generation to the next.

That, anyway, is the story of the IBM PC on the hardware side, and a remarkable story it is. On the software side, however, the tale is more complicated, thanks to the failure of IBM to remember the full lesson of their own System/360.

At first glance, the story of the IBM PC on the software side seems to be just another example of IBM straining to offer a machine that can be made to suit every potential customer, from the casual home user dabbling in games and BASIC to the most rarefied corporate purchaser using it to run mission-critical applications. Thus when IBM announced the computer, four official software operating paradigms were also announced. One could use the erstwhile quick-and-dirty operating system that was now known as MS-DOS; [1]MS-DOS was known as PC-DOS when sold directly under license by IBM. Its functionality, however, was almost or entirely identical to the Microsoft-branded version. For simplicity’s sake, I will just refer to “MS-DOS” whenever speaking about either product — or, more commonly, both — in the course of this series of articles. one could use CP/M, the standard for much of pre-IBM business microcomputing, from which MS-DOS had borrowed rather, shall we say, extensively (remember the latter’s original name?); one could use an innovative cross-platform environment, developed by the University of California San Diego’s computer-science department, that was based around the programming language Pascal; or one could choose not to purchase any additional operating software at all, instead relying on the machine’s built-in ROM-hosted Microsoft BASIC environment, which wasn’t at all dissimilar from those the same company had already provided for many or most of the other microcomputers on the market.

In practice, though, this smorgasbord of possibilities only offered one remotely appetizing entree in the eyes of most users. The BASIC environment was really suited only to home users wanting to tinker with simple programs and save them on cassettes, a market IBM had imagined themselves entering with their first microcomputer but had in reality priced themselves out of. The UCSD Pascal system was ahead of its time with its focus on cross-platform interoperability, accomplished using a form of byte code that would later inspire the Java virtual machine, but it was also rather slow, resource-hungry, and, well, just kind of weird — and it was quite expensive as well. CP/M ought to have been poised for success on the new machine given its earlier dominance, but its parent company Digital Research was unconscionably late making it available for the IBM PC, taking until well after the machine’s October 1981 launch to get it ported from the Zilog Z-80 microprocessor to the Intel architecture of the IBM PC and its successor models — and when CP/M finally did appear it was, once again, expensive.

That left MS-DOS, which worked, was available, and was fairly cheap. As corporations rushed out to purchase the first safe business microcomputer at a pace even IBM had never anticipated, MS-DOS relegated the other three solutions to a footnote in computing history. Nobody’s favorite operating system, it was about to become the most popular one in the world.

The System/360 line that had made IBM the 800-pound gorilla of large-scale corporate data-processing had used an operating system developed in-house by them with an eye toward the future every bit as pronounced as that evinced by the same line’s hardware. The emerging IBM PC platform, on the other hand, had gotten only half of that equation down. MS-DOS was locked into the 1 MB address space of the Intel 8088, allowing any computer on which it ran just 640 K of RAM at the most. When newer Intel processors with larger address spaces began to appear in new IBM models as early as 1984, software and hardware makers and ordinary users alike would be forced to expend huge amounts of time and effort on ugly, inefficient hacks to get around the problem.
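
For those who like to see the arithmetic, the sketch below (my own illustration, not anything from the period) shows where the numbers come from: the 8088 generates 20-bit physical addresses, for 1 MB in all, and the IBM PC’s memory map reserves everything from address A0000 hexadecimal upward for video memory and ROMs, leaving the 640 K below it for MS-DOS and its applications.

/* A back-of-the-envelope illustration of the famous figures; not period
   code, just the arithmetic. */
#include <stdio.h>

int main(void)
{
    unsigned long address_space = 1UL << 20;   /* 20 address lines: 1,048,576 bytes (1 MB) */
    unsigned long video_start   = 0xA0000UL;   /* where the reserved video/ROM region begins */

    printf("total addressable memory: %lu bytes\n", address_space);
    printf("RAM available below the reserved region: %lu bytes (%lu K)\n",
           video_start, video_start / 1024);   /* 655,360 bytes = 640 K */
    return 0;
}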

Infamous though the 640 K barrier would become, memory was just one of the problems that would dog MS-DOS programmers throughout the operating system’s lifetime. True to its post-quick-and-dirty moniker of the Microsoft Disk Operating System, most of its 27 function calls involved reading and writing to disks. Otherwise, it allowed programmers to read the keyboard and put text on the screen — and not much of anything else. If you wanted to show graphics or play sounds, or even just send something to the printer, the only way to do it was to manually manipulate the underlying hardware. Here the huge amount of flexibility and expandability that had been designed into the IBM PC’s hardware architecture became a programmer’s nightmare. Let’s say you wanted to put some graphics on the screen. Well, a given machine might have an MDA monochrome video card or a CGA color card, or, soon enough, a monochrome Hercules card or a color EGA card. You the programmer had to build into your program a way of figuring out which one of these your host had, and then had to write code for dealing with each possibility on its own terms.
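
To give a flavor of what this meant in practice, here is a minimal sketch of the kind of card-sniffing a program had to do before it could safely poke at the screen on its own. It assumes a Borland-style 16-bit real-mode DOS compiler, whose dos.h supplies int86() and MK_FP(); the addresses are the standard ones for the text-mode adapters of the day, but the code itself is my own illustration rather than anything from a shipping program.

/* An illustrative sketch only: ask the BIOS what video hardware the machine
   booted with, then write a character straight into its text buffer,
   bypassing MS-DOS entirely. Assumes a Turbo C-style 16-bit DOS compiler. */
#include <dos.h>

void put_char_topleft(char c)
{
    union REGS r;
    unsigned int video_seg;
    char far *video;

    /* BIOS interrupt 11h returns the "equipment list" word in AX;
       bits 4-5 describe the video mode the machine started in. */
    int86(0x11, &r, &r);
    if (((r.x.ax >> 4) & 0x03) == 0x03)
        video_seg = 0xB000;   /* monochrome adapter (MDA/Hercules) text buffer */
    else
        video_seg = 0xB800;   /* color adapter (CGA, and later cards in text mode) */

    /* Text-mode video memory is an array of character/attribute pairs. */
    video = (char far *)MK_FP(video_seg, 0);
    video[0] = c;       /* character cell at row 0, column 0 */
    video[1] = 0x07;    /* attribute: light grey on black */
}

int main(void)
{
    put_char_topleft('A');   /* an 'A' appears in the screen's top-left corner */
    return 0;
}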

An example of how truly ridiculous things could get is provided by WordPerfect, the most popular business word processor from the mid-1980s on. WordPerfect Corporation maintained an entire staff of programmers whose sole job function was to devour the technical specifications and command protocols of each new printer that hit the market and write drivers for it. Their output took the form of an ever-growing pile of disks that had to be stuffed into every WordPerfect box, even though only one of them would be of any use to any given buyer. Meanwhile another department had to deal with the constant calls from customers who had purchased a printer for which they couldn’t find a driver on their extant mountain of disks, situations that could be remedied in the era before widespread telecommunications only by shipping off yet more disks. It made for one hell of a way to run a software business; at times the word processor itself could almost feel like an afterthought for WordPerfect Printer Drivers, Inc.

But the most glaringly obvious drawback to MS-DOS stared you in the face every time you turned on the computer and were greeted with that blinking, cryptic “C:\>” prompt. Hackers might have loved the command line, but it was a nightmare for a secretary or an executive who saw the computer only as an appliance. MS-DOS contrived to make everything more difficult through its sheer primitive minimalism. Think of the way you work with your computer today. You’re used to having several applications open at once, used to being able to move between them and cut and paste bits and pieces from one to the other as needed. With MS-DOS, you couldn’t do any of this. You could run just one application at a time, which would completely fill the screen. To do something else, you had to shut down the application you were currently using and start another. And if what you were hoping to do was to use something you had made in the first application inside the second, you could almost always forget about it; every application had its own proprietary data formats, and MS-DOS didn’t provide any method of its own of moving data from one to another.

Of course, the drawbacks of MS-DOS spelled opportunity for those able to offer ways to get around them. Thus Lotus Corporation became one of the biggest software success stories of the 1980s by making Lotus 1-2-3, an unwieldy colossus that integrated a spreadsheet, a database manager, and a graph- and chart-maker into a single application. People loved the thing, bloated though it was, because all of its parts could at least talk to one another.

Other solutions to the countless shortcomings of MS-DOS, equally inelegant and partial, were rampant by the time Lotus 1-2-3 hit it big. Various companies published various types of hacks to let users keep multiple applications resident in memory, switching between them using special arcane key sequences. Various companies discussed pacts to make interoperable file formats for data transfer between applications, although few of them got very far. Various companies made a cottage industry out of selling pre-packaged printer drivers to other developers for use in their applications. People wrote MS-DOS startup scripts that brought up easy-to-choose-from menus of common applications on bootup, thereby insulating timid secretaries and executives alike from the terrifying vagueness of the command line. And everybody seemed to be working a different angle when it came to getting around the 640 K barrier.

All of these bespoke solutions constituted a patchwork quilt which the individual user or IT manager would have to stitch together for herself in order to arrive at anything like a comprehensive remedy for MS-DOS’s failings. But other developers had grander plans, and much of their work quickly coalesced around various forms of the graphical user interface. Initially, this fixation may sound surprising if not inexplicable. A GUI built using a mouse, menus, icons, and windows would seem to fix only one of MS-DOS’s problems, that being its legendary user-unfriendliness. What about all the rest of its issues?

As it happens, when we look closer at what a GUI-based operating environment does and how it does it, we find that it must or at least ought to carry with it solutions to MS-DOS’s other issues as well. A windowed environment ideally allows multiple applications to be open at one time, if not actually running simultaneously. Being able to copy and paste pieces from one of those open applications to another requires interoperable data formats. Running or loading multiple applications also means that one of them can’t be allowed to root around in the machine’s innards indiscriminately, lest it damage the work of the others; this, then, must mark the end of the line for bare-metal programming, shifting the onus onto the system software to provide a proper layer of high-level function calls insulating applications from a machine’s actual or potential hardware. And GUIs, given that they need to do all of the above and more, are notoriously memory-hungry, which obligated many of those who made such products in the 1980s to find some way around MS-DOS’s memory constraints. So, a GUI environment proves to be much, much more than just a cutesy way of issuing commands to the computer. Implementing one on an IBM PC or one of its descendants meant that the quick-and-dirty minimalism of MS-DOS had to be chucked forever.
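
To see what that insulating layer amounts to in code, consider the toy sketch below. It is purely my own illustration, not the API of Windows, Visi On, or any other real environment: applications call a single high-level routine, the environment dispatches through a table of drivers chosen at startup, and no application ever touches the video hardware directly again.

/* A toy device-driver layer; my own invention, not any real environment's
   API. The environment picks a driver once, after probing the hardware,
   and applications only ever call the high-level routine at the bottom. */
#include <stdio.h>

typedef struct {
    const char *name;
    void (*put_char)(int row, int col, char c);
} video_driver;

/* Each driver hides its card's details (buffer addresses, port writes,
   and so on) behind the same function signature. Stubs here. */
static void mda_put_char(int row, int col, char c) { /* would write to the MDA text buffer */ }
static void cga_put_char(int row, int col, char c) { /* would write to the CGA text buffer */ }

static video_driver drivers[] = {
    { "MDA", mda_put_char },
    { "CGA", cga_put_char },
};

/* Chosen once by the environment at startup. */
static video_driver *active_driver = &drivers[1];

/* The only call an application ever makes; it neither knows nor cares
   which card is really installed. */
void env_put_char(int row, int col, char c)
{
    active_driver->put_char(row, col, c);
}

int main(void)
{
    env_put_char(0, 0, 'A');
    printf("drew a character via the %s driver\n", active_driver->name);
    return 0;
}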

Some casual histories of computing would have you believe that the entire software industry was rigidly fixated on the command line until Steve Jobs came along to show them a better way with the Apple Macintosh, whereupon they were dragged kicking and screaming into computing’s necessary future. Such histories generally do acknowledge that Jobs himself got the GUI religion after a visit to the Xerox Palo Alto Research Center in December of 1979, but what tends to get lost is the fact that he was hardly alone in viewing PARC’s user-interface innovations as the natural direction for computing to go in the more personal, friendlier era of high technology being ushered in by the microcomputer. Indeed, by 1981, two years before a GUI made its debut on an Apple product in the form of the Lisa, seemingly everyone was already talking about them, even if the acronym itself had yet to be invented. This is not meant to minimize the hugely important role Apple really would play in the evolution of the GUI; as we’ll see to a large extent in the course of this very series of articles, they did much original formative work that has made its way into the computer you’re probably using to read these words right now. It’s rather just to say that the complete picture of how the GUI made its way to the personal computer is, as tends to happen when you dig below the surface of any history, more variegated than a tidy narrative of “A caused B which caused C” allows for.

In that spirit, we can note that the project destined to create the MS-DOS world’s first GUI was begun at roughly the same time that a bored and disgruntled Steve Jobs over at Apple, having been booted off the Lisa project, seized control of something called the Macintosh, planned at the time as an inexpensive and user-friendly computer for the home. This other pioneering project in question, also started during the first quarter of 1981, was the work of a brief-lived titan of business software called VisiCorp.

VisiCorp had been founded by one Dan Fylstra under the name of Personal Software in 1978, at the very dawn of the microcomputer age, as one of the first full-service software publishers, trafficking mostly in games which were submitted to him by hobbyists. His company became known for their comparatively slick presentation in a milieu that was generally anything but; MicroChess, one of their first releases, was quite probably the first computer game ever to be packaged in a full-color box rather than a Ziploc baggie. But their course was changed dramatically the following year when a Harvard MBA student named Dan Bricklin contacted Fylstra with a proposal for a software tool that would let accountants and other businesspeople automate most of the laborious financial calculations they were accustomed to doing by hand. Fylstra was intrigued enough to lend the microcomputer-less Bricklin one of his own Apple IIs — whereupon, according to legend at least, the latter proceeded to invent the electronic spreadsheet over the course of a single weekend. He hired a more skilled programmer named Bob Frankston and formed a company called Software Arts to develop that rough prototype into a finished application, which Fylstra’s Personal Software published in October of 1979.

Up to that point, early microcomputers like the Apple II, Radio Shack TRS-80, and Commodore PET had been a hard sell as practical tools for business — even for their most seemingly obvious business application of all, that of word processing. Their screens could often only display 40 columns of big, blocky characters, often only in upper case — about as far away from the later GUI ideal of “what you see is what you get” as it was possible to go — while their user interfaces were arcane at best and their minuscule memories could only accommodate documents of a few pages in length. Most potential business users took one look at the situation, added on the steep price tag for it all, and turned back to their typewriters with a shrug.

VisiCalc, however, was different. It was so clearly, manifestly a better way to do accounting that every accountant Fylstra showed it to lit up like a child on Christmas morning, giggling with delight as she changed a number here or there and watched all of the other rows and columns update automagically. VisiCalc took off like nothing the young microcomputer industry had ever seen, landing tens of thousands of the strange little machines in corporate accounting departments. As the first tangible proof of what personal computing could mean to business, it prompted people to begin asking why IBM wasn’t a part of this new party, doing much to convince the latter to remedy that absence by making a microcomputer of their own. It’s thus no exaggeration to say that the entire industry of business-oriented personal computing was built on the proof of concept that was VisiCalc. It would sell 500,000 copies by January of 1983, an absolutely staggering figure for that time. Fylstra, seeing what was buttering his bread, eventually dropped all of the games and other hobbyist-oriented software from his catalog and reinvented Personal Software as VisiCorp, the first major publisher of personal-computer business applications.

But all was not quite as rosy as it seemed at the new VisiCorp. Almost from the moment of the name change, Dan Fylstra found his relationship with Dan Bricklin growing strained. The latter was suspicious of his publisher’s rebranding themselves in the image of his intellectual property, feeling they had been little more than the passive beneficiaries of his brilliant stroke. This point of view was by no means an entirely fair one. While it may have been true that Fylstra had been immensely lucky to get his hands on Bricklin’s once-in-a-lifetime innovation, he’d also made it possible by loaning Bricklin an Apple II in the first place, then done much to make VisiCalc palatable for corporate America through slick, professional packaging and marketing that projected exactly the right conservative, businesslike image, consciously eschewing the hippie ethos of the Homebrew Computer Club. Nevertheless, Bricklin, perhaps a bit drunk on all the praise of his genius, credited VisiCorp’s contribution to VisiCalc’s success but little. And so Fylstra, nervous about continuing to stake his entire company on Bricklin, set up an internal development team to create more products for the business market.

By the beginning of 1981, the IBM PC project which VisiCalc had done so much to prompt was in full swing, with the finished machine due to be released before the end of the year. Thanks to their status as publisher of the hottest application in business software, VisiCorp had been taken into IBM’s confidence, one of a select number of software developers and publishers given access to prototype hardware in order to have products ready to go on the day the new machine shipped. It seems that VisiCorp realized even at this early point how underwhelming the new machine’s various operating paradigms were likely to be, for even before they had actual IBM hardware to hand, they started mocking up the GUI environment that would become known as Visi On using Apple II and III machines. Already at this early date, it reflected a real, honest, fundamental attempt to craft a more workable model for personal computing than the nightmare that MS-DOS alone could be. William Coleman, the head of the development team, later stated in reference to the project’s founding goals that “we wanted users to be able to have multiple programs on the screen at one time, ease of learning and use, and simple transfer of data from one program to another.”

Visi On seemed to have huge potential. When VisiCorp demonstrated an early version, albeit far later than they had expected to be able to, at a trade show in December of 1982, Dan Fylstra remembers a rapturous reception, “competitors standing in front of [the] booth at the show, shaking their heads and wondering how the company had pulled the product off.” It was indeed an impressive coup; well before the Apple Macintosh or even Lisa had debuted, VisiCorp was showing off a full-fledged GUI environment running on hardware that had heretofore been considered suitable only for ugly old MS-DOS.

Still, actually bringing a GUI environment to market and making a success out of it was a much taller order than it might have first appeared. As even Apple would soon be learning to their chagrin, any such product trying to make a go of it within the increasingly MS-DOS-dominated culture of mainstream business computing ran headlong into a whole pile of problems which lacked clearly best solutions. Visi On, like almost all of the GUI products that would follow for the IBM hardware architecture, was built on top of MS-DOS, using the latter’s low-level function calls to manage disks and files. This meant that users could install it on their hard drive and pop between Visi On and vanilla MS-DOS as the need arose. But a much thornier question was that of running existing MS-DOS applications within the Visi On environment. Those which assumed they had full control of the system — which was practically all of them, because why wouldn’t they? — would flame out as soon as they tried to directly access some piece of hardware that was now controlled by Visi On, or tried to put something in some specific place inside what was now a shared pool of memory, or tried to do any number of other now-forbidden things. VisiCorp thus made the hard decision to not even try to get existing MS-DOS applications to run under Visi On. Software developers would have to make new, native applications for the system; Visi On would effectively be a new computing platform unto itself.

This decision was questionable in commercial if not technical terms, given how hard it must be to get a new platform accepted in an MS-DOS-dominated marketplace. But VisiCorp then proceeded to make the problem even worse. It would only be possible to program Visi On, they announced, after purchasing an expensive development kit and installing it on a $20,000 DEC PDP-11 minicomputer. They thus opted for an approach similar to one Apple was opting for with the Lisa: to allow that machine to be programmed only by yoking it up to a second Lisa. In thus betraying the original promise of the personal computer as an anything machine which ordinary users could program to do their will, both Visi On and the Lisa operating system arguably removed their hosting hardware from that category entirely, converting it into a closed electronic appliance more akin to a game console. Taxonomical debates aside, the barriers to entry even for one who wished merely to use Visi On to run store-bought applications were almost as steep: when this first MS-DOS-based GUI finally shipped on December 16, 1983, after a long series of postponements, it required a machine with 512 K of memory and a hard drive to run and cost more than $1000 to buy.

Visi On was, as the technology pundits like to say, “ahead of the hardware market.” In quite a number of ways it was actually far more ambitious than what would emerge a month or so after it as the Apple Macintosh. Multiple Visi On applications could be open at the same time (although they didn’t actually run concurrently), and a surprisingly sophisticated virtual-memory system was capable of swapping out pages to hard disk if software tried to allocate more memory than was physically available on the computer. Similar features wouldn’t reach MacOS until 1987’s System 5 and 1991’s System 7 respectively.

In the realm of usability, however, Visi On unquestionably fell down in comparison to Apple’s work. The user interfaces for the Lisa and the Macintosh made almost all the right choices right from the beginning, expanding upon the work done at Xerox PARC in all the right ways. Many of the choices made by VisiCorp, on the other hand, feel far more dubious today — and, one has to believe, not just out of the contempt bred by all those intervening decades of user interfaces modeled on Apple’s. Consider the task of moving and sizing windows on the screen, which was implemented so elegantly on the original Lisa and Macintosh that it’s been changed not at all in all the decades since. While Visi On too allows windows to be sized and placed where you will, and allows them to overlay one another — something by no means true of all of the MS-DOS GUI systems that would follow — doing so is a clumsy process involving picking options out of menus rather than simply dragging title bars or sizing widgets. In fact, Visi On uses no icons whatsoever. For anyone still enamored with the old saw that Apple just ripped off the Xerox PARC interface in its entirety and stuck it on the Lisa and Mac, Visi On, being much more slavishly based on the PARC model, provides an instructive demonstration of how far the likes of the Xerox Alto still were from the intuitive ease of Apple’s interface.

A Quick Tour of Visi On


With mice still exotic creatures, VisiCorp provided their own to work with Visi On. Many other early GUI-makers, Microsoft among them, would follow their lead.

Visi On looks like this upon booting up on an original IBM PC with 640 K of memory and a CGA video card, running in high-resolution monochrome mode at 640 X 200. “Services” is Visi On’s terminology for installed applications. The list of them which you see here, all provided by VisiCorp themselves, are the only ones that would ever exist, thanks to Visi On’s complete commercial failure.

We’ve started up a spreadsheet, a graphing application, and a word processor at the same time. These don’t actually run concurrently, as they would under a true multitasking operating system, but are visible onscreen in their separate windows, becoming active when we click them. (Something similar would not have been possible under MacOS prior to 1987.)

Although Visi On does sport windows that can be sized and placed anywhere and can overlap one another, arranging them is made extremely tedious by its lack of any concept of mouse-dragging; the mouse can only be used for single clicks. So, you have to click the “Frame” menu option and see its instructions through step by step. Note also the lack of pull-down menus, another of Apple’s expansions upon the work done at Xerox PARC. Menus here are just one-shot commands, akin to what a modern GUI user would call a button.

Fortunately, you can make a window full-screen with just a couple of clicks. Unfortunately, you then have to laboriously re-“Frame” it when you want to shrink it again; it doesn’t remember where it used to be.

The lack of a mouse-drag affordance makes the “Transfer” function — Visi On’s version of copy-and-paste — extremely tedious.

And, as with most things in Visi On, transferring data is also slow. Moving that little snippet of text from the word processor to the spreadsheet took about ten seconds.

On the plus side, Visi On sports a help system that’s crazily comprehensive for its time — much more so than the one that would ship with MacOS or, for that matter, Microsoft Windows for quite some years.

As if it didn’t have enough intrinsic problems working against it, extrinsic ones also contrived to undo Visi On in the marketplace. By the time it shipped, VisiCorp was a shadow of what they had so recently been. VisiCalc sales had collapsed over the past year, going from nearly 40,000 units in December of 1982 alone to fewer than 6000 units in December of 1983 in the face of competing products — most notably the burgeoning juggernaut Lotus 1-2-3 — and what VisiCorp described as Software Arts’s failure to provide “timely upgrades” amidst a relationship that was growing steadily more tense. With VisiCorp’s marketplace clout thus dissipating like air out of a balloon, it was hardly the ideal moment for them to ask for the sorts of commitments from users and developers required by Visi On.

The very first MS-DOS-based GUI struggled along with no uptake whatsoever for nine months or so; the only applications made for it were the word processor, spreadsheet, and graphing program VisiCorp made themselves. In September of 1984, with VisiCorp and Software Arts now embroiled in a court battle that would benefit only their competitors, the Visi On technology was sold to a veteran manufacturer of mainframes and supercomputers called Control Data Corporation, who proceeded to do very little if anything with it. VisiCorp went bankrupt soon after, while Lotus bought out Software Arts for a paltry $800,000, thus ending the most dramatic boom-and-bust tale of the early business-software industry. “VisiCorp’s auspicious climb and subsequent backslide,” wrote InfoWorld magazine, “will no doubt become a ‘how-not-to’ primer for software companies of the future.”

Visi On’s struggles may have been exacerbated by the sorry state of its parent company, but time would prove them to be by no means atypical of MS-DOS-based GUI systems in general.  Already in February of 1984, PC Magazine could point to at least four other GUIs of one sort or another in the works from other third-party developers: Concurrent CP/M with Windows by Digital Research, VisuALL by Trillian Computer Corporation, DesqView by Quarterdeck Office Systems, and WindowMaster by Structured Systems. All of these would make different choices in trying to balance the seemingly hopelessly competing priorities of reasonable speed and reasonable hardware requirements, compatibility with MS-DOS applications and compatibility with post-MS-DOS philosophies of computing. None would find the sweet spot. Neither they nor the still more GUI environments that followed them would be able to offer a combination of features, ease of use, and price that the market found compelling, so much so that by 1985 the whole field of MS-DOS GUIs was coming to be viewed with disdain by computer users who had been disappointed again and again. If you wanted a GUI, went the conventional wisdom, buy a Macintosh and live with the paltry software selection and the higher price. The mainstream of business computing, meanwhile, continued to truck along with creaky old MS-DOS, a shaky edifice made still more unstable by all of the hacks being grafted onto it to expand its memory model or to force it to load more than one application at a time. “Windowing and desktop environments are a solution looking for a problem,” said Robert Lefkowits, director of software services for Infocorp, in the fall of 1985. “Users aren’t really looking for any kind of windowing environment to solve problems. Users are not expressing a need or desire for it.”

The reason they weren’t, of course, was because they hadn’t yet seen a GUI in which the pleasure outweighed the pain. Entrenched as users were in the old way of doing things, accepting as they had become of all of MS-DOS’s discontents as simply the way computing was, it was up to software developers to show them why a GUI was something they had never known they couldn’t live without. Microsoft at least, the very people who had saddled their industry with the MS-DOS albatross, were smart enough to realize that mainstream business computing must be remade in the image of the much-scoffed-at Macintosh at some point. Further, they understood that it behooved them to do the remaking if they didn’t want to go the way of VisiCorp. By the time Lefkowits said his words, the long, winding tale of dogged perseverance in the face of failure and frustration that would become the story of Microsoft Windows had already been playing out for several years. One of these days, the GUI was going to make its breakthrough in one way or another, and it was going to do so with a Microsoft logo on its box — even if Bill Gates had to personally ram it down his customers’ throats.

(Sources: the books The Making of Microsoft: How Bill Gates and His Team Created the World’s Most Successful Software Company by Daniel Ichbiah and Susan L. Knepper and Computer Wars: The Fall of IBM and the Future of Global Technology by Charles H. Ferguson and Charles R. Morris; InfoWorld of October 31 1983, November 14 1983, April 2 1984, July 2 1984, and October 7 1985; Byte of June 1983, July 1983; PC Magazine of February 7 1984, and October 2 1984; the episode of the Computer Chronicles television program called “Integrated Software.” Finally, I owe a lot to Nathan Lineback for the histories, insights, comparisons, and images found at his wonderful online “GUI Gallery.”)

Footnotes
1 MS-DOS was known as PC-DOS when sold directly under license by IBM. Its functionality, however, was almost or entirely identical to the Microsoft-branded version. For simplicity’s sake, I will just refer to “MS-DOS” whenever speaking about either product — or, more commonly, both — in the course of this series of articles.
 


The 640 K Barrier

There was a demon in memory. They said whoever challenged him would lose. Their programs would lock up, their machines would crash, and all their data would disintegrate.

The demon lived at the hexadecimal memory address A0000, 655,360 in decimal, beyond which no more memory could be allocated. He lived behind a barrier beyond which they said no program could ever pass. They called it the 640 K barrier.

— with my apologies to The Right Stuff[1]Yes, that is quite possibly the nerdiest thing I’ve ever written.

The idea that the original IBM PC, the machine that made personal computing safe for corporate America, was a hastily slapped-together stopgap has been vastly overstated by popular technology pundits over the decades since its debut back in August of 1981. Whatever the realities of budgets and scheduling with which its makers had to contend, there was a coherent philosophy behind most of the choices they made that went well beyond “throw this thing together as quickly as possible and get it out there before all these smaller companies corner the market for themselves.” As a design, the IBM PC favored robustness, longevity, and expandability, all qualities IBM had learned the value of through their many years of experience providing businesses and governments with big-iron solutions to their most important data–processing needs. To appreciate the wisdom of IBM’s approach, we need only consider that today, long after the likes of the Commodore Amiga and the original Apple Macintosh architecture, whose owners so loved to mock IBM’s unimaginative beige boxes, have passed into history, most of our laptop and desktop computers — including modern Macs — can trace the origins of their hardware back to what that little team of unlikely business-suited visionaries accomplished in an IBM branch office in Boca Raton, Florida.

But of course no visionary has 20-20 vision. For all the strengths of the IBM PC, there was one area where all the jeering by owners of sexier machines felt particularly well-earned. Here lay a crippling weakness, born not so much of the hardware found in that first IBM PC as the operating system the marketplace chose to run on it, that would continue to vex programmers and ordinary users for two decades, not finally fading away until Microsoft’s release of Windows XP in 2001 put to bed the last legacies of MS-DOS in mainstream computing. MS-DOS, dubbed the “quick and dirty” operating system during the early days of its development, is likely the piece of software in computing history with the most lopsided contrast between the total number of hours put into its development and the total number of hours it spent in use, on millions and millions of computers all over the world. The 640 K barrier, the demon all those users spent so much time and energy battling for so many years, was just one of the more prominent consequences of corporate America’s adoption of such a blunt instrument as MS-DOS as its standard. Today we’ll unpack the problem that was memory management under MS-DOS, and we’ll also examine the problem’s multifarious solutions, all of them to one degree or another ugly and imperfect.


 

The original IBM PC was built around an Intel 8088 microprocessor, a cost-reduced and somewhat crippled version of an earlier chip called the 8086. (IBM’s decision to use the 8088 instead of the 8086 would have huge importance for the expansion buses of this and future machines, but the differences between the two chips aren’t important for our purposes today.) Despite functioning as a 16-bit chip in most ways, the 8088 had a 20-bit address space, meaning it could address a maximum of 1 MB of memory. Let’s consider why this limitation should exist.

Memory, whether in your brain or in your computer, is of no use to you if you can’t keep track of where you’ve put things so that you can retrieve them again later. A computer’s memory is therefore indexed by bytes, with every single byte having its own unique address. These addresses, numbered from 0 to the upper limit of the processor’s address space, allow the computer to keep track of what is stored where. Twenty bits can express 1,048,576 distinct values, enough to give every byte in 1 MB of memory its own unique address. Thus 1 MB is the maximum amount of memory which the 8088, with its 20-bit address bus, can handle. Such a limitation hardly felt like a deal breaker to the engineers who created the IBM PC. Indeed, it’s difficult to overemphasize what a huge figure 1 MB really was when they released the machine in 1981, in which year the top-of-the-line Apple II had just 48 K of memory and plenty of other competing machines shipped with no more than 16 K.

A processor needs to address other sorts of memory besides the pool of general-purpose RAM which is available for running applications. There’s also ROM memory — read-only memory, burned inviolably into chips — that contains essential low-level code needed for the computer to boot itself up, along with, in the case of the original IBM PC, an always-available implementation of the BASIC programming language. (The rarely used BASIC in ROM would be phased out of subsequent models.) And some areas of RAM as well are set aside from the general pool for special purposes, like the fully 128 K of addresses given to video cards to keep track of the onscreen display in the original IBM PC. All of these special types of memory must be accessed by the CPU, must be given their own unique addresses to facilitate that, and must thus be subtracted from the address space available to the general pool.

IBM’s engineers were quite generous in drawing the boundary between their general memory pool and the area of addresses allocated to special purposes. Focused on expandability and longevity as they were, they reserved big chunks of “special” memory for purposes that hadn’t even been imagined yet. In all, they reserved the upper three-eighths of the available addresses for specialized purposes actual or potential, leaving the lower five-eighths — 640 K — to the general pool. In time, this first 640 K of memory would become known as “conventional memory,” the remaining 384 K — some of which would be ROM rather than RAM — as “high memory.” The official memory map which IBM published upon the debut of the IBM PC looked like this:
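In rough outline, and sketched here from the figures just given rather than copied from IBM’s own diagram (the exact carve-up of the reserved region is approximate), the layout breaks down like this:

```
00000 - 9FFFF   640 K of "conventional" memory: the general-purpose pool
A0000 - BFFFF   128 K reserved for video-card memory
C0000 - EFFFF   192 K reserved for expansion ROMs and future hardware
F0000 - FFFFF    64 K of system ROM: the BIOS and the built-in BASIC
```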

It’s important to understand when looking at a memory map like this one that the existence of a logical address therein doesn’t necessarily mean that any physical memory is connected to that address in any given real machine. The first IBM PC, for instance, could be purchased with as little as 16 K of conventional memory installed, and even a top-of-the-line machine had just 256 K, leaving most of the conventional-memory space vacant. Similarly, early video cards used just 32 K or 64 K of the 128 K of address space offered to them in high memory. The 640 K barrier was thus only a theoretical limitation early on, one few early users or programmers ever even noticed.

That blissful state of affairs, however, wouldn’t last very long. As IBM’s creations — joined, soon enough, by lots of clones — became the standard for American business, more and more advanced applications appeared, craving more and more memory alongside more and more processing power. Already by 1984 the 640 K barrier had gone from a theoretical to a very real limitation, and customers were beginning to demand that IBM do something about it. In response, IBM that year released the PC/AT, built around Intel’s new 80286 microprocessor, which boasted a 24-bit address space good for 16 MB of memory. To unlock all that potential extra memory, IBM made the commonsense decision to extend the memory map above the specialized high-memory area that ended at 1 MB, making all addresses beyond 1 MB a single pool of “extended memory” available for general use.

Problem solved, right? Well, no, not really — else this would be a much shorter article. Due more to software than hardware, all of this potential extended memory proved not to be of much use for the vast majority of people who bought PC/ATs. To understand why this should be, we need to examine the deadly embrace between the new processor and the old operating system people were still running on it.

The 80286 was designed to be much more than just a faster version of the old 8086/8088. Developing the chip before IBM PCs running MS-DOS had come to dominate business computing, Intel hadn’t allowed the need to stay compatible with that configuration to keep them from designing a next-generation chip that would help to take computing to where they saw it as wanting to go. Intel believed that microcomputers were at the stage at which the big institutional machines had been a couple of decades earlier, just about ready to break free of what computer scientist Brian L. Stuart calls the “Triangle of Ones”: one user running one program at a time on one machine. At the very least, Intel believed, the second leg of the Triangle must soon fall; everyone recognized that multitasking — running several programs at a time and switching freely between them — was a much more efficient way to do complex work than laboriously shutting down and starting up application after application. But unfortunately for MS-DOS, the addition of multitasking complicates the life of an operating system to an absolutely staggering degree.

Operating systems are of course complex subjects worthy of years or a lifetime of study. We might, however, collapse their complexities down to a few fundamental functions: to provide an interface for the user to work with the computer and manage her programs and files; to manage the various tasks running on the computer and allocate resources among them; and to act as a buffer or interface between applications and the underlying hardware of the computer. That, anyway, is what we expect at a minimum of our operating systems today. But for a computer ensconced within the Triangle of Ones, the second and third functions were largely moot: with only one program allowed to run at a time, resource-management concerns were nonexistent, and, without the need for a program to be concerned about clashing with other programs running at the same time, bare-metal programming — manipulating the hardware directly, without passing requests through any intervening layer of operating-system calls — was often considered not only acceptable but the expected approach. In this spirit, MS-DOS provided just 27 function calls to programmers, the vast majority of them dealing only with disk and file management. (Compare that, my fellow programmers, with the modern Windows or OS X APIs!) For everything else, banging on the bare metal was fine.
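To make the contrast concrete, here is a minimal sketch, assuming a period DOS compiler such as Borland Turbo C, whose <dos.h> supplies union REGS, int86(), and MK_FP(). The first half asks MS-DOS to do something through one of its few function calls; the second pokes the hardware directly in the bare-metal style just described.

```c
/* A minimal sketch, assuming a period DOS compiler such as Borland
   Turbo C, whose <dos.h> supplies union REGS, int86(), and MK_FP(). */
#include <dos.h>

int main(void)
{
    union REGS r;

    /* The "official" route: ask MS-DOS (interrupt 21h, function 02h)
       to print a single character for us. */
    r.h.ah = 0x02;      /* DOS function: character output */
    r.h.dl = 'A';       /* character to print             */
    int86(0x21, &r, &r);

    /* The bare-metal route, equally common at the time: write straight
       into color-text video memory at segment B800h, bypassing DOS and
       the BIOS entirely. */
    {
        unsigned char far *screen = (unsigned char far *)MK_FP(0xB800, 0);
        screen[0] = 'B';   /* character in the top-left cell  */
        screen[1] = 0x07;  /* attribute: light grey on black  */
    }

    return 0;
}
```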

We can’t even begin here to address all of the complications that are introduced when we add multitasking into the equation, asking the operating system in the process to fully embrace all three of the core functions listed above. Memory management alone, the one aspect we will look deeper into today, becomes complicated enough. A program which is sharing a machine with other programs can no longer have free run of the memory map, placing whatever it wants to wherever it wants to; to do so risks overwriting the code or data of another program running on the system. Instead the operating system must demand that individual programs formally request the memory they’d like to use, and then must come up with a way to keep a program, whether due to bugs or malice, from running roughshod over areas of memory that it hasn’t been granted.

Or perhaps not. The Commodore Amiga, the platform which pioneered multitasking on personal computers in 1985, didn’t so much solve the latter part of this problem as punt it away. An application program is expected to request from the Amiga’s operating system any memory that it requires. The operating system then returns a pointer to a block of memory of the requested size, and trusts the application not to write to memory outside of these bounds. Yet nothing besides the programmer’s skill and good nature absolutely prevents such unauthorized memory access from happening. Every application on the Amiga, in other words, can write to any address in the machine’s memory, whether that address be properly allocated to it or not. Screen memory, free memory, another program’s data, another program’s code — all are fair game to the errant program. Such unauthorized memory access will almost always eventually result in a total system crash. A non-malicious programmer who wants her program to be a good citizen would of course never intentionally write to memory she hasn’t properly requested, but bugs of this nature are notoriously easy to create and notoriously hard to track down, and on the Amiga a single instance of one can bring down not only the offending program but the entire operating system. With all due respect to the Amiga’s importance as the first multitasking personal computer, this is obviously not the ideal way to implement it.
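A minimal sketch of that honor system, assuming a classic Amiga C environment with exec.library available through <proto/exec.h>, might look like this; the commented-out line is exactly the kind of stray write that nothing in the hardware can prevent.

```c
/* A minimal sketch of the honor system described above, assuming a
   classic Amiga C environment (exec.library via <proto/exec.h>). */
#include <exec/memory.h>
#include <proto/exec.h>

void honor_system_demo(void)
{
    /* Politely ask the operating system for a 1024-byte block. */
    UBYTE *block = (UBYTE *)AllocMem(1024, MEMF_PUBLIC | MEMF_CLEAR);
    if (block == NULL)
        return;                 /* request refused: no memory left */

    block[0] = 42;              /* writing inside the block: fine   */

    /* Nothing in the hardware stops this next line. It scribbles on
       memory the program never asked for, perhaps another program's
       data or the operating system's own, and sooner or later the
       whole machine comes down because of it. */
    /* block[999999] = 42; */

    FreeMem(block, 1024);       /* the size must be remembered, too */
}
```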

A far more sustainable approach is to take the extra step of tracking and protecting the memory that has been allocated to each program. Memory protection is usually accomplished using  what’s known as virtual memory: when a program requests memory, it’s returned not a true address within the system’s memory pool but rather a virtual address that’s translated back into the real address to which it corresponds every time the program accesses its data. Each program is thus effectively sandboxed from everything else, allowed to read from and write to only its own data. Only the lowest levels of the operating system have global access to the memory pool as a whole.
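As a toy illustration only, and not a description of how any real memory-management unit is wired, the bookkeeping performed on every access can be sketched like this:

```c
/* A toy illustration of the translation idea -- not how any real MMU is
   implemented, just the bookkeeping it performs on every memory access. */
#include <stddef.h>

#define PAGE_SIZE 4096u

/* One entry per virtual page owned by a given program. */
struct page_entry {
    unsigned long physical_page;  /* where the data really lives */
    int           present;       /* has this page been granted? */
};

/* Translate one program-relative (virtual) address into a real one.
   Returns 0 on a protection violation. */
unsigned long translate(const struct page_entry *table, size_t table_len,
                        unsigned long virtual_addr)
{
    unsigned long page   = virtual_addr / PAGE_SIZE;
    unsigned long offset = virtual_addr % PAGE_SIZE;

    if (page >= table_len || !table[page].present)
        return 0;   /* the program never asked for this memory: fault */

    return table[page].physical_page * PAGE_SIZE + offset;
}
```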

Implementing such memory protection in software alone, however, would have been an untenable drain on the limited hardware resources available in the 1980s — a fact which goes a long way toward explaining its absence from the Amiga. Intel therefore decided to give software a leg up via hardware. They built into the 80286 a memory-management unit that could automatically translate from virtual to real memory addresses and vice versa, making this constantly ongoing process fairly transparent even to the operating system.

Nevertheless, the operating system must know about this capability, must in fact be written very differently if it’s to run on a CPU with memory protection built into its circuitry. Intel recognized that it would take time for such operating systems to be created for the new chip, and recognized that compatibility with the earlier 8086/8088 chips would be a very good thing to have in the meantime. They therefore built two possible operating modes into the 80286. In “protected mode” — the mode they hoped would eventually come to be used almost universally — the chip’s full potential would be realized, including memory protection and the ability to address up to 16 MB of memory. In “real mode,” the 80286 would function essentially like a turbocharged 8086/8088, with no memory-protection capabilities and with the old limitation on addressable memory of 1 MB still in place. Assuming that in the early days at least the new chip would need to run on operating systems with no knowledge of its full capabilities, Intel made the 80286 default to real mode on startup. An operating system which did know about the 80286 and wanted to bring out its full potential could switch it to protected mode at boot-up and be off to the races.

It’s at the intersection between the 80286 and the operating system that Intel’s grand plans for the future of their new chip went awry. An overwhelming percentage of the early 80286s were used in IBM PC/ATs and clones, and an overwhelming percentage of those machines were running MS-DOS. Microsoft’s erstwhile “quick and dirty” operating system knew nothing of the 80286’s full capabilities. Worse, trying to give it knowledge of those capabilities would have to entail a complete rewrite which would break compatibility with all existing MS-DOS software. Yet the whole reason MS-DOS was popular in the first place — it certainly wasn’t because of a generous feature set, a friendly interface, or any aesthetic appeal — was that very same huge base of business software. Getting users to make the leap to some hypothetical new operating system in the absence of software to run on it would be as difficult as getting developers to write programs for an operating system with no users. It was a chicken-or-the-egg situation, and neither chicken nor egg was about to stick its neck out anytime soon.

IBM was soon shipping thousands upon thousands of PC/ATs every month, and the clone makers were soon shipping even more 80286-based machines of their own. Yet at least 95 percent of those machines were idling along at only a fraction of their potential, thanks to the already creakily archaic MS-DOS. For all these users, the old 640 K barrier remained as high as ever. They could stuff their machines full of extended memory if they liked, but they still couldn’t access it. And of course the multitasking that the 80286 was supposed to have enabled remained as foreign a concept to MS-DOS as a GPS unit to a Model T. The only solution IBM offered those who complained about the situation was to run another operating system. And indeed, there were a number of alternatives to MS-DOS available for the PC/AT and other 80286-based machines, including several variants of the old institutional-computing favorite Unix — one of them even from Microsoft — and new creations like Digital Research’s Concurrent DOS, which struggled with mixed results to wedge in some degree of MS-DOS compatibility. Still, the only surefire way to take full advantage of MS-DOS’s huge software base was to run the real — in more ways than one now! — MS-DOS, and this is what the vast majority of people with 80286-equipped machines wound up doing.

Meanwhile the very people making the software which kept MS-DOS the only viable choice for most users were feeling the pinch of being confined to 640 K more painfully almost by the month. Finally Lotus Corporation —  makers of the Lotus 1-2-3 spreadsheet package that ruled corporate America, the greatest single business-software success story of their era — decided to use their clout to do something about it. They convinced Intel to join them in devising a scheme for breaking the 640 K barrier without abandoning MS-DOS. What they came up with was one mother of an ugly kludge — a description the scheme has in common with virtually all efforts to break through the 640 K barrier.

Looking through the sparsely populated high-memory area which the designers of the original IBM PC had so generously carved out, Lotus and Intel realized it should be possible on almost any extant machine to identify a contiguous 64 K chunk of those addresses which wasn’t being used for anything. This chunk, they decided, would be the gateway to potentially many more megabytes installed elsewhere in the machine. Using a combination of software and hardware, they implemented what’s known as a bank-switching scheme. The 64 K chunk of high-memory addresses was divided into four segments of 16 K, each of which could serve as a lens focused on a 16 K page of additional memory installed on an expansion board, outside the 1 MB the processor could address directly. When the processor accessed the addresses in high memory, the data it actually read or wrote would be whatever section of that additional memory the corresponding lens was currently pointing to. The four lenses could be moved around at will, giving access, albeit in a roundabout way, to however much extra memory the user had installed. The additional memory unlocked by the scheme was dubbed “expanded memory.” The name’s unfortunate similarity to “extended memory” would cause much confusion over the years to come; from here on, we’ll call it by its common acronym of “EMS.”

All those gobs of extra memory wouldn’t quite come for free: applications would have to be altered to check for the existence of EMS memory and make use of it, and there would remain a distinct difference between conventional memory and EMS memory with which programmers would always have to reckon. Likewise, the overhead of constantly moving those little lenses around made EMS memory considerably slower to access than conventional memory. On the brighter side, though, EMS worked under MS-DOS with only the addition of a single device driver during startup. And, since the hardware mechanism for moving the lenses around was completely external to the CPU, it would even work on machines that weren’t equipped with the new 80286.
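In practice an application talked to the expanded-memory manager through interrupt 67h. The following is a minimal sketch of that conversation, with error handling omitted and a Turbo C-style compiler assumed; the function numbers (41h, 43h, 44h, 45h) are those of the Lotus/Intel/Microsoft specification.

```c
/* A minimal sketch of how an application drove the EMS "lenses" via the
   expanded-memory manager's interrupt 67h interface. Error handling is
   trimmed, and a Turbo C-style compiler (<dos.h>, far pointers) is
   assumed. */
#include <dos.h>

void ems_demo(void)
{
    union REGS r;
    unsigned frame_seg, handle;
    unsigned char far *frame;

    /* Where in high memory did the EMS driver place its 64 K window? */
    r.h.ah = 0x41;                    /* get page-frame segment        */
    int86(0x67, &r, &r);
    frame_seg = r.x.bx;

    /* Ask for four 16 K pages (64 K) of expanded memory. */
    r.h.ah = 0x43;
    r.x.bx = 4;
    int86(0x67, &r, &r);
    handle = r.x.dx;                  /* our claim ticket for the pages */

    /* Point the first physical "lens" (page 0 of the window) at logical
       page 0 of our allocation, then write through it. */
    r.h.ah = 0x44;
    r.h.al = 0;                       /* physical page within the frame */
    r.x.bx = 0;                       /* logical page within the handle */
    r.x.dx = handle;
    int86(0x67, &r, &r);

    frame = (unsigned char far *)MK_FP(frame_seg, 0);
    frame[0] = 0xAA;                  /* lands in expanded memory        */

    /* Give the pages back when finished. */
    r.h.ah = 0x45;
    r.x.dx = handle;
    int86(0x67, &r, &r);
}
```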

This diagram shows the different types of memory available on PCs of the mid-1980s. In blue, we see the original 1 MB memory map of the IBM PC. In green, we see a machine equipped with additional extended memory. And in orange we see a machine equipped with additional expanded memory.

Shortly before the scheme made its official debut at a COMDEX trade show in May of 1985, Lotus and Intel convinced a crucial third partner to come aboard: Microsoft. “It’s garbage! It’s a kludge!” said Bill Gates. “But we’re going to do it.” With the combined weight of Lotus, Intel, and Microsoft behind it, EMS took hold as the most practical way of breaking the 640 K barrier. Imperfect and kludgy though it was, software developers hurried to add support for EMS memory to whatever programs of theirs could practically make use of it, while hardware manufacturers rushed EMS memory boards onto the market. EMS may have been ugly, but it was here today and it worked.

At the same time that EMS was taking off, however, extended memory wasn’t going away. Some hardware makers — most notably IBM themselves — didn’t want any part of EMS’s ugliness. Software makers therefore continued to probe at the limits of machines equipped with extended memory, still looking for a way to get at it from within the confines of MS-DOS. What if they momentarily switched the 80286 into protected mode, just for as long as they needed to manipulate data in extended memory, then went back into real mode? It seemed like a reasonable idea — except that Intel, never anticipating that anyone would want to switch modes on the fly like this, had neglected to provide a way to switch an 80286 in protected mode back into real mode. So, proponents of extended memory had to come up with a kludge even uglier than the one that allowed EMS memory to function. They could force the 80286 back into real mode, they realized, by resetting it entirely, just as if the user had rebooted her computer. The 80286 would go through its self-check again — a process that admittedly absorbed precious milliseconds — and then pick back up where it left off. It was, as Microsoft’s Gordon Letwin memorably put it, like “turning off the car to change gears.” It was staggeringly kludgy, it was horribly inefficient, but it worked in its fashion. Given the inefficiencies involved, the scheme was mostly used to implement virtual disks stored in the extended memory, which wouldn’t be subject to the constant access of an application’s data space.

In 1986, the 32-bit 80386, Intel’s latest and greatest chip, made its public bow at the heart of the Compaq Deskpro 386 rather than an IBM machine, a landmark moment signaling the slow but steady shift of business computing’s power center from IBM to Microsoft and the clone makers using their operating system. While working on the new chip, Intel had had time to see how the 80286 was actually being used in the wild, and had faced the reality that MS-DOS was likely destined to be cobbled onto for years to come rather than replaced in its entirety with something better. They therefore made a simple but vitally important change to the 80386 amidst its more obvious improvements. In addition to being able to address an inconceivable total of 4 GB of memory in protected mode thanks to its 32-bit address space, the 80386 could be switched between protected mode and real mode on the fly if one desired, without needing to be constantly reset.

In freeing programmers from that massive inefficiency, the 80386 cracked open the door that much further to making practical use of extended memory in MS-DOS. In 1988, the old EMS consortium of Lotus, Intel, and Microsoft came together once again, this time with the addition to their ranks of the clone manufacturer AST; the absence of IBM is, once again, telling. Together they codified a standard approach to extended memory on 80386 and later processors, which corresponded essentially to the scheme I’ve already described in the context of the 80286, but with a simple command to the 80386 to switch back to real mode replacing the resets. They called it the eXtended Memory Specification; memory accessed in this way soon became known universally as “XMS” memory. Under XMS as under EMS, a new device driver would be loaded into MS-DOS. Ordinary real-mode programs could then call this driver to access extended memory; the driver would do the needful switching to protected mode, copy blocks of data from extended memory into conventional memory or vice versa, then switch the processor back to real mode when it was time to return control to the program. It was still inelegant, still a little inefficient, and still didn’t use the capabilities of Intel’s latest processors in anything like the way Intel’s engineers had intended them to be used; true multitasking still remained a pipe dream somewhere off in a shadowy future. Owners of sexier machines like the Macintosh and Amiga, in other words, still had plenty of reason to mock and scoff. In most circumstances, working with XMS memory was actually slower than working with EMS memory. The primary advantage of XMS was that it let programs work with much bigger chunks of non-conventional memory at one time than the four 16 K chunks that EMS allowed. Whether any given program chose EMS or XMS came to depend on which set of advantages and disadvantages best suited its purpose.
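For the curious, this is roughly how a real-mode program found the XMS driver in the first place; a minimal sketch, again assuming a Turbo C-style <dos.h>, and showing only the detection step, since everything after it goes through the driver’s far entry point with a function number in AH (08h to query free extended memory, 09h to allocate a block, and so on).

```c
/* A minimal sketch of sniffing for an XMS driver (HIMEM.SYS or similar)
   from real-mode C, assuming a Turbo C-style <dos.h>. */
#include <dos.h>

void (far *xms_entry)(void);   /* the driver's far entry point */

int xms_present(void)
{
    union REGS r;
    struct SREGS s;

    r.x.ax = 0x4300;                  /* XMS installation check        */
    int86(0x2F, &r, &r);
    if (r.h.al != 0x80)
        return 0;                     /* no XMS driver loaded          */

    segread(&s);
    r.x.ax = 0x4310;                  /* fetch the driver entry point  */
    int86x(0x2F, &r, &r, &s);
    xms_entry = (void (far *)(void))MK_FP(s.es, r.x.bx);
    return 1;
}
```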

The arrival of XMS along with the ongoing use of EMS memory meant that MS-DOS now had two competing memory-management solutions. Buyers now had to figure out not only whether they had enough extra memory to run a program but whether they had the right kind of extra memory. Ever accommodating, hardware manufacturers began shipping memory boards that could be configured as either EMS or XMS memory — whatever the application you were running at the moment happened to require.

The next stage in the slow crawl toward parity with other computing platforms in the realm of memory management would be the development of so-called “DOS extenders,” software to allow applications themselves to run in protected mode, thus giving them direct access to extended memory without having to pass their requests through an inefficient device driver. An application built using a DOS extender would only need to switch the processor to real mode when it needed to communicate with the operating system. The development of DOS extenders was driven by Microsoft’s efforts to turn Windows, which like seemingly everything else in business computing ran on top of MS-DOS, into a viable alternative to the command line and a viable challenger to the Macintosh. That story is thus best reserved for a future article, when we look more closely at Windows itself. As it is, the story that I’ve told so far today moves us nicely into the era of computer-gaming history we’ve reached on the blog in general.

In said era, the MS-DOS machines that had heretofore been reserved for business applications were coming into homes, where they were often used to play a new generation of games taking advantage of the VGA graphics, sound cards, and mice sported by the latest systems. Less positively, all of the people wanting to play these new games had to deal with the ramifications of a 640 K barrier that could still be skirted only imperfectly. As we’ve seen, both EMS and XMS imposed to one degree or another a performance penalty when accessing non-conventional memory. With games being the most performance-sensitive applications of all, that made the first 640 K of lightning-fast conventional memory especially precious to them.

In the first couple of years of MS-DOS’s gaming dominance, developers dealt with all of the issues that came attached to using memory beyond 640 K by the simple expedient of not using any memory beyond 640 K. But that solution was compatible neither with developers’ growing ambitions for their games nor with the gaming public’s growing expectations of them.

The first harbinger of what was to come was Origin Systems’s September 1990 release Wing Commander, which in its day was renowned — and more than a little feared — for pushing the contemporary state of the art in hardware to its limits. Even Wing Commander didn’t go so far as to absolutely require memory beyond 640 K, but it did use it to make the player’s audiovisual experience snazzier if it was present. Setting a precedent future games would largely follow, it was quite inflexible in its approach, demanding EMS — as opposed to XMS — memory. In the future, gamers would have to become all too familiar with the differences between the two standards, and how to configure their machines to use one or the other. Setting another precedent, Wing Commander’s “installation guide” included a section on “memory usage” that was required reading in order to get things working properly. In the future, such sections would only grow in length and complexity, and would need to be pored over by long-suffering gamers with far more concentrated attention than anything in the manual that actually explained how to play the games they had purchased.

In Accolade’s embarrassing Leisure Suit Larry knockoff Les Manley in: Lost in LA, the title character explains EMS and XMS memory to some nubile companions. The ironic thing was that anyone who wished to play the latest games on an MS-DOS machine really did need to know this stuff, or at least have a friend who did.

Thus began the period of almost a decade, remembered with chagrin but also often with an odd sort of nostalgia by old-timers today, in which gamers spent hours monkeying about with MS-DOS’s “config.sys” and “autoexec.bat” files and swapping in and out various third-party utilities in the hope of squeezing out those last few kilobytes of conventional memory that Game X needed to run. The techniques they came to employ were legion.

In the process of developing Windows, Microsoft had discovered that the kernel of MS-DOS itself, a fairly tiny program thanks to its sheer age, could be stashed into the first 64 K of memory beyond 1 MB and still accessed like conventional memory on an 80286 or later processor in real mode thanks to what was essentially an undocumented technical glitch in the design of those processors. Gamers thus learned to include the line “DOS=HIGH” in their configuration files, freeing up a precious block of conventional memory. Likewise, there was enough unused space scattered around in the 384 K of high memory on most machines to stash many or all of MS-DOS’s device drivers there instead of in conventional memory. Thus “DOS=HIGH” soon became “DOS=HIGH,UMB,” the second parameter telling the computer to make use of these so-called “upper-memory blocks” and thereby save that many kilobytes more.
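Put together, a CONFIG.SYS of the kind described above, assuming MS-DOS 5.0 or later with its bundled HIMEM.SYS and EMM386.EXE, and with the paths and extra drivers shown here purely as illustrations, might read:

```
REM Illustrative MS-DOS 5.0-era CONFIG.SYS; paths and drivers are examples only
DEVICE=C:\DOS\HIMEM.SYS
DEVICE=C:\DOS\EMM386.EXE RAM
DOS=HIGH,UMB
DEVICEHIGH=C:\DOS\ANSI.SYS
DEVICEHIGH=C:\MOUSE\MOUSE.SYS
FILES=30
BUFFERS=20
```

HIMEM.SYS provides XMS memory and the block just above 1 MB that DOS=HIGH stashes the kernel into, while EMM386.EXE’s RAM switch supplies both EMS emulation and the upper-memory blocks that DOS=HIGH,UMB and DEVICEHIGH rely on.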

These were the most basic techniques, the starting points. Suffice to say that things got a lot more complicated from there, turning into a baffling tangle of tweaks, some saving mere bytes rather than kilobytes of conventional memory, but all of them important if one was to hope to run games that by 1993 would be demanding 604 K of 640 K for their own use. That owners of machines which by that point typically contained memories in the multi-megabytes should have to squabble with the operating system over mere handfuls of bytes was made no less vexing by being so comically absurd. And every new game seemed to up the ante, seemed to demand that much more conventional memory. Those with a sunnier disposition or a more technical bent of mind took the struggle to get each successive purchase running as the game before the game got started, as it were. Everyone else gnashed their teeth and wondered for the umpteenth time if they might not have been better off buying a console where games Just Worked. The only thing that made it all worthwhile was the mixture of relief, pride, and satisfaction that ensued when you finally got it all put together just right and the title screen came up and the intro music sprang to life — if, that is, you’d managed to configure your sound card properly in the midst of all your other travails. Such was the life of the MS-DOS gamer.

Before leaving the issue of the 640 K barrier behind in exactly the way that all those afflicted by it for so many years were so conspicuously unable to do, we have to address Bill Gates’s famous claim, allegedly made at a trade show in 1981, that “640 K ought to be enough for anybody.” The quote has been bandied about for years as computer-industry legend, seeming to confirm as it does the stereotype of Bill Gates as the unimaginative dirty trickster of his industry, as opposed to Steve Jobs the guileless visionary (the truth is, needless to say, far more complicated). Sadly for the stereotypers, however, the story of the quote is similar to all too many legends in the sense that it almost certainly never happened. Gates himself, for one, vehemently denies ever having said any such thing. Fred Shapiro, for another, editor of The Yale Book of Quotations, conducted an exhaustive search for a reputable source for the quote in 2008, going so far as to issue a public plea in The New York Times for anyone possessing knowledge of such a source to contact him. More than a hundred people did so, but none of them could offer up the smoking gun Shapiro sought, and he was left more certain than ever that the comment was “apocryphal.” So, there you have it. Blame Bill Gates all you want for the creaky operating system that was the real root cause of all of the difficulties I’ve spent this article detailing, but don’t ever imagine he was stupid enough to say that. “No one involved in computers would ever say that a certain amount of memory is enough for all time,” said Gates in 2008. Anyone doubting the wisdom of that assertion need only glance at the history of the IBM PC.

(Sources: the books Upgrading and Repairing PCs, 3rd edition by Scott Mueller and Principles of Operating Systems by Brian L. Stuart; Computer Gaming World of June 1993; Byte of January 1982, November 1984, and March 1992; Byte‘s IBM PC special issues of Fall 1985 and Fall 1986; PC Magazine of May 14 1985, January 14 1986, May 30 1989, June 13 1989, and June 27 1989; the episode of the Computer Chronicles television show entitled “High Memory Management”; the online article “The ‘640K’ quote won’t go away — but did Gates really say it?” on Computerworld.)

Footnotes
1 Yes, that is quite possibly the nerdiest thing I’ve ever written.
 
 


A Slow-Motion Revolution

CD-ROM

A quick note on terminology before we get started: “CD-ROM” can be used to refer either to the use of CDs as a data-storage format for computers in general or to the Microsoft-sponsored specification for same. I’ll be using the term largely in the former sense in the introduction to this article, in the latter after something called “CD-I” enters the picture. I hope the point of transition won’t be too hard to identify, but my apologies if this leads to any confusion. Sometimes this language of ours is a very inexact thing.



In the first week of March 1986, much of the computer industry converged on Seattle for the first annual Microsoft CD-ROM Conference. Microsoft had anticipated about 500 to 600 attendees at the four-day event. Instead more than 1000 showed up, forcing the organizers to reject many of them at the door of a conference center that by law could only accommodate 800 people. Between the presentations on CD-ROM’s bright future, the attendees wandered through an exhibit hall showcasing the format’s capabilities. The hit of the hall was what was about to become the first CD-ROM product ever to be made available for sale to the public, consisting of the text of all 21 volumes of Grolier’s Academic American Encyclopedia, some 200 MB in all, on a single disc. It was to be published by KnowledgeSet, a spinoff of Digital Research. Digital’s founder Gary Kildall, apparently forgiving Bill Gates his earlier trespasses in snookering a vital IBM contract out from under his nose, gave the conference’s keynote address.

Kildall’s willingness to forgive and forget in light of the bright optical-storage future that stood before the computer industry seemed very much in harmony with the mood of the conference as a whole. Sentiments often verged on the utopian, with talk of a new “paperless society” abounding, a revolution to rival that of Gutenberg. “The compact disc represents a major discontinuity in the cost of producing and distributing information,” said one Ed Schmid of DEC. “You have to go back to the invention of movable type and the printing press to find something equivalent.” The enthusiasm was so intense and the good vibes among the participants — many of them, like Gates and Kildall, normally the bitterest of enemies — so marked that some came to call the conference “the computer industry’s Woodstock.” If the attendees couldn’t quite smell peace and love in the air, they certainly could smell potential and profit.

All the excitement came down to a single almost unbelievable number: the 650 MB of storage offered by every tiny, inexpensive-to-manufacture compact disc. It’s very, very difficult to fully convey in our current world of gigabytes and terabytes just how inconceivably huge a figure 650 MB actually was in 1986, a time when a 40 MB hard drive was a cavernous, how-can-I-ever-possibly-fill-this-thing luxury found on only the most high-end computers. For developers who had been used to making their projects fit onto floppy disks boasting less than 1 MB of space, the idea of CD-ROM sounded like winning the lottery several times over. You could put an entire 21-volume encyclopedia on one of the things, for Pete’s sake, and still have more than two-thirds of the space left over! Suddenly one of the most nail-biting constraints against which they had always labored would be… well, not so much eased as simply erased. After all, how could anything possibly fill 650 MB?

And just in case that wasn’t enough great news, there was also the fact that the CD was a read-only format. If the industry as a whole moved to CD-ROM as its format of choice, the whole piracy problem, which organizations like the Software Publishers Association ardently believed was costing it billions every year, would dry up and blow away like a dandelion in the fall. Small wonder that the mood at the conference sometimes approached evangelistic fervor. Microsoft, as swept away with it all as anyone, published a collection of the papers that were presented there under the very non-businesslike, non-Microsoft-like title of CD-ROM: The New Papyrus. The format just seemed to demand a touch of rhapsodic poetry.

But the rhapsody wasn’t destined to last very long. The promised land of a software industry built around the effectively unlimited storage capacity of the compact disc would prove infuriatingly difficult to reach; the process of doing so would stretch over the better part of a decade, by the end of which time the promised land wouldn’t seem quite so promising anymore. Throughout that stretch, CD-ROM was always coming in a year or two, always the next big thing right there on the horizon that never quite arrived. This situation, so antithetical to the usual propulsive pace of computer technology, was brought about partly by limitations of the format itself which were all too easy to overlook amid the optimism of that first conference, and partly by a unique combination of external factors that sometimes almost seemed to conspire, perfect-storm-like, to keep CD-ROM out of the hands of consumers.



The compact disc was developed as a format for music by a partnership of the Dutch electronics giant Philips and the Japanese Sony during the late 1970s. Unlike the earlier analog laser-disc format for the storage of video, itself a joint project of Philips and the American media conglomerate MCA, the CD stored information digitally, as long strings of ones and zeros to be passed through digital-to-analog converters and thus turned into rich stereo sound. Philips and Sony published the final specifications for the music CD in 1980, opening up to others who wished to license the technology what would become known as the “Red Book” standard after the color of the binder in which it was described. The first consumer-oriented CD players began to appear in Japan in 1982, in the rest of the world the following year. Confined at first to the high-end audiophile market, by the time of that first Microsoft CD-ROM Conference in 1986 the CD was already well on its way to overtaking the record album and, eventually, the cassette tape to become the most common format for music consumption all over the world.

There were good reasons for the CD’s soaring popularity. Not only did CDs sound better than at least all but the most expensive audiophile turntables, with a complete absence of hiss or surface noise, but, given that nothing actually touched the surface of a disc when it was being played, they could effectively last forever, no matter how many times you listened to them; “Perfect sound forever!” ran the tagline of an early CD advertising campaign. Then there was the way you could find any song you liked on a CD just by tapping a few buttons, as opposed to trying to drop a stylus on a record at just the right point or rewind and fast-forward a cassette to just the right spot. And then there was the way that CDs could be carried around and stored so much more easily than a record album, plus the way they could hold up to 75 minutes worth of music, enough to pack many double vinyl albums onto a single CD. Throw in the lack of a need to change sides to listen to a full album, and seldom has a new media format appeared that is so clearly better than the existing formats in almost all respects.

It didn’t take long for the computer industry to come to see the CD format, envisioned originally strictly as a music medium, as a natural one to extend to other types of data storage. Where the rubber met the road — or the laser met the platter — a CD player was just a mechanism for reading bits off the surface of the disc and sending them on to some other circuitry that knew what to do with them. This circuitry could just as easily be part of a computer as a stereo system.

Such a sanguine view was perhaps a bit overly reductionist. When one started really delving into the practicalities of the CD as a format for data storage, one found a number of limitations, almost all of them drawn directly from the technology’s original purpose as a music-delivery solution. For one thing, CD drives were only capable of reading data off a disc at a rate of 153.6 K per second, this figure corresponding not coincidentally to the speed required to stream standard CD sound for real-time playback. [1]The data on a music CD is actually read at a speed of approximately 172.3 K per second. The first CD-ROM drives had an effective reading speed that was slightly slower due to the need for additional error-correcting checksums in the raw data. Such a throughput was considered pretty good but hardly breathtaking by mid-1980s hard-disk standards; an average 10 MB hard drive of the period might have a transfer rate of about 96 K per second, although high-performance drives could triple or even quadruple that figure.
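Both figures fall straight out of the format’s arithmetic, assuming the standard sector layout in which each 2,352-byte sector of a data disc keeps 2,048 bytes for actual payload and spends the rest on synchronization, headers, and extra error correction:

```
44,100 samples/sec x 2 channels x 2 bytes  = 176,400 bytes/sec  (raw audio stream)
75 sectors/sec x 2,048 payload bytes       = 153,600 bytes/sec  (data read off a CD-ROM)
```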

More problematic was a CD drive’s atrocious seek speed — i.e., the speed at which files could be located for reading on a disc. An average 10 MB hard disk of 1986 had a typical seek time of about 100 milliseconds, a worst-case-scenario maximum of about 200 — although, again, high-performance models could improve on those figures by a factor of four. A CD drive, by contrast, had a typical seek time of 500 milliseconds, a maximum of 1000  — one full second. The designers of the music CD hadn’t been particularly concerned by the issue, for a music-CD player would spend the vast majority of its time reading linear streams of sound data. On those occasions when the user did request a certain track found deeper on the disc, even a full second spent by the drive in seeking her favorite song would hardly be noticed unduly, especially in comparison to the pain of trying to find something on a cassette or a record album. For storage of computer data, however, the slow seek speed gave far more cause for concern.

The Laser Magnetic Storage LaserDrive is typical of the oddball formats that proliferated during the early years of optical data storage. It could hold 1 GB on each side of a double-sided disc. Unfortunately, each disc cost hundreds of dollars, the unit itself thousands.

Given these issues of performance, which promised only to get more marked in comparison to hard drives as the latter continued to get faster, one might well ask why the industry was so determined to adapt the music CD specifically to data storage rather than using Philips and Sony’s work as a springboard to another optical format with affordances more suitable to the role. In fact, any number of companies did choose the latter course, developing optical formats in various configurations and capacities, many even offering the ability to write to as well as read from the disc. (Such units were called “WORM” drives, for “Write Once Read Many”; data, in other words, could be written to their discs, but not erased or rewritten thereafter.) But, manufactured in minuscule quantities as essentially bespoke items, all such drives were doomed to be extremely expensive.

The CD, on the other hand, had the advantage of an existing infrastructure dedicated to stamping out the little silver discs and filling them with data. At the moment, that data consisted almost exclusively of encoded music, but the process of making the discs didn’t care a whit what the ones and zeros being burned into them actually represented. CD-ROM would allow the computer industry to piggy-back on an extant, mature technology that was already nearing ubiquity. That was a huge advantage when set against the cost of developing a new format from scratch and setting up a similar infrastructure to turn it out in bulk — not to mention the challenge of getting the chaotic, hyper-competitive computer industry to agree on another format in the first place. For all these reasons, there was surprisingly little debate on whether adapting the music CD to the purpose of data storage was really the best way to go. For better or for worse, the industry hitched its wagon to the CD; its infelicities as a general-purpose data-storage solution would just have to be worked around.

One of the first problems to be confronted was the issue of a logical file format for CD-ROM. The physical layout of the bits on a data CD was largely dictated by the design of the platters themselves and the machinery used to burn data into them. Yet none of that existing infrastructure had anything to say about how a filesystem appropriate for use with a computer should work within that physical layout. Microsoft, understanding that a certain degree of inter-operability was a valuable thing to have even among the otherwise rival platforms that might wind up embracing CD-ROM, pushed early for a standardized logical format. As a preliminary step on the road to that landmark first CD-ROM Conference, they brought together a more intimate group of eleven other industry leaders at the High Sierra Resort and Casino in Lake Tahoe in November of 1985 to hash out a specification. Among those present were Philips, Sony, Apple, and DEC; notably absent was IBM, a clear sign of Microsoft’s growing determination to step out of the shadow of Big Blue and start dictating the direction of the industry in their own right. The so-called “High Sierra” format would be officially published in finalized form in May of 1986.

In the run-up to the first Microsoft CD-ROM Conference, then, everything seemed to be coming together nicely. CD-ROM had its problems, but virtually everyone agreed that it was a tremendously exciting development. For their part, Microsoft, driven by a Bill Gates who was personally passionate about the format and keenly aware that his company, the purveyor of clunky old MS-DOS, needed for reasons of public relations if nothing else a cutting-edge project to rival any of Apple’s, had established themselves as the driving force behind the nascent optical revolution. And then, just five days before the conference was scheduled to convene — timing that struck very few as accidental — Philips injected a seething ball of chaos into the system via something called CD-I.

CD-I was a different, competing file format for CD data storage. But CD-I was also much, much more. Excited by the success the music CD had enjoyed, Philips, with the tacit support of Sony, had decided to adapt the format into the all-singing, all-dancing, all-around future of home entertainment in the abstract. Philips would be making a CD-I box for the home, based on a minimalist operating system called OS-9 running on a Motorola 68000 processor. But this would be no typical home computer; the user would be able to control CD-I entirely using a VCR-style remote control. CD-I was envisioned as the interactive television of the future, a platform for not only conventional videogames but also lifestyle products of every description, from interactive astronomy lessons to the ultimate in exercise tapes. Philips certainly wasn’t short of ideas:

Think of owning an encyclopedia which presents chosen topics in several different ways. Watching a short audio/video sequence to gain a general background to the topic. Then choosing a word or subject for more in-depth study. Jumping to another topic without losing your place — and returning again after studying the related topic to proceed further. Or watching a cartoon film, concert, or opera with the interactive capabilities of CD-I added. Displaying the score, libretto, or text onscreen in a choice of languages. Or removing one singer or instrument to be able to sing along with the music.

Just as they had with the music CD, Philips would license the specifications to whoever else wanted to make gadgets of their own capable of playing the CD-I discs. They declared confidently that there would be as many CD-I players in the world as phonographs within a few years of the format’s debut, that “in the long run” CD-I “could be every bit as big as the CD-audio market.”

Already at the Microsoft CD-ROM Conference, Philips began aggressively courting developers in the existing computer-games industry to embrace CD-I. Plenty of them were more than happy to do so. Despite the optimism that dominated at the conference, it wasn’t clear how much priority Microsoft, who earned the vast majority of their money from business computing, would really give to more consumer-focused applications of CD-ROM like gaming. Philips, on the other hand, was a giant of consumer electronics. While they paid due lip service to applications of CD-I in areas like corporate training, it was always clear that it would be first and foremost a technology for the living room, one that comprehensively addressed what most believed was the biggest factor limiting the market for conventional computer games: that the machines that ran them were just too fiddly to operate. At the time that CD-I was first announced, the videogame console was almost universally regarded as a dead fad; the machine that would so dramatically reverse that conventional wisdom, the Nintendo Entertainment System, was still an oddball upstart being sold in selected markets only. Thus many game makers saw CD-I as their only viable route out of the back bedroom and into the living room — into the mainstream of home entertainment.

So, when Philips spoke, the game developers listened. Many publishers, including big powerhouses like Activision as well as smaller boutique houses like the 68000 specialists Aegis Development, committed to CD-I projects during 1986, receiving in return a copy of the closely guarded “Green Book” that detailed the inner workings of the system. There was no small pressure to get in on the action quickly, for Philips was promising to ship the first finished CD-I units in time for the Christmas of 1987. Trip Hawkins of Electronic Arts made CD-I a particular priority, forming a whole new in-house development division for the platform. He’d been waiting for a true next-generation mainstream game machine for years. At first, he’d thought the Commodore Amiga would be that machine, but Commodore’s clueless marketing and the Amiga’s high price were making such an outcome look less and less likely. So now he was looking to CD-I, which promised graphics and sound as good as those of the Amiga, along with the all but infinite storage of the unpirateable CD format, and all in a tidy, inexpensive package designed for the living room. What wasn’t to like? He imagined Silicon Valley becoming “the New Hollywood,” imagined a game like Electronic Arts’s hit Starflight remade as a CD-I experience.

You could actually do it just like a real movie. You could hire a costume designer from the movie business, and create special-effects costumes for the aliens. Then you’d videotape scenes with the aliens, and have somebody do a soundtrack for the voices and for the text that they speak in the game.

Then you’d digitize all of that. You could fill up all the space on the disc with animated aliens and interesting sounds. You would also have a universe that’s a lot more interesting to look at. You might have an out-of-the-cockpit view, like Star Trek, with planets that look like planets — rotating, with detailed zooms and that sort of thing.

Such a futuristic vision seemed thoroughly justifiable based on Philips’s CD-I hype, which promised a rich multimedia environment combining CD-quality stereo sound with full-motion video, all at a time when just displaying a photo-realistic still image captured from life on a computer screen was considered an amazing feat. (Among extant personal computers, only the Amiga could manage it.) When developers began to dive into the Green Book, however, they found the reality of CD-I often sharply at odds with the hype. For instance, if you decided to take advantage of the CD-quality audio, you had to tie up the CD drive entirely to stream it, meaning you couldn’t use it to fetch pictures or video or anything else for this supposed rich multimedia environment.

Video playback became an even bigger sore spot, one that echoed back to those fundamental limitations that had been baked into the CD when it was regarded only as a medium for music delivery. A transfer rate of barely 150 K per second just wasn’t much to work with in terms of streaming video. Developers found themselves stymied by an infuriating Catch-22. If you tried to work with an uncompressed or only modestly compressed video format, you simply couldn’t read it off the disc fast enough to display it in real time. Yet if you tried to use more advanced compression techniques, it became so expensive in terms of computation to decompress the data that the CD-I unit’s 68000 CPU couldn’t keep up. The best you could manage was to play video snippets that only filled a quarter of the screen — hardly a limitation compatible with the idea of CD-I as the future of home entertainment. It meant that a game like the old laser-disc-driven arcade favorite Dragon’s Lair, the very sort of thing people tended to think of first when you mentioned optical storage in the context of entertainment, would be impossible with CD-I. The developers who had signed contracts with Philips and committed major resources to CD-I could only soldier on and hope the technology would continue to evolve.

By 1987, then, the CD as a computer format had been split into two camps. While the games industry had embraced CD-I, the powers that were in business computing had jumped aboard the less ambitious, Microsoft-sponsored standard of CD-ROM, which solved issues like the problematic video playback of CD-I by the simple expedient of not having anything at all to say about them. Perhaps the most impressive of the very early CD-ROM products was the Microsoft Bookshelf, which combined Roget’s Thesaurus, The American Heritage Dictionary, The Chicago Manual of Style, The World Almanac and Book of Facts, and Bartlett’s Familiar Quotations alongside spelling and grammar checkers, a ZIP Code directory, and a collection of forms and form letters, all on a single disc — as fine a demonstration of the potential of the new format as could be imagined short of all that rich multimedia that Philips had promised. Microsoft proudly noted that Bookshelf was their largest single product ever in terms of the number of bits it contained and their smallest ever in physical size. Nevertheless, with most drives costing north of $1000 and products like Microsoft Bookshelf to use with them costing hundreds more, CD-ROM remained a pricey proposition found in vanishingly few homes — and for that matter not in all that many businesses either.

But at least actual products were available in CD-ROM format, which was more than could be said for CD-I. As 1986 turned into 1987, developers still hadn’t received any CD-I hardware at all, being forced to content themselves with printed specifications and examples of the system in action distributed on videotape by Philips. Particularly for a small company like Aegis, which had committed heavily to a game based on Jules Verne’s 20,000 Leagues Under the Sea, for which they had recruited Jim Sachs of Defender of the Crown fame as illustrator, it was turning into a potentially dangerous situation.

The computer industry — even those parts of it now more committed to CD-I than CD-ROM — dutifully came together once again for the second Microsoft CD-ROM Conference in March of 1987. In contrast to the unusual Pacific Northwest sunshine of the previous conference, the weather this year seemed to match the more unsettled mood: three days of torrential downpour. It was a more skeptical and decidedly less Woodstock-like audience who filed into the auditorium one day for a presentation by as unlikely a party as the venerable old American conglomerate General Electric. But in the course of that presentation, the old rapture came back in a hurry, culminating in a spontaneous standing ovation. What had so shocked and amazed the audience was the impossible made real: full-screen video running in real time off a CD drive connected to what to all appearances was an ordinary IBM PC/AT computer. Digital Video Interactive, or DVI, had just made its dramatic debut.

DVI’s origins dated back to 1983, when engineer Larry Ryan of another old-school American company, RCA, had been working on ways to make the old analog laser-disc technology more interactive. Growing frustrated with the limitations he kept bumping against, he proposed to his bosses that RCA dump the laser disc from the equation entirely and embrace digital optical storage. They agreed, and a new project on those lines was begun in 1984. It was still ongoing two years later — just reaching the prototype stage, in fact — when General Electric acquired RCA.

DVI worked by throwing specialized hardware at the problem which Philips had been fruitlessly trying to solve via software alone. By using ultra-intensive compression techniques, it was possible to crunch video playing at a resolution of 256 × 240 — not an overwhelming resolution even by the standards of the day, but not that far below the practical resolution of a typical television set either — down to a size below 153.6 K per second of footage without losing too much quality. This fact was fairly well-known, not least to Philips. The bottleneck had always been the cost of decompressing the footage fast enough to get it onto the screen in real time. DVI attacked this problem via a hardware add-on that consisted principally of a pair of semi-autonomous custom chips designed just for the task of decompressing the video stream as quickly as possible. DVI effectively transformed the potential 75 minutes of sound that could be stored on a CD into 75 minutes of video.
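
A little arithmetic makes clear just how heavy that compression had to be. The sketch below is purely illustrative; the frame rate and color depth are my own assumptions rather than figures from the DVI specifications, but they give a sense of the order of magnitude involved.

WIDTH, HEIGHT = 256, 240     # DVI's stated playback resolution
FPS = 30                     # assumed full-motion frame rate (my assumption)
BYTES_PER_PIXEL = 2          # assumed 16 bits per pixel (my assumption)

uncompressed_rate = WIDTH * HEIGHT * BYTES_PER_PIXEL * FPS   # bytes per second
cd_budget = 153_600                                          # roughly what a 1X drive delivers

print(f"Uncompressed video: about {uncompressed_rate / 1_000_000:.1f} MB per second")
print(f"CD transfer budget: about {cd_budget / 1_000:.1f} K per second")
print(f"Required compression ratio: roughly {uncompressed_rate / cd_budget:.0f} to 1")

# Squeezing video by a factor of a couple dozen in real time is what overwhelmed
# a general-purpose CPU of the era and called for DVI's dedicated chips.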

Philosophically, the design bore similarities to the Amiga’s custom chips — similarities which became even more striking when you considered some of the other capabilities that came almost as accidental byproducts of the design. You could, for instance, overlay conventional graphics onto the streaming video by using the computer’s normal display circuitry in conjunction with DVI, just as you could use an Amiga to overlay titles and other graphics onto a “genlocked” feed from a VCR or other video source. But the difference with DVI was that it required no complicated external video source at all, just a CD in the computer’s CD drive. The potential for games was obvious.

In this demonstration of DVI’s potential, the user can explore an ancient Mayan archeological site that’s depicted using real-world video footage, while the icons used as controls are traditional computer graphics.

Still, DVI’s dramatic debut had barely ended before the industry’s doubts began. It seemed clear enough that DVI was technically better than CD-I, at least in the hugely important area of video playback, but General Electric — hardly anyone’s idea of a nimble innovator — offered as yet no clear road map for the technology, no hint of what they really planned to do with it. Should game developers place their CD-I projects on hold to see if something better really was coming in the form of DVI, or should they charge full speed ahead and damn the torpedoes? Some did one, some did the other; some made halfhearted commitments to both technologies, some vacillated between them.

But worst of all was the effect that DVI had on Philips. That presentation threw them into a spin from which they never really recovered. Fearful of getting their clock cleaned in the marketplace by a General Electric product based on DVI, Philips stopped CD-I in its tracks, demanding that a way be found to make it do full-screen video as well. From an original plan to ship the first finished CD-I units in time for Christmas 1987, the timetable slipped to promise the first prototypes for developers by January of 1988. Then that deadline also came and went, and all that developers had received were software emulators. Now the development prototypes were promised by summer 1988, with finished units expected to ship in 1989. The delay notwithstanding, Philips still confidently predicted sales in “the tens of millions.” But then world domination was delayed again until 1990, then 1991.

Prototype CD-I units finally began reaching developers in early 1989, years behind schedule.

Wanting CD-I to offer the best of everything, Philips let the project chase its own tail for years, trying to address every actual or potential innovation from every actual or potential rival. The game publishers who had jumped aboard with such enthusiasm in the early days were wracked with doubt upon the announcement of each successive delay. Should they jump off the merry-go-round now and cut their losses, or should they stay the course in the hope that CD-I finally would turn into the revolutionary product Philips had been promising for so long? To this day, you merely have to mention CD-I to even the most mild-mannered old games-industry insider to be greeted with a torrent of invective. Philips’s merry-go-round cost the industry dearly. Some smaller developers who had trusted Philips enough to bet their very survival on CD-I paid the ultimate price. Aegis, for example, went out of business in 1990 with CD-I still vaporware.

While CD-I chased its tail, General Electric, the unwitting instigators of all this chaos, tried to decide in their slow, bureaucratic way what to do with this DVI thing they’d inherited. Thus things were as unsettled as ever on the CD-I and DVI fronts when the third Microsoft CD-ROM Conference convened in March of 1988. The old plain-Jane CD-ROM format, however, still seemed to be advancing slowly but steadily. Certainly Microsoft appeared to be in fine fettle; harking back to the downpour that had greeted the previous year’s conference, they passed out oversized gold umbrellas to everyone — emblazoned, naturally, with the Microsoft logo in huge type. They could announce at their conference that the High Sierra logical format for CD-ROM had been accepted, with some modest modifications to support languages other than English, by the International Organization for Standardization as something that would henceforward be known as “ISO 9660.” (It remains the standard logical format for CD-ROM to this day.) Meanwhile Philips and Sony were about to begrudgingly codify the physical format for CD-ROM, extant already as a de facto standard for several years now, as the Yellow Book, the latest addition to a library of binders that was turning into quite the rainbow. Apple, who had previously been resistant to CD-ROM, driven as it was by their arch-rival Microsoft, showed up with an official CD-ROM drive for a Macintosh or even an Apple II, albeit at a typically luxurious Apple price of $1200. Even IBM showed up for the conference this time, albeit with a single computer attached to a non-IBM CD-ROM drive and a carefully noncommittal official stance on all this optical evangelism.

As CD-ROM gathered momentum, the stories of DVI and CD-I alike were already beginning to peter out in anticlimax. After doing little with DVI for eighteen long months, General Electric finally sold it to Intel at the end of 1988, explaining that DVI just “didn’t mesh with [their] strategic plans.” Intel began shipping DVI setups to early adopters in 1989, but they cost a staggering $20,000 — a long, long way from a reasonable consumer price point. DVI continued to lurch along into the 1990s, but the price remained too high. Intel, possessed of no corporate tradition of marketing directly to consumers, often seemed little more motivated to turn DVI into a practical product than had been General Electric. Thus did the technology that had caused such a sensation and such disruption in 1987 gradually become yesterday’s news.

Ironically, we can lay the blame for the creeping irrelevance of DVI directly at the feet of the work for which Intel was best known. As Gordon Moore — himself an Intel man — had predicted decades before, the transistor counts of Intel’s most powerful microprocessors continued to double every two years or so, bringing corresponding gains in raw horsepower. This situation meant that the problem DVI addressed through all that specialized hardware — that of conventional general-purpose CPUs not having enough horsepower to decompress an ultra-compressed video stream fast enough — wasn’t long for this world. And meanwhile other engineers were attacking the problem from the other side, addressing the standard CD’s reading speed of just 153.6 K per second. They realized that by applying an integral multiplier to the timing of a CD drive’s circuitry, its reading (and seeking) speed could be increased correspondingly. Soon so-called “2X” drives began to appear, capable of reading data at just over 300 K per second, followed in time by “4X” drives, “8X” drives, and whatever unholy figure they’ve reached by today. These developments rendered all of the baroque circuitry of DVI pointless, a solution in search of a problem. Who needed all that complicated stuff?
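
For the curious, the arithmetic behind those “X” ratings is simple enough. The sketch below is mine, not anything taken from a drive specification; it just scales the original single-speed transfer rate by the advertised multiplier.

BASE_RATE = 153_600   # bytes per second at 1X, the original audio-CD timing

for multiplier in (1, 2, 4, 8):
    rate = BASE_RATE * multiplier
    print(f"{multiplier}X drive: {rate:,} bytes per second (about {rate / 1000:.1f} K per second)")

# A 2X drive thus reads just over 300 K per second, which, paired with
# ever-faster CPUs, was enough to make DVI's custom decompression hardware redundant.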

CD-I’s end was even more protracted and ignominious. The absurd wait eventually got to be too much for even the most loyal CD-I developers. One by one, they dropped their projects. It marked a major tipping point when in 1989 Electronic Arts, the most enthusiastic of all the software publishers in the early days of CD-I, closed down the department they had formed to develop for the platform, writing off millions of dollars on the aborted venture. In another telling sign of the times, Greg Riker, the manager of that department, left Electronic Arts to work for Microsoft on CD-ROM.

When CD-I finally trickled onto store shelves just a few weeks shy of Christmas 1991, it was able to display full-screen video of a sort but only in 128 colors, and was accompanied by an underwhelming selection of slapdash games and lifestyle products, most funded by Philips themselves, that were a far cry from those halcyon expectations of 1986. CD-I sales disappointed — immediately, consistently, and comprehensively. Philips, nothing if not persistent, beat the dead horse for some seven years before giving up at last, having sold only 1 million units in total, many of them at fire-sale discounts.

In the end, the big beneficiary of the endless CD-I/DVI standoff was CD-ROM, the simple, commonsense format that had made its public debut well before either of them. By 1993 or so, you didn’t need anything special to play video off a CD at a quality equal to or better than what had seemed so amazing in 1987; an up-to-date CPU combined with a 2X CD-ROM drive would do the job just fine. The Microsoft standard had won out. Funny how often that happened in the 1980s and 1990s, isn’t it?

Bill Gates’s reputation as a master Machiavellian being what it is, I’ve heard it suggested that the chaos and indecision which followed the public debut of DVI had been consciously engineered by him — that he had convinced a clueless General Electric to give that 1987 demonstration and later convinced Intel to keep DVI at least ostensibly alive, thus paralyzing Philips long enough for everyday PC hardware and vanilla CD-ROM to win the day, all the while knowing full well that DVI would never amount to anything. That sounds a little far-fetched to this writer, but who knows? Philips’s decision to announce CD-I five days before Microsoft’s CD-ROM Conference had clearly been a direct shot across Bill Gates’s bow, and such challenges did tend not to end well for the challenger. Anything else is, and must likely always remain, mere speculation.

(Sources: Amazing Computing of May 1986; Byte of May 1986, October 1986, April 1987, January 1989, May 1989, and December 1990; Commodore Magazine of November 1988; 68 Micro Journal of August/September 1989; Compute! of February 1987 and June 1988; Macworld of April 1988; ACE of September 1989, March 1990, and April 1990; The One of October 1988 and November 1988; Sierra On-Line’s newsletter of Autumn 1989; PC Magazine of April 29 1986; the premiere issue of AmigaWorld; episodes of the Computer Chronicles television series entitled “Optical Storage Devices,” “CD-ROMs,” and “Optical Storage”; the book CD-ROM: The New Papyrus from the Microsoft Press. Finally, my huge thanks to William Volk, late of Aegis and Mediagenic, for sharing his memories and impressions of the CD wars with me in an interview.)

Footnotes

1 The data on a music CD is actually read at a speed of approximately 172.3 K per second. The first CD-ROM drives had an effective reading speed that was slightly slower due to the need for additional error-correcting checksums in the raw data.
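
For anyone who wants to see those figures worked out, here is my own reconstruction of the arithmetic.

# Raw audio-CD rate: 44.1 kHz sampling, 16-bit samples, two channels.
audio_rate = 44_100 * 2 * 2            # 176,400 bytes per second
print(f"Audio CD:  about {audio_rate / 1024:.1f} K per second")          # ~172.3 K

# CD-ROM "Mode 1" keeps 2,048 user bytes of each 2,352-byte sector; the rest
# holds the extra error-correcting data mentioned above. A drive reads 75
# sectors per second at single speed.
data_rate = 2048 * 75                  # 153,600 bytes per second
print(f"CD-ROM 1X: {data_rate / 1024:.1f} K per second of user data")    # 150.0 K

# The "150 K" and "153.6 K" figures used in the article above describe this
# same rate in binary and decimal kilobytes respectively.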
 