

The 68000 Wars, Part 6: The Unraveling

Commodore International’s roots are in manufacturing, not computing. They’re used to making and selling all kinds of things, from calculators to watches to office furniture. Computers just happen to be a profitable sideline they stumbled into. Commodore International isn’t really a computer company; they’re a company that happens to make computers. They have no grand vision of computing in the future; Commodore International merely wants to make products that sell. Right now, the products they’re set up to sell happen to be computers and video games. Next year, they might be bicycles or fax machines.

The top execs at Commodore International don’t really understand why so many people love the Amiga. You see, to them, it’s as if customers were falling in love with a dishwasher or a brand of paper towels. Why would anybody love a computer? It’s just a chunk of hardware that we sell, fer cryin’ out loud.

— Amiga columnist “The Bandito,” Amazing Computing, January 1994

Commodore has never been known for their ability to arouse public support. They have never been noted for their ability to turn lemons into lemonade. But, they have been known to take a bad situation and create a disaster. At least there is some satisfaction in noting their consistency.

— Don Hicks, managing editor, Amazing Computing, July 1994

In the summer of 1992, loyal users of the Commodore Amiga finally heard a piece of news they had been anxiously awaiting for half a decade. At long last, Commodore was about to release new Amiga models sporting a whole new set of custom chips. The new models would, in other words, finally do more than tinker at the edges of the aging technology which had been created by the employees of little Amiga, Incorporated between 1982 and 1985. It was all way overdue, but, as they say, better late than never.

The story of just how this new chipset managed to arrive so astonishingly late is a classic Commodore tale of managerial incompetence, neglect, and greed, against which the company’s overtaxed, understaffed engineering teams could only hope to fight a rearguard action.



Commodore’s management had not woken up to the need to significantly improve the Amiga until the summer of 1988, after much of its technological lead over its contemporaries running MS-DOS and MacOS had already been squandered. Nevertheless, the engineers began with high hopes for what they called the “Advanced Amiga Architecture,” or the “AAA” chipset. It was to push the machine’s maximum screen resolution from 640 X 400 to 1024 X 768, while expanding its palette of available colors from 4096 to 16.8 million. The blitter and other pieces of custom hardware which made the machine so adept at 2D animation would be retained and vastly improved, even as the current implementation of planar graphics would be joined by “chunky-graphics” modes which were more suitable for 3D animation. Further, the new chips would, at the user’s discretion, do away with the flicker-prone interlaced video signal that the current Amiga used for vertical resolutions above 200 pixels, which made the machine ideal for desktop-video applications but annoyed the heck out of anyone trying to do anything else with it. And it would now boast eight instead of four separate sound channels, each of which would offer 16-bit resolution — i.e., the same quality as an audio CD.
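
To see why those “chunky-graphics” modes mattered so much for 3D work, consider the following minimal sketch in C. It is purely illustrative rather than anything Commodore shipped, and it assumes a hypothetical 320-pixel-wide, 256-color frame buffer: in a chunky layout a renderer can set a pixel with a single write, while in a planar layout the same pixel’s color bits are scattered across eight separate bitplanes.

```c
/* Illustrative sketch only: "chunky" versus planar pixel layouts.
   Assumes a hypothetical 320-pixel-wide, 256-color screen; none of this
   is actual Amiga system code. */

#include <stdint.h>

#define SCREEN_WIDTH 320

/* Chunky layout: one byte per pixel, so a renderer sets a pixel with a
   single store. This is what 3D engines want. */
static void put_pixel_chunky(uint8_t *screen, int x, int y, uint8_t color)
{
    screen[y * SCREEN_WIDTH + x] = color;
}

/* Planar layout: the 8 bits of the color index live in 8 separate
   bitplanes, one bit per plane, so setting a single pixel means
   read-modify-write operations on 8 different memory locations. */
static void put_pixel_planar(uint8_t *planes[8], int x, int y, uint8_t color)
{
    int     offset = y * (SCREEN_WIDTH / 8) + (x / 8);
    uint8_t mask   = (uint8_t)(0x80 >> (x % 8));

    for (int plane = 0; plane < 8; plane++) {
        if (color & (1 << plane))
            planes[plane][offset] |= mask;
        else
            planes[plane][offset] &= (uint8_t)~mask;
    }
}
```

Since a texture-mapped 3D scene redraws nearly every pixel of every frame, the planar arrangement, elegant as it was for 2D playfields and the blitter, became a real handicap.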

All of this was to be ready to go in a new Amiga model by the end of 1990. Had Commodore been able to meet that timetable and release said model at a reasonable price point, it would have marked almost as dramatic an advance over the current state of the art in multimedia personal computing as had the original Amiga 1000 back in 1985.

Sadly, though, no plan at Commodore could long survive contact with management’s fundamental cluelessness. By this point, the research-and-development budget had been slashed to barely half what it had been when the company’s only products were simple 8-bit computers like the Commodore 64. Often only one engineer at a time was assigned to each of the three core AAA chips, and said engineer was more often than not young and inexperienced, because who else would work 80-hour weeks at the salaries Commodore paid? Throw in a complete lack of day-to-day oversight or management coordination, and you had a recipe for endless wheel-spinning. AAA fell behind schedule, then fell further behind, then fell behind some more.

Some fifteen months after the AAA project had begun, Commodore started a second chip-set project, which they initially called “AA.” The designation was a baseball metaphor rather than an acronym; the “AAA” league in American baseball is the top division short of the major leagues, while the “AA” league is one rung further down. As the name would indicate, then, the AA chipset was envisioned as a more modest evolution of the Amiga’s architecture, an intermediate step between the original chipset and AAA. Like the latter, AA would offer 16.8 million colors — of which 256 could be onscreen at once without restrictions, more with some trade-offs —  but only at a maximum non-interlaced resolution of 640 X 480, or 800 X 600 in interlace mode. Meanwhile the current sound system would be left entirely alone. On paper, even these improvements moved the Amiga some distance beyond the existing Wintel VGA standard — but then again, that world of technology wasn’t standing still either. Much depended on getting the AA chips out quickly.

But “quick” was an adjective which seldom applied to Commodore. First planned for release on roughly the same time scale that had once been envisioned for the AAA chipset, AA too fell badly behind schedule, not least because the tiny engineering team was now forced to split their energies between the two projects. It wasn’t until the fall of 1992 that AA, now renamed to the “Advanced Graphics Architecture,” or “AGA,” made its belated appearance. That is to say, the stopgap solution to the Amiga’s encroaching obsolescence arrived fully two years after the comprehensive solution to the problem ought to have shipped. Such was life at Commodore.

Rather than putting the Amiga out in front of the competition, AGA at this late date could only move it into a position of rough parity with the majority of the so-called “Super VGA” graphics cards which had become fairly commonplace in the Wintel world over the preceding year or so. And with graphics technology evolving quickly in the consumer space to meet the demands of CD-ROM and full-motion video, even the Amiga’s parity wouldn’t last for long. The Amiga, the erstwhile pioneer of multimedia computing, was now undeniably playing catch-up against the rest of the industry.

The Amiga 4000

AGA arrived inside two new models which evoked immediate memories of the Amiga 2000 and 500 from 1987, the most successful products in the platform’s history. Like the old Amiga 2000, the new Amiga 4000 was the “professional” machine, shipping in a big case full of expansion slots, with 4 MB of standard memory, a large hard drive, and a 68040 processor running at 25 MHz, all for a street price of around $2600. Like the old Amiga 500, the Amiga 1200 was the “home” model, shipping in an all-in-one-case form factor without room for internal expansion, with 2 MB of standard memory, a floppy drive only, and a 68020 processor running at 14 MHz, all for a price of about $600.

The two models were concrete manifestations of what a geographically bifurcated computing platform the Amiga had become by the early 1990s. In effect, the Amiga 4000 was to be the new face of Amiga computing in North America; ditto the Amiga 1200 in Europe. Commodore would make only scattered, desultory attempts to sell each model outside of its natural market.



Although the Amiga 500 had once enjoyed some measure of success in the United States as a games machine and general-purpose home computer, those days were long gone by 1992. That year, MS-DOS and Windows accounted for 87 percent of all American computer-game sales and the Macintosh for 9 percent, while the Amiga was lumped rudely into the 4 percent labeled simply “other.” Small wonder that very few American games publishers still gave any consideration to the platform at all; what games were still available for the Amiga in North America usually had to be acquired by mail order, often as pricey imports from foreign climes. Then, too, most of the other areas where the Amiga had once been a player, and as often as not a pioneer — computer-based art and animation, 3D modeling, music production, etc. — had also fallen by the wayside, with most of the slack for such artsy endeavors being picked up by the Macintosh.

The story of Eric Graham was depressingly typical of the trend. Back in 1986, Graham had created a stunning ray-traced 3D animation called The Juggler on the Amiga; it became a staple of shop windows, filling at least some of the gap left by Commodore’s inept marketing. User demand had then led him to create Sculpt 3D, one of the first two practical 3D modeling applications for a consumer-class personal computer, and release it through the publisher Byte by Byte in mid-1987. (The other claimant to the status of absolute first of the breed ran on the Amiga as well; it was called Videoscape 3D, and was released virtually simultaneously with Sculpt 3D). But by 1989 the latest Macintosh models had also become powerful enough to support Graham’s software. Therefore Byte by Byte and Graham decided to jump to that platform, which already boasted a much larger user base who tended to be willing to pay higher prices for their software. Orphaned on the Amiga, the Sculpt 3D line continued on the Mac until 1996. Thanks to it and many other products, the Mac took over the lead in the burgeoning field of 3D modeling. And as went 3D modeling, so went a dozen other arenas of digital creativity.

The one place where the Amiga’s toehold did prove unshakeable was desktop video, where its otherwise loathed interlaced graphics modes were loved for the way they let the machine sync up with the analog video-production equipment typical of the time: televisions, VCRs, camcorders, etc. From the very beginning, both professionals and a fair number of dedicated amateurs used Amigas for titling, special effects, color correction, fades and wipes, and other forms of post-production work. Amigas were used by countless television stations to display programming information and do titling overlays, and found their way onto the sets of such television and film productions as Amazing Stories, Max Headroom, and Robocop 2. Even as the Amiga was fading in many other areas, video production on the platform got an enormous boost in December of 1990, when an innovative little Kansan company called NewTek released the Video Toaster, a combination of hardware and software which NewTek advertised, with less hyperbole than you might imagine, as an “all-in-one broadcast studio in a box” — just add one Amiga. Now the Amiga’s production credits got more impressive still: Babylon 5, seaQuest DSV, Quantum Leap, Jurassic Park, sports-arena Jumbotrons all over the country. Amiga models dating from the 1980s would remain fixtures in countless local television stations until well after the millennium, when the transition from analog to digital transmission finally forced their retirement.

Ironically, this whole usage scenario stemmed from what was essentially an accidental artifact of the Amiga’s design; Jay Miner, the original machine’s lead hardware designer, had envisioned its ability to mix and match with other video sources not as a means of inexpensive video post-production but rather as a way of overlaying interactive game graphics onto the output from an attached laser-disc accessory, a technique that was briefly en vogue in videogame arcades. Nonetheless, the capability was truly a godsend, the only thing keeping the platform alive at all in North America.

On the other hand, though, it was hard not to lament a straitening of the platform’s old spirit of expansive, experimental creativity across many fields. As far as the market was concerned, the Amiga was steadily morphing from a general-purpose computer into a piece of niche technology for a vertical market. By the early 1990s, most of the remaining North American Amiga magazines had become all but indistinguishable from any other dryly technical trade journal serving a rigidly specialized readership. In a telling sign of the times, it was almost universally agreed that early sales of the Amiga 4000, which were rather disappointing even by Commodore’s recent standards, were hampered by the fact that it initially didn’t work with the Video Toaster. (An updated “Video Toaster 4000” wouldn’t finally arrive until a year after the Amiga 4000 itself.) Many users now considered the Amiga little more than a necessary piece of plumbing for the Video Toaster. NewTek and others sold turnkey systems that barely even mentioned the name “Commodore Amiga.” Some store owners whispered that they could actually sell a lot more Video Toaster systems that way. After all, Commodore was known to most of their customers only as the company that had made those “toy computers” back in the 1980s; in the world of professional video and film production, that sort of name recognition was worth less than no name recognition at all.

In Europe, meanwhile, the nature of the baggage that came attached to the Commodore name perhaps wasn’t all that different in the broad strokes, but it did carry more positive overtones. In sheer number of units sold, the Amiga had always been vastly more successful in Europe than in North America, and that trend accelerated dramatically in the 1990s. Across the pond, it enjoyed all the mass-market acceptance it lacked in its home country. It was particularly dominant in Britain and West Germany, two of the three biggest economies in Europe. Here, the Amiga was nothing more nor less than a great games machine. The sturdy all-in-one-case design of the Amiga 500 and, now, the Amiga 1200 placed it on a continuum with the likes of the Sinclair Spectrum and the Commodore 64. There was a lot more money to be made selling computers to millions of eager European teenagers than to thousands of sober American professionals, whatever the wildly different price tags of the individual machines. Europe was accounting for as much as 88 percent of Commodore’s annual revenues by the early 1990s.

And yet here as well the picture was less rosy than it might first appear. While Commodore sold almost 2 million Amigas in Europe during 1991 and 1992, sales were trending in the wrong direction by the end of that period. Nintendo and Sega were now moving into Europe with their newest console systems, and Microsoft Windows as well was fast gaining traction. Thus it comes as no surprise that the Amiga 1200, which first shipped in December of 1992, a few months after the Amiga 4000, was greeted with sighs of relief by Commodore’s European subsidiaries, followed by much nervous trepidation. Was the new model enough of an improvement to reverse the trend of declining sales and steal a march on the competition once again? Sega especially had now become a major player in Europe, selling videogame consoles which were both cheaper and easier to operate than an Amiga 1200, lacking as they did any pretense of being full-fledged computers.

If Commodore was facing a murky future on two continents, they could take some consolation in the fact that their old arch-rival Atari was, as usual, even worse off. The Atari Lynx, the handheld game console which Jack Tramiel had bilked Epyx out of for a song, was now the company’s one reasonably reliable source of revenue; it would sell almost 3 million units between 1989 and 1994. The Atari ST, the computer line which had caused such a stir when it had beat the original Amiga into stores back in 1985 but had been playing second fiddle ever since, offered up its swansong in 1992 in the form of the Falcon, a would-be powerhouse which, as always, lagged just a little bit behind the latest Amiga models’ capabilities. Even in Europe, where Atari, like Commodore, had a much stronger brand than in North America, the Falcon sold hardly at all. The whole ST line would soon be discontinued, leaving it unclear how Atari intended to survive after the Lynx faded into history. Already in 1992, their annual sales fell to $127 million — barely a seventh of Commodore’s.

Still, everything was in flux, and it was an open question whether Commodore could continue to sell Amigas in Europe at the pace to which they had become accustomed. One persistent question dogging the Amiga 1200 was that of compatibility. Although the new chipset was designed to be as compatible as possible with the old, in practice many of the most popular old Amiga games didn’t work right on the new machine. This reality could give pause to any potential upgrader with a substantial library of existing games and no desire to keep two Amigas around the house. If your favorite old games weren’t going to work on the new machine anyway, why not try something completely different, like those much more robust and professional-looking Windows computers the local stores had just started selling, or one of those new-fangled videogame consoles?



The compatibility problems were emblematic of the way that the Amiga, while certainly not an antique like the 8-bit generation of computers, wasn’t entirely modern either. MacOS and Windows isolated their software environment from changing hardware by not allowing applications to have direct access to the bare metal of the computer; everything had to be passed through the operating system, which in turn relied on the drivers provided by hardware manufacturers to ensure that the same program worked the same way on a multitude of configurations. AmigaOS provided the same services, and its technical manuals likewise asked applications to function this way — but, crucially, it didn’t require that they do so. European game programmers in particular had a habit of using AmigaOS only as a bootstrap. Doing so was more efficient than doing things the “correct” way; most of the audiovisually striking games which made the Amiga’s reputation would have been simply impossible using “proper” programming techniques. Yet it created a brittle software ecosystem which was ill-suited for the long term. Already before 1992 Amiga gamers had had to contend with software that didn’t work on machines with more than 512 K of memory, or with a hard drive attached, or with or without a certain minor revision of the custom chips, or any number of other vagaries of configuration. With the advent of the AGA chipset, such problems really came home to roost.
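
As a concrete illustration of the two approaches, here is a minimal sketch, not drawn from any shipping game. Only the register address is historical: COLOR00, the background-color register of the original custom chipset, lives at $DFF180. The os_set_color() routine is a hypothetical stand-in for a real operating-system call such as graphics.library’s SetRGB4(), which is what a “proper” program would use so that the OS could mediate between the software and whatever display hardware was actually present.

```c
/* Illustrative sketch: poking the hardware directly versus asking the OS.
   Only the register address is historical (COLOR00 at $DFF180 on the
   original chipset); os_set_color() is a hypothetical stand-in for an
   AmigaOS routine such as graphics.library's SetRGB4(). */

#include <stdint.h>

#define COLOR00 ((volatile uint16_t *)0xDFF180) /* background-color register */

/* The fast-but-brittle route many games took: write straight into the
   custom chip, bypassing AmigaOS entirely. It only keeps working for as
   long as the chips underneath behave exactly like the ones the
   programmer tested against. */
static void set_background_bare_metal(uint16_t rgb4)
{
    *COLOR00 = rgb4;   /* e.g. 0x0F00 for pure red in 4-bits-per-gun RGB */
}

/* Hypothetical stub standing in for the real OS call, kept here only so
   the sketch is self-contained. */
static void os_set_color(void *viewport, int index, uint16_t rgb4)
{
    (void)viewport; (void)index; (void)rgb4;
    /* a real OS call would route the request through the display driver */
}

/* The "correct" route: hand the request to the operating system, which
   applies it to whatever display hardware is actually present. */
static void set_background_via_os(void *viewport, uint16_t rgb4)
{
    os_set_color(viewport, 0, rgb4);
}
```

A game that pokes the register directly runs faster, but it also breaks the moment the hardware underneath stops being an exact replica of the chips the programmer knew, which is precisely what happened with AGA.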

An American Amiga 1200. In the context of American computing circa 1992, it looked like a bizarrely antediluvian gadget. “Real” computers just weren’t sold in this style of all-in-one case anymore, with peripherals dangling messily off the side. It looked like a chintzy toy to Americans, whatever the capabilities hidden inside. A telling detail: notice the two blank keys on the keyboard, which were stamped with characters only in some continental European markets that needed them. Rather than tool up to produce more than one physical keyboard layout, Commodore just let them sit there in all their pointlessness on the American machines. Can you imagine Apple or IBM, or any other reputable computer maker, doing this? To Americans, and to an increasing number of Europeans as well, the Amiga 1200 just seemed… cheap.

Back in 1985, AmigaOS had been the very first consumer-oriented operating system to boast full-fledged preemptive multitasking, something that neither MacOS nor Microsoft Windows could yet lay claim to even in 1992; they were still forced to rely on cooperative multitasking, which placed them at the mercy of individual applications’ willingness to voluntarily cede time to others. Yet the usefulness of AmigaOS’s multitasking was limited by its lack of memory protection. Thanks to this lack, any individual program on the system could write, intentionally or unintentionally, all over the memory allocated to another; system crashes were a sad fact of life for the Amiga power user. AmigaOS also lacked a virtual-memory system that would have allowed more applications to run than the physical memory could support. In these respects and others — most notably its graphical interface, which still evinced nothing like the usability of Windows, much less the smooth elegance of the Macintosh desktop — AmigaOS lagged behind its rivals.
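
For readers unfamiliar with the distinction, here is a tiny illustrative sketch, not drawn from either operating system, of what cooperative scheduling amounts to in practice: a loop that can only move on when the current task chooses to hand control back. A preemptive kernel such as the one in AmigaOS instead uses a timer interrupt to take the processor away from a long-running task and share it out regardless.

```c
/* Illustrative sketch only: why cooperative multitasking is at the mercy
   of each application's good behavior. This is neither MacOS nor Windows
   code, just a toy scheduler. */

#include <stdio.h>

typedef void (*task_fn)(void);

static void polite_task(void)
{
    puts("task A: a little work done, yielding control");
    /* returning promptly is the voluntary "cede" cooperative schemes rely on */
}

static void greedy_task(void)
{
    puts("task B: starting a long computation...");
    for (volatile long i = 0; i < 50000000L; i++)
        ;  /* while this spins, no other task can run at all */
    puts("task B: done at last");
}

int main(void)
{
    task_fn tasks[] = { polite_task, greedy_task };

    /* A cooperative scheduler is essentially this loop: each task runs
       until it decides to hand control back. A preemptive scheduler would
       instead use a timer interrupt to reclaim the CPU. */
    for (int round = 0; round < 2; round++)
        for (size_t t = 0; t < sizeof tasks / sizeof tasks[0]; t++)
            tasks[t]();

    return 0;
}
```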

It is true that MacOS, dating as it did from roughly the same period in the evolution of the personal computer, was struggling with similar issues: trying to implement some form of multitasking where none at all had existed originally, kludging in support for virtual memory and some form of rudimentary memory protection. The difference was that MacOS was evolving, however imperfectly. While AmigaOS 3.0, which debuted with the AGA machines, did offer some welcome improvements in terms of cosmetics, it did nothing to address the operating system’s core failings. It’s doubtful whether anyone in Commodore’s upper management even knew enough about computers to realize they existed.

This quality of having one foot in each of two different computing eras dogged the platform in yet more ways. The very architectural approach of the Amiga — that of an ultra-efficient machine built around a set of tightly coupled custom chips — had become passé as Wintel and to a large extent even the Macintosh had embraced modular architectures where almost everything could live on swappable cards, letting users mix and match capabilities and upgrade their machines piecemeal rather than all at once. One might even say that it was down almost as much to the Amiga’s architectural philosophy as it was to Commodore’s incompetence that the machine had had such a devil of a time getting itself upgraded.

And yet the problems involved in upgrading the custom chips were as nothing compared to the gravest of all the existential threats facing the Amiga. It was common knowledge in the industry by 1992 that Motorola was winding down further development of the 68000 line, the CPUs at the heart of all Amigas. Indeed, Apple, whose Macintosh used the same CPUs, had seen the writing on the wall as early as the beginning of 1990, and had started projects to port MacOS to other architectures, using software emulation as a way to retain compatibility with legacy applications. In 1991, they settled on the PowerPC, a CPU developed by an unlikely consortium of Apple, IBM, and Motorola, as the future of the Macintosh. The first of the so-called “Power Macs” would debut in March of 1994. The whole transition would come to constitute one of the more remarkable sleights of hand in all computing history; the emulation layer combined with the ported version of MacOS would work so seamlessly that many users would never fully grasp what was really happening at all.

But Commodore, alas, was in no position to follow suit, even had they had the foresight to realize what a ticking time bomb the end of the 68000 line truly was. AmigaOS’s more easygoing attitude toward the software it enabled meant that any transition must be fraught with far more pain for the user, and Commodore had nothing like the resources Apple had to throw at the problem in any case. But of course, the thoroughgoing, eternal incompetence of Commodore’s management prevented them from even seeing the problem, much less doing anything about it. While everyone was obliviously rearranging the deck chairs on the S.S. Amiga, it was barreling down on an iceberg as wide as the horizon. The reality was that the Amiga as a computing platform now had a built-in expiration date. After the 68060, the planned swansong of the 68000 family, was released by Motorola in 1994, the Amiga would literally have nowhere left to go.

As it was, though, Commodore’s financial collapse became the more immediate cause that brought about the end of the Amiga as a vital computing platform. So, we should have a look at what happened to drive Commodore from a profitable enterprise, still flirting with annual revenues of $1 billion, to bankruptcy and dissolution in the span of less than two years.



During the early months of 1993, an initial batch of encouraging reports, stating that the Amiga 1200 had been well-received in Europe, was overshadowed by an indelibly Commodorian tale of making lemons out of lemonade. It turned out that they had announced the Amiga 1200 too early, then shipped it too late and in too small quantities the previous Christmas. Consumers had chosen to forgo the Amiga 500 and 600 — the latter being a recently introduced ultra-low-end model — out of the not-unreasonable belief that the generation of Amiga technology these models represented would soon be hopelessly obsolete. Finding the new model unavailable, they’d bought nothing at all — or, more likely, bought something from a competitor like Sega instead. The result was a disastrous Christmas season: Commodore didn’t yet have the new computers everybody wanted, and couldn’t sell the old computers they did have, which they’d inexplicably manufactured and stockpiled as if they’d had no inkling that the Amiga 1200 was coming. They lost $21.9 million in the quarter that was traditionally their strongest by far.

Scant supplies of the Amiga 1200 continued to devastate overall Amiga sales well after Christmas, leaving one to wonder why on earth Commodore hadn’t tooled up to manufacture sufficient quantities of a machine they had hoped and believed would be a big hit, for once with some justification. In the first quarter of 1993, Commodore lost a whopping $177.6 million on sales of just $120.9 million, thanks to a massive write-down of their inventory of older Amiga models piling up in warehouses. Unit sales of Amigas dropped by 25 percent from the same quarter of the previous year; Amiga revenues dropped by 45 percent, thanks to deep price cuts instituted to try to move all those moribund 500s and 600s. Commodore’s share price plunged to around $2.75, down from $11 a year before, $20 the year before that. Wall Street estimated that the whole company was now worth only $30 million after its liabilities had been subtracted — a drop of one full order of magnitude in the span of a year.

If anything, Wall Street’s valuation was generous. Commodore was now dragging behind them $115.3 million in debt. In light of this, combined with their longstanding reputation for being ethically challenged in all sorts of ways, the credit agencies considered them to be too poor a risk for any new loans. Already they were desperately pursuing “debt restructuring” with their major lenders, promising them, with little to back it up, that the next Christmas season would make everything right as rain again. Such hopes looked even more unfounded in light of the fact that Commodore was now making deep cuts to their engineering and marketing staffs — i.e., the only people who might be able to get them out of this mess. Certainly the AAA chipset, still officially an ongoing project, looked farther away than ever now, two and a half years after it was supposed to have hit the streets.

The Amiga CD32

It was for these reasons that the announcement of the Amiga CD32, the last major product introduction in Commodore’s long and checkered history, came as such a surprise to everyone in mid-1993. CD32 was perhaps the first comprehensively, unadulteratedly smart product Commodore had released since the Amiga 2000 and 500 back in 1987. It was as clever a leveraging of their dwindling assets as anyone could ask for. Rather than another computer, it was a games console, built around a double-speed CD-ROM drive. Commodore had tried something similar before, with the CDTV unit of two and a half years earlier, only to watch it go down in flames. For once, though, they had learned from their mistakes.

CDTV had been a member of an amorphously defined class of CD-based multimedia appliances for the living room — see also the Philips CD-I, the 3DO console, and the Tandy VIS — which had all been or would soon become dismal failures. Upon seeing such gadgets demonstrated, consumers, unimpressed by ponderous encyclopedias that were harder to use and less complete than the ones already on their bookshelves, trip-planning software that was less intuitive than the atlases already in their glove boxes, and grainy film clips that made their VCRs look high-fidelity, all gave vent to the same plaintive question: “But what good is it really?” The only CD-based console which was doing well was, not coincidentally, the only one which could give a clear, unequivocal answer to this question. The Sega Genesis with CD add-on was good for playing videogames, full stop.

Commodore followed Sega’s lead with the CD32. It too looked, acted, and played like a videogame console — the most impressive one on the market, with specifications far outshining the Sega CD. Whereas Commodore had deliberately obscured the CDTV’s technological connection to the Amiga, they trumpeted it with the CD32, capitalizing on the name’s association with superb games. At heart, the CD32 was just a repackaged Amiga 1200, in the same way that the CDTV had been a repackaged Amiga 500. Yet this wasn’t a problem at all. For all that it was a little underwhelming in the world of computers, the AGA chipset was audiovisually superior to anything the console world could offer up, while the 32-bit 68020 that served as the CD32’s brain gave it much more raw horsepower. Meanwhile the fact that at heart it was just an Amiga in a new form factor gave it a huge leg up with publishers and developers; almost any given Amiga game could be ported to the CD32 in a week or two. Throw in a price tag of less than $400 (about $200 less than the going price of a “real” Amiga 1200, if you could find one), and, for the first time in years, Commodore had a thoroughly compelling new product, with a measure of natural appeal to people who weren’t already members of the Amiga cult. Thanks to the walled-garden model of software distribution that was the norm in the console world, Commodore stood to make money not only on every CD32 sold but also from a licensing fee of $3 on every individual game sold for the console. If the CD32 really took off, it could turn into one heck of a cash cow. If only Commodore could have released it six months earlier, or have managed to remain financially solvent for six months longer, it might even have saved them.

As it was, the CD32 made a noble last stand for a company that had long made ignobility its calling card. Released in September of 1993 in Europe, it generated some real excitement, thanks not least to a surprisingly large stable of launch titles, fruit of that ease of porting games from the “real” Amiga models. Commodore sold CD32s as fast as they could make them that Christmas — which was unfortunately nowhere near as fast as they might have liked, thanks to their current financial straits. Nevertheless, in those European stores where CD32s were on-hand to compete with the Sega CD, the former often outsold the latter by a margin of four to one. Over 50,000 CD32s were sold in December alone.

The Atari Jaguar. There was some mockery, perhaps justifiable, of its “Jetsons” design aesthetic.

Ironically, Atari’s last act took much the same form as Commodore’s. In November of 1993, following a horrific third quarter in which they had lost $17.6 million on sales of just $4.4 million, they released a game console of their own, called the Jaguar, in North America. In keeping with the tradition dating back to 1985, it was cheaper than Commodore’s take on the same concept — its street price was under $250 — but not quite as powerful, lacking a CD-ROM drive. Suffering from a poor selection of games, as well as reliability problems and outright hardware bugs, the Jaguar faced an uphill climb; Atari shipped less than 20,000 of them in 1993. Nevertheless, the Tramiel clan confidently predicted that they would sell 500,000 units in 1994, and at least some people bought into the hype, sending Atari’s stock soaring to almost $15 even as Commodore’s continued to plummet.

For the reality was, the rapid unraveling of all other facets of Commodore’s business had rendered the question of the CD32’s success moot. The remaining employees who worked at the sprawling campus in West Chester, Pennsylvania, purchased a decade before when the VIC-20 and 64 were flying off shelves and Jack “Business is War” Tramiel was stomping his rival home-computer makers into dust, felt like dwarfs wandering through the ancient ruins of giants. Once there had been more than 600 employees here; now there were about 50. There was 10,000 square feet of space per employee in a facility where it cost $8000 per day just to keep the lights on. You could wander for hours through the deserted warehouses, shuttered production lines, and empty research labs without seeing another living soul. Commodore was trying to lease some of it out for an attractive rent of $4 per square foot, but, as with most of their computers, nobody seemed all that interested. The executive staff, not wanting the stigma of having gone down with the ship on their resumes, were starting to jump for shore. Commodore’s chief financial officer threw up his hands and quit in the summer of 1993; the company’s president followed in the fall.

Apart from the CD32, for which they lacked the resources to manufacture enough units to meet demand, virtually none of the hardware piled up in Commodore’s European warehouses was selling at all anymore. In the third quarter of 1993, they lost $9.7 million, followed by $8 million in the fourth quarter, on sales of just $70 million. After a second disastrous Christmas in a row, it could only be a question of time.

In a way, it was the small things rather than the eye-popping financial figures which drove the point home. For example, the April 1994 edition of the New York World of Commodore show, for years already a shadow of its old vibrant self, was cancelled entirely due to lack of interest. And the Army and Air Force Exchange, which served as a storefront to American military personnel at bases all over the world, kicked Commodore off its list of suppliers because they weren’t paying their bills. It’s by a thousand little cuts like this one, each representing another sales opportunity lost, that a consumer-electronics company dies. At the Winter Consumer Electronics Show in January of 1994, at which Commodore did manage a tepid presence, their own head of marketing told people straight out that the Amiga had no future as a general-purpose computer; Commodore’s only remaining prospects, he said, lay with the American vertical market of video production and the European mass market of videogame consoles. But they didn’t have the money to continue building the hardware these markets were demanding, and no bank was willing to lend them any.

The proverbial straw which broke the camel’s back was a dodgy third-party patent relating to a commonplace programming technique used to keep a mouse pointer separate from the rest of the screen. Commodore had failed to pay the patent fee for years, the patent holder eventually sued, and in April of 1994 the court levied an injunction preventing Commodore from doing any more business at all in the United States until they paid up. The sum in question was a relatively modest $2.5 million, but Commodore simply didn’t have the money to give.

On April 29, 1994, in a brief, matter-of-fact press release, Commodore announced that they were going out of business: “The company plans to transfer its assets to unidentified trustees for the benefit of its creditors. This is the initial phase of an orderly voluntary liquidation.” And just like that, a company which had dominated consumer computing in the United States and much of Europe for a good part of the previous decade and a half was no more. The business press and the American public showed barely a flicker of interest; most of them had assumed that Commodore was already long out of business. European gamers reacted with shock and panic — few had realized how bad things had gotten for Commodore — but there was nothing to be done.

Thus it was that Atari, despite being chronically ill for a much longer period of time, managed to outlive Commodore in the end. Still, this isn’t to say that their own situation at the time of Commodore’s collapse was a terribly good one. When the reality hit home that the Jaguar probably wasn’t going to be a sustainable gaming platform at all, much less sell 500,000 units in 1994 alone, their stock plunged back down to less than $1 per share. In the aftermath, Atari limped on as little more than a patent troll, surviving by extracting judgments from other videogame makers, most notably Sega, for infringing on dubious intellectual property dating back to the 1970s. This proved to be an ironically more profitable endeavor for them than that of actually selling computers or game consoles. On July 30, 1996, the Tramiels finally cashed out, agreeing to merge the remnants of their company with JT Storage, a maker of hard disks, who saw some lingering value in the trademarks and the patents. It was a liquidation in all but name; only three Atari employees transitioned to the “merged” entity, which continued under the same old name of JT Storage.

And so disappeared the storied name of Atari and that of Tramiel simultaneously from the technology industry. Even as the trade magazines were publishing eulogies to the former, few were sorry to see the latter go, what with their long history of lawsuits, dirty dealing, and abundant bad faith. Jack Tramiel had purchased Atari in 1984 out of the belief that creating another phenomenon like the Commodore 64 — or for that matter the Atari VCS — would be easy. But the twelve years that followed were destined always to remain a footnote to his one extraordinary success, a cautionary tale about the dangers of conflating lucky timing and tactical opportunism with long-term strategic genius.

Even so, the fact does remain that the Commodore 64 brought affordable computing to millions of people all over the world. For that every one of those millions owes Jack Tramiel, who died in 2012, a certain debt of gratitude. Perhaps the kindest thing we can do for him is to end his eulogy there.



The story of the Amiga after the death of Commodore is long, confusing, and largely if not entirely dispiriting; for all these reasons, I’d rather not dwell on it at length here. Its most positive aspect is the surprisingly long commercial half-life the platform enjoyed in Europe, over the course of which game developers still found a receptive if slowly dwindling market ready to buy their wares. The last glossy newsstand magazine devoted to the Amiga, the British Amiga Active, didn’t publish its last issue until the rather astonishingly late date of November of 2001.

The Amiga technology itself first passed into the hands of a German PC maker known as Escom, who actually started manufacturing new Amiga 1200s for a time. In 1996, however, Escom themselves went bankrupt. The American PC maker Gateway 2000 became the last major company to bother with the aging technology when they bought it at the Escom bankruptcy auction. Afterward, though, they apparently had second thoughts; they did nothing whatsoever with it before selling it onward at a loss. From there, it passed into other, even less sure hands, selling always at a discount. There are still various projects bearing the Amiga name today, and I suspect they will continue until the generation who fell in love with the platform in its heyday have all expired. But these are little more than hobbyist endeavors, selling their products in minuscule numbers to customers motivated more by their nostalgic attachment to the Amiga name than by any practical need. It’s far from clear what the idea of an “Amiga computer” should even mean in 2020.

When the hardcore of the Amiga hardcore aren’t dreaming quixotically of the platform’s world-conquering return, they’re picking through the rubble of the past, trying to figure out where it all went wrong. Among a long roll call of petty incompetents in Commodore’s executive suites, two clear super-villains emerge: Irving Gould, Commodore’s chairman since the mid-1960s, and Mehdi Ali, his final hand-picked chief executive. Their mismanagement in the latter days of the company was so egregious that some have put it down to evil genius rather than idiocy. The typical hypothesis says that these two realized at some point in the very early 1990s that Commodore’s days were likely numbered, and that they could get more out of the company for themselves by running it into the ground than they could by trying to keep it alive. I usually have little time for such conspiracy theories; as far as I’m concerned, a good rule of thumb for life in general is never to attribute to evil intent what can just as easily be chalked up to good old human stupidity. In this case, though, there’s some circumstantial evidence lending at least a bit of weight to the theory.

The first and perhaps most telling piece of evidence is the two men’s ridiculously exorbitant salaries, even as their company was collapsing around them. In 1993, Mehdi Ali took home $2 million, making him the fourth highest-paid executive in the entire technology sector. Irving Gould earned $1.75 million that year — seventh on the list. Why were these men paying themselves as if they ran a thriving company when the reality was so very much the opposite? One can’t help but suspect that Gould at least, who owned 19 percent of Commodore’s stock, was trying to offset his losses on the one field by raising his personal salary on the other.

And then there’s the way that Gould, an enormously rich man whose personal net worth was much higher than that of all of Commodore by the end, was so weirdly niggardly in helping his company out of its financial jam. While he did loan fully $17.4 million back to Commodore, the operative word here is indeed “loan”: he structured his cash injections to ensure that he would be first in line to get his money back if and when the company went bankrupt, and stopped throwing good money after bad as soon as the threshold of the collateral it could offer up in exchange was exceeded. One can’t help but wonder what might have become of the CD32 if he’d been willing to go all-in to try to turn it into a success.

Of course, this is all rank speculation, which will quickly become libelous if I continue much further down this road. Suffice to say that questionable ethics were always an indelible part of Commodore. Born in scandal, the company would quite likely have ended in scandal as well if anyone in authority had been bothered enough by its anticlimactic bankruptcy to look further. I’d love to see what a savvy financial journalist could make of Commodore’s history. But, alas, I have neither the skill nor the resources for such a project, and the story is of little interest to the mainstream journalists of today. The era is past, the bodies are buried, and there are newer and bigger outrages to fill our newspapers.



Instead, then, I’ll conclude with two brief eulogies to mark the end of the Amiga’s role in this ongoing history. Rather than eulogizing in my own words, I’m going to use those of a true Amiga zealot: the anonymous figure known as “The Bandito,” whose “Roomers” columns in the magazine Amazing Computing were filled with cogent insights and nitty-gritty financial details every month. (For that reason, they’ve been invaluable sources for this series of articles.)

Jay Miner, the gentle genius, in 1990. In interviews like the one to which this photo was attached, he always seemed a little befuddled by the praise and love which Amiga users lavished upon him.

First, to Jay Miner, the canonical “father of the Amiga,” who died on June 20, 1994, at age 62, of the kidney disease he had been battling for most of his life. If a machine can reflect the personality of a man, the Amiga certainly reflected his:

Jay was not only the inventive genius who designed the custom chips behind the Atari 800 and the Amiga, he also designed many more electronic devices, including a new pacemaker that allows the user to set their own heart rate (which allows them to participate in strenuous activities once denied to them). Jay was not only a brilliant engineer, he was a kind, gentle, and unassuming man who won the hearts of Amiga fans everywhere he went. Jay was continually amazed and impressed at what people had done with his creations, and he loved more than anything to see the joy people obtained from the Amiga.

We love you, Jay, for all the gifts that you have given to us, and all the fruits of your genius that you have shared with us. Rest in peace.

And now a last word on the Amiga itself, from the very last “Roomers” column, written by someone who had been there from the beginning:

The Amiga has left an indelible mark on the history of computing. [It] stands as a shining example of excellent hardware design. Its capabilities foreshadowed the directions of the entire computer industry: thousands of colors, multiple screen resolutions, multitasking, high-quality sound, fast animation, video capability, and more. It was the beauty and elegance of the hardware that sold the Amiga to so many millions of people. The Amiga sold despite Commodore’s neglect, despite their bumbling and almost criminal marketing programs. Developers wrote brilliantly for this amazing piece of hardware, creating software that even amazed the creators of the hardware. The Amiga heralded the change that’s even now transforming the television industry, with inexpensive CGI and video editing making for a whole new type of television program.

Amiga game software also changed the face of entertainment software. Electronic Arts launched themselves headlong into 16-bit entertainment software with their Amiga software line, which helped propel them into the $500 million giant they are today. Cinemaware’s Defender of the Crown showed people what computer entertainment could look like: real pictures, not blocky collections of pixels. For a while, the Amiga was the entertainment-software machine to have.

In light of all these accomplishments, the story of the Amiga really isn’t the tragedy of missed opportunities and unrealized potential that it’s so often framed as. The very design that made it able to do so many incredible things at such an early date — its tightly coupled custom chips, its groundbreaking but lightweight operating system — made it hard for the platform to evolve in the same ways that the less imaginative, less efficient, but modular Wintel and MacOS architectures ultimately did. While it lasted, however, it gave the world a sneak preview of its future, inspiring thousands who would go on to do good work on other platforms. We are all more or less the heirs to the vision embodied in the original Amiga Lorraine, whether we ever used a real Amiga or not. The platform’s most long-lived and effective marketing slogan, “Only Amiga Makes it Possible,” is of course no longer true. It is true, though, that the Amiga made many things possible first. May it stand forever in the annals of computing history alongside the original Apple Macintosh as one of the two most visionary computers of its generation. For without these two computers — one of them, alas, more celebrated than the other — the digital world that we know today would be a very different place.

(Sources: the books Commodore: The Final Years by Brian Bagnall and my own The Future Was Here; Amazing Computing of November 1992, December 1992, January 1993, February 1993, March 1993, April 1993, May 1993, June 1993, July 1993, September 1993, October 1993, November 1993, December 1993, January 1994, February 1994, March 1994, April 1994, May 1994, June 1994, July 1994, August 1994, September 1994, October 1994, and February 1995; Byte of January 1993; Amiga User International of June 1988; Electronic Gaming Monthly of June 1995; Next Generation of December 1996. My thanks to Eric Graham for corresponding with me about The Juggler and Sculpt 3D years ago when I was writing my book on the Amiga.

Those wishing to read about the Commodore story from the perspective of the engineers in the trenches, who so often accomplished great things in less than ideal conditions, should turn to Brian Bagnall’s full “Commodore Trilogy”: A Company on the Edge, The Amiga Years, and The Final Years.)

 


Myst (or, The Drawbacks to Success)

Robyn Miller, one half of the pair of brothers who created the adventure game known as Myst with their small studio Cyan, tells a story about its development that’s irresistible to a writer like me. When the game was nearly finished, he says, its publisher Brøderbund insisted that it be put through “focus-group testing” at their offices. Robyn and his brother Rand reluctantly agreed, and soon the first group of guinea pigs shuffled into Brøderbund’s conference room. Much to its creators’ dismay, they hated the game. But then, just as the Miller brothers were wondering whether they had wasted the past two years of their lives making it, the second group came in. Their reaction was the exact opposite: they loved the game.

So would it be forevermore. Myst would prove to be one of the most polarizing games in history, loved and hated in equal measure. Even today, everyone seems to have a strong opinion about it, whether they’ve actually played it or not.

Myst‘s admirers are numerous enough to have made it the best-selling single adventure game in history, as well as the best-selling 1990s computer game of any type in terms of physical units shifted at retail: over 6 million boxed copies sold between its release in 1993 and the dawn of the new millennium. In the years immediately after its release, it was trumpeted at every level of the mainstream press as the herald of a new, dawning age of maturity and aesthetic sophistication in games. Then, by the end of the decade, it was lamented as a symbol of what games might have become, if only the culture of gaming had chosen it rather than the near-simultaneously-released Doom as its model for the future. Whatever the merits of that argument, the hardcore Myst lovers remained numerous enough in later years to support five sequels, a series of novels, a tabletop role-playing game, and multiple remakes and remasters of the work which began it all. Their passion was such that, when Cyan gave up on an attempt to turn Myst into a massively-multiplayer game, the fans stepped in to set up their own servers and keep it alive themselves.

And yet, for all the love it’s inspired, the game’s detractors are if anything even more committed than its proponents. For a huge swath of gamers, Myst has become the poster child for a certain species of boring, minimally interactive snooze-fest created by people who have no business making games — and, runs the spoken or unspoken corollary, played by people who have no business playing them. Much of this vitriol comes from the crowd who hate any game that isn’t violent and visceral on principle.

But the more interesting and perhaps telling brand of hatred comes from self-acknowledged fans of the adventure-game genre. These folks were usually raised on the Sierra and LucasArts traditions of third-person adventures — games that were filled with other characters to interact with, objects to pick up and carry around and use to solve puzzles, and complicated plot arcs unfolding chapter by chapter. They have a decided aversion to the first-person, minimalist, deserted, austere Myst, sometimes going so far as to say that it isn’t really an adventure game at all. But, however they categorize it, they’re happy to credit it with all but killing the adventure genre dead by the end of the 1990s. Myst, so this narrative goes, prompted dozens of studios to abandon storytelling and characters in favor of yet more sterile, hermetically sealed worlds just like its. And when the people understandably rejected this airless vision, that was that for the adventure game writ large. Some of the hatred directed toward Myst by stalwart adventure fans — not only fans of third-person graphic adventures, but, going even further back, fans of text adventures — reaches an almost poetic fever pitch. A personal favorite of mine is the description deployed by Michael Bywater, who in previous lives was himself an author of textual interactive fiction. Myst, he says, is just “a post-hippie HyperCard stack with a rather good music loop.”

After listening to the cultural dialog — or shouting match! — which has so long surrounded Myst, one’s first encounter with the actual artifact that spurred it all can be more than a little anticlimactic. Seen strictly as a computer game, Myst is… okay. Maybe even pretty good. It strikes this critic at least as far from the best or worst game of its year, much less of its decade, still less of all gaming history. Its imagery is well-composited and occasionally striking, its sound and music design equally apt. The sense of desolate, immersive beauty it all conveys can be strangely affecting, and it’s married to puzzle-design instincts that are reasonable and fair. Myst‘s reputation in some quarters as impossible, illogical, or essentially unplayable is unearned; apart from some pixel hunts and perhaps the one extended maze, there’s little to really complain about on that front. On the contrary: there’s a definite logic to its mechanical puzzles, and figuring out how its machinery works through trial and error and careful note-taking, then putting your deductions into practice, is genuinely rewarding, assuming you enjoy that sort of thing.

At the same time, though, there’s just not a whole lot of there there. Certainly there’s no deeper meaning to be found; Myst never tries to be about more than exploring a striking environment and solving intricate puzzles. “When we started, we wanted to make a [thematic] statement, but the project was so big and took so much effort that we didn’t have the energy or time to put much into that part of it,” admits Robyn Miller. “So, we decided to just make a neat world, a neat adventure, and say important things another time.” And indeed, a “neat world” and “neat adventure” are fine ways of describing Myst.

Depending on your preconceptions going in, actually playing Myst for the first time is like going to meet your savior or the antichrist, only to find a pleasant middle-aged fellow who offers to pour you a cup of tea. It’s at this point that the questions begin. Why does such an inoffensive game offend so many people? Why did such a quietly non-controversial game become such a magnet for controversy? And the biggest question of all: why did such a simple little game, made by five people using only off-the-shelf consumer software, become one of the most (in)famous money spinners in the history of the computer-games industry?

We may not be able to answer all of these whys to our complete satisfaction; much of the story of Myst surely comes down to sheer happenstance, to the proverbial butterfly flapping its wings somewhere on the other side of the world. But we can at least do a reasonably good job with the whats and hows of Myst. So, let’s consider now what brought Myst about and how it became the unlikely success it did. After that, we can return once again to its proponents and its detractors, and try to split the difference between Myst as gaming’s savior and Myst as gaming’s antichrist.


Rand Miller

Robyn Miller

If nothing else, the origin story of Myst is enough to make one believe in karma. As I wrote in an earlier article, the Miller brothers and their company Cyan came out of the creative explosion which followed Apple’s 1987 release of HyperCard, a unique Macintosh authoring system which let countless people just like them experiment for the first time with interactive multimedia and hypertext. Cyan’s first finished project was The Manhole. Published in November of 1988 by Mediagenic, it was a goal-less software toy aimed at children, a virtual fairy-tale world to explore. Six months later, Mediagenic added music and sound effects and released it on CD-ROM, making it the first entertainment product ever to appear on that medium. The next couple of years brought two more interactive explorations for children from Cyan, published on floppy disk and CD-ROM.

Even as these were being published, however, the wheels were gradually coming off of Mediagenic, thanks to a massive patent-infringement lawsuit they lost to the Dutch electronics giant Philips and a whole string of other poor decisions and unfortunate events. In February of 1991, a young bright spark named Bobby Kotick seized Mediagenic in a hostile takeover, reverting the company to its older name of Activision. By this point, the Miller brothers were getting tired of making whimsical children’s toys; they were itching to make a real game, with a goal and puzzles. But when they asked Activision’s new management for permission to do so, they were ordered to “keep doing what you’ve been doing.” Shortly thereafter, Kotick announced that he was taking Activision into Chapter 11 bankruptcy. After he did so, Activision simply stopped paying Cyan the royalties on which they depended. The Miller brothers were lost at sea, with no income stream and no relationships with any other publishers.

But at the last minute, they were thrown an unexpected lifeline. Lo and behold, the Japanese publisher Sunsoft came along offering to pay Cyan $265,000 to make a CD-ROM-based adult adventure game in the same general style as their children’s creations — i.e., exactly what the Miller brothers had recently asked Activision for permission to do. Sunsoft was convinced that there would be major potential for such a game on the upcoming generation of CD-ROM-based videogame consoles and multimedia set-top boxes for the living room — so convinced, in fact, that they were willing to fund the development of the game on the Macintosh and take on the job of porting it to these non-computer platforms themselves, all whilst signing over the rights to the computer version(s) to Cyan for free. The Miller brothers, reduced by this point to a diet of “rice and beans and government cheese,” as Robyn puts it, knew deliverance when they saw it. They couldn’t sign the contract fast enough. Meanwhile Activision had just lost out on the chance to release what would turn out to be one of the games of the decade.

But of course the folks at Cyan were as blissfully unaware of that future as those at Activision. They simply breathed sighs of relief and started making their game. In time, Cyan signed a contract with Brøderbund to release the computer versions of their game, starting with the Macintosh original.

Myst certainly didn’t begin as any conscious attempt to re-imagine the adventure-game form. Those who later insisted on seeing it in almost ideological terms, as a sort of artistic manifesto, were often shocked when they first met the Miller brothers in person. This pair of plain-spoken, baseball-cap-wearing country boys were anything but ideologues, much less stereotypical artistes. Instead they seemed a perfect match for the environs in which they worked: an unassuming two-story garage in Spokane, Washington, far from any centers of culture or technology. Their game’s unique personality actually stemmed from two random happenstances rather than any messianic fervor.

One of these was — to put it bluntly — their sheer ignorance. Working on the minority platform that was the Macintosh, specializing up to this point in idiosyncratic children’s software, the Miller brothers were oddly disengaged from the computer-games industry whose story I’ve been telling in so many other articles here. By their own account, they had literally never even seen any of the contemporary adventure games from companies like LucasArts and Sierra before making Myst. In fact, Robyn Miller says today that he had only played one computer game in his life to that point: Infocom’s ten-year-old Zork II. Rand Miller, being the older brother, the first mover behind their endeavors, and the more technically adept of the pair, was perhaps a bit more plugged-in, but only a bit.

The other circumstance which shaped Myst was the technology employed to create it. This statement is true of any game, but it becomes even more salient here because the technology in question was so different from that employed by other adventure creators. Myst is indeed simply a HyperCard stack — the “hippie-dippy” is in the eye of the beholder — gluing together pictures generated by the 3D modeler StrataVision. During the second half of its development, a third everyday Macintosh software package made its mark: Apple’s QuickTime video system, which allowed Myst‘s creators to insert snippets of themselves playing the roles of the people who previously visited the semi-ruined worlds you spend the game exploring. All of these tools are presentation-level tools, not conventional game-building ones. Seen in this light, it’s little surprise that so much of Myst is surface. At bottom, it’s a giant hypertext done in pictures, with very little in the way of systems of any sort behind it, much less any pretense of world simulation. You wander through its nodes, in some of which you can click on something, which causes some arbitrary event to happen. The one place where the production does interest itself in a state which exists behind its visuals is in the handful of mechanical devices found scattered over each of its landscapes, whose repair and/or manipulation form the basis of the puzzles that turn Myst into a game rather than an unusually immersive slideshow.
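
For readers who want a concrete picture of what that structure amounts to, here is a minimal sketch in Python rather than HyperTalk; it is emphatically not Cyan’s actual code, and the node names, image filenames, and the lever are all invented for illustration. It shows the shape of a picture-hypertext of the Myst sort: discrete nodes, each tied to one pre-rendered view, a few clickable hotspots, and a thin layer of machine state hiding behind them.

```python
# A toy picture-hypertext of the Myst sort: each node shows a pre-rendered image,
# and its hotspots either jump to another node or flip a bit of "machinery" state.
# Illustrative sketch only; the real game was a HyperCard stack scripted in HyperTalk.

class Node:
    def __init__(self, image, hotspots=None):
        self.image = image              # filename of the pre-rendered view
        self.hotspots = hotspots or {}  # hotspot label -> action (a callable)

class World:
    def __init__(self, nodes, start, state=None):
        self.nodes = nodes
        self.state = state or {}        # the little bit of hidden state
        self.go(start)

    def go(self, name):
        self.current = self.nodes[name]
        print("showing", self.current.image)

    def click(self, label):
        action = self.current.hotspots.get(label)
        if action:
            action(self)                # an arbitrary event, exactly as described above

# Two invented nodes: a dock view and a lever you can throw.
nodes = {
    "dock":  Node("dock.pict",  {"forward": lambda w: w.go("lever")}),
    "lever": Node("lever.pict", {"back":    lambda w: w.go("dock"),
                                 "handle":  lambda w: w.state.update(lever_on=True)}),
}

world = World(nodes, start="dock", state={"lever_on": False})
world.click("forward")   # walk to the lever view
world.click("handle")    # the only "simulation" here: a flag flips
```

Nearly everything lives in the pictures and the links between them; the state dictionary, standing in for Myst’s scattered machinery, is the only part that is more than surface.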

In making Myst, each brother fell into the role he was used to from Cyan’s children’s projects. The brothers together came up with the story and world design, then Robyn went off to do the art and music while Rand did the technical plumbing in HyperCard. One Chuck Carter helped Robyn on the art side and Rich Watson helped Rand on the programming side, while Chris Brandkamp produced the intriguing, evocative environmental soundscape by all sorts of improvised means: banging a wrench against the wall or blowing bubbles in a toilet bowl, then manipulating the samples to yield something appropriately other-worldly. And that was the entire team. It was a shoestring operation, amateurish in the best sense. The only thing that distinguished the boys at Cyan from a hundred thousand other hobbyists playing with the latest creative tools on their own Macs was the fact that Cyan had a contract to do so — and a commensurate quantity of real, raw talent, of course.

Ironically given that Myst was treated as such a cutting-edge product at the time of its release, in terms of design it’s something of a throwback — a fact that does become less surprising when one considers that its creators’ experience with adventure games stopped in the early 1980s. A raging debate had once taken place in adventure circles over whether the ideal protagonist should be a blank slate, imprintable by the player herself, or a fully-fleshed-out role for the player to inhabit. The verdict had largely come down on the side of the latter as games’ plots had grown more ambitious, but the whole discussion had passed the Miller brothers by.

So, with Myst we were back to the old “nameless, faceless adventurer” paradigm which Sierra and LucasArts had long since abandoned. Myst actively encourages you to think of it as yourself there in its world. The story begins when you open a mysterious book here on our world, whereupon you get sucked into an alternate dimension and find yourself standing on the dock of a deserted island. You soon learn that you’re following a trail first blazed by a father and his two sons, all of whom had the ability to hop about between dimensions — or “ages,” as the game calls them — and alter them to their will. Unfortunately, the father is now said to be dead, while the two brothers have each been trapped in a separate interdimensional limbo, each blaming the other for their father’s death. (These themes of sibling rivalry have caused much comment over the years, especially in light of the fact that each brother in the game is played by one of the real Miller brothers. But said real brothers have always insisted that there are no deeper meanings to be gleaned here…)

You can access four more worlds from the central island just as soon as you solve the requisite puzzles. In each of them, you must find a page of a magical book. Putting the pages together, along with a fifth page found on the central island, allows you to free the brother of your choice, or to do… something else, which actually leads to the best ending. This last-minute branch to an otherwise unmalleable story is a technique we see in a fair number of other adventure games wishing to make a claim to the status of genuinely interactive fictions. (In practice, of course, players of those games and Myst alike simply save before the final choice and check out all of the endings.)

For all its emphasis on visuals, Myst is designed much like a vintage text adventure in many ways. Even setting aside its explicit maze, its network of discrete, mostly empty locations resembles the map from an old-school text adventure, where navigation is half the challenge. Similarly, its complex environmental puzzles, where something done in one location may have an effect on the other side of the map, smack of one of Infocom’s more cerebral, austere games, such as Zork III or Spellbreaker.

This is not to say that Myst is a conscious throwback; the nature of the puzzles, like so much else about the game, is as much determined by the Miller brothers’ ignorance of contemporary trends in adventure design as by the technical constraints under which they labored. Among the latter was the impossibility of even letting the player pick things up and carry them around to use elsewhere. Utterly unfazed, Rand Miller coined an aphorism: “Turn your problems into features.” Thus Myst‘s many vaguely steam-punky mechanical puzzles, all switches to throw and ponderous wheels to set in motion, are dictated as much by its designers’ inability to implement a player inventory as by their acknowledged love for Jules Verne.

And yet, whatever the technological determinism that spawned it, this style of puzzle design truly was a breath of fresh air for gamers who had grown tired of the “use this object on that hotspot” puzzles of Sierra and LucasArts. To their eternal credit, the Miller brothers took this aspect of the design very seriously, giving their puzzles far more thought than Sierra at least tended to do. They went into Myst with no experience designing puzzles, and their insecurity about this aspect of their craft was perhaps their ironic saving grace. Before they even had a computer game to show people, they spent hours walking outsiders through their scenario Dungeons & Dragons-style, telling them what they saw and listening to how they tried to progress. And once they did have a working world on the computer, they spent more hours sitting behind players, watching what they did. Robyn Miller, asked in an interview shortly after the game’s release whether there was anything he “hated,” summed up their commitment to consistent, logical puzzle design and world-building (in Myst, the two are largely one and the same) thusly:

Seriously, we hate stuff without integrity. Supposed “art” that lacks attention to detail. That bothers me a lot. Done by people who are forced into doing it or who are doing it for formula reasons and monetary reasons. It’s great to see something that has integrity. It makes you feel good. The opposite of that is something I dislike.

We tried to create something — a fantastic world — in a very realistic way. Creating a fantasy world in an unrealistic way is the worst type of fantasy. In Jurassic Park, the idea of dinosaurs coming to life in the twentieth century is great. But it works in that movie because they also made it believable. That’s how the idea and the execution of that idea mix to create a truly great experience.

Taken as a whole, Myst is a master class in designing around constraints. Plenty of games have been ruined by designers whose reach exceeded their core technology’s grasp. We can see this phenomenon as far back as the time of Scott Adams: his earliest text adventures were compact marvels, but quickly spiraled into insoluble incoherence when he started pushing beyond what his simplistic parsers and world models could realistically present. Myst, then, is an artwork of the possible. Managing inventory, with the need for a separate inventory screen and all the complexities of coding this portable object interacting with that other thing in the world, would have stretched HyperCard past the breaking point. So, it’s gone. Interactive conversations would have been similarly prohibitive with the technology at the Millers’ fingertips. So, they devised a clever dodge, showing the few characters that exist only as recordings, or through one-way screens where you can see them, but they can’t see (or hear) you; that way, a single QuickTime video clip is enough to do the trick. In paring things back so dramatically, the Millers wound up with an adventure game unlike any that had been seen before. Their problems really did become their game’s features.

For the most part, anyway. The networks of nodes and pre-rendered static views that constitute the worlds of Myst can be needlessly frustrating to navigate, thanks to the way that the views prioritize aesthetics over consistency; rotating your view in place sometimes turns you 90 degrees, sometimes 180 degrees, sometimes somewhere in between, according to what the designers believed would provide the most striking image. Orienting yourself and moving about the landscape can thus be a confusing process. One might complain as well that it’s a slow one, what with all the empty nodes which you must move through to get pretty much anywhere — often just to see if something you’ve done on one side of the map has had any effect on something on its other side. Again, a comparison with the twisty little passages of an old-school text adventure, filled with mostly empty rooms, does strike me as thoroughly apt.

On the other hand, a certain glaciality of pacing seems part and parcel of what Myst fundamentally is. This is not a game for the impatient. It’s rather targeted at two broad types of player: the aesthete, who will be content just to wander the landscape taking in the views, perhaps turning to a walkthrough to be able to see all of the worlds; and the dedicated puzzle solver, willing to pull out paper and pencil and really dig into the task of understanding how all this strange machinery hangs together. Both groups have expressed their love for Myst over the years, albeit in terms which could almost convince you they’re talking about two entirely separate games.



So much for Myst the artifact. What of Myst the cultural phenomenon?

The origins of the latter can be traced to the Miller brothers’ wise decision to take their game to Brøderbund. Brøderbund tended to publish fewer products per year than their peers at Electronic Arts, Sierra, or the lost and unlamented Mediagenic, but they were masterful curators, with a talent for spotting software which ordinary Americans might want to buy and then packaging and marketing it perfectly to reach them. (Their insistence on focus testing, so confusing to the Millers, is proof of their competence; it’s hard to imagine any other publisher of the time even thinking of such a thing.) Brøderbund published a string of products over the course of a decade or more which became more than just hits; they became cultural icons of their time, getting significant attention in the mainstream press in addition to the computer magazines: The Print Shop, Carmen Sandiego, Lode Runner, Prince of Persia, SimCity. And now Myst was about to become the capstone to a rather extraordinary decade, their most successful and iconic release of all.

Brøderbund first published the game on the Macintosh in September of 1993, where it was greeted with rave reviews. Not a lot of games originated on the Mac at all, so a new and compelling one was always a big event. Mac users tended to conceive of themselves as the sophisticates of the computer world, wearing their minority status as a badge of pride. Myst hit the mark beautifully here; it was the Mac-iest of Mac games. MacWorld magazine’s review is a rather hilarious example of a homer call. “It’s been polished until it shines,” wrote the magazine. Then, in the next paragraph: “We did encounter a couple of glitches and frozen screens.” Oh, well.

Helped along by press like this, Myst came out of the gates strong. By one report, it sold 200,000 copies on the Macintosh alone in its first six months. If correct or even close to correct, those numbers are extraordinary; they’re the numbers of a hit even on the gaming Mecca that was the Wintel world, much less on the Mac, with its vastly smaller user base.

Still, Brøderbund knew that Myst‘s real opportunity lay with those selfsame plebeian Wintel machines which most Mac users, the Miller brothers included, disdained. Just as soon as Cyan delivered the Mac version, Brøderbund set up an internal team — larger than the Cyan team which had made the game in the first place — to do the port as quickly as possible. Importantly, Myst was ported not to bare MS-DOS, where almost all “hardcore” games still resided, but to Windows, where the new demographics which Brøderbund hoped to attract spent all of their time. Luckily, the game’s slideshow visuals were possible even under Windows’s sluggish graphics libraries, and Apple had recently ported their QuickTime video system to Microsoft’s platform. The Windows version of Myst shipped in March of 1994.

And now Brøderbund’s marketing got going in earnest, pushing the game as the one showcase product which every purchaser of a new multimedia PC simply had to have. At the time, most CD-ROM based games also shipped in a less impressive floppy-disk-based version, with the latter often still outselling the former. But Brøderbund and Cyan made the brave choice not to attempt a floppy-disk version at all. The gamble paid off beautifully, furthering the carefully cultivated aspirational quality which already clung to Myst, now billed as the game which simply couldn’t be done on floppy disk. Brøderbund’s lush advertisements had a refined, adult air about them which made them stand out from the dragons, spaceships, and scantily-clad babes that constituted the usual motifs of game advertising. As the crowning touch, Brøderbund devised a slick tagline: Myst was “the surrealistic adventure that will become your world.” The Miller brothers scoffed at this piece of marketing-speak — until they saw how Myst was flying off the shelves in the wake of it.

So, through a combination of lucky timing and precision marketing, Myst blew up huge. I say this not to diminish its merits as a puzzle-solving adventure game, which are substantial, but simply because I don’t believe those merits were terribly relevant to the vast majority of people who purchased it. A parallel can be drawn with Infocom’s game of Zork, which similarly surfed a techno-cultural wave a decade before Myst. It was on the scene just as home computers were first being promoted in the American media as the logical, more permanent successors to the videogame-console fad. For a time, Zork, with its ability to parse pseudo-natural-English sentences, was seen by computer salespeople as the best overall demonstration of what a computer could do; they therefore showed it to their customers as a matter of course. And so, when countless new computer systems went home with their new owners, there was also a copy of Zork in the bag. The result was Infocom’s best-selling game of all time, to the tune of almost 400,000 copies sold.

Myst now played the same role in a new home-computer boom. The difference was that, while the first boom had fizzled rather quickly when people realized of what limited practical utility those early machines actually were, this second boom would be a far more sustained affair. In fact, it would become the most sustained boom in the history of the consumer PC, stretching from approximately 1993 right through the balance of the decade, with every year breaking the sales records set by the previous one. The implications for Myst, which arrived just as the boom was beginning, were titanic. Even long after it ceased to be particularly cutting-edge, it continued to be regarded as an essential accessory for every PC, to be tossed into the bags carried home from computer stores by people who would never buy another game.

Myst had already established its status by the time the hype over the World Wide Web and Windows 95 really lit a fire under computer sales in 1995. It passed the 1 million copy mark in the spring of that year. By the same point, a quickie “strategy guide” published by Prima, ideal for the many players who just wanted to take in its sights without worrying about its puzzles, had passed an extraordinary 300,000 copies sold — thus making its co-authors, who’d spent all of three weeks working on it, the two luckiest walkthrough authors in history. Defying all of the games industry’s usual logic, which dictated that titles sold in big numbers for only a few months before fizzling out, Myst‘s sales just kept accelerating from there. It sold 850,000 copies in 1996 in the United States alone, then another 870,000 copies in 1997. Only in 1998 did it finally begin to flag, posting domestic sales of just 540,000 copies. Fortunately, the European market for multimedia PCs, which lagged a few years behind the American one, was now also burning bright, opening up whole new frontiers for Myst. Its total retail sales topped 6 million by 2000, at least 2 million of them outside of North America. Still more copies — it’s impossible to say how many — had shipped as pack-in bonuses with multimedia upgrade kits and the like. Meanwhile, under the terms of Sunsoft’s original agreement with Cyan, it was also ported by the former to the Sega Saturn, Atari Jaguar, 3DO, and CD-I living-room consoles. Myst was so successful that another publisher came out with an elaborate parody of it as a full-fledged computer game in its own right, under the indelible title of Pyst. Considering that it featured the popular sitcom star John Goodman, Pyst must have cost far more to make than the shoestring production it mocked.

As we look at the staggering scale of Myst‘s success, we can’t avoid returning to that vexing question of why it all should have come to be. Yes, Brøderbund’s marketing campaign was brilliant, but there must be more to it than that. Certainly we’re far from the first to wonder about it all. As early as December of 1994, Newsweek magazine noted that “in the gimmick-dominated world of computer games, Myst should be the equivalent of an art film, destined to gather critical acclaim and then dust on the shelves.” So why was it selling better than guaranteed crowd-pleasers with names like Star Wars on their boxes?

It’s not all that difficult to pinpoint some of the other reasons why Myst should have been reasonably successful. It was a good-looking game that took full advantage of CD-ROM, at a time when many computer users — non-gamers almost as much as gamers — were eager for such things to demonstrate the power of their new multimedia wundermachines. And its distribution medium undoubtedly helped its sales in another way: in this time before CD burners became commonplace, it was immune to the piracy that many publishers claimed was costing them at least half their sales of floppy-disk-based games.

Likewise, a possible explanation for Myst‘s longevity after it was no longer so cutting-edge might be the specific technological and aesthetic choices made by the Miller brothers. Many other products of the first gush of the CD-ROM revolution came to look painfully, irredeemably tacky just a couple of years after they had dazzled, thanks to their reliance on grainy video clips of terrible actors chewing up green-screened scenery. While Myst did make some use of this type of “full-motion video,” it was much more restrained in this respect than many of its competitors. As a result, it aged much better. By the end of the 1990s, its graphics resolution and color count might have been a bit lower than those of the latest games, and it might not have been quite as stunning at first glance as it once had been, but it remained an elegant, visually-appealing experience on the whole.

Yet even these proximate causes don’t come close to providing a full explanation of why this art film in game form sold like a blockbuster. There are plenty of other games of equal or even greater overall merit to which they apply equally well, but none of them sold in excess of 6 million copies. Perhaps all we can do in the end is chalk it up to the inexplicable vagaries of chance. Computer sellers and buyers, it seems, needed a go-to game to show what was possible when CD-ROM was combined with decent graphics and sound cards. Myst was lucky enough to become that game. Although its puzzles were complex, simply taking in its scenery was disarmingly simple, making it perfect for the role. The perfect product at the perfect time, perfectly marketed.

In a sense, Myst the phenomenon didn’t do that other Myst — Myst the actual artifact, the game we can still play today — any favors at all. The latter seems destined always to be judged in relation to the former, and destined always to be found lacking. Demanding that what is in reality a well-designed, aesthetically pleasing game live up to the earth-shaking standards implied by Myst‘s sales numbers is unfair on the face of it; it wasn’t the fault of the Miller brothers, humble craftsmen with the right attitude toward their work, that said work wound up selling 6 million copies. Nevertheless, we feel compelled to judge it, at least to some extent, with the knowledge of its commercial and cultural significance firmly in mind. And in this context especially, some of its detractors’ claims do have a ring of truth.

Arguably the truthiest of all of them is the oft-repeated old saw that no other game was bought by so many people and yet really, seriously played by so few of its purchasers. While such a hyperbolic claim is impossible to truly verify, there is a considerable amount of circumstantial evidence pointing in exactly that direction. The exceptional sales of the strategy guide are perhaps a wash; they can be as easily ascribed to serious players wanting to really dig into the game as they can to casual purchasers just wanting to see all the pretty pictures on the CD-ROM. Other factors, however, are harder to dismiss. The fact is, Myst is hard by casual-game standards — so hard that Brøderbund included a blank pad of paper in the box for the purpose of keeping notes. If we believe that all or most of its buyers made serious use of that notepad, we have to ask where these millions of people interested in such a cerebral, austere, logical experience were before it materialized, and where they went thereafter. Even the Miller brothers themselves — hardly an unbiased jury — admit that by their best estimates no more than 50 percent of the people who bought Myst ever got beyond the starting island. Personally, I tend to suspect that the number is much lower than that.

Perhaps the most telling evidence for Myst as the game which everyone had but hardly anyone played is found in a comparison with one of its contemporaries: id Software’s Doom, the other decade-dominating blockbuster of 1993 (a game about which I’ll be writing much more in a future article). Doom indisputably was played, and played extensively. While it wasn’t quite the first running-around-and-shooting-things-from-a-first-person-perspective game, it did become so popular that games of its type were codified as a new genre unto themselves. The first-person shooters which followed Doom in the 1990s were among the most popular games of their era. Many of their titles are known to gamers today who weren’t yet born when they debuted: titles like Duke Nukem 3D, Quake, Half-Life, Unreal. Myst prompted just as many copycats, but these were markedly less popular and are markedly less remembered today: AMBER: Journeys Beyond, Zork Nemesis, Rama, Obsidian. Only Cyan’s own eventual sequel to Myst can be found among the decade’s bestsellers, and even it’s a definite case of diminishing commercial returns, despite being a rather brilliant game in its own right. In short, any game which sold as well as Myst, and which was seriously played by a proportionate number of people, ought to have left a bigger imprint on ludic culture than this one did.

But none of this should affect your decision about whether to play Myst today, assuming you haven’t yet gotten around to it. Stripped of all its weighty historical context, it’s a fine little adventure game if not an earth-shattering one, intriguing for anyone with the puzzle-solving gene, infuriating for anyone without it. You know what I mean… sort of a niche experience. One that just happened to sell 6 million copies.

(Sources: the books Myst: Prima’s Official Strategy Guide by Rick Barba and Rusel DeMaria, Myst & Riven: The World of the D’ni by Mark J.P. Wolf, and The Secret History of Mac Gaming by Richard Moss; Computer Gaming World of December 1993; MacWorld of March 1994; CD-ROM Today of Winter 1993. Online sources include “Two Histories of Myst” by John-Gabriel Adkins, Ars Technica‘s interview with Rand Miller, Robyn Miller’s postmortem of Myst at the 2013 Game Developers Conference, GameSpot‘s old piece on Myst as one of the “15 Most Influential Games of All Time,” and Greg Lindsay’s Salon column on Myst as a “dead end.” Michael Bywater’s colorful comments about Myst come from Peter Verdi’s now-defunct Magnetic Scrolls fan site, a dump of which Stefan Meier dug up for me from his hard drive several years ago. Thanks again, Stefan!

The “Masterpiece Edition” of Myst is available for purchase from GOG.com.)

 
Posted on February 21, 2020 in Digital Antiquaria, Interactive Fiction

The Deal of the Century (or, The Alliance of Losers)

IBM + Apple = ?

I think [the] Macintosh accomplished everything we set out to do and more, even though it reaches most people these days as Windows.

— Andy Hertzfeld (original Apple Macintosh systems programmer), 1994

When rumors first began to circulate early in 1991 that IBM and Apple were involved in high-level talks about a major joint initiative, most people dismissed them outright. It was, after all, hard to imagine two companies in the same industry with more diametrically opposed corporate cultures. IBM was Big Blue, a bedrock of American business since the 1920s. Conservative and pragmatic to a fault, it was a Brylcreemed bastion of tradition where casual days meant that employees might remove their jackets to reveal the starched white shirts they wore underneath. Apple, on the other hand, had been founded just fifteen years before by two long-haired children of the counterculture, and its campus still looked more like Woodstock than Wall Street. IBM placed great stock in the character of its workforce; Apple, as journalist Michael S. Malone would later put it in his delightfully arch book Infinite Loop, “seemed to have no character, but only an attitude, a style, a collection of mannerisms.” IBM talked about enterprise integration and system interoperability; Apple prattled on endlessly about changing the world. IBM played Lawrence Welk at corporate get-togethers; Apple preferred the Beatles. (It was an open secret that the name the company shared with the Beatles’ old record label wasn’t coincidental.)

Unsurprisingly, the two companies didn’t like each other very much. Apple in particular had been self-consciously defining itself for years as the sworn enemy of IBM and everything it represented. When Apple had greeted the belated arrival of the IBM PC in 1981 with a full-page magazine advertisement bidding Big Blue “welcome, seriously,” it had been hard to read as anything other than snarky sarcasm. And then, and most famously, had come the “1984” television advertisement to mark the debut of the Macintosh, in which Apple was personified as a hammer-throwing freedom fighter toppling a totalitarian corporate titan — Big Blue recast as Big Brother. What would the rumor-mongers be saying next? That cats would lie down with dogs? That the Russians would tell the Americans they’d given up on the whole communism thing and would like to be friends… oh, wait. It was a strange moment in history. Why not this too, then?

Indeed, when one looked a little harder, a partnership began to make at least a certain degree of sense. Apple’s rhetoric had actually softened considerably since those heady early days of the Macintosh and the acrimonious departure of Steve Jobs which had marked their ending. In the time since, more sober minds at the company had come to realize that insulting conservative corporate customers with money to spend on Apple’s pricey hardware might be counter-productive. Most of all, though, both companies found themselves in strikingly similar binds as the 1990s got underway. After soaring to rarefied heights during the early and middle years of the previous decade, they were now being judged by an increasing number of pundits as the two biggest losers of the last few years of computing history. In the face of the juggernaut that was Microsoft Windows, that irresistible force which nothing in the world of computing could seem to defy for long, it didn’t seem totally out of line to ask whether there even was a future for IBM or Apple. Seen in this light, the pithy clichés practically wrote themselves: “the enemy of my enemy is my friend”; “any port in a storm”; etc. Other, somewhat less generous commentators just talked about an alliance of losers.

Each of the two losers had gotten to this juncture by a uniquely circuitous route.

When IBM released the IBM PC, their first mass-market microcomputer, in August of 1981, they were as surprised as anyone by the way it took off. Even as hackers dismissed it as boring and unimaginative, corporate America couldn’t get enough of the thing; a boring and unimaginative personal computer — i.e., a safe one — was exactly what they had been waiting for. IBM’s profits skyrocketed during the next several years, and the pundits lined up to praise the management of this old, enormous company for having the flexibility and wherewithal to capitalize on an emerging new market; a tap-dancing elephant became the metaphor of choice.

And yet, like so many great successes, the IBM PC bore the seeds of its downfall within it from the start. It was a simple, robust machine, easy to duplicate by plugging together readily available commodity components — a process made even easier by IBM’s commitment to scrupulously documenting every last detail of its design for all and sundry. Further, IBM had made the mistake of licensing its operating system from a small company known as Microsoft rather than buying it outright or writing one of their own, and Bill Gates, Microsoft’s Machiavellian CEO, proved more than happy to license MS-DOS to anyone else who wanted it as well. The danger signs could already be seen in 1982, when an upstart company called Compaq released a “portable” version of IBM’s computer — in those days, this meant a computer which could be packed into a single suitcase — before IBM themselves could get around to it. A more dramatic tipping point arrived in 1986, when the same company made a PC clone built around Intel’s hot new 80386 CPU before IBM managed to do so.

In 1987, IBM responded to the multiplying ranks of the clone makers by introducing the PS/2 line, which came complete with a new, proprietary bus architecture, locked up tight this time inside a cage of patents and legalese. A cynical move on the face of it, it backfired spectacularly in practice. Smelling the overweening corporate arrogance positively billowing out of the PS/2 lineup, many began to ask themselves for the first time whether the industry still needed IBM at all. And the answer they often came to was not the one IBM would have preferred. IBM’s new bus architecture slowly died on the vine, while the erstwhile clone makers put together committees to define new standards of their own which evolved the design IBM had originated in more open, commonsense ways. In short, IBM lost control of the very platform they had created. By 1990, the words “PC clone” were falling out of common usage, to be replaced by talk of the “Wintel Standard.” The new standard bearer, the closest equivalent to IBM in this new world order, was Microsoft, who continued to license MS-DOS and Windows, the software that allowed all of these machines from all of these diverse manufacturers to run the same applications, to anyone willing to pay for it. Meanwhile OS/2, IBM’s mostly-compatible alternative operating system, was struggling mightily; it would never manage to cross the hump into true mass-market acceptance.

Apple’s fall from grace had been less dizzying in some ways, but the position it had left them in was almost as frustrating.

After Steve Jobs walked away from Apple in September of 1985, leaving behind the Macintosh, his twenty-month-old dream machine, the more sober-minded caretakers who succeeded him did many of the reasonable, sober-minded things which their dogmatic predecessor had refused to allow: opening the Mac up for expansion, adding much-requested arrow keys to its keyboard, toning down the revolutionary rhetoric that spooked corporate America so badly. These things, combined with the Apple LaserWriter laser printer, Aldus PageMaker software, and the desktop-publishing niche they spawned between them, saved the odd little machine from oblivion. Yet something did seem to get lost in the process. Although the Mac remained a paragon of vision in computing in many ways — HyperCard alone proved that! — Apple’s management could sometimes seem more interested in competing head-to-head with PC clones for space on the desks of secretaries than nurturing the original dream of the Macintosh as the creative, friendly, fun personal computer for the rest of us.

In fact, this period of Apple’s history must strike anyone familiar with the company of today — or, for that matter, with the company that existed before Steve Jobs’s departure — as just plain weird. Quibbles about character versus attitude aside, Apple’s most notable strength down through the years has been a peerless sense of self, which they have used to carve out their own uniquely stylish image in the ofttimes bland world of computing. How odd, then, to see the Apple of this period almost willfully trying to become the one thing neither the zealots nor the detractors have ever seen them as: just another maker of computer hardware. They flooded the market with more models than even the most dutiful fans could keep up with, none of them evincing the flair for design that marks the Macs of earlier or later eras. Their computers’ bland cases were matched with bland names like “Performa” or “Quadra” — names which all too easily could have come out of Compaq or (gasp!) IBM rather than Apple. Even the tight coupling of hardware and software into a single integrated user experience, another staple of Apple computing before and after, threatened to disappear, as CEO John Sculley took to calling Apple a “software company” and intimated that he might be willing to license MacOS to other manufacturers in the way that Microsoft did MS-DOS and Windows. At the same time, in a bid to protect the software crown jewels, he launched a prohibitively expensive and ethically and practically ill-advised lawsuit against Microsoft for copying MacOS’s “look and feel” in Windows.

Apple’s attempts to woo corporate America by acting just as bland and conventional as everyone else bore little fruit; the Macintosh itself remained too incompatible, too expensive, and too indelibly strange to lure cautious purchasing managers into the fold. Meanwhile Apple’s prices remained too high for any but the most well-heeled private users. And so the Mac soldiered on with a 5 to 10 percent market share, buoyed by a fanatically loyal user base who still saw revolutionary potential in it, even as they complained about how many of its ideas Microsoft and others had stolen. Admittedly, their numbers were not insignificant: there were about 3 and a half million members of the Macintosh family by 1990. They were enough to keep Apple afloat and basically profitable, at least for now, but already by the early 1990s most new Macs were being sold “within the family,” as it were. The Mac became known as the platform where the visionaries tried things out; if said things proved promising, they then reached the masses in the form of Windows implementations. CD-ROM, the most exciting new technology of the early 1990s, was typical. The Mac pioneered this space; Mediagenic’s The Manhole, the very first CD-ROM entertainment product, shipped first on that platform. Yet most of the people who heard the hype and went out to buy a “multimedia PC” in the years that followed brought home a Wintel machine. The Mac was a sort of aspirational showpiece platform; in defiance of the Mac’s old “computer for the rest of us” tagline, Windows was the place where the majority of ordinary people did ordinary things.

The state of MacOS added weight to these showhorse-versus-workhorse stereotypes. Its latest incarnation, known as System 6, had fallen alarmingly behind the state of the art in computing by 1990. Once one looked beyond its famously intuitive and elegant user interface, one found that it lacked robust support for multitasking; lacked any way to address memory beyond 8 MB; lacked the virtual memory that would allow users to open more and larger applications than the physical memory allowed; lacked the memory protection that could prevent errant applications from taking down the whole system. Having been baked into many of the operating system’s core assumptions from the start — MacOS had originally been designed to run on a machine with no hard drive and just 128 K of memory — these limitations were infuriatingly difficult to remedy after the fact. Thus Apple struggled mightily with the creation of a System 7, their attempt to do just that. When System 7 finally shipped in May of 1991, two years after Apple had initially promised it would, it still lagged behind Windows under the hood in some ways: for example, it still lacked comprehensive memory protection.

The problems which dogged the Macintosh were typical of any computing platform that attempts to survive beyond the technological era which spawned it. Keeping up with the times means hacking and kludging the original vision, as efficiency and technical elegance give way to the need just to make it work, by hook or by crook. The original Mac design team had been given the rare privilege of forgetting about backward compatibility — given permission to build something truly new and “insanely great,” as Steve Jobs had so memorably put it. That, needless to say, was no longer an option. Every decision at Apple must now be made with an eye toward all of the software that had been written for the Mac in the past seven years or so. People depended on it now, which sharply limited the ways in which it could be changed; any new idea that wasn’t compatible with what had come before was an ipso-facto nonstarter. Apple’s clever programmers doubtless could have made a faster, more stable, all-around better operating system than System 7 if they had only had free rein to do so. But that was pie-in-the-sky talk.

Yet the most pressing of all the technical problems confronting the Macintosh as it aged involved its hardware rather than its software. Back in 1984, the design team had hitched their wagon to the slickest, sexiest new CPU in the industry at the time: the Motorola 68000. And for several years, they had no cause to regret that decision. The 68000 and its successor models in the same family were wonderful little chips — elegant enough to live up to even the Macintosh ideal of elegance, an absolute joy to program. Even today, many an old-timer will happily wax rhapsodic about them if given half a chance. (Few, for the record, have similarly fond memories of Intel’s chips.)

But Motorola was both a smaller and a more diversified company than Intel, the international titan of chip-making. As time went on, they found it more and more difficult to keep up with the pace set by their rival. Lacking the same cutting-edge fabrication facilities, it was hard for them to pack as many circuits into the same amount of space. Matters began to come to a head in 1989, when Intel released the 80486, a chip for which Motorola had nothing remotely comparable. Motorola’s response finally arrived in the form of the roughly-equivalent-in-horsepower 68040 — but not until more than a year later, and even then their chip was plagued by poor heat dissipation and heavy power consumption in many scenarios. Worse, word had it that Motorola was getting ready to give up on the whole 68000 line; they simply didn’t believe they could continue to compete head-to-head with Intel in this arena. One can hardly overstate how terrifying this prospect was for Apple. An end to the 68000 line must seemingly mean the end of the Macintosh, at least as everyone knew it; MacOS, along with every application ever written for the platform, were inextricably bound to the 68000. Small wonder that John Sculley started talking about Apple as a “software company.” It looked like their hardware might be going away, whether they liked it or not.

Motorola was, however, peddling an alternative to the 68000 line, embodying one of the biggest buzzwords in computer-science circles at the time: “RISC,” short for “Reduced Instruction Set Computer.” Both the Intel x86 line and the Motorola 68000 line were what had been retroactively named “CISC,” or “Complex Instruction Set Computers”: CPUs whose set of core opcodes — i.e., the set of low-level commands by which they could be directly programmed — grew constantly bigger and more baroque over time. RISC chips, on the other hand, pared their opcodes down to the bone, to only those commands which they absolutely, positively could not exist without. This made them less pleasant for a human programmer to code for — but then, the vast majority of programmers were working by now in high-level languages rather than directly controlling the CPU in assembly language anyway. And it made the programs written to run on them bigger, generally speaking, however they were produced — but then, most people by 1990 were willing to trade a bit more memory usage for extra speed. To compensate for these disadvantages, RISC chips could be simpler in terms of circuitry than CISC chips of equivalent power, making them cheaper and easier to manufacture. They also demanded less energy and produced less heat — the computer engineer’s greatest enemy — at equivalent clock speeds. As of yet, only one RISC chip was serving as the CPU in mass-market personal computers: the ARM chip, used in the machines of the British PC maker Acorn, which weren’t even sold in the United States. Nevertheless, Motorola believed RISC’s time had come. By switching to RISC, they wouldn’t need to match Intel in terms of transistors per square millimeter to produce chips of equal or greater speed. Indeed, they’d already made a RISC CPU of their own, called the 88000, in which they were eager to interest Apple.
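
To make the trade-off concrete, here is a toy sketch in Python. The “instructions” are invented pseudo-machine code, not the actual 68000 or 88000 instruction sets; the point is only to show the same operation expressed once as a single complex memory-to-memory instruction and once as a longer run of simple load/add/store steps.

```python
# Toy illustration of the CISC/RISC trade-off described above:
# the same operation, memory["dst"] += memory["src"], written two ways.

memory = {"src": 5, "dst": 7}
registers = {}

# CISC-style: one "complex" instruction that reads and writes memory directly.
cisc_program = [("add_mem_to_mem", "src", "dst")]

# RISC-style: only loads and stores touch memory, so the program is longer.
risc_program = [
    ("load",  "r1", "src"),
    ("load",  "r2", "dst"),
    ("add",   "r2", "r1"),
    ("store", "r2", "dst"),
]

def run(program):
    for op, a, b in program:
        if op == "add_mem_to_mem":
            memory[b] += memory[a]
        elif op == "load":
            registers[a] = memory[b]
        elif op == "add":
            registers[a] += registers[b]
        elif op == "store":
            memory[b] = registers[a]

run(risc_program)
print(memory["dst"], len(cisc_program), len(risc_program))   # 12 1 4
```

The longer instruction list is the price paid for hardware that only ever has to decode a handful of simple operations.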

They found a receptive audience among Apple’s programmers and engineers, who loved Motorola’s general design aesthetic. Already by the spring of 1990, Apple had launched two separate internal projects to study the possibilities for RISC in general and the 88000 in particular. One, known as Project Jaguar, envisioned a clean break with the past, in the form of a brand new computer that would be so amazing that people would be willing to accept that none of their existing software would run on it. The other, known as Project Cognac, studied whether it might be possible to port the existing MacOS to the new architecture, and then — and this was the really tricky part — find a way to make existing applications which had been compiled for a 68000-based Mac run unchanged on the new machine.

At first, the only viable option for doing so seemed to be a sort of Frankenstein’s monster of a computer, containing both an 88000- and a 68000-series CPU. The operating system would boot and run on the 88000, but when the user started an application written for an older, 68000-based Mac, it would be automatically kicked over to the secondary CPU. Within a few years, so the thinking went, all existing users would upgrade to the newer models, all current software would get recompiled to run natively on the RISC chip, and the 68000 could go away. Still, no one was all that excited by this approach; it seemed the worst Macintosh kludge yet, the very antithesis of what the machine was supposed to be.

A eureka moment came in late 1990, with the discovery of what Cognac project leader Jack McHenry came to call the “90/10 Rule.” Running profilers on typical applications, his team found that in the case of many or most of them it was the operating system, not the application itself, that consumed 90 percent or more of the CPU cycles. This was an artifact — for once, a positive one! — of the original MacOS design, which offered programmers an unprecedentedly rich interface toolbox meant to make coding as quick and easy as possible and, just as importantly, to give all applications a uniform look and feel. Thus an application simply asked for a menu containing a list of entries; it was then the operating system that did all the work of setting it up, monitoring it, and reporting back to the application when the user chose something from it. Ditto buttons, dialog boxes, etc. Even something as CPU-intensive as video playback generally happened through the operating system’s QuickTime library rather than the application actually employing it.

All of this meant that it ought to be feasible to emulate the 68000 entirely in software. The 68000 code would necessarily run slowly and inefficiently through emulation, wiping out all of the speed advantages of the new chip and then some. Yet for many or most applications the emulator would only need to be used about 10 percent of the time. The other 90 percent of the time, when the operating system itself was doing things at native speed, would more than make up for it. In due course, applications would get recompiled and the need for 68000 emulation would largely go away. But in the meanwhile, it could provide a vital bridge between the past and the future — a next-generation Mac that wouldn’t break continuity with the old one, all with a minimum of complication, for Apple’s users and for their hardware engineers alike. By mid-1991, Project Cognac had an 88000-powered prototype that could run a RISC-based MacOS and legacy Mac applications together.
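
The arithmetic behind that bet is easy to sketch. The numbers in the snippet below are invented for illustration (the real ratios would have depended on the particular chips and applications), but they show why even a slow emulator could come out ahead under a 90/10 split.

```python
# Back-of-the-envelope version of the 90/10 argument, with invented numbers.
# Say a typical application spends 90% of its time in the (now native) toolbox
# and 10% in its own (now emulated) 68000 code, measured against an older Mac.

os_share, app_share = 0.90, 0.10   # fractions of the original running time
native_speedup      = 3.0          # hypothetical: native RISC code runs 3x faster
emulation_slowdown  = 3.0          # hypothetical: emulated 68000 code runs 3x slower

new_time = os_share / native_speedup + app_share * emulation_slowdown
print(f"relative running time: {new_time:.2f}")   # 0.60 -> still roughly 1.7x faster overall
```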

And yet this wasn’t to be the final form of the RISC-based Macintosh. For, just a few months later, Apple and IBM made an announcement that the technology press billed — sometimes sarcastically, sometimes earnestly — as the “Deal of the Century.”

Apple had first begun to talk with IBM in early 1990, when Michael Spindler, the former’s president, had first reached out to Jack Kuehler, his opposite number at IBM. It seemed that, while Apple’s technical rank and file were still greatly enamored with Motorola, upper management was less sanguine. Having been burned once with the 68000, they were uncertain about Motorola’s commitment and ability to keep evolving the 88000 over the long term.

It made a lot of sense in the abstract for any company interested in RISC technology, as Apple certainly was, to contact IBM; it was actually IBM who had invented the RISC concept back in the mid-1970s. Not all that atypically for such a huge company with so many ongoing research projects, they had employed the idea for years only in limited, mostly subsidiary usage scenarios, such as mainframe channel controllers. Now, though, they were just introducing a new line of “workstation computers” — meaning extremely high-powered desktop computers, too expensive for the consumer market — which used a RISC chip called the POWER CPU that was the heir to their many years of research in the field. Like the workstations it lay at the heart of, the chip was much too expensive and complex to become the brain of Apple’s next generation of consumer computers, but it might, thought Spindler, be something to build upon. And he knew that, with IBM’s old partnership with Microsoft slowly collapsing into bickering acrimony, Big Blue might just be looking for a new partner.

The back-channel talks were intermittent and hyper-cautious at first, but, as the year wore on and the problems both of the companies faced became more and more obvious, the discussions heated up. The first formal meeting took place in February of 1991 or shortly thereafter, at an IBM facility in Austin, Texas. The Apple people, knowing IBM’s ultra-conservative reputation and wishing to make a good impression, arrived neatly groomed and dressed in three-piece suits, only to find their opposite numbers, having acted on the same motivation, sitting there in jeans and denim shirts.

That anecdote illustrates how very much both sides wanted to make this work. And indeed, the two parties found it much easier to work together than anyone might have imagined. John Sculley, the man who really called the shots at Apple, found that he got along smashingly with Jack Kuehler, to the extent that the two were soon talking almost every day. After beginning as a fairly straightforward discussion of whether IBM might be able and willing to make a RISC chip suitable for the Macintosh, the negotiations just kept growing in scale and ambition, spurred on by both companies’ deep-seated desire to stick it to Microsoft and the Wintel hegemony in any and all possible ways. They agreed to found a joint subsidiary called Taligent, staffed initially with the people from Apple’s Project Jaguar, which would continue to develop a brand new operating system that could be licensed by any hardware maker, just like MS-DOS and Windows (and for that matter IBM’s already extant OS/2). And they would found another subsidiary called Kaleida Labs, to make a cross-platform multimedia scripting engine called ScriptX.

Still, the core of the discussions remained IBM’s POWER architecture — or rather the PowerPC, as the partners agreed to call the cost-reduced, consumer-friendly version of the chip. Apple soon pulled Motorola into these parts of the talks, thus turning a bilateral into a trilateral negotiation, and providing the name for their so-called “AIM alliance” — “AIM” for Apple, IBM, and Motorola. IBM had never made a mass-market microprocessor of their own before, noted Apple, and Motorola’s experience could serve them well, as could their chip-fabrication facilities once actual production began. The two non-Apple parties were perhaps less excited at the prospect of working together — Motorola in particular must have been smarting at the rejection of their own 88000 processor which this new plan would entail — but made nice and got along.

Jack Kuehler and John Sculley brandish what they call their “marriage certificate,” looking rather disturbingly like Neville Chamberlain declaring peace in our time. The marriage would not prove an overly long or happy one.

On October 2, 1991 — just six weeks after the first 68040-based Macintosh models had shipped — Apple and IBM made official the rumors that had been swirling around for months. At a joint press briefing held inside the Fairmont Hotel in downtown San Francisco, they trumpeted all of the initiatives I’ve just described. The Deal of the Century, they said, would usher in the next phase of personal computing. Wintel must soon give way to the superiority of a PowerPC-based computer running a Taligent operating system with ScriptX onboard. New Apple Macintosh models would also use the PowerPC, but the relationship between them and these other, Taligent-powered machines remained vague.

Indeed, it was all horribly confusing. “What Taligent is doing is not designed to replace the Macintosh,” said Sculley. “Instead we think it complements and enhances its usefulness.” But what on earth did that empty corporate speak even mean? When Apple said out of the blue that they were “not going to do to the Macintosh what we did to the Apple II” — i.e., orphan it — it rather made you suspect that that was exactly what they meant to do. And what did it all mean for IBM’s OS/2, which Big Blue had been telling a decidedly unconvinced public was also the future of personal computing for several years now? “I think the message in those agreements for the future of OS/2 is that it no longer has a future,” said one analyst. And then, what was Kaleida and this ScriptX thing supposed to actually do?

So much of the agreement seemed so hopelessly vague. Compaq’s vice president declared that Apple and IBM must be “smoking dope. There’s no way it’s going to work.” One pundit called the whole thing “a con job. There’s no software, there’s no operating system. It’s just a last gasp of extinction by the giants that can’t keep up with Intel.” Apple’s own users were baffled and consternated by this sudden alliance with the company which they had been schooled to believe was technological evil incarnate. A grim joke made the rounds: what do you get when you cross Apple and IBM? The answer: IBM.

While the journalists reported and the pundits pontificated, it was up to the technical staff at Apple, IBM, and Motorola to make PowerPC computers a reality. Like their colleagues who had negotiated the deal, they all got along surprisingly well; once one pushed past the surface stereotypes, they were all just engineers trying to do the best work possible. Apple’s management wanted the first PowerPC-based Macintosh models to ship in January of 1994, to commemorate the platform’s tenth anniversary by heralding a new technological era. The old Project Cognac team, now with the new code name of “Piltdown Man” after the famous (albeit fraudulent) “missing link” in the evolution of humanity, was responsible for making this happen. For almost a year, they worked on porting MacOS to the PowerPC, as they’d previously done to the 88000. This time, though, they had no real hardware with which to work, only specifications and software emulators. The first prototype chips finally arrived on September 3, 1992, and they redoubled their efforts, pulling many an all-nighter. Thus MacOS booted up to the desktop for the first time on a real PowerPC-based machine just in time to greet the rising sun on the morning of October 3, 1992. A new era had indeed dawned.

Their goal now was to make a PowerPC-based Macintosh work exactly like any other, only faster. MacOS wouldn’t even get a new primary version number for the first PowerPC release; this major milestone in Mac history would go under the name of System 7.1.2, a name more appropriate to a minor maintenance release. It looked so identical to what had come before that its own creators couldn’t spot the difference; they wound up lighting up a single extra pixel in the PowerPC version just so they could know which was which.

Their guiding rule of an absolutely seamless transition applied in spades to the 68000 emulation layer, duly ported from the 88000 to the PowerPC. An ordinary user should never have to think about — should not even have to know about — the emulation that was happening beneath the surface. Another watershed moment came in June of 1993, when the team brought a PowerPC prototype machine to MacHack, a coding conference and competition. Without telling any of the attendees what was inside the machine, the team let them use it to demonstrate their boundary-pushing programs. The emulation layer performed beyond their most hopeful prognostications. It looked like the Mac’s new lease on life was all but a done deal from the engineering side of things.

But alas, the bonhomie exhibited by the partner companies’ engineers and programmers down in the trenches wasn’t so marked in their executive suites after the deal was signed. The very vagueness of so many aspects of the agreement had papered over what were in reality hugely different visions of the future. IBM, a company not usually given to revolutionary rhetoric, had taken at face value the high-flown words spoken at the announcement. They truly believed that the agreement would mark a new era for personal computing in general, with a new, better hardware architecture in the form of PowerPC and an ultra-modern operating system to run on it in the form of Taligent’s work. Meanwhile it was becoming increasingly clear that Apple’s management, who claimed to be changing the world five times before breakfast on most days, had in reality seen Taligent largely as a hedge in case their people should prove unable to create a PowerPC Macintosh that looked like a Mac, felt like a Mac, and ran vintage Mac software. As Project Piltdown Man’s work proceeded apace, Apple grew less and less enamored with those other, open-architecture ideas IBM was pushing. The Taligent people didn’t help their cause by falling headfirst into a pit of airy computer-science abstractions and staying mired there for years, all while Project Piltdown Man just kept plugging away, getting things done.

The first two and a half years of the 1990s were marred by a mild but stubborn recession in the United States, during which the PC industry had a particularly hard time of it. After the summer of 1992, however, the economy picked up steam and consumer computing eased into what would prove its longest and most sustained boom of all time, borne along on a wave of hype about CD-ROM and multimedia, along with the simple fact that personal computers in general had finally evolved to a place where they could do useful things for ordinary people in a reasonably painless way. (A bit later in the boom, of course, the World Wide Web would come along to provide the greatest impetus of all.)

And yet the position of both Apple and IBM in the PC marketplace continued to get steadily worse while the rest of their industry soared. At least 90 percent of the computers that were now being sold in such impressive numbers ran Microsoft Windows, leaving OS/2, MacOS, and a few other oddballs to divide the iconoclasts, the hackers, and the non-conformists of the world among themselves. While IBM continued to flog OS/2, more out of stubbornness than hope, Apple tried a little bit of everything to stop the slide in market share and remain relevant. Still not entirely certain whether their future lay with open architectures or their own closed, proprietary one, they started porting selected software to Windows, including most notably QuickTime, their much-admired tool for encoding and playing video. They even shipped a Mac model that could also run MS-DOS and Windows, thanks to an 80486 housed in its case alongside its 68040. And they entered into a partnership with the networking giant Novell to port MacOS itself to Intel hardware — a partnership that, like many Apple initiatives of these years, petered out without ultimately producing much of anything. Perhaps most tellingly of all, this became the only period in Apple’s history when the company felt compelled to compete solely on price. They started selling Macs in department stores for the first time, where a stream of very un-Apple-like discounts and rebates greeted prospective buyers.

While Apple thus toddled along without making much headway, IBM began to annihilate all previous conceptions of how much money a single company could possibly lose, posting oceans of red that looked more like the numbers found in macroeconomic research papers than entries in an accountant’s books. The PC marketplace was in a way one of their smaller problems. Their mainframe business, their real bread and butter since the 1950s, was cratering as customers fled to the smaller, cheaper computers that could often now do the jobs of those hulking giants just as well. In 1991, when IBM first turned the corner into loss, they did so in disconcertingly convincing fashion: they lost $2.82 billion that year. And that was only the beginning. Losses totaled $4.96 billion in 1992, followed by $8.1 billion in 1993. IBM lost more money during those three years alone than any other company in the history of the world to that point; their losses exceeded the gross domestic product of Ecuador.

The employees at both Apple and IBM paid the toll for the confusions and prevarications of these years: both companies endured rounds of major layoffs. Those at IBM were the first in the company’s long history. Big Blue had for decades fostered a culture of employment for life; their motto had always been, “If you do your job, you will always have your job.” This, it was now patently obvious, was no longer the case.

The bloodletting reached both companies’ executive suites as well, within a few months of one another. On April 1, 1993, John Akers, the CEO of IBM, was ousted after a seven-year tenure which one business writer called “the worst record of any chief executive in the history of IBM.” Three months later, following a terrible quarterly earnings report and a drop in share price of 58 percent in the span of six months, Michael Spindler replaced John Sculley as the CEO of Apple.

These, then, were the storm clouds under which the PowerPC architecture became a physical reality.

The first PowerPC computers to be given a public display bore an IBM rather than an Apple logo on their cases. They arrived at the Comdex trade show in November of 1993, running a port of OS/2. IBM also promised a port of AIX — their version of the Unix operating system — while Sun Microsystems announced plans to port their Unix-based Solaris operating system and, most surprisingly of all, Microsoft talked about porting over Windows NT, the more advanced, server-oriented version of their world-conquering operating environment. But, noted the journalists present, “it remains unclear whether users will be able to run Macintosh applications on IBM’s PowerPC” — a fine example of the confusing messaging the two alleged allies constantly trailed in their wake. Further, there was no word at all about the status of the Taligent operating system that was supposed to become the real PowerPC standard.

Meanwhile over at Apple, Project Piltdown Man was becoming that rarest of unicorns in tech circles: a major software-engineering project that is actually completed on schedule. The release of the first PowerPC Macs was pushed back a bit, but only to allow the factories time to build up enough inventory to meet what everyone hoped would be serious consumer demand. Thus the “Power Macs” made their public bow on March 14, 1994, at New York City’s Lincoln Center, in three different configurations clocked at speeds between 60 and 80 MHz. Unlike IBM’s machines, which were shown six months before they shipped, the Power Macs were available for anyone to buy the very next day.

The initial trio of Power Macs.

This speed test, published in MacWorld magazine, shows how all three of the Power Mac machines dramatically outperform top-of-the-line Pentium machines when running native code.

They were greeted with enormous excitement and enthusiasm by the Mac faithful, who had been waiting anxiously for a machine that could go head-to-head with computers built around Intel’s new Pentium chip, the successor to the 80486. This the Power Macs could certainly do; by some benchmarks at least, the PowerPC doubled the overall throughput of a Pentium. World domination must surely be just around the corner, right?

Predictably enough, the non-Mac-centric technology press greeted the machines’ arrival more skeptically than the hardcore Mac-heads. “I think Apple will sell [a] million units, but it’s all going to be to existing Mac users,” said one market researcher. “DOS and Windows running on Intel platforms is still going to be 85 percent of the market. [The Power Mac] doesn’t give users enough of a reason to change.” Another noted that “the Mac users that I know are not interested in using Windows, and the Windows users are not interested in using the Mac. There has to be a compelling reason [to switch].”

In the end, these more guarded predictions proved the most accurate. Apple did indeed sell an impressive spurt of Power Macs in the months that followed, but almost entirely to the faithful. One might almost say that they became a victim of Project Piltdown Man’s success: the Power Mac really did seem exactly like any other Macintosh, except that it ran faster. And even this fact could be obscured when running legacy applications under emulation, as most people were doing in the early months: despite Project Piltdown Man’s heroic efforts, applications like Excel, Word, and Photoshop actually ran slightly slower on a Power Mac than on a top-of-the-line 68040-based machine. So, while the transition to PowerPC allowed the Macintosh to persist as a viable computing platform, it ultimately did nothing to improve upon its small market share. And because the PowerPC MacOS was such a direct and literal port, it still retained all of the shortcomings of MacOS in general. It remained a pretty interface stretched over some almost laughably archaic plumbing. The new generation of Mac hardware wouldn’t receive an operating system truly, comprehensively worthy of it until OS X arrived seven long years later.

Still, these harsh realities shouldn’t be allowed to detract from how deftly Apple — and particularly the unsung coders of Project Piltdown Man — executed the transition. No one before had ever picked up a consumer-computing platform bodily and moved it to an entirely new hardware architecture, much less done it so transparently that many or most users never really had to think about what was happening at all. (There would be only one comparable example in computing’s future. And, incredibly, the Mac would once again be the platform in question: in 2006, Apple would move from the fading PowerPC line to Intel’s chips — if you can’t beat ’em, join ’em, right? — relying once again on a cleverly coded software emulator to see them through the period of transition. The Macintosh, it seems, has more lives than Lazarus.)

Although the briefly vaunted AIM alliance did manage to give the Macintosh a new lease on life, it succeeded in very little else. The PowerPC architecture, which had cost the alliance more than $1 billion to develop, went nowhere in its non-Mac incarnations. IBM’s own machines sold in such tiny numbers that the question of whether Apple would ever allow them to run MacOS was all but rendered moot. (For the record, though: they never did.) Sun Solaris and Microsoft Windows NT did come out in PowerPC versions, but their sales couldn’t justify their existence, and within a year or two they went away again. The bold dream of creating a new reference platform for general-purpose computing to rival Wintel never got off the ground, as it became painfully clear that said dream had been taken more to heart by IBM than by Apple. Only after the millennium would the PowerPC architecture find a measure of mass-market success outside the Mac, when it was adopted by Nintendo, Microsoft, and Sony for use in videogame consoles. In this form, then, it finally paid off for IBM; far more PowerPC-powered consoles than even Macs were sold over the lifetime of the architecture. PowerPC also eventually saw use in other specialized applications, such as satellites and planetary rovers employed by NASA.

Success, then, is always relative. But not so the complete lack thereof, as Kaleida and Taligent proved. Kaleida burned through $200 million before finally shipping its ScriptX multimedia-presentation engine years after other products, most notably Macromedia’s Director, had already sewn up that space; it was disbanded and harvested for scraps by Apple in November of 1995. Taligent burned through a staggering $400 million over the same period of time, producing only some tepid programming frameworks in lieu of the revolutionary operating system that had been promised, before being absorbed back into IBM.

There is one final fascinating footnote to this story of a Deal of the Century that turned out to be little more than a strange anecdote in computing history. In the summer of 1994, IBM, having by now stopped the worst of the bleeding and settled into their new life as a smaller, far less dominant company, offered to buy Apple outright for a premium of $5 over their current share price. In IBM’s view, the synergies made sense: the Power Macs were selling extremely well, which was more than could be said for IBM’s PowerPC models. Why not go all in?

Ironically, it was those same healthy sales numbers that scuppered the deal in the end. If the offer had come a year earlier, when a money-losing Apple was just firing John Sculley, they surely would have jumped at it. But now Apple was feeling their oats again, and by no means entirely without reason; sales were up more than 20 percent over the previous year, and the company was once more comfortably in the black. So, they told IBM thanks, but no thanks. The same renewed taste of success also caused them to reject serious inquiries from Philips, Sun Microsystems, and Oracle. Word had it that new CEO Michael Spindler was convinced not only that the Power Mac had saved Apple, but that it had fundamentally altered their position in the marketplace.

The following year revealed how misguided that thinking really was; the Power Mac had fixed none of Apple’s fundamental problems. That year it was Microsoft who cemented their world domination instead, with the release of Windows 95, while Apple grappled with the reality that almost all of those Power Mac sales of the previous year had been to existing members of the Macintosh family, not to the new customers they so desperately needed to attract. What happened now that everyone in the family had dutifully upgraded? The answer to that question wasn’t pretty: Apple plunged off a financial cliff as precipitous in its own way as the one which had nearly destroyed IBM a few years earlier. Now, nobody was interested in acquiring them anymore. The pundits smelled the stink of death; it’s difficult to find an article on Apple written between 1995 and 1998 which doesn’t include the adjective “beleaguered.” Why buy now when you can sift through the scraps at the bankruptcy auction in just a little while?

Apple didn’t wind up dying, of course. Instead a series of improbable events, beginning with the return of prodigal-son Steve Jobs in 1997, turned them into the richest single company in the world — yes, richer even than Microsoft. These are stories for other articles. But for now, it’s perhaps worth pausing for a moment to think about an alternate timeline where the Macintosh became an IBM product, and the Deal of the Century that got that ball rolling thus came much closer to living up to its name. Bizarre, you say? Perhaps. But no more bizarre than what really happened.

(Sources: the books Insanely Great: The Life and Times of Macintosh by Steven Levy, Apple Confidential 2.0: The Definitive History of the World’s Most Colorful Company by Owen W. Linzmayer, Infinite Loop: How the World’s Most Insanely Great Computer Company Went Insane by Michael S. Malone, Big Blues: The Unmaking of IBM by Paul Carroll, and The PowerPC Macintosh Book by Stephan Somogyi; InfoWorld of September 24 1990, October 15 1990, December 3 1990, April 8 1991, May 13 1991, May 27 1991, July 1 1991, July 8 1991, July 15 1991, July 22 1991, August 5 1991, August 19 1991, September 23 1991, September 30 1991, October 7 1991, October 21 1991, November 4 1991, December 30 1991, January 13 1992, January 20 1992, February 3 1992, March 9 1992, March 16 1992, March 23 1992, April 27 1992, May 11 1992, May 18 1992, June 15 1992, June 29 1992, July 27 1992, August 3 1992, August 10 1992, August 17 1992, September 7 1992, September 21 1992, October 5 1992, October 12 1992, October 19 1992, December 14 1992, December 21 1992, December 28 1992, January 11 1993, February 1 1993, February 22 1993, March 8 1993, March 15 1993, April 5 1993, April 12 1993, May 17 1993, May 24 1993, May 31 1993, June 21 1993, June 28 1993, July 5 1993, July 12 1993, July 19 1993, August 2 1993, August 9 1993, August 30 1993, September 6 1993, September 27 1993, October 4 1993, October 11 1993, October 18 1993, November 1 1993, November 15 1993, November 22 1993, December 6 1993, December 13 1993, December 20 1993, January 10 1994, January 31 1994, March 7 1994, March 14 1994, March 28 1994, April 25 1994, May 2 1994, May 16 1994, June 6 1994, June 27 1994; MacWorld of September 1992, February 1993, July 1993, September 1993, October 1993, November 1993, February 1994, and May 1994; Byte of November 1984. Online sources include IBM’s own corporate-history timeline and a vintage IBM lecture on the PowerPC architecture.)

 
 


Master of Orion

 

Given the shadow which the original Master of Orion still casts over the gaming landscape of today, one might be forgiven for assuming, as many younger gamers doubtless do, that it was the very first conquer-the-galaxy grand-strategy game ever made. The reality, however, is quite different. Its position of influence is hardly misbegotten, resting as it does on other very good reasons, but it was already the heir to a long tradition of such games at the time of its release in 1993. In fact, the tradition dates back to well before computer games as we know them today even existed.

The roots of the strategic space opera can be traced back to the tabletop game known as Diplomacy, designed by Allan B. Calhamer and first published in 1959. Taking place in the years just prior to World War I, it put seven players in the roles of leaders of the various “great powers” of Europe. Although it included a playing board, tokens, and most of the other accoutrements of a typical board game, the real action, at least if you were playing it properly, was entirely social, in the alliances that were forged and broken and the shady deals that were struck. In this respect, it presaged many of the ideas that would later go into Dungeons & Dragons and other role-playing games. It thus represents an instant in gaming history as seminal in its own way as the 1954 publication of Tactics, the canonical first tabletop wargame and the one which touched off the hobby of experiential gaming in general. But just as importantly for our purposes, Diplomacy’s shifting alliances and the back-stabbings they led to would become an essential part of countless strategic space operas, including Master of Orion 34 years later.

Because getting seven friends together in the same room for the all-day affair that was a complete game of Diplomacy was almost as hard in the 1960s as it is today, inventive gamers developed systems for playing it via post; the first example of this breed would seem to date from 1963. And once players had started modifying the rules of Diplomacy to make it work under this new paradigm, it was a relatively short leap to begin making entirely new play-by-post games with new themes which shared some commonalities of approach with Calhamer’s magnum opus.

Thus in December of 1966, Dan Brannon announced a play-by-post game called Xeno, whose concept sounds very familiar indeed in the broad strokes. Each player started with a cluster of five planets — a tiny toehold in a sprawling, unknown galaxy waiting to be colonized. “The vastness of the playing space, the secrecy of the identity of the other players, the secrecy of the locations of ships and planets, the total lack of information without efforts of investigation, all these factors are meant to create the real problems of a race trying to expand to other planets,” wrote Brannon. Although the new game would be like Diplomacy in that it would presumably still culminate in negotiations, betrayals, and the inevitable final war to determine the ultimate victor, these stages would now be preceded by those of exploration and colonization, until a galaxy that had seemed so unfathomably big at the start proved not to be big enough to accommodate all of its would-be space empires. Certainly all of this too will be familiar to any player of Master of Orion or one of its heirs. Brannon’s game even included a tech tree of sorts, with players able to acquire better engines, weapons, and shields for their ships every eight turns they managed to survive.

In practice, Xeno played out at a pace to which the word “glacial” hardly does justice. The game didn’t really get started until September of 1967, and by a year after that just three turns had been completed. I don’t know whether a single full game of it was ever finished. Nevertheless, it proved hugely influential within the small community of experiential-gaming fanzines and play-by-post enthusiasts. The first similar game, called Galaxy and run by H. David Montgomery, had already appeared before Xeno had processed its third turn.

But the idea was, literally and figuratively speaking, too big for the medium for which it had been devised; it was just too compelling to remain confined to those few stalwart souls with the patience for play-by-post gaming. It soon branched out into two new mediums, each of which offered a more immediate sort of satisfaction.

In 1975, following rejections from Avalon Hill and others, one Howard Thompson formed his own company to publish the face-to-face board game Stellar Conquest, the first strategic space opera to appear in an actual box on store shelves. When Stellar Conquest became a success, it spawned a string of similar board games with titles like Godsfire, Outreach, Second Empire, and Starfall during this, the heyday of experiential gaming on the tabletop. But the big problem with such games was their sheer scope and math-heavy nature, which were enough to test the limits of many a salty old grognard who usually reveled in complexity. They all took at least three or four hours to play in their simplest variants, and a single game of at least one of them — SPI’s Outreach — could absorb weeks of gaming Saturdays. Meanwhile they were all dependent on pages and pages of fiddly manual calculations, in the time before spreadsheet macros or even handheld calculators were commonplace. (One hates to contemplate the plight of the Outreach group who have just spent the last two months resolving who shall become master of the galaxy, only to discover that the victor made a mistake on her production worksheet back on the second turn which invalidated all of the numbers that followed…) These games were, in other words, crying out for computerization.

Luckily, then, that too had already started to happen by the end of the 1970s. One of the reasons that play-by-post games of this type tended to run so sluggishly — beyond, that is, the inherent sluggishness of the medium itself — came down to the same problem as that faced by their tabletop progeny: the burden their size and complexity placed on their administrators. Therefore in 1976, Rick Loomis, the founder of a little company called Flying Buffalo, started running the commercial play-by-post game Starweb on what gaming historian Shannon Appelcline has called “probably the first computer ever purchased exclusively to play games” (or, at least, to administrate them): a $14,000 Raytheon 704 minicomputer. He would continue to run Starweb for more than thirty years — albeit presumably not on the same computer throughout that time.

But the first full-fledged incarnation of the computerized strategic space opera — in the sense of a self-contained game meant to be played locally on a single computer — arrived only in 1983. Called Reach for the Stars, it was the first fruit of what would turn into a long-running and prolific partnership between the Aussies Roger Keating and Ian Trout, who in that rather grandiose fashion that was so typical of grognard culture had named themselves the Strategic Studies Group. Reach for the Stars was based so heavily upon Stellar Conquest that it’s been called an outright unlicensed clone. Nevertheless, it’s a remarkable achievement for the way that it manages to capture that sense of size and scope that is such a huge part of these games’ appeal on 8-bit Apple IIs and Commodore 64s with just 64 K of memory. Although the whole is necessarily rather bare-bones compared to what would come later, the computer players’ artificial intelligence, always a point of pride with Keating and Trout, is surprisingly effective; on the harder difficulty level, the computer can truly give you a run for your money, and seems to do so without relying solely on egregious cheating.

It doesn’t look like much, but the basic hallmarks of the strategic space opera are all there in Reach for the Stars.

Reach for the Stars did very well, prompting updated ports to more powerful machines like the Apple Macintosh and IIGS and the Commodore Amiga as the decade wore on. A modest trickle of other boxed computer games of a similar stripe also appeared, albeit none which did much to comprehensively improve on SSG’s effort: Imperium Galactum, Spaceward Ho!, Armada 2525, Pax Imperia. Meanwhile the commercial online service CompuServe offered up MegaWars III, in which up to 100 players vied for control of the galaxy; it played a bit like one of those years-long play-by-post campaigns of yore compressed into four to six weeks of constant — and expensive, given CompuServe’s hourly dial-up rates — action and intrigue. Even the shareware scene got in on the act, via titles like Anacreon: Reconstruction 4021 and the earliest versions of the cult classic VGA Planets, a game which is still actively maintained and played to this day. And then, finally, along came Master of Orion in 1993 to truly take this style of game to the next level.

Had things gone just a little bit differently, Master of Orion too might have been a shareware release. It was designed in the spare time of Steve Barcia, an electrical engineer living in Austin, Texas, and programmed by Steve himself, his wife Marcia Barcia, and their friend Ken Burd. Steve claims never to have played any of the computer games I’ve just mentioned, but, as an avid and longtime tabletop gamer, he was very familiar with Stellar Conquest and a number of its successors. (No surprise there: Howard Thompson and his game were in fact also products of Austin’s vibrant board-gaming scene.)

After working on their computer game, which they called Star Lords, on and off for years, the little band of hobbyist programmers submitted it to MicroProse, whose grand-strategy game of Civilization, a creation of their leading in-house designer Sid Meier, had just taken the world by storm. A MicroProse producer named Jeff Johannigman — himself another member of the Austin gaming fraternity, as it happened, one who had just left Origin Systems in Austin to join MicroProse up in Baltimore — took a shine to the unpolished gem and signed its creators to develop it further. Seeing their hobby about to become a real business, the trio quit their jobs, took the name of SimTex, and leased a cramped office above a gyro joint to finish their game under Johannigman’s remote supervision, with a little additional help from MicroProse’s art department.

A fellow named Alan Emrich was one of the most prominent voices in strategy-game criticism at the time; he was the foremost scribe on the subject at Computer Gaming World magazine, the industry’s accepted journal of record, and had just published a book-length strategy guide on Civilization in tandem with Johnny Wilson, the same magazine’s senior editor. Thanks to that project, Emrich was well-connected with MicroProse, and was happy to serve as a sounding board for them. And so, one fateful day very early in 1993, Johannigman asked if he’d like to have a look at a new submission called Star Lords.

As Emrich himself puts it, his initial impressions “were not that great.” He remembers thinking the game looked like “something from the late 1980s” — an eternity in the fast-changing computing scene of the early 1990s. Yet there was just something about it; the more he played, the more he wanted to keep playing. So, he shared Star Lords with his friend Tom Hughes, with whom he’d been playing tabletop and computerized strategy games for twenty years. Hughes had the same experience. Emrich:

After intense, repeated playing of the game, Tom and I were soon making numerous suggestions to [Johannigman], who, in turn, got tired of passing them on to the designer and lead programmer, Steve Barcia. Soon, we were talking to Steve directly. The telephone lines were burning regularly and a lot of ideas went back and forth. All the while, Steve was cooking up a better and better game. It was during this time that the title changed to Master of Orion and the game’s theme and focus crystallized.

I wrote a sneak preview for Computer Gaming World magazine where I indicated that Master of Orion was shaping up to be a good game. It had a lot of promise, but I didn’t think it was up there with Sid Meier’s Civilization, the hobby’s hallmark of strategy gaming at that time. But by the time that story hit the newsstands, I had changed my mind. I found myself still playing the game constantly and was reflecting on that fact when Tom called me. We talked about Master of Orion, of course, and Tom said, “You know, I think this game might become more addicting even than Civilization.” I replied, “You know, I think it already is.”

I was hard on Emrich in earlier articles for his silly assertion that Civilization’s inclusion of global warming as a threat to progress and women’s suffrage as a Wonder of the World constituted some form of surrender to left-wing political correctness, as I was for his even sillier assertion that the game’s simplistic and highly artificial economic model could somehow be held up as proof for the pseudo-scientific theory of trickle-down economics. Therefore let me be very clear in praising him here: Emrich and Hughes played an absolutely massive role in making Master of Orion one of the greatest strategy games of all time. Their contribution was such that SimTex took the unusual step of adding to the credits listing a “Special Thanks to Alan Emrich and Tom Hughes for their invaluable design critiquing and suggestions.” If anything, that credit errs on the side of understatement. By all indications, a pair of full-fledged co-designer credits wouldn’t have been out of proportion to the reality of their contribution. The two would go on to write the exhaustive official strategy guide for the game, a tome numbering more than 400 pages. No one could have been more qualified to tackle that project.

As if all that wasn’t enough, Emrich did one more great service for Master of Orion and, one might even say, for gaming in general. In a “revealing sneak preview” of the game, published in the September 1993 issue of Computer Gaming World, he pronounced it to be “rated XXXX.” After the requisite measure of back-patting for such edgy turns of phrase as these, Emrich settled down to explain what he really meant by the label: “XXXX” in this context stood for “EXplore, EXpand, EXploit, and EXterminate.” And thus was a new sub-genre label born. The formulation from the article was quickly shortened to “4X” by enterprising gamers uninterested in making strained allusions to pornographic films. In that form, it would be applied to countless titles going forward, right up to the present day, and retroactively applied to countless titles of the past, including all of the earlier space operas I’ve just described as well as the original Civilization — a game to which the “EXterminate” part of the label fits perhaps less well, but such is life.

Emrich’s article also creates an amusing distinction for the more pedantic ludic taxonomists and linguists among us. Although Master of Orion definitely was not, as we’ve now seen at some length, the first 4X game in the abstract, it was the very first 4X game to be called a 4X game. Maybe this accounts for some of the pride of place it holds in modern gaming culture?

However that may be, though, the lion’s share of the credit for Master of Orion‘s enduring influence must surely be ascribed to what a superb game it is in its own right. If it didn’t invent the 4X space opera, it did in some sense perfect it, at least in its digital form. It doesn’t do anything conceptually new on the face of it — you’re still leading an alien race as it expands through a randomly created galaxy, competing with other races in the fields of economics, technology, diplomacy, and warfare to become the dominant civilization — but it just does it all so well.

A new game of Master of Orion begins with you choosing a galaxy size (from small to huge), a difficulty level (from simple to impossible), and a quantity of opposing aliens to compete against (from one to five). Then you choose which specific race you would like to play; you have ten possibilities in all, drawing from a well-worn book of science-fiction tropes, from angry cats in space to hive-mind-powered insects, from living rocks to pacifistic brainiacs, alongside the inevitable humans. Once you’ve made your choice, you’re cast into the deep end — or rather into deep space — with a single half-developed planet, a colony ship for settling a second planet as soon as you find a likely candidate, two unarmed scout ships for exploring for just such a candidate, and a minimal set of starting technologies.

You must parlay these underwhelming tools into galactic domination hundreds of turns later. You can take the last part of the 4X tag literally and win out by utterly exterminating all of your rivals, but a slightly less genocidal route to victory runs through the “Galactic Council,” which meets every quarter-century (i.e., every 25 turns). Here everyone votes on which of the two currently most populous empires’ leaders they prefer to appoint as ruler of the galaxy, with “everyone” in this context including the two leading emperors themselves. Each empire gets a number of votes determined by its population, and the first to collect two-thirds of the total vote wins outright. (Well, almost… it is possible for you to refuse to respect the outcome of a vote that goes against you, but doing so will cause all of your rivals to declare immediate and perpetual war against you, whilst effectively pooling all of their own resources and technology. Good luck with that!)
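
To make the arithmetic of that vote concrete, here is a minimal sketch in Python of the two-thirds rule as just described. It is an illustration only, not Master of Orion’s actual code; weighting votes by raw population figures and the handling of abstentions are simplifying assumptions of mine.

    # A toy model of the Galactic Council vote: votes are weighted by population
    # (an assumption; the real game deals in discrete blocks of votes), the two
    # most populous empires are the candidates, and two-thirds of all votes wins.
    def council_vote(populations, ballots):
        """populations: empire -> population; ballots: voter -> chosen empire or None."""
        candidates = sorted(populations, key=populations.get, reverse=True)[:2]
        total_votes = sum(populations.values())
        tally = {c: 0 for c in candidates}
        for voter, choice in ballots.items():
            if choice in tally:
                tally[choice] += populations[voter]
        for candidate, votes in tally.items():
            if votes * 3 >= total_votes * 2:   # the two-thirds threshold
                return candidate               # the galaxy has a new master
        return None                            # no winner; play on

    # Example: the Psilons hold exactly two-thirds of the galactic population.
    pops = {"Psilons": 400, "Klackons": 120, "Humans": 80}
    votes = {"Psilons": "Psilons", "Klackons": "Klackons", "Humans": None}
    print(council_vote(pops, votes))  # -> Psilons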

A typical game of Master of Orion plays out over three broad stages. The first stage is the land grab, the wide-open exploration and colonization phase that happens before you meet your rival aliens. Here your challenge is to balance the economic development of your existing planets against your need to settle as many new ones as possible to put yourself in a good position for the mid-game. (When exactly do I stop spending my home planet’s resources on improving its own infrastructure and start using them to build more colony ships?) The mid-game begins when you start to bump into your rivals, and comes to entail much jockeying for influence, as the various races begin to sort themselves into rival factions. (The Alkaris, bird-like creatures, loathe the Mrrshans, the aforementioned race of frenzied pussycats, and their loathing is returned in kind. I don’t have strong feelings about either one — but whose side would it most behoove me to choose from a purely strategic perspective?) The endgame is nigh when there is no more room for anyone to expand, apart from taking planets from a rival by force, and the once-expansive galaxy suddenly seems claustrophobic. It often, although by no means always, is marked by a massive war that finally secures somebody that elusive two-thirds majority in the Galactic Council. (I’m so close now! Do I attack those stubbornly intractable Bulrathi to try to knock down their population and get myself over the two-thirds threshold that way, or do I keep trying to sweet-talk and bribe them into voting for me?) The length and character of all of these stages will of course greatly depend on the initial setup you chose; the first stage might be all but nonexistent in a small galaxy with five rivals, while it will go on for a long, long time indeed in a huge galaxy with just one or two opponents. (The former scenario is, for the record, far more challenging.)

And that’s how it goes, generally speaking. Yet the core genius of Master of Orion actually lies in how resistant it is to generalization. It’s no exaggeration to say that there really is no “typical” game; I’ve enjoyed plenty which played out in nothing like the pattern I’ve just described for you. I’ve played games in which I never fired a single shot in anger, even ones where I’ve never built a single armed ship of war, just as I’ve played others where I was in a constant war for survival from beginning to end. Master of Orion is gaming’s best box of chocolates; you never know what you’re going to get when you jump into a new galaxy. Everything about the design is engineered to keep you from falling back on patterns universally applicable to the “typical” game. It’s this quality, more so than any other, that makes Master of Orion so consistently rewarding. If I was to be stranded on the proverbial desert island, I have a pretty good idea of at least one of the games I’d choose to take with me.

I’ll return momentarily to the question of just how Master of Orion manages to build so much variation into a fairly simple set of core rules. I think it might be instructive to do so, however, in comparison with another game, one I’ve already had occasion to mention several times in this article: Civilization.

As I’m so often at pains to point out, game design is, like any creative pursuit, a form of public dialog. Certainly Civilization itself comes with a long list of antecedents, including most notably Walter Bright’s mainframe game Empire, Dani Bunten Berry’s PC game Seven Cities of Gold, and the Avalon Hill board game with which Civilization shares its name. Likewise, Civilization has its progeny, among them Master of Orion. By no means was it the sole influence on the latter; as we’ve seen, Master of Orion was also greatly influenced by the 4X space-opera tradition in board games, especially during its early phases of development.

Still, the mark of Civilization as well can be seen all over its finished design. (After all, Alan Emrich had just literally written the book on Civilization when he started bombarding Barcia with design suggestions…) For example, Master of Orion, unlike all of its space-opera predecessors, on the computer or otherwise, doesn’t bother at all with multiplayer options, preferring to optimize the single-player experience in their stead. One can’t help but feel that it was Civilization, which was likewise bereft of the multiplayer options that earlier grand-strategy games had always included as a matter of course, that empowered Steve Barcia and company to go this way.

At the same time, though, we cannot say that Jeff Johannigman was being particularly accurate when he took to calling Master of Orion “Civilization in space” for the benefit of journalists. For all that it’s easy enough to understand what made such shorthand so tempting — this new project too was a grand-strategy game played on a huge scale, incorporating technology, economics, diplomacy, and military conflict — it wasn’t ultimately fair to either game. Master of Orion is very much its own thing. Its interface, for example, is completely different. (Ironically, Barcia’s follow-up to Master of Orion, the fantasy 4X Master of Magic, hews much closer to Civilization in that respect.) In Master of Orion, Civilization’s influence often runs as much in a negative as a positive direction; that is to say, there are places where the later design is lifting ideas from the earlier one, but also taking it upon itself to correct perceived weaknesses in their implementation.

I have to use the qualifier “perceived” there because the two games have such different personalities. Simply put, Civilization prioritizes its fictional context over its actual mechanics, while Master of Orion does just the opposite. Together they illustrate the flexibility of the interactive digital medium, showing how great games can be great in such markedly different ways, even when they’re as closely linked in terms of genre as these two are.

Civilization explicitly bills itself as a grand journey through human history, from the time in our distant past when the first hunter-gatherers settled down in villages to an optimistic near-future in space. The rules underpinning the journey are loosey-goosey, full of potential exploits. The most infamous of these is undoubtedly the barbarian-horde strategy, in which you research only a few minimal technologies necessary for war-making and never attempt to evolve your society or participate in any meaningful diplomacy thereafter, but merely flood the world with miserable hardscrabble cities supporting primitive armies, attacking everything that moves until every other civilization is extinct. At the lower and moderate difficulty levels at least, this strategy works every single time, albeit whilst bypassing most of what the game was meant to be about. As put by Ralph Betza, a contributor to an early Civilization strategy guide posted to Usenet: “You can always play Despotic Conquest, regardless of the world you find yourself starting with, and you can always win without using any of the many ways to cheat. When you choose any other strategy, you are deliberately risking a loss in order to make the game more interesting.”

So very much in Civilization is of limited utility at best in purely mechanical terms. Many or most of the much-vaunted Wonders of the World, for example, really aren’t worth the cost you have to pay for them. But that’s okay; you pay for them anyway because you like the idea of having built the Pyramids of Giza or the Globe Theatre or Project Apollo, just as you choose not to go all Genghis Khan on the world because you’d rather build a civilization you can feel proud of. Perhaps the clearest statement of Civilization‘s guiding design philosophy can be found in the manual. It says that, even if you make it all the way to the end of the game only to see one of your rivals achieve the ultimate goal of mounting an expedition to Alpha Centauri before you do, “the successful direction of your civilization through the centuries is an achievement. You have survived countless wars, the pollution of the industrial age, and the risks of nuclear weapons.” Or, as Sid Meier himself puts it, “a game of Civilization is an epic story.”

We’re happy to preach peace and cooperation, as long as we’re the top dogs… er, birds.

Such sentiments are deeply foreign to Master of Orion; this is a zero-sum game if ever there was one. If you lose the final Galactic Council vote, there’s no attaboy for getting this far, much less any consolation delivered that the galaxy has entered a new era of peaceful cooperation with some other race in the leadership role. Instead the closing cinematic tells you that you’ve left the known galaxy and “set forth to conquer new worlds, vowing to return and claim the renowned title of Master of Orion.” (Better to rule in Hell, right?) There are no Wonders of the World in Master of Orion, and, while there is a tech tree to work through, you won’t find on it any of Civilization‘s more humanistic advances, such as Chivalry or Mysticism, or even Communism or The Corporation. What you get instead are technologies — it’s telling that Master of Orion talks about a “tech tree,” while Civilization prefers the word “advances” — with a direct practical application to settling worlds and making war, divided into the STEM-centric categories of Computers, Construction, Force Fields, Planetology, Propulsion, and Weapons.

So, Civilization is the more idealistic, more educational, perhaps even the nobler of the two games. And yet it often plays a little awkwardly — which awkwardness we forgive because of its aspirational qualities. Master of Orion‘s fictional context is a much thinner veneer to stretch over its mechanics, while words like “idealistic” simply don’t exist in its vocabulary. And yet, being without any high-flown themes to fall back on, it makes sure that its mechanics are absolutely tight. These dichotomies can create a dilemma for a critic like yours truly. If you asked me which game presents a better argument for gaming writ large as a potentially uplifting, ennobling pursuit, I know which of the two I’d have to point to. But then, when I’m just looking for a fun, challenging, intriguing game to play… well, let’s just say that I’ve played a lot more Master of Orion than Civilization over the last quarter-century. Indeed, Master of Orion can easily be read as the work of a designer who looked at Civilization and was unimpressed with its touchy-feely side, then set out to make a game that fixed all the other failings which that side obscured.

By way of a first example, let’s consider the two games’ implementation of an advances chart — or a tech tree, whichever you prefer. Arguably the most transformative single advance in Civilization is Railroads; they let you move your military units between your cities almost instantaneously, which makes attacks much easier and quicker to mount for warlike players and enables the more peaceful types to protect their holdings with a much smaller (and thus less expensive) standing army. The Railroads advance is so pivotal that some players build their entire strategy around acquiring it as soon as possible, by finding it on the advances chart as soon as the game begins in 4000 BC and working their way backward to find the absolute shortest path for reaching it. This is obviously problematic from a storytelling standpoint; it’s not as if the earliest villagers set about learning the craft of Pottery with an eye toward getting their hands on Railroads 6000 years later. More importantly, though, it’s damaging to the longevity of the game itself, in that it means that players can and will always employ that same Railroads strategy just as soon as they figure out what a winner it is. Here we stumble over one of the subtler but nonetheless significant axioms of game design: if you give players a hammer that works on every nail, many or most of them will use it — and only it — over and over again, even if it winds up decreasing their overall enjoyment. It’s for this reason that some players continue to use even the barbarian-horde strategy in Civilization, boring though it is. Or, to take an outside example: how many designers of CRPGs have lovingly crafted dozens of spells with their own unique advantages and disadvantages, only to watch players burn up everything they encounter with a trusty Fireball?

Master of Orion, on the other hand, works hard at every turn to make such one-size-fits-all strategies impossible — and nowhere more so than in its tech tree. When a new game begins, each race is given a randomized selection of technologies that are possible for it to research, constituting only about half of the total number of technologies in the game. Thus, while a technology roughly equivalent to Civilization‘s Railroads does exist in Master of Orion — Star Gates — you don’t know if this or any other technology is actually available to you until you advance far enough up the tree to reach the spot where it ought to be. You can’t base your entire strategy around a predictable technology progression. While you can acquire technologies that didn’t make it into your tree by trading with other empires, bullying them into giving them to you, or attacking their planets and taking them, that’s a much more fraught, uncertain path to go down than doing the research yourself, one that requires a fair amount of seat-of-your-pants strategy in its own right. Any way you slice it, in other words, you have to improvise.
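
To illustrate that one idea in miniature, here is a small Python sketch of a randomized tech tree: each race receives only a subset of the technologies that exist in each category, so no fixed research path is ever guaranteed. The category names are the game’s own, but the specific technologies listed and the keep-roughly-half rule are simplifying assumptions rather than SimTex’s actual algorithm.

    import random

    # Only two of the six categories are shown; the others (Computers,
    # Construction, Force Fields, Planetology) would follow the same pattern.
    CATEGORIES = {
        "Propulsion": ["Hydrogen Fuel Cells", "Deuterium Fuel Cells", "Star Gates"],
        "Weapons": ["Lasers", "Ion Cannon", "Death Ray"],
    }

    def roll_tech_tree(rng=random):
        """Give a race a random subset (roughly half) of each category's techs."""
        tree = {}
        for category, techs in CATEGORIES.items():
            keep = max(1, len(techs) // 2)        # never leave a category empty
            tree[category] = sorted(rng.sample(techs, keep))
        return tree

    print(roll_tech_tree())  # e.g. {'Propulsion': ['Star Gates'], 'Weapons': ['Lasers']}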

We’ve been lucky here in that Hydrogen Fuel Cells, the first range-extending technology and a fairly cheap one, is available in our tree. If it wasn’t, and if we didn’t have a lot of stars conveniently close by, we’d have to dedicate our entire empire to attaining a more advanced and thus more expensive range-extending technology, lest we be left behind in the initial land grab. But this would of course mean neglecting other aspects of our empire’s development. Trade-offs like this are a constant fact of life in Master of Orion.

This one clever design choice has repercussions for every other aspect of the game. Take, for instance, the endlessly fascinating game-within-a-game of designing your fleet of starships. If the tech tree was static, players would inevitably settle upon a small set of go-to designs that worked for their style of play. As it is, though, every new ship is a fresh balancing act, its equipment calibrated to maximize your side’s technological strengths and mitigate its weaknesses, while also taking into careful account the strengths and weaknesses of the foe you expect to use it against, about which you’ve hopefully been compiling information through your espionage network. Do you build a huge number of tiny, fast, maneuverable fighters, or do you build just a few lumbering galactic dreadnoughts? Or do you build something in between? There are no universally correct answers, just sets of changing circumstances.

Another source of dynamism is the alien races you play and those you play against. The cultures in Civilization have no intrinsic strengths and weaknesses, just sets of leader tendencies when played by the computer; for your part, you’re free to play the Mongols as pacifists, or for that matter the Russians as paragons of liberal democracy and global cooperation. But in Master of Orion, each race’s unique affordances force you to play it differently. Likewise, each opposing race’s affordances in combination with those of your own force you to respond differently to that race when you encounter it, whether on the other side of a diplomats’ table or on a battlefield in space. Further, most races have one technology they’re unusually good at researching and one they’re unusually bad at. Throw in varying degrees of affinity and prejudice toward the other races, and, again, you’ve got an enormous amount of variation which defies cookie-cutter strategizing. (It’s worth noting that there’s a great deal of asymmetry here; Steve Barcia and his helpers didn’t share so many modern designers’ obsession with symmetrical play balance above all else. Some races are clearly more powerful than others: the brainiac Psilons get a huge research bonus, the insectoid Klackons get a huge bonus in worker productivity, and the Humans get huge bonuses in trade and diplomacy. Meanwhile the avian Alkaris, the feline Mrrshans, and the ursine Bulrathis have bonuses which only apply during combat, and can be overcome fairly easily by races with other, more all-encompassing advantages.)

There are yet more touches to bring yet more dynamism. Random events occur from time to time in the galaxy, some of which can change everything at a stroke: a gigantic space amoeba might show up and start eating stars, forcing everyone to forget their petty squabbles for a while and band together against this apocalyptic threat. And then there’s the mysterious star Orion, from which the game takes its name, which houses the wonders of a long-dead alien culture from the mythical past. Taking possession of it might just win the game for you — but first you’ll have to defeat its almost inconceivably powerful Guardian.

One of the perennial problems of 4X games, Civilization among them, is the long anticlimax, which begins at that point when you know you’re going to conquer the world or be the first to blast off for Alpha Centauri, but well before you actually do so. (What Civilization player isn’t familiar with the delights of scouring the map for that one remaining rival city tucked away on some forgotten island in some forgotten corner?) Here too Master of Orion comes with a mitigating idea, in the form of the Galactic Council whose workings I’ve already described. It means that, as soon as you can collect two-thirds of the vote — whether through wily diplomacy or the simpler expedient of conquering until two-thirds of the galaxy’s population is your own — the game ends and you get your victory screen.

Indeed, one of the overarching design themes of Master of Orion is its determination to minimize the boring stuff. It must be admitted, of course, that boredom is in the eye of the beholder. Non-fans have occasionally dismissed the whole 4X space-opera sub-genre as “Microsoft Excel in space,” and Master of Orion too requires a level of comfort with — or, better yet, a degree of fascination with — numbers and ratios; you’ll spend at least as much time tinkering with your economy as you will engaging in space battles. Yet the game does everything it can to minimize the pain here as well. While hardly a simple game in absolute terms, it is quite a streamlined example of its type; certainly it’s much less fiddly than Civilization. Planet management is abstracted into a set of five sliding ratio bars, allowing you to decide what percentage of that planet’s total output should be devoted to building ships, building defensive installations, building industrial infrastructure, cleaning up pollution, and researching new technologies. Unlike in Civilization, there is no list of specialized structures to build one at a time, much less a need to laboriously develop the land square by square with a specialized unit. Some degree of micro-management is always going to be in the nature of this type of game, but managing dozens of planets in Master of Orion is far less painful than managing dozens of cities in Civilization.
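
As a rough illustration of how such a ratio-bar scheme works, the following Python sketch splits a single planet’s total output across the five spending categories by percentage. It mirrors the description above rather than the game’s actual internals; the integer rounding is an assumption.

    SPENDING = ("ships", "defense", "industry", "ecology", "research")

    def allocate_output(total_output, sliders):
        """sliders: category -> percentage of output; percentages must sum to 100."""
        assert set(sliders) == set(SPENDING) and sum(sliders.values()) == 100
        return {cat: total_output * pct // 100 for cat, pct in sliders.items()}

    # Example: a mature planet pouring most of its output into research.
    print(allocate_output(120, {"ships": 0, "defense": 10, "industry": 20,
                                "ecology": 10, "research": 60}))
    # -> {'ships': 0, 'defense': 12, 'industry': 24, 'ecology': 12, 'research': 72}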

The research screen as well operates through sliding ratio bars which let you decide how much effort to devote to each of six categories of technology. In other words, you’re almost always researching multiple advances at once in Master of Orion, whereas in Civilization you only research one at a time. Further, you can never predict for sure when a technology will arrive; while each has a base cost in research points, “paying” it leads only to a slowly increasing randomized chance of acquiring the technology on any given turn. (That’s the meaning of the “17%” next to Force Fields in the screenshot above.) You also receive bonuses for maintaining steady research over a long run of turns, rather than throwing all of your research points into one technology, then into something else, etc. All of this as well serves to make the game more unpredictable and dynamic.
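
To make the flavor of that mechanic concrete, here is a toy model in Python of a “pay the base cost, then roll each turn” research system. The starting chance and the per-turn increment are illustrative guesses of my own rather than Master of Orion’s actual formula, but they show why you can budget for a discovery without ever being able to schedule it.

```python
import random

# Toy model of a randomized research-arrival mechanic: once a technology's
# base cost has been "paid," each turn rolls against a slowly rising chance
# of discovery. The numbers are illustrative, not the game's real formula.

def turns_until_discovery(start_chance=0.05, increment=0.04):
    """Count the turns until a discovery roll finally succeeds."""
    chance, turns = start_chance, 0
    while random.random() >= chance:   # this turn's roll failed
        turns += 1
        chance = min(1.0, chance + increment)
    return turns + 1                   # the successful turn itself

random.seed(1993)
samples = [turns_until_discovery() for _ in range(10_000)]
print(f"average wait: {sum(samples) / len(samples):.1f} turns")
print(f"fastest/slowest: {min(samples)}/{max(samples)} turns")
```

Even with identical research spending, the arrival turn swings widely from run to run, which is just the sort of unpredictability described above.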

In short, Master of Orion tries really, really hard to work with you rather than against you, and succeeds to such a degree that it can sometimes feel like the game is reading your mind. A reductionist critic of the sort I can be on occasion might say that there are just two types of games: those that actually got played before their release and those that didn’t. With only rare exceptions, this distinction, more so than the intrinsic brilliance of the design team or any other factor, is the best predictor of the quality of the end result. Master of Orion is clearly a game that got played, and played extensively, with all of the feedback thus gathered being incorporated into the final design. The interface is about as perfect as the technical limitations of 1993 allow it to be; nothing you can possibly want to do is more than two clicks away. And the game is replete with subtle little conveniences that you only come to appreciate with time — like, just to take one example, the way it asks if you want to automatically adjust the ecology spending on every one of your planets when you acquire a more efficient environmental-cleanup technology. This lived-in quality can only be acquired the honest, old-fashioned way: by giving your game to actual players and then listening to what they tell you about it, whether the points they bring up are big or small, game-breaking or trivial.

This thoroughgoing commitment to quality is made all the more remarkable by our knowledge of circumstances inside MicroProse while Master of Orion was going through these critical final phases of its development. When the contract to publish the game was signed, MicroProse was in desperate financial straits, having lost bundles on an ill-advised standup-arcade game along with expensive forays into adventure games and CRPGs, genres far from their traditional bread and butter of military simulations and grand-strategy games. Although other projects suffered badly from the chaos, Master of Orion, perhaps because it was a rather low-priority project entrusted largely to an outside team located over a thousand miles away, was given the time and space to become its best self. It was still a work in progress on June 21, 1993, when MicroProse’s mercurial, ofttimes erratic founder and CEO “Wild Bill” Stealey sold the company to Spectrum Holobyte, a publisher with a relatively small portfolio of extant games but a big roll of venture capital behind them.

Master of Orion thus became one of the first releases from the newly conjoined entity on October 1, 1993. Helped along by the evangelism of Alan Emrich and his pals at Computer Gaming World, it did about as well as such a cerebral title, almost completely bereft of audiovisual bells and whistles, could possibly do in the new age of multimedia computing; it became the biggest strategy hit since Civilization, and the biggest 4X space opera to that point, in any medium. Later computerized iterations on the concept, including its own sequels, doubtless sold more copies in absolute numbers, but the original Master of Orion has gone on to become one of the truly seminal titles in gaming history, almost as much so as the original Civilization. It remains the game to which every new 4X space opera — and there have been many of them, far more than have tried to capture the more elusively idealistic appeal of Civilization — must be compared.

Sometimes a status such as that enjoyed by Master of Orion arrives thanks to an historical accident or a mere flashy technical innovation, but that is definitively not the case here. Master of Orion remains as rewarding as ever in all its near-infinite variation. Personally, I like to embrace its dynamic spirit for everything it’s worth by throwing a (virtual) die to set up a new game, letting the Universe decide what size galaxy I play in, how many rivals I play with, and which race I play myself. The end result never fails to be enjoyable, whether it winds up a desperate free-for-all between six alien civilizations compressed into a tiny galaxy with just 24 stars, or a wide-open, stately game of peaceful exploration in a galaxy with over 100 of them. In short, Master of Orion is the most inexhaustible well of entertainment I’ve ever found in the form of a single computer game — a timeless classic that never fails to punish you for playing lazy, but never fails to reward you for playing well. I’ve been pulling it out to try to conquer another random galaxy at least once every year or two for half my life already. I suspect I’ll still be doing so until the day I die.

(Sources: the books Gamers at Work: Stories Behind the Games People Play by Morgan Ramsay, Designers & Dragons, Volume 1: The 1970s by Shannon Appelcline, and Master of Orion: The Official Strategy Guide by Alan Emrich and Tom E. Hughes, Jr.; Computer Gaming World of December 1983, June/July 1985, October 1991, June 1993, August 1993, September 1993, December 1993, and October 1995; Commodore Disk User of May 1988; Softline of March 1983. Online sources include “Per Aspera Ad Astra” by Jon Peterson from ROMchip, Alan Emrich’s historical notes from the old Master of Orion III site, a Steve Barcia video interview which originally appeared in the CD-ROM magazine Interactive Entertainment, and the Civilization Usenet FAQ, last updated by “Dave” in 1994.

Master of Orion I and II are available for purchase together from GOG.com. I highly recommend a tutorial, compiled many years ago by Sirian and now available only via archive.org, as an excellent way for new players to learn the ropes.)

 
 


Buzz Aldrin’s Race into Space (and Space-Program Games in General)

“Demography is destiny,” the nineteenth-century French sociologist Auguste Comte is said, probably apocryphally, to have declared. That truism has been taken to heart by many in the time since — not least by our political classes. Yet it applies equally in the world of the arts and entertainment. For in any free market, the nature of production is dictated as much by the consumers as by the producers.

Certainly this is true of computer games. Throughout the 1980s and 1990s, they were largely the province of a rather specific demographic indeed: single white males between the ages of ten and thirty from relatively privileged socioeconomic circumstances, with a bent toward intellectual rather than active pursuits — i.e., the stereotypical “nerds” of pop culture. Computer games reflected the tastes of these boys and young men in other kinds of entertainment and leisure-time hobbies: Dungeons & Dragons, Star Wars, jet fighters, World War II, action movies, heavy-metal music, fast cars, and, when they could get a glimpse of them, fast women. Although I too have liked all of these things to a greater or lesser degree at some point in my life — I did, after all, grow up as a member of exactly the demographic in question — their extreme prevalence in the cultural ghetto about which I write has often left me searching, sometimes in vain, for games with a different set of values and antecedents.

But this article is not about one or more of those interesting cultural outliers. It’s rather about an interestingly scanty subgenre of games which seems like it ought to have been perfect for the demographic I’ve just described, but that for some reason just never quite took off. Specifically, I speak of games based on the realities of space exploration in a contemporary context, as opposed to the outer-space fantasias of Star Wars and the like. After all, just about every nerdy teenage boy goes through a race-for-the-Moon phase at some point. (And why not? Has humanity ever embarked on a grander collective adventure?) Further, games on this subject would seemingly have fit in well with the broader craze for realistic simulation, as manifested by everything from F-15 Strike Eagle to SimCity, which had taken a firm grip on the industry by the end of the 1980s.

And yet there just weren’t many simulations of this particular type, and even fewer of them that did very well. It strikes me that it’s worth asking why this is so. Was there something about this subject that just didn’t work as a game, or are we dealing with a mere historical accident here? Let’s begin with a brief survey of the field of earlier games that did venture out into this territory before we turn to the one that will be our main focus for today. To help in doing so, we’ll further divide the field into two categories: vehicular simulations of spaceflight and games of space-program management.

The earliest game of the former type actually predates the personal computer. Created on a big DEC PDP-8 by a Massachusetts high-school student named Jim Storer, inspired by the real Neil Armstrong’s nerve-wracking manual landing on the Moon in 1969, the very year it was first programmed, Lunar demanded that you set your own landing craft down gently before your fuel ran out. Implemented entirely in text — you simply entered the number of fuel units you wished to burn each turn in response to a changing textual status display — it inspired dozens of clones and variants, most going under the more accurately descriptive name of Lunar Lander. By the dawn of the personal-computing age in 1978, David Ahl was able to write in his landmark book BASIC Computer Games that Lunar Lander in all its incarnations was “far and away the most popular computer game” of them all. It was even converted into a graphical standup-arcade game by Atari in 1979, in which form its quiet, cerebral tension made it an incongruous outlier indeed in an arcade full of shoot-em-ups.

Other programmers inevitably got more expansive in their ambitions for spaceflight simulation after Lunar Lander. By 1986, with the release of Spectrum HoloByte’s Orbiter, they had graduated to offering up a complete Space Shuttle flight simulator, covering all the stages of a mission from liftoff to landing. (Sadly, it arrived just in time for the Challenger disaster…) In 1992, Virgin Software published an even more complex and complete iteration on the concept, entitled simply Shuttle.

Yet neither of these later simulations came close to matching their simplistic predecessor in popularity. Their subject matter, it seemed, just didn’t quite work as a hardcore simulation. A simulation of a jet fighter flying into a war zone — such as the popular and long-lived Falcon series which Spectrum HoloByte produced after Orbiter — offered an intriguing range of tactical possibilities which a simulation of a Space Shuttle did not. A fighter pilot flying into combat is lord of his domain, in complete control of his airplane; the outcomes of his battles are entirely up to him. An astronaut flying into space, on the other hand, is merely the tip of a long spear of cooperative hierarchy; situations like those last few minutes before the Eagle landed, when Neil Armstrong was making all of the decisions and executing them all alone, have been vanishingly rare in the history of space flight. If, as Sid Meier likes to say, a good game is “a series of interesting decisions,” this fact makes spaceflight as it has existed so far in our historical reality problematic as the subject of a compelling simulation. Too often, Orbiter and Shuttle felt like exercises in rote button-mashing — button-mashing which you were expected to do exactly when and how ground control told you. Perhaps you weren’t quite the spam in a can the test-pilot peers of the earliest astronauts had so mocked them for being, but it sure felt that way at times. “As strange as it may seem,” wrote Computer Gaming World magazine of Orbiter, “a lot of flying the Shuttle is boring — a lot of pushing buttons, running computer programs, and the like — and it shows.”

In light of this, it’s telling that arguably the most entertaining of these spaceflight simulators opted for a less hardcore, more impressionistic approach. Apollo 18, developed by the Canadian studio Artech and published by Accolade in 1987, posited an alternative history where at least one of NASA’s final trio of cancelled Moon missions actually did take place. In keeping with Artech designer and theoretician-in-chief Michael Bate’s concept of “aesthetic simulation,” Apollo 18 portrayed a mission to the Moon not as a holistic vehicular simulation but as a series of mini-games, jumping from the perspective of ground control to that of the astronauts in space whenever it felt the need. This more free-wheeling, almost cinematic approach, combined perhaps with the fact that going to the Moon is inherently more exciting than releasing yet another whatsit from the Shuttle’s cargo bay in low Earth orbit, made the game a more riveting experience than its Shuttle-centric peers. Still, even it ran out of legs fairly quickly; once you’d worked through the steps of getting to the Moon and back once or twice, there just wasn’t much motivation to do so again.

So much for simulation. In the category of strategic space-program managers, we have an equally mixed bag.

Just as with the venerable Lunar Lander, one of the very first attempts to portray the contemporary conquest of space in this way was also the most successful of its era, in both financial and artistic terms. I wrote at some length long ago about 1984’s Project: Space Station, an earnest effort, masterminded by a fellow named Lawrence Holland who would go on to become LucasArts’s flight-simulator guru, to portray the construction and operation of a commercial space station in Earth orbit. Both space stations and private enterprise in space were much in vogue at the time, thanks respectively to President Ronald Reagan’s announcement of plans to build a station called Freedom in his 1984 State of the Union address and the realities of a terminally underfunded NASA whose priorities shifted with the political winds — realities which would ensure that Freedom itself never got off the drawing board, although it would gradually morph into the joint project known as the International Space Station. As I wrote in that older article, Project: Space Station, which blended an overarching strategy game with light vehicular simulation, came heartbreakingly close to greatness. But in the end, it was somewhat undone by a lack of feedback mechanisms and poor command and control — weaknesses which, it should be said, feel more like a result of the limited 8-bit hardware on which it ran than a failure of design in the abstract. But whatever its failings, it was by all indications reasonably successful in its day, enough so that, when its original publisher HESware went bankrupt within a year of its release, it was picked up at auction by Accolade and re-released by them in the same year they published Apollo 18.

Alas, Project: Space Station’s immediate successors would prove markedly less rewarding as games to play or products to sell. Space MAX, created and self-published by a former Jet Propulsion Laboratory engineer named Tom Keller in 1986, poured on the detail at the expense of playability, until it came to resemble one of NASA’s long-range planning tools more than a computer game. And Karl Buiter’s Earth Orbit Stations of 1987 buried a very appealing premise, focusing more on the mechanical details of building a modular space station than had either of the earlier games of its type, under an atrocious presentation layer which Computer Gaming World described as “a textbook case of how not to design a [graphical user] interface.” And after those two less-than-compelling efforts, the strategic space-program-management subgenre pretty much dried up.

This, then, was the underwhelming state of contemporary-spaceflight games in general in 1993, when Interplay published a new take on the subject matter bearing the name of one of the most famous astronauts of all — in fact, the one who had actually been sitting there beside Neil Armstrong when he was making that hair-raising landing on the Moon. Like Apollo 18, Buzz Aldrin’s Race into Space chose to turn back the clock to those glory days of the Moon race rather than focusing on present-day space stations engaged in the comparatively plebeian labor of developing new industrial-chemical compounds and new medical treatments, important though such things undoubtedly are. The managerial perspective it adopted, however, had more to do with Project: Space Station than Apollo 18. A noble effort in its way, as indeed were all those games I’ve just written about, it has points of failure that have perhaps even more to tell us about game design than theirs do.


Fritz Bronner

The driving force behind Buzz Aldrin’s Race into Space wasn’t its astronaut mascot — no surprise there, right? — but rather one Fritz Bronner, a less famous American whose name would have fit in perfectly among the German rocket scientists who helped Wernher von Braun build the Saturn V rocket that sent men to the Moon. In the early 1980s, as a young man with dreams of becoming an actor, Bronner spent many an evening playing a variety of tabletop wargames and RPGs with his buddies in his home state of Florida. On one of those evenings, he had just finished an RPG session when he turned on the television to see a rocket launch on the news — an event he always watched with interest, being a self-described “space fanatic.” The thought process he went through then, with his mind still addled by game systems and dice rolls, will awaken immediate recognition in anyone who has ever played Race into Space. For the most fundamental mechanic in that game has its origin right here:

The game player in me suddenly wondered what the odds were for a successful launch. The next thought I had was the chance of failure. I formulated in my mind a guess on the total number of [successful] launches versus failures. I quickly concluded that out of ten previous launches, nine of them were successful. Just before liftoff, I rolled the percentile dice and rolled below the range, which indicated to me that the launch would be successful. A few minutes later, another satellite reached orbit. I was elated that I had come up with a pseudo-model for launch success.

Immediately I wondered how a manned launch would work. I started to play with some rough mathematical figures. I selected a one-stage rocket and a two-stage rocket and then realized that I would have to devise a safety factor for a capsule. I think I came up with around 85 percent for the capsule. Then I plunged into what mission steps would occur in spaceflight. I rolled the dice on a three-step suborbital flight and to my excitement it worked! Suddenly each step of the mission was monumentally important. I became tense as I rolled the dice. It reminded me of the flavor of the early spaceflights.

I called [my friend] Steve [Stipp] over and told him of my successful suborbital flight. After his own successful flight, we both gleefully started scribbling notes on possible payload weights and additional mission steps. Soon we had scraps of paper filled with my horribly drawn stick figures of capsules that were lofting astronauts into space.

At this point, it was success or total failure on a mission step. We both realized that it was too crude and unrealistic for a rocket to always blow up on the pad. There were cancelled launches and aborts that should be considered. We laughed and played and scribbled more notes and sketched drawings for several hours, and then folded it up and forgot about it for several years.

In 1985, Bronner’s acting dream took him from Florida to New York City. His wife was working as a long-haul flight attendant, leaving him with plenty of solitude for contemplation in between auditions there in the big city. A television documentary called Spaceflight refreshed his memories of playing that improvised dice-throwing game of space launches. Just as importantly, it shifted his thinking toward an historical perspective. What if he made a game about the space race of the 1950s and 1960s, with one player in the role of the Americans and the other of the Soviets, each trying to be the first to reach the Moon? Each player would have to research the technology necessary for each stage of the endeavor, then test it with a live launch. The tension that would make for interesting choices was clear: that between researching everything exhaustively to achieve the best possible safety rating and pushing the timetable to beat out your opponent. At bottom, then, it would be a “press your luck” game — an evergreen in tabletop game design, but implemented here in the service of a thoroughly unique theme. For the next couple of years, Bronner continued to develop and refine the concept, even sending samples to many board-game publishers, albeit without managing to stir up much interest.

In 1987, Bronner’s acting dream took him from New York City to Hollywood. While he would never become the movie star he might have imagined back in Florida, he would carve out a solid career for himself as one of the film industry’s unglamorous but indispensable utility players; he would take bit parts in dozens of movies and television shows alongside starring roles in hundreds of commercials, and eventually also take on small-time writing, directing, and producing gigs. A year after arriving in Hollywood, he wrangled a meeting with the Los Angeles-based Task Force Games, best known for their Star Fleet Battles tactical space-combat games which took place in the Star Trek universe. He finally got a positive response from this publisher, and soon signed a contract with them to publish the board game Liftoff!.

Liftoff! made its public bow in the summer of 1989 at the Origins International Game Expo, one of the tabletop hobby’s two biggest American events, which happened to be held that year right there in Los Angeles. The reaction to Bronner’s game at Origins was cautiously favorable, but it never translated into much in the way of sales in the months that followed. Task Force Games had been bought by the computer-game publisher New World Computing the year before they signed the contract with Bronner; it was for this reason that they were in the Los Angeles area at all, having been moved there from Amarillo, Texas, to join their new parent. Yet the relationship wasn’t living up to either partner’s expectations. Profits, which tended to be scant at the best of times in the tabletop industry, had become nonexistent, as the expected synergies between the computer and the tabletop business failed to materialize. In 1990, Task Force’s head John Olsen scraped together enough funding to buy the company back from New World and moved with it back to Amarillo. Necessity forced the downsized entity to focus its resources on Star Fleet Battles, its most well-known and marketable franchise. Liftoff! died on the vine.

But Fritz Bronner wasn’t willing to let his game go so gently into that good night. Although he had never owned a computer in the past, his arrival in Hollywood had coincided with the beginnings of a buzz from the more forward-thinking members of the media elite about the future of interactive video and multimedia computing. It certainly hadn’t been lost on Bronner when signing the contract with Task Force Games that the company’s parent was a publisher of computer games. In fact, he had tried to interest New World in a digital version of Liftoff! repeatedly, but could never really get their attention. Fortunately, his attorney had ensured that the contract he signed with Task Force/New World gave them just one year to develop a computerized version, if they wished to do so; afterward, those rights reverted to Bronner himself. He soon bought his first computer, a used Commodore Amiga 500, to consider the possibilities. In the summer of 1990, he started talking with a young programmer named Michael K. McCarty. At year’s end, the two of them formed a company which they named Strategic Visions, and began working on a demo to show to publishers.

It perhaps says something about the zeitgeist of gaming on the cusp of the multimedia age that Bronner and McCarty elected to make their demo a non-interactive video rather than an interactive game. From the start, Bronner’s vision for the project had been to move the mechanics of the board game onto the computer essentially intact, then spice them up with lots of video footage from the archives of NASA and the Soviet space program. His timing in this respect was perfect: the fall of the Iron Curtain helped immensely in getting access to the latter’s videos. Meanwhile the fact that all of the footage was the product of government agencies, and thus released into the public domain, helped in another way. Less positively, this overweening focus on the multimedia aspects of the project, which would continue throughout its duration, would rather distract from some worrisome flaws in the foundation of the actual rules set — an issue we’ll return to a bit later.

In the short term, though, the non-interactive demo served its purpose. In contrast to the relative lack of interest the tabletop design had garnered, the proposed digital version attracted lots of publishers when Bronner and McCarty brought their demo to the Summer Consumer Electronics Show for private screenings in June of 1991. The videos Bronner showed of rockets soaring and exploding were well-nigh irresistible to an industry all abuzz with talk of interactive movies incorporating just this type of real-world footage. Over thirty potential partners viewed the demo reel in the course of the show, and several of them came forward with serious offers.

Bronner settled on Interplay Productions for several reasons: they were also Los Angeles-based, always a nice advantage; he got on well with Interplay’s head Brian Fargo; and Fargo had immediately run with an idea Bronner had mentioned in passing, that of signing up Edwin “Buzz” Aldrin — by far the most gregarious and ambitious of the Apollo 11 astronauts in terms of media and marketing — to lend his endorsement to the game. Indeed, Fargo already had Aldrin on board when the contract was signed in August of 1991. Thus did Liftoff! become Buzz Aldrin’s Race into Space.

Aldrin’s direct participation encompassed nothing more than marketing — he regaled a long string of trade-show attendees and magazine editors with his well-worn tales of landing on the Moon, while saying next to nothing about the game itself — but it did lead to the computer game’s most significant substantive addition to the board game. Bronner added a roster of astronauts to be recruited and trained, who manifested differing strengths and weaknesses and even differing personalities which could cause them to be more or less effective when combined into crews. The idea and approach are so similar to the astronaut management found in Project: Space Station that one suspects they must have been inspired by that earlier game. That said, I have no proof that this was so.

Otherwise, though, Race into Space is a fairly straightforward re-implementation of Liftoff! rather than a major expansion upon it. In fact, some parts of the board game are actually trimmed away, such as the ability to play as the head of a fictional European or Asian space agency, which Bronner had included in order to allow up to four players to gather around the tabletop. Race into Space, on the other hand, is limited to two players, each of them controlled either by a human or the computer.

Pitched to Interplay with an absurdly optimistic six-month development timeline, Race into Space ran over that estimate by a factor of three. Indeed, it became the first game in history to get two feature-length previews in Computer Gaming World, one in January of 1992 and one in December of the same year. An early decision to switch development from the fading Amiga to MS-DOS didn’t help matters; nor did Strategic Visions’s need to rely on Interplay’s art team for most of the non-digitized graphics, work that got done only as time allowed betwixt and between other in-house projects. Most of all, though, the project began just a little bit too early, before the typical consumer computer was quite able to live up to Bronner’s multimedia ambitions. Even the version of the game that finally did ship on floppy disk in March of 1993 was heavily compromised by the limitations of its storage medium, with digitized still photographs standing in for most of the videos the original demo had promised. Players would have to wait for the CD-ROM version, which didn’t arrive until fourteen months later, to truly see the game as its designer had imagined it.



Race into Space is played in turns lasting six months each, beginning in 1957 and stretching until either 1977 arrives or someone manages to land on the Moon. Economics will play a big role in your success or lack thereof; you’re provided with a semiannual budget which increases only gradually, with the completion of major milestones according it a more substantial boost — especially if you manage them before your opponent — and catastrophic failures having the opposite effect. This approach is rather ahistorical on the face of it — in a classic example of throwing money at a problem until it bears fruit, the budget of NASA in particular was dictated more by the achievements of the Soviets than by the agency’s own accomplishments — but is probably necessary for Race into Space to work as a game.

As the game goes on, you build up your program’s facilities — adding things like additional launch pads to let you carry out more launches per turn.

Still, the core of the experience remains what it was when a young Fritz Bronner first started experimenting with the idea of a space-program-management game in the early 1980s: watching with bated breath from mission control as your rockets go up, hoping each successive step will go off without a hitch to get you your next mission milestone. Said milestones encompass everything from launching the first unmanned satellite — the game begins in the year of Sputnik — to the Moon landing itself. Yet, beyond the first few milestones at any rate, they don’t break down into a mere linear progression of steps to be mindlessly walked through. You can combine milestones into one mission; for example, you might make your first flight of eight days or more in duration the same one where your astronauts first execute a space walk. And you can also skip some of them entirely, if you’re pressed for time and are willing to forgo the budget boosts with which they tempt you; the aforementioned space walk, for example, isn’t even strictly necessary for a Moon landing.

Most importantly, Race into Space lets you implement not only the historical method of getting to the Moon — that of employing a space capsule which orbits the Moon and a separate landing craft to take part of the crew down to the surface — but also a number of other approaches that were discussed at the time, such as an all-in-one-spacecraft approach (this requires developing a monster rocket that makes a Saturn V look like a kid’s toy) or even a reusable space shuttle (this requires both an enormous investment of time and money and a really slow opponent). The variety of alternate histories the game allows is not infinite — more on that momentarily — but is enough to provide for at least a few interesting and even educational playthroughs. If nothing else, you’ll walk away from your failed attempts to rewrite history with a better understanding of why NASA chose the approach they did.

Achieving firsts is extremely important because it increases your program’s prestige — which in turn leads to an increase in its budget. If things go too disastrously wrong, you can even be fired from your post as program director.

But alas, Race into Space soon begins to show those cracks in its foundation which I alluded to earlier, which are partly born from the lack of a clear sense of its own goals as a game. One can imagine at least three abstract approaches fitting into the general framework of “a managerial game about the race to the Moon.” One would be a heavily experiential game, in the spirit of Michael Bate’s aesthetic simulations, de-emphasizing the competitive aspects in favor of taking the player on a journey through those heady early days of the space age. Another would be a replayable game of hardcore strategy, in which the fiction of the Moon race functions as a mere thematic skin for the mechanical underpinnings which quickly become the player’s real focus. And still another would be an open-ended sandbox, a learning tool that lets the player experiment with many different approaches to landing on the Moon and to spaceflight in general.

Race into Space never firmly commits to any one of these approaches, but rather feints toward all of them in various places. The end result is a confusing mishmash of elements that are constantly cutting against one another. The heavy reliance on photographs, video, and sound clips from the period in question seems to push it into the experiential camp, but its board-game-derived mechanics and relatively short play time — a full game usually takes no more than two or three hours to play — pull it in the second direction I outlined. And so the cognitive dissonances start to add up. The video clips lose their appeal when you’re forced to click through the same ones over and over, every time you play, even as it remains debatable whether the mechanics are really compelling enough to make it a game you want to return to again and again under any circumstances; there are really only one or two best paths to follow to get to the Moon, and once you’ve found them there’s little reason to keep playing. Meanwhile the game’s educational sandbox potential, while by no means nonexistent, is also sharply limited. True to its board-game roots, Race into Space doesn’t simulate spaceflight at all beyond rolling dice against an arbitrary set of success-or-failure percentiles. In terms of spaceflight hardware, it lets you mix and match a set of pieces it provides for you, and pour money into each piece’s research to push its reliability percentage up, but it’s nowhere near sophisticated enough to let you develop your own components from scratch. Here too, then, it feints in a promising direction without going far enough to truly satisfy over the long term.

Yet this sense of confusion about what Race into Space actually wants to be constitutes only its second biggest problem. Its biggest problem of all doesn’t require as much design philosophy to explain: the darn thing is just too darn hard. Something is badly off with the math behind this game — something you sense more than you can know. Playing it quickly begins to feel like that memorable montage of exploding and misguided rockets from the film The Right Stuff. You can recover in fairly short order from failed launches in the early phases, when you’re mostly launching unmanned craft, but they turn devastating when they start chewing through your astronaut corps like a wolf in a chicken coop. Failed missions not only destroy the morale of your surviving astronauts, causing them to perform worse, but knock the reliability of the failed component almost all the way back to zero, forcing you to research it up again from scratch. This of course makes no sense in strictly logical terms; in the absence of any new inputs, a defective component should be defective to exactly the same extent on the next flight. Rather than conveying the rounds of investigation and soul-searching that always accompanied a real loss of life in the space program, as it was doubtless intended to do, this mechanic just furthers the impression that the game is out to get you at any cost. The fact that the computer player mysteriously seems to be able to cut more corners than you without killing astronauts by the dozens contributes strongly to the same impression.

Screens like this one appear distressingly frequently, almost regardless of how thoroughly you research and develop your components. Either the real NASA was incredibly lucky, or something is off inside this game’s numbers. Perhaps a bit of both?

Unkind though it may sound to say, I can’t help but suspect that Race into Space‘s issues in this area reflect a fundamental misunderstanding of statistics on the part of a younger Fritz Bronner — a misunderstanding that somehow never got corrected through all his years of working on his game. A mission does not, as one might initially imagine, have a chance of success equal to the reliability percentage of its dodgiest hardware component. On the contrary: the various components actually undergo reliability checks at various times — often at multiple times — during a mission. Therefore even a stack of components which have all been researched up to a reliability of 95 percent still has a substantial chance of failing in some more or less disastrous way on a more complex mission. And yet you simply don’t have time to laboriously research every component up to its maximum reliability, which for many of them is substantially below 95 percent anyway. You’re in a Moon race, after all. You have to roll the dice. Small wonder that so many players over the years have advocated save-scumming — that dastardly practice of saving and reloading until the dice roll your way — as the only practical way to play. That, or play a two-human-player game, but just click through your “opponent’s” turns without doing anything. Playing that way, you might just be able to get to the Moon before 1977.
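
To see how quickly those repeated checks compound, consider a back-of-the-envelope sketch. The numbers below are hypothetical stand-ins rather than the game’s actual tables, but the arithmetic is the point: ten independent checks against components rated at 95 percent reliability succeed together only about 60 percent of the time.

```python
# Hypothetical illustration of why stacked reliability checks hurt so much.
# These percentages are stand-ins, not Race into Space's real tables.

def mission_success_chance(check_reliabilities):
    """Probability that every independent reliability check passes."""
    chance = 1.0
    for reliability in check_reliabilities:
        chance *= reliability
    return chance

suborbital = [0.95] * 3    # a short early mission: few steps, few checks
lunar      = [0.95] * 10   # a multi-step lunar mission: many more checks

print(f"3 checks at 95%:  {mission_success_chance(suborbital):.1%}")  # ~85.7%
print(f"10 checks at 95%: {mission_success_chance(lunar):.1%}")       # ~59.9%
```

And since many components top out well below 95 percent anyway, the true odds on an ambitious mission are grimmer still.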

So, despite the historical verisimilitude it works so hard to evoke via its video clips and all the other period-specific touches, Race into Space’s mechanics lead to a simple game of luck at bottom, and one where the odds are stacked against you at that. There is no opportunity to jump in and make decisions when a mission starts to go wrong — no chance, in other words, to improvise your way through a drama like the Apollo 13 mission. You’re a mere helpless bystander from the moment a mission begins until it ends.

The game’s delight in making its players’ rockets go boom provoked such howls of protest from early purchasers of the original floppy-based release that Interplay soon released a patch to tweak the numbers somewhat — although still nowhere near enough in the opinion of most. The very fact that Bronner felt able to manipulate the numbers in this way, of course, demolished any remaining belief players might have harbored that the numbers had any real historical basis at all. Clearly they were strictly arbitrary. Bronner never did achieve a balance that felt both playable and true to history. And that failure makes it difficult to consider Race into Space as a whole as anything but another interestingly failed attempt at making a game out of real-world space exploration.

Race into Space sold in reasonable numbers for Interplay, but never huge ones, especially after word got around about just how frustrating it could be. Thus none of Bronner’s plans for sequels, which he had publicly discussed at some length in the run-up to release, ever got off the metaphorical launching pad. Strategic Visions soon closed up shop, and Bronner continued his career in Hollywood. He’s never designed another game.

Ironically, the sequels Bronner discussed might actually have made for better games than this one. One idea, for example, would have focused on a manned mission to Mars. Removed from the context of real history, not being surrounded by all those grainy old video clips reminding players of what once was, such a game would have been able to exist entirely on its own terms, and might have wound up feeling more satisfying because of it even if its mechanics had been left largely unchanged.

As it is, though, Race into Space displays that most telling sign of an ingenious game idea with questionable execution: players lining up with ways to fix it. Their efforts were confined to the realms of speculation and hex editors until 2005, when, the rights having reverted to Fritz Bronner, he generously released the game and all of its source code under the General Public License. In the time since, a small community of enthusiasts has continued to port and refine the game on a sporadic basis, but it’s never managed to garner a critical mass of developers or players. Ditto an attempt at a full-fledged commercial revival of the concept by the wargame publisher Slitherine, which arrived complete with the original game’s astronaut mascot in 2014 under the name Buzz Aldrin’s Space Program Manager.


While Race into Space’s most specific, practical design mistakes aren’t too hard to identify, the more generalized failings of it and its peers in the scanty tradition of contemporary-space-program games do rather prompt one to ask another question: is there something about the subject matter itself that causes it not to work as a satisfying game? I believe I’ve actually done a reasonable job of answering that question already for the case of spaceborne vehicular simulations: as I noted near the beginning of this article, an astronaut in space just doesn’t have enough independent agency in most situations to make for a reasonably realistic simulation that’s also engaging as a game. But what of the other broad category of games I’ve addressed today, the one to which Race into Space belongs: that of space-program managerial games?

For a long, long time after Race into Space, one might have been forgiven for assuming that space-program managers as well were indeed nonstarters as satisfying games. But then, in 2015, a game called Kerbal Space Program came along to prove such naysayers wrong. I don’t usually write about modern games here, but I will briefly outline the premise of this one.

The titular Kerbals are a species of furry green aliens who run a space program of their own on their planet of Kerbin. Despite their cartoony cuteness, said space program itself is simulated with meticulous attention to detail, including all of the particulars of physics and aeronautics which Race into Space so conspicuously lacks. Players with an interest in rocketry or aeronautical engineering can and do lose years of leisure time to it. It may or may not be a game for you, but it is, by any objective definition, an impressive piece of work, far more intrinsically fascinating than any other that I’ve written about today.

And how does it accomplish this feat? One obvious answer is that it knows what it wants to be first and foremost: a sandbox for exploring the practical possibilities and limitations of space travel using the technology of our own recent past, present, and near future. A dedicated modding community has helped the designers to graft on additional layers of competitive strategy and economics for those who want them. Nevertheless, the game’s central delight remains that of creation and discovery. Kerbal Space Program is, in other words, one of the preeminent sandbox games of our time. And it’s completely comfortable with itself in that role, being free of the cognitive dissonances of Race into Space.

This stronger sense of itself is certainly one of the secrets to Kerbal Space Program’s success. And here’s another: having noted earlier that the proposed non-historical sequels to Race into Space might have led to more compelling games, I’ll now submit Kerbal Space Program as Exhibit One in evidence for that argument. Freed from the weight of all that real human history, existing as it does in a world of cartoon aliens, it can just be a game.

Games can be great tools for exploring other lives and other times, but sometimes you just want to play. History, after all, doesn’t occur for our ludic amusement. Every wargamer knows that the number of unaltered historical battles that lead to good games is very small indeed; most real battles have their outcomes foreordained before they even begin. Perhaps the Apollo program and the Space Shuttle and the International Space Station and all the rest just don’t have the right stuff to make a worthy game. But that’s okay — because it means that, instead of recreating the storied past, we can imagine an exciting future. That goal is at least equally worthy — and, as Kerbal Space Program so thoroughly illustrates, it’s something that a game about space exploration can most definitely do, and do well at that.

When you play Race into Space as the Americans, each turn begins with a newscast from “Carter Walcrite” — a nod to Walter Cronkite, the television anchorman whose dulcet tones were the voice of the space race for many Americans, whom a number of surveys revealed to be the most trusted person in the United States during the turbulent 1960s. (I’ll leave the comparisons with contemporary attitudes toward journalism as an exercise for the reader…) Although the inclusion of all this loving period detail is wonderful on one level, on another it can be oddly stultifying to your attempts to write your own history.

(Sources: the books The Buzz Aldrin’s Race into Space Companion by Fritz Bronner, Designers & Dragons, Volume 2 by Shannon Appelcline, and BASIC Computer Games by David Ahl; Computer Gaming World of August 1986, March 1987, October 1987, February 1988, January 1992, May 1992, December 1992, and August 1993; Strategy & Tactics 212. Online sources include Leon Baradat’s comprehensive Race into Space site, the article “The Buzz is Gone” at The Escapist, and Steve Stipp’s homepage.

You can download the current open-source edition of Race into Space for free, or purchase its spiritual successor Buzz Aldrin’s Space Program Manager.)

 
 
