The Gateway Games of Legend (Preceded by the Legend of Gateway)

Frederik Pohl was still a regular speaker at science-fiction conventions in 2008.

Frederik Pohl, who died on September 2, 2013, at age 93, had one of the most multifaceted careers in the history of written science fiction. Almost uniquely, he played major roles in all three of the estates that constitute science fiction’s culture: the first estate of the creators, in which he wrote stories and novels over a span of many decades; the second estate of the publishers and other business interests, in which he served as a highly respected and influential agent, editor, and anthologist over a similar period of time; and the third estate of fandom, in which his was an important voice from the very dawn of the pulp era, and for which he never lost his enthusiasm, attending science-fiction conventions and casting his votes on fan committees right up to the end.

Growing up between the world wars in Brooklyn, New York, Pohl discovered the nascent literary genre of science fiction in 1930 at the age of 10, when he stumbled upon an issue of Science Wonder Stories. From that moment on, he spent his time at every opportunity with the likes of Edgar Rice Burroughs’s A Princess of Mars and Doc Smith’s Lensmen — catnip for any red-blooded young boy with any sense of wonder at all. In comparison to other young science-fiction fanatics, however, Pohl stood out for his personableness, his ambition, his spirit of innovation, and his sheer commitment to the things he loved. He became a founding member of the Brooklyn Science Fiction League, one of the earliest instances of organized science-fiction fandom anywhere in the country, and by the ripe old age of 13 or so had become a prolific editor and publisher of fanzines, many of which enjoyed a total circulation reaching all the way into two figures.

The world of science fiction was indeed still a small one, but that had its advantages in terms of access, especially when one was fortunate enough to live in the pulp publishing capital that was New York City. The boundaries between science-fiction fan and the “profession” of science-fiction writer were porous, and by the latter half of the 1930s Pohl was hobnobbing with such luminaries as Isaac Asimov and Cyril Kornbluth in an informal club of like-minded souls who called themselves the Futurians. He stumbled into the job of acting as the Futurians’ literary agent, which entailed buying stamps and envelopes in bulk, mailing off his friends’ stories to every pulp publisher in the Big Apple, and collecting lots of rejection slips alongside the occasional letter of acceptance in the return post.

In 1939, a 19-year-old Frederik Pohl got himself an editor’s job at the pulp house Popular Publications by virtue of knocking on their door and asking for one. He was given responsibility for Astonishing and Super Science Stories, second-tier magazines that paid their writers half a cent per word and trafficked in the stories that weren’t good enough for John W. Campbell’s Astounding, the class of the field. Most of the authors whose stories Pohl accepted are justifiably forgotten today, but he did get his hands every now and then on a sub-par offering from the likes of a Robert A. Heinlein or L. Sprague de Camp that Campbell had rejected; Pohl, alas, was in no position to be so choosy.

But then along came the Second World War to put everything on hold for a while. Pohl wound up joining the Army Air Force, and was rewarded with what he freely described as a “cushy” war experience, working as a meteorologist for a B-24 squadron based in Italy. When he returned from Europe, he returned to publishing as well but, initially, not to science fiction. Now a married man with familial responsibilities, he worked for a few years as an advertising copywriter, then as an editor for the book adjuncts to the magazines Popular Science and Outdoor Life; this constitutes the only substantial period of his entire professional life spent outside science fiction.

Yet the pull of science fiction remained strong, and in the early 1950s Pohl resumed his old role of literary agent for his writer buddies, albeit now on a slightly more professional footing. The locus of science-fiction profits was moving from the pulps to paperback novels and short-story collections in book form; thus Pohl became an editor for Ballantine’s new line of science-fiction paperbacks. By this point, the name of Frederik Pohl, while still fairly obscure to most readers, was known to everyone inside the community of science-fiction writers. He really was on a first-name basis with everyone who was anyone in the field, from hard science fiction’s holy trinity of Isaac Asimov, Robert A. Heinlein, and Arthur C. Clarke to lyrical science fiction’s patron saint Ray Bradbury.

In 1960, a 41-year-old Pohl accepted what was destined to become his most influential behind-the-scenes role of all when he agreed to become editor of a troubled ten-year-old magazine called Galaxy Science Fiction. “The pay was miserable,” he would later remember. “The work was never-ending. It was the best job I ever had in my life.”

At that time, science fiction was on the cusp of a new era, as a more culturally, racially, sexually, and stylistically diverse generation of up-and-coming writers — the so-called “New Wave” — began to arrive on the scene with a new interest in prose quality and formal experimentation, along with a desire to explore the future in terms of human psychology rather than technology alone. Many or most of the old guard who had cut their teeth in the pulp era, whose politics tended to veer conservative in predictable middle-aged-white-male fashion, greeted this invasion of beatnik radicals with dismay and contempt. The staunchly conservative John W. Campbell, who was still editing Astounding — or rather, as it had recently been renamed, Analog Science Fiction — was particularly vocal in his criticism of all this new-fangled nonsense.

Frederik Pohl, however, was different from most of his peers. He had always read widely outside the field of science fiction as well as inside it, and was as comfortable discussing the stylistic experiments of James Joyce and Marcel Proust as he was the clockwork plots of Doc Smith. And as for politics… well, he had spent four years as a card-carrying member of the American Communist Party — take that, John Campbell! — and even after disillusionment with the Soviet Union of Josef Stalin had put an end to that phase he had retained his leftward bent.

In short: Frederik Pohl welcomed the new arrivals and their new ideas with open arms, making Galaxy a haven for works at the cutting edge of modern science fiction, superseding Campbell’s increasingly musty-smelling Analog as the genre’s journal of record. He had to, as he later put it, “encourage, coax, and sometimes browbeat” his charges to get the very best work out of them, but together they changed the face of science fiction. Indeed, it was arguably helping other writers be their best selves that constituted this multifariously talented man’s most remarkable talent of all. Perhaps his most difficult yet rewarding writer was the famously irascible Harlan Ellison, who burst to prominence in the pages of Galaxy and If, its sister publication, with stories whose names were as scintillatingly trippy as their contents: “‘Repent, Harlequin!’ Said the Ticktockman,” “I Have No Mouth, and I Must Scream,” “The Beast That Shouted Love at the Heart of the World.” Such stories were painfully shaped over the course of a series of bloody rows between editor and writer. Many readers would argue that Ellison’s later fiction never approached the quality of these early stories, hammered into shape under the editorship of Frederik Pohl.

Burned out at last by the job of editing Galaxy, Pohl stepped down at the end of the 1960s, a decade that had transformed the culture of science fiction every bit as much as it had the larger American culture that surrounded it. In the following decade, however, he continued to push the boundaries as an editor for Bantam Books. It was entirely thanks to him that Bantam in 1975 published Samuel R. Delany’s experimental masterpiece or colossal con job — depending on the beholder — Dhalgren, nearly 900 pages of digressive, circular prose heavily influenced by James Joyce’s equally controversial Finnegans Wake. Whatever else you could say about it, science fiction had come a long way from the days of Science Wonder Stories and Edgar Rice Burroughs.

All of which is to say that Frederik Pohl would have made a major impact on the field of science fiction had he never written a word of his own. In actuality, though, he managed to combine all of the work I’ve described to this point with an ebbing and flowing output of original short stories and novels, beginning with, of all things, a rather awkwardly adolescent poem called “Elegy to a Dead Satellite: Luna,” which appeared in Amazing Stories in 1937. Through the ensuing decades, Pohl was regarded as a competent but second-tier writer, the kind who could craft a solid tale but seldom really dazzled. Yet he kept at it; if nothing else, continuing to work as a writer in his own right gave him a feeling for what the more high-profile writers he represented and edited were going through. In 1967, he even switched roles with his frenemy Harlan Ellison by contributing a story to the latter’s Dangerous Visions anthology, a collection of deliberately provocative stories — the sorts of things that could never, ever have gotten into print in earlier years — from New Wave writers and adventurous members of the old guard; it went on to become what many critics consider the most important and influential science-fiction anthology of all time.

But even Pohl’s contribution there — “The Day After the Day the Martians Came,” a parable about the eternal allure of racism and xenophobia that was well-taken then and now but far less provocative than many of the anthology’s other stories — didn’t really change perceptions of him as a fine editor with a sideline in writing rather than the opposite. That shift didn’t happen until a decade later, when the now 58-year-old Pohl published a novel called Gateway. Coming at a time when the most important work of the vast majority of his pulpy peers was well behind them, Pohl’s 21st solely-authored or co-authored novel constitutes the most unlikely story of a late blooming in the history of science fiction.

Described in the broadest strokes, Gateway sounds like the sort of rollicking space opera which John W. Campbell would have loved to publish back in the heyday of Astounding. In our solar system’s distant past, when the primitive ancestors of humanity had yet to discover fire, an advanced star-faring race, later to be dubbed the Heechee by humans, visited, only to abandon their bases an unknown period of time later. As humans begin to explore and settle the solar system in our own near future, they discover a deserted Heechee space station in an elliptical orbit around our sun. They find that the station still contains bays full of hundreds of small spaceships, and discover the hard way that, at the press of a mysterious button, these spaceships sweep their occupants away on a non-negotiable faster-than-light journey to some other corner of the galaxy, then (hopefully) back to Earth at the press of another button; for this reason, they name the station Gateway, as in, “Gateway to the Stars.” Many of the destinations the spaceships visit are pointless; some, such as the interior of a black hole, are deadly. Sometimes, though, the spaceships travel to habitable planets and/or to planets containing other artifacts of Heechee technology, worth a pretty penny to scientists, engineers, and collectors back on Earth.

Earth itself is not in very good shape socially, culturally, or environmentally. Overpopulation and runaway capitalism have all but ruined the planet and created an underclass of have-nots who make up the vast majority of the population, working in unappetizing industries like “food shale mines.” The so-called Gateway Corporation, which has taken charge of the station, runs a lottery for people interested in climbing into a Heechee spaceship, pressing a button, and seeing where it takes them. Possibly they’ll end up rich; more likely, they’ll wind up dead, their bodies left to decay hundreds of light years from home. But, conditions being what they are among the teeming masses, there’s no shortage of volunteers ready and willing to take such a long shot. These intrepid — or, rather, desperate — explorers are known as the Gateway “prospectors.”


That, then, is the premise — a premise offering a universe of possibility to any writer with an ounce of the old pulpy space-opera spirit. Who are (or were) the Heechee? Why did they disappear? Did they intend for humans to discover their technology and start using it to explore the galaxy, or is that just a happy (?) accident? Will the two races meet someday? Or, if you like, table all those Big Mysteries for some series finale off in the far distance. Just the premise of flying off to parts unknown in all these Heechee spaceships admits of an infinite variety of adventures. Gene Roddenberry may have once famously pitched Star Trek as “Wagon Train to the Stars,” but the starship Enterprise has got nothing on this idea.

Here’s the thing, though: having come up with this spectacular idea that the likes of a Doc Smith could have spent an entire career milking, Frederik Pohl perversely refused to turn it into the straightforward tales of interstellar adventure that it was crying out to become. Gateway engages with it instead only in the most subversively oblique fashion. Half of the novel consists of a series of therapy sessions involving a robot psychologist and a Gateway prospector named Robinette Broadhead who’s neither conventionally adventurous nor even terribly likable. Robinette is the only survivor — under somewhat suspicious circumstances — of a recent five-person prospecting expedition. He’s now rich, but he’s also a deeply damaged soul, just one of the many who inhabit Gateway, a rather squalid place beset by rampant drug abuse, a symptom of the literal dead-enders who populate it between prospecting voyages. We spend far more time exploring the origins and outcomes of Robinette’s various psycho-sexual hangups than we do gallivanting about the stars. It’s as if we wandered into a Star Trek movie and got an Ingmar Bergman film that just happens to be set in space instead. Gateway is a shameless bait-and-switch of a novel. Robinette Broadhead, I’m afraid, lost his sense of wonder a long time ago, and it seems that he took Frederik Pohl’s as well.

The best way to understand Gateway may be through the lens of the times in which it was written: this is very much a novel of the 1970s, that long, hazy morning after to the rambunctious 1960s. The counterculture of the earlier decade had focused on collective struggles for social justice, but the 1970s turned inward to focus on the self. Images of feminist activists like Betty Friedan shouting through bullhorns at rallies were replaced in the media landscape with Mary Richards, The Mary Tyler Moore Show’s career gal who really did have it all; rollicking songs of mass protest were replaced by the navel-gazing singer-songwriter movement; the term Me Generation was coined, and suddenly everyone seemed to be in therapy of one kind or another, trying to sort out their personal issues instead of trying to fix society writ large. Meanwhile a pair of global oil crises, acid rain, and the thick layer of smog that hovered continually over Hollywood — the very city of dreams itself — were driving home for the first time what a fragile place this planet of ours actually is. Oh, well… on the brighter side, if you were into that sort of thing, lots of people were having lots and lots of casual sex, still enjoying the libertine sexual mores of the 1960s before the specter of AIDS would rear its head and put an end to all that as well in the following decade.

It’s long been a truism among science-fiction critics that this genre which is ostensibly about our many possible futures usually has far more interesting things to say about the various presents that create it. And nowhere is said truism more true than in the case of Gateway. For better or for worse, all of the aspects of fashionable 1970s culture which I’ve just mentioned fairly leap off its pages: the therapy and accompanying obsessive self-examination, the warnings about ecology and environment, the sex. It was so in tune with its times that the taste-makers of science fiction, who so desperately wanted their favored literary genre to be relevant, able to hold its head up proudly alongside any other, rewarded the novel mightily. Gateway won pretty much everything it was possible for a science-fiction novel to win, including its year’s Hugo and Nebula, the most prestigious awards in the genre; it sold far better than anything else Frederik Pohl had ever written; it made Pohl, four decades on from publishing that first awkward adolescent poem in Amazing Stories, a truly hot author at last.

The modern critical opinion tends to be more mixed. In fact, Gateway stands today as one of the more polarizing science-fiction novels ever written. Plenty of readers find its betrayal of its brilliant space-operatic setup unforgivable, and/or find its unlikable, self-absorbed protagonist insufferable, and/or find its swinging-70s social mores and dated ideas about technology simply silly. I confess that I myself largely belong to this group, although more for the latter two reasons than the first. Other readers, though, continue to find something hugely compelling about the novel that’s never quite come through for me. And yet even some of this group might agree that some aspects of Gateway haven’t aged terribly well. With some of the best writers in the world now embracing or at least acknowledging science fiction as a literary form as valid as any other, the desperate need to prove the genre’s literary bona fides at every turn that marked the 1960s and 1970s no longer exists. Gateway today feels like it’s trying just a bit too hard.

In at least one sense, Gateway did turn into a case of business as usual for a popular genre novel: Frederik Pohl published three sequels plus a collection of Gateway short stories between 1980 and 1990. These gradually peeled back the layers of mystery to reveal who the Heechee were, why they had once come to our solar system, and why they had left, using the same oblique approach that had so delighted and infuriated readers of the first book. None of them recaptured its lightning-in-a-bottle quality, however, and Pohl’s reputation gradually declined back to join the mid-tier authors with whom he had always been grouped prior to 1977. Perhaps in the long run that was simply where he belonged — a solid writer of readable, enjoyable fiction, but not one overly likely to shift any paradigms inside a reader’s psyche.

At any rate, such was the position in which Pohl found himself in early 1991, when Legend Entertainment came calling with a plan to make a computer game out of Gateway.


As a tiny developer and publisher in a fast-growing, competitive industry, Legend was always doomed to lead a somewhat precarious existence. Nevertheless, by the first months of 1991 they had managed to establish themselves fairly well as the only company still making boxed parser-driven adventure games — the natural heir to Infocom, having been co-founded by an ex-Infocom author named Bob Bates and publishing games written not only by him but also by Steve Meretzky, the most famous Infocom author of all. Spellcasting 101, the latter’s fantasy farce that had become Legend’s debut product the previous year, was selling quite well, and a sequel was already in the works, as was Timequest, a more serious-minded time-travel epic from Bates.

Taking stock of the situation, Legend realized that they needed to increase the number of games they cranked out in order to consolidate their position. Their problem was that they only had two game designers to call upon, both of whom had other distractions to deal with in addition to the work of designing new Legend adventure games: Bates was kept busy by the practical task of running the company, while Meretzky was working from home as a freelancer, and as such was also doing other projects for other companies. A Legend “Presentation to Stockholders” dated May of 1991 makes the need clear: “We need to find new game authors,” it states under the category of “Product Issues.” Luckily, there was already someone to hand — in fact, someone who had played a big part in drawing up the very document in question — who very much wanted to design a game.

Mike Verdu had been Bates’s partner in Legend Entertainment from the very beginning. Although not yet out of his twenties, he was already an experienced entrepreneur who had founded, run, and then sold a successful business. He still held onto his day job with ASC, the computer-services firm with many Defense Department contracts which had acquired the aforementioned business, even as he was devoting his evenings and weekends to Legend. Verdu:

I was the business guy. I was the CFO, the COO, the guy who went and got money and made sure we didn’t run out of it, who figured out the production plans for the products, tried to get them done on time, figured out the milestone plans and the software-development plans. I was a product guy inasmuch as I was helping to hire programmers and putting them to work, but I wasn’t a game designer, and I wasn’t writing code or being the creative director on products. And I really wanted to do that.

So, there was this moment when I had to decide between continuing to work with ASC and doing Legend part time or doing Legend full time. I decided to do Legend full time. But as a condition of that, I said, “I’d like to be a part of the teams that are actually making the games.”

But I didn’t believe I had the chops to create a whole world and write a game from scratch. I was sort of looking for a world I could tell a story in. So I talked to Bob about licensing. I was incredibly passionate about Frederik Pohl’s novels. So we talked about Gateway, and Bob made the connection and negotiated the deal. It went so much smoother and easier than I thought it would. I was so excited!

The negotiations were doubtless sped along by the fact that the bloom was already somewhat off the rose when it came to Gateway. The novel’s sequels had been regarded by even many fans of the original as a classic case of diminishing returns, and the whole body of work, which so oozed that peculiar malaise of the 1970s, felt rather dated when set up next to hipper, slicker writers of the 1980s like William Gibson. Nobody, in short, was clamoring to license Gateway for much of anything by this point, so a deal wasn’t overly hard to strike.

Just like that, Mike Verdu had his world to design his game in, and Legend was about to embark on their first foray into a type of game that would come to fill much of their catalog in subsequent years: a literary license. For this first time out, they were fortunate enough to get the best kind of literary license, short of the vanishingly rare case of one where an active, passionate author is willing to serve as a true co-creator: the kind where the author doesn’t appear to be all that interested in or even aware of the project’s existence. Mike Verdu never met or even spoke to Frederik Pohl in the process of making what would turn out to be two games based on his novels. He got all the benefits of an established world to play in with none of the usual drawbacks of having to ask for approval on every little thing.

Yet the Gateway project didn’t remain Verdu’s baby alone for very long. Bates and Verdu, eager to expand their stable of game designers yet further, hit upon the idea of using it as a sort of training ground for other current Legend employees who, like Verdu, dreamed of breaking into a different side of the game-development business. Verdu agreed to divide his baby into three pieces, taking one for himself and giving the others to Glen Dahlgren, a Legend programmer, and to Michael Lindner, the company’s music-and-sound guru. All would work on their parts under the loose supervision of the experienced Bob Bates, who stood ready to gently steer them back on course if they started to stray. Verdu:

We learned how to write code. We learned the craft of interactive-fiction design from Bob, then we would huddle as a group and hash out the storylines and puzzles for our respective sections of the game, then try to tie them all together. That was one of the best times of my career, turning from a defense-industry executive into a game designer who could write code and bring a game to life. Magical… incredibly great!

You were writing, compiling, and testing in this constant iteration. You would write something, then you would see the results, then repeat. I think that was the most powerful flow state I’ve ever been in. Hours would just evaporate. I’d look up at four in the morning and there’d be nobody in the office: Good God, where did the last eight hours go? It was a wonderful creative process.

It was an unorthodox, perhaps even disjointed way to make a game, but the Legend Trade School for Game Design worked out beautifully. When it shipped in the summer of 1992, Gateway was by far the best thing Legend had done to that point: a big, generous, well-polished game, with lots to see and do, a nice balance between plot and free-form exploration, and meticulously fair puzzle design. It’s the adventure-game equivalent of a summer beach read, a page turner that just keeps rollicking along, ratcheting up the excitement all the while. It isn’t a hard game, but you wouldn’t want it to be; this is a game where you just want to enjoy the ride, not scratch your head for long periods of time over its puzzles. It even looks much better than the occasionally garish-looking Legend games which came before it, thanks to the company’s belated embrace of 256-color VGA graphics and their growing comfort working with multimedia elements.

You might already be sensing a certain incongruity between this description of Gateway the game and my earlier description of Gateway the novel. And, indeed, said incongruity is very much present. A conventional object-oriented adventure game is hardly the right medium for delving deep into questions of individual psychology. A player of a game needs a through line to follow, a set of concrete goals to achieve; this explains why adventure games share their name with adventure fiction rather than literary fiction. Do you remember how I described Gateway the novel as setting up a perfect space-opera premise, only to obscure it behind therapy sessions and a disjointed, piecemeal approach to its narrative? Well, Gateway the game becomes the very space opera that the novel seemed to promise us, only to jerk it away: a big galaxy-spanning romp that Doc Smith could indeed have been proud of. Mike Verdu, the designer most responsible for the overarching structure of the game, jettisoned Pohl’s sad-sack protagonist along with all of his other characters. He also dispensed with the foreground plot, such as it is, about personal guilt and responsibility that drives the novel. What he was left with was the glorious wide-frame premise behind it all.

The game begins with you, a lucky (?) lottery winner from the troubled Earth, arriving at Gateway Station to take up the job of prospector. In its first part, written by Mike Verdu, you acclimate to life on the station, complete your flight training, and go on your initial prospecting mission. In the second part, written by Michael Lindner, you tackle a collection of prospecting destinations in whatever order you prefer, visiting lots of alien environments and assembling clues about who the Heechee were and why they’ve disappeared. In part three, written by Glen Dahlgren, you have to avert a threat to Earth posed by another race of aliens known as the Assassins — that race being the reason, you’ve only just discovered to your horror, that the Heechee went into hiding in the first place. The plot as a whole is expansive and improbable and, yes, more than a little silly. In other words, it’s space opera at its best. There’s nothing wrong with a little pure escapism from time to time.

Gateway the game thus becomes, in my opinion anyway, an example of a phenomenon more common than one might expect in creative media: the adaptation that outdoes its source material. It doesn’t even try to carry the same literary or thematic weight that the novel rather awkwardly stumbles along under, but by way of compensation it’s a heck of a lot more fun. As an adaptation, it fails miserably if one’s criterion for success is capturing the unadulterated flavor and spirit of the source material. As a standalone adventure game, however, it’s a rollicking success.

Legend had signed a two-game deal with Frederik Pohl right from the start, and had always intended to develop a sequel to Gateway if its sales made that idea viable. And so, when the first Gateway sold a reasonable 35,000 units or so, Gateway II: Homeworld got the green light. Michael Lindner had taken on another project of his own by this point, so Mike Verdu and Glen Dahlgren divided the sequel between just the two of them, each taking two of the sequel’s four parts.

Reaching stores almost exactly one year after its predecessor, Gateway II became both the last parser-driven adventure Legend published and the last boxed game of that description from any publisher — a melancholy milestone for anyone who had grown up with Infocom and their peers during the previous decade. The text adventure would live on, but it would do so outside the conventional computer-game industry, in the form of games written by amateurs and moonlighters that were distributed digitally and usually given away rather than sold. Never again would anyone be able to make a living from text adventures.

As era enders go, though, Gateway II: Homeworld is pretty darn spectacular, with all the same strengths as its predecessor. In its climax, you finally meet the Heechee themselves on their hidden homeworld — thus the game’s subtitle — and save the Earth one final time while you’re at it. It’s striking to compare the driving plot of this game with the static collections of environments and puzzles that had been the text adventures of ten years before. The medium had come a long way from the days of Zork. This isn’t to say that Legend’s latter-day roller-coaster text adventures, sporting music, cut scenes, and heaps of illustrations, were intrinsically superior to the traditional approach — but they certainly were impressive in their degree of difference, and in how much fun they still are to play in their own way.

One thing that Zork and the Gateway games do share is the copious amounts of love and passion that went into making them. Unlike so many licensed games, the Gateway games were made for the right reasons, made by people who genuinely loved the universe of the novels and were passionate about bringing it to life in an interactive medium.

For Mike Verdu, Michael Lindner, and Glen Dahlgren, the Gateway games did indeed mark the beginning of new careers as game designers, at Legend and elsewhere. The story of Verdu, the business executive who became a game designer, is particularly compelling — almost as compelling, one might even say, as that of Frederik Pohl, the mid-tier author, agent, and editor who briefly became the hottest author in science fiction almost five decades after he decided to devote his life to his favorite literary genre, in whatever capacity it would have him. Both men’s stories remind us that, for the lucky among us at least, life is long, and as rich as we care to make it, and it’s a shame to spend it all doing just one thing.

Gateway and Gateway II: Homeworld in Pictures


Gateway employs Legend’s standard end-stage-commercial-text-adventure interface, with music and sound and graphics and several screen layouts to choose from, straining to satisfy everyone from the strongly typing-averse to the purists who still scoff at anything more elaborate than a simple stream of text and a blinking command prompt.

Mike Verdu wanted a license to give him an established world to play with. Having gotten his wish, he used it well. Gateway puts enormous effort into making its environment a rich, living place, building upon what is found in Frederik Pohl’s novels. Much of this has nothing to do with the puzzles or other gameplay elements; it’s there strictly to add to the experience as a piece of fiction. Thanks to an unlimited word count and heaps of new multimedia capabilities, it outdoes anything Infocom could ever have dreamed of doing in this respect.

We spend a big chunk of Gateway II in a strange alien spaceship — the classic “Big Dumb Object” science-fiction plot, reminding us not just of classic novels but of earlier text adventures like Infocom’s Starcross and Telarium’s adaptation of Rendezvous with Rama. In fact, there are some oddly specific echoes of the former game, such as a crystal rod and a sort of zoo of alien lifeforms to deal with. That said, you’ll never mistake one game for the other. Starcross is minimalist in spirit and presentation, a cerebral exercise in careful exploration and puzzle-solving, while Gateway II is just a big old fun-loving thrill ride, full of sound and color, that rarely slows down enough to let you take a breath. I love them both equally.

Many of the illustrations in Gateway II in particular really are lovely to look at, especially when one considers the paucity of resources at Legend’s disposal in comparison to bigger adventure developers like Sierra and LucasArts. There were obviously some fine artists employed by Legend, with a keen eye for doing more with less.

Some of the cut scenes in Gateway II are 3D-modeled. Such scenes were becoming more and more common in games by 1993, as computing hardware advanced and developers began to experiment with a groundbreaking product called 3D Studio. The 3D Revolution, which would change the look and to a large extent the very nature of games as the decade wore on, was already looming in the near distance.

The parser disappeared from Legend’s games not so much all at once as over a series of stages. By Gateway II, the last Legend game to be ostensibly parser-based, conversations and even some puzzles had become purely point-and-click affairs for the sake of convenience and variety. It already feels like you spend almost as much time mousing around as you do typing, even if you don’t choose to use the (cumbersome) onscreen menus of verbs and nouns to construct your commands for the parser. Having come this far, it was a fairly straightforward decision for Legend to drop the parser entirely in their next game. Thus do most eras end — not with a bang but with a barely recognized whimper. At least the parser went out on a high note…

(Sources: I find Frederik Pohl’s memoir The Way the Future Was, about his life spent in science fiction, more compelling than his actual fiction, as I do The Way the Future Blogs, an online journal which he maintained for the last five years or so of his life, filling it with precious reminiscences about his writing, his fellow authors, his nearly century-spanning personal life, and his almost equally lengthy professional career in publishing and fandom. I’m able to tell the Legend Entertainment side of this story in detail thanks entirely to Bob Bates and Mike Verdu, both of whom sat down for long interviews, the former of whom also shared some documents from those times.

Feel free to download the games Gateway and Gateway II, packaged to be as easy as possible to get running under DOSBox on your modern computer, from right here. As noted in the article proper, they’re great rides that are well worth your time, two of the standout gems of Legend’s impressive catalog.)

 

Shades of Gray

Ladies and gentlemen, come and see. This isn’t a country here but an epic failure factory, an excuse for a place, a weed lot, an abyss for tightrope walkers, blindman’s bluff for the sightless saddled with delusions of grandeur, proud mountains reduced to dust dumped in big helpings into the cruciform maws of sick children who crouch waiting in the hope of insane epiphanies, behaving badly and swamped besides, bogged down in their devil’s quagmires. Our history is a corset, a stifling cell, a great searing fire.

— Lyonel Trouillot

What’s to be done about Haiti?

Generations have asked that question about the first and most intractable poster child for postcolonial despair, the poorest country in North or South America now and seemingly forever, a place whose corruption and futility manages to make the oft-troubled countries around it look like models of good governance. Nowhere does James Joyce’s description of history as “a nightmare from which I am trying to awake” feel more apt. Indeed, Haiti stands as perhaps the ultimate counterargument to the idealistic theory of history as progress. Here history really is just one damned thing after another — differing slightly in the details, but always the same at bottom.

But why should it be this way? What has been so perplexing and infuriating about Haiti for so long is that there seems to be no real reason for its constant suffering. Long ago, when it was still a French colony, it was known as the “Pearl of the Caribbean,” and was not only beautiful but rich; at the time of the American Revolution, it was richer than any one of the thirteen British American colonies. Those few who bother to visit Haiti today still call it one of the most beautiful places of all in the beautiful region that is the Caribbean. Today the Dominican Republic, the nation with which Haiti shares the island of Hispaniola, is booming, the most popular tourist spot in the Caribbean, with the fastest-growing economy anywhere in North or South America. But Haiti, despite being blessed with all the same geographic advantages, languishes in poverty next door, seething with resentment over its condition. It’s as if the people of Haiti have been cursed by one of the voodoo gods to whom some of them still pray to act out an eternal farce of chaos, despair, and senseless violence.

Some scenes from the life of Haiti…

…you are a proud Mandingue hunter in a hot West African land. But you’re not hunting. You’re being hunted — by slavers, both black and white. You run, and run, and run, until your lungs are near to bursting. But it’s no use. You’re captured and chained like an animal, and thrust into the dank hold of a sailing ship. Hundreds of your countrymen and women are here — hungry, thirsty, some beaten and maimed by your captors. All are terrified for themselves and their families, from whom they’ve been cruelly separated. Many die on the long voyage. But when it’s over, you wonder if perhaps they were the lucky ones…

The recorded history of the island of Hispaniola begins with the obliteration of the people who had always lived there. The Spanish conquistadors arrived on the island in the fifteenth century, bringing with them diseases against which the native population, known as the Taíno, had no resistance, along with a brutal regime of forced labor. Within two generations, the Taíno were no more. They left behind only a handful of words which entered the European vocabulary, like “hammock,” “hurricane,” “savanna,” “canoe,” “barbecue,” and “tobacco.” The Spanish, having lost their labor force, shrugged their shoulders and largely abandoned Hispaniola.

But in the ensuing centuries, Europeans developed a taste for sugar, which could be produced in large quantities only in the form of sugarcane, which in turn grew well only in tropical climates like those of the Caribbean. Thus the abandoned island of Hispaniola began to have value again. The French took possession of the western third of the island — the part known as Haiti today — with the Treaty of Ryswick, which ended the Nine Years’ War in 1697. France officially incorporated its new colony of Saint-Domingue on Hispaniola the same year.

Growing sugarcane demanded backbreaking labor under the hot tropical sun, work of a kind judged unsuitable for any white man. And so, with no more native population to enslave, the French began to import slaves from Africa. Their labor turned Saint-Domingue in a matter of a few decades from a backwater into one of the jewels of France’s overseas empire. In 1790, the year of the colony’s peak, 48,000 slaves were imported to join the 500,000 who were already there. It was necessary to import slaves in such huge numbers just to maintain the population in light of the appalling death toll of those working in the fields; little Saint-Domingue alone imported more slaves over the course of its history than the entirety of the eventual United States.

…you’re a slave, toiling ceaselessly in a Haitian cane field for your French masters. While they live bloated with wealth, you and your fellows know little but hardship and pain. Brandings, floggings, rape, and killing are everyday events. And for the slightest infraction, a man could be tortured to death by means limited only by his owners’ dark imaginations. What little comfort you find is in the company of other slaves, who, at great risk to themselves, try to keep the traditions of your lost homeland alive. And there is hope — some of your brothers could not be broken, and have fled to the hills to live free. These men, the Maroons, are said to be training as warriors, and planning for your people’s revenge. Tonight, you think, under cover of darkness, you will slip away to join them…

The white masters of Saint-Domingue, who constituted just 10 percent of the colony’s population, lived in terror of the other 90 percent, and this fear contributed to the brutality with which they punished the slightest sign of recalcitrance on the part of their slaves. Further augmenting their fears of the black Other was the slaves’ foreboding religion of voodoo: a blending of the animistic cults they had brought with them from tribal Africa with the more mystical elements of Catholicism — all charms and curses, potions and spells, trailing behind it persistent rumors of human sacrifice.

Even very early in the eighteenth century, some slaves managed to escape into the wilderness of Hispaniola, where they formed small communities that the white men found impossible to dislodge. Organized resistance, however, took a long time to develop.

Legend has it that the series of events which would result in an independent nation on the western third of Hispaniola began on the night of August 21, 1791, when a group of slave leaders secretly gathered at a hounfour — a voodoo temple — just outside the prosperous settlement of Cap‑Français. Word of the French Revolution had reached the slaves, and, with mainland France in chaos, the time seemed right to strike here in the hinterlands of the empire. A priestess slit the throat of a sacrificial pig, and the head priest said that the look and taste of the pig’s blood indicated that Ogun and Ghede, the gods of war and death respectively, wanted the slaves to rise up. Together the leaders drank the blood under a sky that suddenly broke into storm, then sneaked back onto their individual plantations at dawn to foment revolution.

That, anyway, is the legend. There’s good reason to doubt whether the gathering at the hounfour actually happened, but the revolution certainly did.

…you are in the middle of a bloody revolution. You are a Maroon, an ex-slave, fighting in the only successful slave revolt in history. You have only the most meager weapons, but you and your comrades are fighting for your very lives. There is death and destruction all around you. Once-great plantation houses lie in smouldering ruins. Corpses, black and white, litter the cane fields. Ghede walks among them, smiling and nodding at his rich harvest. He sees you and waves cheerfully…

The proudest period of Haiti’s history — the one occasion on which Haiti actually won something — began before a nation of that name existed, when the slaves of Saint-Domingue rose up against their masters, killing or driving them off their plantations. After the French were dispensed with, the ex-slaves continued to hold their ground against Spanish and English invaders who, concerned about what an example like this could mean for other colonies, tried to bring them to heel.

In 1798, a well-educated, wily former slave named Toussaint Louverture consolidated control of the now-former French colony. He spoke both to his own people and to outsiders using the language of the Enlightenment, drawing from the American Declaration of Independence and the French Declaration of the Rights of Man and the Citizen, putting a whole new face on this bloody revolution that had supposedly been born at a voodoo hounfour on a hot jungle night.

Toussaint Louverture was frequently called the black George Washington in light of the statesmanlike role he played for his people. He certainly looked the part. Would Haiti’s history have been better had he lived longer? We can only speculate.

…and you are battling Napoleon’s armies, Europe’s finest, sent to retake the jewel of the French empire. You have few resources, but you fight with extraordinary courage. Within two years, sixty thousand veteran French troops have died, and your land is yours again. The French belong to Ghede, who salutes you with a smirk…

Napoleon had now come to power in France, and was determined to reassert control over his country’s old empire even as he set about conquering a new one. In 1802, he sent an army to retake the colony of Saint-Domingue. Toussaint Louverture was tricked, captured, and shipped to France, where he soon died in a prison cell. But his comrades in arms, helped along by a fortuitous outbreak of yellow fever among the French forces and by a British naval blockade stemming from the wars back in Europe, defeated Napoleon’s finest definitively in November of 1803. The world had little choice but to recognize the former colony of Saint-Domingue as a predominantly black independent nation-state, the first of its type.

With Louverture dead, however, there was no one to curb the vengeful instincts of the former slaves who had defeated the French after such a long, hard struggle. It was perfectly reasonable that the new nation would take for its name Haiti — the island of Hispaniola’s name in the now-dead Taíno language — rather than the French appellation of Saint-Domingue. Less reasonable were the words of independent Haiti’s first leader, and first in its long line of dictators, Jean-Jacques Dessalines, who said that “we should use the skin of a white man as a parchment, his skull for an inkwell, his blood for ink, and a bayonet for a pen.” True to his words, he proceeded to carry out systematic genocide on the remaining white population of Haiti, destroying in the process all of the goodwill that had accrued to the new country among progressives and abolitionists in the wider world. His vengeance cost Haiti both much foreign investment that might otherwise have been coming its way and the valuable contribution the more educated remaining white population, by no means all of whom had been opposed to the former slaves’ cause, might have been able to make to its economy. A precedent had been established which holds to this day: of Haiti being its own worst enemy, over and over again.

…a hundred years of stagnation and instability flash by your eyes. As your nation’s economic health declines, your countrymen’s thirst for coups d’etat grows. Seventeen of twenty-four presidents are overthrown by guile or force of arms, and Ghede’s ghastly armies swell…

So, Haiti, having failed from the outset to live up to the role many had dreamed of casting it in as the first enlightened black republic, remained poor and inconsequential, mired in corruption and violence, as its story devolved from its one shining moment of glory into the cruel farce it remains to this day. The arguable lowlight of Haiti’s nineteenth century was the reign of one Faustin Soulouque, who had himself crowned Emperor Faustin I — emperor of what? — in 1849. American and European cartoonists had a field day with the pomp and circumstance of Faustin’s “court.” He was finally exiled to Jamaica in 1859, after he had tried and failed to invade the Dominican Republic (an emperor has to start somewhere, right?), extorted money from the few well-to-do members of Haitian society and defaulted on his country’s foreign debt in order to finance his palace, and finally gotten himself overthrown by a disgruntled army officer. Like the vast majority of Haiti’s leaders down through the years, he left his country in even worse shape than he found it.

Haiti’s Emperor Faustin I was a hit with the middle-brow reading public in the United States and Europe.

…you are a student, protesting the years-long American occupation of your country. They came, they said, to thwart Kaiser Wilhelm’s designs on the Caribbean, and to help the Haitian people. But their callous rule soon became morally and politically bankrupt. Chuckling, Ghede hands you a stone and you throw it. The uprising that will drive the invaders out has begun…

In 1915, Haiti was in the midst of one of its periodic paroxysms of violence. Jean Vilbrun Guillaume Sam, the country’s sixth president in the last four years, had managed to hold the office for just five months when he was dragged by a mob out of the French embassy in which he had taken refuge and torn limb from limb in the street. The American ambassador to Haiti, feeling that the country had descended into a state of complete anarchy that could spread across the Caribbean, pleaded with President Woodrow Wilson to intervene. Fearing that Germany and its allies might exploit this chaos on the United States’s doorstep if and when his own country should enter the First World War on the opposing side, Wilson agreed. On July 28, 1915, a small force of American sailors occupied the Haitian capital of Port-au-Prince almost without firing a shot — a far cry from Haiti’s proud struggle for independence against the French. Haiti was suddenly a colony again, although its new colonizers did promise that the occupation was temporary. It was to last just long enough to set the country on its feet and put a sound system of government in place.

When the Americans arrived in Haiti, they found its people’s lives not all that much different from the way they had lived at the time of Toussaint Louverture. Here we see the capital city of Port-au-Prince, the most “developed” place in the country.

The American occupation wound up lasting nineteen years, during which the occupiers did much practical good in Haiti. They paved more than a thousand miles of roadway; built bridges and railway lines and airports and canals; erected power stations and radio stations, schools and hospitals. Yet, infected with the racist attitudes toward their charges that were all too typical of the time, they failed at the less concrete tasks of instilling a respect for democracy and the rule of law. They preferred to make all the rules themselves by autocratic decree, giving actual Haitians only a token say in goings-on in their country. This prompted understandable anger and a sort of sullen, passive resistance among Haitians to all of the American efforts at reform, occasionally flaring up into vandalism and minor acts of terrorism. When the Americans, feeling unappreciated and generally hard-done-by, left Haiti in 1934, it didn’t take the country long to fall back into the old ways. Within four years President Sténio Vincent had declared himself dictator for life. But he was hardly the only waxing power in Haitian politics.

…a tall, ruggedly handsome black man with an engaging smile.

He is speaking to an assembled throng in a poverty-stricken city neighborhood. He tells moving stories about his experiences as a teacher, journalist, and civil servant. You admire both his skillful use of French and Creole, and his straightforward ideas about government. With eloquence and obvious sincerity, he speaks of freedom, justice and opportunity for all, regardless of class or color. His trenchant, biting criticisms of the establishment delight the crowd of longshoremen and laborers.

“Latin America and the Caribbean already have too many dictators,” he says. “It is time for a truly democratic government in Haiti.” The crowd roars out its approval…

The aspect of Haitian culture which had always baffled the Americans the most was the fact that this country whose population was 99.9 percent black was nevertheless riven by racism as pronounced as anywhere in the world. The traditional ruling class was the mulattoes: Haitians who could credit their lighter skin to white blood dating back to the old days of colonization, and/or to the fact that they and their ancestors hadn’t spent long years laboring in the sun. They made up perhaps 10 percent of the population, and spoke and governed in French. The rest of the population was made up of the noir Haitians: the darker-skinned people who constituted the working class. They spoke only Haitian Creole for the most part, and thus literally couldn’t understand most of what their country’s leaders said. In the past, it had been the mulattoes who killed one another to determine who ruled Haiti, while the noir Haitians just tried to stay out of the way.

In the 1940s, however, other leaders came forward to advance the cause of the “black” majority of the population; these leaders became known as the noiristes. Among the most prominent of them was Daniel Fignolé, a dark-skinned Haitian born, like most of his compatriots, into extreme poverty in 1913. Unlike most of them, he managed to educate himself by dint of sheer hard work, became political at the sight of the rampant injustice and corruption all around him, and came to be known as the “Moses of Port-au-Prince” for the fanatical loyalty he commanded among the stevedores, factory workers, and other unskilled laborers in and around the capital. Fignolé emphasized again and again that he was not a Marxist — an ideology that had been embraced by some of the mulattoes and was thus out of bounds for any good noiriste. Yet he did appropriate the Marxist language of proletariat and bourgeoisie, and left no doubt which side of that divide he was fighting for. For years, he remained an agitating force in Haitian politics without ever quite breaking through to real power. Then came the tumultuous year of 1957.

Daniel Fignolé, the great noiriste advocate for the working classes of Haiti.

…but you’re now a longshoreman in Port-au-Prince, and your beloved Daniel Fignolé has been ousted after just nineteen days as Provisional President. Rumors abound that he has been executed by Duvalier and his thugs. You’re taking part in a peaceful, if noisy, demonstration demanding his return. Suddenly, you’re facing government tanks and troops. Ghede rides on the lead tank, laughing and clapping his hands in delight. You shout your defiance and pitch a rock at the tank. The troops open fire, and machine-gun bullets rip through your chest…

One Paul Magloire, better known as Bon Papa, had been Haiti’s military dictator since 1950. The first few years of his reign had gone relatively well; his stridently anticommunist posturing won him some measure of support from the United States, and Haiti briefly even became a vacation destination to rival the Dominican Republic among sun-seeking American tourists. But when a devastating hurricane struck Hispaniola in 1954 and millions of dollars in international aid disappeared in inimitable Haitian fashion without ever reaching the country’s people, the mood among the elites inside the country who had been left out of that feeding frenzy began to turn against Bon Papa. On December 12, 1956, he resigned his office by the hasty expedient of jumping into an airplane and getting the hell out of Dodge before he came to share the fate of Jean Vilbrun Guillaume Sam. The office of the presidency, a hot potato if ever there was one, then passed through three more pairs of hands in the next six months, while an election campaign to determine Haiti’s next permanent leader took place.

Of course, in Haiti election campaigns were fought with fists, clubs, knives, guns, bombs, and, most of all, rampant, pervasive corruption at every level. Still, in a rare sign of progress of a sort in Haitian politics, the two strongest candidates were both noiristes promising to empower the people rather than the mulatto elites. They were Daniel Fignolé and François Duvalier, the latter a frequent comrade-in-arms of the former during the struggles of the previous twenty years who had now become a rival. Duvalier was an unusually quiet, even diffident-seeming personality by the standards of Haitian politics, so much so that many doubted his mental fortitude and intelligence alike. But he commanded enormous loyalty in the countryside, where he had worked for years as a doctor, often in tandem with American charitable organizations. Meanwhile Fignolé’s urban workers remained as committed to him as ever, and clashes between the supporters of the two former friends were frequent and often violent.

The workers around Port-au-Prince pledged absolute allegiance to Daniel Fignolé. He liked to call them his woulo konpresè — his “steamrollers,” always ready to take to the streets for a rally, a demonstration, or just a good old fight.

But then, on May 25, 1957, Duvalier unexpectedly threw his support behind a bid to make his rival the latest provisional president while the election ran its course, and Fignolé marched into the presidential palace surrounded by his cheering supporters. In a stirring speech on the palace steps, he promised a Haitian “New Deal” in the mold of Franklin D. Roosevelt’s American version.

The internal machinations of Haitian politics are almost impossible for an outsider to understand, but many insiders have since claimed that Duvalier, working in partnership with allies he had quietly made inside the military, had set Fignolé up for a fall, contriving to remove him from the business of day-to-day campaigning and thereby shore up his own support while making sure his presidency was always doomed to be a short one even by Haitian standards. At any rate, on the night of June 14, 1957 — just nineteen days after he had assumed the post — a group of army officers burst into Fignolé’s office, forced him to sign a resignation letter at gunpoint, and then tossed him into an airplane bound for the United States, exiling him on pain of death should he ever return to Haiti.

The deposing of Fignolé ignited another spasm of civil unrest among his supporters in Port-au-Prince, but their violence was met with even more violence by the military. There were reports of soldiers firing machine guns into the crowds of demonstrators. People were killed in the hundreds if not thousands in the capital, even as known agitators were rounded up en masse and thrown into prison and the offices of newspapers and magazines supporting Fignolé’s cause were closed and ransacked. On September 22, 1957, it was announced that François Duvalier had been elected president by the people of Haiti.

Inside the American government, opinion was divided about the latest developments in Haiti. The CIA was convinced that, despite Fignolé’s worrisome leftward orientation, his promised socialist democracy was a better, more stable choice for the United States’s close neighbor than a military junta commanded by Duvalier. The agency thus concocted a scheme to topple Duvalier’s new government, which was to begin with the assassination of his foreign minister, Louis Raimone, on an upcoming visit to Mexico City to negotiate an arms deal. But the CIA’s plans accidentally fell into the hands of one Austin Garriot, an academic doing research for his latest book in Washington, D.C. Garriot passed the plans on to J. Edgar Hoover’s FBI, which protested strongly that any attempt to overthrow Duvalier would be counter to international law — and which emphasized as well that Duvalier had declared himself to be strongly pro-American and anti-Soviet. With the top ranks of the FBI threatening to expose the illegal assassination plot to other parts of the government if the scheme was continued, the CIA had no choice but to quietly abandon it. Duvalier remained in power, unmolested.

He had promised his supporters a bright future…

…before a shining white city atop a hill. A sign welcomes you to Duvalierville. As you walk through the busy streets, well-dressed, cheerful people greet you as they pass by. You are struck by the abundance of goods and services offered, and the cleanliness and order that prevails. Almost every wall is adorned with a huge poster of a frail, gray-haired black man wearing a dark suit and horn-rimmed glasses.

Under the figure are the words: “Je suis le drapeau Haitien, Uni et Indivisible. François Duvalier.”

Everyone you ask about the man says the same thing: “We all love Papa Doc. He’s our president for life now, and we pray that he will live forever.”

Instead the leader who became known as Papa Doc — this quiet country doctor — became another case study in the banality of evil. During his fourteen years in power, an estimated 60,000 people were executed upon his personal extra-judicial decree. The mulatto elite, who constituted the last remnants of Haiti’s educated class and thus could be a dangerous threat to his rule, were a particular target; purge after purge cut a bloody swath through their ranks. When Papa Doc died in 1971, his son Jean-Claude Duvalier — Baby Doc — took over for another fifteen years. The world became familiar with the term “Haitian boat people” as the Duvaliers’ desperate victims took to the sea in the most inadequate of crafts. For them, any shred of hope for a better life was worth grasping at, no matter what the risk.

…you find yourself at sea, in a ragged little boat. Every inch of space is crowded with humanity. They’re people you know and care about deeply. You have no food or water, but you have something more precious — hope. In your native Haiti, your life has become intolerable. The poverty, the fear, the sudden disappearances of so many people — all have driven you to undertake this desperate journey into the unknown.

A storm arises, and your small boat is battered by the waves and torn apart. One by one, your friends, your brothers, your children slip beneath the roiling water and are lost. You cling to a rotten board as long as you can, but you know that your dream of freedom is gone. “Damn you, Duvalier,” you scream as the water closes over your head…



And now I have to make a confession: not quite all of the story I’ve just told you is true. That part about the CIA deciding to intervene in Haitian politics, only to be foiled by the FBI? It never happened (as far as I know, anyway). That part, along with all of the quoted text above, is rather lifted from a fascinating and chronically underappreciated work of interactive fiction from 1992: Shades of Gray.

Shades of Gray was the product of a form of collaboration which would become commonplace in later years, but which was still unusual enough in 1992 that it was remarked upon in virtually every mention of the game: the seven people who came together to write it had never met one another in person, only online. The project began when a CompuServe member named Judith Pintar, who had just won the 1991 AGT Competition with her CompuServe send-up Cosmoserve, put out a call for collaborators to make a game for the next iteration of the Competition. Mark Baker, Steve Bauman, Belisana, Hercules, Mike Laskey, and Cindy Yans wound up joining her, each writing a vignette for the game. Pintar then wrote a central spine to bind all these pieces together. The end result was so much more ambitious than anything else made for that year’s AGT Competition that organizer David Malmberg created a “special group effort” category just for it — which, as the only game in said category, it naturally won.

Yet Shades of Gray‘s unusual ambition wasn’t confined to its size or number of coauthors. It’s also a game with some serious thematic heft.

The idea of using interactive fiction to make a serious literary statement was rather in abeyance in the early 1990s. Infocom had always placed a premium on good writing, and had veered at least a couple of times into thought-provoking social and historical commentary with A Mind Forever Voyaging and Trinity. But neither of those games had been huge sellers, and Infocom’s options had always been limited by the need to please a commercial audience who mostly just wanted more fun games like Zork from them, not deathless literary art. Following Infocom’s collapse, amateur creators working with development systems like AGT and TADS likewise confined almost all of their efforts to making games in the mold of Zork — unabashedly gamey games, with lots of puzzles to solve and an all-important score to accumulate.

On the surface, Shades of Gray may not seem a radical departure from that tradition; it too sports lots of puzzles and a score. Scratch below the surface, though, and you’ll find a text adventure with more weighty thoughts on its mind than any since 1986’s Trinity (a masterpiece of a game which, come to think of it, also has puzzles and a score, thus proving these elements are hardly incompatible with literary heft).

It took the group who made Shades of Gray much discussion to arrive at its central theme, which Judith Pintar describes as one of “moral ambiguity”: “We wanted to show that life and politics are nuanced.” You are cast in the role of Austin Garriot, a man whose soul has become unmoored from his material being for reasons that aren’t ever — and don’t really need to be — clearly explained. With the aid of a gypsy fortune teller and her Tarot deck, you explore the impulses and experiences that have made you who you are, presented in the form of interactive vignettes carved from the stuff of symbolism and memory and history. Moral ambiguity does indeed predominate through echoes of the ancient Athens of Antigone, the Spain of the Inquisition, the United States of the Civil War and the Joseph McCarthy era. In the most obvious attempt to present contrasting viewpoints, you visit Sherwood Forest twice, playing once as Robin Hood and once as the poor, put-upon Sheriff of Nottingham, who’s just trying to maintain the tax base and instill some law and order.

> examine chest
The chest is solidly made, carved from oak and bound together with strips
of iron. It contains the villagers' taxes -- money they paid so you could
defend them against the ruffians who inhabit the woods. Unfortunately, the
outlaws regularly attack the troops who bring the money to Nottingham, and
generally steal it all.

Because you can no longer pay your men-at-arms, no one but you remains to protect the local villagers. The gang is taking full advantage of this, attacking whole communities from their refuge in Sherwood Forest. You are alone, but you still have a duty to perform.

Especially in light of the contrasting Robin Hood vignettes, it would be all too easy for a reviewer like me to neatly summarize the message of Shades of Gray as something like “there are two sides to every story” or “walk a mile in my shoes before you condemn me.” And, to be sure, that message is needed more than ever today, not least by the more dogmatic members of our various political classes. Yet to claim that that’s all there is to Shades of Gray is, I think, to do it a disservice. Judith Pintar, we should remember, described its central theme as moral ambiguity, which is a more complex formulation than just a generalized plea for empathy. There are no easy answers in Shades of Gray — no answers at all really. It tells us that life is complicated, and moral right is not always as easy to determine as we might wish.

Certainly that statement applies to the longstanding question with which I opened this article: What to do about Haiti? In the end, it’s the history of that long-suffering country that comes to occupy center stage in Shades of Gray‘s exploration of… well, shades of gray.

Haiti’s presence in the game is thanks to the contributor whose online handle was Belisana.[1] It’s an intriguingly esoteric choice of subject matter for a game written when and where this one was; of the contributors, only Mark Baker had any sort of personal connection to Haiti, having worked there for several months back in 1980. Belisana began her voyage into Haitian history with a newspaper clipping, chanced upon in a library, from that chaotic year of 1957. She included a lightly fictionalized version of it in the game itself:

U.S. AID TO HAITI REDUCED TWO-THIRDS

PORT-AU-PRINCE, Haiti, Oct. 8 — The United States government today shut down two-thirds of its economic aid to Haiti. The United States Embassy sources stressed that the action was not in reprisal against the reported fatal beating of a United States citizen last Sunday.

The death of Shibley Matalas was attributed by Col. Louis Raimone, Haitian Foreign Minister, to a heart attack. Three U.S. representatives viewed Mr. Matalas’ body. Embassy sources said they saw extensive bruises, sufficient to be fatal.

Through my own archival research, I’ve determined that in the game Belisana displaced the date of the actual incident by one week, from October 1 to October 8, and that she altered the names of the principals: Shibley Matalas was actually named Shibley Talamas, and Louis Raimone was Louis Roumain. The incident in question occurred after François Duvalier had been elected president of Haiti but three weeks before he officially assumed the office. The real wire report, as printed in the Long Beach Press Telegram, tells a story too classically Haitian not to share in full.

Yank in Haitian Jail Dies, U.S. Envoy Protests

Port-au-Prince, Haiti (AP) — Americans were warned to move cautiously in Haiti today after Ambassador Gerald Drew strongly protested the death of a U.S. citizen apparently beaten while under arrest. The death of Shibley Talamas, 30-year-old manager of a textile factory here, brought the United States into the turmoil which followed the presidential election Sept. 22 in the Caribbean Negro republic.

Drew protested Monday to Col. Louis Roumain, foreign minister of the ruling military junta. The ambassador later cautioned Americans to be careful and abide by the nation’s curfew.

Roumain had gone to the U.S. Embassy to present the government’s explanation of Talamas’ death, which occurred within eight hours of his arrest.

The ambassador said Roumain told him Talamas, son of U.S. citizens of Syrian extraction, was arrested early Sunday afternoon in connection with the shooting of four Haitian soldiers. The soldiers were killed by an armed band Sunday at Kenscoff, a mountain village 14 miles from this capital city.

Drew said Roumain “assured me that Talamas was not mistreated.”

While being questioned by police, Talamas tried to attack an officer and to reach a nearby machine gun, Roumain told Drew. He added that Talamas then was handcuffed and immediately died of a heart attack.

The embassy said three reliable sources reported Talamas was beaten sufficiently to kill him.

One of these sources said Talamas’ body bore severe bruises about the legs, chest, shoulders, and abdomen, and long incisions that might have been made in an autopsy.

A Haitian autopsy was said to have confirmed that Talamas died of a heart attack. The location of the body remained a mystery. It was not delivered immediately to relatives.

Talamas, 300-pound son of Mr. and Mrs. Antoine Talamas, first was detained in the suburb of Petionville. Released on his promise to report later to police, he surrendered to police at 2 p.m. Sunday in the presence of two U.S. vice-consuls. His wife, Frances Wilpula Talamas, formerly of Ashtabula, Ohio, gave birth to a child Sunday.

Police said they found a pistol and shotgun in Talamas’ business office. Friends said he had had them for years.

Before seeing Roumain Monday, Drew tried to protest to Brig. Gen. Antonio Kébreau, head of the military junta, but failed in the attempt. An aide told newsmen that Kébreau could not see them because he had a “tremendous headache.”

Drew issued a special advisory to personnel of the embassy and U.S. agencies and to about 400 other Americans in Haiti. He warned them to stay off the streets during the curfew — 8 p.m. to 4 a.m. — except for emergencies and official business.

Troops and police have blockaded roads and sometimes prevented Americans getting to and from their homes. Americans went to their homes long ahead of the curfew hour Monday night. Some expressed fear that Talamas’ death might touch off other incidents.

Calm generally prevailed in the country. Police continued to search for losing presidential candidate Louis Déjoie, missing since the election. His supporters have threatened violence and charged that the military junta rigged the election for Dr. François Duvalier, a landslide winner in unofficial returns.

Official election results will be announced next Tuesday. Duvalier is expected to assume the presidency Oct. 14.

The Onion, had it existed at the time, couldn’t have done a better job of satirizing the farcical spectacle of a Haitian election. And yet all this appeared in a legitimate news report, from the losing candidate who mysteriously disappeared to the prisoner who supposedly dropped dead of a heart attack as soon as his guards put the handcuffs on him — not to mention the supreme leader with a headache, which might just be my favorite detail of all. Again: what does one do with a place like this, a place so corrupt for so long that corruption has become inseparable from its national culture?

But Shades of Gray is merciless. In the penultimate turn, it demands that you answer that question — at least this one time, in a very specific circumstance. Still playing the role of the hapless academic Austin Garriot, you’ve found a briefcase with all the details of the CIA’s plot to kill the Haitian foreign minister and initiate a top-secret policy of regime change in the country. The CIA’s contracted assassin, the man who lost the briefcase in the first place, is a cold fish named Charles Calthrop. He’s working together with Michel Matalas, the vengeance-seeking brother of the recently deceased Shibley Matalas (né Talamas), and David Thomas, the CIA’s bureau chief in Haiti; they all want you to return the briefcase to them and forget that you ever knew anything about it. But two FBI agents, named Smith and Wesson (ha, ha…), have gotten wind of the briefcase’s contents, and want you to give it to them instead so they can stop the conspiracy in its tracks.

So, you are indeed free to take the course of action I’ve already described: give the briefcase to the FBI, and thereby foil the plot and strike a blow for international law. This will cause the bloody late-twentieth-century history of Haiti that we know from our own timeline to play out unaltered, as Papa Doc consolidates his grip on the country unmolested by foreign interventions.

Evil in a bow tie: François Duvalier at the time of the 1957 election campaign. Who would have guessed that this unassuming character would become the worst single Haitian monster of the twentieth century?

Or you can choose not to turn over the briefcase, to let the CIA’s plot take its course. And what happens then? Well, this is how the game describes it…

Smith and Wesson were unable to provide any proof of the CIA’s involvement in Raimone’s killing, and they were censured by Hoover for the accusation.

The following Saturday, Colonel Louis Raimone died from a single rifle shot through the head as he disembarked from a plane in Mexico City. His assassin was never caught, nor was any foreign government ever implicated.

It was estimated that the shot that killed Raimone was fired from a distance of 450 yards, from a Lee Enfield .303 rifle. Very few professionals were capable of that accuracy over that distance; Charles Calthrop was one of the few, and the Lee Enfield was his preferred weapon.

Duvalier didn’t survive long as president. Without the riot equipment that Raimone had been sent to buy, he was unable to put down the waves of unrest that swept the country. The army switched its allegiance to the people, and he was overthrown in March 1958.

Duvalier lived out the rest of his life in exile in Paris, and died in 1964.

Daniel Fignolé returned to govern Haiti after Duvalier was ousted, and introduced an American-style democracy. He served three 5-year terms of office, and was one of Kennedy’s staunchest allies during the Cuban missile crisis. He is still alive today, an elder statesman of Caribbean politics.

His brother’s death having been avenged, Michel Matalas returned to his former job as a stockman in Philadelphia. He joined the army and died in Vietnam in 1968. His nephew, Shibley’s son Mattieu, still lives in Haiti.

David Thomas returned to Haiti in his role as vice-consul, and became head of the CIA’s Caribbean division. He provided much of the intelligence that allowed Kennedy to bluff the Russians during the Cuban missile crisis before returning to take up a senior post at Langley.

What we have here, then, is a question of ends versus means. In the universe of Shades of Gray, at least, carrying out an illegal assassination and interfering in another sovereign country’s domestic politics leads to a better outcome than the more straightforwardly ethical course of abiding by international law.

Ever since it exited World War II as the most powerful country in the world, the United States has been confronted with similar choices time and time again. It’s for this reason that Judith Pintar calls her and her colleagues’ game “a story about American history as much as it is about Haiti.” While its interference in Haiti on this particular occasion does appear to have been limited or nonexistent in our own timeline, we know that the CIA has a long history behind it of operations just like the one described in the game, most of which didn’t work out nearly so well for the countries affected. And we also know that such operations were carried out by people who really, truly believed that their ends did justify their means. What can we do with all of these contradictory facts? Shades of gray indeed.

Of course, Shades of Gray is a thought experiment, not a serious study in geopolitical outcomes. There’s very good reason to question whether the CIA, who saw Daniel Fignolé as a dangerously left-wing leader, would ever have allowed him to assume power once again; having already chosen to interfere in Haitian politics once, a second effort to keep Fignolé out of power would only have been that much easier to justify. (This, one might say, is the slippery slope of interventionism in general.) Even had he regained and subsequently maintained his grip on the presidency, there’s reason to question whether Fignolé would really have become the mechanism by which true democracy finally came to Haiti. The list of Haitian leaders who once seemed similarly promising, only to disappoint horribly, is long; it includes the man who was arguably the greatest Haitian monster of all, the mild-mannered country doctor named François Duvalier, alongside such more recent disappointments as Jean-Bertrand Aristide. Perhaps Haiti’s political problems really are cultural problems, and as such are not amenable to fixing by any one person. Or, as many a stymied would-be reformer has speculated over the years, perhaps there really is just something in the water down there, or a voodoo curse in effect, or… something.

So, Shades of Gray probably won’t help us solve the puzzle of Haiti. It does, however, provide rich food for thought on politics and ethics, on the currents of history and the winds of fate — and it’s a pretty good little text adventure too. Its greatest weakness is the AGT development system that was used to create it, whose flexibility is limited and whose parser leaves much to be desired. “Given a better parser and the removal of some of the more annoying puzzles,” writes veteran interactive-fiction reviewer Carl Muckenhoupt, “this one would easily rate five stars.” I don’t actually find the puzzles all that annoying, but do agree that the game requires a motivated player willing to forgive and sometimes to work around the flaws of its engine. Any player willing to do so, though, will be richly rewarded by this milestone in interactive-fiction history, the most important game in terms of the artistic evolution of the medium to appear between Infocom’s last great burst of formal experiments in 1987 and Graham Nelson’s landmark Curses! in 1993. Few games in all the years of text-adventure history have offered more food for thought than Shades of Gray — a game that refuses to provide incontrovertible answers to the questions it asks, and is all the better for it.

In today’s Haiti, meanwhile, governments change constantly, but nothing ever changes. The most recent election as of this writing saw major, unexplained discrepancies between journalists’ exit polling and the official results, accompanied by the usual spasms of violence in the streets. Devastating earthquakes and hurricanes in recent years have only added to the impression that Haiti labors under some unique curse. On the bright side, however, it has been nearly a decade and a half since the last coup d’etat, which is pretty good by Haitian standards. You’ve got to start somewhere, right?

(Sources: the books Red & Black in Haiti: Radicalism, Conflict, and Political Change 1934-1957 by Matthew J. Smith, Haiti: The Tumultuous History — From Pearl of the Caribbean to Broken Nation by Philippe Girard, and Haiti: The Aftershocks of History by Laurent Dubois; Life of June 3, 1957; Long Beach Press Telegram of October 1, 1957. My huge thanks go to Judith Pintar for indulging me with a long conversation about Shades of Gray and other topics. You can read more of our talk elsewhere on this site.

You can download Shades of Gray from the IF Archive. You can play it using the included original interpreter through DOSBox, or, more conveniently, with a modern AGT interpreter such as AGiliTY or — best of all in my opinion — the multi-format Gargoyle.)

Footnotes

1 I do know her real name, but don’t believe it has ever been published in connection with Shades of Gray, and therefore don’t feel comfortable “outing” her here.
 

Agrippa (A Book of the Dead)

Is it the actor or the drama
Playing to the gallery?
Or is it but the character
Of any single member of the audience
That forms the plot
of each and every play?

“Hanging in the Gallery” by Dave Cousins

I was introduced to the contrast between art as artifact and art as experience by an episode of Northern Exposure, a television show which meant a great deal to my younger self. In “Burning Down the House,” Chris in the Morning, the deejay of the town of Cicely, Alaska, has decided to fling a living cow through the air using a trebuchet. Why? To create a “pure moment.”

“I didn’t know what you are doing was art,” says Shelley, the town’s good-hearted bimbo. “I thought it had to be in a frame, or like Jesus and Mary and the saints in church.”

“You know, Shell,” answers Chris in his insufferable hipster way, “the human soul chooses to express itself in a profound profusion of ways, not just the plastic arts.”

“Plastic hearts?”

“Arts! Plastic arts! Like sculpture, painting, charcoal. Then there’s music and poetry and dance. Lots of people, Susan Sontag notwithstanding, include photography.”

“Slam dancing?”

“Insofar as it reflects the slam dancer’s inner conflict with society through the beat… yeah, sure, why not? You see, Shelley, what I’m dealing with is the aesthetics of the transitory. I’m creating tomorrow’s memories, and, as memories, my images are as immortal as art which is concrete.”

Certain established art forms — those we generally refer to as the performing arts — have this quality baked into them in an obvious way. Keith Richards of the Rolling Stones once made the seemingly arrogant pronouncement that his band was “the greatest rock-and-roll band in the world” — but later modified his statement by noting that “on any given night, it’s a different band that’s the greatest rock-and-roll band in the world.” It might be the Rolling Stones playing before an arena full of 20,000 fans one night, and a few sweaty teenagers playing for a cellar full of twelve the next. It has nothing to do with the technical skill of the musicians; music is not a skills competition. A band rather becomes the greatest rock-and-roll band in the world the moment when the music goes someplace that transcends notes and measures. This is what the ancient Greeks called the kairos moment: the moment when past and future and thought itself fall away and there are just the band, the audience, and the music.

But what of what Chris in the Morning calls the “plastic arts,” those oriented toward producing some physical (or at least digital) artifact that will remain in the world long after the artist has died? At first glance, the kairos moment might seem to have little relevance here. Look again, though. Art must always be an experience, in the sense that there is a viewer, a reader, or a player who must experience it. And the meaning it takes on for that person — or lack thereof — will always be profoundly colored by where she was, who she was, when she was at the time. You can, in other words, find your own transitory transcendence inside the pages of a book just as easily as you can in a concert hall.

The problem with the plastic arts is that it’s too easy to destroy the fragile beauty of that initial impression. It’s too easy to return to the text trying to recapture the transcendent moment, too easy to analyze it and obsess over it and thereby to trample it into oblivion.

But what if we could jettison the plastic permanence from one of the plastic arts, creating something that must live or die — like a rock band in full flight or Chris in the Morning’s flying cow — only as a transitory transcendence? What if we could write a poem which the reader couldn’t return to and fuss over and pin down like a butterfly in a display case? What if we could write a poem that the reader could literally only read one time, that would flow over her once and leave behind… what? As it happens, an unlikely trio of collaborators tried to do just that in 1992.



Very early that year, a rather strange project prospectus made the rounds of the publishing world. Its source was Kevin Begos, Jr., who was known, to whatever extent he was known at all, as a publisher of limited-edition art books for the New York City gallery set. This new project, however, was something else entirely, and not just because it involved the bestselling science-fiction author William Gibson, who was already ascending to a position in the mainstream literary pantheon as “the prophet of cyberspace.”

Kevin Begos Jr., publisher of museum-quality, limited edition books, has brought together artist Dennis Ashbaugh (known for his large paintings of computer viruses and his DNA “portraits”) and writer William Gibson (who coined the term cyberspace, then explored the concept in his award-winning books Neuromancer, Count Zero, and Mona Lisa Overdrive) to produce a collaborative Artist’s Book.

In an age of artificial intelligence, recombinant genetics, and radical, technologically-driven cultural change, this “Book” will be as much a challenge as a possession, as much an enigma as a “story”.

The Text, encrypted on a computer disc along with a Virus Program written especially for the project, will mutate and destroy itself in the course of a single “reading”. The Collector/Reader may either choose to access the Text, thus setting in motion a process in which the Text becomes merely a Memory, or preserve the Text unread, in its “pure” state — an artifact existing exclusively in cyberspace.

Ashbaugh’s etchings, which allude to the potent allure and taboo of Genetic Manipulation, are both counterpoint and companion-piece to the Text. Printed on beautiful rag paper, their texture, odor, form, weight, and color are qualities unavailable to the Text in cyberspace. (The etchings themselves will undergo certain irreparable changes following their initial viewing.)

This Artist’s Book (which is not exactly a “book” at all) is cased in a wrought metal box, the Mechanism, which in itself becomes a crucial, integral element of the Text. This book-as-object raises unique questions about Art, Time, Memory, Possession—and the Politics of Information Control. It will be the first Digital Myth.

William Gibson had been friends with Dennis Ashbaugh for some time, ever since the latter had written him an admiring letter a few years after his landmark novel Neuromancer was published. The two men worked in different mediums, but they shared an interest in the effects that digital technology and computer networking were having on society. They corresponded regularly, although they met only once in person.

Yet it was neither Gibson the literary nor Ashbaugh the visual artist who conceived their joint project’s central conceit; it was instead none other than the author of the prospectus above, publisher Kevin Begos, Jr., another friend of Ashbaugh. Ashbaugh, who like Begos was based in New York City, had been looking for a way to collaborate with Gibson, and came to his publisher friend looking for ideas that might be compelling enough to interest such a high-profile science-fiction writer, who lived all the way over in Vancouver, Canada, just about as far away as it was possible to get from New York City and still be in North America. “The idea kind of came out of the blue,” says Begos: “to do a book on a computer disk that destroys itself after you read it.” Gibson, Begos thought, would be the perfect writer to whom to pitch such a project, for he innately understood the kairos moment in art; his writing was thoroughly informed by the underground rhythms of the punk and new-wave music scenes. And, being an acknowledged fan of experimental literature like that written by his hero William S. Burroughs, he wasn’t any stranger to conceptual literary art of the sort which this idea of a self-destroying text constituted.

Even so, Begos says that it took him and Ashbaugh a good six to nine months to convince Gibson to join the project. Even after agreeing to participate, Gibson proved to be the most passive of the trio by far, providing the poem that was to destroy itself early on but then doing essentially nothing else after that. It’s thus ironic and perhaps a little unfair that the finished piece remains today associated almost exclusively with the name of William Gibson. If one person can be said to be the mastermind of the project as a whole, that person must be Kevin Begos, Jr., not William Gibson.

Begos, Ashbaugh, and Gibson decided to call their art project Agrippa (A Book of the Dead), adopting the name Gibson gave to his poem for the project as a whole. Still, there was, as the prospectus above describes, much more to it than the single self-immolating disk which contained the poem. We can think of the whole artwork as being split into two parts: a physical component, provided by Ashbaugh, and a digital component, provided by Gibson, with Begos left to tie them together. Both components were intended to be transitory in their own ways. (Their transcendence, of course, must be in the eye of the beholder.)

Begos said that he would make and sell just 455 copies of the complete work, ranging in price from $450 for the basic edition to $7500 for a “deluxe copy in a bronze case.” The name of William Gibson lent what would otherwise have been just a wacky avant-garde art project a great deal of credibility with the mainstream press. It was discussed far and wide in the spring and summer of 1992, finding its way into publications like People, Entertainment Weekly, Esquire, and USA Today long before it existed as anything but a set of ideas inside the minds of its creators. A reporter for Details magazine repeated the description of a Platonic ideal of Agrippa that Begos relayed to him from his fond imagination:

‘Agrippa’ comes in a rough-hewn black box adorned with a blinking green light and an LCD readout that flickers with an endless stream of decoded DNA. The top opens like a laptop computer, revealing a hologram of a circuit board. Inside is a battered volume, the pages of which are antique rag-paper, bound and singed by hand.

Like a frame of unprocessed film, ‘Agrippa’ begins to mutate the minute it hits the light. Ashbaugh has printed etchings of DNA nucleotides, but then covered them with two separate sets of drawings: One, in ultraviolet ink, disappears when exposed to light for an hour; the other, in infrared ink, only becomes visible after an hour in the light. A paper cavity in the center of the book hides the diskette that contains Gibson’s fiction, digitally encoded for the Macintosh or the IBM.

[…]

The disk contained Gibson’s poem Agrippa: “The story scrolls on the screen at a preset pace. There is no way to slow it down, speed it up, copy it, or remove the encryption that ultimately causes it to disappear.” Once the text scrolled away, the disk got wiped, and that was that. All that would be left of Agrippa was the reader’s memory of it.

The three tricksters delighted in the many paradoxes of their self-destroying creation with punk-rock glee. Ashbaugh laughed about having to send two copies of it to the copyright office — because to register it for a copyright, you had to read it, but when you read it you destroyed it. Gibson imagined some musty academic of the future trying to pry the last copy out of the hands of a collector so he could read it — and thereby destroy it definitively for posterity. He described it as “a cruel joke on book collectors.”

As I’ve already noted, Ashbaugh’s physical side of the Agrippa project was destined to be overshadowed by Gibson’s digital side, to the extent that the former is barely remembered at all today. Part of the problem was the realities of working with physical materials, which conspired to undo much of the original vision for the physical book. The LCD readout and the circuit-board hologram fell by the wayside, as did Ashbaugh’s materializing and de-materializing pictures. (One collector has claimed that the illustrations “fade a bit” over time, but one does have to wonder whether even that is wishful thinking.)

But the biggest reason that one aspect of Agrippa so completely overshadowed the other was ironically the very thing that got the project noticed at all in so many mainstream publications: William Gibson’s fame in comparison to his unknown collaborators. People magazine didn’t even bother to mention that there was anything to Agrippa at all beyond the disk; “I know Ashbaugh was offended by that,” says Begos. Unfortunately obscured by this selective reporting was an intended juxtaposition of old and new forms of print, a commentary on evolving methods of information transmission. Begos was as old-school as publishers got, working with a manual printing press not very dissimilar from the one invented by Gutenberg; each physical edition of Agrippa was a handmade objet d’art. Yet all most people cared about was the little disk hidden inside it.

So, even as the media buzzed with talk about the idea of a digital poem that could only be read once, Begos had a hell of a time selling actual, physical copies of the book. As of December of 1992, a few months after it went to press, Begos said he still had about 350 copies of it sitting around waiting for buyers. It seems unlikely that most of these were ever sold; they were quite likely destroyed in the end, simply because the demand wasn’t there. Begos relates a typical anecdote:

There was a writer from a newspaper in the New York area who was writing something on Agrippa. He was based out on Long Island and I was based in Manhattan. He sent a photographer to photograph the book one afternoon. And he’d done a phone interview with me, though I don’t remember if he called Gibson or not. He checked in with me after the photographer had come to make sure that it had gone alright, and I said yes. I said, “Well aren’t you coming by; don’t you want to see the book?” He said “No; you know, the traffic’s really bad; you know, I just don’t have time.” He published his story the next day, and there was nothing wrong with it, but I found that very odd. It probably would have taken him an hour to drive in, or he could have waited a few days. But some people, they almost seemed resistant to seeing the whole package.

It’s inevitable, given the focus of this site, that our interest too will largely be captured by the digital aspect of the work. Yet the physical artwork — especially the full-fledged $7500 edition — certainly is an interesting creation in its own right. Rather than looking sleek and modern, as one might expect from the package framing a digital text from the prophet of cyberpunk, it looks old — mysteriously, eerily old. “There’s a little bit of a dark side to the Gibson story and the whole mystery about it and the whole notion of a book that destroys itself, a text that destroys itself after you read it,” notes Begos. “So I thought that was fitting.” It smacks of ancient tomes full of forbidden knowledge, like H.P. Lovecraft’s Necronomicon, or the Egyptian Book of the Dead to which its parenthetical title seems to pay homage. Inside was to be found abstract imagery and, in lieu of conventional text, long strings of numbers and characters representing the gene sequence of the fruit fly. And then of course there was the disk, nestled into its little pocket at the back.

The deluxe edition of Agrippa is housed in this box, made out of fiberglass and paper and “distressed” by hand.

The book is inside a shroud and another case. Its title has been burned into it by hand.

The book’s 64 hand-cut pages set long chunks of the fruit-fly genome alongside Dennis Ashbaugh’s images evocative of genetics — and occasional images, such as the pistol above, drawn from Gibson’s poem “Agrippa.”

The last 20 pages have been glued together — as usual, by hand — and a pocket cut out of them to hold the disk.

But it was, as noted, the contents of the disk that really captured the public’s imagination, and that’s where we’ll turn our attention now.

William Gibson’s contribution to the project is an autobiographical poem of approximately 300 lines and 2000 words. The poem called “Agrippa” is named after something far more commonplace than its foreboding packaging might imply. “Agrippa” was actually the brand name of a type of photo album which was sold by Kodak in the early- and mid-twentieth century. Gibson’s poem begins as he has apparently just discovered such an artifact — “a Kodak album of time-burned black construction paper” — in some old attic or junk room. What follows is a meditation on family and memory, on the roots of things that made William Gibson the man he is now. There’s a snapshot of his grandfather’s Appalachian sawmill; there’s a pistol from some semi-forgotten war; there’s a picture of downtown Wheeling, West Virginia, 1917; there’s a magazine advertisement for a Rocket 88; there’s the all-night bus station in Wytheville, Virginia, where a young William Gibson used to go to buy cigarettes for his mother, and from which a slightly older one left for Canada to avoid the Vietnam draft and take up the life of an itinerant hippie.

Gibson is a fine writer, and “Agrippa” is a lovely, elegiac piece of work which stands on its own just fine as plain old text on the page when it’s divorced from all of its elaborate packaging and the work of conceptual art that was its original means of transmission. (Really, it does: go read it.) It was also the least science-fictional thing he had written to date — quite an irony in light of all of the discussion that swirled around it about publication in the age of cyberspace. But then, the ironies truly pile up in layers when it comes to this artistic project. It was ironically appropriate that William Gibson, a famously private person, should write something so deeply personal only in the form of a poem designed to disappear as soon as it had been read. And perhaps the supreme irony was this disappearing poem’s interest in the memories encoded by permanent artifacts like an old photo album, an old camera, or an old pistol. This interest in the way that everyday objects come to embody our collective memory would go on to become a recurring theme in Gibson’s later, more mature, less overtly cyberpunky novels. See, for example, the collector of early Sinclair microcomputers who plays a prominent role in 2003’s Pattern Recognition, in my opinion Gibson’s best single novel to date.

But of course it wasn’t as if the public’s interest in Agrippa was grounded in literary appreciation of Gibson’s poem, any more than it was in artistic appreciation of the physical artwork that surrounded it. All of that was rather beside the point of the mainstream narrative — and thus we still haven’t really engaged with the reason that Agrippa was getting write-ups in the likes of People magazine. Beyond the star value lent the project by William Gibson, all of the interest in Agrippa was spawned by this idea of a text — it could have been any text packaged in any old way, if we’re being brutally honest — that consumed itself as it was being read. This aspect of it seemed to have a deep resonance with things that were currently happening in society writ large, even if few could clarify precisely what those things were in a world perched on the precipice of the Internet Age. And, for all that the poem itself belied his reputation as a writer of science fiction, this aspect of Agrippa also resonated with the previous work of William Gibson, the mainstream media’s go-to spokesman for the (post)modern condition.

Enter, then, the fourth important contributor to Agrippa, a shadowy character who has chosen to remain anonymous to this day and whom we shall therefore call simply the Hacker. He apparently worked at Bolt, Beranek, and Newman, a Cambridge, Massachusetts, consulting firm with a rich hacking heritage (Will Crowther of Adventure fame had worked there), and was a friend of Dennis Ashbaugh. Kevin Begos, Jr., contracted with him to write the code for Gibson’s magical disappearing poem. “Dealing with the hacker who did the program has been like dealing with a character from one of your books,” wrote Begos to Gibson in a letter.

The Hacker spent most of his time not coding the actual display of the text — a trivial exercise — but rather devising an encryption scheme to make it impenetrable to the inevitable army of hex-editor-wielding compatriots who would try to extract the text from the code surrounding it. “The encryption,” he wrote to Begos, “has a very interesting feature in that it is context-sensitive. The value, both character and numerical, of any given character is determined by the characters next to it, which from a crypto-analysis or code-breaking point of view is an utter nightmare.”
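
The Hacker’s actual code has never been published, so the best we can do is illustrate the principle. What follows is a minimal sketch in Python — a modern convenience, of course; the real thing was 1992 Macintosh software — of one hypothetical way a cipher can be made “context-sensitive”: each output byte is chained to the ciphertext byte before it, so that two identical plaintext characters almost never encrypt to the same value and simple frequency analysis gets nowhere. It’s an invented example of the general technique, not a reconstruction of the Agrippa scheme.

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    # Each output byte mixes the plaintext byte with a key byte AND with
    # the previous ciphertext byte, so every character's encoding depends
    # on its context.
    out, prev = bytearray(), 0
    for i, b in enumerate(plaintext):
        c = b ^ key[i % len(key)] ^ prev
        out.append(c)
        prev = c
    return bytes(out)

def decrypt(ciphertext: bytes, key: bytes) -> bytes:
    # The decoder sees the same previous ciphertext byte, so the chain
    # can be unwound in a single forward pass.
    out, prev = bytearray(), 0
    for i, c in enumerate(ciphertext):
        out.append(c ^ key[i % len(key)] ^ prev)
        prev = c
    return bytes(out)

poem = b"I hesitated before untying the bow..."
assert decrypt(encrypt(poem, b"AGRIPPA"), b"AGRIPPA") == poem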

The Hacker also had to devise a protection scheme to prevent people from simply copying the disk, then running the program from the copy. He tried to add digitized images of some of Ashbaugh’s art to the display, which would have had a welcome unifying effect on an artistic statement that too often seemed to reflect the individual preoccupations of Begos, Ashbaugh, and Gibson rather than a coherent single vision. In the end, however, he gave that scheme up as technically unfeasible. Instead he settled for a few digitized sound effects and a single image of a Kodak Agrippa photo album, displayed as the title screen before the text of the poem began to scroll. Below you can see what he ended up creating, exactly as someone foolhardy enough to put the disk into her Macintosh back in 1992 would have seen it.


The denizens of cyberspace, many of whom regarded William Gibson more as a god than a prophet, were naturally intrigued by Agrippa from the start, not least thanks to the implicit challenge it presented to crack the protection and thus turn this artistic monument to impermanence into its opposite. The Hacker sent Begos samples of the debates raging on the pre-World Wide Web Internet as early as April of 1992, months before the book’s publication.

“I just read about William Gibson’s new book Agrippa (The Book of the Dead),” wrote one netizen. “I understand it’s going to be published on disk, with a virus that prevents it from being printed out. What do people think of this idea?”

“I seem to recall reading that this stuff about the virus-loaded book was an April Fools joke started here on the Internet,” replied another. “But nobody’s stopped talk about it, and even Tom Maddox, who knows Gibson, seemed to confirm its existence. Will the person who posted the original message please confirm or confess? Was this an April Fools joke or not?”

The Tom Maddox in question, who was indeed personally acquainted with Gibson, replied that the disappearing text “was part of a limited-edition, expensive artwork that Gibson believes was totally subscribed before ‘publication.’ Someone will publish it in more accessible form, I believe (and it will be interesting to see what the cyberpunk audience makes of it — it’s an autobiographical poem, about ten pages long).”

“What a strange world we live in,” concluded another netizen. Indeed.

The others making Agrippa didn’t need the Hacker to tell them with what enthusiasm the denizens of cyberspace would attack his code, vying for the cred that would come with being the first to break it. John Perry Barlow, a technology activist and co-founder of the Electronic Frontier Foundation, told Begos that unidentified “friends of his vow to buy and then run Agrippa through a Cray supercomputer to capture the code and crack the program.”

And yet for the first few months after the physical book’s release it remained uncracked. The thing was just so darn expensive, and the few museum curators and rare-books collectors who bought copies neither ran in the same circles as the hacking community nor were likely to entrust their precious disks to one of them.

Interest in the digital component of Agrippa remained high in the press, however, and, just as Tom Maddox had suspected all along, the collaborators eventually decided to give people unwilling to spend hundreds or thousands of dollars on the physical edition a chance to read — and to hear — William Gibson’s poem through another ephemeral electronic medium. On December 9, 1992, the Americas Society of New York City hosted an event called “The Transmission,” in which the magician and comedian Penn Jillette read the text of the poem as it scrolled across a big screen, bookended by question-and-answer sessions with Kevin Begos, Jr., the only member of the artistic trio behind Agrippa to appear at the event. The proceedings were broadcast via a closed-circuit satellite hookup to, as the press release claimed, “a street-corner shopfront on the Lower East Side, the Michael Carlos Museum in Atlanta, the Kitchen in New York City, a sheep farm in the Australian Outback, and others.” Continuing with the juxtaposition of old and new that had always been such a big thematic part of the Agrippa project — if a largely unremarked one — the press release pitched the event as a return to the days when catching a live transmission of one form or another had been the only way to hear a story, an era that had been consigned to the past by the audio- and videocassette.

When did you last hear Hopalong Cassidy on his NBC radio program? When did you last read to your children around a campfire? Have you been sorry that your busy schedule prevented a visit to the elders’ mud hut in New Guinea, where legends of times past are recounted? Have you ever looked closely at your telephone cable to determine exactly how voices and images can come out of the tiny fibers?

Naturally, recording devices were strictly prohibited at the event. Agrippa was still intended to be an ephemeral kairos moment, just like the radio broadcasts of yore.

Of course, it had always been silly to imagine that all traces of the poem could truly be blotted from existence after it had been viewed and/or heard by a privileged few. After all, people reading it on their monitor screens at home could buy video cameras too. Far from denying this reality, Begos imagined an eventual underground trade in fuzzy Agrippa videotapes, much like the bootleg concert tapes traded among fans of Bob Dylan and the Grateful Dead. Continuing with the example set by those artists, he imagined the bootleg trade being more likely to help than to hurt Agrippa‘s cultural cachet. But it would never come to that — for, despite Begos’s halfhearted precautions, the Transmission itself was captured as it happened.

Begos had hired a trio of student entrepreneurs from New York University’s Interactive Television Program to run the technical means of transmission of the Transmission. They went by the fanciful names of “Templar, Rosehammer, and Pseudophred” — names that could have been found in the pages of a William Gibson novel, and that should therefore have set off warning bells in the head of one Kevin Begos, Jr. Sure enough, the trio slipped a videotape into the camera broadcasting the proceedings. The very next morning, the text of the poem appeared on an underground computer bulletin board called MindVox, preceded by the following introduction:

Hacked & Cracked by
-Templar-
Rosehammer & Pseudophred
Introduction by Templar

When I first heard about an electronic book by William Gibson… sealed in an ominous tome of genetic code which smudges to the touch… which is encrypted and automatically self-destructs after one reading… priced at $1,500… I knew that it was a challenge, or dare, that would not go unnoticed. As recent buzzing on the Internet shows, as well as many overt attempts to hack the file… and the transmission lines… it’s the latest golden fleece, if you will, of the hacking community.

I now present to you, with apologies to William Gibson, the full text of AGRIPPA. It, of course, does not include the wonderful etchings, and I highly recommend purchasing the original book (a cheaper version is now available for $500). Enjoy.

And I’m not telling you how I did it. Nyah.

As Matthew Kirschenbaum, the foremost scholar of Agrippa, points out, there’s a delicious parallel to be made with the opening lines of Gibson’s 1981 short story “Johnny Mnemonic,” the first fully realized piece of cyberpunk literature he or anyone else ever penned: “I put the shotgun in an Adidas bag and padded it out with four pairs of tennis socks, not my style at all, but that was what I was aiming for: If they think you’re crude, go technical; if they think you’re technical, go crude. I’m a very technical boy. So I decided to get as crude as possible.” Templar was happy to let people believe he had reverse-engineered the Hacker’s ingenious encryption, but in reality his “hack” had consisted only of a fortuitous job contract and a furtively loaded videotape. Whatever works, right? “A hacker always takes the path of least resistance,” said Templar years later. “And it is a lot easier to ‘hack’ a person than a machine.”

Here, then, is one more irony to add to the collection. Rather than John Perry Barlow’s Cray supercomputer, rather than some genius hacker Gibson would later imagine had “cracked the supposedly uncrackable code,” rather than the “international legion of computer hackers” which the journal Cyberreader later claimed had done the job, Agrippa was “cracked” by a cameraman who caught a lucky break. Within days, it was everywhere in cyberspace. Within a month, it was old news online.

Before Kirschenbaum uncovered the real story, it had indeed been assumed for years, even by the makers of Agrippa, that the Hacker’s encryption had been cracked, and that this had led to its widespread distribution on the Internet — led to this supposedly ephemeral text becoming as permanent as anything in our digital age. In reality, though, it appears that the Hacker’s protection wasn’t cracked at all until long after it mattered. In 2012, the University of Toronto sponsored a contest to crack the protection, which was won in fairly short order by one Robert Xiao. Without taking anything away from his achievement, it should be noted that he had access to resources — including emulators, disk images, and exponentially more sheer computing power — of which someone trying to crack the program on a real Macintosh in 1992 could hardly even have conceived. No protection is unbreakable, but the Hacker’s was certainly unbreakable enough for its purpose.

And so, with Xiao’s exhaustive analysis of the Hacker’s protection (“a very straightforward in-house ‘encryption’ algorithm that encodes data in 3-byte blocks”), the last bit of mystery surrounding Agrippa has been peeled away. How, we might ask at this juncture, does it hold up as a piece of art?

My own opinion is that, when divorced from its cultural reception and judged strictly as a self-standing artwork of the sort we might view in a museum, it doesn’t hold up all that well. This was a project pursued largely through correspondence by three artists who were all chasing somewhat different thematic goals, and it shows in the end result. It’s very hard to construct a coherent narrative of why all of these different elements are put together in this way. What do Ashbaugh’s DNA texts and paintings really have to do with Gibson’s meditation on family memory? (Begos made a noble attempt to answer that question at the Transmission, claiming that recordings of DNA strands would somehow become the future’s version of family snapshots — but if you’re buying that, I have some choice swampland to sell you.) And then, why is the whole thing packaged to look like H.P. Lovecraft’s Necronomicon? Rather than a unified artistic statement, Agrippa is a hodgepodge of ideas that too often pull against one another.

But is it really fair to divorce Agrippa so completely from its cultural reception all those years ago? Or, to put it another way, is it fair to judge Agrippa the artwork based solely upon Agrippa the slightly underwhelming material object? Matthew Kirschenbaum says that “the practical failure to realize much of what was initially planned for Agrippa allowed the project to succeed by leaving in its place the purest form of virtual work — a meme rather than an artifact.” He goes on to note that Agrippa is “as much conceptual art as anything else.” I agree with him on both points, as I do with the online commenter from back in the day who called it “a piece of emergent performance art.” If art truly lives in our memory and our consciousness, then perhaps our opinion of Agrippa really should encompass the whole experience, including its transmission and its reception. Certainly this is the theory that underlies the whole notion of conceptual art — whether the artwork in question involves flying cows or disappearing poems.

It’s ironic — yes, there’s that word again — to note that Agrippa was once seen as an ominous harbinger of the digital future in the way that it showed information, divorced from physical media, simply disappearing into the ether, when the reality of the digital age has led to exactly the opposite problem, with every action we take and every word we write online being compiled into a permanent record of who we supposedly are — a slate which we can never wipe clean. And this digital permanence has come to apply to the poem of “Agrippa” as well, which today is never more than a search query away. Gibson:

The whole thing really was an experiment to see just what would happen. That whole Agrippa project was completely based on “let’s do this. What will happen?” Something happens. “What’s going to happen next?”

It’s only a couple thousand words long, and dangerously like poetry. Another cool thing was getting a bunch of net-heads to sit around and read poetry. I sort of liked that.

Having it wind up in permanent form, sort of like a Chinese Wall in cyberspace… anybody who wants to can go and read it, if they take the trouble. Free copies to everyone. So that it became, really, at the last minute, the opposite of the really weird, elitist thing many people thought it was.

So, Agrippa really was as uncontrollable and unpredictable for its creators as it was for anyone else. Notably, nobody made any money whatsoever off it, despite all the publicity and excitement it generated. In fact, Begos calls it a “financial disaster” for his company; the fallout soon forced him to abandon publishing altogether.

“Gibson thinks of it [Agrippa] as becoming a memory, which he believes is more real than anything you can actually see,” said Begos in a contemporary interview. Agrippa did indeed become a collective kairos moment for an emerging digital culture, a memory that will remain with us for a long, long time to come. Chris in the Morning would be proud.

(Sources: the book Mechanisms: New Media and the Forensic Imagination by Matthew G. Kirschenbaum; Starlog of September 1994; Details of June 1992; New York Times of November 18 1992. Most of all, The Agrippa Files of The University of California Santa Barbara, a huge archive of primary and secondary sources dealing with Agrippa, including the video of the original program in action on a vintage Macintosh.)

 

The Games of Windows

There are two stories to be told about games on Microsoft Windows during the operating environment’s first ten years on the market. One of them is extremely short, the other a bit longer and far more interesting. We’ll dispense with the former first.

During the first half of the aforementioned decade — the era of Windows 1 and 2 — the big game publishers, like most of their peers making other kinds of software, never looked twice at Microsoft’s GUI. Why should they? Very few people were even using the thing.

Yet even after Windows 3.0 hit the scene in 1990 and makers of other kinds of software stampeded to embrace it, game publishers continued to turn up their noses. The Windows API made life easier in countless ways for makers of word processors, spreadsheets, and databases, allowing them to craft attractive applications with a uniform look and feel. But it certainly hadn’t been designed with games in mind; they were so far down on Microsoft’s list of priorities as to be nonexistent. Games were in fact the one kind of software in which uniformity wasn’t a positive thing; gamers craved diverse experiences. As a programmer, you couldn’t even force a Windows game to go full-screen. Instead you were stuck all the time inside the borders of the window in which it ran; this, needless to say, didn’t do much for immersion. It was true that Windows’s library for programming graphics, known as the Graphics Device Interface, or GDI, liberated programmers from the tyranny of the hardware — from needing to program separate modules to interact properly with every video standard in the notoriously diverse MS-DOS ecosystem. Unfortunately, though, GDI was slow; it was fine for business graphics, but unusable for most of the popular game genres.
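
To make the frustration concrete, here is a minimal sketch of the programming model in question, written as a complete C program against the modern Win32 declarations for brevity (the Windows 3 originals used slightly different type names). Every pixel goes through GDI into a bordered, movable window; nothing in the API of this era lets the program seize the whole screen.

#include <windows.h>

/* All rendering funnels through GDI calls against a device context;
   the environment, not the game, decides where the pixels land. */
LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
    switch (msg) {
    case WM_PAINT: {
        PAINTSTRUCT ps;
        HDC hdc = BeginPaint(hwnd, &ps);
        Rectangle(hdc, 20, 20, 220, 120);        /* device-independent... */
        TextOut(hdc, 40, 60, "Not a game", 10);  /* ...but slow */
        EndPaint(hwnd, &ps);
        return 0;
    }
    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProc(hwnd, msg, wp, lp);
}

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE hPrev, LPSTR cmd, int show)
{
    WNDCLASS wc = {0};
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = hInst;
    wc.hCursor       = LoadCursor(NULL, IDC_ARROW);
    wc.lpszClassName = "DemoClass";
    RegisterClass(&wc);

    /* A plain overlapped window: borders, title bar, and all. */
    CreateWindow("DemoClass", "Windowed, like it or not",
                 WS_OVERLAPPEDWINDOW | WS_VISIBLE,
                 CW_USEDEFAULT, CW_USEDEFAULT, 320, 240,
                 NULL, NULL, hInst, NULL);
    (void)hPrev; (void)cmd; (void)show;

    MSG m;
    while (GetMessage(&m, NULL, 0, 0) > 0) {
        TranslateMessage(&m);
        DispatchMessage(&m);
    }
    return 0;
}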

For all these reasons, game developers, alone among makers of software, stuck obstinately with MS-DOS throughout the early 1990s, even as everything else in mainstream computing went all Windows, all the time. It wouldn’t be until after the first decade of Windows was over that game developers would finally embrace it, helped along both by a carrot (Microsoft was finally beginning to pay serious attention to their needs) and a stick (the ever-expanding diversity of hardware on the market was making the MS-DOS bare-metal approach to programming untenable).

End of story number one.

The second, more interesting story about games on Windows deals with different kinds of games from the ones the traditional game publishers were flogging to the demographic who were happy to self-identify as gamers. The people who came to play these different kinds of games couldn’t imagine describing themselves in those terms — and, indeed, would likely have been somewhat insulted if you had suggested it to them. Yet they too would soon be putting in millions upon millions of hours every year playing games, albeit more often in antiseptic adult offices than in odoriferous teenage bedrooms. Whatever; the fact was, they were still playing games — often enough, in fact, to make Windows, that allegedly game-unfriendly operating environment, quite probably the most successful gaming platform of the early 1990s in terms of sheer number of person-hours spent playing. And all the while the “hardcore” gamers barely even noticed this most profound democratization of computer gaming that the world had yet seen.



Microsoft Windows, like its inspiration the Apple Macintosh, used what’s known as a skeuomorphic interface — an interface built out of analogues to real-world objects, such as paper documents, a desktop, and a trashcan — to present a friendlier face of computing to people who may have been uncomfortable with the blinking command prompt of yore. It thus comes as little surprise that most of the early Windows games were skeuomorphic as well, being computerized versions of non-threateningly old-fashioned card and board games. In this, they were something of a throwback to the earliest days of personal computing in general, when hobbyists passed around BASIC versions of these same hoary classics, whose simple designs constituted some of the only ones that could be made to fit into the minuscule memories of the first microcomputers. With Windows, it seemed, the old had become new again, as computer gaming started over to try to capture a whole new demographic.

The very first game ever programmed to run in Windows is appropriately prototypical. When Tandy Trower took over the fractious and directionless Windows project at Microsoft in January of 1985, he found that a handful of applets that weren’t, strictly speaking, a part of the operating environment itself had already been completed. These included a calculator, a rudimentary text editor, and a computerized version of a board game called Reversi.

Reversi is an abstract game for two players that looks a bit like checkers and plays like a faster-paced, simplified version of the Japanese classic Go. Its origins are somewhat murky, but it was first popularized as a commercial product in late Victorian England. In 1971, an enterprising Japanese businessman made a couple of minor changes to the rules of this game that had long been considered in the public domain, patented the result, and started selling it as Othello. Under this name, it enjoys modest worldwide popularity to this day. Under both of its names, it also became an early favorite on personal computers, where its simple rules and relatively constrained possibility space lent themselves well to the limitations of programming in BASIC on a 16 K computer; Byte magazine, the bible of early microcomputer hackers, published a type-in Othello as early as its October 1977 issue.
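
Those simple rules boil down to one test: a newly placed disc must flank a run of the opponent’s discs along some line, and flanked runs flip. Here is a hedged sketch of that scan in C, with board layout and names of our own choosing rather than anything from an actual implementation.

#define N 8

/* Board cells: 0 = empty, 1 = player one, 2 = player two. Returns the
   number of opposing discs that placing 'player' at (row, col) would
   flip along direction (dr, dc), or zero if the line is not flanked. */
int flips_in_direction(const int board[N][N], int player,
                       int row, int col, int dr, int dc)
{
    int opponent = 3 - player;
    int count = 0;
    int r = row + dr, c = col + dc;

    while (r >= 0 && r < N && c >= 0 && c < N && board[r][c] == opponent) {
        count++;
        r += dr;
        c += dc;
    }
    /* The run only flips if it is capped by one of our own discs. */
    if (r >= 0 && r < N && c >= 0 && c < N && board[r][c] == player)
        return count;
    return 0;
}

A move is legal only if this count, summed over all eight directions, comes out greater than zero; even a BASIC program on a 16 K machine could afford that loop.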

A member of the Windows team named Chris Peters had decided to write a new version of the game under its original (and non-trademarked) name of Reversi in 1984, largely as one of several experiments — proofs of concept, if you will — into Windows application programming. Tandy Trower then pushed to get some of his team’s experimental applets, among them Reversi, included with the first release of Windows in November of 1985:

When the Macintosh was announced, I noted that Apple bundled a small set of applications, which included a small word processor called MacWrite and a drawing application called MacPaint. In addition, Lotus and Borland had recently released DOS products called Metro and SideKick that consisted of a small suite of character-based applications that could be popped up with a keyboard combination while running other applications. Those packages included a simple text editor, a calculator, a calendar, and a business-card-like database. So I went to [Bill] Gates and [Steve] Ballmer with the recommendation that we bundle a similar set of applets with Windows, which would include refining the ones already in development, as well as a few more to match functions comparable to these other products.

Interestingly, MacOS did not include any full-fledged games among its suite of applets; the closest it came was a minimalist sliding-number puzzle that filled all of 600 bytes and a maze on the “Guided Tour of Macintosh” disk that was described as merely a tool for learning to use the mouse. Apple, whose Apple II was found in more schools and homes than businesses and who were therefore viewed with contempt by much of the conservative corporate computing establishment, ran scared from any association of their latest machine with games. But Microsoft, on whose operating system MS-DOS much of corporate America ran, must have felt they could get away with a little more frivolity.

Still, Windows Reversi didn’t ultimately have much impact on much of anyone. Reversi in general was a game more suited to the hacker mindset than the general public, lacking the immediate appeal of a more universally known design, while the execution of this particular version of Reversi was competent but no more. And then, of course, very few people bought Windows 1 in the first place.

For a long time thereafter, Microsoft gave little thought to making more games for Windows. Reversi stuck around unchanged in the only somewhat more successful Windows 2, and was earmarked to remain in Windows 3.0 as well. Beyond that, Microsoft had no major plans for Windows gaming. And then, in one of the stranger episodes in the whole history of gaming, they were handed the piece of software destined to become almost certainly the most popular computer game of all time, reckoned in terms of person-hours played: Windows Solitaire.

The idea of a single-player card game, perfect for passing the time on long coach or railway journeys, had first spread across Europe and then the world during the nineteenth century. The game of Solitaire — or Patience, as it is still more commonly known in Britain — is really a collection of many different games that all utilize a single deck of everyday playing cards. The overarching name is, however, often used interchangeably with the variant known as Klondike, by far the most popular form of Solitaire.

Klondike Solitaire, like the many other variants, has many qualities that make it attractive for computer adaptation on a platform that gives limited scope for programmer ambition. Depending on how one chooses to define such things, a “game” of Solitaire is arguably more of a puzzle than an actual game, and that’s a good thing in this context: the fact that this is a truly single-player endeavor means that the programmer doesn’t have to worry about artificial intelligence at all. In addition, the rules are simple, and playing cards are fairly trivial to represent using even the most primitive computer graphics. Unsurprisingly, then, Solitaire was another favorite among the earliest microcomputer game developers.
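
As a sketch of just how little the game demands of a programmer, consider the following C fragment; the names are ours, not taken from any actual implementation. A card fits in two bytes, and Klondike’s central rule (a tableau card goes onto the next-higher rank of the opposite color) is a two-line test.

/* One way to represent a deck with almost no memory: the kind of
   economy early Solitaire implementations relied on. */
typedef struct {
    unsigned char rank;  /* 1 = ace ... 13 = king */
    unsigned char suit;  /* 0 = clubs, 1 = diamonds, 2 = hearts, 3 = spades */
} Card;

/* Klondike's core tableau rule: next-higher rank, opposite color. */
int can_stack(Card moving, Card target)
{
    int moving_red = (moving.suit == 1 || moving.suit == 2);
    int target_red = (target.suit == 1 || target.suit == 2);
    return moving.rank + 1 == target.rank && moving_red != target_red;
}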

It was for all the same reasons that a university student named Wes Cherry, who worked at Microsoft as an intern during the summer of 1988, decided to make a version of Klondike Solitaire for Windows that was similar to one he had spent a lot of time playing on the Macintosh. (Yes, even when it came to the games written by Microsoft’s interns, Windows could never seem to escape the shadow of the Macintosh.) There was, according to Cherry himself, “nothing great” about the code of the game he wrote; it was neither better nor worse than a thousand other computerized Solitaire games. After all, how much could you really do with Solitaire one way or the other? It either worked or it didn’t. Thankfully, Cherry’s did, and even came complete with a selection of cute little card backs, drawn by his girlfriend Leslie Kooy. Asked what was the hardest aspect of writing the game, he points today to the soon-to-be-iconic cascade of cards that accompanied victory: “I went through all kinds of hoops to get that final cascade as fast as possible.” (Here we have a fine example of why most game programmers held Windows in such contempt…) At the end of his summer internship, he put his Solitaire on a server full of games and other little experiments that Microsoft’s programmers had created while learning how Windows worked, and went back to university.

Months later, some unknown manager at Microsoft sifted through the same server and discovered Cherry’s Solitaire. It seems that Microsoft had belatedly started looking for a new game — something more interesting than Reversi — to include with the upcoming Windows 3.0, which they intended to pitch as hard to consumers as to businesspeople. They now decided that Solitaire ought to be that game. So, they put it through a testing process, getting Cherry to fix the bugs they found from his dorm room in return for a new computer. Meanwhile Susan Kare, the famed designer of MacOS’s look who was now working for Microsoft, gave Leslie Kooy’s cards a bit more polishing.

And so, when Windows 3.0 shipped in May of 1990, Solitaire was included. According to Microsoft, its purpose was to teach people how to use a GUI in a fun way, but that explanation was always something of a red herring. The fact was that computing was changing, machines were entering homes in big numbers once again, and giving people a fun game to play as part of an otherwise serious operating environment was no longer anathema. Certainly huge numbers of people would find Solitaire more than compelling enough as an end unto itself.

The ubiquity that Windows Solitaire went on to achieve — and still maintains to a large extent to this day[1] — is as difficult to overstate as it is to quantify. Microsoft themselves soon announced it to be the “most used” Windows application of all, easily besting heavyweight businesslike contenders like Word, Excel, Lotus 1-2-3, and WordPerfect. The game became a staple of office life all over the world, to be hauled out during coffee breaks and down times, to be kept always lurking minimized in the background, much to the chagrin of officious middle managers. By 1994, a Washington Post article would ask, only half facetiously, if Windows Solitaire was sowing the seeds of “the collapse of American capitalism.”

“Yup, sure,” says Frank Burns, a principal in the region’s largest computer bulletin board, the MetaNet. “You used to see offices laid out with the back of the video monitor toward the wall. Now it’s the other way around, so the boss can’t see you playing Solitaire.”

“It’s swallowed entire companies,” says Dennis J. “Gomer” Pyles, president of Able Bodied Computers in The Plains, Virginia. “The water-treatment plant in Warrenton, I installed [Windows on] their systems, and the next time I saw the client, the first thing he said to me was, ‘I’ve got 2000 points in Solitaire.'”

Airplanes full of businessmen resemble not board meetings but video arcades. Large gray men in large gray suits — lugging laptops loaded with spreadsheets — are consumed by beating their Solitaire scores, flight attendants observe.

Some companies, such as Boeing, routinely remove Solitaire from the Windows package when it arrives, or, in some cases, demand that Microsoft not even ship the product with the game inside. Even PC Magazine banned game-playing during office hours. “Our editor wanted to lessen the dormitory feel of our offices. Advertisers would come in and the entire research department was playing Solitaire. It didn’t leave the best impression,” reported Tin Albano, a staff editor.

Such articles have continued to crop up from time to time in the business pages ever since — as, for instance, the time in 2006 when New York City Mayor Michael Bloomberg summarily terminated an employee for playing Solitaire on the job, creating a wave of press coverage both positive and negative. But the crackdowns have always been to no avail; it’s as hard to imagine the modern office without Microsoft Solitaire as it is to imagine it without Microsoft Office.

Which isn’t to say that the Solitaire phenomenon is limited to office life. My retired in-laws, who have quite possibly never played another computer game in either of their lives, both devote hours every week to Solitaire in their living room. A Finnish study from 2007 found it to be the favorite game of 36 percent of women and 13 percent of men; no other game came close to those numbers. Even more so than Tetris, that other great proto-casual game of the early 1990s, Solitaire is, to certain types of personality at any rate, endlessly appealing. Why should that be?

To begin to answer that question, we might turn to the game’s pre-digital past. Whitmore Jones’s Games of Patience for One or More Players, a compendium of many Solitaire variants, was first published in 1898. Its introduction is fascinating, presaging much of the modern discussion about Microsoft Solitaire and casual gaming in general.

In days gone by, before the world lived at the railway speed as it is doing now, the game of Patience was looked upon with somewhat contemptuous toleration, as a harmless but dull amusement for idle ladies, and was ironically described as “a roundabout method of sorting the cards”; but it has gradually won for itself a higher place. For now, when the work, and still more the worries, of life have so enormously increased and multiplied, the value of a pursuit interesting enough to absorb the attention without unduly exciting the brain, and so giving the mind a rest, as it were, a breathing space wherein to recruit its faculties, is becoming more and more recognised and appreciated.

In addition to illustrating how concerns about the pace of contemporary life and nostalgia for the good old days are an eternal part of the human psyche, this passage points to the heart of Solitaire’s appeal, whether played with real cards or on a computer: the way that it can “absorb the attention without unduly exciting the brain.” It’s the perfect game to play when killing time at the end of the workday, as a palate cleanser between one task and another, or, as in the case of my in-laws, as a semi-active accompaniment to the idle practice of watching the boob tube.

Yet Solitaire isn’t a strictly rote pursuit even for those with hundreds of hours of experience playing it; if it were, it would have far less appeal. Indeed, it isn’t even particularly fair. About 20 percent of shuffles will result in a game that isn’t winnable at all, and Wes Cherry’s original implementation, at least, does nothing to protect you from this harsh mathematical reality. Still, when you get stuck there’s always that “Deal” menu option waiting for you up there in the corner, a tempting chance to reshuffle the cards and try your hand at a new combination. So, while Solitaire is the very definition of a low-engagement game, it’s also a game that has no natural end point; somehow the “Deal” option looks equally tempting whether you’ve just won or just lost. After being sucked in by its comfortable similarity to an analog game of cards almost everyone of a certain age has played, people can and do proceed to keep playing it for a lifetime.

As in the case of Tetris, there’s room to debate whether spending so many hours upon such a repetitive activity as playing Solitaire is psychologically healthy. For my own part, I avoid it and similar “time waster” games as just that — a waste of time that doesn’t leave me feeling good about myself afterward. By way of another perspective, though, there is this touching comment that was once left by a Reddit user to Wes Cherry himself:

I just want to tell you that this is the only game I play. I have autism and don’t game due to not being able to cope with the sensory processing – but Solitaire is “my” game.

I have a window of it open all day, every day and the repetitive clicking is really soothing. It helps me calm down and mentally function like a regular person. It makes a huge difference in my quality of life. I’m so glad it exists. Never thought there would be anyone I could thank for this, but maybe I can thank you. *random Internet stranger hugs*

Cherry wrote Solitaire in Microsoft’s offices on company time, and thus it was always destined to be their intellectual property. He was never paid anything at all, beyond a free computer, for creating the most popular computer game in history. He says he’s fine with this. He’s long since left the computer industry, and now owns and operates a cidery on Vashon Island in Puget Sound.

The popularity of Solitaire convinced Microsoft, if they needed convincing, that simple games like this had a place — potentially a profitable place — in Windows. Between 1990 and 1992, they released four “Microsoft Entertainment Packs,” each of which contained seven little games of varying degrees of inspiration, largely cobbled together from more of the projects coded by their programmers in their spare time. These games were the polar opposite of the ones being sold by traditional game publishers, which were growing ever more ambitious, with increasingly elaborate storylines and increasing use of video and sound recorded from the real world. The games from Microsoft were instead cast in the mold of Cherry’s Solitaire: simple games that placed few demands on either their players or the everyday office computers Microsoft envisioned running them, as indicated by the blurbs on the boxes: “No more boring coffee breaks!”; “You’ll never get out of the office!” Bruce Ryan, the manager placed in charge of the Entertainment Packs, later summarized the target demographic as “loosely supervised businesspeople.”

The centerpiece of the first Entertainment Pack was a passable version of Tetris, created under license from Spectrum Holobyte, who owned the computer rights to the game. Wes Cherry, still working out of his dorm room, provided a clone of another older puzzle game called Pipe Dream to be the second Entertainment Pack’s standard-bearer; he was even compensated this time, at least modestly. As these examples illustrate, the Entertainment Packs weren’t conceptually ambitious in the least, being largely content to provide workmanlike copies of established designs from both the analog and digital realms. Among the other games included were Solitaire variants other than Klondike, a clone of the Activision tile-matching hit Shanghai, a 3D Tic-tac-toe game, a golf game (for the ultimate clichéd business-executive experience), and even a version of John Horton Conway’s venerable study of cellular life cycles, better known as the game of Life. (One does have to wonder what bored office workers made of that.)

Established journals of record like Computer Gaming World barely noticed the Entertainment Packs, but they sold more than half a million copies in two years, equaling or besting the numbers of the biggest hardcore hits of the era, such as the Wing Commander series. Yet even that impressive number rather understates the popularity of Microsoft’s time wasters. Given that they had no copy protection, and given that they would run on any computer capable of running Windows, the Entertainment Packs were by all reports pirated at a mind-boggling rate, passed around offices like cakes baked for the Christmas potluck.

For all their success, though, nothing on any of the Entertainment Packs came close to rivaling Wes Cherry’s original Solitaire game in terms of sheer number of person-hours played. The key factor here was that the Entertainment Packs were add-on products; getting access to these games required motivation and effort from the would-be player, along with — at least in the case of the stereotypical coffee-break player from Microsoft’s own promotional literature — an office environment easygoing enough that one could carry in software and install it on one’s work computer. Solitaire, on the other hand, came already included with every fresh Windows installation, so long as an office’s system administrators weren’t savvy and heartless enough to seek it out and delete it. The archetypal low-effort game, Solitaire enjoyed a popularity enabled by the fact that it took no effort whatsoever to gain access to it. You just sort of stumbled over it while trying to figure out this new Windows thing that the office geek had just installed on your faithful old computer, or when you saw your neighbor in the next cubicle playing and asked what the heck she was doing. Five minutes later, it had its hooks in you.

It was therefore significant when Microsoft added a new game — or rather an old one — to 1992’s Windows 3.1. Minesweeper had actually debuted as part of the first Entertainment Pack, where it had become a favorite of quite a number of players. Among them was none other than Bill Gates himself, who became so addicted that he finally deleted the game from his computer — only to start getting his fix on his colleagues’ machines. (This creates all sorts of interesting fuel for the imagination. How do you handle it when your boss, who also happens to be the richest man in the world, is hogging your computer to play Minesweeper?) Perhaps due to the CEO’s patronage, Minesweeper became part of Windows’s standard equipment in 1992, replacing the unloved Reversi.

Unlike Solitaire and most of the Entertainment Pack games, Minesweeper was an original design, written by staff programmers Robert Donner and Curt Johnson in their spare time. That said, it does owe something to the old board game Battleship, to very early computer games like Hunt the Wumpus, and in particular to a 1985 computer game called Relentless Logic. You click on squares in a grid to uncover their contents, which can be one of three things: nothing at all, indicating that neither this square nor any of its adjacent squares contain mines; a number, indicating that this square is clear but said number of its adjacent squares do contain mines; or — unlucky you! — an actual mine, which kills you, ending the game. Like Solitaire, Minesweeper straddles the line — if such a line exists — between game and puzzle, and it isn’t a terribly fair take on either: while the program does protect you to the extent that the first square you click will never contain a mine, it’s possible to get into a situation through no fault of your own where you can do nothing but play the odds on your next click. But, unlike Solitaire, Minesweeper does have more of the trappings of a conventional videogame, including a timer which encourages you to play quickly to achieve the maximum score.
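
The bookkeeping behind those numbers is simple enough to help explain how Minesweeper could be a spare-time project: one loop over a square’s (up to eight) neighbors. A minimal sketch in C, with grid dimensions and names of our own choosing:

#define W 9
#define H 9

/* Count how many of the squares adjacent to (row, col) hold mines.
   mines[r][c] is 1 if a mine sits at that square, 0 otherwise. */
int adjacent_mines(const int mines[H][W], int row, int col)
{
    int count = 0;
    for (int dr = -1; dr <= 1; dr++)
        for (int dc = -1; dc <= 1; dc++) {
            int r = row + dr, c = col + dc;
            /* Skip the square itself and anything off the board. */
            if ((dr || dc) && r >= 0 && r < H && c >= 0 && c < W)
                count += mines[r][c];
        }
    return count;
}

A revealed square showing zero is what triggers the familiar cascade of automatically cleared neighbors; a nonzero count is simply displayed as the number the player must reason from.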

Doubtless because of those more overt videogame trappings, Minesweeper never became quite the office fixture that Solitaire did. Those who did get sucked in by it, however, found it even more addictive, perhaps not least because it does demand a somewhat higher level of engagement. It too became an iconic part of life with Microsoft Windows, and must rank high on any list of most-played computer games of all time, if only the data existed to compile such a thing. After all, it did enjoy one major advantage over even Solitaire for office workers with uptight bosses: it ran in a much smaller window, and thus stood out far less on a crowded screen when peering eyes glanced into one’s cubicle.

Microsoft included a third game with Windows for Workgroups 3.1, a variant intended for a networked office environment. True to that theme, Hearts was a version of the evergreen card game which could be played against computer opponents, but which was most entertaining when played together by up to four real people, all on separate computers. Its popularity was somewhat limited by the fact that it came only with Windows for Workgroups, but, again, that adjective is relative. By any normal computer-gaming standard, Hearts was hugely popular indeed for quite some years, serving for many people as their introduction to the very concept of online gaming — a concept destined to remake much of the landscape of computer gaming in general in years to come. Certainly I can remember many a spirited Hearts tournament at my workplaces during the 1990s. The human, competitive element always made Hearts far more appealing to me than the other games I’ve discussed in this article.

But whatever your favorite happened to be, the games of Windows became a vital part of a process I’ve been documenting in fits and starts over the last year or two of writing this history: an expansion of the demographics that were playing games, accomplished not by making parents and office workers suddenly fall in love with the massive, time-consuming science-fiction or fantasy epics upon which most of the traditional computer-game industry remained fixated, but rather by meeting them where they lived. Instead of five-course meals, Microsoft provided ludic snacks suited to busy lives and limited attention spans. None of the games I’ve written about here are examples of genius game design in the abstract; their genius, to whatever extent it exists, is confined to worming their way into the psyche in a way that can turn them into compulsions. Yet, simply by being a part of the software that just about everybody, with the exception of a few Macintosh stalwarts, had on their computers in the 1990s, they got hundreds of millions of people playing computer games for the first time. The mainstream Ludic Revolution, encompassing the gamification of major swaths of daily life, began in earnest on Microsoft Windows.

(Sources: the book A Casual Revolution: Reinventing Video Games and Their Players by Jesper Juul; Byte of October 1977; Computer Gaming World of September 1992; Washington Post of March 9 1994; New York Times of February 10 2006; online articles at Technologizer, The Verge, B3TA, Reddit, Game Set Watch, Tech Radar, Business Insider, and Danny Glasser’s personal blog.)

Footnotes
1 The game got a complete rewrite for Windows Vista in 2006. Presumably any traces of Wes Cherry’s original code that might have been left were excised at that time. Beginning with Windows 8 in 2012, a standalone Klondike Solitaire game was no longer included as a standard part of every Windows installation — a break with more than twenty years of tradition. Perhaps due to the ensuing public outcry, the advertising-supported Microsoft Solitaire Collection did become a component of Windows 10 upon the latter’s release in 2015.

Doing Windows, Part 9: Windows Comes Home

This series of articles so far has been a story of business-oriented personal computing. Corporate America had been running for decades on IBM before the IBM PC appeared, so it was only natural that the standard IBM introduced would be embraced as the way to get serious, businesslike things done on a personal computer. Yet long before IBM entered the picture, personal computing in general had been pioneered by hackers and hobbyists, many of whom nursed grander dreams than giving secretaries a better typewriter or giving accountants a better way to add up figures. These pioneers didn’t go away after 1981, but neither did they embrace the IBM PC, which most of them dismissed as technically unimaginative and aesthetically disastrous. Instead they spent the balance of the 1980s using computers like the Apple II, the Commodore 64, the Commodore Amiga, and the Atari ST to communicate with one another, to draw pictures, to make music, and of course to write and play lots and lots of games. Dwarfed already in terms of dollars and cents at mid-decade by the business-computing monster the IBM PC had birthed, this vibrant alternative computing ecosystem — sometimes called home computing, sometimes consumer computing — makes a far more interesting subject for the cultural historian of today than the world of IBM and Microsoft, with its boring green screens and boring corporate spokesmen running scared from the merest mention of digital creativity. It’s for this reason that, a few series like this one aside, I’ve spent the vast majority of my time on this blog talking about the cultures of creative computing rather than those of IBM and Microsoft.

Consumer computing did enjoy one brief boom in the 1980s. From roughly 1982 to 1984, a narrative took hold within the mainstream media and the offices of venture capitalists alike that full-fledged computers would replace the Atari VCS and other game consoles in American homes on a massive scale. After all, computers could play games just like the consoles, but they alone could also be used to educate the kids, write school reports and letters, balance the checkbook, and — that old favorite to which the pundits returned again and again — store the family recipes.

All too soon, though, the limitations of the cheap 8-bit computers that had fueled the boom struck home. As a consumer product, those early computers with their cryptic blinking command prompts were hopeless; at least with an Atari VCS you could just put a cartridge in the slot, turn it on, and play. There were very few practical applications for which they weren’t more trouble than they were worth. If you needed to write a school report, a standalone word-processing machine designed for that purpose alone was often a cheaper and better solution, and the family accounts and recipes were actually much easier to store on paper than in a slow, balky computer program. Certainly paper was the safer choice over a pile of fragile floppy disks.

So, what we might call the First Home Computer Revolution fizzled out, with most of the computers that had been purchased over its course making the slow march of shame from closet to attic to landfill. That minority who persisted with their new computers was made up of the same sorts of personalities who had had computers in their homes before the boom — for the one concrete thing the First Home Computer Revolution had achieved was to make home computers in general more affordable, and thus put them within the reach of more people who were inclined toward them anyway. People with sufficient patience continued to find home computers great for playing games that offered more depth than the games on the consoles, while others found them objects of wonder unto themselves, new oceans just waiting to have their technological depths plumbed by intrepid digital divers. It was mostly young people, who had free time on their hands, who were open to novelty, who were malleable enough to learn something new, and who were in love with escapist fictions of all stripes, who became the biggest home-computer users.

Their numbers grew at a modest pace every year, but the real money, it was now clear, was in business computing. Why try to sell computers piecemeal to teenagers when you could sell them in bulk to corporations? IBM, after having made one abortive stab at capturing home computing as well via the ill-fated PCjr, went where the money was, and all but a few other computer makers — most notable among these home-computer loyalists were Commodore, Atari, and Radio Shack — followed them there. The teenagers, for their part, responded to the business-computing majority’s contempt in kind, piling scorn onto the IBM PC’s ludicrously ugly CGA graphics and its speaker that could do little more than beep and fart at you, all while embracing their own more colorful platforms with typical adolescent zeal.

As the 1980s neared their end, however, the ugly old MS-DOS computer started down an unanticipated road of transformation. In 1987, as part of the misbegotten PS/2 line, IBM introduced a new graphics standard called VGA that, with up to 256 onscreen colors from a palette of more than 260,000, outdid all of the common home computers of the time. Soon after, enterprising third parties like Ad Lib and Creative Labs started making add-on sound cards for MS-DOS machines that could make real music and — just as important for game fanatics — real explosions. Many a home hacker woke up one morning to realize that the dreaded PC clone suddenly wasn’t looking all that bad. No, the technical architecture wasn’t beautiful, but it was robust and mature, and the pressure of having dozens of competitors manufacturing machines meeting the standard kept the bang-for-your-buck ratio very good. And if you — or your parents — did want to do any word processing or checkbook balancing, the software for doing so was excellent, honed by years of catering to the most demanding of corporate users. Ditto the programming tools that were nearer to a hacker’s heart; Borland’s Turbo Pascal alone was a thing of wonder, better than any other programming environment on any other personal computer.

Meanwhile 8-bit home computers like the Apple II and the Commodore 64 were getting decidedly long in the tooth, and the companies that made them were doing a peculiarly poor job of replacing them. The Apple Macintosh was so expensive as to be out of reach of most, and even the latest Apple II, known as the IIGS, was priced way too high for what it was; Apple, having joined the business-computing rat race, seemed vaguely embarrassed by the continuing existence of the Apple II, the platform that had made them. The Commodore Amiga 500 was perhaps a more promising contender to inherit the crown of the Commodore 64, but its parent company had mismanaged their brand almost beyond hope of redemption in the United States.

So, in 1988 and 1989 MS-DOS-based computing started coming home, thanks both to its own sturdy merits and a lack of compelling alternatives from the traditional makers of home computers. The process was helped along by Sierra Online, a major publisher of consumer software who had bet big and early on the MS-DOS standard conquering the home in the end, and were thus out in front of its progress now with a range of appealing games that took full advantage of the new graphics and sound cards. Other publishers, reeling before a Nintendo onslaught that was devastating the remnants of the 8-bit software market, soon followed their lead. By 1990, the vast majority of the American consumer-software industry had joined their counterparts in business software in embracing MS-DOS as their platform of first — often, of only — priority.

Bill Gates had always gone where the most money was. In years past, the money had been in business computing, and so Microsoft, after experimenting briefly with consumer software in the period just before the release of the IBM PC, had all but ignored the consumer market in favor of system software and applications targeted squarely at corporate America. Now, though, the times were changing, as home computers became powerful and cheap enough to truly go mainstream. The media was buzzing about the subject as they hadn’t for years; everywhere it was multimedia this, CD-ROM that. Services like Prodigy and America Online were putting a new, friendlier face on the computer as a tool for communicating and socializing, and game developers were buzzing about an emerging new form of mass-market entertainment, a merger of Silicon Valley and Hollywood. Gates wasn’t alone in smelling a Second Home Computer Revolution in the wind, one that would make the computer a permanent fixture of modern American home life in all the ways the first had failed to do so.

This, then, was the zeitgeist into which Microsoft Windows 3.0 made its splashy debut in May of 1990. It was perfectly positioned both to drive the Second Home Computer Revolution and to benefit from it. Small wonder that Microsoft undertook a dramatic branding overhaul this year, striving to project a cooler, more entertaining image — an image appropriate for a company which marketed not to other companies but to individual consumers. One might say that the Microsoft we still know today was born on May 22, 1990, when Bill Gates strode onto a stage — tellingly, not a stage at Comdex or some other stodgy business-oriented computing event — to introduce the world to Windows 3.0 over a backdrop of confetti cannons, thumping music, and huge projection screens.

The delirious sales of Windows 3.0 that followed were not — could not be, given their quantity — driven exclusively by sales to corporate America. The world of computing had turned topsy-turvy; consumer computing was where the real action was now. Even as they continued to own business-oriented personal computing, Microsoft suddenly dominated in the home as well, thanks to the capitulation without much of a fight of all of the potential rivals to MS-DOS and Windows. Countless copies of Windows 3.0 were sold by Microsoft directly to Joe Public to install on his existing home computer, through a toll-free hotline they set up for the purpose. (“Have your credit card ready and call!”) Even more importantly, as new computers entered American homes in mass quantities for the second time in history, they did so with Windows already on their hard drives, thanks to Microsoft’s longstanding deals with the companies that made them.

In April of 1992, Windows 3.1 appeared, sporting as one of its most important new features a set of “multimedia extensions” — this meaning tools for recording and playing back sounds, for playing audio CDs, and, most of all, for running a new generation of CD-ROM-based software sporting digitized voices and music and video clips — which were plainly aimed at the home rather than the business user. Although Windows 3.1 wasn’t as dramatic a leap forward as its predecessor had been, Microsoft nevertheless hyped it to the skies in the mass media, rolling out an $8 million television-advertising campaign among other promotional strategies that would have been unthinkable from the business-focused Microsoft of just a few years earlier. It sold even faster than had its predecessor.
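
For programmers, the multimedia extensions surfaced as the Media Control Interface (MCI), whose command-string flavor made playing an audio CD almost trivially easy. Here is a minimal sketch using the same calls as they survive in modern Win32 (link against winmm; error handling and real timing omitted):

#include <windows.h>
#include <mmsystem.h>

/* Open the CD-audio device under an alias, start it playing, listen
   for a little while, then stop and close it. mciSendString is the
   command-string entry point the multimedia extensions introduced. */
int main(void)
{
    mciSendString("open cdaudio alias cd wait", NULL, 0, NULL);
    mciSendString("play cd", NULL, 0, NULL);
    Sleep(10000);  /* the disc plays while the program does other work */
    mciSendString("stop cd", NULL, 0, NULL);
    mciSendString("close cd", NULL, 0, NULL);
    return 0;
}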

A Quick Tour of Windows for Workgroups 3.1


Released in April of 1992, Windows 3.1 was the ultimate incarnation of Windows’s third generation. (A version 3.11 was released the following year, but it confined itself to bug fixes and modest performance tweaks, introducing no significant new features.) It dropped support for 8088-based machines, and with it the old “real mode” of operation; it now ran only in protected mode or 386 enhanced mode. It made welcome strides in terms of stability, even as it still left much to be desired on that front. And this Windows was the last to be sold as an add-on to an MS-DOS which had to be purchased separately. Consumer-grade incarnations of Windows would continue to be built on top of MS-DOS for the rest of the decade, but from Windows 95 on Microsoft would do a better job of hiding their humble foundation by packaging the whole software stack together as a single product.

Stuff like this is the reason Windows always took such a drubbing in comparison to other, slicker computing platforms. In truth, Microsoft was doing the best they could to support a bewildering variety of hardware, a problem with which vendors of turnkey systems like Apple didn’t have to contend. Still, it’s never a great look to have to tell your customers, “If this crashes your computer, don’t worry about it, just try again.” Much the same advice applied to daily life with Windows, noted the scoffers.

Microsoft was rather shockingly lax about validating Windows 3 installations. The product had no copy protection of any sort, meaning one person in a neighborhood could (and often did) purchase a copy and share it with every other house on the block. Others in the industry had a sneaking suspicion that Microsoft really didn’t mind that much if Windows was widely pirated among their non-business customers — that they’d rather people run pirated copies of Windows than a competing product. It was all about achieving the ubiquity which would open the door to all sorts of new profit potential through the sale of applications. And indeed, Windows 3 was pirated like crazy, but it also became thoroughly ubiquitous. As for the end to which Windows’s ubiquity was the means: by the time applications came to represent 25 percent of Microsoft’s unit sales, they already accounted for 51 percent of their revenue. Bill Gates always had an instinct for sniffing out where the money was.

Probably the most important single enhancement in Windows 3.1 was its TrueType fonts. The rudimentary bitmap fonts which shipped with older versions looked… not all that nice on the screen or on the page, reportedly due to Bill Gates’s adamant refusal to pay a royalty for fonts to an established foundry like Adobe, as Apple had always done. This decision led to a confusion of aftermarket fonts in competing formats. If you used some of these more stylish fonts in a document, you couldn’t share that document with anyone else unless she also had installed the same fonts. So, you could either share ugly documents or keep nice-looking ones to yourself. Some choice! Thankfully, TrueType came along to fix all that, giving Macintosh users at least one less thing to laugh at when it came to Windows.

The TrueType format was the result of an unusual cooperative project led by Microsoft and Apple — yes, even as they were battling one another in court. The system of glyphs and the underlying technology to render them were intended to break the stranglehold Adobe Systems enjoyed over high-end printing; Adobe charged a royalty of up to $100 per gadget that employed their own PostScript font system, and were widely seen in consequence as a retrograde force holding back the entire desktop-publishing and GUI ecosystem. TrueType would succeed splendidly in its monopoly-busting goal, to such an extent that it remains the standard for fonts on Microsoft Windows and Apple’s OS X to this day. Bill Gates, no stranger to vindictiveness, joked that “we made [the widely disliked Adobe head] John Warnock cry.”

The other big addition to Windows 3.1 was the “multimedia extensions.” These let you do things like record sounds using an attached microphone and play your audio CDs on your computer. That they were added to what used to be a very businesslike operating environment says much about how important home users had become to Microsoft’s strategy.

In a throwback to an earlier era of computing, MS-DOS still shipped with a copy of BASIC included, and Windows 3.1 automatically found it and configured it for easy access right out of the box — this even though home computing was now well beyond the point where most users would ever try to become programmers. Bill Gates’s sentimental attachment to BASIC, the language on which he built his company before the IBM PC came along, has often been remarked upon by his colleagues, especially since he wasn’t normally a man given to much sentimentality. It was the widespread perception of Borland’s Turbo Pascal as the logical successor to BASIC — the latest great programming tool for the masses — that drove the longstanding antipathy between Gates and Borland’s flamboyant leader, Philippe Kahn. Later, it was supposedly at Gates’s insistence that Microsoft’s Visual BASIC, a Pascal-killer which bore little resemblance to BASIC as most people knew it, nevertheless bore the name.

Windows for Workgroups — a separate, pricier version of the environment aimed at businesses — was distinguished by having built-in support for networking. This wasn’t, however, networking as we think of it today. It was rather intended to connect machines together only in a local office environment. No TCP/IP stack — the networking technology that powers the Internet — was included.

But you could get on the Internet with the right additional software. Here, just for fun, I’m trying to browse the web using Internet Explorer 5 from 1999, the last version made for Windows 3. Google is one of the few sites that work at all — albeit, as you can see, not very well.

All this success — this reality of a single company now controlling almost all personal computing, in the office and in the home — brought with it plenty of blowback. The metaphor of Microsoft as the Evil Empire, and of Bill Gates as the computer industry’s very own Darth Vader, began in earnest in these years of Windows 3’s dominance. Neither Gates nor his company had ever been beloved among their peers, having always preferred making money to making friends. Now, though, the naysayers came out in force. Bob Metcalfe, a Xerox PARC alum famous in hacker lore as the inventor of the Ethernet networking protocol, talked about Microsoft’s expanding “death grip” on innovation in the computer industry. Indeed, zombie imagery was prevalent among many of Microsoft’s rivals; Mitch Kapor of Lotus called the new Windows-driven industry “the kingdom of the dead”: “The revolution is over, and free-wheeling innovation in the software industry has ground to a halt.” Any number of anonymous commenters mused about doing Gates any number of forms of bodily harm. “It’s remarkable how widespread the negative feelings toward Microsoft are,” observed Stewart Alsop. “No one wants to work with Microsoft anymore,” said noted Gates-basher Philippe Kahn of Borland. “We sure won’t. They don’t have any friends left.” Channeling such sentiments, Business Month magazine cropped Gates’s nerdy face onto a body-builder’s body and labeled him the “Silicon Bully” on its cover: “How long can Bill Gates kick sand in the face of the computer industry?”

Setting aside the jealousy that always follows great success, even setting aside for the moment the countless ways in which Microsoft really did play hardball with their competitors, something about Bill Gates rubbed many people the wrong way on a personal, visceral level. In keeping with their new, consumer-friendly image, Microsoft had hired consultants to fix up his wardrobe and work on his speaking style — not to mention to teach him the value of personal hygiene — and he could now get through a canned presentation ably enough. When it came to off-the-cuff interactions, though, he continued to strike many as insufferable. To judge him on the basis of his weedy physique and nasally speaking voice — the voice of the kid who always had to show how smart he was to the rest of the class — was perhaps unfair. But one certainly could find him guilty of a thoroughgoing lack of graciousness.

His team of PR coaches could have told him that, when asked who had contributed the most to the personal-computer revolution, he ought to politely decline to answer, or, even better, modestly reflect on the achievements of someone like his old friend Steve Jobs. But they weren’t in the room with him one day when that exact question was put to him by a smiling reporter, and so, after acknowledging that it really should be answered by “others less biased than me,” he proceeded to make the case for himself: “I will say that I started the first microcomputer-software company. I put BASIC in micros before 1980. I was influential in making the IBM PC a 16-bit machine. My DOS is in 50 million computers. I wrote software for the Mac.” I, I, I. Everything he said was true, at least if one presumed that “I” meant “Bill Gates and the others at Microsoft” in this context. Yet there was something unappetizing about this laundry list of achievements he could so easily rattle off, and about the almost pathological competitiveness it betrayed. We love to praise ambition in the abstract, but most of us find such naked ambition as that constantly displayed by Gates profoundly off-putting. The growing dislike for Microsoft in the computer industry and even in much of the technology press was fueled to a large extent by a personal aversion to their founder.

Which isn’t to say that there weren’t valid grounds for concern when it came to Microsoft’s complete dominance of personal-computer system software. Comparisons to the Standard Oil trust of the Gilded Age were in the air, so much so that by 1992 it was already becoming ironically useful for Microsoft to keep the Macintosh and OS/2 alive and allow them their paltry market share, just so the alleged monopolists could point to a couple of semi-viable competitors to Windows. It was clear that Microsoft’s ambitions didn’t end with controlling the operating system installed on the vast majority of computers in the country and, soon, the world. On the contrary, that was only a means to their real end. They were already using their status as the company that made Windows to cut deep into the application market, invading territory that had once belonged to the likes of Lotus 1-2-3 and WordPerfect. Now, those names were slowly being edged out by Microsoft Excel and Microsoft Word. Microsoft wanted to own more or less all of the software on your computer. Any niche outside developers that remained in computing’s new order, it seemed, would do so at Microsoft’s sufferance. The established makers of big-ticket business applications would have been chilled if they had been privy to the words spoken by Mike Maples, Microsoft’s head of applications, to his own people: “If someone thinks we’re not after Lotus and after WordPerfect and after Borland, they’re confused. My job is to get a fair share of the software applications market, and to me that’s 100 percent.” This was always the problem with Microsoft. They didn’t want to compete in the markets they entered; they wanted to own them.

Microsoft’s control of Windows gave them all sorts of advantages over other application developers, advantages which may not have been immediately apparent to the non-technical public. Take, for instance, the esoteric-sounding technology of Object Linking and Embedding, or OLE, which debuted with Windows 3.0 and still exists in current versions of Windows. OLE allows applications to share all sorts of dynamic data with one another. Thanks to it, a word-processor document can include charts and graphs from a spreadsheet, with the one updating itself automatically when the other gets updated. Microsoft built OLE support into the new versions of Word and Excel that accompanied Windows 3.0’s release, but refused for many months to tell outside developers how to use it. Thus Microsoft’s applications had hugely desirable capabilities that their competitors could not match for a long, long time. Similar stories played out again and again, driving the competition to distraction while Bill Gates shrugged his shoulders and played innocent. “We bend over backwards to make sure we’re not getting special advantage,” he said, while Steve Ballmer talked about a “Chinese wall” between Microsoft’s application and system programmers — a wall which people who had actually worked there insisted simply didn’t exist.
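OLE itself is a sprawling piece of system plumbing, but the basic idea it implements, a live link between a data source in one application and an embedded view of it in another, can be suggested with a toy sketch. What follows is emphatically not the real OLE programming interface, just a minimal single-process C++ illustration of the linked-update pattern described above; the names SpreadsheetRange and ChartInDocument are invented for the example:

```cpp
// Toy illustration of OLE-style dynamic linking: an embedded chart that
// redraws itself whenever the spreadsheet range it is linked to changes.
// This is an ordinary observer pattern, not the actual COM-based OLE API.
#include <functional>
#include <iostream>
#include <vector>

// A spreadsheet range that notifies its subscribers whenever its data changes.
class SpreadsheetRange {
public:
    using Observer = std::function<void(const std::vector<double>&)>;

    void subscribe(Observer observer) {
        observers_.push_back(std::move(observer));
    }

    void setValues(std::vector<double> values) {
        values_ = std::move(values);
        for (auto& notify : observers_) notify(values_);  // push the update out
    }

private:
    std::vector<double> values_;
    std::vector<Observer> observers_;
};

// A chart embedded in a word-processor document, kept current automatically.
class ChartInDocument {
public:
    explicit ChartInDocument(SpreadsheetRange& source) {
        source.subscribe([this](const std::vector<double>& data) { redraw(data); });
    }

private:
    void redraw(const std::vector<double>& data) {
        std::cout << "Chart redrawn with " << data.size() << " data points\n";
    }
};

int main() {
    SpreadsheetRange sales;
    ChartInDocument chart(sales);      // "embed" a chart linked to the range
    sales.setValues({1.0, 2.5, 4.0});  // editing the spreadsheet updates the chart
}
```

In real OLE the two ends of the link live in entirely separate applications, which is exactly why the system plumbing, and Microsoft’s privileged knowledge of it, mattered so much.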

On March 1, 1991, news broke that the Federal Trade Commission was investigating Microsoft for anti-trust violations and monopolistic practices. The investigators specifically pointed to that agreement with IBM that had been announced at the Fall 1989 Comdex, to target low-end computers with Microsoft’s Windows and high-end computers with the two companies’ joint operating system OS/2 — ironically, an “anti-competitive” initiative that Microsoft had never taken all that seriously. Once the FTC started digging, however, they found that there was plenty of other evidence to be turned up, from both the previous decade and this new one.

There was, for instance, little question that Microsoft had always leveraged their status as the maker of MS-DOS in every way they could. When Windows 3.0 came out, they helped to ensure its acceptance by telling hardware makers that the only way they would continue to be allowed to buy MS-DOS for pre-installation on their computers was to buy Windows and start pre-installing that too. Later, part of their strategy for muscling into the application market was to get Microsoft Works, an inexpensive all-in-one package whose components amounted to stripped-down versions of Microsoft’s full-fledged applications, pre-installed on computers as well. How many people were likely to go out and buy Lotus 1-2-3 or WordPerfect when they already had similar software on their computer? Of course, if they did need something more powerful, said the little card included with every computer, they could always trade up from Works for the cost of a nominal upgrade fee…

And there were other, far more nefarious stories to tell. There was, for instance, the tale of DR-DOS, a 1988 alternative to MS-DOS from Digital Research which was compatible with Microsoft’s operating system but offered a lot of welcome enhancements. Microsoft went after any clone maker who tried to offer DR-DOS pre-installed on their machines with both carrots (they would undercut Digital Research’s price to the point of basically giving MS-DOS away if necessary) and sticks (they would refuse to license them the upcoming, hotly anticipated Windows 3.0 if they persisted in their loyalty to Digital Research). Later, once the DR-DOS threat had been quelled, most of the features that had made it so desirable turned up in the next release of MS-DOS. Digital Research — a company which Bill Gates seemed to delight in tormenting — had once again been, in the industry’s latest parlance, “Microslimed.”

But Digital Research was neither the first nor the last such company. Microsoft, it was often claimed, had a habit of negotiating with smaller companies under false pretenses, learning what made their technology tick under the guise of due diligence, and then launching their own product based on what they had learned. In early 1990, Microsoft told Intuit, the maker of the hugely successful money-management package Quicken, that they were interested in acquiring them. After several weeks of negotiations, including lots of discussions about how Quicken was programmed, how it was used in the wild, and what marketing strategies had been most effective, Microsoft abruptly broke off the talks, saying they “couldn’t find a way to make it work.” Before the end of 1990, they had announced Microsoft Money, their own money-management product.

More and more of these types of stories were being passed around. A startup called Go came to Microsoft with a pen-based computing interface. (Pen-based computing was all the rage at the time; Apple too was working on something called the Newton, a sort of pen-based proto-iPad that, like all of the other initiatives in this direction, would turn into an expensive failure.) After spending weeks examining Go’s technology, Microsoft elected not to purchase it or sign them to a contract. But, just days later, they started an internal project to create a pen-based interface for Windows, headed by the very engineer who had been in charge of “evaluating” Go’s technology. A meme was emerging, by no means entirely true but perhaps not entirely untrue either, of Microsoft as a company better at doing business than doing technology, one which preferred to copy the innovations of others rather than do the hard work of coming up with their own ideas.

In a way, though, this very quality was a source of strength for Microsoft, the reason that corporate clients flocked to them now like they once had to IBM; the mantra that “no one ever got fired for buying IBM” was fast being replaced in corporate America by “no one ever got fired for buying Microsoft.” “We don’t do innovative stuff, like completely new revolutionary stuff,” Bill Gates admitted in an unguarded moment. “One of the things we are really, really good at doing is seeing what stuff is out there and taking the right mix of good features from different products.” For businesses and, now, tens of millions of individual consumers, Microsoft really was the new IBM: they were safe. You bought a Windows machine not because it was the slickest or sexiest box on the block but because you knew it was going to be well-supported, knew there would be software on the shelves for it for a long time to come, knew that when you did decide to upgrade the transition would be a relatively painless one. You didn’t get that kind of security from any other platform. If Microsoft’s business practices were sometimes a little questionable, even if Windows crashed sometimes or kept on running inexplicably slower the longer you had it on your computer, well, you could live with that. Alan Boyd, an executive at Microsoft for a number of years:

Does Bill have a vision? No. Has he done it the right way? Yes. He’s done it by being conservative. I mean, Bill used to say to me that his job is to say no. That’s his job.

Which is why I can understand [that] he’s real sensitive about that. Is Bill innovative? Yes. Does he appear innovative? No. Bill personally is a lot more innovative than Microsoft ever could be, simply because his way of doing business is to do it very steadfastly and very conservatively. So that’s where there’s an internal clash in Bill: between his ability to innovate and his need to innovate. The need to innovate isn’t there because Microsoft is doing well. And innovation… you get a lot of arrows in your back. He lets things get out in the market and be tried first before he moves into them. And that’s valid. It’s like IBM.

Of course, the ethical problem with this approach to doing business was that it left no space for the little guys who actually had done the hard work of pioneering the technologies which Microsoft then proceeded to co-opt. “Seeing what stuff is out there and taking it” — to use Gates’s own words against him — is a very good way indeed to make yourself hated.

During the 1990s, Windows was widely seen by the tech intelligentsia as the archetypal Microsoft product, an unimaginative, clunky amalgam of other people’s ideas. In his seminal (and frequently hilarious) 1999 essay “In the Beginning… Was the Command Line,” Neal Stephenson described operating systems in terms of vehicles. Windows 3 was a moped in this telling, “a Rube Goldberg contraption that, when bolted onto a three-speed bicycle [MS-DOS], enabled it to keep up, just barely, with Apple-cars. The users had to wear goggles and were always picking bugs out of their teeth while Apple owners sped along in hermetically sealed comfort, sneering out the windows. But the Micro-mopeds were cheap, and easy to fix compared with the Apple-cars, and their market share waxed.”

And yet if we wished to identify one Microsoft product that truly was visionary, we could do worse than boring old ramshackle Windows. Bill Gates first put his people to work on it, we should remember, before the original IBM PC and the first version of MS-DOS had even been released — so strongly did he believe even then, just as much as that more heralded visionary Steve Jobs, that the GUI was the future of computing. By the time Windows finally reached the market four years later, it had had occasion to borrow much from the Apple Macintosh, the platform with which it was doomed always to be unfavorably compared. But Windows also brought to its users vital features of modern computing that the Mac lacked, such as multitasking (cooperative though it was) from the first version, and virtual memory in Windows 3.0’s 386 enhanced mode. No, it didn’t take a genius to realize that these must eventually make their way to personal computers; Microsoft had fine examples of them to look at from the more mature ecosystems of institutional computing, and thus could be said, once again, to have implemented and popularized them without having pioneered them.
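For the technically curious, that early Windows multitasking can be glimpsed in the message loop that sat at the heart of every Windows application, a structure which survives largely unchanged today. The skeleton below is a minimal sketch written against the modern Win32 headers rather than a literal Win16 program, and the window and class names are invented for illustration:

```cpp
// A minimal Windows program, sketched to show the classic message pump.
// Under Windows 1.x through 3.x, multitasking was cooperative: an
// application yielded the CPU to other programs only when it called
// GetMessage() and no input was waiting for it.
#include <windows.h>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam) {
    if (msg == WM_DESTROY) {
        PostQuitMessage(0);  // makes GetMessage() below return 0, ending the loop
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE, LPSTR, int nShow) {
    WNDCLASSA wc = {};               // hypothetical demo window class
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = hInst;
    wc.lpszClassName = "DemoWindow";
    RegisterClassA(&wc);

    HWND hwnd = CreateWindowA("DemoWindow", "Hello, Windows",
                              WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT,
                              400, 300, NULL, NULL, hInst, NULL);
    ShowWindow(hwnd, nShow);

    // The heart of cooperative multitasking: in Win16, GetMessage() was
    // where an application voluntarily let every other program run.
    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0)) {
        TranslateMessage(&msg);   // turn raw keystrokes into character messages
        DispatchMessage(&msg);    // route the message to WndProc above
    }
    return (int)msg.wParam;
}
```

The crucial point is that a Win16 application had to keep returning to that loop of its own free will: a single misbehaving program that never called GetMessage() again could freeze every other program on the machine, which goes some way toward explaining early Windows’s reputation for fragility.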

Still, we should save some credit for the popularizers. Apple, building upon the work done at Xerox, perfected the concept of the GUI to such an extent in LisaOS and MacOS that one could say that all of the improvements made to it since have been mere details. But, entrenched in a business model that demanded high profit margins and proprietary hardware, they were doomed to produce luxury products rather than ubiquitous ones. This was the logical flaw at the heart of the much-discussed “1984” television advertisement and much of the rhetoric that continued to surround the Macintosh in the years that followed. If you want to change the world through better computing, you have to give the people a computer they can afford. Thanks to Apple’s unwillingness or inability to do that, it was Microsoft that brought the GUI to the world in their stead — in however imperfect a form.

The rewards for doing so were almost beyond belief. Microsoft’s revenues climbed by roughly 50 percent every year in the years immediately following the introduction of Windows 3.0, as the company stormed past Boeing to become the biggest corporation in the Pacific Northwest. Someone who had invested $1000 in Microsoft in 1986 would have seen her investment grow to $30,000 by 1991: a thirtyfold return in five years, or roughly a doubling every year. By the same point, over 2000 employees or former employees had become millionaires. In 1992, Bill Gates was anointed by Forbes magazine as the richest person in the world, a distinction he would enjoy for the next 25 years by most reckonings. The man who had been so excited when his company grew to be bigger than Lotus in 1987 now owned a company that was larger than the next five biggest software publishers combined. And as for Lotus alone? Well, Microsoft was now over four times their size. And the Decade of Microsoft had only just begun.

In 2000, the company’s high-water point, an astonishing 97 percent of all consumer computing devices would have some sort of Microsoft software installed on them. In the vast majority of cases, of course, said software would include Microsoft Windows. There would be all sorts of grounds for concern about this kind of dominance even had it not been enjoyed by a company with such a reputation for playing rough as Microsoft. (Or would a company that didn’t play rough ever have gotten to be so dominant in the first place?) In future articles, we’ll be forced to spend a lot more time dealing with Microsoft’s various scandals and controversies, along with reactions to them that took the form of legal challenges from the American government and the European Union and the rise of an alternative ideology of software called the open-source movement.

But, as we come to the end of this particular series of articles on the early days of Windows, we really should give Bill Gates some credit as well. Had he not kept doggedly on with Windows in the face of a business-computing culture that for years wanted nothing to do with it, his company could very easily have gone the way of VisiCorp, Lotus, WordPerfect, Borland, and, one might even say, IBM and Apple for a while: a star of one era of computing that was unable to adapt to the changing times. Instead, by never wavering in his belief that the GUI was computing’s future, Gates conquered the world. That he did so while still relying on the rickety foundation of MS-DOS is, yes, kind of appalling for anyone who values clean, beautiful computer engineering. Yet it also says much about his programmers’ creativity and skill, belying any notion of Microsoft as a place bereft of such qualities. Whatever else you can say about the sometimes shaky edifices that were Windows 3 and its next few generations of successors, the fact that they worked at all was something of a miracle.

Most of all, we should remember the huge role that Windows played in bringing computing home once again — and, this time, permanently. The third generation of Microsoft’s GUI arrived at the perfect time, just when the technology and the culture were ready for it. Once a laughingstock, Windows became for quite some time the only face of computing many people knew — in the office and in the home. Who could have dreamed it? Perhaps only one person: a not particularly dreamy man named Bill Gates.

(Sources: the books Hard Drive: Bill Gates and the Making of the Microsoft Empire by James Wallace and Jim Erickson, Gates: How Microsoft’s Mogul Reinvented an Industry and Made Himself the Richest Man in America by Stephen Manes and Paul Andrews, and In the Beginning… Was the Command Line by Neal Stephenson; Computer Power User of October 2004; InfoWorld of May 20, 1991, and January 31, 1994. Finally, I owe a lot to Nathan Lineback for the histories, insights, comparisons, and images found at his wonderful online “GUI Gallery.”)