
How Jordan Mechner Made a Different Sort of Interactive Movie (or, The Virtues of Restraint)

One can learn much about the state of computer gaming in any given period by looking to the metaphors its practitioners are embracing. In the early 1980s, when interfaces were entirely textual and graphics crude or nonexistent, text adventures like those of Infocom were heralded as the vanguard of a new interactive literature destined to augment or entirely supersede non-interactive books. That idea peaked with the mid-decade bookware boom, when just about every entertainment-software publisher (and a few traditional book publishers) was rushing to sign established authors and books to interactive projects. It then proceeded to collapse just as quickly under the weight of its own self-importance when the games proved less compelling and the public less interested than anticipated.

Prompted by new machines like the Commodore Amiga with their spectacular graphics and sound, the industry reacted to that failure by turning to the movies for media mentorship. This relationship would prove more long-lasting. By the end of the 1980s, companies like Cinemaware and Sierra were looking forward confidently to a blending of Hollywood and Silicon Valley that they believed might just replace the conventional non-interactive movie, not to mention computer games as people had known them to that point. Soon most of the major publishers would be conducting casting calls and hiring sound stages, trying literally to make games out of films. It was an approach fraught with problems — problems that were only slowly and grudgingly acknowledged by these would-be unifiers of Southern and Northern Californian entertainment. Before it ran its course, it spawned lots of really terrible games (and, it must be admitted, against all the odds the occasional good one as well).

Given the game industry’s growing fixation on the movies as the clock wound down on the 1980s, Jordan Mechner would seem the perfect man for the age. Struggling with the blessing or curse of an equally abiding love for both mediums, he had already seen his professional life marked by constant vacillation between movies and games. Inevitably, his love of film influenced him even when he was making games. But, perhaps because that love was so deep and genuine, he accomplished the blending in a more even-handed, organic way than would most of the multi-CD, multi-gigabyte interactive movies that would soon be cluttering store shelves. Mechner’s most famous game, by contrast, filled just two Apple II disk sides — less than 300 K in total. And yet the cinematic techniques it employs have far more in common with those found in the games of today than do those of its more literal-minded rivals.


 

As a boy growing up in the wealthy hamlet of Chappaqua, New York, Jordan Mechner dreamed of becoming “a writer, animator, or filmmaker.” But those ambitions got modified if not discarded when he discovered computers at his high school. Soon after, he got his hands on his own Apple II for the first time. Honing his chops as a programmer, he started contributing occasional columns on BASIC to Creative Computing magazine at the age of just 14. Yet fun as it was to be the magazine’s youngest contributor, his real reason for learning programming was always to make games. “Games were the only kind of software I knew,” he says. “They were the only kind that I enjoyed. At that time, I didn’t really see any use for a word processor or a spreadsheet.” He fell into the throes of what he describes as an “obsession” to get a game of his own published.

Initially, he did what lots of other game programmers were doing at the time: cloning the big standup-arcade hits for fun and (hopefully) profit. He made a letter-perfect copy of Atari’s Asteroids, changed the titular space rocks to bright bouncing balls in the interest of plausible deniability, and sent the resulting Deathbounce off to Brøderbund for consideration; what with Brøderbund having been largely built on the back of Apple Galaxian, an arcade clone which made no effort whatsoever to conceal its source material, the publisher seemed a very logical choice. But Doug Carlston was now trying to distance his company from such fare for reasons of reputation as well as his fear of Atari’s increasingly aggressive legal threats. Nice guy that he was, he called Mechner personally to explain why Deathbounce wasn’t for Brøderbund. He promised to send Mechner a free copy of Brøderbund’s latest hit, Choplifter, suggesting he think about whether he might be able to apply the programming chops he had demonstrated in Deathbounce to a more original game, as Choplifter‘s creator Dan Gorlin had done. Mechner remembers the conversation as well-nigh life-changing. He had been so immersed in the programming side of making games that the idea of doing an original design had never really occurred to him before: “I didn’t have to copy someone else’s arcade game. I was allowed to design my own!”

Carlston’s phone call came in May of 1982, when Mechner was finishing up his first year at Yale University; undecided about his major as he was about so much else in his life at the time, he would eventually wind up with a Bachelor’s in psychology. We’re granted an unusually candid and personal glimpse into his life between 1982 and 1993 thanks to his private journals, which he published (doubtless in a somewhat expurgated form) in 2012. The early years paint a picture of a bright, sensitive young man born into a certain privilege that carries with it the luxury of putting off adulthood for quite some time. He romanticizes chance encounters (“I saw a heartbreakingly beautiful young blonde out of the corner of my eye. She was wearing a blue down vest. As she passed, our eyes met. She smiled at me. As I went out I held the door for her; her fingers grazed mine. Then she was gone.”); frets frequently about cutting classes and generally not being the man he ought to be (“I think Ben is the only person who truly comprehends the depths of how little classwork I do.”); alternates between grand plans accompanied by frenzies of activity and indecision accompanied by long days of utter sloth (“Here’s what I do do: listen to music. Browse in record stores. Read newspapers, magazines, play computer games, stare out the windows. See a lot of movies.”); muses with all the self-obliviousness of youth on whether he would prefer “writing a bestselling novel or directing a blockbusting film,” as if attaining fame and fortune were as simple as deciding on one or the other.

At Yale, film, that other constant of his creative life, came to the fore. He joined every film society he stumbled upon, signed up for every film-studies course in the catalog, and set about “trying to see in four years every film ever made”; Akira Kurosawa’s classic adventure epic Seven Samurai (a major inspiration behind Star Wars among other things) emerged as his favorite of them all. He also discovered an unexpected affinity for silent cinema, which naturally led him to compare that earliest era of film with the current state of computer games, a medium that seemed in a similar state of promising creative infancy. All of this, combined with the example of Choplifter and the karate lessons he was sporadically attending, led to Karateka, the belated fruition of his obsession with getting a game published.

To a surprising degree given his youth and naivete, Mechner consciously designed Karateka as the proverbial Next Big Thing in action games after the first wave of simple quarter munchers, whose market he watched collapse over the two-plus years he spent intermittently working on it. Plenty of fighting games had appeared on the Apple II and other platforms before, some of them very playable; Mechner wasn’t sure he could really improve on their templates when it came to pure game play. What he could do, however, was give his game some of the feel and emotional resonance of cinema. Reasoning that computer games were technically on par with the first decade or two of film in terms of the storytelling tools at his disposal, he mimicked the great silent-film directors in building his story out of the broadest archetypal elements: an unnamed hero must assault a mountain fortress to rescue an abducted princess, fighting through wave after wave of enemies, culminating in a showdown with the villain himself. He energetically cross-cut the interactive fighting sequences with non-interactive scenes of the villain issuing orders to his minions while the princess looks around nervously in her cell — a suspense-building technique from cinema dating back to The Birth of a Nation. He mimicked the horizontal wipes Kurosawa used for transitions in Seven Samurai; mimicked the scrolling textual prologue from Star Wars. When the player lost or won, he printed “THE END” on the screen in lieu of “GAME OVER.” And, indeed, he made it possible, although certainly not easy, to win Karateka and carry the princess off into the sunset. The player was, in other words, playing for bigger stakes than a new high score.


The most technically innovative aspect of Karateka — suggested, like much in the game, by Mechner’s very supportive father — involved the actual people on the screen. To make his fighters move as realistically as possible, Mechner made use for the first time in a computer game of an old cartoon-animation technique known as rotoscoping. After shooting some film footage of his karate instructor in action, doing various kicks and punches, Mechner used an ancient Moviola editing machine that had somehow wound up in the basement of the family home to isolate and make prints out of every third frame. He imported the figure at the center of each print into his Apple II by tracing it on a contraption called the VersaWriter. Flipped through in sequence, the resulting sprites appeared to “move” in an unusually fluid and realistic fashion. “When I saw that sketchy little figure walk across the screen,” he wrote in his journal, “looking just like Dennis [his karate instructor], all I could say was ‘ALL RIGHT!’ It was a glorious moment.”


Doug Carlston, who clearly saw something special in this earnest kid, was gently encouraging and almost infinitely patient with him. When it looked like Mechner had come up with something potentially great at last, Carlston signed him to a contract and flew him out to California in the summer of 1984 to finish it up with the help of Brøderbund’s in-house staff. Released just a little too late to fully capitalize on the 1984 Christmas rush, Karateka started slowly but gradually turned into a hit, especially once the Commodore 64 port dropped in June of 1985. Once ported to Nintendo for the domestic Japanese market, it proceeded to sell many hundreds of thousands of units, making Jordan Mechner a very flush young man indeed.

So, Mechner, about to somehow manage to graduate despite all the assignments missed and classes cut in favor of working on Karateka, seemed poised for a fruitful career making games. Yet he continued to vacillate between his twin obsessions. Even as his game, the most significant accomplishment of his young life and one of which anyone could justly be proud, entered the homestretch, he had written that “I definitely want my next project to be film-related. Videogames have taken up enough of my time for now.” In the wake of his game’s release, the steady stream of royalties therefrom only made it easier to dabble in film.

Mechner spent much of the year after graduating from university back at home in Chappaqua working on his first screenplay. In between writing dialog and wracking himself with doubt over whether he really wanted to do another game at all, he occasionally turned his attention to the idea of a successor to Karateka. Already during that first summer after Yale, he and Gene Portwood, a Brøderbund executive, dreamed up a scenario for just such a beast: an Arabian Nights-inspired story involving an evil sultan, a kidnapped princess, and a young man — the player, naturally — who must rescue her. Karateka in Middle Eastern clothing though it may have been in terms of plot, that was hardly considered a drawback by Brøderbund, given the success of Mechner’s first game.


Seven frames of animation ready to be photocopied and digitized.

Determined to improve upon the rotoscoping of Karateka, Mechner came up with a plan to film a moving figure and use a digitizer to capture the frames into the computer, rather than tracing the figure using the VersaWriter. He spent $2500 on a high-end VCR and video camera that fall, knowing he would return them before his month’s grace period was out (“I feel so dishonest,” he wrote in his journal). The technique he had in the works may have been an improvement over what he had done for Karateka, but it was still very primitive and hugely labor-intensive. After shooting his video, he would play it back on the VCR, pausing it on each frame he wanted to capture. Then he would take a picture of the screen using an ordinary still camera and get the film developed. The next step was to trace the outline of the figure in the photograph using Magic Marker and fill him in using White-Out. Then he would Xerox the doctored photograph to get a black-and-white version with a very clear silhouette of the figure. Finally, he would digitize the photocopy to import it into his Apple II, and erase everything around the figure by hand on the computer to create a single frame of sprite animation. He would then get to go through this process a few hundred more times to get the prince’s full repertoire of movements down.
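
For readers who want to picture the digital half of that pipeline, here’s a small sketch in Python. It is purely my own illustration, using the Pillow imaging library and made-up filenames rather than anything from Mechner’s toolchain, and it does with a simple brightness threshold what he did by hand with marker, correction fluid, and a photocopier.

```python
# A modern analog of the silhouette-isolation step, purely my own illustration:
# this is not Mechner's toolchain, and the filenames are hypothetical. It uses
# the Pillow imaging library to do with a threshold what he did by hand with
# Magic Marker, White-Out, and a photocopier.
from PIL import Image

def isolate_silhouette(in_path: str, out_path: str, threshold: int = 128) -> None:
    """Reduce a photographed video frame to a stark black-and-white silhouette."""
    frame = Image.open(in_path).convert("L")                         # grayscale
    mask = frame.point(lambda p: 0 if p < threshold else 255, "1")   # two-tone
    mask.save(out_path)

# One call per captured frame; Mechner repeated the manual equivalent a few
# hundred times to cover the prince's full repertoire of movements.
isolate_silhouette("frame_012.jpg", "frame_012_silhouette.png")
```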


On October 20, 1985, Jordan Mechner did his first concrete work on the game that would become Prince of Persia, using his ill-gotten video camera to film his 16-year-old brother David running and jumping through a local parking lot. When he finally got around to buying a primitive black-and-white image digitizer for his trusty Apple II more than six months later, he quickly determined that the footage he’d shot was useless due to poor color separation. Nevertheless, he saw potential magic.

I still think this can work. The key is not to clean up the frames too much. The figure will be tiny and messy and look like crap… but I have faith that, when the frames are run in sequence at 15 fps, it’ll create an illusion of life that’s more amazing than anything that’s ever been seen on an Apple II screen. The little guy will be wiggling and jiggling like a Ralph Bakshi rotoscope job… but he’ll be alive. He’ll be this little shimmering beacon of life in the static Apple-graphics Persian world I’ll build for him to run around in.

For months after that burst of enthusiasm, however, he did little more with the game.

At last in September of 1986, having sent his screenplay off to Hollywood and thus with nothing more to do on that front but wait, Mechner moved out to San Rafael, California, close to Brøderbund’s offices, determined to start in earnest on Prince of Persia. He spent much time over the next few months refining his animation technique, until by Christmas everyone who saw the little running and jumping figure was “bowled over” by him. Yet after that, progress again slowed to a crawl, as he struggled to motivate himself to turn his animation demos into an actual game.

And then, on May 4, 1987, came the phone call that would stop the little running prince in his tracks for the better part of a year. A real Hollywood agent called to tell him she “loved” his script for Birthstone, a Spielbergian supernatural comedy/thriller along the lines of Gremlins or The Goonies. Within days of her call, the script was optioned by Larry Turman, a major producer with films like The Graduate on his resume. For months Mechner fielded phone calls from a diverse cast of characters with a diverse cast of suggestions, did endless rewrites, and tried to play the Hollywood game, schmoozing and negotiating and trying not to appear to be the awkward, unworldly kid he still largely was. Only when Birthstone seemed permanently stuck in development hell — “Hollywood’s the only town where you can die of encouragement,” he says wryly, quoting Pauline Kael —  did he give up and turn his attention back to games. Mechner notes today that just getting as far as he did with his very first script was a huge achievement and a great start in itself. After all, he was, if not quite hobnobbing with the Hollywood elite, at least getting rejection letters from such people as Michael Apted, Michael Crichton, and Henry Winkler; such people were reading his script. But he had been spoiled by the success of Karateka. If he wrote another screenplay, there was no guarantee it would get even as far as his first had. If he finished Prince of Persia, on the other hand, he knew Brøderbund would publish it.

And so, in 1988, it was back to games, back to Prince of Persia. Inspired by “puzzly” 8-bit action games like Doug Smith’s Lode Runner and Ed Hobbs’s The Castles of Dr. Creep, his second game was shaping up to be more than just a game of combat. Instead his prince would have to make his way through area after area full of tricks, traps, and perilous drops. “What I wanted to do with Prince of Persia,” Mechner says, “was a game which would have that kind of logical, head-scratching, fast-action, Lode Runner-esque puzzles in a level-based game but also have a story and a character that was trying to accomplish a recognizable human goal, like save a princess. I was trying to merge those two things.” Ideally, the game would play like the iconic first ten minutes of Raiders of the Lost Ark, in which Indiana Jones runs and leaps and dodges and sometimes outwits rather than merely outruns a series of traps. For a long while, Mechner planned to make the hero entirely defenseless, as a sort of commentary on the needless ultra-violence found in so many other games. In the end, he didn’t go that far — the allure of sword-fighting, not to mention commercial considerations, proved too strong — but Prince of Persia was nevertheless shaping up to be a far more ambitious, multi-faceted work than Karateka, boasting much more than just improved running and jumping animations.

With just 128 K of memory to work with on the Apple II, Mechner was forced to make Prince of Persia a modular design, relying on a handful of elements which are repeatedly reused and recombined. Take, for instance, the case of the loose floorboards. The first time they appear, they’re a simple trap: you have to jump over a section of the floor to avoid falling into a pit. Later, they appear on the ceiling, as part of the floor above your own; caught in an apparent cul de sac, you have to jump up and bash the ceiling to open an escape route. Still later, they can be used strategically: to kill guards below you by dropping the floorboards on their heads, or to hold down a pressure plate below you that opens a door on the level on which you’re currently standing. It’s a fine example of a constraint in game design turning into a strength. “There’s a certain elegance to taking an element the player is already familiar with,” says Mechner, “and challenging him to think about it in a different way.”
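
To make that “one element, many uses” idea concrete, here’s a minimal sketch in Python. The names and the little grid are hypothetical, invented only to show the shape of the design: one tile type whose meaning comes entirely from its placement, not Mechner’s actual code.

```python
# A minimal sketch of the "one element, many uses" idea. Everything here is
# hypothetical and invented for illustration; it is not Mechner's actual code,
# just the shape of the design: a single loose-floor tile whose effect depends
# entirely on what happens to sit beneath it.

class Guard:
    def crush(self):
        print("a guard below is crushed by the falling board")

class PressurePlate:
    def press(self):
        print("the board lands on a pressure plate, holding a door open")

class Empty:
    pass

class LooseFloor:
    """Stepping on it from above, or bumping it from below when it forms part
    of the ceiling, has the same result: the board comes loose and falls."""
    def trigger(self, level, column, row):
        below = level.tile_at(column, row + 1)
        if isinstance(below, Guard):
            below.crush()
        elif isinstance(below, PressurePlate):
            below.press()
        else:
            print("the board shatters, leaving a gap the prince must jump")

class Level:
    def __init__(self, grid):
        self.grid = grid                  # grid[row][column]
    def tile_at(self, column, row):
        return self.grid[row][column]

# The same tile type serves as trap, weapon, or tool, depending only on placement.
level = Level([[LooseFloor(), LooseFloor(), LooseFloor()],
               [Empty(),      Guard(),      PressurePlate()]])
for column in range(3):
    level.grid[0][column].trigger(level, column, 0)
```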


On July 14, 1989, Mechner shot the final footage for Prince of Persia: the denouement, showing the prince — now played by the game’s project manager at Brøderbund, Brian Ehler — embracing the rescued princess — played by Tina LaDeau, the 18-year-old daughter of another Brøderbund employee, in her prom dress. (“Man, she is a fox,” Mechner wrote in his journal. “Brian couldn’t stop blushing when I had her embrace him.”)

The game shipped for the Apple II on October 6, 1989. And then, despite a very positive review in Computer Gaming World — Charles Ardai called it nothing less than “the Star Wars of its field,” music to the ears of a movie buff like Mechner — it proceeded to sell barely at all: perhaps 500 units a month. It was, everyone at Brøderbund agreed, at least a year too late to hope to sell significant numbers of a game like this on the Apple II, whose only remaining commercial strength was educational software, thanks to the sheer number of the things still installed in American schools. Mechner’s procrastination and vacillation had spoiled this version’s commercial prospects entirely.

Thankfully, the Apple II version wasn’t to be the only one. Brøderbund already had programmers and artists working on ports to MS-DOS and the Amiga, the last two truly viable computer-gaming platforms in North America. Mechner as well turned his attention to the versions for these more advanced machines as soon as the Apple II version was finished. And once again his father pitched in, composing a lovely score for the luxuriously sophisticated sound hardware now at the game’s disposal. “This is going to be the definitive version of Prince of Persia,” Mechner enthused over the MS-DOS version. “With VGA [graphics] and sound card, on a fast machine, it’ll blow the Apple away. It looks like a Disney film. It’s the most beautiful game I’ve ever seen.” Reworked though they were in almost all particulars, at the heart of the new versions lay the same digitized film footage that had made the 8-bit prince run and leap so fluidly.


And yet, after it shipped on April 19, 1990, the MS-DOS version also disappointed. Mechner chafed over his publisher’s indifference toward promoting the game; they seemed on the verge of writing it off, noting how the vastly superior MS-DOS version was being regarded as just another port of an old 8-bit game, and thus would likely never be given a fair shake by press or public. True as ever to the bifurcated pattern of his life, he decided to turn back to film. Having tried and failed to get into New York University film school, he resorted to working as a production assistant in movies by way of supporting himself and trying to drum up contacts in the film-making community of New York. Thus the first anniversary of Prince of Persia’s original release on the Apple II found him schlepping crates around New York City. His career as a game developer seemed to be behind him, and truth be told his prospects as a filmmaker didn’t look a whole lot brighter.

The situation began to reverse itself only after the Amiga version was finished — programmed, as it happened, by Dan Gorlin, the very fellow whose Choplifter had first inspired Mechner to look at his own games differently. In Europe, the Amiga’s stronghold, Prince of Persia was free of the baggage which it carried in North America — few in Europe had much idea of what an Apple II even was — and doubtless benefited from a much deeper and richer tradition on European computers of action-adventures and platform puzzlers. It received ebullient reviews and turned into a big hit on European Amigas, and its reputation gradually leaked back across the pond to turn it at last into a hit in its homeland as well. Thus did Prince of Persia become a slow grower of an international sensation — a very unusual phenomenon in the hits-driven world of videogames, where shelf lives are usually short and retailer patience shorter. Soon came the console releases, along with releases for various other European and Japanese domestic computers, sending total sales soaring to over 2 million units.

By the beginning of 1992, Mechner was far removed from his plight of just eighteen months before. He was drowning in royalties, consulting intermittently with Brøderbund on a Prince of Persia 2 — it was understood that his days in the programming trenches were behind him — and living a globetrotting lifestyle, jaunting from Paris to San Rafael to Madrid to New York as whim and business took him. He was also planning his first film, a short documentary to be shot in Cuba, and already beginning to mull over what would turn into his most ambitious and fascinating game production of all, known at this point only as “the train game.”

Prince of Persia, which despite the merits of that eventual “train game” is and will likely always remain Mechner’s signature work, strikes me most of all as a triumph of presentation. The actual game play is punishingly difficult. Each of its twelve levels is essentially an elaborate puzzle that can only be worked out by dying many times, when you aren’t getting trapped in one of its far too many dead ends. Even once you think you have it all worked out, you still need to execute every step with perfect precision, no mean feat in itself. Messing up at any point in the process means starting that level over again from the beginning. And, because you only have one hour of real time to rescue the princess, every failure is extremely costly; a perfect playthrough, accomplished with absolute surety and no hesitations, takes about half an hour, leaving precious little margin for error. At least there is a “save” feature that will let you bookmark each level starting with the third, so you don’t have to replay the whole game every time you screw up — which, believe me, you will, hundreds if not thousands of times before you finally rescue the princess. Beating Prince of Persia fair and square is a project for a summer vacation in those long-gone adolescent days when responsibilities were few and distractions fewer. As a busy adult, I find it too repetitive and too reliant on rote patterns, as well as — let’s be honest here — just too demanding on my aging reflexes. In short, the effort-to-reward ratio strikes me as way out of whack. Of course, I’m sure that, given Prince of Persia’s status as a beloved icon of gaming, many of you have a different opinion.

So, let’s turn back to something on which we can hopefully all agree: the brilliance of that aforementioned presentation, which brings to aesthetic maturity many of the techniques Mechner had first begun to experiment with in Karateka. Rather than using filmed footage as a tool for the achievement of fluid, lifelike motion, as Mechner did, games during the years immediately following Prince of Persia would be plastered with jarring chunks of poorly acted, poorly staged “full-motion video.” Such spectacles look far more dated today than the restrained minimalism of Prince of Persia. The industry as a whole would take years to wind up back at the place where Jordan Mechner had started: appropriating some of the language of cinema in the service of telling a story and building drama, without trying to turn games into literal interactive movies. Mechner:

Just as theater is its own thing — with its own conventions, things that it does well, things it does badly — so is film, and so [are] computer games. And there is a way to borrow from one medium to another, and in fact that’s what an all-new medium does when it’s first starting out. Film, when it was new, looked like someone set up a camera front and center and filmed a staged play. Then the things that are specific to film — like the moving camera, close-ups, reaction shots, dissolves — all these kinds of things became part of the language of cinema. It’s the same with computer games. To take a long film sequence and to play that on your TV screen is the bad way to make a game cinematic. The computer game is not a VCR. But if you can borrow from the knowledge that we all carry inside our heads of how cuts work, how reaction shots work, what a low angle means dramatically, what it means when the camera suddenly pulls back… We’ve got this whole collective unconscious of the vocabulary of film, and that’s a tremendously valuable tool to bring into computer gaming.

In a medium that has always struggled to tamp down its instinct toward aesthetic maximalism, Mechner’s games still stand out for their concern with balance and proportion. Mechner again:

Visuals are [a] component where it’s often tempting to compromise. You think, “Well, we could put a menu bar across here, we could put a number in the upper right-hand corner of the screen representing how many potions you’ve drunk,” or something. The easy solution is always to do something that as a side effect is going to make the game look ugly. So I took as one of the ground rules going in that the overall screen layout had to be pleasing, had to be strong and simple. So that somebody who was not playing the game but who walked into the room and saw someone else playing it would be struck by a pleasing composition and could stop to watch for a minute, thinking, “This looks good, this looks as if I’m watching a movie.” It really forces you as a designer to struggle to find the best solution for things like inventory. You can’t take the first solution that suggests itself, you have to try to solve it within the constraints you set yourself.

Mechner’s take on visual aesthetics can be seen as a subversion of Ken Williams’s old “ten-foot rule,” which, as you might remember, stated that every Sierra game ought to be visually arresting enough to make someone say “Wow!” when glimpsing it from ten feet away across a crowded shop. Mechner believed that game visuals ought to be more than just striking; they ought to be aesthetically good by the more refined standards of film and the other, even older visual arts. All that time Mechner spent obsessing over films and film-making, which could all too easily be labeled a complete waste of time, actually allowed him to bring something unique to the table, something that made him different from virtually all of his many contemporaries in the interactive-movie business.

There are various ways to situate Jordan Mechner’s work in general and Prince of Persia in particular within the context of gaming history. It can be read as the last great swan song of the Apple II and, indeed, of the entire era of 8-bit computer gaming, at least in North America. It can be read as yet one more example of Brøderbund’s downright bizarre commercial Midas touch, which continued to yield a staggering number of hits from a decidedly modest roster of new releases (Brøderbund also released SimCity in 1989, thus spawning two of the most iconic franchises in gaming history within bare months of one another). It can be read as the precursor to countless cinematic action-adventures and platformers to come, many of whose designers would acknowledge it as a direct influence. In its elegant simplicity, it can even be read as a fascinating outlier from the high-concept complexity that would come to dominate American computer gaming in the very early 1990s. But the reading that makes me happiest is to simply say that Prince of Persia showed how less can be more.

(Sources: Game Design Theory and Practice by Richard Rouse III; The Making of Karateka and The Making of Prince of Persia by Jordan Mechner; Creative Computing of March 1979, September 1979, and May 1980; Next Generation of May 1998; Computer Gaming World of December 1989; Jordan Mechner’s Prince of Persia postmortem from the 2011 Game Developers Conference; “Jordan Mechner: The Man Who Would Be Prince” from Games™; the Jordan Mechner and Brøderbund archives at the Strong Museum of Play.)

 
 


Cinemaware’s Year in the Desert

The last year of the 1980s was also the last that the Commodore Amiga would enjoy as the ultimate American game machine. Even as the low-end computer-game market was being pummeled into virtual nonexistence by the Nintendo Entertainment System, leaving the Amiga with little room into which to expand downward, the heretofore business-centric world of MS-DOS was developing rapidly on the high end, with VGA graphics and sound cards becoming more and more common. The observant could already recognize that these developments, combined with Commodore’s lackadaisical attitude toward improving their own technology, must spell serious trouble for the Amiga in the long run.

But for now, for this one more year, things were still going pretty well. Amiga zealots celebrated loudly and proudly at the beginning of 1989 when news broke that the platform had pushed past the magic barrier of 1 million machines sold. As convinced as ever that world domination was just around the corner for their beloved “Amy,” they believed that number would have to lead to her being taken much more seriously by the big non-gaming software houses. While that, alas, would never happen, sales were just beginning to roll in from many of the European markets that would sustain the Amiga well into the 1990s.

This last positive development fed directly into the bottom line of Cinemaware, the American software house most closely identified with the Amiga, to a large extent even in Europe. Cinemaware’s founder Bob Jacob wisely forged close ties with the exploding European Amiga market via a partnership with the British publisher Mirrorsoft. In this way he got Cinemaware’s games wide distribution and promotion throughout Europe, racking up sales across the pond under the Mirrorsoft imprint that often dramatically exceeded those Cinemaware was able to generate under their own label in North America. The same partnership led to another welcome revenue stream: the importation of European games into Cinemaware’s home country. Games like Speedball, by the rockstar British developers The Bitmap Brothers, didn’t have much in common with Cinemaware’s usual high-concept fare, but did feed the appetite for splashy, frenetic, often ultra-violent action among American youngsters who had recently found Amiga 500s under their Christmas trees.

Yet Cinemaware’s biggest claim to fame remained their homegrown interactive movies — which is not to say that everyone was a fan of their titular cinematic approach to game-making. A steady drumbeat of criticism, much of it far from unjustified, had accompanied the release of each new interactive movie since the days of Defender of the Crown. Take away all of the music and pretty pictures that surrounded their actual game play, went the standard line of attack, and these games were nothing but shallow if not outright broken exercises in strategy attached to wonky, uninteresting action mini-games. Cinemaware clearly took the criticism to heart despite the sales success they continued to enjoy. Indeed, the second half of the company’s rather brief history can to a large extent be read as a series of reactions to that inescapable negative drumbeat, a series of attempts to show that they could make good games as well as pretty ones.

At first, the new emphasis on depth led to decidedly mixed results. Conflating depth with difficulty in a manner akin to the way that so many adventure-game designers conflate difficulty with unfairness, Cinemaware gave the world Rocket Ranger as their second interactive movie of 1988. It had all the ingredients to be great, but was undone by balance issues exactly the opposite of those which had plagued the prototypical Cinemaware game, Defender of the Crown. In short, Rocket Ranger was just too hard, a classic game-design lesson in the dangers of overcompensation and the importance of extensive play-testing to get that elusive balance just right. With two more new interactive movies on the docket for 1989, players were left wondering whether this would be the year when Cinemaware would finally get it right.


Certainly they showed no sign of backing away from their determination to bring more depth to their games. On the contrary, they pushed that envelope still harder with Lords of the Rising Sun, their first interactive movie of 1989. At first glance, it was a very typical Cinemaware confection, a Defender of the Crown set in feudal Japan. Built like that older game from the tropes and names of real history without bothering to be remotely rigorous about any of it, Lords of the Rising Sun is also another strategy game broken up by action-oriented minigames — the third time already, following Defender of the Crown and Rocket Ranger, that Cinemaware had employed this template. This time, however, a concerted effort was made to beef up the strategy game, not least by making it into a much more extended affair. Lords of the Rising Sun became just the second of Cinemaware’s interactive movies to include a save-game feature, and in this case it was absolutely necessary; a full game could absorb many hours. It thus departed more markedly than anything the company had yet done from Bob Jacob’s original vision of fast-playing, non-taxing, ultra-accessible games. Indeed, with a thick manual and a surprising amount of strategic and tactical detail to keep track of, Lords of the Rising Sun can feel more like an SSI title than a typical Cinemaware game once you look past its beautiful audiovisual presentation. Reaching for the skies if not punching above their weight, Cinemaware even elected to include the option of playing the game as an exercise in pure strategy, with the action sequences excised.


But sadly, the strategy aspect is as inscrutable as a Zen koan. While Rocket Ranger presents with elegance and grace a simple strategy game that would be immensely entertaining if it wasn’t always kicking your ass, Lords of the Rising Sun is just baffling. You’re expected to move your armies over a map of Japan, recruiting allies where possible, fighting battles to subdue enemies where not. Yet it’s all but impossible to divine any real sense of the overall situation from the display. This would-be strategy game ends up feeling more random than anything else, as you watch your banners wander around seemingly of their own volition, bumping occasionally into other banners that may represent enemies or friends. It suffers mightily from a lack of clear status displays, making it really, really hard to keep track of who wants to do what to whom. If you have the mini-games turned on, the bird’s-eye view is broken up by arcade sequences that are at least as awkward as the strategy game. In the end, Lords of the Rising Sun is just no fun at all.


While it’s very pretty, Lords of the Rising Sun’s animated, scrolling map is nicer to look at than it is useful as a practical tool for strategizing.

Press and public alike were notably unkind to Lords of the Rising Sun. Claims like Bob Jacob’s that “there is more animation in Lords than has ever been done in any computer game” — a claim as unquantifiable as it was dubious, especially in light of some of Sierra’s recent efforts — did nothing to shake Cinemaware’s reputation for being all sizzle, no steak. Ken St. Andre of Tunnels & Trolls and Wasteland fame, reviewing the game for Questbusters magazine, took the game to task on its every aspect, beginning with the excruciating picture on the box of a cowering maiden about to fall out of her kimono; he deemed it “an insult to women everywhere and to Japanese culture in particular.” (Such a criticism sounds particularly forceful coming from St. Andre; Wasteland with its herpes-infested prostitutes and all the rest is hardly a bastion of political correctness.) He concluded his review with a zinger so good I wish I’d thought of it: he called the game “a Japanese Noh play.”

Many other reviewers, while less boldly critical, seemed nonplussed by the whole experience — a very understandable reaction to the strategy game’s vagaries. Sales were disappointing in comparison to those of earlier interactive movies, and the game has gone down in history alongside the equally underwhelming S.D.I. as perhaps the least remembered of all the Cinemaware titles.


So, what with the game-play criticisms beginning to affect the bottom line, Cinemaware really needed to deliver something special for their second game of 1989. Thankfully, It Came from the Desert would prove to be the point where they finally got this interactive-movie thing right, delivering at long last a game as nice to play as it is to look at.


It Came from the Desert was the first of the interactive movies not to grow from a seed of an idea planted by Bob Jacob himself. Its originator was rather David Riordan, a newcomer to the Cinemaware fold with an interesting career in entertainment already behind him. As a very young man, he’d made a go of it in rock music, enjoying his biggest success in 1970 with a song called “Green-Eyed Lady,” a #3 hit he co-wrote for the (briefly) popular psychedelic band Sugarloaf. A perennial on Boomer radio to this day, that song’s royalties doubtless went a long way toward letting him explore his other creative passions after his music career wound down. He worked in movies for a while, and then worked with MIT on a project exploring the interactive potential of laser discs. After that, he worked briefly for Lucasfilm Games during their heady early days with Peter Langston at the helm. And from there, he moved on to Atari, where he worked on laser-disc-driven stand-up arcade games until it became obvious that Dragon’s Lair and its spawn had been the flashiest of flashes in the pan.


David Riordan on the job at Cinemaware.

Riordan’s resume points to a clear interest in blending cinematic approaches with interactivity. It thus comes as little surprise that he was immediately entranced when he first saw Defender of the Crown one day at his brother-in-law’s house. It had, he says, “all the movie attributes and approaches that I had been trying to get George Lucas interested in” while still with Lucasfilm. He wrote to Cinemaware, striking up a friendship with Bob Jacob that led him to join the company in 1988. Seeing in Riordan a man who very much shared his own vision for Cinemaware, Jacob relinquished a good deal of the creative control onto which he had heretofore held so tightly. Riordan was placed in charge of the company’s new “Interactive Entertainment Group,” which was envisioned as a production line for cranking out new interactive movies of far greater sophistication than those Cinemaware had made to date. These latest and greatest efforts were to be made available on a whole host of platforms, from their traditional bread and butter, the Amiga, to the much-vaunted CD-based platforms now in the offing from a number of hardware manufacturers. If all went well, It Came from the Desert would mark the beginning of a whole new era for Cinemaware.

Here we can see — just barely; sorry for this picture’s terrible fidelity — Cinemaware’s interactive-movie scripting tool, which they dubbed MasterPlan, running in HyperCard.

Cinemaware spent months building the technology that would allow them to make It Came from the Desert. Riordan’s agenda can best be described as a desire to free game design from the tyranny of programmers. If this new medium was to advance sufficiently to tell really good, interesting interactive stories, he reasoned, its tools would have to become something that non-coding “real” writers could successfully grapple with. Continuing to advance Cinemaware’s movie metaphors, his team developed a game engine that could largely be “scripted” in point-and-click fashion in HyperCard rather than needing to be programmed in any conventional sense. Major changes to the structure of a game could be made without ever needing to write a line of code, simply by editing the master plan of the game in a HyperCard tool Cinemaware called, appropriately enough, MasterPlan. The development process leveraged the best attributes of a number of rival platforms: Amigas ran the peerless Deluxe Paint for the creation of art; Macs ran HyperCard for the high-level planning; fast IBM clones served as the plumbing of the operation, churning through compilations and compressions. It was by anyone’s standards an impressive collection of technology — so impressive that the British magazine ACE, after visiting a dozen or more studios on a sort of grand tour of the American games industry, declared Cinemaware’s development system the most advanced of them all. Cinemaware had come a long way from the days of Defender of the Crown, whose development process had consisted principally of locking programmer R.J. Mical in his office with a single Amiga and a bunch of art and music and not letting him out again until he had a game. “If we ever get a real computer movie,” ACE concluded, “this is where it’s going to come from.”
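
For the curious, here’s a toy sketch in Python of what “the game lives in editable data” means in practice. It is emphatically not Cinemaware’s MasterPlan format, and the scenes are invented; it only illustrates the principle that a writer restructures the story by editing data, while the interpreter underneath never changes.

```python
# A toy illustration of data-driven scripting; this is not Cinemaware's actual
# MasterPlan format, and the scenes below are invented. The point is only that
# the shape of the game lives in editable data, so a writer can restructure the
# story without touching the interpreter code underneath.

story = {
    "cabin":  {"text": "You wake in your cabin outside Lizard Breath.",
               "choices": {"drive into town": "plaza", "hike to the crater": "crater"}},
    "plaza":  {"text": "The mayor waves away your warnings; you need more proof.",
               "choices": {"gather evidence": "crater"}},
    "crater": {"text": "Giant mandible tracks circle the impact site.",
               "choices": {"bring this to the mayor": "plaza"}},
}

def run(story, start, clicks):
    """Generic interpreter: it never changes, no matter how the story data does."""
    scene = start
    print(story[scene]["text"])
    for click in clicks:
        scene = story[scene]["choices"].get(click, scene)
        print(story[scene]["text"])

run(story, "cabin", ["hike to the crater", "bring this to the mayor"])
```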


While it’s debatable whether It Came from the Desert quite rises to that standard, it certainly is Cinemaware’s most earnest and successful attempt at crafting a true interactive narrative since King of Chicago. The premise is right in their usual B-movie wheelhouse. Based loosely on the campy 1950s classic Them!, the game takes place in a small desert town with the charming appellation of Lizard Breath that’s beset by an alarming number of giant radioactive ants, product of a recent meteor strike. You play a geologist in town; “the most interesting rocks always end up in the least interesting places,” notes the introduction wryly. Beginning in your cabin, you can move about the town and its surroundings as you will, interacting with its colorful cast of inhabitants via simple multiple-choice dialogs and getting into scrapes of various sorts which lead to the expected Cinemaware action sequences. Your first priority is largely to convince the townies that they have a problem in the first place; this task you can accomplish by collecting enough evidence of the threat to finally gain the attention of the rather stupefyingly stupid mayor. Get that far, and you’ll be placed in charge of the town’s overall defense, at which point a strategic aspect joins the blend of action and adventure to create a heady brew indeed. Your ultimate goal, which you have just fifteen days in total to accomplish, is to find the ants’ main nest and kill the queen.

It Came from the Desert excels in all the ways that most of Cinemaware’s interactive movies excel. The graphics and sound were absolutely spectacular in their day, and still serve very well today; you can well-nigh taste the gritty desert winds. What makes it a standout in the Cinemaware catalog, however, is the unusual amount of attention that’s been paid to the design — to your experience as the player. A heavily plot-driven game like this could and usually did go only one way in the 1980s. You probably know what I’m picturing: a long string of choke points requiring you to be in just the right place at just the right time to avoid being locked out of victory. Thankfully, It Came from the Desert steers well away from that approach. The plot is a dynamic thing rolling relentlessly onward, but your allies in the town are not entirely without agency of their own. If you fail to accomplish something, someone else might just help you out — perhaps not as quickly or efficiently as one might ideally wish, but at least you still feel you have a shot.

And even without the townies’ help, there are lots of ways to accomplish almost everything you need to. The environment as a whole is remarkably dynamic, far from the static set of puzzle pieces so typical of more traditional adventure games of this era and our own. There’s a lot going on under the hood in this one, far more than Cinemaware’s previous games would ever lead one to expect. Over the course of the fifteen days, the town’s inhabitants go from utterly unconcerned about the strange critters out there in the desert to full-on, backs-against-the-wall, fight-or-flight panic mode. By the end, when the ants are roaming at will through the rubble that once was Lizard Breath destroying anything and anyone in their path, the mood feels far more apocalyptic than that of any number of would-be “epic” games. One need only contrast the frantic mood at the end of the game with the dry, sarcastic tone of the beginning — appropriate to an academic stranded in a podunk town — to realize that one really does go on a narrative journey over the few hours it takes to play.

Which brings me to another remarkable thing: you can’t die in It Came from the Desert. If you lose at one of the action games, you wake up in the hospital, where you have the option of spending some precious time recuperating or trying to escape in shorter order via another mini-game. (No, I have no idea why a town the size of Lizard Breath should have a hospital.) In making sure that no individual challenge or decision is an all-or-nothing proposition, It Came from the Desert leaves room for the sort of improvisational derring-do that turns a play-through into a memorable, organic story. It’s not precisely that knowledge of past lives isn’t required; you’re almost certain to need several tries to finally save Lizard Breath. Yet each time you play you get to live a complete story, even if it is one that ends badly. Meanwhile you’re learning the lay of the land, learning to play more efficiently and getting steadily better at the action games, which are themselves unusually varied and satisfying by Cinemaware’s often dodgy standards. There are not just many ways to lose It Came from the Desert but also many paths to victory. Win or lose, your story in It Came from the Desert is your story; you get to own it. There’s a save-game feature, but I don’t recommend that you use it except as a bookmark when you really do need to do something else for a while. Otherwise just play along and let the chips fall where they may. At last, here we have a Cinemaware interactive movie that’s neither too easy nor too hard; this one is just right, challenging but not insurmountable.
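
That underlying principle, failure costing time rather than ending the game, is simple enough to sketch. The Python snippet below is my own illustration with made-up numbers (the fifteen-day clock is the game’s; the odds and thresholds are not), rather than a model of the actual rules.

```python
import random

# A sketch of the underlying design principle only: losing an action scene
# costs time instead of ending the game. The fifteen-day clock is the game's;
# the win odds and the evidence threshold below are made up for illustration.

DAYS = 15
HOSPITAL_STAY = 1   # days lost when you wake up in the hospital

def action_scene() -> bool:
    """Stand-in for one of the mini-games; True means the player prevailed."""
    return random.random() < 0.6

def campaign() -> str:
    day, progress = 1, 0
    while day <= DAYS:
        if action_scene():
            progress += 1            # another step toward finding the queen
        else:
            day += HOSPITAL_STAY     # failure hurts, but the story rolls on
        day += 1
    return "the queen is destroyed" if progress >= 8 else "Lizard Breath is overrun"

print(campaign())
```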


It Came from the Desert evolves into a strategy game among other things, as you deploy the town’s forces to battle each new ant infestation while you continue the search for the main hive.

Widely and justifiably regarded among the old-school Amiga cognoscenti of today as Cinemaware’s finest hour, It Came from the Desert was clearly seen as something special within Cinemaware as well back in the day; one only has to glance at contemporary comments from those who worked on the game to sense their pride and excitement. There was a sense both inside and outside their offices that Cinemaware was finally beginning to crack a nut they’d been gnawing on for quite some time. Even Ken St. Andre was happy this time. “Cinemaware’s large creative team has managed to do a lot of things very well indeed in this game,” he wrote, “and as a result they have produced a game that looks great, sounds great, moves along at a rapid pace, is filled with off-the-wall humor without being dumb, and is occasionally both gripping and exciting.”

When It Came from the Desert proved a big commercial success, Cinemaware pulled together some ideas that had been left out of the original game due to space constraints, combined them with a plot involving the discovery of a second ant queen, and made it all into a sequel subtitled Ant-Heads!. Released at a relatively low price only as an add-on for the original game — thus foreshadowing a practice that would get more and more popular as the 1990s wore on — Ant-Heads! was essentially a new MasterPlan script that utilized the art and music assets from the original game, a fine demonstration of the power of Cinemaware’s new development system. It upped the difficulty a bit by straitening the time limit from fifteen days to ten, but otherwise played much like the original — which, considering how strong said original had been, suited most people just fine.

It Came from the Desert, along with the suite of tools used to create it, might very well have marked the start of exactly the new era of more sophisticated Cinemaware interactive movies that David Riordan had envisioned. As things shook out, however, it would have more to do with endings than beginnings. Cinemaware would manage just one more of these big productions before being undone by bad decisions, bad luck, and a changing marketplace. We’ll finish up with the story of their visionary if so often flawed games soon. In the meantime, by all means go play It Came from the Desert if time and motivation allow. I was frankly surprised at how well it still held up when I tackled it recently, and I think it just might surprise you as well.

(Sources: The One from April 1989, June 1989, and June 1990; ACE from April 1990; Commodore Magazine from November 1988; Questbusters from September 1989, February 1990, and May 1990; Matt Barton’s interview with Bob Jacob on Gamasutra.)

 
 


The Manhole


Because the CD-ROM version of The Manhole sold in relatively small numbers in comparison to the original floppy version, the late Russell Lieblich’s surprisingly varied original soundtrack is too seldom heard today. So, in the best tradition of multimedia computing (still a very new and sexy idea in the time about which I’m writing), feel free to listen while you read.




Were HyperCard “merely” the essential bridge between Ted Nelson’s Xanadu fantasy and the modern World Wide Web, it would stand as one of the most important pieces of software of the 1980s. But, improbably, HyperCard was even more than that. It’s easy to get so dazzled by its early implementation of hypertext that one loses track entirely of the other part of Bill Atkinson’s vision for the environment. True to the Macintosh, “the computer for the rest of us,” Atkinson designed HyperCard as a sort of computerized erector set for everyday users who might not care a whit about hypertext for its own sake. With HyperCard, he hoped, “a whole new body of people who have creative ideas but aren’t programmers will be able to express their ideas or expertise in certain subjects.”

He made good on that goal. An incredibly diverse group of people worked with HyperCard, a group in which traditional hackers were very much the minority. Danny Goodman, the man who became known as the world’s foremost authority on HyperCard programming, was actually a journalist whose earlier experiences with programming had been limited to a few dabblings in BASIC. In my earlier article about hypertext and HyperCard, I wrote how “a professor of music converted his entire Music Appreciation 101 course into a stack.” Well, readers, I meant that literally. He did it himself. Industry analyst and HyperCard zealot Jan Lewis:

You can do things with it [HyperCard] immediately. And you can do sexy things: graphics, animation, sound. You can do it without knowing how to program. You get immediate feedback; you can make a change and see or hear it immediately. And as you go up on the learning curve — let’s say you learn how to use HyperTalk [the bundled scripting language] — again, you can make changes easily and simply and get immediate feedback. It just feels good. It’s fun!

And yet HyperCard most definitely wasn’t a toy. People could and did make great, innovative, commercial-quality software using it. Nowhere is the power of HyperCard — a cultural as well as a technical power — illustrated more plainly than in the early careers of Rand and Robyn Miller.


Rand and Robyn had a very unusual upbringing. The first and third of the four sons of a wandering non-denominational preacher, they spent their childhoods moving wherever their father’s calling took him: from Dallas to Albuquerque, from Hawaii to Haiti to Spokane. They were a classic pairing of left brain and right brain. Rand had taken to computers from the instant he was introduced to them via a big time-shared system whilst still in junior high, and had made programming them into his career. By 1987, the year HyperCard dropped, he was to all appearances settled in life: 28 years old, married with children, living in a small town in East Texas, working for a bank as a programmer, and nurturing a love for the Apple Macintosh (he’d purchased his first Mac within days of the machine’s release back in 1984). He liked to read books on science. His brother Robyn, seven years his junior, was still trying to figure out what to do with his life. He was attending the University of Washington in somewhat desultory fashion as an alleged anthropology major, but devoted most of his energy to drawing pictures and playing the guitar. He liked to read adventure novels.

HyperCard struck Rand Miller, as it did so many, with all the force of a revelation. While he was an accomplished enough programmer to make a living at it, he wasn’t one who particularly enjoyed the detail work that went with the trade. “There are a lot of people who love digging down into the esoterics of compilers and C++, getting down and dirty with typed variables and all that stuff,” he says. “I wanted a quick return on investment. I just wanted to get things done.” HyperCard offered the chance to “get things done” dramatically faster and more easily than any programming environment he had ever seen. He became an immediate convert.

The Manhole

With two small girls of his own, Rand felt keenly the lack of quality children’s software for the Macintosh. He hit upon the idea of making a sort of interactive storybook using HyperCard, a very natural application for a hypertext tool. Lacking the artistic talent to make a go of the pictures, he thought of his little brother Robyn. The two men, so far apart in years and geography and living such different lives, weren’t really all that close. Nevertheless, Rand had a premonition that Robyn would be the perfect partner for his interactive storybook.

But Robyn, who had never owned a computer and had never had any interest in doing so, wasn’t immediately enticed by the idea of becoming a software developer. Getting him just to consider the idea took quite a number of letters and phone calls. At last, however, Robyn made his way down to the Macintosh his parents kept in the basement of the family home in Spokane and loaded up the copy of HyperCard his brother had sent him. There, like so many others, he was seduced by Bill Atkinson’s creation. He started playing around, just to see what he could make. What he made right away became something very different from the interactive storybook, complete with text and metaphorical pages, that Rand had envisioned. Robyn:

I started drawing this picture of a manhole — I don’t even know why. You clicked on it and the manhole cover would slide off. Then I made an animation of a vine growing out. The vine was huge, “Jack and the Beanstalk”-style. And then I didn’t want to turn the page. I wanted to be able to navigate up the vine, or go down into the manhole. I started creating a navigable world by using the very simple tools [of HyperCard]. I created this place.  I improvised my way through this world, creating one thing after another. Pretty soon I was creating little canals, and a forest with stars. I was inventing it as I went. And that’s how the world was born.

For his part, Rand had no problem accepting the change in approach:

Immediately you are enticed to explore instead of turning the page. Nobody sees a hole in the ground leading downward and a vine growing upward and in the distance a fire hydrant that says, “Touch me,” and wants to turn the page. You want to see what those things are. Instead of drawing the next page [when the player clicked a hotspot], he [Robyn] drew a picture that was closer — down in the manhole or above on the vine. It was kind of a stream of consciousness, but it became a place instead of a book. He started sending me these images, and I started connecting them, trying to make them work, make them interactive.

The Manhole

In this fashion, they built the world of The Manhole together: Robyn pulling its elements from the flotsam and jetsam of his consciousness and drawing them on the screen, Rand binding it all together into a contiguous place, and adding sound effects and voice snippets here and there. If they had tried to make a real game of the thing, with puzzles and goals, such a non-designed approach to design would likely have gone badly wrong in a hurry.

Luckily, puzzles and goals were never the point of The Manhole. It was intended always as just an endlessly interesting space to explore. As such, it would prove capable of captivating children and the proverbial young at heart for hours, full as it was of secrets and Easter eggs hidden in the craziest of places. One can play with The Manhole on and off for literally years, and still continue to stumble upon the occasional new thing. Interactions are often unexpected, and unexpectedly delightful. Hop in a rowboat to take a little ride and you might emerge in a rabbit’s teacup. Start watching a dragon’s television — Why does a dragon have a television? Who knows! — and you can teleport yourself into the image shown on the screen to emerge at the top of the world. Search long enough, and you might just discover a working piano you can actually play. The spirit of the thing is perhaps best conveyed by the five books you find inside the friendly rabbit’s home: Alice in Wonderland; The Wind in the Willows; The Lion, the Witch, and the Wardrobe; Winnie the Pooh; and Metaphors of Intercultural Philosophy (“This book isn’t about anything!”). Like all of those books excepting, presumably, the last, The Manhole is pretty wonderful, a perfect blend of sweet cuteness and tart whimsy.

The Manhole

With no contacts whatsoever within the Macintosh software industry, the brothers decided to publish The Manhole themselves via a tiny advertisement in the back of Macworld magazine, taken out under the auspices of Prolog, a consulting company Rand had founded as a moonlighting venture some time before. They rented a tiny booth to show The Manhole publicly for the first time at the Hyper Expo in San Francisco in June of 1988. (Yes, HyperCard mania had gotten so intense that there were entire trade shows dedicated just to it.) There they were delighted to receive a visit from none other than HyperCard’s creator Bill Atkinson, with his daughter Laura in tow; not yet five years old, she had no trouble navigating through their little world. Incredibly, Robyn had never even heard the word “hypertext” prior to the show, had no idea about the decades of theory that underpinned the program he had used, savant-like, to create The Manhole. When he met a band of Ted Nelson’s disgruntled Xanadu disciples on the show floor, come to crash the HyperCard party, he had no idea what they were on about.

But the brothers’ most important Hyper Expo encounter was a meeting with Richard Lehrberg, Vice President for Product Development at Mediagenic,[1] who took a copy of The Manhole away with him for evaluation. Lehrberg showed it to William Volk, whom he had just hired away from the small Macintosh and Amiga publisher Aegis to become Mediagenic’s head of technology; he described it to Volk unenthusiastically as “this little HyperCard thing” done by “two guys in Texas.” Volk was much more impressed. He was immediately intrigued by one aspect of The Manhole in particular: the way that it used no buttons or conventional user-interface elements at all. Instead, the pictures themselves were the interface; you could just click where you would and see what happened. It was perhaps a product of Robyn Miller’s sheer naïveté as much as anything else; seasoned computer people, so used to conventional interface paradigms, just didn’t think like that. But regardless of where it came from, Volk thought it was genius, a breaking down of a wall that had heretofore always separated the user from the virtual world. Volk:

The Miller brothers had come up with what I call the invisible interface. They had gotten rid of the idea of navigation buttons, which was what everyone was doing: go forward, go backward, turn right, turn left. They had made the scenes themselves the interface. You’re looking at a fire hydrant. You click on the fire hydrant; the fire hydrant sprays water. You click on the fire hydrant again; you zoom in to the fire hydrant, and there’s a little door on the fire hydrant. That was completely new.

Of course, other games did have you clicking “into” their world to make things happen; the point-and-click adventure genre was evolving rapidly during this period to replace the older parser-driven adventure games. But even games like Déjà Vu and Maniac Mansion, brilliantly innovative though they were, still surrounded their windows into their worlds with a clutter of “verb” buttons, legacies of the genre’s parser-driven roots. The Manhole, however, presented the player with nothing but its world. What with its defiantly non-Euclidean — not to say nonsensical — representation of space and its lack of goals and puzzles, The Manhole wasn’t a conventional adventure game by any stretch. Nevertheless, it pointed the way to what the genre would become, not least in the later works of the Miller brothers themselves.
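For the programmers in the audience, the trick is almost embarrassingly simple to sketch. The few lines of Python below are purely illustrative: they are neither HyperCard nor anything the Millers actually wrote, and every scene name and coordinate in them is invented. Still, they capture the essence of the invisible interface: each picture carries its own set of clickable regions, and each region simply leads to another picture. No buttons, no verbs, no menus, just the world.

    # A toy sketch of the "invisible interface," not actual HyperCard or Manhole
    # code: every scene is just a picture plus a set of clickable regions, each
    # of which leads to another scene. All names and coordinates are invented.
    SCENES = {
        "street": {
            "image": "street.png",
            "hotspots": [
                # (left, top, right, bottom, destination scene)
                (120, 200, 180, 240, "open_manhole"),   # click the manhole cover
                (300, 100, 340, 180, "fire_hydrant"),   # click the fire hydrant
            ],
        },
        "open_manhole": {
            "image": "manhole_open.png",
            "hotspots": [
                (140, 210, 170, 260, "down_the_hole"),  # climb down into the dark
                (150, 40, 190, 200, "up_the_vine"),     # climb the giant vine
            ],
        },
    }

    def handle_click(scene_name, x, y):
        """Return the scene a click at (x, y) leads to, or stay put if nothing was hit."""
        for left, top, right, bottom, destination in SCENES[scene_name]["hotspots"]:
            if left <= x <= right and top <= y <= bottom:
                return destination
        return scene_name

    print(handle_click("street", 150, 220))   # -> "open_manhole"

The whole interface amounts to nothing more than rectangles laid invisibly over the artwork, which is exactly why a child who has never seen a menu can navigate it without instruction.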

Much of Volk’s working life for the next two years would be spent on The Manhole, by the end of which period he would quite possibly be more familiar with its many nooks and crannies than its own creators were. He became The Manhole‘s champion inside Mediagenic, convincing his colleagues to publish it, thereby bringing it to a far wider audience than the Miller brothers could ever have reached on their own. Released by Mediagenic under their Activision imprint, it became a hit by the modest standards of the Macintosh consumer-software market. Macworld magazine named The Manhole the winner of their “Wild Card” category in a feature article on the best HyperCard stacks, while the Software Publishers Association gave it an “Excellence in Software” award for “Best New Use of a Computer.”

Well aware that The Manhole was collecting a certain chic cachet to itself, Mediagenic/Activision didn’t hesitate to play that angle up in their advertising.

Had that been the end of it, The Manhole would remain historically interesting as both a delightful little curiosity of its era and the starting point of the hugely significant game-development careers of the Miller brothers. Yet there’s more to the story.

William Volk, frustrated with the endless delays of CD-I and the state of paralysis the entire industry was in when it came to the idea of publishing entertainment software on CD, had been looking for some time for a way to break the logjam. It was Stewart Alsop, an influential tech journalist, who first suggested to Volk that the answer to his dilemma was already part of Mediagenic’s catalog — that The Manhole would be perfect for CD-ROM. Volk was just the person to see such a project through, having already experimented extensively with CD-ROM and CD-I at Aegis as well as at Mediagenic. With the permission of the Miller brothers, he recruited Russell Lieblich, Mediagenic’s longstanding guru in all things music- and sound-related, to compose and perform a soundtrack for The Manhole which would play from the CD as the player explored.

An important difference separates the way the music worked in the CD-ROM version of The Manhole from the way it worked in virtually all computer games to appear before it. The occasional brief digitized snippet aside, music in computer games had always been generated on the computer, whether by sound chips like the Commodore 64’s famous SID or entire sound boards like the top-of-its-class Roland MT-32 (we shall endeavor to forget the horrid beeps and squawks that issued from the IBM PC and Apple II’s native sound hardware). But The Manhole‘s music, while having been originally generated entirely or almost entirely on computers in Lieblich’s studio, was then recorded onto CD for digital playback, just like a song on a music CD. This method, made possible only by evolving computer sound hardware and, most importantly, by the huge storage capacity of a CD-ROM, would in the years to come slowly become simply the way that computer-game music was done. Today many big-budget titles hire entire orchestras to record soundtracks as elaborate and ambitious as the ones found in big Hollywood feature films, whilst also including digitized recordings of voices, squealing tires, explosions, and all the inevitable rest. In fact, surprisingly little of the sound present in most modern games is synthesized sound, a situation that has long since relegated elaborate setups like the Roland MT-32 to the status of white elephants; just pipe your digitized recording through a digital-to-analog converter and be done with it already.

As the very first title to go all digitized all the time, The Manhole didn’t have a particularly easy time of it; getting the music to play without breaking up or stuttering as the player explored presented a huge challenge on the Macintosh, a machine whose minimalist design burdened the CPU with all of the work of sound generation. However, Volk and his colleagues got it going at last. Published in the spring of 1989, the CD-ROM version of The Manhole was a major landmark in the history of computing, the first American game — or, at least, software toy (another big buzzword of the age, as it happens) — to be released on CD-ROM.[2] Volk, infuriated with Philips for the chaos and confusion CD-I’s endless delays had wrought in an industry he believed was crying out for the limitless vistas of optical storage, sent them a copy of The Manhole along with a curt note: “See! We did it! We’re tired of waiting!”

And they weren’t done yet. Having gotten The Manhole working on CD-ROM on the Macintosh, Volk and his colleagues at Mediagenic next tackled the daunting task of porting it to the most popular platform for consumer software, MS-DOS — a platform without HyperCard. To address this lack, Mediagenic developed a custom engine for CD-ROM titles on MS-DOS, dubbing it the Multimedia Applications Development Environment, or MADE.[3] Mediagenic’s in-house team of artists redrew Robyn Miller’s original black-and-white illustrations in color, and The Manhole on CD-ROM for MS-DOS shipped in 1990.

In my opinion, The Manhole lost some of its unique charm when it was colorized for MS-DOS. The VGA graphics, impressive in their day, look just a bit garish and overdone today in comparison to the classic pen-and-ink style of the original.

The Manhole, idiosyncratic piece of artsy children’s software that it was, could hardly have been expected to break the industry’s optical logjam all on its own. Its CD-ROM incarnation, for that matter, was little more than the floppy version with a soundtrack playing in the background — a nice addition certainly, but perhaps not quite the transformative experience which all of the rhetoric surrounding CD-ROM’s potential might have led one to expect. It would take another few excruciating years for a CD-ROM drive to become a must-have accessory for everyday American computers. Yet every revolution has to start somewhere, and William Volk deserves his full measure of credit for doing what he could to push this one forward in the only way that could ultimately matter: by stepping up and delivering a real, tangible product at long last. As Steve Jobs used to say, “Real artists ship.”

The importance of The Manhole, existing as it does right there at the locus of so much that was new and important in computing in the late 1980s, can be read in so many ways that there’s always a danger of losing some of them in the shuffle. But it should never be forgotten whilst trying to sort through the tangle that this astonishingly creative little world was principally designed by someone who had barely touched a computer in his life before he sat down with HyperCard. That he wound up with something so fascinating is a huge tribute not just to Robyn Miller and his enabling brother Rand, but also to Bill Atkinson’s HyperCard itself. Apple has long since abandoned HyperCard, and we enjoy no precise equivalent to it today. Indeed, its vision of intuitive, non-pretentious, fun programming is one that we’re in danger of losing altogether. Being one who loves the computer most of all as the most exciting tool for creation ever invented, I can’t help but see that as a horrible shame.

The Miller brothers had, as most of you reading this probably know, a far longer future in front of them than HyperCard would get to enjoy. Already well before 1988 was through they had rechristened themselves Cyan Productions, a name that felt much more appropriate for a creative development house than the businesslike Prolog. As Cyan, they made two more pieces of children’s software: Cosmic Osmo and the Worlds Beyond the Mackerel and Spelunx and the Caves of Mr. Seudo. Both were once again made using HyperCard, and both were very much made in the spirit of The Manhole. And like The Manhole both were published on CD-ROM as well as floppy disk; the Miller brothers, having learned much from Mediagenic’s process of moving their first title to CD-ROM, handled the CD-ROM as well as the floppy versions themselves when it came to these later efforts. Opinions are somewhat divided on whether the two later Cyan children’s titles fully recapture the magic that has led so many adults and children alike over the years to spend so much time plumbing the depths of The Manhole. No one, however, can argue with the significance of what came next, the Miller brothers’ graduation to games for adults — and, as it happens, another huge milestone in the slow-motion CD-ROM revolution. But that story, like so many others, is one that we’ll have to tell at another time.

(Sources: Amstrad Action of January 1990; Macworld of July 1988, October 1988, November 1988, March 1989, April 1989, and December 1989; Wired of August 1994 and October 1999; The New York Times of November 28 1989. Also the books Myst and Riven: The World of the D’ni by Mark J.P. Wolf and Prima’s Official Strategy Guide: Myst by Rick Barba and Rusel DeMaria, and the Computer Chronicles television episodes entitled “HyperCard,” “MacWorld Special 1988,” “HyperCard Update,” and “Hypertext.” Online sources include Robyn Miller’s Myst postmortem from the 2013 Game Developers Conference; Richard Moss’s Ludiphilia podcast; a blog post by Robyn Miller. Finally, my huge thanks to William Volk for sharing his memories and impressions with me in an interview and for sending me an original copy of The Manhole on CD-ROM for my research.

The original floppy-disk-based version of The Manhole can be played online at archive.org. The Manhole: Masterpiece Edition, a remake supervised by the Miller brothers in 1994 which sports much-improved graphics and sound, is available for purchase on Steam.)

Footnotes
1 Activision was renamed Mediagenic at almost the very instant that Lehrberg first met the Miller brothers. When the name change was greeted with universal derision, Activision/Mediagenic CEO Bruce Davis quickly began backpedaling on his hasty decision. The Manhole, for instance, was released by Mediagenic under their “Activision” label — which was odd because under the new ordering said label was supposed to be reserved for games, and The Manhole was considered children’s software, not a traditional game. I just stick with the name “Mediagenic” in this article as the least confusing way to address a confusing situation.
2 The first CD-based software to reach European consumers says worlds about the differences that persisted between American and European computing, and about the sheer can-do ingenuity that so often allowed British programmers in particular to squeeze every last ounce of potential out of hardware that was usually significantly inferior to that enjoyed by their American counterparts. Codemasters, a budget software house based in Warwickshire, came up with a unique shovelware package for the 1989 Christmas season. They transferred thirty old games from cassette to a conventional audio CD, which they then sold along with a special cable to run the output from an ordinary music-CD player into a Sinclair or Amstrad home computer. “Here’s your CD-ROM,” they said. “Have a ball.” By all accounts, Codemasters’s self-proclaimed “CD revolution,” kind of hilarious and kind of brilliant, did quite well for them. When it came to doing more with less in computing, you never could beat the Brits.
3 MADE’s scripting language was to some extent based on AdvSys, a language for amateur text-adventure creation that never quite took off like the contemporaneous AGT.
 
 


A Slow-Motion Revolution

CD-ROM

A quick note on terminology before we get started: “CD-ROM” can be used to refer either to the use of CDs as a data-storage format for computers in general or to the Microsoft-sponsored specification for same. I’ll be using the term largely in the former sense in the introduction to this article, in the latter after something called “CD-I” enters the picture. I hope the point of transition won’t be too hard to identify, but my apologies if this leads to any confusion. Sometimes this language of ours is a very inexact thing.



In the first week of March 1986, much of the computer industry converged on Seattle for the first annual Microsoft CD-ROM Conference. Microsoft had anticipated about 500 to 600 attendees to the four-day event. Instead more than 1000 showed up, forcing the organizers to turn many of them away at the door of a conference center that by law could accommodate only 800 people. Between the presentations on CD-ROM’s bright future, the attendees wandered through an exhibit hall showcasing the format’s capabilities. The hit of the hall was what was about to become the first CD-ROM product ever to be made available for sale to the public: the text of all 21 volumes of Grolier’s Academic American Encyclopedia, some 200 MB in all, on a single disc. It was to be published by KnowledgeSet, a spinoff of Digital Research. Digital’s founder Gary Kildall, apparently forgiving Bill Gates his earlier trespasses in snookering a vital IBM contract out from under his nose, gave the conference’s keynote address.

Kildall’s willingness to forgive and forget in light of the bright optical-storage future that stood before the computer industry seemed very much in harmony with the mood of the conference as a whole. Sentiments often verged on the utopian, with talk of a new “paperless society” abounding, a revolution to rival that of Gutenberg. “The compact disc represents a major discontinuity in the cost of producing and distributing information,” said one Ed Schmid of DEC. “You have to go back to the invention of movable type and the printing press to find something equivalent.” The enthusiasm was so intense and the good vibes among the participants — many of them, like Gates and Kildall, normally the bitterest of enemies — so marked that some came to call the conference “the computer industry’s Woodstock.” If the attendees couldn’t quite smell peace and love in the air, they certainly could smell potential and profit.

All the excitement came down to a single almost unbelievable number: the 650 MB of storage offered by every tiny, inexpensive-to-manufacture compact disc. It’s very, very difficult to fully convey in our current world of gigabytes and terabytes just how inconceivably huge a figure 650 MB actually was in 1986, a time when a 40 MB hard drive was a cavernous, how-can-I-ever-possibly-fill-this-thing luxury found on only the most high-end computers. For developers who had been used to making their projects fit onto floppy disks boasting less than 1 MB of space, the idea of CD-ROM sounded like winning the lottery several times over. You could put an entire 21-volume encyclopedia on one of the things, for Pete’s sake, and still have more than two-thirds of the space left over! Suddenly one of the most nail-biting constraints against which they had always labored would be… well, not so much eased as simply erased. After all, how could anything possibly fill 650 MB?

And just in case that wasn’t enough great news, there was also the fact that the CD was a read-only format. If the industry as a whole moved to CD-ROM as its format of choice, the whole piracy problem, which organizations like the Software Publishers Association ardently believed was costing the industry billions every year, would dry up and blow away like a dandelion in the fall. Small wonder that the mood at the conference sometimes approached evangelistic fervor. Microsoft, as swept away with it all as anyone, published a collection of the papers that were presented there under the very non-businesslike, non-Microsoft-like title of CD-ROM: The New Papyrus. The format just seemed to demand a touch of rhapsodic poetry.

But the rhapsody wasn’t destined to last very long. The promised land of a software industry built around the effectively unlimited storage capacity of the compact disc would prove infuriatingly difficult to reach; the process of doing so would stretch over the better part of a decade, by the end of which time the promised land wouldn’t seem quite so promising anymore. Throughout that stretch, CD-ROM was always coming in a year or two, always the next big thing right there on the horizon that never quite arrived. This situation, so antithetical to the usual propulsive pace of computer technology, was brought about partly by limitations of the format itself which were all too easy to overlook amid the optimism of that first conference, and partly by a unique combination of external factors that sometimes almost seemed to conspire, perfect-storm-like, to keep CD-ROM out of the hands of consumers.



The compact disc was developed as a format for music by a partnership of the Dutch electronics giant Philips and the Japanese Sony during the late 1970s. Unlike the earlier analog laser-disc format for the storage of video, itself a joint project of Philips and the American media conglomerate MCA, the CD stored information digitally, as long strings of ones and zeros to be passed through digital-to-analog converters and thus turned into rich stereo sound. Philips and Sony published the final specifications for the music CD in 1980, opening up to others who wished to license the technology what would become known as the “Red Book” standard after the color of the binder in which it was described. The first consumer-oriented CD players began to appear in Japan in 1982, in the rest of the world the following year. Confined at first to the high-end audiophile market, by the time of that first Microsoft CD-ROM Conference in 1986 the CD was already well on its way to overtaking the record album and, eventually, the cassette tape to become the most common format for music consumption all over the world.

There were good reasons for the CD’s soaring popularity. Not only did CDs sound better than all but perhaps the most expensive audiophile turntables, with a complete absence of hiss or surface noise, but, given that nothing actually touched the surface of a disc when it was being played, they could effectively last forever, no matter how many times you listened to them; “Perfect sound forever!” ran the tagline of an early CD advertising campaign. Then there was the way you could find any song you liked on a CD just by tapping a few buttons, as opposed to trying to drop a stylus on a record at just the right point or rewind and fast-forward a cassette to just the right spot. And then there was the way that CDs could be carried around and stored so much more easily than a record album, plus the way they could hold up to 75 minutes’ worth of music, enough to pack many double vinyl albums onto a single CD. Throw in the lack of a need to change sides to listen to a full album, and seldom has a new media format appeared that is so clearly better than the existing formats in almost all respects.

It didn’t take long for the computer industry to come to see the CD format, envisioned originally strictly as a music medium, as a natural one to extend to other types of data storage. Where the rubber met the road — or the laser met the platter — a CD player was just a mechanism for reading bits off the surface of the disc and sending them on to some other circuitry that knew what to do with them. This circuitry could just as easily be part of a computer as a stereo system.

Such a sanguine view was perhaps a bit overly reductionist. When one started really delving into the practicalities of the CD as a format for data storage, one found a number of limitations, almost all of them drawn directly from the technology’s original purpose as a music-delivery solution. For one thing, CD drives were only capable of reading data off a disc at a rate of 153.6 K per second, this figure corresponding not coincidentally to the speed required to stream standard CD sound for real-time playback.[1] Such a throughput was considered pretty good but hardly breathtaking by mid-1980s hard-disk standards; an average 10 MB hard drive of the period might have a transfer rate of about 96 K per second, although high-performance drives could triple or even quadruple that figure.
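For those who like to check the arithmetic, both of those figures — the 153.6 K quoted above and the 172.3 K mentioned in the footnote — fall straight out of the CD’s sector layout. The little Python calculation below is nothing more than that arithmetic written down.

    # A CD is read at 75 sectors per second, a rate fixed by the Red Book audio format.
    SECTORS_PER_SECOND = 75

    # Red Book audio: 44,100 samples per second, 2 channels, 2 bytes per sample.
    audio_bytes_per_second = 44_100 * 2 * 2
    print(audio_bytes_per_second)        # 176,400 bytes/s of raw audio

    # That works out to 2,352 bytes per sector. When a sector carries computer data
    # instead, 304 of those bytes go to sync, headers, and extra error correction,
    # leaving 2,048 bytes of user data per sector.
    print(2_352 * SECTORS_PER_SECOND)    # 176,400 bytes/s: the footnote's 172.3 K, in 1,024-byte units
    print(2_048 * SECTORS_PER_SECOND)    # 153,600 bytes/s: the 153.6 K quoted above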

More problematic was a CD drive’s atrocious seek speed — i.e., the speed at which files could be located for reading on a disc. An average 10 MB hard disk of 1986 had a typical seek time of about 100 milliseconds, a worst-case-scenario maximum of about 200 — although, again, high-performance models could improve on those figures by a factor of four. A CD drive, by contrast, had a typical seek time of 500 milliseconds, a maximum of 1000  — one full second. The designers of the music CD hadn’t been particularly concerned by the issue, for a music-CD player would spend the vast majority of its time reading linear streams of sound data. On those occasions when the user did request a certain track found deeper on the disc, even a full second spent by the drive in seeking her favorite song would hardly be noticed unduly, especially in comparison to the pain of trying to find something on a cassette or a record album. For storage of computer data, however, the slow seek speed gave far more cause for concern.

The Laser Magnetic Storage LaserDrive is typical of the oddball formats that proliferated during the early years of optical data storage. It could hold 1 GB on each side of a double-sided disc. Unfortunately, each disc cost hundreds of dollars, the unit itself thousands.

Given these issues of performance, which promised only to get more marked in comparison to hard drives as the latter continued to get faster, one might well ask why the industry was so determined to adapt the music CD specifically to data storage rather than using Philips and Sony’s work as a springboard to another optical format with affordances more suitable to the role. In fact, any number of companies did choose the latter course, developing optical formats in various configurations and capacities, many even offering the ability to write to as well as read from the disc. (Such units were called “WORM” drives, for “Write Once Read Many”; data, in other words, could be written to their discs, but not erased or rewritten thereafter.) But, being manufactured in minuscule quantities as essentially bespoke items, all such efforts were doomed to be extremely expensive.

The CD, on the other hand, had the advantage of an existing infrastructure dedicated to stamping out the little silver discs and filling them with data. At the moment, that data consisted almost exclusively of encoded music, but the process of making the discs didn’t care a whit what the ones and zeros being burned into them actually represented. CD-ROM would allow the computer industry to piggy-back on an extant, mature technology that was already nearing ubiquity. That was a huge advantage when set against the cost of developing a new format from scratch and setting up a similar infrastructure to turn it out in bulk — not to mention the challenge of getting the chaotic, hyper-competitive computer industry to agree on another format in the first place. For all these reasons, there was surprisingly little debate on whether adapting the music CD to the purpose of data storage was really the best way to go. For better or for worse, the industry hitched its wagon to the CD; its infelicities as a general-purpose data-storage solution would just have to be worked around.

One of the first problems to be confronted was the issue of a logical file format for CD-ROM. The physical layout of the bits on a data CD was largely dictated by the design of the platters themselves and the machinery used to burn data into them. Yet none of that existing infrastructure had anything to say about how a filesystem appropriate for use with a computer should work within that physical layout. Microsoft, understanding that a certain degree of inter-operability was a valuable thing to have even among the otherwise rival platforms that might wind up embracing CD-ROM, pushed early for a standardized logical format. As a preliminary step on the road to that landmark first CD-ROM Conference, they brought together a more intimate group of eleven other industry leaders at the High Sierra Resort and Casino in Lake Tahoe in November of 1985 to hash out a specification. Among those present were Philips, Sony, Apple, and DEC; notably absent was IBM, a clear sign of Microsoft’s growing determination to step out of the shadow of Big Blue and start dictating the direction of the industry in their own right. The so-called “High Sierra” format would be officially published in finalized form in May of 1986.

In the run-up to the first Microsoft CD-ROM Conference, then, everything seemed to be coming together nicely. CD-ROM had its problems, but virtually everyone agreed that it was a tremendously exciting development. For their part, Microsoft, driven by a Bill Gates who was personally passionate about the format and keenly aware that his company, the purveyor of clunky old MS-DOS, needed for reasons of public relations if nothing else a cutting-edge project to rival any of Apple’s, had established themselves as the driving force behind the nascent optical revolution. And then, just five days before the conference was scheduled to convene — timing that struck very few as accidental — Philips injected a seething ball of chaos into the system via something called CD-I.

CD-I was a different, competing file format for CD data storage. But CD-I was also much, much more. Excited by the success the music CD had enjoyed, Philips, with the tacit support of Sony, had decided to adapt the format into the all-singing, all-dancing, all-around future of home entertainment in the abstract. Philips would be making a CD-I box for the home, based on a minimalist operating system called OS-9 running on a Motorola 68000 processor. But this would be no typical home computer; the user would be able to control CD-I entirely using a VCR-style remote control. CD-I was envisioned as the interactive television of the future, a platform for not only conventional videogames but also lifestyle products of every description, from interactive astronomy lessons to the ultimate in exercise tapes. Philips certainly wasn’t short of ideas:

Think of owning an encyclopedia which presents chosen topics in several different ways. Watching a short audio/video sequence to gain a general background to the topic. Then choosing a word or subject for more in-depth study. Jumping to another topic without losing your place — and returning again after studying the related topic to proceed further. Or watching a cartoon film, concert, or opera with the interactive capabilities of CD-I added. Displaying the score, libretto, or text onscreen in a choice of languages. Or removing one singer or instrument to be able to sing along with the music.

Just as they had with the music CD, Philips would license the specifications to whoever else wanted to make gadgets of their own capable of playing the CD-I discs. They declared confidently that there would be as many CD-I players in the world as phonographs within a few years of the format’s debut, that “in the long run” CD-I “could be every bit as big as the CD-audio market.”

Already at the Microsoft CD-ROM Conference, Philips began aggressively courting developers in the existing computer-games industry to embrace CD-I. Plenty of them were more than happy to do so. Despite the optimism that dominated at the conference, it wasn’t clear how much priority Microsoft, who earned the vast majority of their money from business computing, would really give to more consumer-focused applications of CD-ROM like gaming. Philips, on the other hand, was a giant of consumer electronics. While they paid due lip service to applications of CD-I in areas like corporate training, it was always clear that it would be first and foremost a technology for the living room, one that comprehensively addressed what most believed was the biggest factor limiting the market for conventional computer games: that the machines that ran them were just too fiddly to operate. At the time that CD-I was first announced, the videogame console was almost universally regarded as a dead fad; the machine that would so dramatically reverse that conventional wisdom, the Nintendo Entertainment System, was still an oddball upstart being sold in selected markets only. Thus many game makers saw CD-I as their only viable route out of the back bedroom and into the living room — into the mainstream of home entertainment.

So, when Philips spoke, the game developers listened. Many publishers, including big powerhouses like Activision as well as smaller boutique houses like the 68000 specialists Aegis Development, committed to CD-I projects during 1986, receiving in return a copy of the closely guarded “Green Book” that detailed the inner workings of the system. There was no small pressure to get in on the action quickly, for Philips was promising to ship the first finished CD-I units in time for the Christmas of 1987. Trip Hawkins of Electronic Arts made CD-I a particular priority, forming a whole new in-house development division for the platform. He’d been waiting for a true next-generation mainstream game machine for years. At first, he’d thought the Commodore Amiga would be that machine, but Commodore’s clueless marketing and the Amiga’s high price were making such an outcome look less and less likely. So now he was looking to CD-I, which promised graphics and sound as good as those of the Amiga, along with the all but infinite storage of the unpirateable CD format, and all in a tidy, inexpensive package designed for the living room. What wasn’t to like? He imagined Silicon Valley becoming “the New Hollywood,” imagined a game like Electronic Arts’s hit Starflight remade as a CD-I experience.

You could actually do it just like a real movie. You could hire a costume designer from the movie business, and create special-effects costumes for the aliens. Then you’d videotape scenes with the aliens, and have somebody do a soundtrack for the voices and for the text that they speak in the game.

Then you’d digitize all of that. You could fill up all the space on the disc with animated aliens and interesting sounds. You would also have a universe that’s a lot more interesting to look at. You might have an out-of-the-cockpit view, like Star Trek, with planets that look like planets — rotating, with detailed zooms and that sort of thing.

Such a futuristic vision seemed thoroughly justifiable based on Philips’s CD-I hype, which promised a rich multimedia environment combining CD-quality stereo sound with full-motion video, all at a time when just displaying a photo-realistic still image captured from life on a computer screen was considered an amazing feat. (Among extant personal computers, only the Amiga could manage it.) When developers began to dive into the Green Book, however, they found the reality of CD-I often sharply at odds with the hype. For instance, if you decided to take advantage of the CD-quality audio, you had to tie up the CD drive entirely to stream it, meaning you couldn’t use it to fetch pictures or video or anything else for this supposed rich multimedia environment.

Video playback became an even bigger sore spot, one that echoed back to those fundamental limitations that had been baked into the CD when it was regarded only as a medium for music delivery. A transfer rate of barely 150 K per second just wasn’t much to work with in terms of streaming video. Developers found themselves stymied by an infuriating Catch-22. If you tried to work with an uncompressed or only modestly compressed video format, you simply couldn’t read it off the disc fast enough to display it in real time. Yet if you tried to use more advanced compression techniques, it became so expensive in terms of computation to decompress the data that the CD-I unit’s 68000 CPU couldn’t keep up. The best you could manage was to play video snippets that only filled a quarter of the screen — not a limitation that felt overly compatible with the idea of CD-I as the future of home entertainment in the abstract. It meant that a game like the old laser-disc-driven arcade favorite Dragon’s Lair, the very sort of thing people tended to think of first when you mentioned optical storage in the context of entertainment, would be impossible with CD-I. The developers who had signed contracts with Philips and committed major resources to CD-I could only soldier on and hope the technology would continue to evolve.

By 1987, then, the CD as a computer format had been split into two camps. While the games industry had embraced CD-I, the powers that were in business computing had jumped aboard the less ambitious, Microsoft-sponsored standard of CD-ROM, which solved issues like the problematic video playback of CD-I by the simple expedient of not having anything at all to say about them. Perhaps the most impressive of the very early CD-ROM products was the Microsoft Bookshelf, which combined Roget’s Thesaurus, The American Heritage Dictionary, The Chicago Manual of Style, The World Almanac and Book of Facts, and Bartlett’s Familiar Quotations alongside spelling and grammar checkers, a ZIP Code directory, and a collection of forms and form letters, all on a single disc — as fine a demonstration of the potential of the new format as could be imagined short of all that rich multimedia that Philips had promised. Microsoft proudly noted that Bookshelf was their largest single product ever in terms of the number of bits it contained and their smallest ever in physical size. Nevertheless, with most drives costing north of $1000 and products to use with them like Microsoft Bookshelf hundreds more, CD-ROM remained a pricey proposition found in vanishingly few homes — and for that matter not in all that many businesses either.

But at least actual products were available in CD-ROM format, which was more than could be said for CD-I. As 1986 turned into 1987, developers still hadn’t received any CD-I hardware at all, being forced to content themselves with printed specifications and examples of the system in action distributed on videotape by Philips. Particularly for a small company like Aegis, which had committed heavily to a game based on Jules Verne’s 20,000 Leagues Under the Sea, for which they had recruited Jim Sachs of Defender of the Crown fame as illustrator, it was turning into a potentially dangerous situation.

The computer industry — even those parts of it now more committed to CD-I than CD-ROM — dutifully came together once again for the second Microsoft CD-ROM Conference in March of 1987. In contrast to the unusual Pacific Northwest sunshine of the previous conference, the weather this year seemed to match the more unsettled mood: three days of torrential downpour. It was a more skeptical and decidedly less Woodstock-like audience who filed into the auditorium one day for a presentation by so unlikely a party as the venerable old American conglomerate General Electric. But in the course of that presentation, the old rapture came back in a hurry, culminating in a spontaneous standing ovation. What had so shocked and amazed the audience was the impossible made real: full-screen video running in real time off a CD drive connected to what to all appearances was an ordinary IBM PC/AT computer. Digital Video Interactive, or DVI, had just made its dramatic debut.

DVI’s origins dated back to 1983, when engineer Larry Ryan of another old-school American company, RCA, had been working on ways to make the old analog laser-disc technology more interactive. Growing frustrated with the limitations he kept bumping against, he proposed to his bosses that RCA dump the laser disc from the equation entirely and embrace digital optical storage. They agreed, and a new project on those lines was begun in 1984. It was still ongoing two years later — just reaching the prototype stage, in fact — when General Electric acquired RCA.

DVI worked by throwing specialized hardware at the problem which Philips had been fruitlessly trying to solve via software alone. By using ultra-intensive compression techniques, it was possible to crunch video playing at a resolution of 256 × 240 — not an overwhelming resolution even by the standards of the day, but not that far below the practical resolution of a typical television set either — down to a size below 153.6 K per second of footage without losing too much quality. This fact was fairly well-known, not least to Philips. The bottleneck had always been the cost of decompressing the footage fast enough to get it onto the screen in real time. DVI attacked this problem via a hardware add-on that consisted principally of a pair of semi-autonomous custom chips designed just for the task of decompressing the video stream as quickly as possible. DVI effectively transformed the potential 75 minutes of sound that could be stored on a CD into 75 minutes of video.
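A rough back-of-the-envelope calculation shows just how aggressive that compression had to be. The frame rate and color depth below are my own assumptions, chosen only to illustrate the order of magnitude; they aren’t DVI’s actual internal video format.

    WIDTH, HEIGHT = 256, 240        # the resolution quoted above
    FPS = 30                        # assumed frame rate, for illustration only
    BYTES_PER_PIXEL = 1             # assume 8-bit color, again purely for illustration

    raw_bytes_per_second = WIDTH * HEIGHT * BYTES_PER_PIXEL * FPS
    cd_bytes_per_second = 153_600   # what a single-speed CD drive can deliver

    print(raw_bytes_per_second)                        # 1,843,200 bytes/s of raw frames
    print(raw_bytes_per_second / cd_bytes_per_second)  # 12.0, i.e. roughly 12:1 compression

Even at a modest 8 bits per pixel, then, the video has to be squeezed by a factor of twelve just to fit through the pipe; at anything approaching true color the ratio climbs well past 30:1, which is why brute-force decompression in software was hopeless for a 68000-class CPU and why DVI’s custom silicon caused such a stir.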

Philosophically, the design bore similarities to the Amiga’s custom chips — similarities which became even more striking when you considered some of the other capabilities that came almost as accidental byproducts of the design. You could, for instance, overlay conventional graphics onto the streaming video by using the computer’s normal display circuitry in conjunction with DVI, just as you could use an Amiga to overlay titles and other graphics onto a “genlocked” feed from a VCR or other video source. But the difference with DVI was that it required no complicated external video source at all, just a CD in the computer’s CD drive. The potential for games was obvious.

In this demonstration of DVI’s potential, the user can explore an ancient Mayan archeological site that’s depicted using real-world video footage, while the icons used as controls are traditional computer graphics.

Still, DVI’s dramatic debut barely ended before the industry’s doubts began. It seemed clear enough that DVI was technically better than CD-I, at least in the hugely important area of video playback, but General Electric — hardly anyone’s idea of a nimble innovator — offered as yet no clear road map for the technology, no hint of what they really planned to do with it. Should game developers place their CD-I projects on hold to see if something better really was coming in the form of DVI, or should they charge full speed ahead and damn the torpedoes? Some did one, some did the other; some made halfhearted commitments to both technologies, some vacillated between them.

But worst of all was the effect that DVI had on Philips. That presentation threw them into a spin from which they never really recovered. Fearful of getting their clock cleaned in the marketplace by a General Electric product based on DVI, Philips stopped CD-I in its tracks, demanding that a way be found to make it do full-screen video as well. From an original plan to ship the first finished CD-I units in time for Christmas 1987, the timetable slipped to promise the first prototypes for developers by January of 1988. Then that deadline also came and went, and all that developers had received were software emulators. Now the development prototypes were promised by summer 1988, with finished units expected to ship in 1989. The delay notwithstanding, Philips still confidently predicted sales in “the tens of millions.” But then world domination was delayed again until 1990, then 1991.

Prototype CD-I units finally began reaching developers in early 1989, years behind schedule.

Wanting CD-I to offer the best of everything, Philips let the project chase its own tail for years, trying to address every actual or potential innovation from every actual or potential rival. The game publishers who had jumped aboard with such enthusiasm in the early days were wracked with doubt upon the announcement of each successive delay. Should they jump off the merry-go-round now and cut their losses, or should they stay the course in the hope that CD-I finally would turn into the revolutionary product Philips had been promising for so long? To this day, you merely have to mention CD-I to even the most mild-mannered old games-industry insider to be greeted with a torrent of invective. Philips’s merry-go-round cost the industry dearly. Some smaller developers who had trusted Philips enough to bet their very survival on CD-I paid the ultimate price. Aegis, for example, went out of business in 1990 with CD-I still vaporware.

While CD-I chased its tail, General Electric, the unwitting instigators of all this chaos, tried to decide in their slow, bureaucratic way what to do with this DVI thing they’d inherited. Thus things were as unsettled as ever on the CD-I and DVI fronts when the third Microsoft CD-ROM Conference convened in March of 1988. The old plain-Jane CD-ROM format, however, seemed still to be advancing slowly but steadily. Certainly Microsoft appeared to be in fine fettle; harking back to the downpour that had greeted the previous year’s conference, they passed out oversized gold umbrellas to everyone — emblazoned, naturally, with the Microsoft logo in huge type. They could announce at their conference that the High Sierra logical format for CD-ROM had been accepted, with some modest modifications to support languages other than English, by the International Standards Organization as something that would henceforward be known as “ISO 9660.” (It remains the standard logical format for CD-ROM to this day.) Meanwhile Philips and Sony were about to begrudgingly codify the physical format for CD-ROM, extant already as a de facto standard for several years now, as the Yellow Book, latest addition to a library of binders that was turning into quite the rainbow. Apple, who had previously been resistant to CD-ROM, driven as it was by their arch-rival Microsoft, showed up with an official CD-ROM drive for a Macintosh or even an Apple II, albeit at a typically luxurious Apple price of $1200. Even IBM showed up for the conference this time, albeit with a single computer attached to a non-IBM CD-ROM drive and a carefully noncommittal official stance on all this optical evangelism.
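ISO 9660 being still with us, it’s easy to peek inside the structure that was hashed out at Lake Tahoe and then refined into the international standard. The sketch below is plain Python run against a disc image with an invented file name; it reads just enough of the Primary Volume Descriptor, the structure found at sector 16 of any ISO 9660 disc, to report the volume’s label and size.

    import struct

    SECTOR = 2048  # the logical sector size ISO 9660 inherited from the CD-ROM format

    def read_primary_volume_descriptor(path):
        """Return (volume label, volume size in bytes) from an ISO 9660 disc image."""
        with open(path, "rb") as f:
            f.seek(16 * SECTOR)                  # volume descriptors begin at sector 16
            while True:
                sector = f.read(SECTOR)
                vd_type, magic = sector[0], sector[1:6]
                if magic != b"CD001":
                    raise ValueError("not an ISO 9660 image")
                if vd_type == 1:                 # 1 = Primary Volume Descriptor
                    label = sector[40:72].decode("ascii").rstrip()
                    blocks = struct.unpack_from("<I", sector, 80)[0]  # little-endian copy of the both-endian field
                    return label, blocks * SECTOR  # assumes the usual 2,048-byte logical blocks
                if vd_type == 255:               # 255 = terminator: no PVD found
                    raise ValueError("no primary volume descriptor found")

    # Hypothetical usage; "example.iso" is just a stand-in file name:
    # print(read_primary_volume_descriptor("example.iso"))

That a format settled on in 1985 and 1986 can still be parsed in a couple of dozen lines says something about how modest, and how durable, the plain-Jane CD-ROM standard really was.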

As CD-ROM gathered momentum, the stories of DVI and CD-I alike were already beginning to peter out in anticlimax. After doing little with DVI for eighteen long months, General Electric finally sold it to Intel at the end of 1988, explaining that DVI just “didn’t mesh with [their] strategic plans.” Intel began shipping DVI setups to early adopters in 1989, but they cost a staggering $20,000 — a long, long way from a reasonable consumer price point. DVI continued to lurch along into the 1990s, but the price remained too high. Intel, possessed of no corporate tradition of marketing directly to consumers, often seemed little more motivated to turn DVI into a practical product than had been General Electric. Thus did the technology that had caused such a sensation and such disruption in 1987 gradually become yesterday’s news.

Ironically, we can lay the blame for the creeping irrelevancy of DVI directly at the feet of the work for which Intel was best known. As Gordon Moore — himself an Intel man — had predicted decades before, the overall throughput of Intel’s most powerful microprocessors continued to double every two years or so. This situation meant that the problem DVI addressed through all that specialized hardware — that of conventional general-purpose CPUs not having enough horsepower to decompress an ultra-compressed video stream fast enough — wasn’t long for this world. And meanwhile other engineers were attacking the problem from the other side, addressing the standard CD’s reading speed of just 153.6 K per second. They realized that by applying an integral multiplier to the timing of a CD drive’s circuitry, its reading (and seeking) speed could be increased correspondingly. Soon so-called “2X” drives began to appear, capable of reading data at well over 300 K per second, followed in time by “4X” drives, “8X” drives, and whatever unholy figure they’ve reached by today. These developments rendered all of the baroque circuitry of DVI pointless, a solution in search of a problem. Who needed all that complicated stuff?

CD-I’s end was even more protracted and ignominious. The absurd wait eventually got to be too much for even the most loyal CD-I developers. One by one, they dropped their projects. It marked a major tipping point when in 1989 Electronic Arts, the most enthusiastic of all the software publishers in the early days of CD-I, closed down the department they had formed to develop for the platform, writing off millions of dollars on the aborted venture. In another telling sign of the times, Greg Riker, the manager of that department, left Electronic Arts to work for Microsoft on CD-ROM.

When CD-I finally trickled onto store shelves just a few weeks shy of Christmas 1991, it was able to display full-screen video of a sort but only in 128 colors, and was accompanied by an underwhelming selection of slapdash games and lifestyle products, most funded by Philips themselves, that were a far cry from those halcyon expectations of 1986. CD-I sales disappointed — immediately, consistently, and comprehensively. Philips, nothing if not persistent, beat the dead horse for some seven years before giving up at last, having sold only 1 million units in total, many of them at fire-sale discounts.

In the end, the big beneficiary of the endless CD-I/DVI standoff was CD-ROM, the simple, commonsense format that had made its public debut well before either of them. By 1993 or so, you didn’t need anything special to play video off a CD at equivalent or better quality to that which had been so amazing in 1987; an up-to-date CPU combined with a 2X CD-ROM drive would do the job just fine. The Microsoft standard had won out. Funny how often that happened in the 1980s and 1990s, isn’t it?

Bill Gates’s reputation as a master Machiavellian being what it is, I’ve heard it suggested that the chaos and indecision which followed the public debut of DVI had been consciously engineered by him — that he had convinced a clueless General Electric to give that 1987 demonstration and later convinced Intel to keep DVI at least ostensibly alive, thus paralyzing Philips long enough for everyday PC hardware and vanilla CD-ROM to win the day, all the while knowing full well that DVI would never amount to anything. That sounds a little far-fetched to this writer, but who knows? Philips’s decision to announce CD-I five days before Microsoft’s CD-ROM Conference had clearly been a direct shot across Bill Gates’s bow, and such challenges tended not to end well for the challenger. Anything else is, and must likely always remain, mere speculation.

(Sources: Amazing Computing of May 1986; Byte of May 1986, October 1986, April 1987, January 1989, May 1989, and December 1990; Commodore Magazine of November 1988; 68 Micro Journal of August/September 1989; Compute! of February 1987 and June 1988; Macworld of April 1988; ACE of September 1989, March 1990, and April 1990; The One of October 1988 and November 1988; Sierra On-Line’s newsletter of Autumn 1989; PC Magazine of April 29 1986; the premiere issue of AmigaWorld; episodes of the Computer Chronicles television series entitled “Optical Storage Devices,” “CD-ROMs,” and “Optical Storage”; the book CD-ROM: The New Papyrus from the Microsoft Press. Finally, my huge thanks to William Volk, late of Aegis and Mediagenic, for sharing his memories and impressions of the CD wars with me in an interview.)

Footnotes
1 The data on a music CD is actually read at a speed of approximately 172.3 K per second. The first CD-ROM drives had an effective reading speed that was slightly slower due to the need for additional error-correcting checksums in the raw data.
 

Posted on September 30, 2016 in Digital Antiquaria, Interactive Fiction

 


The Freedom to Associate

In 1854, an Austrian priest and physics teacher named Gregor Mendel sought and received permission from his abbot to plant a two-acre garden of pea plants on the grounds of the monastery at which he lived. Over the course of the next seven years, he bred together thousands upon thousands of the plants under carefully controlled circumstances, recording in a journal the appearance of every single offspring that resulted, as defined by seven characteristics: plant height, pod shape and color, seed shape and color, and flower position and color. In the end, he collected enough data to formulate the basis of the modern science of genetics, in the form of a theory of dominant and recessive traits passed down in pairs from generation to generation. He presented his paper on the subject, “Experiments on Plant Hybridization,” before the Natural History Society of Brünn in 1865, and saw it published in a poorly circulated scientific journal the following year.

And then came… nothing. For various reasons — perhaps due partly to the paper’s unassuming title, perhaps due partly to the fact that Mendel was hardly a known figure in the world of biology, undoubtedly due largely to the poor circulation of the journal in which it was published — few noticed it at all, and those who did dismissed it seemingly without grasping its import. Most notably, Charles Darwin, whose On the Origin of Species had been published while Mendel was in the midst of his own experiments, seems never to have been aware of the paper at all, thereby missing this key gear in the mechanism of evolution. Mendel was promoted to abbot of his monastery shortly after the publication of his paper, and the increased responsibilities of his new post ended his career as a scientist. He died in 1884, remembered as a quiet man of religion who had for a time been a gentleman dabbler in the science of botany.

But then, at the turn of the century, the German botanist Carl Correns stumbled upon Mendel’s work while conducting his own investigations into floral genetics, becoming in the process the first to grasp its true significance. To his huge credit, he advanced Mendel’s name as the real originator of the set of theories which he, along with one or two other scientists working independently, was beginning to rediscover. Correns effectively shamed those other scientists as well into acknowledging that Mendel had figured it all out decades before any of them even came close. It was truly a selfless act; today the name of Carl Correns is unknown except in esoteric scientific circles, while Gregor Mendel’s has been done the ultimate honor of becoming an adjective (“Mendelian”) and a noun (“Mendelism”) locatable in any good dictionary.

Vannevar Bush

So, all’s well that ends well, right? Well, maybe, but maybe not. Some 30 years after the rediscovery of Mendel’s work, an American named Vannevar Bush, dean of MIT’s School of Engineering, came to see the 35 years that had passed between the publication of Mendel’s theory and the affirmation of its importance as a troubling symptom of the modern condition. Once upon a time, all knowledge had been regarded as of a piece, and it had been possible for a great mind to hold within itself huge swathes of this collective knowledge of humanity, everything informing everything else. Think of that classic example of a Renaissance man, Leonardo da Vinci, who was simultaneously a musician, a physicist, a mathematician, an anatomist, a botanist, a geologist, a cartographer, an alchemist, an astronomer, an engineer, and an inventor. Most of all, of course, he was a great visual artist, but he used everything else he was carrying around in that giant brain of his to create paintings and drawings as technically meticulous as they were artistically sublime.

By Bush’s time, however, the world had long since entered the Age of the Specialist. As the sheer quantity of information in every field exploded, those who wished to do worthwhile work in any given field — even those people gifted with giant brains — were increasingly being forced to dedicate their intellectual lives entirely to that field and only that field, just to keep up. The intellectual elite were in danger of becoming a race of mole people, closeted one-dimensionals fixated always on the details of their ever more specialized trades, never on the bigger picture. And even then, the amount of information surrounding them was so vast, and existing systems for indexing and keeping track of it all so feeble, that they could miss really important stuff within their own specialties; witness the way the biologists of the late nineteenth century had missed Gregor Mendel’s work, and the 35-year head start it had cost the new science of genetics. “Mendel’s work was lost,” Bush would later write, “because of the crudity with which information is transmitted between men.” How many other major scientific advances were lying lost in the flood of articles being published every year, a flood that had increased by an order of magnitude just since Mendel’s time? “In this are thoughts,” wrote Bush, “certainly not often as great as Mendel’s, but important to our progress. Many of them become lost; many others are repeated over and over.” “This sort of catastrophe is undoubtedly being repeated all around us,” he believed, “as truly significant attainments become lost in the sea of the inconsequential.”

Bush’s musings were swept aside for a time by the rush of historical events. As the prospect of another world war loomed, he became President Franklin Delano Roosevelt’s foremost advisor on matters involving science and engineering. During the war, he shepherded through countless major advances in the technologies of attack and defense, culminating in the most fearsome weapon the world had ever known: the atomic bomb. It was actually this last that caused Bush to return to the seemingly unrelated topic of information management, a problem he now saw in a more urgent light than ever. Clearly the world was entering a new era, one with far less tolerance for the human folly, born of so much context-less mole-person ideology, that had spawned the current war.

Practical man that he was, Bush decided there was nothing for it but to roll up his sleeves and make a concrete proposal describing how humanity could solve the needle-in-a-haystack problem of the modern information explosion. Doing so must entail grappling with something as fundamental as “how creative men think, and what can be done to help them think. It is a problem of how the great mass of material shall be handled so that the individual can draw from it what he needs — instantly, correctly, and with utter freedom.”

As revolutionary manifestos go, Vannevar Bush’s “As We May Think” is very unusual in terms of both the man that wrote it and the audience that read it. Bush was no Karl Marx, toiling away in discontented obscurity and poverty. On the contrary, he was a wealthy upper-class patrician who was, as a member of the White House inner circle, about as fabulously well-connected as it was possible for a man to be. His article appeared first in the July 1945 edition of the Atlantic Monthly, hardly a bastion of radical thought. Soon after, it was republished in somewhat abridged form by Life, the most popular magazine on the planet. Thereby did this visionary document reach literally millions of readers.

With the atomic bomb still a state secret, Bush couldn’t refer directly to his real reasons for wanting so urgently to write down his ideas now. Yet the dawning of the atomic age nevertheless haunts his article.

It is the physicists who have been thrown most violently off stride, who have left academic pursuits for the making of strange destructive gadgets, who have had to devise new methods for their unanticipated assignments. They have done their part on the devices that made it possible to turn back the enemy, have worked in combined effort with the physicists of our allies. They have felt within themselves the stir of achievement. They have been part of a great team. Now, as peace approaches, one asks where they will find objectives worthy of their best.

Seen in one light, Bush’s essay is similar to many of those that would follow from other Manhattan Project alumni during the uncertain interstitial period between the end of World War II and the onset of the Cold War. Bush was like many of his colleagues in feeling the need to advance a utopian agenda to counter the apocalyptic potential of the weapon they had wrought, in needing to see the ultimate evil that was the atomic bomb in almost paradoxical terms as a potential force for good that would finally shake the world awake.

Bush was true to his engineer’s heart, however, in basing his utopian vision on technology rather than politics. The world was drowning in information, making the act of information synthesis — intradisciplinary and interdisciplinary alike — ever more difficult.

The difficulty seems to be, not so much that we publish unduly in view of the extent and variety of present-day interests, but rather that publication has been extended far beyond our present ability to make real use of the record. The summation of human experience is being expanded at a prodigious rate, and the means we use for threading through the consequent maze to the momentarily important item is the same as was used in the days of square-rigged ships.

Our ineptitude in getting at the record is largely caused by the artificiality of systems of indexing. When data of any sort are placed in storage, they are filed alphabetically or numerically, and information is found (when it is) by tracing it down from subclass to subclass. It can be in only one place, unless duplicates are used; one has to have rules as to which path will locate it, and the rules are cumbersome. Having found one item, moreover, one has to emerge from the system and reenter on a new path.

The human mind does not work that way. It operates by association. With one item in its grasp, it snaps instantly to the next that is suggested by the association of thoughts, in accordance with some intricate web of trails carried by the cells of the brain. It has other characteristics, of course; trails that are not frequently followed are prone to fade, items are not fully permanent, memory is transitory. Yet the speed of action, the intricacy of trails, the detail of mental pictures, is awe-inspiring beyond all else in nature.

Man cannot hope fully to duplicate this mental process artificially, but he certainly ought to be able to learn from it. In minor ways he may even improve it, for his records have relative permanency. The first idea, however, to be drawn from the analogy concerns selection. Selection by association, rather than indexing, may yet be mechanized. One cannot hope thus to equal the speed and flexibility with which the mind follows an associative trail, but it should be possible to beat the mind decisively in regard to the permanence and clarity of the items resurrected from storage.

Bush was not among the vanishingly small number of people who were working in the nascent field of digital computing in 1945. His “memex,” the invention he proposed to let an individual free-associate all of the information in her personal library, was more steampunk than cyberpunk, all whirring gears, snickering levers, and whooshing microfilm strips. But really, those things are just details; he got all of the important stuff right. I want to quote some more from “As We May Think,” and somewhat at length at that, because… well, because its vision of the future is just that important. This is how the memex should work:

When the user is building a trail, he names it, inserts the name in his code book, and taps it out on his keyboard. Before him are the two items to be joined, projected onto adjacent viewing positions. At the bottom of each there are a number of blank code spaces, and a pointer is set to indicate one of these on each item. The user taps a single key, and the items are permanently joined. In each code space appears the code word. Out of view, but also in the code space, is inserted a set of dots for photocell viewing; and on each item these dots by their positions designate the index number of the other item.

Thereafter, at any time, when one of these items is in view, the other can be instantly recalled merely by tapping a button below the corresponding code space. Moreover, when numerous items have been thus joined together to form a trail, they can be reviewed in turn, rapidly or slowly, by deflecting a lever like that used for turning the pages of a book. It is exactly as though the physical items had been gathered together from widely separated sources and bound together to form a new book. It is more than this, for any item can be joined into numerous trails.

The owner of the memex, let us say, is interested in the origin and properties of the bow and arrow. Specifically he is studying why the short Turkish bow was apparently superior to the English long bow in the skirmishes of the Crusades. He has dozens of possibly pertinent books and articles in his memex. First he runs through an encyclopedia, finds an interesting but sketchy article, leaves it projected. Next, in a history, he finds another pertinent item, and ties the two together. Thus he goes, building a trail of many items. Occasionally he inserts a comment of his own, either linking it into the main trail or joining it by a side trail to a particular item. When it becomes evident that the elastic properties of available materials had a great deal to do with the bow, he branches off on a side trail which takes him through textbooks on elasticity and tables of physical constants. He inserts a page of longhand analysis of his own. Thus he builds a trail of his interest through the maze of materials available to him.

And his trails do not fade. Several years later, his talk with a friend turns to the queer ways in which a people resist innovations, even of vital interest. He has an example, in the fact that the outraged Europeans still failed to adopt the Turkish bow. In fact he has a trail on it. A touch brings up the code book. Tapping a few keys projects the head of the trail. A lever runs through it at will, stopping at interesting items, going off on side excursions. It is an interesting trail, pertinent to the discussion. So he sets a reproducer in action, photographs the whole trail out, and passes it to his friend for insertion in his own memex, there to be linked into the more general trail.

Wholly new forms of encyclopedias will appear, ready-made with a mesh of associative trails running through them, ready to be dropped into the memex and there amplified. The lawyer has at his touch the associated opinions and decisions of his whole experience, and of the experience of friends and authorities. The patent attorney has on call the millions of issued patents, with familiar trails to every point of his client’s interest. The physician, puzzled by a patient’s reactions, strikes the trail established in studying an earlier similar case, and runs rapidly through analogous case histories, with side references to the classics for the pertinent anatomy and histology. The chemist, struggling with the synthesis of an organic compound, has all the chemical literature before him in his laboratory, with trails following the analogies of compounds, and side trails to their physical and chemical behavior.

The historian, with a vast chronological account of a people, parallels it with a skip trail which stops only on the salient items, and can follow at any time contemporary trails which lead him all over civilization at a particular epoch. There is a new profession of trail blazers, those who find delight in the task of establishing useful trails through the enormous mass of the common record. The inheritance from the master becomes, not only his additions to the world’s record, but for his disciples the entire scaffolding by which they were erected.
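Stripped of the microfilm and photocells, the data structure Bush is describing here is strikingly simple. The following bit of Python is my own loose, anachronistic sketch of a memex — the class and method names are mine, not Bush’s — but it captures the essentials: items, named trails joining them, and trails that can be replayed in turn or copied out for a friend’s machine.

```python
# A loose, modern analogue of Bush's memex trails -- my own illustration,
# not anything specified in "As We May Think" itself.

class Memex:
    def __init__(self):
        self.items = {}    # item name -> its content (a stand-in for a microfilm frame)
        self.trails = {}   # trail name -> ordered list of item names (the "code book")

    def add_item(self, name, content):
        self.items[name] = content

    def join(self, trail_name, *item_names):
        # Join items onto a named trail; an item may sit on any number of trails.
        self.trails.setdefault(trail_name, []).extend(item_names)

    def replay(self, trail_name):
        # Review a trail in turn, "rapidly or slowly."
        for name in self.trails[trail_name]:
            yield name, self.items[name]

    def share(self, trail_name):
        # "Photograph the whole trail out" to pass to a friend's memex.
        return list(self.trails[trail_name])


memex = Memex()
memex.add_item("encyclopedia", "sketchy article on the bow and arrow")
memex.add_item("history", "the Turkish bow in the skirmishes of the Crusades")
memex.add_item("analysis", "my own longhand page on the elasticity of bow materials")
memex.join("turkish bow", "encyclopedia", "history", "analysis")

for name, content in memex.replay("turkish bow"):
    print(name, "->", content)
```

That a few dozen lines can model the whole scheme is rather the point: the hard part in 1945 was never the logic of association but the machinery needed to carry it out.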

Ted Nelson

There is no record of what all those millions of Atlantic Monthly and Life readers made of Bush’s ideas in 1945 — or, for that matter, whether they made anything of them at all. In the decades that followed, however, the article became a touchstone of the burgeoning semi-underground world of creative computing. Among its discoverers was Ted Nelson, who is, depending on whom you talk to, either one of the greatest visionaries in the history of computing or one of the greatest crackpots — or, quite possibly, both. Born in 1937 to a Hollywood director and his actress wife, then raised by his wealthy and indulgent grandparents following the inevitable Hollywood divorce, Nelson would live a life largely defined by, as Gary Wolf put it in his classic profile for Wired magazine, his “aversion to finishing.” As in, finishing anything at all, or just the concept of finishing in the abstract. Well into middle age, he would be diagnosed with attention-deficit disorder, an alleged malady he came to celebrate as his “hummingbird mind.” This condition perhaps explains why he was so eager to find a way of forging permanent, retraceable associations among all the information floating around inside and outside his brain.

Nelson coined the terms “hypertext” and “hypermedia” at some point during the early 1960s, when he was a graduate student at Harvard. (Typically, he got a score of Incomplete in the course for which he invented them, not to mention an Incomplete on his PhD as a whole.) While they’re widely used all but interchangeably today, in Nelson’s original formulation the former term was reserved for purely textual works, the latter for those incorporating other forms of media, like images and sound. But today we’ll just go with the modern flow, call them all hypertexts, and leave it at that. In his scheme, then, hypertexts were texts capable of being “zipped” together with other hypertexts, memex-like, wherever the reader or writer wished to preserve associations between them. He presented his new buzzwords to the world at a conference of the Association for Computing Machinery in 1965, to little impact. Nelson, possessed of a loudly declamatory style of discourse and all the rabble-rousing fervor of a street-corner anarchist, would never be taken all that seriously by the academic establishment.

Instead, it being the 1960s and all, he went underground, embracing computing’s burgeoning counterculture. His eventual testament, one of the few things he ever did manage to complete — after a fashion, at any rate — was a massive 1200-page tome called Computer Lib/Dream Machines, self-published in 1974, just in time for the heyday of the Altair and the Homebrew Computer Club, whose members embraced Nelson as something of a patron saint. As the name would indicate, Computer Lib/Dream Machines was actually two separate books, bound back to back. Theoretically, Computer Lib was the more grounded volume, full of practical advice about gaining access to and using computers, while Dream Machines was full of the really out-there ideas. In practice, though, they were often hard to distinguish. Indeed, it was hard to even find anything in the books, which were published as mimeographed facsimile copies filled with jotted marginalia and cartoons drafted in Nelson’s shaky hand, with no table of contents or page numbers and no discernible organizing principle beyond the stream of consciousness of Nelson’s hummingbird mind. (I trust that the irony of a book concerned with finding new organizing principles for information itself being such an impenetrable morass is too obvious to be worth belaboring further.) Nelson followed Computer Lib/Dream Machines with 1981’s Literary Machines, a text written in a similar style that dwelt, when it could be bothered, at even greater length on the idea of hypertext.

The most consistently central theme of Nelson’s books, to whatever extent one could be discerned, was an elaboration of the hypertext concept he called Xanadu, after the pleasure palace in Samuel Taylor Coleridge’s poem “Kubla Khan.” The product of an opium-fueled hallucination, the 54-line poem is a mere fragment of a much longer work Coleridge had intended to write. Problem was, in the course of writing down the first part of his waking dream he was interrupted; by the time he returned to his desk he had simply forgotten the rest.

So, Nelson’s Xanadu was intended to preserve information that would otherwise be lost, which goal it would achieve through associative linking on a global scale. Beyond that, it was almost impossible to say precisely what Xanadu was or wasn’t. Certainly it sounds much like the World Wide Web to modern ears, but Nelson insists adamantly that the web is a mere bad implementation of the merest shadow of his full idea. Xanadu has been under allegedly active development since the late 1960s, making it the most long-lived single project in the history of computer programming, and by far history’s most legendary piece of vaporware. As of this writing, the sum total of all those years of work is a set of web pages written in Nelson’s inimitable declamatory style, littered with angry screeds against the World Wide Web, along with some online samples that either don’t work quite right or are simply too paradigm-shattering for my poor mind to grasp.

In my own years on this planet, I’ve come to reserve my greatest respect for people who finish things, a judgment which perhaps makes me less than the ideal critic of Ted Nelson’s work. Nevertheless, even I can recognize that Nelson deserves huge credit for transporting Bush’s ideas to their natural habitat of digital computers, for inventing the term “hypertext,” for defining an approach to links (or “zips”) in a digital space, and, last but far from least, for making the crucial leap from Vannevar Bush’s concept of the single-user memex machine to an interconnected global network of hyperlinks.

But of course ideas, of which both Bush and Nelson had so many, are not finished implementations. During the 1960s, 1970s, and early 1980s, there were various efforts — in addition, that is, to the quixotic effort that was Xanadu — to wrestle at least some of the concepts put forward by these two visionaries into concrete existence. Yet it wouldn’t be until 1987 that a corporation with real financial resources and real commercial savvy would at last place a reasonably complete implementation of hypertext before the public. And it all started with a frustrated programmer looking for a project.

Steve Jobs and Bill Atkinson

Had he never had anything to do with hypertext, Bill Atkinson’s place in the history of computing would still be assured. Coming to Apple Computer in 1978, when the company was only about eighteen months removed from that famous Cupertino garage, Atkinson was instrumental in convincing Steve Jobs to visit the Xerox Palo Alto Research Center, thereby setting in motion the chain of events that would lead to the Macintosh. A brilliant programmer by anybody’s measure, he eventually wound up on the Lisa team. He wrote the routines to draw pixels onto the Lisa’s screen — routines on which, what with the Lisa being a fundamentally graphical machine whose every display was bitmapped, every other program depended. Jobs was so impressed by Atkinson’s work on what he named LisaGraf that he recruited him to port his routines over to the nascent Macintosh. Atkinson’s routines, now dubbed QuickDraw, would remain at the core of MacOS for the next fifteen years. But Atkinson’s contribution to the Mac went yet further: after QuickDraw, he proceeded to design and program MacPaint, one of the two applications included with the finished machine, and one that’s still justifiably regarded as a little marvel of intuitive user-interface design.

Atkinson’s work on the Mac was so essential to the machine’s success that shortly after its release he became just the fourth person to be named an Apple Fellow — an honor that carried with it, implicitly if not explicitly, a degree of autonomy for the recipient in the choosing of future projects. The first project that Atkinson chose for himself was something he called the Magic Slate, based on a gadget called the Dynabook that had been proposed years before by Xerox PARC alum (and Atkinson’s fellow Apple Fellow) Alan Kay: a small, thin, inexpensive handheld computer controlled via a touch screen. It was, as anyone who has ever seen an iPhone or iPad will attest, a prescient project indeed, but also one that simply wasn’t realizable using mid-1980s computer technology. Having been convinced of this at last by his skeptical managers after some months of flailing, Atkinson wondered if he might not be able to create the next best thing in the form of a sort of software version of the Magic Slate, running on the Macintosh desktop.

In a way, the Magic Slate had always had as much to do with the ideas of Bush and Nelson as it did with those of Kay. Atkinson had envisioned its interface as a network of “pages” which the user navigated among by tapping links therein — a hypertext in its own right. Now he transported the same concept to the Macintosh desktop, whilst making his metaphorical pages into metaphorical stacks of index cards. He called the end result, the product of many months of design and programming, “Wildcard.” Later, when the trademark “Wildcard” proved to be tied up by another company, it turned into “HyperCard” — a much better name anyway in my book.

By the time he had HyperCard in some sort of reasonably usable shape, Atkinson was all but convinced that he would have to either sell the thing to some outside software publisher or start his own company to market it. With Steve Jobs now long gone and with him much of the old Jobsian spirit of changing the world through better computing, Apple was heavily focused on turning the Macintosh into a practical business machine. The new, more sober mood in Cupertino — not to mention Apple’s more buttoned-down public image — would seem to indicate that they were hardly up for another wide-eyed “revolutionary” product. It was Alan Kay, still kicking around Cupertino puttering with this and that, who convinced Atkinson to give CEO John Sculley a chance before he took HyperCard elsewhere. Kay brokered a meeting between Sculley and Atkinson, in which the latter was able to personally demonstrate to the former what he’d been working on all these months. Much to Atkinson’s surprise, Sculley loved HyperCard. Apparently at least some of the old Jobsian fervor was still alive and well after all inside Apple’s executive suite.

At its most basic, a HyperCard stack to modern eyes resembles nothing so much as a PowerPoint presentation, albeit one which can be navigated non-linearly by tapping links on the slides themselves. Just as in PowerPoint, the HyperCard designer could drag and drop various forms of media onto a card. Taken even at this fairly superficial level, HyperCard was already a full-fledged hypertext-authoring (and hypertext-reading) tool — by no means the first specimen of its kind, but the first with the requisite combination of friendliness, practicality, and attractiveness to make it an appealing environment for the everyday computer user. One of Atkinson’s favorite early demo stacks had many cards with pictures of people wearing hats. If you clicked on a hat, you were sent to another card showing someone else wearing a hat. Ditto for other articles of fashion. It may sound banal, but this really was revolutionary, organization by association in action. Indeed, one might say that HyperCard was Vannevar Bush’s memex, fully realized at last.

But the system showed itself to have much, much more to offer when the author started to dig into HyperTalk, the included scripting language. All sorts of logic, simple or complex, could be accomplished by linking scripts to clicks on the surface of the cards. At this level, HyperCard became an almost magical tool for some types of game development, as we’ll see in future articles. It was also a natural fit for many other applications: information kiosks, interactive tutorials, educational software, expert systems, reference libraries, etc.
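For those who never used it, the shape of the thing — cards gathered into a stack, buttons on those cards, and scripts that run when the buttons are clicked — can be suggested with a rough analogue in Python. To be clear, this is my own sketch of the architecture, not HyperTalk syntax or anything from Apple; the hat-clicking demo it mimics is the one described above.

```python
# A rough analogue of HyperCard's card/stack/button model -- my own sketch,
# not actual HyperTalk or HyperCard internals.

class Button:
    def __init__(self, label, on_click):
        self.label = label
        self.on_click = on_click   # the "script" attached to this button

class Card:
    def __init__(self, name):
        self.name = name
        self.buttons = []

class Stack:
    def __init__(self):
        self.cards = {}
        self.current = None

    def add_card(self, card):
        self.cards[card.name] = card
        if self.current is None:
            self.current = card     # the first card added is shown first

    def go_to(self, card_name):
        # The equivalent of following a link to another card.
        self.current = self.cards[card_name]
        print("Now showing card:", self.current.name)

    def click(self, label):
        # Clicking a button runs whatever script its author attached to it.
        for button in self.current.buttons:
            if button.label == label:
                button.on_click(self)


stack = Stack()
stack.add_card(Card("man in a fedora"))
stack.add_card(Card("woman in a bowler"))
# A hat that links to another card showing someone else wearing a hat,
# as in Atkinson's early demo stack.
stack.cards["man in a fedora"].buttons.append(
    Button("hat", lambda s: s.go_to("woman in a bowler")))

stack.click("hat")   # -> Now showing card: woman in a bowler
```

In real HyperCard the “script” would be a HyperTalk handler attached to the button and the navigation a simple go-to-another-card command, but the division of labor is the same: both the links and the logic live right on the cards themselves.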

HyperCard in action

John Sculley himself premiered HyperCard at the August 1987 MacWorld show. Showing unusual largess in his determination to get HyperCard into the hands of as many people as possible as quickly as possible, he announced that henceforward all new Macs would ship with a free copy of the system, while existing owners could buy copies for their machines for just $49. He called HyperCard the most important product Apple had released during his tenure there. Considering that Sculley had also been present for the launch of the original Macintosh, this was certainly saying something. And yet he wasn’t clearly in the wrong either. As important as the Macintosh, the realization in practical commercial form of the computer-interface paradigms pioneered at Xerox PARC during the 1970s, has been to our digital lives of today, the concept of associative indexing — hyperlinking — has proved at least as significant. But then, the two do go together like strawberries and cream, the point-and-click paradigm providing the perfect way to intuitively navigate through a labyrinth of hyperlinks. It was no coincidence that an enjoyable implementation of hypertext appeared first on the Macintosh; the latter almost seemed a prerequisite for the former.

The full import of the concept of hypertext was far from easy to get across in advertising copy, but Apple gave it a surprisingly serious go, paying due homage to Vannevar Bush in the process.

In the wake of that MacWorld presentation, a towering tide of HyperCard hype rolled from one side of the computer industry to the other, out into the mainstream media, and then back again, over and over. Hypertext’s time had finally come. In 1985, it was an esoteric fringe concept known only to academics and a handful of hackers, being treated at real length and depth in print only in Ted Nelson’s own sprawling, well-nigh impenetrable tomes. Four years later, every bookstore in the land sported a shelf positively groaning with trendy paperbacks advertising hypertext this and hypertext that. By then the curmudgeons had also begun to come out in force, always a sure sign that an idea has truly reached critical mass. Presentations showed up in conference catalogs with snarky titles like “Hypertext: Will It Cook Me Breakfast Too?”

The curmudgeons had plenty of rabid enthusiasm to push back against. HyperCard, even more so than the Macintosh itself, had a way of turning the most sober-minded computing veterans into starry-eyed fanatics. Jan Lewis, a longtime business-computing analyst, declared that “HyperCard is going to revolutionize the way computing is done, and possibly the way human thought is done.” Throwing caution to the wind, she abandoned her post at InfoWorld to found HyperAge, the first magazine dedicated to the revolution. “There’s a tremendous demand,” she said. “If you look at the online services, the bulletin boards, the various ad hoc meetings, user groups — there is literally a HyperCulture developing, almost a cult.” To judge from her own impassioned statements, she should know. She recruited Ted Nelson himself — one of the HyperCard holy trinity of Bush, Nelson, and Atkinson — to write a monthly column.

HyperCard effectively amounted to an entirely new computing platform that just happened to run atop the older platform that was the Macintosh. As Lewis noted, user-created HyperCard stacks — this new platform’s word for “programs” or “software” — were soon being traded all over the telecommunications networks. The first commercial publisher to jump into the HyperCard game was, somewhat surprisingly, Mediagenic.[1] Bruce Davis, Mediagenic’s CEO, has hardly gone down in history as a paradigm of progressive thought in the realms of computer games and software in general, but he defied his modern reputation in this one area at least by pushing quickly and aggressively into “stackware.” One of the first examples of same that Mediagenic published was Focal Point, a collection of business and personal-productivity tools written by one Danny Goodman, who was soon to publish a massive bible called The Complete HyperCard Handbook, thus securing for himself the mantle of the new ecosystem’s go-to programming guru. Focal Point was a fine demonstration that just about any sort of software could be created by the sufficiently motivated HyperCard programmer. But it was another early Mediagenic release, City to City, that was more indicative of the system’s real potential. It was a travel guide to most major American cities — an effortlessly browsable and searchable guide to “the best food, lodgings, and other necessities” to be found in each of the metropolises in its database.

City to City

Other publishers — large, small, and just starting out — followed Mediagenic’s lead, releasing a bevy of fascinating products. The people behind The Whole Earth Catalog — themselves the inspiration for Ted Nelson’s efforts in self-publication — converted their current edition into a HyperCard stack filling a staggering 80 floppy disks. A tiny company called Voyager combined HyperCard with a laser-disc player — a very common combination among ambitious early HyperCard developers — to offer an interactive version of the National Gallery of Art which could be explored using such associative search terms as “Impressionist landscapes with boats.” Culture 1.0 let you explore its namesake through “3700 years of Western history — over 200 graphics, 2000 hypertext links, and 90 essays covering topics from the Black Plague to Impressionism,” all on just 7 floppy disks. Mission: The Moon, from the newly launched interactive arm of ABC News, gathered together details of every single Mercury, Gemini, and Apollo mission, including videos of each mission hosted on a companion laser disc. A professor of music converted his entire Music Appreciation 101 course into a stack. The American Heritage Dictionary appeared as stackware. And lots of what we might call “middlestackware” appeared to help budding programmers with their own creations: HyperComposer for writing music in HyperCard, Take One for adding animations to cards.

HyperCard was still missing just two things it needed to let hypertext reach its full potential. One was a storage medium capable of holding lots of data, to allow for truly rich multimedia experiences, combining the lavish amounts of video, still pictures, music, sound, and of course text that the system clearly cried out for. Thankfully, that problem was about to be remedied via a new technology which we’ll be examining in my very next article.

The other problem was a little thornier, and would take a little longer to solve. For all its wonders, a HyperCard stack was still confined to the single Macintosh on which it ran; there was no provision for linking between stacks running on entirely separate computers. In other words, one might think of a HyperCard stack as equivalent to a single web site running locally off a single computer’s hard drive, without the ability to field external links alongside its internal links. Thus the really key component of Ted Nelson’s Xanadu dream, that of a networked hypertext environment potentially spanning the entire globe, remained unrealized. In 1990, Bill Nisen, the developer of a hypertext system called Guide that slightly predated HyperCard but wasn’t as practical or usable, stated the problem thus:

The one thing that is precluding the wide acceptance of hypertext and hypermedia is adequate broadcast mechanisms. We need to find ways in which we can broadcast the results of hypermedia authoring. We’re looking to in the future the ubiquitous availability of local-area networks and low-cost digital-transmission facilities. Once we can put the results of this authoring into the hands of more users, we’re going to see this industry really explode.

Already at the time Nisen made that statement, a British researcher named Tim Berners-Lee had started to experiment with something he called the Hypertext Transfer Protocol. The first real web site, the beginning of the World Wide Web, would go online in 1991. It would take a few more years even from that point, but a shared hypertextual space of a scope and scale the likes of which few could imagine was on the way. The world already had its memex in the form of HyperCard. Now — and although this equivalency would scandalize Ted Nelson — it was about to get its Xanadu.

Associative indexing permeates our lives so thoroughly today that, as with so many truly fundamental paradigm shifts, the full scope of the change it has wrought can be difficult to fully appreciate. A century ago, education was still largely an exercise in retention: names, dates, Latin verb cognates. Today’s educational institutions — at least the more enlightened ones — recognize that it’s more important to teach their pupils how to think than it is to fill their heads with facts; facts, after all, are now cheap and easy to acquire when you need them. That such a revolution in the way we think about thought happened in just a couple of decades strikes me as incredible. That I happened to be present to witness it strikes me as amazing.

What I’ve witnessed has been a revolution in humanity’s relationship to information itself that’s every bit as significant as any political revolution in history. Some Singularity proponents will tell you that it marks the first step on the road to a vast worldwide consciousness. But even if you choose not to go that far, the ideas of Vannevar Bush and Ted Nelson are still with you every time you bring up Google. We live in a world in which much of the sum total of human knowledge is available over an electronic connection found in almost every modern home. This is wondrous. Yet what’s still more wondrous is the way that we can find almost any obscure fact, passage, opinion, or idea we like from within that mass, thanks to selection by association. Mama, we’re all cyborgs now.

(Sources: the books Hackers: Heroes of the Computer Revolution and Insanely Great: The Life and Times of the Macintosh, the Computer That Changed Everything by Steven Levy; Computer Lib/Dream Machines and Literary Machines by Ted Nelson; From Memex to Hypertext: Vannevar Bush and the Mind’s Machine, edited by James M. Nyce and Paul Kahn; The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort; Multimedia and Hypertext: The Internet and Beyond by Jakob Nielsen; The Making of the Atomic Bomb by Richard Rhodes. Also the June 1995 Wired magazine profile of Ted Nelson; Andy Hertzfeld’s website Folklore; and the Computer Chronicles television episodes entitled “HyperCard,” “MacWorld Special 1988,” “HyperCard Update,” and “Hypertext.”)

Footnotes
1 Mediagenic was known as Activision until mid-1988. To avoid confusion, I just stick with the name “Mediagenic” in this article.
 

Posted on September 23, 2016 in Digital Antiquaria, Interactive Fiction

 
