
Transfixed by 1996

I’m afraid I don’t have a regular article for you this week. By way of compensation, I do have a new ebook for you, compiling all of the articles from our recently concluded historical year of 1995, along with the special “A Web Around the World” series about the birth of worldwide communications networks and (eventually!) the Internet. Because some of you have requested it, Richard Lindner and I have also prepared a special ebook volume that includes only the latter series. If you enjoy these ebooks, don’t hesitate to drop Richard a line at the email address on their frontispieces to thank him for his efforts.

We’re a couple of articles into 1996 already; I’ve covered Toonstruck and the first Broken Sword game. In keeping with a developing Digital Antiquarian tradition, let me tell you what else I have planned for the year as a whole:

  • The Discworld and Discworld II adventures, preceded by a short digression about Terry Pratchett and his literary Discworld universe in general, which has intersected with games on multiple occasions. (As many of you doubtless know, Terry Pratchett himself was a dedicated gamer, and his daughter Rhianna Pratchett has become a notable games journalist and designer in her own right.)
  • The second (and, sadly, last) Lost Files of Sherlock Holmes game, which plunges you even deeper into Victoriana than does its predecessor.
  • Rama and The Martian Chronicles, which are by no means great games. Nevertheless, they are on one level fairly typical exemplars of the Myst variants that were everywhere in the mid-1990s, and make for worthy objects of inquiry on that basis alone. And on another level, I think it will be interesting, constructive, and maybe even a bit nostalgic to compare them with earlier adaptations of Arthur C. Clarke and Ray Bradbury, from the first era of bookware. (The Martian Chronicles was even created by Byron Preiss Productions, the same folks behind the old Telarium bookware line.)
  • Titanic: Adventure Out of Time, the penultimate million-selling adventure of the 1990s, a case study in being in the right place at the right time — said time being in this case very close to the release date of a certain blockbuster movie starring Leonardo DiCaprio and Kate Winslet.
  • The Pandora Directive. Enough said. Tex Murphy needs no justification.
  • Spycraft, an interactive spy movie by Activision, one of the more elaborate multimedia productions of its day, which courted controversy by letting you torture prisoners while playing the role of a CIA agent. Almost a decade later, the revelations about Guantánamo Bay would give this scene an uncomfortable aura of verisimilitude.
  • Star Control 3, Legend Entertainment’s much-maligned sequel to a much-beloved game.
  • Wing Commander IV. If anyone was wondering why Toonstruck’s $8 million budget made it only the second most expensive computer game ever as of 1996, this article will provide the answer.
  • Battlecruiser 3000 AD. Because sometimes you just need a good laugh, and this story is like an Onion satire of the games industry come to life.
  • Terra Nova, Looking Glass’s next, somewhat less successful but nevertheless innovative experiment with immersive, emergent 3D world-building after the seminal System Shock.
  • Civilization II, Master of Orion II, and Heroes of Might and Magic II. I lump these three games together here because they are all strategy sequels — a thought-provoking concept in itself, in that they are iterations on gameplay rather than the next chapters of ongoing stories. They will, however, each get an article of their own as part of a mini-series.
  • The post-DOOM generation of first-person shooters, up to Quake and the advent of hardware-accelerated 3D graphics. I know some of you have been itching for more coverage of these topics, to which I can only plead that they just aren’t my favorite sorts of games; chalk me up as too old, too slow, too pacifistic, and/or too bookish. This means I’m really not the best person to cover most first-person shooters in great individual depth. But I’ll try to do a group of them some sort of historical justice here, and spend some time on the software and hardware technology behind them as well, which I must confess to finding more interesting in some ways than the actual games.
  • Tomb Raider. Lara Croft has become arguably the most famous videogame character in the world in the years since her debut in 1996, as well as a lightning rod for discussion and controversy. Is she a sadly typical example of the objectification of women for the male-gamer gaze, or a rarer example of a capable, empowered female protagonist in a game? Or is she perhaps a little of both? We shall investigate.
  • Her Interactive. The story of the earliest games of Her Interactive, who would later carve out a permanent niche for themselves making Nancy Drew adventure games, is another fascinating and slightly bizarre tale, about attempting to sell games to teenage girls through partnerships with trendy fashion labels, with plots that might have been lifted from Beverly Hills 90210, in boxes stuffed with goodies that were like girlie versions of the Infocom gray boxes of yore. Do the games stay on the right side of the line between respectful outreach and pandering condescension? Again, we shall investigate.
  • Windows 95. The biggest topic for the year, this will serve as a continuation of not one but two earlier series: “Doing Windows” and the recently concluded “A Web Around the World.” Windows 95 was anything but just another Microsoft operating system, reflecting as it did its maker’s terror about a World Wide Web filled with increasingly “active” content that might eventually make traditional operating systems — and thus Microsoft themselves — irrelevant. And Windows 95 also introduced a little something called DirectX, which finally provided game developers with a runtime environment that was comprehensively better than bare-bones MS-DOS. But why, you may be asking, am I including Windows 95 in the coverage for 1996? Simply because it shipped very late in its titular year, and it took a while for its full impact to be felt.

To answer another question that will doubtless come up after reading the preceding: no, I’m not going to skip over Blizzard Entertainment’s Diablo, one of the most popular games of the decade. I’ve just decided to push it into 1997, given that it appears not to have reached store shelves in most places until just after the new year. And I’ll make time for a round-up of real-time-strategy games, from Blizzard and others, before covering Diablo.

As always, none of this is set in stone. Feel free to make your case in the comments for anything I’ve neglected that you think would make a worthy topic for an article, or just to register your voice as a conscientious objector in the case of the games I won’t be able to get around to.

And if what’s coming up seems exciting to you and you haven’t yet signed up to support this project, please do think about doing so. Of course, I realize all too well that much in the world is uncertain right now and many of us feel ourselves to be on shaky ground, not least when it comes to our finances. By all means, take care of yourself and yours first. But if you have a little something left over after doing so and want to ensure that my voluminous archives continue to grow, anything you can spare would be immensely appreciated. See the links at the top right of this page!

And thank you — a million times thank you — to all of you who have already become Patreon patrons or made one-time or recurring PayPal donations. Your pledges and donations are the best validation a writer could have, in addition to being the only reason I’m able to keep on doing this. It’s been quite a ride already, and yet we have a long, long way still to go. See you next week for a proper article!

 

Broken Sword: The Shadow of the Templars

The games of Revolution Software bore the stamp of the places in which they were conceived. Work on Beneath a Steel Sky, the company’s breakthrough graphic adventure, began in Hull, a grim postindustrial town in the north of England, and those environs were reflected in the finished product’s labyrinths of polluted streets and shuttered houses. But by the time Revolution turned to the question of a follow-up, they had pulled up stakes for the stately city of York. “We’re surrounded by history here,” said Revolution co-founder Tony Warriner. “York is a very historical city.” Charles Cecil, Revolution’s chief motivating force in a creative sense, felt inspired to make a very historical game.

The amorphous notion began to take a more concrete form after he broached the idea over dinner one evening to Sean Brennan, his main point of contact at Revolution’s publisher Virgin Interactive. Brennan said that he had recently struggled through Umberto Eco’s infamously difficult postmodern novel Foucault’s Pendulum, an elaborate satire of the conspiratorial view of history which is so carefully executed that its own conspiracy theories wind up becoming more convincing than most good-faith examples of the breed. Chasing a trail of literally and figuratively buried evidence across time and space… it seemed ideal for an adventure game. Why not do something like that? Perhaps the Knights Templar would make a good starting point. Thus was born Broken Sword: The Shadow of the Templars.



Our respectable books of history tell us that the Knights Templar was a rich and powerful but relatively brief-lived chivalric order of the late Middle Ages in Europe. It was founded in 1119 and torn up root and branch by a jealous King Philip IV of France and Pope Clement V in 1312. After that, it played no further role in history. Or did it?

People have been claiming for centuries that the order wasn’t really destroyed at all, that it just went underground in one sense or another. Meanwhile other conspiracy theories — sometimes separate from, sometimes conjoined with the aforementioned — have posited that the Knights left a fabulous hidden treasure behind somewhere, which perchance included even the Holy Grail of Arthurian legend.

In the 1960s, the old stories were revived and adapted into a form suitable for modern pop culture by a brilliant French fabulist named Pierre Plantard, who went so far as to plant forged documents in his homeland’s Bibliothèque Nationale. Three Anglo authors ingeniously expanded upon his deceptions — whether they were truly taken in by them or merely saw them as a moneymaking opportunity is unclear — in 1982 in the book The Holy Blood and the Holy Grail. It connected the Knights Templar to another, more blasphemous conspiracy theory: that Jesus Christ had not been celibate as stated in the New Testament, nor had his physical form actually died on the cross. He had rather run away with Mary Magdalene and fathered children with her, creating a secret bloodline that has persisted to the present day. The Knights Templar were formed to guard the holy bloodline, a purpose they continue to fulfill. Charles Cecil freely admits that it was The Holy Blood and the Holy Grail that really got his juices flowing.

It isn’t hard to see why. It’s a rare literary beast: a supposedly nonfiction book full of patent nonsense that remains thoroughly entertaining to read even for the person who knows what a load of tosh it all is. In his review of it back in 1982, Anthony Burgess famously wrote that “it is typical of my unregenerable soul that I can only see this as a marvelous theme for a novel.” Many others have felt likewise over the years since. If Umberto Eco’s unabashedly intellectual approach doesn’t strike your fancy, you can always turn to The Da Vinci Code, Dan Brown’s decidedly more populist take on the theme from 2003 — one of the most successful novels of the 21st century, the founder of a veritable cottage industry of sequels, knock-offs, and cinematic adaptations. (Although Brown himself insists that he didn’t use The Holy Blood and the Holy Grail for a crib sheet when writing his novel, pretty much no one believes him.)

For all their convoluted complexity, conspiracy theories are the comfort food of armchair historians. They state that the sweeping tides of history are not the result of diffuse, variegated, and ofttimes unease-inducing social and political impulses, but can instead all be explained by whatever shadowy cabal they happen to be peddling. It’s a clockwork view of history, A leading to B leading to C, which conveniently absolves us and our ancestors who weren’t pulling the strings behind the scenes of any responsibility for the state of the world. I’ve often wondered if the conspiratorial impulse in modern life stems at least in part from our current obsession with granular data, our belief that all things can be understood if we can just collect enough bits and bytes and analyze them rigorously enough. Such an attitude makes it dangerously easy to assemble the narratives we wish to be true out of coincidental correlations. The amount of data at our fingertips, it seems to me, has outrun our wisdom for making use of it.

But I digress. As Burgess, Eco, and Brown all well recognized, outlandish conspiracy theories can be outrageously entertaining, and are harmless enough if we’re wise enough not to take them seriously. Add Charles Cecil to that list as well: “I was convinced a game set in the modern day with this history that resonated from Medieval times would make a very compelling subject.”

As he began to consider how to make a commercial computer game out of the likes of The Holy Blood and the Holy Grail, Cecil realized that he needed to stay well away from the book’s claims about Jesus Christ; the last thing Revolution Software or Virgin Interactive needed was to become the antichrist in the eyes of scandalized Christians all over the world. So, he settled on a less controversial vision of the Knights Templar, centering on their alleged lost treasure — a scavenger hunt was, after all, always a good fit for an adventure game — and a fairly nondescript conspiracy eager to get their hands on it for a spot of good old world domination for the sake of it.

Cecil and some of his more committed fans have occasionally noted some surface similarities between his game and The Da Vinci Code, which was published seven years later, and hinted that Dan Brown may have been inspired by the game as well as by The Holy Blood and the Holy Grail. In truth, though, the similarities would appear to be quite natural for fictions based on the same source material.

Indeed, I’ve probably already spent more time on the historical backstory of Broken Sword here than it deserves, considering how lightly it skims the surface of the claims broached in The Holy Blood and the Holy Grail and elsewhere. Suffice to say that the little bit of it that does exist here does a pretty good job of making you feel like you’re on the trail of a mystery ancient and ominous. And that, of course, is all it really needs to do.



In addition to being yet another manifestation of pop-culture conspiracy theorizing, Broken Sword was a sign of the times for the industry that produced it. Adventure games were as big as they would ever get in 1994, the year the project was given the green light by Virgin. Beneath a Steel Sky had gotten good reviews and was performing reasonably well in the marketplace, and Virgin was willing to invest a considerable sum to help Revolution take their next game to the proverbial next level, to compete head to head with Sierra and LucasArts, the titans of American adventure gaming. Broken Sword’s final production cost would touch £1 million, making it quite probably the most expensive game yet made in Britain.

Having such a sum of money at their disposal transformed Revolution’s way of doing business. Some 50 different people in all contributed to Broken Sword, a five-fold increase over the staff hired for Beneath a Steel Sky. Artist Dave Gibbons, whose distinctive style had done so much to make the previous game stand out from the pack, was not among them, having moved on to other endeavors. But that was perhaps for the best; Gibbons was a comic-book artist, adept at crafting striking static images. Broken Sword, on the other hand, would have lots of motion, would be more of an interactive cartoon than an interactive comic.

To capture that feel, Charles Cecil went to Dublin, Ireland, where the animator Don Bluth ran the studio behind such films as The Land Before Time, All Dogs Go to Heaven, and Thumbelina. There he met one Eoghan Cahill, who had been working with Bluth for years, and got a hasty education on what separates the amateurs from the professionals in the field. Cecil:

I have to say, I didn’t take layout all that seriously. But he asked me about layout, and I showed him some of the stuff we were working on. And he looked at me and said, “This is not good enough.” I felt rather hurt. He said, “You need to see my stuff and you need to employ me.” So I had a look at his stuff, and it was so beautiful.

I said, “I think I really do need to employ you.” And indeed, he came to work at Revolution as a layout artist.

Although Don Bluth himself had nothing to do with the game, Broken Sword is as marked by the unique sensibility he inculcated in his artists as Beneath a Steel Sky is by that of Dave Gibbons. The opening movie is a bravura sequence by any standard, a tribute not only to the advantages of Super VGA graphics and CD-ROM — Revolution’s days of catering to more limited machines like the Commodore Amiga were now behind them — but to the aesthetic sophistication which Cahill brought to the project. Broken Sword’s “pixel art,” as the kids call it today, remains mouth-wateringly luscious to look upon, something which most certainly cannot be said of the jaggy 3D productions of the mid-1990s.

The view with which the intro movie begins is a real one from the bell tower of Notre Dame Cathedral.

It’s worth dwelling on this movie a bit, for it does much to illustrate how quickly both Revolution and the industry to which they belonged were learning and expanding their horizons. Consider the stirring score by the noted film, television, and theater composer and conductor Barrington Pheloung, which is played by a real orchestra on real instruments — a growing trend in games in general at the time, which would have been unimaginable just a few years earlier for both technical and budgetary reasons.

Then, too, consider the subtle sophistication of the storytelling techniques that are employed here, from the first foreshadowing voice-over — the only dialog in the whole sequence — to the literal bang that finishes it. Right after the movie ends, you take control amidst the chaos on the sidewalk that follows the explosion. Assuming you aren’t made of the same stuff as that Notre Dame gargoyle, you’re already thoroughly invested at this point in figuring out what the heck just happened. The power of an in medias res opening like this one to hook an audience was well known to William Shakespeare, but has tended to elude many game developers. Charles Cecil:

There are two ways to start a game. You can give lots of background about a character and what he or she is doing or you can start in a way that is [in] the player’s control, and that’s what I wanted. I thought that since the player controlled the character and associated with him, I could afford to start a game without giving away a great deal about the character. So in the first scene, I didn’t want a long exposition. George is drawn into the plot unwillingly, having been caught up in an explosion, and he wants to do the right thing in finding out what was behind it.

All told, the jump in the quality of storytelling and writing from Beneath a Steel Sky to Broken Sword is as pronounced as the audiovisual leap. Beneath a Steel Sky isn’t really a poorly written game in comparison to others of its era, but the script at times struggles to live up to Dave Gibbons’s artwork. It bears the telltale signs of a writer not quite in control of his own material, shifting tones too jarringly and lapsing occasionally into awkward self-referential humor when it ought to be playing it straight.

None of that is the case with Broken Sword. This game’s writers know exactly where they want to go and have the courage of their conviction that they can get there. This is not to say that it’s dour — far from it; one of the greatest charms of the game is that it never takes itself too seriously, never forgets that it is at bottom just an exercise in escapist entertainment.

Remarkably, the improvement in this area isn’t so much a credit to new personnel as to the usual suspects honing their craft. Revolution’s games were always the vision of Charles Cecil, but, as he admits, he’s “not the world’s greatest writer.” Therefore he had relied since the founding of Revolution on one Dave Cummins to turn his broad outlines into a finished script. For Broken Sword, Cummins was augmented by a newcomer named Jonathan Howard, but the improvement in the writing cannot be down to his presence alone. The veterans at Revolution may have become harder to spot amidst the sea of new faces, but they were working as hard as anyone to improve, studying how film and television were put together and then applying the lessons to the game — but sparingly and carefully, mind you. Cecil:

When Broken Sword came out, we were riding on the back of these interactive movies. They were a disaster. The people knocking them out were being blinded; they wanted to rub shoulders with movie stars and producers, and the gaming elements were lost. They were out of touch with games. Of course, I am interested in film script-writing and I felt then and still do that there can be parallels with games. I felt we needed to learn from the movies with Broken Sword, but not mimic them. It was my intention to make Broken Sword cinematic — with great gameplay.

Revolution may have had global ambitions for Broken Sword, but it’s a deeply British game at heart, shot through with sly British humor. To properly appreciate any of that, however, we really need to know what the game is actually about, beyond the Knights Templar and international conspiracies of evil in the abstract.



Broken Sword’s protagonist is an American abroad with the pitch-perfect name of George Stobbart, who is winningly portrayed in the original game and all four of its official sequels to date by voice actor Rolf Saxon. George is a painfully earnest everyman — or at least every-American — who in an earlier era might have been played on the silver screen by Jimmy Stewart. He wanders through the game’s foreign settings safely ensconced in the impenetrable armor of his nationality, a sight recognizable to any observer of Americans outside their natural habitat. To my mind the funniest line in the entire script comes when he’s accosted by an overzealous French police constable brandishing a pistol. “Don’t shoot!” he yells. “I’m an American!” Whole volumes of sociology and history could be written by way of unpacking those five words…

Anyway, as we saw in the movie above, the vacationing George is sitting in a Parisian café when a killer clown bombs the place to smithereens, in what seems to have been a deliberate — and unfortunately successful — act of murder against one particular patron. Earnest fellow that he is, George takes it upon himself to solve the crime, which proves to be much more than a random act of street violence. As he slowly peels the onion of the conspiracy behind it all, he has occasion to visit Ireland, Syria, Spain, and Scotland in addition to roaming the length and breadth of Paris, the home base for his investigations. And why does Paris feature so prominently? Well, it was close enough to Britain to make it easy for Revolution to visit in the name of research, but still held a powerful romantic allure for an Englishman of Cecil’s generation. “England was very poor in the 1960s and 1970s, and London was gray and drab,” he says. “Paris was smart. People walked differently and they wore brighter clothes. You sat in restaurants and ate amazing food. The mythology of Paris [in] Broken Sword came from that imagery of my younger days.”

George’s companion — constantly in research, from time to time in adventure, and potentially in romance — is one Nico, a French reporter with a sandpaper wit whom he meets at the scene of the bombing. She was originally created by the game’s writers to serve a very practical purpose, a trick that television and movie scriptwriters have been employing forever: in acting as a diegetic sounding board for George, she becomes a handy way to keep the player oriented and up to date with the ramifications of his latest discoveries, helping the player to keep a handle on what becomes a very complex mystery. In this sense, then, her presence is another sign of how Revolution’s writers were mastering their craft. “It meant we didn’t need to have lengthy one-man dialogs or 30 minutes of cut scenes,” says Charles Cecil.

The sexual tension between the oft-bickering pair — that classic “will they or won’t they?” dilemma — was initially a secondary consideration. It’s actually fairly understated in this first game, even as Nico herself is less prominent than she would later become; she spends the bulk of the game sitting in her apartment conducting vaguely defined “inquiries,” apparently by telephone, and waiting for another visit from George. (It’s telling that, when Revolution recently produced a “director’s cut” of the game for digital distribution, the most obvious additions were a pair of scenes where the player gets to control Nico directly, giving at least the impression that she has a more active role in the plot. Sadly, one of these takes place before the bombing in the Parisian café, rather spoiling that dramatically perfect — and perfectly dramatic — in medias res opening.)

So much for the characters. Now, back to the subject of humor:

There’s the time when George tells Nico that he’s just visited the costume shop whence he believes the bomber to have rented his clown suit. “Yeah, I like it. What are you supposed to be?” she asks. Da-dum-dum!

“I didn’t hire a costume,” answers our terminally earnest protagonist. “These are my clothes and you know it.”

And then there’s Nico and (a jealous) George’s discussion with a French historian about Britain’s status during the time of the Roman Empire. “To the Romans, the Mediterranean was the center of the universe,” says the historian. “Britain was a remote, unfriendly place inhabited by blue-painted savages.”

“It hasn’t changed much,” says Nico. Da-dum-dum-dum!

“Well, they’ve stopped painting themselves blue,” says our straight man George.

“Except when they go to a football match,” deadpans Nico. Da-dum-dum-dum-dum!

You get the idea. I should say that all of this is made funnier by the performances of the voice cast, who are clearly having a grand old time turning their accents up to eleven. (Like so many Anglosphere productions, Broken Sword seems to think that everyone speaks English all the time, just in funny ways and with a light salting of words like bonjour and merci.)

And yet — and this is the truly remarkable part — the campiness of it all never entirely overwhelms the plot. The game is capable of creating real dramatic tension and a palpable sense of danger from time to time. It demands to be taken seriously at such junctures; while you can’t lock yourself out of victory without knowing it, you can die. The game walks a tenuous tightrope indeed between drama and comedy, but it very seldom loses its balance.


It wasn’t easy being a writer of geopolitical thrillers in the 1990s, that period of blissful peace and prosperity in the West after the end of the Cold War and before the War on Terror, the resurgence of authoritarianism, a global pandemic, and a widespread understanding of the magnitude of the crisis of global warming. Where exactly was one to find apocalyptic conflicts in such a milieu? It’s almost chilling to watch this clip today. What seemed an example of typically absurd videogame evil in 1996 feels disturbingly relevant today — not the Knights Templar nonsense, that is, but all the real-world problems that are blamed on it. If only it were as simple as stamping out a single cabal of occultists…

It’s hard to reconcile Broken Sword’s Syria, a place where horror exists only in the form of Knights Templar assassins, a peddler of dodgy kebobs, and — most horrifying of all — an American tourist in sandals and knee socks, with the reality of the country of today. The civil war that is now being fought there has claimed the lives of more than half a million people and shattered tens of millions more.

With Nico in her Parisian flat.

Wars and governments may come and go, but the pub life of Ireland is eternal.

A villa in Spain with a connection to the Knights Templar and a grouchy gardener whom George will need to outwit.

Amidst ruins of a Scottish castle fit for a work of Romantic art, on the cusp of foiling the conspirators’ nefarious plot.



Revolution spent an inordinate amount of time — fully two and a half years — honing their shot at the adventure-game big leagues. They were silent for so long that some in the British press consigned them to the “where are they now?” file. “Whatever happened to Revolution Software?” asked PC Zone magazine in January of 1996. “Two releases down the line, they seem to have vanished.”

Alas, by the time Broken Sword was finally ready to go in the fall of 1996, the public’s ardor for the adventure genre had begun to dissipate. Despite a slew of high-profile, ambitious releases, 1996 had yet to produce a million-selling hit like the previous year’s Phantasmagoria, or like Myst the year before that. Especially in the United States, the industry’s focus was shifting to 3D action-oriented games, which not only sold better but were cheaper and faster to make than adventure games. In what some might call a sad commentary on the times, Virgin’s American arm insisted that the name of Broken Sword be changed to Circle of Blood. “They wanted it to be much more ‘bloody’ sounding,” says Charles Cecil.

For all of its high production values, the game was widely perceived by the American gaming press as a second-tier entry in a crowded field plagued by flagging enthusiasm. Computer Gaming World’s review reads as a more reserved endorsement than the final rating of four stars out of five might imply. “The lengthy conversations often drag on before getting to the point,” wrote the author. If you had told her that Broken Sword — or rather Circle of Blood, as she knew it — would still be seeing sequels published in the second decade after such adventure standard bearers as King’s Quest and Gabriel Knight had been consigned to the videogame history books, she would surely have been shocked to say the least.

Ah, yes, Gabriel Knight… the review refers several times to that other series of adventure games masterminded by Sierra’s Jane Jensen. Even today, Gabriel Knight still seems to be the elephant in the room whenever anyone talks about Broken Sword. And on the surface, there really are a lot of similarities between the two. Both present plots that are, for all their absurdity, extrapolations on real history; both are very interested in inculcating a sense of place in their players; both feature a male protagonist and a female sidekick who develop feelings for one another despite their constant bickering, a rapport their audiences came to love so much that they encouraged the developers to make the sidekick into a full-fledged co-star. According to one line of argument in adventure-game fandom, Broken Sword is a thinly disguised knock-off of Gabriel Knight. (The first game of Sierra’s series was released back in 1993, giving Revolution plenty of time to digest it and copy it.) Many will tell you that the imitation is self-evidently shallower and sillier than its richer inspiration.

But it seems to me that this argument is unfair, or at least incomplete. To begin with, the whole comparison feels more apt if you’ve only read about the games in question than if you’ve actually played them. Leaving aside the fraught and ultimately irrelevant question of influence — for the record, Charles Cecil and others from Revolution do not cite Gabriel Knight as a significant influence — there is a difference in craft that needs to be acknowledged. The Gabriel Knight games are fascinating to me not so much for what they achieve as for what they attempt. They positively scream out for critical clichés about reaches exceeding grasps; they’re desperate to elevate the art of interactive storytelling to some sort of adult respectability, but they never quite figure out how to do that while also being playable, soluble adventure games.

Broken Sword aims lower, yes, but hits its mark dead-center. From beginning to end, it oozes attention to the details of good game design. “We had to be very careful, and so we went through lots of [puzzles], seeing which ones would be fun,” says Charles Cecil. “These drive the story on, providing rewards as the player goes along, so we had to get them right.” One seldom hears similar anecdotes from the people who worked on Sierra’s games.

This, then, is the one aspect of Broken Sword I haven’t yet discussed: it’s a superb example of classic adventure design. Its puzzles are tricky at times, but never unclued, never random, evincing a respect for its player that was too often lost amidst the high concepts of games like Gabriel Knight.

Of course, if you dislike traditional adventure games on principle, Broken Sword will not change your mind. As an almost defiantly traditionalist creation, it resolves none of the fundamental issues with the genre that infuriate so many. The puzzles it sets in front of you seldom have much to do with the mystery you’re supposed to be unraveling. In the midst of attempting to foil a conspiracy bent on world domination, you’ll expend most of your brainpower on such pressing tasks as luring an ornery goat out of an Irish farmer’s field and scouring a Syrian village for a kebab seller’s lucky toilet brush. (Don’t ask!) Needless to say, most of the solutions George comes up with are, although typical of an adventure game, ridiculous, illegal, and/or immoral in any other context. The only way to play them is for laughs.

And this, I think, is what Broken Sword understands about the genre that Gabriel Knight does not. The latter’s puzzles are equally ridiculous (and too often less soluble), but the game tries to play it straight, creating cognitive dissonances all over the place. Broken Sword, on the other hand, isn’t afraid to lean into the limitations of its chosen genre and turn them into opportunities — opportunities, that is, to just be funny. Having made that concession, if concession it be, it finds that it can still keep its overarching plot from degenerating into farce. It’s a pragmatic compromise that works.

I like to think that the wisdom of its approach has been more appreciated in recent years, as even the more hardcore among us have become somewhat less insistent on adventure games as deathless interactive art and more willing to just enjoy them for what they are. Broken Sword may have been old-school even when it was a brand-new game, but it’s no musty artifact today. It remains as charming, colorful, and entertaining as ever, an example of a game whose reach is precisely calibrated to its grasp.

(Sources: the books The Holy Blood and the Holy Grail by Michael Baigent, Richard Leigh, and Henry Lincoln and Grand Thieves and Tomb Raiders: How British Video Games Conquered the World by Magnus Anderson and Rebecca Levene; Retro Gamer 31, 63, 146, and 148; PC Zone of January 1996; Computer Gaming World of February 1997. Online sources include Charles Cecil’s interviews with Anthony Lacey of Dining with Strangers, John Walker of Rock Paper Shotgun, Marty Mulrooney of Alternative Magazine Online, and Peter Rootham-Smith of Game Boomers.

Broken Sword: The Shadow of the Templars is available for digital purchase as a “director’s cut” whose additions and modifications are of dubious benefit. Luckily, the download includes the original game, which is well worth the purchase price in itself.)

Footnotes
1 It’s telling that, when Revolution recently produced a “director’s cut” of the game for digital distribution, the most obvious additions were a pair of scenes where the player gets to control Nico directly, giving at least the impression that she has a more active role in the plot. Sadly, one of these takes place before the bombing in the Parisian café, rather spoiling that dramatically perfect — and perfectly dramatic — in medias res opening.

Toonstruck (or, A Case Study in the Death of Adventure Games)

Some time ago, in the midst of a private email discussion about the general arc of adventure-game history, one of my readers offered up a bold claim: he said that the best single year to be a player of point-and-click graphic adventures was 1996. This rings decidedly counterintuitive, given that 1996 was also the year during which the genre first slid into a precipitous commercial decline that would not even begin to level out for a decade or more. But you know what? Looking at the lineup of games released that year, I found it difficult to argue with him. These were games of high hopes, soaring ambitions, and big budgets. The genre has never seen such a lineup since. How poignant and strange, I thought to myself. Then I thought about it some more, and I decided that it wasn’t really so strange at all.

For when we cast our glance back over entertainment history, we find that it’s not unusual for a strain of creative expression to peak in terms of sophistication and ambition some time after it has passed its zenith of raw popularity. Wings won the first ever best-picture Oscar two years after The Jazz Singer had numbered the days of soundless cinema; Duke Ellington’s big band blew up a storm at Newport two years after “Rock Around the Clock” and “That’s All Right” had heralded the end of jazz music at the top of the hit parade. The same sort of thing has happened on multiple occasions in gaming. I would argue, for example, that more great text adventures were commercially published after 1984, the year that interactive fiction plateaued and prepared for the down slide, than before that point. And then, of course, we have the graphic adventures of 1996 — the year after the release of Phantasmagoria, the last million-selling adventure game to earn such sales numbers entirely on its own intrinsic appeal, without riding the coattails of an earlier game for which it was a sequel or any other pre-existing mass-media sensation.

There are two reasons why this phenomenon occurs. One is that the people who decide what projects to green-light always have a tendency to look backward at least as much as forward; new market paradigms are always hard to get one’s head around. The other becomes increasingly prevalent as projects grow more complex, and the window of time between the day they are begun and the day they are completed grows longer as a result. A lot can happen in the world of media in the span of two years or more — not coincidentally, the type of time span that more and more game-development projects were starting to fill by the mid-1990s. Toonstruck, our subject for today, is a classic example of what can happen when the world in which a game is conceived is dramatically different from the one to which it is finally born.



Let us turn the clock back to late 1993, the moment of Toonstruck’s genesis. At that time, the conventional wisdom inside the established games industry about gaming’s necessary future hewed almost exclusively to what we might call the Sierra vision, because it was articulated so volubly and persuasively by that major publisher’s founder and president Ken Williams. It claimed that the rich multimedia affordances of CD-ROM would inevitably lead to a merger of interactivity with cinema. Popular movie stars would soon be vying to appear in interactive movies which would boast the same production values and storytelling depth as traditional movies, but which would play out on computer instead of movie-theater or television screens, with the course of the story in the hands of the ones sitting in front of those screens. This mooted merger of Silicon Valley and Hollywood — often abbreviated as “Siliwood” — would require development budgets exponentially larger than those the industry had been accustomed to, but the end results would reach an exponentially wider audience.

The games publisher Virgin Interactive, a part of Richard Branson’s sprawling media and travel empire, was every bit as invested in this prophecy as Sierra was. Its Los Angeles-based American arm was the straw that stirred the drink, under the guidance of a Brit named Martin Alper, who had been working to integrate games into a broader media zeitgeist for many years; he had first made a name for himself in his homeland as the co-founder of the budget label Mastertronic, whose games embraced pop-culture icons from Michael Jackson to Clumsy Colin (the mascot of a popular brand of chips), and were sold as often from supermarkets as from software stores. Earlier in 1993, his arm of Virgin had published The 7th Guest, an interactive horror flick which struck many as a better prototype for the Sierra vision than anything Sierra themselves had yet released; it had garnered enormous sales and adoring press notices from the taste-makers of mainstream media as well as those inside the computer-gaming ghetto. Now, Alper was ready to take things to the next level.

He turned for ideas to another Brit who had recently joined him in Los Angeles: a man named David Bishop, who had already worked as a journalist, designer, manager, and producer over the course of his decade in the industry. Bishop proposed an interactive counterpart of sorts to Who Framed Roger Rabbit, the hit 1988 movie which had wowed audiences with the novel feat of inserting cartoon characters into a live-action world. Bishop’s game would do the opposite: insert real actors into a cartoon world. He urged Alper to pull out all the stops in order to make something that would be every bit as gobsmacking as Roger Rabbit had been in its day.

So far, so good. But who should take on the task of turning Bishop’s idea into a reality? The 7th Guest had been created by a then-tiny developer known as Trilobyte, itself a partnership between a frustrated filmmaker and a programming whiz. Taking the press releases that labeled them the avatars of the next generation of entertainment at face value, the two had now left the Virgin fold, signing a contract with a splashy new player in the multimedia sweepstakes called Media Vision. Someone else would have to make the game called Toonstruck.

In a telling statement of just how committed they already were to their interactive cartoon, Virgin USA, who had only acted as a publisher to this point, decided to dive into the development business. In October of 1993, Martin Alper put two of his most trusted producers, Neil Young and Chris Yates, in charge of a new, wholly owned development studio called Burst, formed just to make Toonstruck. The two were given a virtually blank check to do so. Make it amazing was their only directive.

So, Young and Yates went across town to Hollywood. There they hired Nelvana, an animation house that had been making cartoons of every description for over twenty years. And they hired as well a gaggle of voice-acting talent that was worthy of a big-budget Disney feature. There were Tim Curry, star of the camp classic The Rocky Horror Picture Show; Dan Castellaneta, the voice of Homer Simpson (“D’oh!”); David Ogden Stiers, who had played the blue-blooded snob Charles Emerson Winchester III on M*A*S*H; Dom DeLuise of The Cannonball Run and All Dogs Go to Heaven fame; plus many other less recognizable names who were nevertheless among the most talented and sought-after voices in cartoon production, the sort that any latch-key kid worth her salt had listened to for countless hours by the time she became a teenager. In hiring the star of the show — the actor destined to actually appear onscreen, inserted into the cartoon world — Burst pulled off their greatest coup of all: they secured the signature of none other than Christopher Lloyd, a veteran character actor best known as the hippie burnout Jim from the beloved sitcom Taxi, the mad scientist Doc Brown from the Back to the Future films… and Judge Doom, the villain from Who Framed Roger Rabbit. Playing in a game that would be the technological opposite of that film’s inserting of cartoon characters into the real world, Lloyd would become his old character’s psychological opposite, the hero rather than the villain. Sure, it was stunt casting — but how much more perfect could it get?

What happened next is impossible to explain in any detail. The fact is that Burst was and has remained something of a black box. What is clear, however, is that Toonstruck’s designers-in-the-trenches Richard Hare and Jennifer McWilliams took their brief to pull out all the stops and to spare no expense in doing so as literally as everyone else at the studio, concocting a crazily ambitious script. “We were full of ideas, so we designed and designed and designed,” says McWilliams, “with a great deal of emphasis on what would be cool and interesting and funny, and not so much focus on what would actually be achievable within a set schedule and budget. [Virgin] for the most part stepped aside and let us do our thing.”

Their colleagues storyboarded their ever-expanding design document and turned it into hours and hours of quality cartoon animation — animation which was intended to meet or exceed the bar set by a first-string Disney feature film. As they did so, the deadlines flew by unheeded. Originally earmarked with the eternal optimism of game developers and Chicago Cubs fans for the 1994 Christmas season, the project slipped into 1995, then 1996. Virgin trotted it out at trade show after trade show, making ever more sweeping claims about its eventual amazingness at each one, until it became an in-joke among the gaming journalists who dutifully inserted a few lines about it into each successive “coming soon” preview. By 1996, the bill for Toonstruck was approaching a staggering $8 million, enough to make it the second most expensive computer game to date. And yet it was still far from completion.

It seems clear that the project was poorly managed from the start. Take, for example, all that vaunted high-quality animation. Burst’s decision to make the cartoon of Toonstruck first, then figure out how to make use of it in an interactive context later was hardly the most cost-effective way of doing things. It made little sense to aim to compete with Disney on a level playing field when the limitations of the consumer-computing hardware of the time meant that the final product would have to be squashed down to a resolution of 640 × 400, with a palette of just 256 colors, for display on a dinky 15-inch monitor screen.

There are also hints of other sorts of dysfunction inside Burst, and between Burst and its parent company. One Virgin insider who chose to remain anonymous alluded vaguely in 1998 to the way that “internal politics made the situation worse. Some of the project leaders didn’t get on with other senior staff, and some people had friendships to protect. So there was finger-pointing and back-slapping going on at the same time.”



During the three years that Toonstruck spent in development, the Sierra vision of gaming’s necessary future was challenged by a new one. In December of 1993, id Software, a tiny renegade company operating outside the traditional boundaries of the industry by selling its creations largely through the shareware model, released a little game called DOOM, which featured exclusively computer-generated 3D environments, gobs of bloody action, and, to paraphrase a famous statement by its chief programmer John Carmack, no more story than your typical porn movie. Not long after, a studio called Blizzard Entertainment debuted a fantasy strategy game called Warcraft which played like an action game, in hectic real time; not the first of its type, it was nevertheless the one that really caught gamers’ imaginations, especially after Blizzard perfected the concept with 1995’s Warcraft II. With these games and others like them selling at least as well as the hottest adventures, the industry’s One True Way Forward had become a proverbial fork in the road. Publishers could continue to plow money into interactive movies in the hope of cracking into the mainstream of mass entertainment, or they could double down on their longstanding customer demographic of young white males by offering them yet more fast-paced mayhem. Already by 1995, the fact that games of the latter stripe tended to cost far less than those of the former was enough to seal the deal in the minds of many publishers.

Virgin Interactive was given especial food for thought that year when they wound up publishing Trilobyte’s next game after all. Media Vision, the publisher Trilobyte had signed with, had imploded amidst government investigations of securities fraud and other financial crimes, and an opportunistic Virgin had swooped into the bankruptcy auction and made off with the contract for The 11th Hour, the sequel to The 7th Guest. It seemed like quite a clever heist at the time — but it began to seem somewhat less so when The 11th Hour under-performed relative to expectations. Both reviewers and ordinary gamers stated clearly that they were already becoming bored of Trilobyte’s rote mixing of B-movie cinematics with hoary set-piece puzzles that mostly stemmed from well before the computer age — tired of the way that the movie and the gameplay in a Trilobyte creation had virtually nothing to do with one another.

Then, as I noted at the beginning of this article, 1996 brought with it an unprecedentedly large lineup of ambitious, earnest, and expensive games of the Siliwood stripe, with some of them at least much more thoughtfully designed than anything Trilobyte had ever come up with. Nonetheless, as the year went by an alarming fact was more and more in evidence: this year’s crop of multimedia extravaganzas was not producing any towering hits to rival the likes of Sherlock Holmes: Consulting Detective in 1992, The 7th Guest in 1993, Myst in 1994, or Phantasmagoria in 1995. Arguably the best year in history to be a player of graphic adventures, 1996 was also the year that broke the genre. Almost all of the big-budget adventure releases still to come from American publishers would owe their existence to corporate inertia, being projects that executives found easier to complete and hope for a miracle than to cancel outright and then try to explain the massive write-off to their shareholders — even if outright cancellation would have been better for their companies’ bottom lines. In short, by the beginning of 1997 only dreamers doubted that the real future of the gaming mainstream lay with the lineages of DOOM and Warcraft.

Before we rush to condemn the philistines who preferred such games to their higher-toned counterparts, we must acknowledge that their preferences had to do with more than sheer bloody-mindedness. First-person shooters and real-time-strategy games could be a heck of a lot of fun, and lent themselves very well to playing with others, whether gathered together in one room or, increasingly, over the Internet. The generally solitary pursuit of adventure gaming had no answer for this sort of boisterous bonding experience. And there was also an economic factor: an adventure was a once-and-done endeavor that might last a week or two at best, after which you had no recourse but to go out and buy another one. You could, on the other hand, spend literally years playing the likes of DOOM and Warcraft with your mates.

Then there is one final harsh reality to be faced: the fact is that the Sierra vision never came close to living up to its billing for the player. These games were never remotely like waking up in the starring role of a Hollywood film. Boosters like Ken Williams were thrilled to talk about interactive movies in the abstract, but these same people were notably vague about how their interactivity was actually supposed to work. They invested massively in Hollywood acting talent, in orchestral soundtracks, and in the best computer artists money could buy, while leaving the interactivity — the very thing that ostensibly set their creations apart — to muddle through on its own, one way or another.

Inevitably, then, the interactivity ended up taking the form of static puzzles, the bedrock of adventure games since the days when they had been presented all in text. The puzzle paradigm persisted into this brave new era simply because no one could proffer any other ideas about what the player should be doing that were both more compelling and technologically achievable. I hasten to add that some players really, genuinely love puzzles, love few things more than to work through an intricate web of them in order to make something happen; I include myself among this group. When puzzles are done right, they’re as satisfying and creatively valid as any other type of gameplay.

But here’s the rub: most people — perhaps even most gamers — really don’t like solving puzzles all that much at all. (These people are of course no better or worse than those who do — just different.) For the average Joe or Jane, playing one of these new-fangled interactive movies was like watching a conventional movie filmed on an ultra-low budget, usually with terrible acting. And then, for the pièce de résistance, you were expected to solve a bunch of boring puzzles for the privilege of witnessing the underwhelming next scene. Who on earth wanted to do this after a hard day at the office?

All of which is to say that the stellar sales of Consulting Detective, The 7th Guest, Myst, and Phantasmagoria were not quite the public validations of the concept of interactive movies that the industry chose to read them as. The reasons for these titles’ success were orthogonal to their merits as games, whatever the latter might have been. People bought them as technology demonstrations, to show off the new computers they had just purchased and to test out the CD-ROM drives they had just installed. They gawked at them for a while and then, satiated, planted themselves back in front of their televisions to spend their evenings as they always had. This was not, needless to say, a sustainable model for a mainstream gaming genre. By 1996, the days when the mere presence of human actors walking and/or talking on a computer monitor could wow even the technologically unsophisticated were fast waning. That left as customers only the comparatively tiny hardcore of buyers who had always played adventure games. They were thrilled by the diverse and sumptuous smorgasbord that was suddenly set before them — but the industry’s executives, looking at the latest sales numbers, most assuredly were not. Just like that, the era of Siliwood passed into history. One can only hope that all of the hardcore adventure fans enjoyed it while it lasted.



Toonstruck was, as you may have guessed, among the most prominent of the adventures that were released to disappointing results in 1996. That event happened at the very end of the year, and only thanks to a Virgin management team who decided in the summer that enough was enough. “The powers that be in management had to step in and give us a dose of reality,” says Jennifer McWilliams. “We then needed to come up with an ending that could credibly wrap the game up halfway through, with a cliffhanger that would, ideally, introduce part two. I think we did well considering the constraints we were under, but still, it was not what we originally envisioned.” Another, anonymous team member has described what happened more bluntly: “The team was told to ‘cut it or can it’ — it either had to be shipped real soon, or not at all.”

The former option was chosen, and thus Toonstruck shipped just before Christmas, on two discs that between them bore only about one third of the total amount of animation created for the game, and that in a severely degraded form. Greeted with reviews that ran the gamut from raves to pans, it wound up selling about 150,000 copies. For a normal game with a normal budget, such numbers would be just about acceptable; if the 100,000-copy threshold was no longer the mark of an outright hit in the computer-games industry of 1996, selling that many copies and then half again that many more wasn’t too bad either. Unfortunately, all of the usual calculations got thrown out for a game that had cost over $8 million to make. One Virgin employee later mused wryly that Toonstruck had been intended to “blow the public away. The only thing that got blown was vast amounts of cash, and the public stayed away.”

Bleeding red ink from the failure of Toonstruck and a number of other games, Virgin’s American arm was ordered by the parent company in London to downsize their budgets and ambitions drastically. After creating a few less expensive but equally commercially disappointing games, Burst Studios was sold in 1998 to Electronic Arts, who renamed it EA Pacific and shifted its focus to 3D real-time strategy — a sign of the times if ever there was one.

Such is one tale of Toonstruck, a game which could only have appeared in its own very specific time and place. But, you might be wondering, how does this relic of a fizzled vision of gaming’s future play?



Toonstruck’s opening movie is not a cartoon. We instead meet Christopher Lloyd for the first time in the real world, in the role of Drew Blanc (get it?), a cartoonist suffering from writer’s block. He’s called into the office of his impatient boss Sam Schmaltz, who’s played by Ben Stein, an actor of, shall we say, limited range, but one who remains readily recognizable to an entire generation for playing every kid’s nightmare of a boring teacher in Ferris Bueller’s Day Off and The Wonder Years.

We learn that Drew is unhappy with his current assignment as the illustrator of The Fluffy Fluffy Bun Bun Show, a piece of cartoon pablum with as much edge as a melting stick of butter. He rather wants to do something with his creation Flux Wildly, a hyperactive creature of uncertain taxonomy and chaotic disposition. Schmaltz, however, quickly lives up to his name; he’s having none of it. A deflated Drew resigns himself to an all-nighter in the studio to make up the time he’s wasted daydreaming about the likes of Flux. But in the course of that night, he is somehow drawn into his television — right into a cartoon.

There the bewildered Drew meets none other than Flux Wildly himself, finding him every bit as charmingly unhinged as he’d always imagined him to be. He learns that the cartoon world in which he finds himself is divided into three regions: Cutopia, where the fluffy bun bun bunnies and their ilk live; Zanydu, which anarchists like Flux call home; and Malevoland, where true evil lurks. Trouble is, Count Nefarious of Malevoland has gotten tired of the current balance of power, and has started making bombing raids on the other two regions in his Malevolator, using its ray of evil to turn them as dark and twisted as his homeland. King Hugh of Cutopia promises Drew that, if he first saves them all by collecting the parts necessary to build a Cutifier — the antidote to the Malevolator — he will send Drew back to his own world.

All of that is laid out in the opening movie, after which the plot gears are more or less shifted into neutral while you commence wandering around solving puzzles. And it’s here that the game presents its most welcome surprise: unlike so many other multimedia productions of this era that were sold primarily on the basis of their audiovisuals, this game’s puzzle design is clever, complex, and carefully crafted. I have no knowledge of precisely how this game was tested and balanced, but I have to assume these things were done, and done well. It’s not an easy game by any means — there are dozens and dozens of puzzles here, layered on top of one another in a veritable tangle of dependencies — but it’s never an unfair one. In the best tradition of LucasArts, there are no deaths or dead ends. If you are willing to observe the environment with a meticulous eye, experiment patiently, and enter into the cartoon logic of a world where holes are portable and five minutes on a weight bench can transform your physique, you might just be able to solve this one without hints.

The puzzles manage the neat trick of being whimsical without ever abandoning logic entirely. Take, for example, the overarching meta-puzzle you’re attempting to solve as you wander through the lands. Assembling the Cutifier requires combining matched pairs of objects, such as sugar and spice (that’s a freebie the game gives you to introduce the concept). Other objects waiting for their partners include a dagger, some stripes, a heart, some whistles, some polish, etc. If possible combinations have started leaping to mind already, you might really enjoy this game. If they haven’t, on the other hand, you might not, or you might have fallen afoul of the exception to the rule of its general solubility: it requires a thoroughgoing knowledge of idiomatic English, of the sort that only native speakers or those who have been steeped in the language for many years are likely to possess.

While you’re working out its gnarly puzzle structure, Toonstruck is doing its level best to keep you amused in other ways. Players who are only familiar with Christopher Lloyd from his scenery-chewing portrayals in Back to the Future and Who Framed Roger Rabbit may be surprised at his relatively low-key performance here; more often than not, he’s acting as the straight man for his wise-cracking sidekick Flux Wildly and other gleefully over-the-top cartoon personalities. In truth, Lloyd was (and is) a more multi-faceted and flexible actor than his popular image might suggest, having decades of experience in film, television, and theater productions of all types behind him. His performance here, in what must have been extremely trying circumstances — he was, after all, constantly expected to say his lines to characters who weren’t actually there — feels impressively natural.

Drew Blanc’s friendship with Flux Wildly is the emotional heart of the story. Their relationship can’t help but bring to mind the much-loved LucasArts adventuring duo Sam and Max. Once again, we have here a subdued humanoid straight man paired with a less anthropomorphic pal who comes complete with a predilection for violence. Once again the latter keeps things lively with his antics and his constant patter. And once again you the player can use him like an inventory item from time to time on the problems you encounter, sometimes with productive and often with amusing results. Flux Wildly may just be my favorite thing in the game. I just wish he was around through the whole game; more on that momentarily.

Although Flux is a lot of fun, the writing in general is a bit of a mixed bag. As, for that matter, were contemporary reviews of the writing. Computer Gaming World found Toonstruck “hilarious”: “With humor that ranges from cutesy to risqué, Toonstruck keeps the laughter coming nonstop.” Next Generation, on the other hand, wrote that “the designers have tried desperately hard to make the game zany, wacky, crazy, twisted, madcap, and side-splittingly hilarious — but it just isn’t. The dialog, slapstick humor, and relentless ‘comedy’ situations are tired. You’ve seen most of these jokes done better 40 years ago.”

In a way, both takes are correct. Toonstruck is sometimes genuinely clever and funny, but just as often feels like it’s trying way too hard. There are reports that the intended audience for the game drifted over its three years in development, that it was originally planned as a kid-friendly game and only slowly moved in a more adult direction. This may explain some of the jarring tonal shifts inside its world. At times, the writing doesn’t seem to know what it wants to be, veering wildly from the light and frothy to that depressingly common species of videogame humor that mistakes transgression for wit. The most telling example is also the one scene that absolutely no one who has ever played this game, or for that matter merely watched it being played, can possibly forget, even if she wants to.

While exploring the land of Cutopia, you come upon a sweet, matronly dairy cow and her two BFFs, a cute and fuzzy sheep and a tired old horse. Some time later, Count Nefarious arrives to zap their farm with his Malevolator. Next time you visit, you find that the horse has been turned into glue. Meanwhile the cow is spread-eagled on a “Wheel-O-Luv,” her udders dangling pendulously in a way that looks downright pornographic, cackling with masochistic delight while the leather-clad sheep gives her her delicious punishment. Words fail me… this is something you have to see for yourself.


Here and in a few other places, Toonstruck is just off, weird in a way that is not just unfunny or immature but that actually leaves you feeling vaguely uncomfortable. It demonstrates that, for all Virgin Interactive’s mainstream ambitions, they were still a long way from mustering the thematic, aesthetic, and writerly unity that goes into a slick piece of mass-market entertainment.

Toonstruck is at its best when it is neither trying to transgress for the sake of it nor to please the mass market, but rather when it’s delicately skewering a certain stripe of sickly sweet, creatively bankrupt, lowest-common-denominator children’s programming that was all over television during the 1980s and 1990s. Think of The Care Bears, a program that was drawn by some of the same Nelvana animators who worked on Toonstruck; they must surely have enjoyed ripping their mawkish past to shreds here. Or, even better, think of Barney the hideous purple dinosaur, dawdling through excruciating songs with ripped-off melodies and cloying lyrics that sound like they were made up on the spot. Few media creations have ever been as easy to hate as him, as the erstwhile popularity of the Usenet newsgroup alt.barney.dinosaur.die.die.die will attest.

Being created by so many insiders to the cartoon racket, Toonstruck is well placed to capture the very adult cynicism that oozes from such productions, engineered as they were mainly to sell plush toys to co-dependent children. It does so not least through King Hugh of Cutopia himself, who turns out to be — spoiler alert! — not quite the heroic exemplar of inclusiveness he’s billed as. Meanwhile Flux Wildly and his friends from Zanydu stand for a different breed of cartoons, ones which demonstrate a measure of respect for their young audience.

There does eventually come a point in Toonstruck, more than a few hours in, when you’ve unraveled the web of puzzles and assembled all twelve matched pairs that are required for the Cutifier. By now you feel like you’ve played a pretty complete game, and are expecting the end credits to start rolling soon. Instead the game pulls its next big trick on you: everything goes to hell in a hand basket and you find yourself in Count Nefarious’s dungeon, about to begin a second act whose existence was heretofore hinted at only by the presence of a second, as-yet unused CD in the game’s (real or virtual) box.

Most players agree that this unexpected second act is, for all the generosity demonstrated by the mere fact of its existence, considerably less enjoyable than the first. Your buddy Flux Wildly is gone, the environment darker and more constrained, and your necessary path through the plot more linear. It feels austere and lonely in contrast to what has come before — and not in a good way. Although the puzzle design remains solid enough, I imagine that this is the point where many players begin to succumb to the temptations of hints and walkthroughs. And it’s hard to blame them; the second act is the very definition of an anticlimax — almost a dramatic non sequitur in the way it throws the game out of its natural rhythm.

But a real ending — or at least a form of ending — does finally arrive. Drew Blanc defeats Count Nefarious and is returned to his own world. All seems well — until Flux Wildly contacts him again in the denouement to tell him that Nefarious really isn’t done away with just yet. Incredibly, this was once intended to mark the beginning of a third act, of four in total, all in the service of a parable about the creative process at which the finished game only hints. Laboring under their managers’ ultimatum to ship or else, the developers had to fall back on the forlorn hope of a surprise, sequel-justifying hit in the face of the marketplace headwinds that were blowing against the game. Jennifer McWilliams:

Toonstruck was meant to be a funny story about defeating some really weird bad guys, as it was when released, but originally it was also about defeating one’s own creative demons. It was a tribute to creative folks of all types, and was meant to offer encouragement to any of them that had lost their way. So, the second part of the game had Drew venturing into his own psyche, facing his fears (like a psychotically overeager dentist), living out his fantasies (like meeting his hero, Vincent van Gogh), and eventually finding a way to restore his creative spark.

It does sound intriguing on one level, but it also sounds like much, much too much for a game that already feels rather overstuffed. If the full conception had been brought to fruition, Toonstruck would have been absolutely massive, in the running for the biggest graphic adventure ever made. But whether its characters and puzzle mechanics could have supported the weight of so much content is another question. It seems that all or most of the animation necessary for acts three and four was created — more fruits of that $8 million budget — and this has occasionally led fans to dream of a hugely belated sequel. Yet it is highly doubtful whether any of the animation still exists, or for that matter whether the economics of using it make any more sense now than they did in the mid-1990s. Once all but completely forgotten, Toonstruck has enjoyed a revival of interest since it was put up for sale on digital storefronts some years ago. But only a small one: it would be a stretch to label it even a cult classic.

What we’re left with instead, then, is a fascinating exemplar of a bygone age; the fact that this game could only have appeared in the mid-1990s is a big part of its charm. Then, too, there’s a refreshing can-do spirit about it. Tasked with making something amazing, its creators did their honest best to achieve just that, on multiple levels. If the end result is imperfect in some fairly obvious ways, it never fails to be playable, which is more than can be said for many of its peers. Indeed, it remains well worth playing today for anyone who shivers with anticipation at the prospect of a pile of convoluted, deviously interconnected puzzles. Ditto for anyone who just wants to know what kind of game $8 million would buy you back in 1996.

(Sources: Starlog of May 1984 and August 1993; Computer Gaming World of January 1997; Electronic Entertainment of December 1995; Next Generation of January 1997, February 1997, and April 1998; PC Zone of August 1995, August 1996, and June 1998; Questbusters 117; Retro Gamer 174.

Toonstruck is available for digital purchase on GOG.com.)

 


A Web Around the World, Part 11: A Zero-Sum Game

Mosaic Communications was founded on $13 million in venture capital, a pittance by the standards of today but an impressive sum by those of 1994. Marc Andreessen and Jim Clark’s business plan, if you can call it that, would prove as emblematic of the era of American business history they were inaugurating as anything they ever did. “I don’t know how in hell we’re going to make money,” mused Clark, “but I’ll put money behind it, and we’ll figure out a way. A market growing as quickly as that [one] is going to have money to be made in it.” This naïve faith that nebulous user “engagement” must inevitably be transformed into dollars in the end by some mysterious alchemical process would be all over Silicon Valley throughout the dot-com boom — and, indeed, has never entirely left it even after the bust.

Andreessen and Clark’s first concrete action after the founding was to contact everyone at the National Center for Supercomputing Applications who had helped out with the old Mosaic browser, asking them to come to Silicon Valley and help make the new one. Most of their targets were easily tempted away from the staid nonprofit by the glamor of the most intensely watched tech startup of the year, not to mention the stock options that were dangled before them. The poaching of talent from NCSA secured for the new company some of the most seasoned browser developers in the world. And, almost as importantly, it also served to cut NCSA’s browser — the new one’s most obvious competition — off at the knees. For without these folks, how was NCSA to keep improving its browser?

The partners were playing a very dangerous game here. The Mosaic browser and all of its source code were owned by NCSA as an organization. Not only had Andreessen and Clark made the cheeky move of naming their company after a browser they didn’t own, but they had now stolen away from NCSA those people with the most intimate knowledge of how said browser actually worked. Fortunately, Clark was a grizzled enough veteran of business to put some safeguards in place. He was careful to ensure that no one brought so much as a line of code from the old browser with them to Mosaic Communications. The new one would be entirely original in terms of its code if not in terms of the end-user experience; it would be what the Valley calls a “clean-room implementation.”

Andreessen and Clark were keenly aware that the window of opportunity to create the accepted successor to NCSA Mosaic must be short. They made it clearer with every move they made that they saw the World Wide Web as a zero-sum game. They consciously copied the take-no-prisoners approach of Bill Gates, CEO of Microsoft, which had by now replaced IBM as the most powerful and arguably the most hated company in the computer industry. Marc Andreessen:

We knew that the key to success for the whole thing was getting ubiquity on the [browser] side. That was the way to get the company jump-started because that gives you essentially a broad platform to build off of. It’s basically a Microsoft lesson, right? If you get ubiquity, you have a lot of options, a lot of ways to benefit from that. You can get paid by the product that you are ubiquitous on, but you can also get paid on products that benefit as a result. One of the fundamental lessons is that market share now equals revenue later, and if you don’t have the market share now, you are not going to have revenue later. Another fundamental lesson is that whoever gets the volume does win in the end. Just plain wins. There has to be just one single winner in a market like this.

The founders pushed their programmers hard, insisting that the company simply had to get the browser out by the fall of 1994, which gave them a bare handful of months to create it from scratch. To spur their employees on, they devised a semi-friendly competition. They divided the programmers into three teams, one working on a browser for Unix, one on the Macintosh version, and one on the Microsoft Windows version. The teams raced one another from milestone to milestone, and compared their browsers’ rendering speeds down to the millisecond, all for weekly bragging rights and names on walls of fame and/or shame. One mid-level manager remembers how “a lot of times, people were there 48 hours straight, just coding. I’ve never seen anything like it, in terms of honest-to-God, no BS, human endurance.” Inside the office, the stakes seemed almost literally life or death. He recalls an attitude that “we were fighting some war and that we could win.”

In the meantime, Jim Clark was doing some more poaching. He hired away from his old company Silicon Graphics an ace PR woman named Rosanne Siino. She became the mass-media architect of the dot-com founder as genius, visionary, and all-around rock star. “We had this 22-year-old kid who was pretty damn interesting, and I thought, ‘There’s a story here,'” she says. She proceeded to pitch that story to anyone who would take her calls.

Andreessen, for his part, slipped into his role fluidly enough after just a bit of coaching. “If you get more visible,” he reasoned, “it counts as advertising, and it doesn’t cost anything.” By the mid-summer of 1994, he was doing multiple interviews most days. Tall and athletically built, well-dressed and glib — certainly no one’s stereotype of a pasty computer nerd — he was perfect fodder for tech journals, mainstream newspapers, and supermarket tabloids alike. “He’s young, he’s hot, and he’s here!” trumpeted one of the last above a glamor shot of the wunderkind.

The establishment business media found the rest of the company to be almost as interesting if not quite as sexy, from its other, older founder who was trying to make lightning strike a second time to the fanatical young believers who filled the cubicles; stories of crunch time were more novel then than they would soon become. Journalists fixated on the programmers’ whimsical mascot, a huge green and purple lizard named Mozilla who loomed over the office from his perch on one wall. Some were even privileged to learn that his name was a portmanteau of  “Mosaic” and “Godzilla,” symbolizing the company’s intention to annihilate the NCSA browser as thoroughly as the movie monster had leveled Tokyo. On the strength of sparkling anecdotes like this, Forbes magazine named Mosaic Communications one of its “25 Cool Companies” — all well before it had any products whatsoever.

Mozilla, the unofficial mascot of Mosaic (later Netscape) Communications. He would prove to be far longer-lived than the company he first represented. Today he still lends his name to the Mozilla Foundation, which maintains an open-source browser and fights for open standards on the Web — somewhat ironically, given that the foundation’s origins lie in the first company to be widely perceived as a threat to those standards.

The most obvious obstacle to annihilating the NCSA browser was the latter’s price: it was, after all, free. Just how was a for-profit business supposed to compete with that price point? Andreessen and Clark settled on a paid model that nevertheless came complete with a nudge and a wink. The browser they called Mosaic Netscape would technically be free only to students and educators. But others would be asked to pay the $39 licensing fee only after a 90-day trial period — and, importantly, no mechanism would be implemented to coerce them into doing so even after the trial expired. Mosaic Communications would thus make the cornerstone of its business strategy Andreessen’s sanguine conviction that “market share now equals revenue later.”

Mosaic Netscape went live on the Internet on October 13, 1994. And in terms of Andreessen’s holy grail of market share at least, it was an immediate, thumping success. Within weeks, Mosaic Netscape had replaced NCSA Mosaic as the dominant browser on the Web. In truth, it had much to recommend it. It was blazing fast on all three of the platforms on which it ran, a tribute to the fierce competition between the teams who had built its different versions. And it sported some useful new HTML tags, such as “<center>” for centering text and “<blink>” for making it do just that. (Granted, the latter was rather less essential than the former, but that wouldn’t prevent thousands of websites from hastening to make use of it; as is typically the case with such things, the evolution of Web aesthetics would happen more slowly than that of Web technology.) Most notably of all, Netscape added the possibility of secure encryption to the Web, via the Secure Sockets Layer (SSL). The company rightly considered SSL to be an essential prerequisite to online commerce; no one in their right mind was going to send credit-card numbers in the clear.

But, valuable though these additions (mostly) were, they raised the ire of many of those who had shepherded the Web through its early years, not least among them Tim Berners-Lee. Although they weren’t patented and thus weren’t proprietary in a legal sense — anyone was free to implement them if they could figure out how they worked — Mosaic Communications had rolled them out without talking to anyone about what they were doing, leaving everyone else to play catch-up in a race of the company’s own making.

Still, such concerns carried little weight with most users. They were just happy to have a better browser.

More pressing for Andreessen and Clark were the legal threats that were soon issuing from NCSA and the University of Illinois, demanding up to 50 percent of the revenue from Mosaic Netscape, which they alleged was by rights at least half theirs. These continued even after Jim Clark produced a report from a forensic software expert which stated that, for all that they might look and feel the same, NCSA Mosaic and Mosaic Netscape shared no code at all. Accepting at last that naming their company after the rival browser whose code they insisted they were not stealing had been terrible optics, Andreessen and Clark rechristened Mosaic Communications as Netscape Communications on November 14, 1994; its browser now became known as Netscape Navigator. Seeking a compromise to make the legal questions go away once and for all, Clark offered NCSA a substantial amount of stock in Netscape, only to be turned down flat. In the end, he agreed to a cash settlement instead; industry rumor placed it in the neighborhood of $2 million. NCSA and the university with which it was affiliated may have felt validated by the settlement, but time would show that it had not been an especially wise decision to reject Clark’s first overture: ten months later, the stock NCSA had been offered was worth $17 million.



For all its exciting growth, the World Wide Web had made relatively few inroads with everyday Americans to this point. But all of that changed in 1995, the year when the Web broke through in earnest. There was now enough content there to make it an interesting place for the ordinary Joe or Jane to visit, as well as a slick, user-friendly browser for him or her to use in Netscape Navigator.

Just as importantly, there were for the first time enough computers in daily use in American homes to make something like the Web a viable proposition. With the more approachable Microsoft Windows having replaced the cryptic, command-line-driven MS-DOS as the typical face of consumer computing, with new graphics cards, sound cards, and CD-ROM drives providing a reasonably pleasing audiovisual experience, with the latest word processors and spreadsheets being more powerful and easier to use than ever before, and with the latest microprocessors and hard drives allowing it all to happen at a reasonably brisk pace, personal computers had crossed a Rubicon in the last half-decade or so, to become gadgets that people who didn’t find computers themselves intrinsically fascinating might nonetheless want to own and use. Netscape Navigator was fortunate enough to hit the scene just as these new buyers were reaching a critical mass. They served to prime the pump. And then, once just about everyone with a computer seemed to be talking about the Web, the whole thing became a self-reinforcing virtuous circle, with computer owners streaming onto the Web and the Web in turn driving computer sales. By the summer of 1995, Netscape Navigator had been installed on at least 10 million computers.

Virtually every major corporation in the country that didn’t have a homepage already set one up during 1995. Many were little more than a page or two of text and a few corporate logos at this point, but a few did go further, becoming in the process harbingers of the digital future. Pizza Hut, for example, began offering an online ordering service in select markets, and Federal Express made it possible for customers to track the progress of their packages around the country and the world from right there in their browsers. Meanwhile Silicon Valley and other tech centers played host to startup after startup, including plenty of names we still know well today: the online bookstore (and later anything-store) Amazon, the online auction house eBay, and the online dating service Match.com among others were all founded this year.

Recognizing an existential threat when they saw one, the old guard of circumscribed online services such as CompuServe, who had pioneered much of the social and commercial interaction that was now moving onto the open Web, rushed to devise hybrid business models that mixed their traditional proprietary content with Internet access. Alas, it would avail most of them nothing in the end; the vast majority of these dinosaurs would shuffle off to extinction before the decade was out. Only an upstart service known as America Online, a comparative latecomer on the scene, would successfully weather the initial storm, thanks mostly to astute marketing that positioned it as the gentler, friendlier, more secure alternative to the vanilla Web for the non-tech-savvy consumer. Its public image as a sort of World Wide Web with training wheels would rake in big profits even as it made the service and its subscribers objects of derision for Internet sophisticates. But even America Online would not be able to maintain its stranglehold on Middle America forever. By shortly after the turn of the millennium — and shortly after an ill-advised high-profile merger with the titan of old media Time Warner — it too would be in free fall.



One question stood foremost in the minds of many of these millions who were flocking onto the Web for the first time: how the heck were they supposed to find anything here? It was, to be sure, an ironic question to be asking, given that Tim Berners-Lee had invented his World Wide Web for the express purpose of making the notoriously confounding pre-Web Internet easier to navigate. Yet as websites bred and spawned like rabbits in a Viagra factory, it became a relevant one once again.

The idea of a network of associative links was as valid as ever — but just where were you to start when you knew that you wanted to, say, find out the latest rumors about your favorite band Oasis? (This was the mid-1990s, after all.) Once you were inside the Oasis ecosystem, as it were, it was easy enough to jump from site to site through the power of association. But how were you to find your way inside in the first place when you first fired up your browser and were greeted with a blank page and a blank text field waiting for you to type in a Web address you didn’t know?

One solution to this conundrum was weirdly old-fashioned: brick-and-mortar bookstore shelves were soon filling up with printed directories that cataloged the Web’s contents. But this was a manifestly inadequate solution as well as a retrograde one; what with the pace of change on the Web, such books were out of date before they were even sold. What people really needed was a jumping-off point on the Web itself, a home base from which to start each journey down the rabbit hole of their particular interests, offering a list of places to go that could grow and change as fast as the Web itself. Luckily, two young men with too much time on their hands had created just such a thing.

Jerry Yang and David Filo were rather unenthusiastic Stanford graduate students in computer science during the early 1990s. Being best friends, they discovered the Web together shortly after the arrival of the NCSA Mosaic browser. Already at this early date, finding the needles in the digital haystack was becoming difficult. Therefore they set up a list of links they found interesting, calling it “Jerry and David’s Guide to the World Wide Web.” This was not unique in itself; thousands of others were putting up similar lists of “cool links.” Yang and Filo were unique, however, in how much energy they devoted to the endeavor.

Jerry Yang and David Filo. Bare feet were something of a staple of Silicon Valley glamor shots, serving as a delightful shorthand for informal eccentricity in the eyes of the mass media.

They were among the first wave of people to discover the peculiar, dubiously healthy dopamine-release mechanism that is online attention, whether measured in page views, as in those days, or likes or retweets, as today. The more traffic that came their way, the more additional traffic they wanted. Instead of catering merely to their personal interests, they gradually turned their site into a comprehensive directory of the Web — all of it, in the ideal at least. They surfed tirelessly day after day, neglecting girlfriends, family, and personal hygiene, not to mention their coursework, trying to keep up with the Sisyphean task of cataloging every new site of note that went up on the Web, then slotting it into a branching hierarchy of hundreds of categories and sub-categories.

In April of 1994, they decided that their site needed a catchier name. Their initial thought was to combine their last names in some ingenious way, but they couldn’t find one that worked. So, they focused on the name of Yang, by nature the more voluble and outgoing of the pair. They were steeped enough in hacker culture to think of a popular piece of software called YACC; it stood for “Yet Another Compiler Compiler,” but was pronounced like the Himalayan beast of burden. That name was obviously taken, but perhaps they could come up with something else along those lines. They looked in a dictionary for words starting with “ya”: “yawn,” “yawp,” “yaw,” “y-axis”… “yahoo.” The good book told them that “yahoo” derived from Jonathan Swift’s Gulliver’s Travels, where it referred to “any of a race of brutish, degraded creatures having the form and all of the vices of man.” Whatever — they just liked the sound of the word. They racked their brains until they had turned it into an acronym: “Yet Another Hierarchical Officious Oracle.” Whatever. It would do. A few months later, they stuck an exclamation point at the end as a finishing touch. And so Yahoo! came to be.

Yahoo! very shortly after it received its name, but before it received its final flourish of an exclamation point.

For quite some time after that, not much changed on the surface. Yang and Filo had by now appropriated a neglected camping trailer on one of Stanford’s back parking lots, which they turned into their squalid headquarters. They tried to keep up with the flood of new content coming onto the Web every day by living in the trailer, trading four-hour shifts with one another around the clock, working like demons for that sweet fix of ever-increasing page-view numbers. “There was nothing else in the world like it,” says Yang. “There was such camaraderie, it was like driving off a cliff.”

But there came a point, not long after the start of that pivotal Web year of 1995, when Yang and Filo had to recognize that they were losing their battle with new content. So, they set off in search of the funding they would need to turn what had already become in the minds of many the Web’s de-facto “front page” into a real business, complete with employees they could pay to do what they had been doing for free. They seriously considered joining America Online, then came even closer to signing on with Netscape, a company which had already done much for their popularity by placing their site behind a button displayed prominently by the Navigator browser. In the end, though, they opted to remain independent. In April of 1995, they secured $4 million in financing, thanks to a far-sighted venture capitalist named Mike Moritz, who made the deal in the face of enormous skepticism from his colleagues. “The venture community [had never] invested in anything that gave a product away for free,” he remembers.

Or had they? It all depended on how you looked at it. Yang and Filo noted that television broadcasters had been giving their product away for free for decades as far as the individual viewer was concerned, making their money instead by selling access to their captive audience to third-party advertisers. Why couldn’t the same thing work on the Web? The demographic that visited Yahoo! regularly was, after all, an advertiser’s dream, being largely composed of young adults with disposable income, attracted to novelty and with enough leisure time to indulge that attraction.

So, advertising started appearing on Yahoo! very shortly after it became a real business. Adherents to the old, non-commercial Web ideal grumbled, and some of them left in a huff, but their numbers were dwarfed by the continuing flood of new Netizens, who tended to perceive the Web as just another form of commercial media and were thus unfazed when they were greeted with advertising there. With the help of a groundbreaking Web analytics firm known as I/PRO, Yahoo! came up with ways to target its advertisements ever more precisely to each individual user’s interests, which she revealed to the company whether she wanted to or not through the links she clicked. The Web, Yang and Filo were at pains to point out, was the most effective advertising environment ever to appear. Business journalist Robert H. Reid, who profiled Netscape, Yahoo!, I/PRO, and much of the rest of the early dot-com startup scene for a book published in 1997, summed up the advantages of online advertising thusly:

There is a limit to how targeted advertising can be in traditional media. [This is] because any audience that is larger than one, even a fairly small and targeted [audience], will inevitably have its diversity elements (certain readers of the [Wall Street] Journal’s C section surely do not care about new bond issues, while certain readers of Field and Stream surely do). The Web has the potential to let marketers overcome this because, as an interactive medium, it can enable them to target their messages with surgical precision. Database technology can allow entirely unique webpages to be generated and served in moments based upon what is known about a viewer’s background, interests, and prior trajectory through a site. A site with a diverse audience can therefore direct one set of messages to high-school boys and a wholly different one to retired women. Or it could go further than this — after all, not all retired women are interested in precisely the same things — and present each visitor with an entirely unique message or experience.

Then, too, on the Web advertisers could do more than try to lodge an impression in a viewer’s mind and hope she followed up on it later, as was the case with television. They could rather present an advertisement as a clickable link that would take her instantly to their own site, which she could browse to learn far more about their products than she ever could from a one-minute commercial, which she might even be able to use to buy their products then and there — instant gratification for everyone involved.

Unlike so many Web firms before and after it, Yahoo! became profitable right away on the strength of reasoning like that. Even when Netscape pulled the site from Navigator at the end of 1995, replacing it with another one that was willing to pay dearly for the privilege — another sign of the changing times — it only briefly affected Yahoo!’s overall trajectory. As far as the mainstream media was concerned, Yang and Filo — these two scruffy graduate students who had built their company in a camping trailer — were the best business story since the rise of Netscape. If anything, Jerry Yang’s personal history made Yahoo! an even more compelling exemplar of the American Dream: he had come to the United States from Taiwan at the age of ten, when the only word of English he knew was “shoe.” When Yang showed that he could be every bit as charming as Marc Andreessen, that only made the story that much better.

Declaring that Yahoo! was a media rather than a technology company, Yang displayed a flair for branding one would never expect from a lifelong student: “It’s an article of culture. This differentiates Yahoo!, makes it cool, and gives it a market premium.” Somewhat ironically given its pitch that online advertising was intrinsically better than television advertising, Yahoo! became the first of the dot-com startups to air television commercials, all of which concluded with a Gene Autry soundalike yodeling the name, an unavoidable earworm for anyone who heard it. A survey conducted in 1996 revealed that half of all Americans already knew the brand name — a far larger percentage than that which had actually ventured online by that point. It seems safe to say that Yahoo! was the most recognizable of all the early Web brands, more so even than Netscape.


Trailblazing though Yahoo!’s business model was in many ways, its approach to its core competency seems disarmingly quaint today. Yahoo! wasn’t quite a search engine in the way we think of such things; it was rather a collection of sanctioned links, hand-curated and methodically organized by a small army of real human beings. Well before television commercials like the one above had begun to air, the dozens of “surfers” it employed — many of them with degrees in library science — had been relieved of the burden of needing to go out and find new sites for themselves by their own site’s ubiquity. Owners of sites which wished to be listed were expected to fill out a form, then wait patiently for a few days or weeks for someone to get to their request and, if it passed muster, slot it into Yahoo!’s ever-blossoming hierarchy.

Yahoo! as it looked in October of 1996. A search field has recently been added, but it searches only Yahoo!’s hand-curated database of sites rather than the Web itself.

The alternative approach, which was common among Yahoo!’s competitors even at the time, was to send out automated “web crawlers,” programs that jump from link to link, in order to index all of the content on the Web into a searchable database. But as far as many Netizens were concerned in the mid-1990s, that approach just didn’t work all that well. A search for “Oasis” on one of these sites was likely to show you hundreds of pages dealing with desert ecosystems, all jumbled together with those dealing with your favorite rock band. It would be some time before search engines would be developed that could divine what you were really looking for based on context, that could infer from your search for “Oasis band” that you really, really didn’t want to read about deserts at that particular moment. Search engines like the one around which Google would later build its empire require a form of artificial intelligence — still not the computer consciousness of the old “giant brain” model of computing, but a more limited, context-specific form of machine learning — that would not be quick or easy to develop. In the meantime, there was Yahoo! and its army of human librarians.
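The crawler-and-index approach can be demonstrated in miniature. The sketch below uses a handful of synthetic in-memory “pages” rather than real networking, but it shows both halves of the idea: a breadth-first crawl that follows links to build an inverted index, and a naive keyword lookup that, exactly as described above, cannot tell Oasis the band from oasis the landform.

```python
from collections import deque

# A tiny synthetic "web": url -> (page text, outgoing links).
PAGES = {
    "home":   ("welcome to the web", ["music", "nature"]),
    "music":  ("oasis band britpop single", []),
    "nature": ("oasis desert ecosystem water", []),
}

def crawl(start):
    """Breadth-first crawl from `start`, building an inverted index
    that maps each word to the set of pages containing it."""
    index, queue, seen = {}, deque([start]), set()
    while queue:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        text, links = PAGES[url]
        for word in text.split():
            index.setdefault(word, set()).add(url)
        queue.extend(links)
    return index

index = crawl("home")
# A bare keyword search returns both senses of "oasis", jumbled together.
print(sorted(index["oasis"]))  # ['music', 'nature']
```

Disambiguating those two results is the context problem that keyword search engines of the mid-1990s could not solve, and that Yahoo!’s human curators sidestepped entirely.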



And there were also the first Internet IPOs. As ever, Netscape rode the crest of the Web wave, the standard bearer for all to follow. On the eve of its IPO on August 9, 1995, the shares were priced at $28 each, valuing the company at over $1 billion, even though its total revenues to date amounted to $17 million and its bottom line to date tallied a loss of $13 million. Nevertheless, when trading opened the share price immediately soared to $74.75. “It took General Dynamics 43 years to become a corporation worth today’s $2.7 billion,” wrote The Wall Street Journal. “It took Netscape Communications about a minute.”

Yahoo!’s turn came on April 12, 1996. Its shares were priced at $13 when the day’s trading opened, and peaked at $43 over the course of that first day, giving the company an implied value of $850 million.

It was the beginning of an era of almost incomprehensible wealth generated by the so-called “Internet stocks,” often for reasons that were hard for ordinary people to understand, given how opaque the revenue models of so many Web giants could be. Even many of the beneficiaries of the stock-buying frenzy struggled to wrap their heads around it all. “Take, say, a Chinese worker,” said Lou Montulli, a talented but also ridiculously lucky programmer at Netscape. “I’m probably worth a million times the average Chinese worker, or something like that. It’s difficult to rationalize the value there. I worked hard, but did I really work that hard? I mean, can anyone work that hard? Is it possible? Is anyone worth that much?” Four of the ten richest people in the world today according to Forbes magazine — including the two richest of all — can trace the origins of their fortunes directly to the dot-com boom of the 1990s. Three more were already in the computer industry before the boom, and saw their wealth exponentially magnified by it. (The founders I’ve profiled in this article are actually comparatively small fish today. Their rankings on the worldwide list of billionaires as of this writing range from 792 in the case of David Filo to 1717 for Marc Andreessen.)

And what was Tim Berners-Lee doing as people began to get rich from his creation? He did not, as some might have expected, decamp to Silicon Valley to start a company of his own. Nor did he accept any of the “special advisor” roles that were his for the taking at a multitude of companies eager to capitalize on the cachet of his name. He did leave CERN, but made it only as far as Boston, where he founded the non-profit World Wide Web Consortium in partnership with MIT and others. The W3C, as it would soon become known, was created to lead the defense of open standards against those corporate and governmental forces which were already demonstrating a desire to monopolize and balkanize the Web. At times, there would be reason to question who was really leading whom; the W3C would, for example, be forced to write into its HTML standard many of the innovations which Netscape had already unilaterally introduced into its industry-leading browser. Yet the organization has undoubtedly played a vital role in keeping the original ideal of the Web from giving way completely to the temptations of filthy lucre. Tim Berners-Lee remains to this day the only director the W3C has ever known.

So, while Marc Andreessen and Jerry Yang and their ilk were becoming the darlings of the business pages, were buying sports cars and attending the most exclusive parties, Tim Berners-Lee was riding a bus to work every day in Boston, just another anonymous commuter in a gray suit. It was fall when he first arrived in his new home, and so, as he says, “the bus ride gave me time to revel in New England’s autumnal colours.” Many over the years have found it hard to believe he wasn’t bitter that his name had become barely a footnote in the reckoning of the business-page pundits who were declaring the Web — correctly, it must be said — the most important development in mass media in their lifetimes. But he himself insists — believably, it must be said — that he was not and is not resentful over the way things played out.

People sometimes ask me whether I am upset that I have not made a lot of money from the Web. In fact, I made some quite conscious decisions about which way to take in life. Those I would not change. What does distress me, though, is how important a question it seems to be for some. This happens mostly in America, not Europe. What is maddening is the terrible notion that a person’s value depends on how important and financially successful they are, and that that is measured in terms of money. This suggests disrespect for the researchers across the globe developing ideas for the next leaps in science and technology. Core in my upbringing was a value system that put monetary gain well in its place, behind things like doing what I really want to do. To use net worth as a criterion by which to judge people is to set our children’s sights on cash rather than on things that will actually make them happy.

It can be occasionally frustrating to think about the things my family could have done with a lot of money. But in general I’m fairly happy to let other people be in the Royal Family role…

Perhaps Tim Berners-Lee is the luckiest of all the people whose names we still recognize from that go-go decade of the 1990s, being the one who succeeded in keeping his humanity most intact by never stepping onto the treadmill of wealth and attention and “disruption” and Forbes rankings. Heaven help those among us who are no longer able to feel the joy of watching nature change her colors around them.



In 1997, Robert H. Reid wrote that “the inevitable time will come when the Web’s dawning years will seem as remote as the pioneering days of film seem today. Today’s best and most lavishly funded websites will then look as naïve and primitive as the earliest silent movies.” Exactly this has indeed come to pass. And yet if we peer beneath the surface of the early Web’s garish aesthetics, most of what we find there is eerily familiar.

One of the most remarkable aspects of the explosion of the Web into the collective commercial and cultural consciousness is just how quickly it occurred. In the three and one quarter years between the initial release of the NCSA Mosaic browser and the Yahoo! IPO, a new digital society sprang into being, seemingly from nothing and nowhere. It brought with it all of the possibilities and problems we still wrestle with today. For example, the folks at Netscape, Yahoo!, and other startups were the first to confront the tension between free speech and hate speech online. (Straining to be fair to everyone, Yahoo! reluctantly decided to classify the Ku Klux Klan under the heading of “White Power” rather than “Fascism,” much less booting it off their site completely.) As we’ve seen, the Internet advertising business emerged from whole cloth during this time, along with all of the privacy concerns raised by its determination to track every single Netizen’s voyages in the name of better ad targeting. (It’s difficult to properly tell the story of this little-loved but enormously profitable branch of business in greater depth because it has always been shrouded in so much deliberate secrecy.) Worries about Web-based pornography and the millions of children and adolescents who were soon viewing it regularly took center stage in the mass media, both illuminating and obscuring a huge range of questions — largely still unanswered today — about what effect this had on their psychology. (“Something has to be done,” said one IBM executive who had been charged with installing computers in classrooms, “or children won’t be given access to the Web.”) And of course the tension between open standards and competitive advantage remains of potentially existential importance to the Web as we know it, even if the browser that threatens to swallow the open Web whole is now Google Chrome instead of Netscape Navigator.

All told, the period from 1993 to 1996 was the very definition of a formative one. And yet, as we’ve seen, the Web — this enormous tree of possibility that seemed to so many to sprout fully formed out of nothing — had roots stretching back centuries. If we have learned anything over the course of the last eleven articles, it has hopefully been that no technology lives in a vacuum. The World Wide Web is nothing more nor less than the latest realization of a dream of instantaneous worldwide communication that coursed through the verse of Aeschylus, that passed through Claude Chappe and Samuel Morse and Cyrus Field and Alexander Graham Bell among so many others. Tellingly, almost all of those people who accessed the Web from their homes during the 1990s did so by dialing into it, using modems attached to ordinary telephone lines — a validation not only of Claude Shannon’s truism that information is information but of all of the efforts that led to such a flexible and sophisticated telephone system in the first place. Like every great invention since at least the end of prehistory, the World Wide Web stands on the shoulders of those which came before it.

Was it all worth it? Did all the bright sparks we’ve met in these articles really succeed in, to borrow one of the more odious clichés to come out of Silicon Valley jargon, “making the world a better place?” Clichés aside, I think it was, and I think they did. For all that the telegraph, the telephone, the Internet, and the World Wide Web have plainly not succeeded in creating the worldwide utopia that was sometimes promised by their most committed evangelists, I think that communication among people and nations is always preferable to the lack of same.

And with that said, it is now time to end this extended detour into the distant past — to end it here, with J.C.R. Licklider’s dream of an Intergalactic Computer Network a reality, and right on the schedule he proposed. But of course what I’ve written in this article isn’t really an end; it’s barely the beginning of what the Web came to mean to the world. As we step back into the flow of things and return to talking about digital culture and interactive entertainment on a more granular, year-by-year basis, the Web will remain an inescapable presence for us, being the place where virtually all digital culture lived after 1995 or so. I look forward to seeing it continue to evolve in real time, and to grappling alongside all of you with the countless Big Questions it will continue to pose for us.

(Sources: the books A Brief History of the Future: The Origins of the Internet by John Naughton, Communication Networks: A Concise Introduction by Jean Walrand and Shyam Parekh, Weaving the Web by Tim Berners-Lee, and Architects of the Web by Robert H. Reid. Online sources include the Pew Research Center’s “World Wide Web Timeline” and Forbes‘s up-to-the-minute billionaires scoreboard.)

 


A Web Around the World, Part 10: A Web of Associations

While wide-area computer networking, packet switching, and the Internet were coming of age, all of the individual computers on the wire were becoming exponentially faster, exponentially more capacious internally, and exponentially smaller externally. The pace of their evolution was unprecedented in the history of technology; had automobiles been improved at a similar rate, the Ford Model T would have gone supersonic within ten years of its introduction. We should take a moment now to find out why and how such a torrid pace was maintained.

As Claude Shannon and others realized before World War II, a digital computer in the abstract is an elaborate exercise in boolean logic, a dynamic matrix of on-off switches — or, if you like, of ones and zeroes. The more of these switches a computer has, the more it can be and do. The first Turing-complete digital computers, such as ENIAC and Whirlwind, implemented their logical switches using vacuum tubes, a venerable technology inherited from telephony. Each vacuum tube was about as big as an incandescent light bulb, consumed a similar amount of power, and tended to burn out almost as frequently. These factors made the computers which employed vacuum tubes massive edifices that required as much power as the typical city block, even as they struggled to maintain an uptime of more than 50 percent — and all for the tiniest sliver of one percent of the overall throughput of the smartphones we carry in our pockets today. Computers of this generation were so huge, expensive, and maintenance-heavy in relation to what they could actually be used to accomplish that they were largely limited to government-funded research institutions and military applications.

Computing’s first dramatic leap forward in terms of its basic technological underpinnings also came courtesy of telephony. More specifically, it came in the form of the transistor, a technology which had been invented at Bell Labs in December of 1947 with the aim of improving telephone switching circuits. A transistor could function as a logical switch just as a vacuum tube could, but it was a minute fraction of the size, consumed vastly less power, and was infinitely more reliable. The computers which IBM built for the SAGE project during the 1950s straddled this technological divide, employing a mixture of vacuum tubes and transistors. But by 1960, the computer industry had fully and permanently embraced the transistor. While still huge and unwieldy by modern standards, computers of this era were practical and cost-effective for a much broader range of applications than their predecessors had been; corporate computing started in earnest in the transistor era.

Nevertheless, wiring together tens of thousands of discrete transistors remained a daunting task for manufacturers, and the most high-powered computers still tended to fill large rooms if not entire building floors. Thankfully, a better way was in the offing. Already in 1958, a Texas Instruments engineer named Jack Kilby had come up with the idea of the integrated circuit: a collection of miniaturized transistors and other electrical components embedded in a silicon wafer, the whole being suitable for stamping out quickly in great quantities by automated machinery. Kilby invented, in other words, the soon-to-be ubiquitous computer chip, which could be wired together with its mates to produce computers that were not only smaller but easier and cheaper to manufacture than those that had come before. By the mid-1960s, the industry was already in the midst of the transition from discrete transistors to integrated circuits, producing some machines that were no larger than a refrigerator; among these was the Honeywell 516, the computer which was turned into the world’s first network router.

As chip-fabrication systems improved, designers were able to miniaturize the circuitry on the wafers more and more, allowing ever more computing horsepower to be packed into a given amount of physical space. An engineer named Gordon Moore proposed the principle that has become known as Moore’s Law: he calculated that the number of transistors which can be stamped into a chip of a given size doubles every second year. (When he first stated his law in 1965, Moore actually proposed a doubling every single year, but revised his calculations in 1975.) In July of 1968, Moore and a colleague named Robert Noyce formed the chip maker known as Intel to make the most of Moore’s Law. The company has remained on the cutting edge of chip fabrication to this day.
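The compounding described by Moore’s Law is easy to underestimate; a quick back-of-the-envelope projection makes the point. The starting figure below (roughly 2,300 transistors on the Intel 4004 of 1971) is well documented; the two-year doubling period is the law as stated above.

```python
def projected_transistors(start_count, start_year, target_year, period=2):
    """Project a transistor count forward under Moore's Law:
    one doubling per `period` years."""
    doublings = (target_year - start_year) // period
    return start_count * 2 ** doublings

# From the Intel 4004's ~2,300 transistors in 1971, twenty doublings
# (forty years) already lands in the billions.
print(projected_transistors(2300, 1971, 2011))  # 2411724800, i.e. ~2.4 billion
```

That forty-year projection happens to land in the same order of magnitude as actual high-end processors of the early 2010s, which is why the “law” held such sway over the industry’s planning for so long.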

The next step was perhaps inevitable, but it nevertheless occurred almost by accident. In 1971, an Intel engineer named Federico Faggin put all of the circuits making up a computer’s arithmetic, logic, and control units — the central “brain” of a computer — onto a single chip. And so the microprocessor was born. No one involved with the project at the time anticipated that the Intel 4004 central-processing unit would open the door to a new generation of general-purpose “microcomputers” that were small enough to sit on desktops and cheap enough to be purchased by ordinary households. Faggin and his colleagues rather saw the 4004 as a fairly modest, incremental advancement of the state of the art, which would be deployed strictly to assist bigger computers by serving as the brains of disk controllers and other single-purpose peripherals. Before we rush to judge them too harshly for their lack of vision, we should remember that they are far from the only inventors in history who have failed to grasp the real importance of their creations.

At any rate, it was left to independent tinkerers who had been dreaming of owning a computer of their own for years, and who now saw in the microprocessor the opportunity to do just that, to invent the personal computer as we know it. The January 1975 issue of Popular Electronics sports one of the most famous magazine covers in the history of American technology: it announces the $439 Altair 8800, from a tiny Albuquerque, New Mexico-based company known as MITS. The Altair was nothing less than a complete put-it-together-yourself microcomputer kit, built around the Intel 8080 microprocessor, a successor model to the 4004.

The magazine cover that launched a technological revolution.

The next milestone came in 1977, when three separate companies announced three separate pre-assembled, plug-em-in-and-go personal computers: the Apple II, the Radio Shack TRS-80, and the Commodore PET. In terms of raw computing power, these machines were a joke compared to the latest institutional hardware. Nonetheless, they were real, Turing-complete computers that many people could afford to buy and proceed to tinker with to their heart’s content right in their own homes. They truly were personal computers: their buyers didn’t have to share them with anyone. It is difficult to fully express today just how extraordinary an idea this was in 1977.

This very website’s early years were dedicated to exploring some of the many things such people got up to with their new dream machines, so I won’t belabor the subject here. Suffice to say that those first personal computers were, although of limited practical utility, endlessly fascinating engines of creativity and discovery for those willing and able to engage with them on their own terms. People wrote programs on them, drew pictures and composed music, and of course played games, just as their counterparts on the bigger machines had been doing for quite some time. And then, too, some of them went online.

The first microcomputer modems hit the market the same year as the trinity of 1977. They operated on the same principles as the modems developed for the SAGE project a quarter-century before — albeit even more slowly. Hobbyists could thus begin experimenting with connecting their otherwise discrete microcomputers together, at least for the duration of a phone call.

But some entrepreneurs had grander ambitions. In July of 1979, not one but two subscription-based online services, known as CompuServe and The Source, were announced almost simultaneously. Soon anyone with a computer, a modem, and the requisite disposable income could dial them up to socialize with others, entertain themselves, and access a growing range of useful information.

Again, I’ve written about this subject in some detail before, so I won’t do so at length here. I do want to point out, however, that many of J.C.R. Licklider’s fondest predictions for the computer networks of the future first became a reality on the dozen or so of these commercial online services that managed to attract significant numbers of subscribers over the years. It was here, even more so than on the early Internet proper, that his prognostications about communities based on mutual interest rather than geographical proximity proved their prescience. Online chatting, online dating, online gaming, online travel reservations, and online shopping first took hold here, first became a fact of life for people sitting in their living rooms. People who seldom or never met one another face to face or even heard one another’s voices formed relationships that felt as real and as present in their day-to-day lives as any others — a new phenomenon in the history of social interaction. At their peak circa 1995, the commercial online services had more than 6.5 million subscribers in all.

Yet these services failed to live up to the entirety of Licklider’s old dream of an Intergalactic Computer Network. They were communities, yes, but not quite networks in the sense of the Internet. Each of them lived on a single big mainframe, or at most a cluster of them, in a single data center, which you dialed into using your microcomputer. Once online, you could interact in real time with the hundreds or thousands of others who might have dialed in at the same time, but you couldn’t go outside the walled garden of the service to which you’d chosen to subscribe. That is to say, if you’d chosen to sign up with CompuServe, you couldn’t talk to someone who had chosen The Source. And whereas the Internet was anarchic by design, the commercial online services were steered by the iron hands of the companies who had set them up. Although individual subscribers could and often did contribute content and in some ways set the tone of the services they used, they did so always at the sufferance of their corporate overlords.

Through much of the fifteen years or so that the commercial services reigned supreme, many or most microcomputer owners failed to even realize that an alternative called the Internet existed. Which is not to say that the Internet was without its own form of social life. Its more casual side centered on an online institution known as Usenet, which had arrived on the scene in late 1979, almost simultaneously with the first commercial services.

At bottom, Usenet was (and is) a set of protocols for sharing public messages, just as email served that purpose for private ones. What set it apart from the bustling public forums on services like CompuServe was its determinedly non-centralized nature. Usenet as a whole was a network of many servers, each storing a local copy of its many “newsgroups,” or forums for discussions on particular topics. Users could read and post messages using any of the servers, either by sitting in front of the server’s own keyboard and monitor or, more commonly, through some form of remote connection. When a user posted a new message to a server, that server sent it on to several other servers, which were then expected to send it further, until the message had propagated through the whole network of Usenet servers. The system’s asynchronous nature could distort conversations; messages reached different servers at different times, which meant you could all too easily find yourself replying to a post that had already been retracted, or making a point someone else had already made before you. But on the other hand, Usenet was almost impossible to break completely — just like the Internet itself.
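The flood-style propagation described above can be sketched in a few lines. The server names here are invented for illustration, and real Usenet moved articles over UUCP links or NNTP rather than function calls, but the essential behavior is the same: each server forwards a new message to its peers and ignores copies it has already seen, so the message eventually reaches every reachable server with no central authority involved.

```python
# A hypothetical four-server Usenet: each server's list of peer servers.
PEERS = {
    "alpha": ["beta", "gamma"],
    "beta":  ["alpha", "delta"],
    "gamma": ["alpha", "delta"],
    "delta": ["beta", "gamma"],
}
STORE = {name: [] for name in PEERS}  # each server's local message copy

def post(server, message):
    """Store a message locally, then flood it to peers.
    Duplicates are dropped, so the flood terminates."""
    if message in STORE[server]:
        return
    STORE[server].append(message)
    for peer in PEERS[server]:
        post(peer, message)

post("alpha", "Has anyone gotten a modem working on a PET?")
# Every server now holds exactly one copy of the message.
print(all(len(msgs) == 1 for msgs in STORE.values()))  # True
```

Note that removing any single server from `PEERS` still leaves a path between the remaining ones, which is the small-scale version of the resilience the article describes.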

Strictly speaking, Usenet did not depend on the Internet for its existence. As far as it was concerned, its servers could pass messages among themselves in whatever way they found most convenient. In its first few years, this sometimes meant that they dialed one another up directly over ordinary phone lines and talked via modem. As it matured into a mainstay of hacker culture, however, Usenet gradually became almost inseparable from the Internet itself in the minds of most of its users.

From the three servers that marked its inauguration in 1979, Usenet expanded to 11,000 by 1988. The discussions that took place there didn’t quite encompass the whole of the human experience equally; the demographics of the hacker user base meant that computer programming tended to get more play than knitting, Pink Floyd more play than Madonna, and science-fiction novels more play than romances. Still, the newsgroups were nothing if not energetic and free-wheeling. For better or for worse, they regularly went places the commercial online services didn’t dare allow. For example, Usenet became one of the original bastions of online pornography, first in the form of fevered textual fantasies, then in the somehow even more quaint form of “ASCII art,” and finally, once enough computers had the graphics capabilities to make it worthwhile, as actual digitized photographs. In light of this, some folks expressed relief that it was downright difficult to get access to Usenet and the rest of the Internet if one didn’t teach or attend classes at a university, or work at a tech company or government agency.

The perception of the Internet as a lawless jungle, more exciting but also more dangerous than the neatly trimmed gardens of the commercial online services, was cemented by the Morris Worm, which was featured on the front page of the New York Times for four straight days in November of 1988. Created by a 23-year-old Cornell University graduate student named Robert Tappan Morris, it served as many people’s ironic first notice that a network called the Internet existed at all. The exploit, which its creator later insisted had been meant only as a harmless prank, spread by attaching itself to some of the core networking applications used by Unix, a powerful and flexible operating system that was by far the most popular among Internet-connected computers at the time. The Morris Worm came as close as anything ever has to bringing the entire Internet down when its exponential rate of growth effectively turned it into a network-wide denial-of-service attack — again, accidentally, if its creator is to be believed. (Morris himself came very close to a prison sentence, but escaped with three years of probation, a $10,000 fine, and 400 hours of community service, after which he went on to a lucrative career in the tech sector at the height of the dot-com boom.)

Attitudes toward the Internet in the less rarefied wings of the computing press had barely begun to change even by the beginning of the 1990s. An article from the February 4, 1991, issue of InfoWorld encapsulates the contemporary perceptions among everyday personal-computer owners of this “vast collection of networks” which is “a mystery even to people who call it home.”

It is a highway of ideas, a collective brain for the nation’s scientists, and perhaps the world’s most important computer bulletin board. Connecting all the great research institutions, a large network known collectively as the Internet is where scientists, researchers, and thousands of ordinary computer users get their daily fix of news and gossip.

But it is the same network whose traffic is occasionally dominated by X-rated graphics files, UFO sighting reports, and other “recreational” topics. It is the network where renegade “worm” programs and hackers occasionally make the news.

As with all communities, this electronic village has both high- and low-brow neighborhoods, and residents of one sometimes live in the other.

What most people call the Internet is really a jumble of networks rooted in academic and research institutions. Together these networks connect over 40 countries, providing electronic mail, file transfer, remote login, software archives, and news to users on 2000 networks.

Think of a place where serious science comes from, whether it’s MIT, the national laboratories, a university, or [a] private enterprise, [and] chances are you’ll find an Internet address. Add [together] all the major sites, and you have the seeds of what detractors sometimes call “Anarchy Net.”

Many people find the Internet to be shrouded in a cloud of mystery, perhaps even intrigue.

With addresses composed of what look like contractions surrounded by ‘!’s, ‘@’s, and ‘.’s, even Internet electronic mail seems to be from another world. Never mind that these “bangs,” “at signs,” and “dots” create an addressing system valid worldwide; simply getting an Internet address can be difficult if you don’t know whom to ask. Unlike CompuServe or one of the other email services, there isn’t a single point of contact. There are as many ways to get “on” the Internet as there are nodes.

At the same time, this complexity serves to keep “outsiders” off the network, effectively limiting access to the world’s technological elite.

The author of this article would doubtless have been shocked to learn that within just four or five years this confusing, seemingly willfully off-putting network of scientists and computer nerds would become the hottest buzzword in media, and that absolutely everybody, from your grandmother to your kids’ grade-school teacher, would be rushing to get onto this Internet thing before they were left behind, even as stalwart rocks of the online ecosystem of 1991 like CompuServe would already be well on their way to becoming relics of a bygone age.

The Internet had begun in the United States, and the locus of the early mainstream excitement over it would soon return there. In between, though, the stroke of inventive genius that would lead to said excitement would happen in the Old World confines of Switzerland.


Tim Berners-Lee

In many respects, he looks like an Englishman from central casting — quiet, courteous, reserved. Ask him about his family life and you hit a polite but exceedingly blank wall. Ask him about the Web, however, and he is suddenly transformed into an Italian — words tumble out nineteen to the dozen and he gesticulates like mad. There’s a deep, deep passion here. And why not? It is, after all, his baby.

— John Naughton, writing about Tim Berners-Lee

The seeds of the Conseil Européen pour la Recherche Nucléaire — better known in the Anglosphere as simply CERN — were planted amidst the devastation of post-World War II Europe by the great French quantum physicist Louis de Broglie. Possessing an almost religious faith in pure science as a force for good in the world, he proposed a new, pan-European foundation dedicated to exploring the subatomic realm. “At a time when the talk is of uniting the peoples of Europe,” he said, “[my] attention has turned to the question of developing this new international unit, a laboratory or institution where it would be possible to carry out scientific work above and beyond the framework of the various nations taking part. What each European nation is unable to do alone, a united Europe can do, and, I have no doubt, would do brilliantly.” After years of dedicated lobbying on de Broglie’s part, CERN officially came to be in 1954, with its base of operations in Geneva, Switzerland, one of the places where Europeans have traditionally come together for all manner of purposes.

The general technological trend at CERN over the following decades was the polar opposite of what was happening in computing: as scientists attempted to peer deeper and deeper into the subatomic realm, the machines they required kept getting bigger and bigger. Between 1983 and 1989, CERN built the Large Electron-Positron Collider in Geneva. With a circumference of almost seventeen miles, it was the largest single machine ever built in the history of the world. Managing projects of such magnitude, some of them employing hundreds of scientists and thousands of support staff, required a substantial computing infrastructure, along with many programmers and systems architects to run it. Among this group was a quiet Briton named Tim Berners-Lee.

Berners-Lee’s credentials were perfect for his role. He had earned a bachelor’s degree in physics from Oxford in 1976, only to find that pure science didn’t satisfy his urge to create practical things that real people could make use of. As it happened, both of his parents were computer scientists of considerable note; they had both worked on the University of Manchester’s Mark I computer, one of the world’s very first stored-program von Neumann machines. So, it was natural for their son to follow in their footsteps, to make a career for himself in the burgeoning field of microcomputing. Said career took him to CERN for a six-month contract in 1980, then back to Geneva on a more permanent basis in 1984. Because of his background in physics, Berners-Lee could understand the needs of the scientists he served better than many of his colleagues; his talent for devising workable solutions to their problems turned him into something of a star at CERN. Among other projects, he labored long and hard to devise a way of making the thousands upon thousands of pages of documentation that were generated at CERN each year accessible, manageable, and navigable.

But, for all that Berners-Lee was being paid to create an internal documentation system for CERN, it’s clear that he began thinking along bigger lines fairly quickly. The same problems of navigation and discoverability that dogged his colleagues at CERN were massively present on the Internet as a whole. Information was hidden there in out-of-the-way repositories that could only be accessed using command-line-driven software with obscure command sets — if, that is, you knew that it existed at all.

His idea of a better way came courtesy of hypertext theory: a non-linear approach to reading texts and navigating an information space, built around associative links embedded within and between texts. First proposed by Vannevar Bush, the World War II-era MIT giant whom we briefly met in an earlier article in this series, hypertext theory had later proved a superb fit with a mouse-driven graphical computer interface which had been pioneered at Xerox PARC during the 1970s under the astute management of our old friend Robert Taylor. The PARC approach to user interfaces reached the consumer market in a prominent way for the first time in 1984 as the defining feature of the Apple Macintosh. And the Mac in turn went on to become the early hotbed of hypertext experimentation on consumer-grade personal computers, thanks to Apple’s own HyperCard authoring system and the HyperCard-driven laser discs and CD-ROMs that soon emerged from companies like Voyager.

The user interfaces found in HyperCard applications were surprisingly similar to those found in the web browsers of today, but they were limited to the curated, static content found on a single floppy disk or CD-ROM. “They’ve already done the difficult bit!” Berners-Lee remembers thinking. Now someone just needed to put hypertext on the Internet, to allow files on one computer to link to files on another, with anyone and everyone able to create such links. He saw how “a single hypertext link could lead to an enormous, unbounded world.” Yet no one else seemed to see this. So, he decided at last to do it himself. In a fit of self-deprecating mock-grandiosity, not at all dissimilar to J.C.R. Licklider’s call for an “Intergalactic Computer Network,” he named his proposed system the “World Wide Web.” He had no idea how perfect the name would prove.

He sat down to create his World Wide Web in October of 1990, using a NeXT workstation computer, the flagship product of the company Steve Jobs had formed after getting booted out of Apple several years earlier. It was an expensive machine — far too expensive for the ordinary consumer market — but supremely elegant, combining the power of the hacker-favorite operating system Unix with the graphical user interface of the Macintosh.

The NeXT computer on which Tim Berners-Lee created the foundations of the World Wide Web. It then went on to become the world’s first web server.

Progress was swift. In less than three months, Berners-Lee coded the world’s first web server and browser, which also entailed developing the Hypertext Transfer Protocol (HTTP) they used to communicate with one another and the Hypertext Markup Language (HTML) for embedding associative links into documents. These were the foundational technologies of the Web, which still remain essential to the networked digital world we know today.
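Part of what let Berners-Lee move so quickly is that both inventions are radically simple at their core: HTTP is a plain-text request-and-response exchange, and HTML is ordinary text with tags embedded in it, the `<a href>` tag turning any phrase into an associative link. A purely illustrative sketch in modern Python — nothing like Berners-Lee’s actual NeXT code, and with a hypothetical address standing in for a real one — of how a browser discovers the links in a page:

```python
# An illustrative sketch of what makes HTML hypertext: ordinary text with
# <a href="..."> tags marking the associative links a browser can follow.
from html.parser import HTMLParser

# A minimal page of the sort an early web server might have returned.
# The address is hypothetical, invented here for illustration.
page = """<html><body>
<p>CERN phone directory: see the
<a href="http://info.cern.ch/People.html">people pages</a>.</p>
</body></html>"""

class LinkExtractor(HTMLParser):
    """Collects the target of every <a href> tag it encounters --
    the set of places a browser could take the reader next."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(value for name, value in attrs if name == "href")

extractor = LinkExtractor()
extractor.feed(page)
print(extractor.links)
```

That a working link-follower fits in a couple of dozen lines hints at why, once the idea was loose in the world, browsers and servers proliferated so quickly.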

The first page to go up on the nascent World Wide Web, which belied its name at this point by being available only inside CERN, was a list of phone numbers of the people who worked there. Because clicking through its hypertext links was much easier than entering commands into the database application CERN had previously used for the purpose, it served to get Berners-Lee’s browser installed on dozens of NeXT computers. But the really big step came in August of 1991, when, having debugged and refined his system as thoroughly as he was able by using his CERN colleagues as guinea pigs, he posted to Usenet his web browser, his web server, and documentation on how to use HTML to create web documents. The response was not immediately overwhelming, but it was gratifying in a modest way. Berners-Lee:

People who saw the Web and realised the sense of unbound opportunity began installing the server and posting information. Then they added links to related sites that they found were complementary or simply interesting. The Web began to be picked up by people around the world. The messages from system managers began to stream in: “Hey, I thought you’d be interested. I just put up a Web server.”

Tim Berners-Lee’s original web browser, which he named Nexus in honor of its host platform. The NeXT computer actually had quite impressive graphics capabilities, but you’d never know it by looking at Nexus.

In December of 1991, Berners-Lee begged for and was reluctantly granted a chance to demonstrate the World Wide Web at that year’s official Hypertext conference in San Antonio, Texas. He arrived with high hopes, only to be accorded a cool reception. The hypertext movement came complete with more than its fair share of stodgy theorists with rigid ideas about how hypertext ought to work — ideas which tended to have more to do with the closed, curated experiences of HyperCard than the anarchic open Internet. Normally modest almost to a fault, the Berners-Lee of today does allow himself to savor the fact that “at the same conference two years later, every project on display would have something to do with the Web.”

But the biggest factor holding the Web back at this point wasn’t the resistance of the academics; it was rather its being bound so tightly to the NeXT machines, which had a total user base of no more than a few tens of thousands, almost all of them at universities and research institutions like CERN. Although some browsers had been created for other, more popular computers, they didn’t sport the effortless point-and-click interface of Berners-Lee’s original; instead they presented their links like footnotes, whose numbers the user had to type in to visit them. Thus Berners-Lee and the fellow travelers who were starting to coalesce around him made it their priority in 1992 to encourage the development of more point-and-click web browsers. One for the X Window System, the graphical-interface layer which had been developed for the previously text-only Unix, appeared in April. Even more importantly, a Macintosh browser arrived just a month later; this marked the first time that the World Wide Web could be explored in the way Berners-Lee had envisioned on a computer that the proverbial ordinary person might own and use.

Amidst the organization directories and technical papers which made up most of the early Web — many of the latter inevitably dealing with the vagaries of HTTP and HTML themselves — Berners-Lee remembers one site that stood out for being something else entirely, for being a harbinger of the more expansive, humanist vision he had had for his World Wide Web almost from the start. It was a site about Rome during the Renaissance, built up from a traveling museum exhibition which had recently visited the American Library of Congress. Berners-Lee:

On my first visit, I wandered to a music room. There was an explanation of the events that caused the composer Carpentras to present a decorated manuscript of his Lamentations of Jeremiah to Pope Clement VII. I clicked, and was glad I had a 21-inch colour screen: suddenly it was filled with a beautifully illustrated score, which I could gaze at more easily and in more detail than I could have done had I gone to the original exhibit at the Library of Congress.

If we could visit this site today, however, we would doubtless be struck by how weirdly textual it was for being a celebration of the Renaissance, one of the most excitingly visual ages in all of history. The reality is that it could hardly have been otherwise; the pages displayed by Berners-Lee’s NeXT browser and all of the others could not mix text with images at all. The best they could do was to present links to images, which, when clicked, would lead to a picture being downloaded and displayed in a separate window, as Berners-Lee describes above.

But already another man on the other side of the ocean was working on changing that — working, one might say, on the last pieces necessary to make a World Wide Web that we can immediately recognize today.


Marc Andreessen barefoot on the cover of Time magazine, creating the archetype of the dot-com entrepreneur/visionary/rock star.

Tim Berners-Lee was the last of the old guard of Internet pioneers. Steeped in an ethic of non-profit research for the abstract good of the human race, he never attempted to commercialize his work. Indeed, he has seemed in the decades since his masterstroke almost to willfully shirk the money and fame that some might say are rightfully his for putting the finishing touch on the greatest revolution in communications since the printing press, one which has bound the world together in a way that Samuel Morse and Alexander Graham Bell could never have dreamed of.

Marc Andreessen, by contrast, was the first of a new breed of business entrepreneurs who have dominated our discussions of the Internet from the mid-1990s until the present day. Yes, one can trace the cult of the tech-sector disruptor, “making the world a better place” and “moving fast and breaking things,” back to the dapper young Steve Jobs who introduced the Apple Macintosh to the world in January of 1984. But it was Andreessen and the flood of similar young men that followed him during the 1990s who well and truly embedded the archetype in our culture.

Before any of that, though, he was just a kid who decided to make a web browser of his own.

Andreessen first discovered the Web not long after Berners-Lee first made his tools and protocols publicly available. At the time, he was a twenty-year-old student at the University of Illinois at Urbana-Champaign who held a job on the side at the National Center for Supercomputing Applications, a research institute with close ties to the university. The name sounded very impressive, but he found the job itself to be dull as ditch water. His dissatisfaction came down to the same old split between the “giant brain” model of computing of folks like Marvin Minsky and the more humanist vision espoused in earlier years by people like J.C.R. Licklider. The NCSA was in pursuit of the former, but Andreessen was a firm adherent of the latter.

Bored out of his mind writing menial code for esoteric projects he couldn’t care less about, Andreessen spent a lot of time looking for more interesting things to do on the Internet. And so he stumbled across the fledgling World Wide Web. It didn’t look like much — just a screen full of text — but he immediately grasped its potential.

In fact, he judged, the Web’s not looking like much was a big part of its problem. Casting about for a way to snazz it up, he had the stroke of inspiration that would make him a multi-millionaire within three years. He decided to add a new tag to Berners-Lee’s HTML specification: “<img>,” for “image.” By using it, one would be able to show pictures inline with text. It could make the Web an entirely different sort of place, a wonderland of colorful visuals to go along with its textual content.
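In markup terms, the change was tiny; in effect, enormous. Here is a hypothetical before-and-after — the filenames are invented for illustration, and the fragments are held in Python strings only so they can be compared side by side:

```python
# Hypothetical markup illustrating Andreessen's addition to HTML;
# the filenames are invented for illustration.

# Before Mosaic, a page could only link *to* a picture, which the browser
# downloaded and displayed in a separate window when the link was clicked:
old_way = '<p>The cathedral facade: <a href="facade.gif">view image</a></p>'

# Mosaic's new <img> tag placed the picture inline, in the flow of the text:
new_way = '<p>The cathedral facade: <img src="facade.gif"> as sketched.</p>'

print(old_way)
print(new_way)
```

One new tag, and a page stopped being a document with pictures attached and became a mixed visual composition — which is exactly what made Mosaic spread the way it did.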

As conceptual leaps go, this one really wasn’t that audacious. The biggest buzzword in consumer computing in recent years — bigger than hypertext — had been “multimedia,” a catch-all term describing exactly this sort of digital mixing of content types, something which was now becoming possible thanks to the ever-improving audiovisual capabilities of personal computers since those primitive early days of the trinity of 1977. Hypertext and multimedia had actually been sharing many of the same digs for quite some time. The HyperCard authoring system, for example, boasted capabilities much like those which Andreessen now wished to add to HTML, and the Voyager CD-ROMs already existed as compelling case studies in the potential of interactive multimedia hypertext in a non-networked context.

Still, someone had to be the first to put two and two together, and that someone was Marc Andreessen. An only moderately accomplished programmer himself, he convinced a much better one, another NCSA employee named Eric Bina, to help him create his new browser. The pair fell into roles vaguely reminiscent of those of Steve Jobs and Steve Wozniak during the early days of Apple Computer: Andreessen set the agenda and came up with the big ideas — many of them derived from tireless trawling of the Usenet newsgroups to find out what people didn’t like about the current browsers — and Bina turned his ideas into reality. Andreessen’s relentless focus on the end-user experience led to other important innovations beyond inline images, such as the “forward,” “back,” and “refresh” buttons that remain so ubiquitous in the browsers of today. The higher-ups at NCSA eventually agreed to allow Andreessen to brand his browser as a quasi-official product of their institute; on an Internet still dominated by academics, such an imprimatur was sure to be a useful aid. In January of 1993, the browser known as Mosaic — the name seemed an apt metaphor for the colorful multimedia pages it could display — went up on NCSA’s own servers. After that, “it spread like a virus,” in the words of Andreessen himself.

The slick new browser and its almost aggressively ambitious young inventor soon came to the attention of Tim Berners-Lee. He calls Andreessen “a total contrast to any of the other [browser] developers. Marc was not so much interested in just making the program work as in having his browser used by as many people as possible.” But, lest he sound uncharitable toward his populist counterpart, he hastens to add that “that was, of course, what the Web needed.” Berners-Lee made the Web; the garrulous Andreessen brought it to the masses in a way the self-effacing Briton could arguably never have managed on his own.

About six months after Mosaic hit the Internet, Tim Berners-Lee came to visit its inventor. Their meeting brought with it the first palpable signs of the tension that would surround the World Wide Web and the Internet as a whole almost from that point forward. It was the tension between non-profit idealism and the urge to commercialize, to brand, and finally to control. Even before the meeting, Berners-Lee had begun to feel disturbed by the press coverage Mosaic was receiving, helped along by the public-relations arm of NCSA itself: “The focus was on Mosaic, as if it were the Web. There was little mention of other browsers, or even the rest of the world’s effort to create servers. The media, which didn’t take the time to investigate deeper, started to portray Mosaic as if it were equivalent to the Web.” Now, at the meeting, he was taken aback by an atmosphere that smacked more of a business negotiation than a friendly intellectual exchange, even as he wasn’t sure what exactly was being negotiated. “Marc gave the impression that he thought of this meeting as a poker game,” Berners-Lee remembers.

Andreessen’s recollections of the meeting are less nuanced. Berners-Lee, he claims, “bawled me out for adding images to the thing.” Andreessen:

Academics in computer science are so often out to solve these obscure research problems. The universities may force it upon them, but they aren’t always motivated to just do something that people want to use. And that’s definitely the sense that we always had of CERN. And I don’t want to mis-characterize them, but whenever we dealt with them, they were much more interested in the Web from a research point of view rather than a practical point of view. And so it was no big deal to them to do a NeXT browser, even though nobody would ever use it. The concept of adding an image just for the sake of adding an image didn’t make sense [to them], whereas to us, it made sense because, let’s face it, they made pages look cool.

The first version of Mosaic ran only under the X Window System, but, as the above would indicate, Andreessen had never intended for that to be the case for long. He recruited more programmers to write ports for the Macintosh and, most importantly of all, for Microsoft Windows, whose market share of consumer computing in the United States was crossing the threshold of 90 percent. When the Windows version of Mosaic went online in September of 1993, it motivated hundreds of thousands of computer owners to engage with the Internet for the first time; the Internet to them effectively was Mosaic, just as Berners-Lee had feared would come to pass.

The Mosaic browser. It may not look like much today, but its ability to display inline images was a game-changer.

At this time, Microsoft Windows didn’t even include a TCP/IP stack, the software layer that could make a machine into a full-fledged denizen of the Internet, with its own IP address and all the trimmings. In the brief span of time before Microsoft remedied that situation, a doughty Australian entrepreneur named Peter Tattam provided an add-on TCP/IP stack, which he distributed as shareware. Meanwhile other entrepreneurs scrambled to set up Internet service providers to give the unwashed masses an on-ramp to the Web — no university enrollment required! — and the shelves of computer stores filled up with all-in-one Internet kits that were designed to make the whole process as painless as possible.

The unabashed elitists who had been on the Internet for years scorned the newcomers, but there was nothing they could do to stop the invasion, which stormed their ivory towers with overwhelming force. Between December of 1993 and December of 1994, the total amount of Web traffic jumped by a factor of eight. By the latter date, there were more than 10,000 separate sites on the Web, thanks to people all over the world who had rolled up their sleeves and learned HTML so that they could get their own idiosyncratic messages out to anyone who cared to read them. If some (most?) of the sites they created were thoroughly frivolous, well, that was part of the charm of the thing. The World Wide Web was the greatest leveler in the history of media; it enabled anyone to become an author and a publisher rolled into one, no matter how rich or poor, talented or talent-less. The traditional gatekeepers of mass media have been trying to figure out how to respond ever since.

Marc Andreessen himself abandoned the browser that did so much to make all this happen before it celebrated its first birthday. He graduated from university in December of 1993, and, annoyed by the growing tendency of his bosses at NCSA to take credit for his creation, he decamped for — where else? — Silicon Valley. There he bumped into Jim Clark, a huge name in the Valley, who had founded Silicon Graphics twelve years earlier and turned it into the biggest name in digital special effects for the film industry. Feeling hamstrung by Silicon Graphics’s increasing bureaucracy as it settled into corporate middle age, Clark had recently left the company, leading to much speculation about what he would do next. The answer came on April 4, 1994, when he and Marc Andreessen founded Mosaic Communications in order to build a browser even better than the one the latter had built at NCSA. The dot-com boom had begun.

(Sources: the books A Brief History of the Future: The Origins of the Internet by John Naughton, From Gutenberg to the Internet: A Sourcebook on the History of Information Technology edited by Jeremy M. Norman, A History of Modern Computing (2nd ed.) by Paul E. Ceruzzi, Communication Networks: A Concise Introduction by Jean Walrand and Shyam Parekh, Weaving the Web by Tim Berners-Lee, How the Web was Born by James Gillies and Robert Calliau, and Architects of the Web by Robert H. Reid. InfoWorld of August 24 1987, September 7 1987, April 25 1988, November 28 1988, January 9 1989, October 23 1989, and February 4 1991; Computer Gaming World of May 1993.)

Footnotes

1 When he first stated his law in 1965, Moore actually proposed a doubling every single year, but revised his calculations in 1975.
