
A Looking Glass Half Empty, Part 2: A Series of Unfortunate Events


This article tells part of the story of Looking Glass Studios.

Coming out of 1998, the folks at Looking Glass Studios believed they had pretty good reason to feel optimistic about their future. With Thief, they had delivered not just their first profitable original game since 1995’s Flight Unlimited but their biggest single commercial success ever. They had no fewer than four more games slated for release within the next fifteen months, a positively blistering pace for them. Yes, all of said games were sequels and iterations on existing brands, but that was just the nature of the industry by now, wasn’t it? As long-running franchises like Ultima had first begun to demonstrate fifteen years ago, there was no reason you couldn’t continue to innovate under a well-known and -loved banner. Looking Glass closed their Austin office that had done so much to pay the bills in the past by taking on porting contracts. In the wake of Thief, they felt ready to concentrate entirely on their own games.

Then, just as they thought they had finally found their footing, the ground started to shift beneath Looking Glass once again. Less than a year and a half after the high point of Thief’s strong reviews and almost equally strong sales, Paul Neurath would be forced to shutter his studio forever.

We can date the beginning of the cascading series of difficulties that ultimately undid Looking Glass to March of 1999, when their current corporate parent decided to divest from games, which in turn meant divesting from them. Intermetrics had been on a roller-coaster ride of its own since being purchased by Michael Alexander in 1995. In 1998, the former television executive belatedly recognized the truth of what Mike Dornbrook had tried to tell him some time ago: that his dreams and schemes for turning Intermetrics into a games or multimedia studio made no sense whatsoever. He deigned to allow the company to return to its core competencies — indeed, to double down on them. Late in the year, Intermetrics merged with Pacer InfoTec, another perennial recipient of government and military contracts. The new entity took the name of AverStar. When one looked through its collection of active endeavors — making an “Enterprise Information Portal” for the Army Chief of Staff; developing drainage-modeling software for the U.S. Geological Survey; providing “testing and quality-support services” for the Administrative Office of the U.S. Courts; writing and maintaining software for the Space Shuttle and other NASA vehicles — the games of Looking Glass stood out as decidedly unlike the others. Michael Alexander and his reconstituted team of managers, most of them grizzled veterans of the Beltway military-industrial complex, saw no point in continuing to dabble in games. In the words of Looking Glass programmer Marc LeBlanc, “AverStar threw us back into the sea.”

Just as was the case with Intermetrics’s acquisition of Looking Glass barely a year and a half earlier, the precise terms under which Alexander threw his once-prized catch back have never surfaced to my knowledge. It’s clear enough, however, that Looking Glass’s immediate financial position at this juncture was not quite so dire as it had been, thanks to the success of Thief if nothing else. Still, none of the systemic problems of being a small fish in the big pond of the games industry had been solved. Their recent success notwithstanding, without a deeper-pocketed parent or partner to negotiate for them, Looking Glass was destined to have a harder time getting their games into stores and selling them on their own terms.

The next unfortunate event — unfortunate for Looking Glass, but deeply tragic for some others — came about a month later. On April 20, 1999, two deeply troubled, DOOM-loving teenagers walked into their high school in the town of Columbine, Colorado, carrying multiple firearms each, and proceeded to kill thirteen of their fellow students and teachers and wound or terrorize hundreds more before turning their guns on themselves. This act of mass murder, occurring as it did before the American public had been somewhat desensitized to such massacres by the sheer numbing power of repetition, placed the subject of violence in videogames under the mass-media spotlight in a way it hadn’t been since Joseph Lieberman’s Senate hearings of 1993. Now Lieberman, a politician with mounting presidential ambitions, was back to point the finger more accusingly than ever.

This is not the place to attempt to address the fraught subject of what actual links there might be between violence in games and violence in the real world, links which hundreds of sociological and psychological studies have never managed to conclusively prove or disprove. Suffice to say that attributing direct causality to any human behavior outside the controlled setting of a laboratory is really, really hard, even before one factors in the distortions that can arise from motivated reasoning when the subject being studied is as charged as this one. Setting all of that aside, however, this was not a form of attention to which your average gaming executive of 1999 had any wish to expose himself. First-person action games that looked even vaguely like DOOM — such as most of the games of Looking Glass — were cancelled, delayed, or de-prioritized in an effort to avoid seeming completely insensitive to tragedy. De-prioritization rather than something worse was the fate of Looking Glass’s System Shock 2, but that would prove plenty bad enough for a studio with little margin for error.

The story of System Shock 2’s creation is yet another of those “only at Looking Glass” tales. In 1994, a 27-year-old Boston-area computer consultant named Ken Levine played System Shock 1 and was bowled over by the experience. A year or so later, he saw a want ad in a magazine from the maker of his favorite game. He applied and was hired. He contributed a great deal to Thief during that project’s formative period of groping in the dark — he is credited in the finished game for “initial design and story concepts” — and then was given a plum role indeed. Looking Glass had just won a contract to make an adventure game based on the popular new television series Star Trek: Voyager, and Levine was placed in charge of it.

Alas, that project fell apart within a year or so, when Viacom, the media conglomerate that owned the property, took note of the lackluster commercial performance of another recent Star Trek adventure game — and of recent adventure games in general — and pulled the plug. Understandably enough, Levine was devastated at having thus wasted a year of his life. Somewhat less understandably, he blamed the management of Looking Glass as much as Viacom for the fiasco. He left to start his own studio, taking with him two other Looking Glass employees, by the names of Jon Chey and Rob Fermier.

This is where the story gets weird, in an oh, so Looking Glass sort of way. Once they were out on their own, trading under the name of Irrational Games, the trio found that contracts and capital were not as easy to come by as they had believed they would be. At his wit’s end, facing the prospect of a return to his former life as an ordinary computer consultant, Levine came crawling back to his old boss Paul Neurath. But rather than ask for his old job back, he asked that Irrational be allowed to make a game in partnership with Looking Glass, using the same Dark Engine that was to power Thief. Most bosses would have laughed in the face of someone who had poached two of their people in a bid to show them up and show them how it was done, only to get his comeuppance in such deserving fashion. But not Neurath. He agreed to help Levine and his friends make a game in the spirit of System Shock, Levine’s whole reason for joining the industry in the first place. In fact, he even let them move back into Looking Glass’s offices for a while in order to do it. Neurath soon succeeded in capturing the interest of Electronic Arts, the corporate parent of Origin Systems and thus the owner of the System Shock brand. Just like that, Levine’s homage became a direct sequel, an officially anointed System Shock 2.

The ironic capstone to this tale is that Warren Spector had recently left Looking Glass because he had been unable to secure permission to do exactly what the unproven and questionably loyal young Ken Levine was now going to get to do: to make a spiritual heir to System Shock. Spector ended up at Ion Storm, a new studio founded by John Romero of DOOM fame, where he set to work on what would become Deus Ex.

In the course of making System Shock 2, the Irrational staff grew to about fifteen people, who did eventually move into their own office. Nonetheless, the line separating their contributions from those of Looking Glass proper remained murky at best. As a postmortem written by Jon Chey would later put it, “the project was a collaborative effort between two companies based on a contract that only loosely defined the responsibilities of each organization.” It’s for this reason that I’ll be talking about System Shock 2 from here on like I might any other Looking Glass game.

The sequel isn’t shy about embracing its heritage. Once again, it casts you into an outer-space complex gone badly, horrifyingly haywire; this time you find yourself in humanity’s first faster-than-light starship instead of a mere space station. Once again, the game begins with you waking up disoriented, not knowing how you got here, forced to rely on narrations of the backstory that may or may not be reliable. Once again, your first and most obvious antagonists are the zombified corpses of the people who used to crew the ship. Once again, you slowly learn what really went down here through the emails and logbooks you stumble across. Once again, you have a variety of cybernetic hardware to help you stay alive, presented via a relentlessly diegetic interface. Once again, you meet SHODAN, the disembodied, deliciously evil artificial intelligence who was arguably the most memorable single aspect of the very memorable first game. And once again, she is brought to iconic life by the voice of Terri Brosius. In these ways and countless others, this apple doesn’t fall far from the tree.

But even as it embraces its heritage in the broad strokes, System Shock 2 isn’t averse to tinkering with the formula, through both subtraction and addition. The most significant edit is the elimination of a separate, embodied cyberspace, which was already beginning to feel dated in 1994, having been parachuted in straight out of William Gibson’s 1984-vintage Neuromancer. Cyberspace has its charms in System Shock 1, but few would deny that it’s the roughest part of the game in terms of implementation; it was probably a wise choice for Ken Levine and company to focus their efforts elsewhere. More debatable are their decisions to simplify the hacking mini-games that you sometimes need to play to open locked doors and the like, and to eliminate the unique multi-variant difficulty settings of the first game, which let you turn it into whatever kind of experience you desire, from a walking simulator to an exercise in non-stop carnage to a cerebral pseudo-adventure game. System Shock 2 settles for letting you choose a single setting of “Easy,” “Normal,” “Hard,” or “Impossible,” like any standard-issue shooter of the era.

In fact, at first glance this game looks very much like a standard shooter. If you try to play it as one, however, you’ll be quickly disabused of that notion when you die… and die and die and die. This isn’t a stealth game to the same extent as Thief, but it does demand that you proceed with caution, looking for ways to outwit enemies whom you can’t overcome through firepower. If you can’t see your way to noticing and disabling the security cameras that lurk in many a corner, for example, you’re going to find yourself overwhelmed, no matter how fast and accurate a trigger finger you happen to possess.

By way of a partial replacement for the multi-variant difficulty settings of its predecessor, Irrational chose to graft onto System Shock 2 more CRPG elements. Theoretically at least, these give you almost as much control over what kind of game you end up playing. You can go for a combat-oriented build if you want more of a shooter experience — within reason, that is! — or you can become a hardcore tech-head or even a sort of Jedi who makes use of “psi” powers. Or you can judiciously mix and match your abilities, as most players doubtless wind up doing. After choosing an initial slate of skills at the outset, you are given the opportunity to learn more — or to improve the ones you already have — at certain milestones in the plot.

You create your character in System Shock 2 in a similar way to the old Traveller tabletop RPG, by sending him off on three tours of duty with different service branches — or the same one, if you prefer. (I fancy I can see some traces of the Star Trek: Voyager game which Ken Levine once set out to make in the vibe and the iconography here.) This is an example of how System Shock 2 can sometimes feel like it has a few too many ideas for its own good. It seems like an awful lot of effort to go through to establish a character who is about to get his memories erased anyway.

System Shock 2 is an almost universally acclaimed game today, perhaps even more so than its uglier low-res predecessor. There are good reasons for this. The atmosphere of dread builds and builds as you explore the starship, thanks not least to masterful environmental sound design; if anything, this game is more memorable for its soundscape than for its visuals. Although its emergent qualities are certainly nothing to sneeze at, in my opinion the peak moment of the game is actually pre-scripted. A jaw-dropping plot twist arrives about halfway through, one of the most shocking I’ve ever encountered in a game. I hesitate to say much more here, but will just reveal that nothing and no one turn out to be what you thought they were, and that SHODAN is involved. Because of course she is…

For all its increased resolution and equal mastery of atmosphere, however, System Shock 2 doesn’t strike me as quite so fully realized as the first System Shock. It also suffers by comparison with Warren Spector’s own System Shock successor Deus Ex, which was released about nine months later. System Shock 2 never seems entirely sure how to balance its CRPG elements, which are dependent on character skill, with its action elements, which are dependent on player skill. Increasing your character’s skill in gunnery, for example, somehow makes your guns do more damage when you shoot someone with them; this is not exactly intuitive or realistic. Deus Ex just does so much of this sort of thing so much better. In that game, a higher skill level lets your character hold the gun steadier when you’re trying to shoot with it; this makes a lot more sense.

Unusually for Looking Glass, who seldom released a game before its time, System Shock 2 shows all the signs of having been yanked out of its creators’ hands a few months too early. The level design declines dramatically during the final third of the game, becoming downright sketchy by the time you get to the underwhelming finale. The overall balance of the gameplay systems is somewhat out of whack as well. It’s really, really hard to gain traction as a psi-focused character in particular, and dismayingly easy to end up with a character that isn’t tenable by choosing the wrong skills early on. I found a lot of the design choices in System Shock 2 to be tedious and annoying, such that I wished for a way to just turn them off: the scarcity of ammunition (another way to find yourself in an unwinnable cul de sac), the way that weapons degrade at an absurd pace and constantly need to be repaired, the endlessly respawning enemies that make hard-won firefights feel kind of pointless, the decision to arbitrarily deprive you of your trusty auto-map just at the point when you need it most.

Granted, some of this was also in System Shock 1, but it irritated me much more here. In the end, the two games provide very similar subjective experiences. Perchance this was just a ride I was only interested in going on once; perchance I would have a very different reaction to System Shock 2 if I had met it before its older sibling. Or maybe I’m just getting more protective of my time as I get older and have less and less of it left. (Ach… hold that morbid thought!)

Whatever its ratio of strengths to weaknesses, System Shock 2 didn’t do very well at all upon its release in August of 1999. Many folks from both Looking Glass and Irrational attribute this disappointment entirely to the tragic occurrence of four months earlier in Columbine, Colorado. Although the full picture is surely more nuanced — it always is, isn’t it? — we have no reason to doubt that the fallout from the massacre was a major factor in the game’s commercial failure. According to Paul Neurath, Electronic Arts pondered for a while whether it was wise to put System Shock 2 out at all. He remembers EA’s CEO Larry Probst telling him that “we may just want to walk away from doing shooters because there’s talk of these shooters causing these kinds of events.” “We convinced them to release the game,” says Neurath, “but they did almost zero marketing and they put it in the bargain discount $9.95 bin 45 days after the game launched. It never stood a chance to make any money. That really hurt us financially.”

If System Shock 2 was to some extent a victim of circumstances, Looking Glass’s next game was a more foreseeable failure. For some reason, they just couldn’t stop beating the dead horse of flight simulation, even though it had long since become clear that this wasn’t what their primary audience wanted from them at all. Flight Unlimited III wasn’t a bad flight simulator, but the changes it introduced to the formula were nowhere near as dramatic as those that marked Flight Unlimited II. The most notable new development was a shift from the San Francisco Bay to Washington State, a much larger geographical area depicted in even greater detail. (Owners of the second game were given the privilege of loading their old scenery into the new engine as well.) Innovation or the lack thereof aside, the same old problem remained, in the form of Microsoft’s 800-pound-gorilla of a flight-simulation franchise, which was ready with its own “2000” update at the same time. Published by Electronic Arts in late 1999, Flight Unlimited III stiffed even more abjectly than had System Shock 2.

On the left, we see Seattle-Tacoma International Airport as depicted in Microsoft’s Flight Simulator 2000. On the right, we see the same airport in Flight Unlimited III. The former modeled the whole world, including more than 20,000 airports; the latter tried to compete by modeling a comparatively small area better. Regardless of the intrinsic merits of the two approaches, Looking Glass’s did not prove a formula for marketplace success.

A comparatively bright spot that holiday season was Thief Gold, which added three new missions to the original’s twelve and tweaked and polished the existing ones. It did decently well as a mid-tier product with a street price of about $25, plus a $10 rebate for owners of the previous version of Thief and the promise of a $10 discount off the upcoming Thief II. But a product like this was never going to offset Looking Glass’s two big failures of 1999.

In truth, the Looking Glass goose was probably already more or less cooked as Y2K began. The only thing that might have saved them was Thief II: The Metal Age turning into a massive hit right out of the gate. Sadly, there was little likelihood of that happening; the best that Looking Glass could realistically hope for was another solid half-million seller. There was already a sense in the studio as the final touches were being put on Thief II that, barring a miracle, this game was likely to be their swansong.

As swansongs go, Thief II acquits itself pretty darn well. It comes off as far more self-assured than its predecessor, being focused almost exclusively on stealth rather than monster-slaying through its fifteen cunningly crafted levels. Some of these spaces — a huge central bank, a sprawling warehouse complex, a rich art collector’s country estate — are intricate and beautiful enough that you almost wish there was an option to just wander around and admire them, without having to worry about guards and traps and all the rest. There’s a greater willingness here to use gameplay to advance the larger story: plot twists sometimes arrive in the midst of a mission, and you can often learn more about what’s really going on, if you’re interested, by listening carefully to the conversations that drift around the outskirts of the darkness in which you cloak yourself. Indeed, Thief II is positively bursting with little Easter eggs for the observant. Some of them are even funny, such as a sad-sack pair of guards who have by now been victimized by Garrett several times in other places, who complain to one another, Laurel and Hardy style, about their lot in life of constantly being outsmarted.

The subtitle pays tribute to the fact that the milieu of Thief has now taken on a distinct steampunk edge, with clanking iron robots and gun turrets for Garrett to contend with in addition to the ever-present human guards. Garrett now has a mechanical eye which he can use to zoom in on things, or even to receive the visual signal from a “scouting orb” that he’s tossed out into an exposed space to get a better picture of his surroundings. I must confess that I’m somewhat of two minds about this stuff: it’s certainly more interesting than zombies, but I do still kind of long for the purist neo-Renaissance milieu I thought I was getting when I played the first level of Thief I.

The “faces” on the robots look a bit like SHODAN, don’t they? Some of the code governing their behavior was also lifted directly from that game. But unlike your mechanical enemies in System Shock 2, these robots have steam boilers on their posteriors which you can douse with water arrows to disable them.

Beyond this highly debatable point, though, there’s very little to complain about here, unless it be that Thief II, for all its manifest strengths, doesn’t quite manage to stand on its own. Oddly in light of what a make-or-break title this was for them, Looking Glass seems not to have given much thought to easing new players into this very different way of approaching a first-person action game; they didn’t even bother to rehash the rudimentary tutorial that kicks off Thief I. As a result, and as a number of otherwise positively disposed contemporary reviewers noted, Thief II has more the flavor of an expansion pack — a really, really well-done one, mind you — than a full-fledged sequel. It probably isn’t the best place to start, but anyone who enjoyed the first game will definitely enjoy this one.

Looking Glass’s problem, of course, was that none of what I’ve just written sounds like a ticket to id- or Blizzard-level success, which was what they needed by this point to save the company. As Computer Gaming World wrote in its review, Thief II “is a ‘boutique’ game: a gamer’s game. It pays its dividends in persistent tension rather than in bursts of fear. It still pumps as much adrenaline, but it works on a subtler level. It’s the difference between Strangers on a Train and Armageddon, between the intimated and the explicit.”

Having thus delivered another cult classic rather than a blockbuster, Looking Glass’s fate was sealed. By March of 2000, when Eidos published Thief II, Paul Neurath had been trying to sell the studio for a second time for the better part of a year. Sony was seriously interested for a while, until a management shakeup there killed the deal. Then Eidos was on the verge of pulling the trigger, only to have its bankers refuse to loan the necessary funds after a rather disappointing year for the company, in which the Tomb Raider train seemed to finally be running out of steam and John Romero’s would-be magnum opus Daikatana, which Eidos was funding and publishing for Ion Storm, ran way over time and budget. Not wanting to risk depriving his employees of their last paychecks, Neurath decided to shut the studio down with dignity. On May 24, 2000, he called everyone together to thank them for their efforts and to tell them that Thief II had been Looking Glass’s last game. “We’re closing,” he said. What else was there to say?

Plenty, as it turned out. The news of the shuttering prompted paroxysms of grief throughout gaming’s burgeoning online ecosystem, frequently accompanied by a full measure of self-loathing. Looking Glass had been just too smart for a public that wasn’t worthy of them, so the story went. Many a gamer who had always meant to pick up this or that subtly subversive Looking Glass masterstroke, but had kept delaying in favor of easier, more straightforward fare, blamed himself for being a part of the problem. But no amount of hand-wringing or self-flagellation could change the fact that Looking Glass was no more. The most it could do was to turn having worked for the studio into a badge of honor and one hell of a line item on anyone’s CV, as a Looking Glass diaspora spread out across the industry to influence its future.

To wit: the tearful tributes were still pouring in when Ion Storm’s Warren Spector-led Deus Ex reached store shelves in June of 2000. Cruel irony of ironies: Deus Ex became a hit on a scale that Thief, Looking Glass’s biggest game ever, could scarcely have dreamed of approaching. Right to the end, Looking Glass was always the bridesmaid, never the bride.


Looking Glass was a cool group, and a lot of us put a lot of time and energy and a large part of our lives into it, and it’s sad when that doesn’t work out. So there’s some part of me that says, oh, that sucks, that’s not fair, but it’s the real world and it had a pretty good run.

— Doug Church

Without consciously intending to, I’ve found myself writing quite a lot of obituaries of gaming icons recently: TSR, Sierra On-Line, MicroProse, Bullfrog, the adventure-making arm of Legend Entertainment. Call it a sign of the millennial times, a period of constant, churning acquisition and consolidation in which it began to seem that just half a dozen or so many-tendriled conglomerates were destined to divide the entirety of digital gaming among themselves. Now, we can add Looking Glass to our list of victims of this dubious idea of progress.

A lot of hyperbole has been thrown around about Looking Glass over the past quarter-century. A goodly portion of it is amply justified. That said, I do think there is some room for additional nuance. (There always is, isn’t there?) At the risk of coming off like the soulless curmudgeon in the room, I’m not going to write about Looking Glass here as if they were a bunch of tortured artists starving in a garret somewhere. Instead I’m going to put on my pragmatist’s hat and go off in search of some more concrete reasons why these remarkable games didn’t resonate as much as they may have deserved to back in the day.

It shouldn’t be overlooked that Paul Neurath and Ned Lerner made some fairly baffling business decisions over the years. Their disastrous choice to try to make a go of it as an independent publisher against gale-force headwinds in 1995 can be all too easily seen as the precipitating event that sent Looking Glass down the road to closure five years later. Then, too, Neurath’s later insistence on persisting with the Flight Unlimited series must stand high on the list of mistakes. Incredibly, at the time Looking Glass was shut down, they were still at the flight-simulation thing, having spent a reported $3 million already on a fourth one, which was finally to add guns and enemy aircraft to the mix; this was half a million more than they had spent to make Thief II, a game with a far more secure customer base.[1]

Then again, this isn’t a Harvard Business School case study. What final words are there to say about the games themselves, the real legacy of this company that failed rather spectacularly at its business-school ambition of making a lot of — okay, any — money? How should we understand them in their full historical context?

As you probably know, historical context is kind of my jam. Writing for this site is for me a form of time travel. I don’t play modern games for lack of hours in the day, and I’ve long since settled into a more or less one-to-one correspondence between present time and historical time; that’s to say, it takes me about one year’s worth of articles on this site to fully cover one year of gaming history and matters adjacent. We’ve by now moved out of the era when I was playing a lot of games in my previous life, so most of what I encounter is new to me. I think this puts me in a privileged position. I can come pretty close to experiencing and appreciating games — and the evolution of the medium as a whole — as a contemporary player might have done. When I read in the year 2025 that Looking Glass was poorly rewarded for their uncompromising spirit of innovation, I can understand and even to a large extent agree. And yet, in my role as a time traveler, I can also kind of understand why a lot of gamers ended up voting with their wallets for something else.

The decade after Looking Glass’s demise saw the rise of what gaming scholar Jesper Juul has dubbed the Casual Revolution; this was the heyday of Bejeweled, Zuma, Diner Dash, and the Big Fish portal, which brought gaming to whole new, previously untapped demographics who dwarfed the hardcore old guard in numbers. In 2010, when this revolution was at its peak, Juul put forth five characteristics that define casual gaming: “emotionally positive fictions”; “little presupposed knowledge” on the player’s part; a tolerance for being played in “short bursts”; “lenient punishments for failing”; and “positive feedback for every successful action the player performs.” The games of Looking Glass are the polar opposite of this list. At times, they seem almost defiantly so; witness the lack of an “easy” setting in Thief, as if to emphasize that anyone who might wish for such a thing is not welcome here. Looking Glass’s games are the ultimate “gamer’s games,” as Computer Gaming World put it, unabashedly demanding a serious commitment of time, focus, energy, and effort from their players. But daily life demands plenty of those things from most of us already, doesn’t it? In this light, it doesn’t really surprise me that a lot of people decided to just go play something more welcoming and less demanding. This didn’t make them ingrates; it just made them people who weren’t quite sure that there was enough space in their life to work that hard for their entertainment. I sympathize because I often felt the same in the course of my time-traveling; when I saw a new Looking Glass game on the syllabus, it was always a little bit harder than it ought to have been for me to muster the motivation to take the plunge. And this is part of what I do for a living!

Now, there’s certainly nothing wrong with gamer’s games. But they are by definition niche pursuits. The tragedy of Looking Glass (if I can presume to frame it in those terms in an article which has previously mentioned the real tragedy that took place at Columbine High School) is that they were making niche games at a time when the economics of the industry were militating against the long tail, pushing everyone toward a handful of tried-and-true mainstream gameplay formulas. After the millennium, the rise of digital distribution would give studios the luxury of being loudly and proudly niche, if that was where their hearts were. (Ironically, this happened at the same instant that ultra-mainstream casual gaming took off, and was enabled by the same transformative technology of broadband in the home.) But digital distribution of games as asset-heavy as those of Looking Glass was a non-starter throughout the 1990s. C’est la vie.

This situation being what it was, I do feel that Looking Glass could have made a bit more of an effort to be accessible, to provide those real or metaphorical easy modes, if only in the hope and expectation that their customers would eventually want to lose the training wheels and play the games as they were meant to be played. On-ramping is a vital part of the game designer’s craft, one at which Looking Glass, for all their strengths in other areas, wasn’t all that accomplished.

Another thing that Looking Glass was not at all good at, or seemingly even all that interested in, was multiplayer, which became a bigger and bigger part of gaming culture as the 1990s wore on. (They did add a co-operative multiplayer mode to System Shock 2 via a patch, but it always felt like the afterthought it was.) This was a problem in itself. Just to compound it, Looking Glass’s games were in some ways the most single-player games of them all. “Immersion” was their watchword: they played best in a darkened room with headphones on, almost requiring of their players that they deliberately isolate themselves from the real world and its inhabitants. Again, this is a perfectly valid design choice, but it’s an inherently niche one.

Speaking only for myself now, I think this is another reason that the games of Looking Glass proved a struggle for me at times. At this point in my life at least, I’m just not that excited about isolating myself inside hermetically sealed digital spaces. If I want total immersion, I take a walk and immerse myself in nature. Games I prefer to play on the sofa next to my wife. My favorite Looking Glass game, for what it’s worth, is System Shock 1, which I played at an earlier time in my life when immersion was perhaps more of a draw than it is today. Historical context is one thing, personal context another: it’s damnably difficult to separate our judgments of games from the circumstances in which we played them.

Of course, this is one of the reasons that I always encourage you not to take my judgments as the final word on anything, to check out the games I write about for yourself if they sound remotely interesting. It’s actually not that hard to get a handle on Looking Glass’s legacy for yourself. Considering the aura of near-divinity that cloaks the studio today, the canon of widely remembered Looking Glass classics is surprisingly small. They seem to have had a thing for duologies: their place in history boils down to the two Ultima Underworld games, the two System Shock games, and the two Thief games. The rest of their output has been pretty much forgotten, with the partial exception of Terra Nova on the part of the really dedicated.

Still, three bold and groundbreaking concepts that each found ways to advance the medium on multiple fronts is more than enough of a legacy for any studio, isn’t it? So, let us wave a fondly respectful farewell to Looking Glass, satisfied as we do so that we will be meeting many of their innovations and approaches, sometimes presented in more accessible packages, again and again as we continue to travel through time.





Sources: The books Game Design Theory & Practice (2nd ed.) by Richard Rouse III, A Casual Revolution: Reinventing Video Games and Their Players by Jesper Juul, and the Prima strategy guide to Thief II by Howard A. Jones; Computer Gaming World of January 1999, November 1999, January 2000, February 2000, and June 2000; Retro Gamer 60, 177, and 260; Game Developer of November 1999; Boston Globe of May 26, 2000; Boston Magazine of December 2013.

Online sources include “Ahead of Its Time: A History of Looking Glass” and “Without Looking Glass, There was No Irrational Games” by Mike Mahardy at Polygon, James Sterrett’s “Reasons for the Fall: A Post-Mortem on Looking Glass Studios,” a GameSpy featurette by John “Warrior” Keefer, Christian Nutt’s interview with Ken Levine on the old Gamasutra site, and AverStar’s millennial-era corporate site.

My special thanks to Ethan Johnson, a fellow gaming historian who knows a lot more about Looking Glass than I do, and set me straight on some important points after the first edition of this article was published.

Where to Get Them: System Shock 2: 25th Anniversary Remaster (which includes the original version of the game as a bonus) and Thief II: The Metal Age are available as digital purchases on GOG.com.

Footnotes

1. After the closure, some Looking Glass staffers migrated to the nearby Mad Doc Software, where they incorporated much of their flight-simulation code into Jane’s Combat Simulations: Attack Squadron. Released in 2002, it was not positively reviewed.


A Looking Glass Half Empty, Part 1: Just Lookin’ for a Hit


This article tells part of the story of Looking Glass Studios.

There was some discussion about it: “Wow, gosh, it’d sure be nice if we were making more money and selling more copies so we could do crazy games of the type we want, as opposed to having to worry about how we’re going to sell more.” Hey, I’d love it if the public was more into what I like to do and a little less into slightly more straightforward things. But I totally get that they’re into straightforward things. I don’t have any divine right to have someone hand me millions of dollars to make a game of whatever I want to do. At some fundamental level, everyone has a wallet, and they vote with it.

— Doug Church, Looking Glass Studios

Late in 1994, after their rather brilliant game System Shock had debuted to a reception most kindly described as constrained, the Cambridge, Massachusetts-based studio Looking Glass Technologies sent their star producer Warren Spector down to Austin, Texas. There he was to visit the offices of Looking Glass’s publisher Origin Systems, whose lack of promotional enthusiasm they largely blamed for their latest game’s lukewarm commercial performance. Until recently, Spector had been directly employed by Origin. The thinking, then, was that he might still be able to pull some strings in Austin to move the games of Looking Glass a little higher up in the priority rankings. The upshot of his visit was not encouraging. “What do I have to do to get a hit around here?” Spector remembers pleading to his old colleagues. The answer was “very quiet, very calm: ‘Sign Mark Hamill to star in your game.‘ That was the thinking at the time.” But interactive movies were not at all what Looking Glass wanted to be doing, nor where they felt the long-term future of the games industry lay.

So, founders Paul Neurath and Ned Lerner decided to make some major changes in their business model in the hope of raising their studio’s profile. They accepted $3.8 million in venture capital and cut ties with Origin, announcing that henceforward Looking Glass would publish as well as create their games for themselves. Jerry Wolosenko, a new executive vice president whom they hired to help steer the company into its future of abundance, told The Boston Globe in May of 1995 that “we expect to do six original titles per year. We are just beginning.” This was an ambitious goal indeed for a studio that, in its five and a half years of existence to date, had managed to turn out just three original games alongside a handful of porting jobs.

Even more ambitious, if not brazen, was the product that Looking Glass thought would provide them with their entrée into the ranks of the big-time publishers. They intended to mount a head-on challenge to that noted tech monopolist Microsoft, whose venerable, archetypally entitled Flight Simulator was the last word — in fact, very nearly the only word — in civilian flight simulation. David-versus-Goliath contests in the business of media didn’t come much more pronounced than this one, but Looking Glass thought they had a strategy that might allow them to break at least this particular Microsoft monopoly.

Flight Unlimited was the brainchild of a high-energy physicist, glider pilot, and amateur jazz pianist named Seamus Blackley, who had arrived at Looking Glass by way of the legendary Fermi Laboratory. His guiding principle was that Microsoft’s Flight Simulator as it had evolved over the last decade and a half had become less a simulation of flight itself than a simulation of the humdrum routine of civil aviation — of takeoff permissions and holding patterns, of navigational transponders and instrument landing systems. He wanted to return the focus to the simple joy of soaring through the air in a flying machine, something that, for all the technological progress that had been made since the Wright brothers took off from Kitty Hawk, could still seem closer to magic than science. The emphasis would be on free-form aerobatics rather than getting from Airport A to Airport B. “I want people to see that flying is beautiful, exciting, and see the thrill you can get from six degrees of freedom when you control an airplane,” Blackley said. “That’s why we’ve focused on the experience of flying. There is no fuel gauge.”

The result really was oddly beautiful, being arguably as close to interactive art as a product that bills itself as a vehicular simulation can possibly get. Its only real concession to structure took the form of a 33-lesson flying course, which brought you from just being able to hold the airplane straight and level to executing gravity-denying Immelmann rolls, Cuban eights, hammerheads, and inverted spins. Any time that your coursework became too intense, you always had the option to just bin the lesson plans and, you know, go out and fly, maybe to try some improvisational skywriting.

In one sense, Flight Unlimited was a dramatic departure from the two Ultima Underworld games and System Shock, all of which were embodied first-person, narrative-oriented designs that relied on 3D graphics of a very different stripe. In another sense, though, it was business as usual, another example of Looking Glass not only pushing boundaries of technology in a purist sense — the flight model of Flight Unlimited really was second to none — but using it in the service of a game that was equally aesthetically innovative, and just a little bit more thoughtful all the way around than was the norm.

Upon its release in May of 1995, Flight Unlimited garnered a rare five-stars-out-of-five review from Computer Gaming World magazine:

It’s just you, the sky, and a plane that does just about anything you ask it to. Anything aerobatic, that is. Flight Unlimited is missing many of the staple elements of flight simulations. There are no missiles, guns, or enemy aircraft. You can’t learn IFR navigation or practice for your cross-country solo. You can’t even land at a different airport than the one you took off from. But unless you’re just never happy without something to shoot at, you won’t care. You’ll be too busy choreographing aerial ballets, pulling off death-defying aerobatic stunts, or just enjoying a quiet soar down the ridge line to miss that stuff.

Flight Unlimited sold far better than System Shock: a third of a million copies, more even than Looking Glass’s previous best-seller Ultima Underworld, enough to put itself solidly in the black and justify a sequel. Still, it seems safe to say that it didn’t cause any sleepless nights for anyone at Microsoft. Over the years, Flight Simulator had become less a game than a whole cottage industry unto itself, filled with armchair pilots who often weren’t quite gamers in the conventional sense, who often played nothing else. It wasn’t all that easy to make inroads with a crowd such as that. Like a lot of Looking Glass’s games, Flight Unlimited was a fundamentally niche product to which was attached the burden of mainstream sales expectations.

That said, the fact remained that Flight Unlimited had made money for Looking Glass, which allowed them to continue to live the dream for a while longer. Neurath and Lerner sent a homesick Warren Spector back down to Austin to open a second branch there, to take advantage of an abundance of talent surrounding the University of Texas that the Wing Commander-addled Origin Systems was believed to be neglecting.

Then Looking Glass hit a wall. Its name was Terra Nova.

Terra Nova: Strike Force Centauri had had the most protracted development cycle of any Looking Glass game, dating almost all the way back to the very beginning of the company and passing through dozens of hands before it finally came to fruition in the spring of 1996. At its heart, it was an ultra-tactical first-person shooter vaguely inspired by the old Robert Heinlein novel Starship Troopers, tasking you with leading teams of fellow soldiers through a series of missions, clad in your high-tech combat gear that turned you more than halfway into a sentient robot. But it was also as close as Looking Glass would ever come to their own stab at a Wing Commander: the story was advanced via filmed cutscenes featuring real human actors, and a lot of attention was paid to the goings-on back at the ranch when you weren’t dressed up in your robot suit. This sort of thing worked in Wing Commander, to whatever extent it did, because the gameplay that took place between the movie segments was fairly quick and simple. Terra Nova was not like that, which could make it feel like an even more awkward mélange of chocolate and peanut butter. It’s difficult to say whether Activision’s Mechwarrior 2, the biggest computer game of 1995, helped it or hurt it in the marketplace: on the one hand, that game showed that there was a strong appetite for tactical combat involving robots, but, on the other, said demand was already being fed by a glut of copycats. Terra Nova got lost in the shuffle. A game that had been expected to sell at least half a million copies didn’t reach one-fifth of that total.

Looking Glass’s next game didn’t do any better. Like Flight Unlimited, British Open Championship Golf cut against the dark, gritty, and violent stereotype that tended to hold sway when people thought of Looking Glass, or for that matter of the games industry writ large. It was another direct challenge to an established behemoth: in this case, Access Software’s Links franchise, which, like Flight Simulator, had its own unique customer base, being the only line of boxed computer games that sold better to middle-aged corporate executives than they did to high-school and university students. Looking Glass’s golf project was led by one Rex Bradford, whose own history with simulating the sport went all the way back to Mean 18, a hit for Accolade in 1986. This time around, though, the upstart challenger to the status quo never even got a sniff. By way of damning with faint praise, Computer Gaming World called British Open Championship Golf “solid,” but “somewhat unspectacular.” Looking Glass could only wish that its sales could have been described in the same way.

With the benefit of hindsight, we can see all too clearly that Neurath and Lerner crossed the line that separates ambition from hubris when they decided to try to set Looking Glass up as a publisher. At the very time they were doing so, many another boutique publisher was doing the opposite, looking for a larger partner or purchaser to serve as shelter from the gale-force winds that were beginning to blow through the industry. More games were being made than ever, even as shelf space at retail wasn’t growing at anything like the same pace, and digital distribution for most types of games remained a nonstarter in an era in which almost everyone was still accessing the Internet via a slow, unstable dial-up connection. This turned the fight over retail space into a free-for-all worthy of the most ultra-violent beat-em-up. Sharp elbows alone weren’t enough to win at this game; you had to have deep pockets as well, had to either be a big publisher yourself or have one of them on your side. In deciding to strike out on their own, Neurath and Lerner may have been inspired by the story of Interplay Productions, a development studio which in 1988 had broken free of the grasp of Electronic Arts — now Origin Systems’s corporate parent, as it happened — and gone on to itself become one of the aforementioned big publishers who were increasingly dominating at retail. But 1988 had been a very different time in gaming.

In short, Neurath and Lerner had chosen just about the worst possible instant to try to seize full control of their own destiny. “Game distribution isn’t always based on quality,” noted Warren Spector at the end of 1996. Having thus stated the obvious, he elaborated:

The business has changed radically in the last year, and it’s depressing. The competition for shelf space is ridiculous and puts retailers in charge. If you don’t buy an end-cap from retailers for, say, $50,000 a month, they won’t buy many copies.

Products once had three to six months. The average life is now 30 days. If you’re not a hit in 30 days, you’re gone. This is predicated on your association with a publisher who gets your title on shelves. It’s a nightmare.

With just three games shipped in the last two and a half years — a long way off their projected pace of “six original titles per year” — and with the last two of them having flopped like a wet tuna on a gymnastics court, Looking Glass was now in dire straits. The only thing that had allowed them to keep the doors open this long had been a series of workaday porting jobs that Warren Spector had been relegated to supervising down in Austin, while he waited for the company to establish itself on a sound enough financial footing to support game development from whole cloth in both locations. Ten years on, after Looking Glass had been enshrined in gaming lore as one of the most forward-thinking studios of all time and Spector as the ultimate creative producer, the idea of them wasting their collective talents on anonymous console ports would seem surreal. But such was the reality circa 1997, when Looking Glass, having burnt through all of their venture capital, was left holding on by a thread. “I remember people walking into the office to take back the [rented] plants which the studio was no longer able to pay for,” says programmer and designer Randy Smith.

Ned Lerner abandoned what seemed to be a sinking ship, leaving Looking Glass to co-found a new studio called Multitude, whose focus was to be Internet-enabled multiplayer gaming. Meanwhile Neurath swallowed the hubris of 1995 and did what the managers of all independent games studios do when they find themselves unable to pay the bills anymore: look for a buyer who would be able to pay them instead. But because Looking Glass could never seem to do anything in the conventional way even when they tried to, the buyer Neurath found was one of the strangest ever.

The Cambridge firm known as Intermetrics, Inc., was far from a household name, but it had a proud history that long predated the personal-computer era. Intermetrics had grown out of the fecund soil of Project Apollo, having been founded in March of 1969 by some of the engineers and programmers behind the Apollo Guidance Computer that would soon help to place astronauts on the Moon. After that epochal achievement, Intermetrics continued to do a lot of work for NASA, providing much of the software that was used to control the Space Shuttle. Other government and aerospace-industry contracts filled out most of the balance of its order sheets.

In August of 1995, however, a group of investors led by a television executive bought the firm for $28 million, with the intention of turning it into something altogether different. Michael Alexander came from the media conglomerate MCA, where he had been credited with turning around the fortunes of the cable-television channel USA. Witnessing the transformation that high-resolution graphics, high-quality sound, and the enormous storage capacity of CD-ROM were wreaking on personal computing, he had joined dozens of his peers in deciding that the future of mass-market entertainment and infotainment lay with interactive multimedia. Deeming most of the companies who were already in that space to be “overvalued,” and apparently assuming that one type of computer programming was more or less the same as any other, he bought Intermetrics, whose uniform of white shirts, ties, and crew cuts had changed little since the heyday of the Space Race, to ride the hottest wave in 1990s consumer electronics.

“This is a company that has the skills and expertise to be in the multimedia business, but is not perceived as being in that business,” he told a reporter from The Los Angeles Times. (It was not a question of perception; Intermetrics was not in the multimedia business prior to the acquisition.) “And that is its strength.” (He failed to elaborate on exactly why this should be the case.) Even the journalist to whom he spoke seemed skeptical. “Ponytailed, black-clad, twenty-something multimedia developers beware,” she wrote, almost palpably smirking between the lines. “Graying engineers with pocket protectors and a dozen years of experience are starting to compete.” Likewise, it is hard not to suspect Brian Fargo of Interplay of trolling the poor rube when he said that “I think it’s great that the defense guys are doing this. It’s where the job security is now. It used to be in defense. Now it’s in the videogame business.” (Through good times and bad, one thing the videogame business has never, ever been noted for is its job security.)

Alas, Michael Alexander was not just a bandwagon jumper; he was a late bandwagon jumper. By the time he bought Intermetrics, the multimedia bubble was already close to popping under the pressure of a more sustained Internet bubble that would end the era of the non-game multimedia CD-ROM almost before it had begun. As this harsh reality became clear in the months that followed, Alexander had no choice but to push Intermetrics more and more in the direction of games, the only kind of CD-ROM product that was making anyone any money. The culture clash that resulted was intractable, as pretty much anyone who knew anything about the various cultures of computing could have predicted. Among these someones was Mike Dornbrook, a games-industry stalwart who had gotten his start with Infocom in the early 1980s. Seeking his next gig after Boffo Games, a studio he had founded with his old Infocom colleague Steve Meretzky, went down in flames, Dornbrook briefly kicked the tires at Intermetrics, but quickly concluded that what he saw “made no sense whatsoever”: “They were mostly COBOL programmers in their fifties and sixties. I remember looking around and saying, ‘You’re going to turn these guys into game programmers? What in the world are you thinking?’” (Dornbrook wound up signing on with a tiny startup called Harmonix Music Systems, which in 2005, after years of diligent experimentation with the possibilities for combining music and games, altered the landscape of gaming forever with Guitar Hero.)

Belatedly realizing that all types of programming were perhaps not quite so interchangeable as he had believed, Michael Alexander set out in search of youngsters to teach his old dogs some new tricks. The Intermetrics rank and file must have shuddered at the advertisements he started to run in gaming magazines. “We are rocket scientists!” the ads trumpeted. “Even our games are mission-critical!” When these efforts failed to surface a critical mass of game-development talent, Alexander reluctantly moved on to doing what he should have done back in 1995: looking for an extant studio that already knew how to make games. It so happened that Looking Glass was right there in Cambridge, and, thanks to its troubled circumstances, was not as “overvalued” as most of its peers. Any port in a storm, as they say.

On August 14, 1997, a joint press release was issued: “Intermetrics, Inc., a 28-year-old leading software developer, and Looking Glass Studios, one of the computer gaming industry’s foremost developers, today announce the merger of the two companies’ gaming operations to form Intermetrics/Looking Glass Studios, LLC. Through the shared strengths of the two entities, the new company is strategically positioned to be a major force in the computer-game, console and online-gaming industries.” Evidently on a quest to find out how much meaningless corporate-speak he could shoehorn into one document, Michael Alexander went on to add that “Looking Glass Studios immediately catapults Intermetrics into a leading position in the gaming industry by giving us additional credentials and assets to compete in the market. Our business plan is to maintain and grow our core contract-services business while at the same time leveraging our expertise and financial resources to be a major player in the booming interactive-entertainment industry.” The price paid by the rocket scientists for their second-stage booster has to my knowledge never been publicly revealed.

The acquiring party may have been weird as all get-out, but it could have worked out far worse for Looking Glass, all things considered. In addition to the obvious benefit of being able to keep the doors open, at least a couple of other really good things came directly out of the acquisition. One was a change in name, from Looking Glass Technologies to Looking Glass Studios, emphasizing the creative dimension of their work. Another was a distribution deal with Eidos, a British publisher that had serious retail clout in both North America and Europe. Riding high on the back of the massive international hit Tomb Raider, Eidos could ensure that Looking Glass’s games got prominent placement in stores. Meanwhile, the idea of the Looking Glass people serving as mentors to those who were struggling to make games at Intermetrics proper — an excruciating proposition for both parties — would prove to be mostly a polite, face-saving fiction for Michael Alexander; in practice, the new parent company was largely content to leave its subsidiary alone to do its own thing. Now the folks at Looking Glass just needed to deliver a hit to firmly establish themselves in their new situation. That was always the sticky wicket for them.

The first game that Looking Glass released under their new ownership was Flight Unlimited II, which appeared just a few months after the big announcement. Created without the input of Seamus Blackley, who had left the company, Flight Unlimited II sought simultaneously to capitalize on the relative success of Looking Glass’s first flight simulator and to adjust that game’s priorities to better coincide with the real or perceived desires of the market. Looking Glass paired the extant flight model with an impressively detailed depiction of the geography of the San Francisco Bay Area. Then they added a lot more structure to the whole affair, in the form of a set of missions to fly after you finished your training. The biggest innovation, a first for any civilian flight simulator, was the addition of other aircraft, turning San Francisco International Airport into the same tangle of congested flight lanes it was in the real world. These changes moved the game away from being such a purist simulation of flight as an end unto itself. Still, there was a logic to the additions; one can easily imagine them making Flight Unlimited II more appealing to the sorts of gamers who don’t tend to thrive in goal-less sandboxes. Be that as it may, though, it didn’t show up in the sales figures. Flight Unlimited II sold better than Terra Nova or British Open Championship Golf, but not as well as its series predecessor, just barely managing to break even.

This disappointment put that much more pressure on Looking Glass’s next game to please the new boss and show that the studio could deliver a solid, unqualified hit. In a triumph of hope over experience, everyone had high expectations for The Dark Project, which had been described in the press release announcing the acquisition as “a next-generation fantasy role-playing game.” Such a description might have left gamers wondering if Looking Glass was returning to the territory of Ultima Underworld. As things worked out, the game that they would come to know as simply Thief would not be that at all, but would instead break new ground in a completely different way. It stands today alongside Ultima Underworld in another sense: as one of the three principal legs — the last one being System Shock, of course — that hold up Looking Glass’s towering modern-day reputation for relentless, high-concept innovation.

The off-kilter masterstroke that is Thief started with a new first-person 3D engine known as The Dark Engine. It could have powered a “low-brain shooter,” as the Looking Glass folks called the likes of the mega-hit Quake, with perfect equanimity. But they just couldn’t bring themselves to make one.

It took a goodly while for them to decide what they did want to do with The Dark Engine. Doug Church, the iconoclastic programmer and designer who had taken the leading role on System Shock, didn’t want to be out-front to the same extent on this project. The initial result of this lack of a strong authority figure was an awful lot of creative churn. There was talk of making a game called Better Red than Undead, mixing a Cold War-era spy caper with a zombie invasion. Almost as bizarre was Dark Camelot, an inverted Arthurian tale in which you played the Black Knight against King Arthur and his cronies, who were depicted as a bunch of insufferable holier-than-thou prigs. “Our marketing department wasn’t really into that one,” laughs Church.

Yet the core sensibility of that concept — of an amoral protagonist set against the corrupt establishment and all of its pretensions — is all over the game that did finally get made. Doug Church:

The missions [in Dark Camelot] that we had the best definition on and the best detail on were all breaking into Camelot, meeting up with someone, getting a clue, stealing something, whatever. As we did more work in that direction, and those continued to be the missions that we could explain best to other people, it just started going that way. Paul [Neurath] had been pushing for a while that the thief side of it was the really interesting part, and why not just do a thief game?

And as things got more chaotic and more stuff was going on and we were having more issues with how to market stuff, we just kept focusing on the thief part. We went through a bunch of different phases of reorganizing the project structure and a bunch of us got sucked into doing some other project work on Flight [Unlimited] and stuff, and there was all this chaos. We said, “Okay, well, we’ve got to get this going and really focus and make a plan.” So we put Greg [LoPiccolo] in charge of the project and we agreed we were going to call it Thief and we were going to focus much more. That’s when we went from lots of playing around and exploring to “let’s make this Thief game.”

It surely comes as no revelation to anyone reading this article that most game stories are power fantasies at bottom, in which you get to take on the identity of a larger-than-life protagonist who just keeps on growing stronger as you progress. Games which took a different approach were, although by no means unknown by the late 1990s, in the decided minority even outside of the testosterone-drenched ghetto of the first-person shooter. The most obvious exponents of the ordinary-mortal protagonist were to be found in the budding survival-horror genre, as pioneered by Alone in the Dark and its sequels on computers and Resident Evil on the consoles. But these games cast you as nearly powerless prey, being stalked through dark corridors by zombies and other things that go bump in the night. Thief makes you a stealthy predator, the unwanted visitor rifling through cupboards and striking without warning out of the darkness, yet most definitely not in any condition to mow down dozens of his enemies in full-frontal combat, Quake-style. If you’re indiscreet in your predations, you can become the cornered prey with head-snapping speed. This was something new at the time.

Or almost so. Coincidentally, two Japanese stealthy-predator games hit the Sony PlayStation in 1998, the same year as Thief’s release. Tenchu: Stealth Assassins cast you as a ninja, while Metal Gear Solid cast you as an agent of the American government on a top-secret commando mission. The latter in particular caused quite a stir, by combining its unusual gameplay style with the sort of operatically melodramatic storytelling that was more commonly associated with the JRPG genre. That said, Thief is a far more sophisticated affair than either of these games, in terms of both its gameplay and its fiction.

The titular thief and protagonist is a man known only as Garrett, who learned his trade on the streets of The City, a mixture of urban squalor and splendor that is best described as Renaissance Florence with magic — a welcome alternative to more typical fantasy settings. Over the course of a twelve-act campaign, Garrett is given a succession of increasingly daunting assignments, during which a larger plot that involves more than the acquisition of wealth by alternative methods does gradually take shape.

Although the mission tree is linear, nothing else about your experience in Thief is set in stone. It was extremely important to Looking Glass that Thief not turn into a puzzle game, a series of set-piece challenges with set-piece solutions. They wanted to offer up truly dynamic environments, environments that were in their own way every bit as much simulations as Flight Unlimited. They wanted to make you believe you were really in these spaces. Artist Daniel Thron speaks of the “deep sense of trust we had in the player. There isn’t a single solution to Thief. It’s up to you to figure out how to steal the thing. It’s letting you tell that story through gameplay. And that sense of ownership makes it unique. It becomes yours.” In the spirit of all that, the levels are big, with no clearly delineated through-line. These dynamic virtual spaces full of autonomous actors demand constant improvisation on your part even if you’ve explored them before.

Looking Glass understood that, in order for Thief to work as a vehicle for emergent narrative, all of the other actors on the stage have to respond believably to your actions. It’s a given that guards ought to hunt you down if you blatantly give away your presence to them. Thief distinguishes itself by the way it responds to more subtle stimuli. An ill-judged footstep on a creaky floor tile might cause a guard to stop and mutter to himself: “Wait! Did I just hear something?” Stand stock still and don’t make a sound, and maybe — maybe — he’ll shrug his shoulders and move on without bothering to investigate. If you do decide to take a shot at him with your trusty bow or blackjack, you best not miss, to steal a phrase from Omar Little. And you best hide the body carefully afterward, before one of his comrades comes wandering along the same corridor to stumble over it.

These types of situations and the split-second decisions they force upon you are the beating heart of Thief. Bringing them off was a massive technical challenge, one that made the creation of the 3D-graphics engine itself seem like child’s play. The state of awareness of dozens of non-player characters had to be tracked, as did sound and proximity, light and shadow, to an extent that no shooter — no, not even Half-Life — had ever come close to doing before. Remarkably, Looking Glass largely pulled it off, whilst making sure that the more conventional parts of the engine worked equally well. Garrett’s three principal weapons — a blackjack for clubbing unsuspecting victims in the back of the head, a rapier for hand-to-hand combat, and a bow which can be used to shoot a variety of different types of arrows — are all immensely satisfying to use, having just the right feeling of weight in your virtual hands. The bow is a special delight: the arrows arc through the air exactly as one feels they ought to. You actually get to use your bow in all sorts of clever ways that go beyond killing, such as shooting water arrows to extinguish pesky torches — needless to say, darkness is your best friend and light your eternal enemy in this game — and firing rope arrows that serve Garrett as grappling hooks would a more conventional protagonist.

Looking Glass being Looking Glass, even the difficulty setting in Thief is more than it first appears to be. It wouldn’t be much of an exaggeration to say that Thief is really three games in one, depending on whether you play it on Normal, Hard, or Expert. (Looking Glass apparently wasn’t interested in the sorts of players who might be tempted by an “easy” mode.) Not only do the harder settings require you to collect more loot to score a passing grade on each mission, but the environments themselves become substantially larger. Most strikingly, in a brave subversion of the standard shooter formula, each successive difficulty setting requires you to kill fewer rather than more people; at the Expert level, you’re not allowed to kill anyone at all.

Regardless of the difficulty setting you choose, Thief will provide a stiff challenge. Its commitment to verisimilitude extends to all of its facets. In lieu of a conventional auto-map, it provides you only with whatever scribbled paper map Garrett has been able to scrounge from his co-conspirators, or sometimes not even that much. If your innate sense of direction isn’t great — mine certainly isn’t — you can spend a long time just trying to find your way in these big, twisty, murky spaces.

When it’s at its best, Thief is as amazing as it is uncompromising. It oozes atmosphere and tension; it’s the sort of game that demands to be played in a dark room behind a big monitor, with the phone shut off and a pair of headphones planted firmly over the ears. Sadly, though, it isn’t always this best version of itself. In comparison to Ultima Underworld or System Shock, both of which I enjoyed from first to last, Thief strikes me as a lumpy creation, a game of soaring highs but also some noteworthy lows. I was all-in during the first mission, a heist taking place in the mansion of a decadent nobleman. Having recently read Sarah Dunant’s The Birth of Venus and written quite a lot about Renaissance Florence, my receptors were well primed for this Neo-Renaissance setting. Then I came to the second mission, and suddenly I was being asked to fight my way through a bunch of zombies in an anonymous cave complex. Suddenly Thief felt like dozens of other first-person action games.

This odd schizophrenia persists throughout the game. The stealthy experience I’ve just been describing — the boldly innovative experience that everyone thinks of today when they think of Thief — is regularly interspersed with splatterfests against enemies who wouldn’t have been out of place in Quake: zombies, rat men, giant exploding frogs, for Pete’s sake. (Because these enemies aren’t human, they’re generally exempt from the prohibition against killing at the Expert level.) All told, it’s a jarring failure to stick to its guns from a studio that has gone down in gaming lore for refusing to sacrifice its artistic integrity, to its own great commercial detriment.

As happens so often in these cases, the reality behind the legend of Looking Glass is more nuanced. Almost to a person, the team who made Thief attribute the inconsistency in the level design to outside pressure, especially from their publisher Eidos, who had agreed to partially fund the project. “Eidos never believed in it and until the end told us to put in more monsters and have more fighting and exploring and less stealth, and I’m not sure there was ever a point [when] they got it,” claims Doug Church. “I mean, the trailers Eidos did for Thief were all scenes with people shooting fire arrows at people charging them. So you can derive from that how well they understood or believed in the idea.”

And yet one can make the ironic case that Eidos knew what they were doing when they pushed Looking Glass to play up the carnage a little more. Released in November of 1998, Thief finally garnered Looking Glass some sales figures that were almost commensurate with their positive reviews. (“If you’re tired of DOOM clones and hungry for challenge, give this fresh perspective a try,” said Computer Gaming World.) The game sold about half a million copies — not a huge hit by the standards of an id Software or Blizzard Entertainment, but by far the most copies Looking Glass had ever sold of anything. It gave them some much-needed positive cash flow, which allowed them to pay down some debts and to revel in some good vibes for a change when they looked at the bottom line. But most importantly for the people who had made Thief, its success gave them the runway they needed to make a sequel that would be more confident in its stealthy identity.





Sources: The book Game Design Theory & Practice (2nd ed.) by Richard Rouse III; Next Generation of March 1997 and June 1997; PC Zone of December 1998; Computer Gaming World of September 1995, June 1996, August 1997, April 1998, and March 1999; Retro Gamer 117, 177, and 260; Los Angeles Times of September 15, 1995; Boston Globe of May 3, 1995.

Online sources include the announcement of the Intermetrics acquisition on Looking Glass’s old website, InterMetrics’s own vintage website, “Ahead of Its Time: A History of Looking Glass” by Mike Mahardy at Polygon, and James Sterrett’s “Reasons for the Fall: A Post-Mortem on Looking Glass Studios.”

My special thanks to Ethan Johnson, a fellow gaming historian who knows a lot more about Looking Glass than I do, and set me straight on some important points after the first edition of this article was published.

Where to Get Them: Terra Nova: Strike Force Centauri and Thief Gold are available for digital purchase at GOG.com. The other Looking Glass games mentioned in this article are unfortunately not.

Footnotes

1 Dornbrook wound up signing on with a tiny startup called Harmonix Music Systems, which in 2005, after years of diligent experimentation with the possibilities for combining music and games, altered the landscape of gaming forever with Guitar Hero.


Putting the “J” in the RPG, Part 3: Playing Final Fantasy VII (or, Old Man Yells at Cloud)

Fair warning: this article includes plot spoilers of Final Fantasy VII.

Historians and critics like me usually have to play the know-it-all in order to be effective at our jobs. My work flow begins with me going out and learning everything I can about a topic in the time I have available. Then I decide what I think about it all, find a way to structure my article, and share it with you as if I’ve been carrying all this information around with me all my life. Often I get things wrong, occasionally horribly wrong. But I can always count on you astonishingly knowledgeable folks to set me straight in the end, and in the meantime being direct is preferable in my book to equivocating all over the place. For, with the arguable exception of a wide-eyed undergraduate here or there enrolled in her first class in postmodern studies, absolutely no one wants to read a writer prattling on about the impossibility of achieving Complete Truth or the Inherent Subjectivity of criticism. Of course complete truth is an unattainable ideal and all criticism is subjective! I assume that you all know these things already, so that we can jump past the hand-wringing qualifiers and get right to the good stuff.

Still, I don’t believe that all criticism is of equal value, for all that it may in the end all be “just, like, your opinion man!” The most worthwhile criticism comes from a place of sympathy with the goals and expectations that surround a work and is grounded in an understanding of the culture that produced it. It behooves no one to review a blockbuster action movie as if it was an artsy character study, any more than it makes sense to hold, say, Michael Crichton up to the standards of fine literature. Everything has its place in the media ecosystem, and it’s the critic’s duty to either understand that place or to get out of the way for those that do.

Which goes a long way toward explaining why I start getting nervous when I think about rendering a verdict on Final Fantasy VII. I am, at best, a casual tourist in the milieu that spawned it; I didn’t grow up with Japanese RPGs, didn’t even grow up with videogame consoles after I traded my Atari VCS in for a Commodore 64 at age eleven. Sitting with a game controller in my hand rather than a keyboard and mouse or joystick is still a fairly unfamiliar experience for me, almost 40 years after it became the norm for Generation Nintendo. My experience with non-gaming Japanese culture as well is embarrassingly thin. I’ve never been to Japan, although I did once glimpse it from the Russian island of Sakhalin. Otherwise, my encounters with it are limited to the Star Blazers episodes I used to watch as a grade-school kid on Saturday mornings, the World War II history books I read as an adolescent war monger, that one time in my twenties when I was convinced to watch Ghost in the Shell (I’m afraid it didn’t have much impact on me), a more recent sneaking appreciation for the uniquely unhinged quality of some Japanese music (which can make a walking-blues vamp sound like the apocalypse), and the Haruki Murakami novels sitting on the bookshelf behind me as I write these words, the same ones that I really, really need to get around to reading. In summation, I’m a complete ignoramus when it comes to console-based videogames and Japan alike.

So, the know-it-all approach is right out for this article; even I’m not daring enough to try to fake it until I make it in this situation. I hesitate to even go so far as to call what follows a review of Final Fantasy VII, given my manifest lack of qualifications to write a good one. Call it a set of impressions instead, an “old man yells at cloud” for the JRPG world where the joke is quite probably on the old man.

In a weird sort of way, though, maybe that approach will work out okay, just this once. For, as we learned in the last article, Final Fantasy VII was the first heavily hyped JRPG to be released on computers as well as consoles in the West. When that happened, many computer gamers who were almost as ignorant then as I am now played it. As I share my own experiences below, I can be their voice in this collision between two radically different cultures of gaming. The fallout from these early meetings would make games as a whole better in the long run, regardless of the hardware on which they ran or the country where they were made. It’s this gratifying ultimate outcome that prompted me to write this trilogy of articles in the first place. Perhaps it even makes my personal impressions relevant in this last entry of said trilogy, despite my blundering cluelessness.

Nevertheless, given the intense feelings that JRPGs in general and this JRPG in particular arouse in their most devoted fans, I’m sure some small portion of you will hate me for writing what follows. I ask only that you read to the end before you pounce, and remember that it’s just my opinion, man, and that a critic’s aesthetic judgments do not reflect his moral character.


The trains in Final Fantasy VII look like steam locomotives. This doesn’t make much sense, given what we know of the technology in use in the city of Midgar, but it’s kind of cool.

I had heard a lot about Final Fantasy VII before I played it, most of it extremely positive, to put it lightly. In fact, I had seen it nominated again and again for the title of Best Game Ever. For all that I have no personal history with JRPGs, I do like to think of myself as a reasonably open-minded guy. I went into Final Fantasy VII wanting to be wowed, wanting to be introduced to an exciting new world of interactive narrative that stood apart from both the set-piece puzzle-solving of Western adventure games and the wide-open emergent diffusion of Western RPGs. But unfortunately, my first couple of hours with Final Fantasy VII were more baffling than bracing. I felt a bit like the caveman at the beginning of 2001: A Space Odyssey, trying to figure out what I was supposed to be doing with this new bone I had just picked up.

After watching a promisingly understated opening-credits sequence, accompanied by some rather lovely music, I started the game proper. I was greeted with a surreal introductory movie, in which a starry sky morphed into scenes from a gritty, neon-soaked metropolis of trains and heavy industry, with an enigmatic young girl selling flowers amidst it all. Then several people were leaping off the top of a train, and I realized that I was now controlling one of them. “C’mon, newcomer!” shouted one of the others. “Follow me.” I did my best to oblige him, fumbling through my first combat — against some soldierly types who were chasing us for some reason — along the way.

The opening credits are the last part of Final Fantasy VII that can be described as understated. From that point on, even the swords are outsized.

Who the hell was I? What was I supposed to be doing? Naïve child of 1980s computer gaming that I was, I thought maybe all of this was explained in the manual. But when I looked there, all I found were some terse, unhelpful descriptions of the main characters, not a word about the plot or the world I had just been dropped into. I was confused by everything I saw: by the pea soup of bad translation that made the strictly literal meanings of the sentences the other characters said to me impossible for me to divine at times; by the graphics that sometimes made it hard to separate depth from height, much less figure out where the climbable ladders and exit points on the borders of the maps lay; by the way my character lazily sauntered along — “Let’s mosey!”, to quote one of the game’s famously weird translations — while everyone else dashed about with appropriate urgency; by the enemies who kept jumping me every minute or two while I beat my head against the sides of the maps looking for the exits, enemies whom I could dispatch by simply mashing “attack” over and over again; by the fact that I seemed to be a member of a terrorist cell set on blowing up essential civic infrastructure, presumably killing an awful lot of innocent people in the process; by the way the leader of my terrorist group, a black man named Barret, spoke and acted like Mr. T on old A-Team reruns, without a hint of apparent irony.

What can I say? I bounced. Hard. After I made it to the first boss enemy and died several times because, as I would later learn from the Internet, the shoddy English translation was telling me to do the exact opposite of what I needed to do to be successful against it, I threw the game against the metaphorical wall. What did anyone see in this hot mess, I asked my wife — albeit in considerably more colorful language than that. She just laughed — something that, to be fair, she spends a lot of time doing when I play these crazy old games on the television.

The first hill on which I died. Final Fantasy VII‘s original English translation is not just awful to read but actively wrong in places. When you meet the first boss monster, you’re told to “attack while it’s [sic] tail’s up!” The Japanese version tells you not to attack when its tail’s up. Guess which one is right…


I sulked for several weeks, deeply disappointed that this game that I had wanted to be awesome had turned out to be… less than awesome. But the fact remained that it was an important work, in the history of my usual beat of computer gaming almost as much as that of console gaming. Duty demanded that I go back in at some point.

When that point came, I steeled myself to fight harder for my pleasure. After all, there had to be some reason people loved this game so, right? I read a bit of background on the Internet, enough to understand that it takes place on an unnamed world whose economy is dominated by an all-powerful mega-corporation called the Shinra Electric Power Company, which provides energy and earns enormous profits by siphoning off the planet’s Mako, a sort of spiritual essence. I learned that AVALANCHE, the terrorist cell I was a part of, was trying to break Shinra’s stranglehold, because its activities were, as Barret repeats ad nauseam, “Killin’ the planet.” And I learned that the main character — the closest thing to “me” in the game — was a cynical mercenary named Cloud Strife, a former member of a group called SOLDIER that did Shinra’s dirty work. But Cloud has now switched sides, joining AVALANCHE strictly for the paycheck, as he makes abundantly clear to anyone who asks him about it and most of those who don’t. The action kicks off in the planet’s biggest city of Midgar, with AVALANCHE attempting to blow up the Shinra reactors there one by one.

With that modicum of background information, everything began to make a little more sense to me. I also picked up some vital practical tips on the Internet. For example, I discovered that I could push a button on the controller to clearly mark all ladders and exits from a map, and that I could hold down another button to make Cloud run like everybody else; having to do so basically all the time was a trifle annoying, but better than the alternative of moseying everywhere. I learned as well that I could turn off the incessant random encounters using a fan-made application called 7th Heaven, but I resisted the temptation to do so; I was still trying to be strong at this point, still trying to experience the game as a player would have in the late 1990s.

Things went better for a while. By doing the opposite of what the bad translation was telling me to do, I got past the first boss monster that had been killing me. (Although I didn’t know it at the time, this would prove to be the only fight that ever really challenged me until I got to the very end of the game.) Then I returned with the others to our terrorist hideout, and agreed to help AVALANCHE blow up the next reactor. (All in a day’s work for a mercenary, I suppose.) While the actual writing remained more or less excruciating most of the time, I started to recognize that there was some real sophistication to the narrative’s construction, that my frustration at the in medias res beginning had been more down to my impatience than any shortcoming on the game’s part. I realized I had to trust the game, to let it reveal its story in its way. Likewise, I had to recognize that its environmentalist theme, a trifle heavy-handed though it was, rang downright prescient in light of the sorry state of our own planet a quarter-century after Final Fantasy VII was made.

Which isn’t to say that it was all smooth sailing. After blowing up the second Mako reactor, Cloud was left dangling from a stray girder, hundreds of feet above the vaguely Blade Runner-like city of Midgar. After some speechifying, he tumbled to his presumed doom — only to wake up inside a cathedral, staring into the eyes of the flower girl from the opening movie. “The roof and the flower bed must have broken your fall,” she said. While my wife was all but rolling on the floor laughing at the sheer ridiculousness of this idea, I bravely SOLDIERed onward, learning that the little girl’s name was Aerith and that she was being stalked by Shinra thugs due to some special powers they believed her to possess. “Take me home,” she begged Cloud.

“Okay, I’ll do it,” he grunted in reply. “But it’ll cost you.” (Stay classy, Cloud… stay classy.)

And now I got another shock. “Well, then, let’s see…” Aerith said. “How about if I go out with you once?” Just like that, all of my paradigms had to shift. Little Aerith, it seemed, wasn’t so little after all. Nonetheless, playing Cloud in this situation left me feeling vaguely unclean, like a creepy old guy crashing his tweenage daughter’s slumber party.

Judging from his facepalm, Cloud may have been as shocked by Aerith’s offer of affection for protection as I was.

Ignoramus though I was, I did know that Japanese society is not generally celebrated for its progressive gender politics. (I do think this is the biggest reason that anime and manga have never held much attraction for me: the tendency of the tiny sliver of it which I’ve encountered to simultaneously infantilize and sexualize girls and women turns me right off.) Now, I realized that I — or rather Cloud — was being thrown into a dreaded love triangle, its third point being Cloud’s childhood friend and fellow eco-terrorist Tifa. Going forward, Aerith and Tifa would spend their character beats sniping at one another when not making moon-eyes at Cloud. Must be something about that giant sword he carries around, tucked only God knows where inside his clothing…

I was able to identify Tifa as an adult — or at least an adolescent — from the start, thanks to her giant breasts, which she seems to be trying to thrust right out of the screen at you when you win a fight, using them as her equivalent of Cloud’s victoriously twirling sword. (This was another thing my wife found absolutely hilarious…) The personalities of the women in this game demonstrate as well as anything its complete bifurcation between gameplay and story. When you control Aerith or Tifa in combat, they’re as capable as any of the men, but when they’re playing their roles in the story, they suddenly become fragile flowers utterly dependent on the kindness of Cloud.

Anyway, soon we got to Wall Market. Oh, my. This area is unusual in that it plays more like a puzzle-based adventure game than anything else, featuring no combat at all — what a blessed relief that was! — until the climax. Less positively, the specific adventure game it plays like is Leisure Suit Larry at its most retrograde. Tifa gets abducted and forced to join the harem of a Mafia kingpin-type named Don Corneo, and it’s up to Cloud and Aerith to rescue her. Aerith decides that the only way to get Cloud inside Corneo’s mansion and effect the rescue is to dress him up like… gasp… a girl! This suggestion Cloud greets with appropriate horror, understanding as he does that the merest contact with an article of female clothing not hanging on a female body carries with it the risk of an instant and incurable case of Homosexuality. But he finally comes around with all the good grace of a primary cast member of Bosom Buddies. Many shenanigans ensue, involving a whorehouse, a gay bathhouse, erectile dysfunction, a “love hotel,” cross-dressing bodybuilders, and a pair of panties, all loudly Othered for the benefit of the insecure straight male gaze. What the hell, I wondered for the umpteenth time, had I gotten myself into here?

You can’t make this stuff up…

But I didn’t let any of it stop me; I pushed right on through like the SOLDIER Cloud was. No, readers, what broke me wasn’t Don Corneo chasing Cloud-in-a-dress around his bedroom, but rather the goddamn train graveyard. Let me repeat that with emphasis… the goddamn train graveyard.

In a way, this area illustrates one of Final Fantasy VII’s more admirable attributes, its determination to give you a variety of different stuff to do. It’s a combination of a maze and a sort of Sokoban puzzle, as you must climb in and over broken-down train carriages and engines in an abandoned depot, even sometimes putting on your engineer’s cap and driving a locomotive out of the way. This is fine in and of itself. What I found less fine was, as usual, the random combat. I would be working out my route in my pokey middle-aged way, coming up with a plan… and then the screen would go all whooshy and the battle music would start, and I’d have to spend the next 30 seconds mashing the attack button before I could get back to the navigational puzzle, by which time I’d completely lost track of what I had intended to do there. Rinse and repeat. Words cannot express how much I had learned to loathe that battle music already, but this took the torture to a whole new level, as combat seemed to come at twice, thrice, five times the rate of before. I just couldn’t take it anymore. I quit. Not willfully… I just stopped playing one evening and didn’t start again the next. Or the next. Or the one after that. You know how it goes.

The second hill on which I died.

So, real life went on. But as it did so, my conscience kept pricking me. This game is important, it said. People love this game. Can you not find some way to make friends with it?

I decided to give it one last shot. This time, however, I would approach it differently. Final Fantasy VII has a passionate, active fan community — have I told you that people love this game? — who have done some rather extraordinary things with it over the years. I already mentioned one of these things in passing: 7th Heaven, an application that makes it effortless to install dozens of different “mod” packages, which can alter the game in ways both trivial and major, allowing you to play Final Fantasy VII exactly the way that you wish.

Now, I normally consider such things off-limits; my aim on this site is to give you the historical perspective, which means playing and reviewing games as their original audience would have known them. Still, I decided that, if it could help me to see the qualities other people saw in Final Fantasy VII that I all too plainly was not currently seeing, it might be okay, just this one time. I installed 7th Heaven and started to tweak away. First and foremost, I turned off the random encounters. Then I set it up so Cloud would run rather than mosey by default. Carried away by my newfound spirit of why the heck not, I even replaced the Windows version’s tinny MIDI soundtrack with the PlayStation version’s lusher music.

And then, having come this far, I really took the plunge. A group of fans who call themselves “Tsunamods” have re-translated all of the text in the game from the original Japanese script. As if that wasn’t enough, they’ve also found a way to add voice acting, covering every single line of dialog in the game. I went for it.

I was amazed at the difference it made — so amazed that I felt motivated to start the game all over from the beginning. The Tsunamods voice acting is way, way better than it has any right to be — far better than the average professional CD-ROM production of the 1990s. Being able to listen to the dialog flowing by naturally instead of tapping through text box after text box was a wonderful improvement in itself. But I was even more stunned by the transformation wrought by the fresh translation. Suddenly the writing was genuinely good in places, and never less than serviceable, displaying all sorts of heretofore unsuspected layers of nuance and irony. Instead of fawning all over Cloud like every teenage boy’s sexual fantasy, Aerith and Tifa took a more bantering, patronizing attitude. The Wall Market sequence especially displayed a new personality, with Aerith now joshing and gently mocking Cloud for his hetero horror at the prospect of donning a dress. Even Barret evinced signs of an inner life, becoming something more than an inadvertent caricature of Mr. T when he expressed his love for the little orphan girl to whom he’d become surrogate father. And I could enjoy all of this without having to fight a pointless random battle every three minutes; only the meaningful, plot-dictated fights remained. I was, to coin a phrase, in seventh heaven. I had abandoned all of my principles about fidelity to history, and it felt good.

At the same time, though, I wasn’t really sure whose game I was playing anymore. In his YouTube deconstruction of Final Fantasy VII‘s original English translation, Tim Rogers states that “I believe that no such thing exists as a ‘perfect’ translation of a work of literature from one language to another. All translation requires compromise.” I agree wholeheartedly.

For the act of translation — any act of translation — is a creative act in itself. Even those translations which strive to be as literal as possible — which in my opinion are often the least satisfying of them all — are the product of a multitude of aesthetic choices and of the translator’s own understanding of the source text. In short, a work in translation is always a different work from its source material. This is why Shakespeare buffs like me get so upset when people talk about “modernizing” the plays and poetry by translating them into 21st-century English. If you change the words, you change the works. Whether you think it’s better or worse, what you end up with is no longer Shakespeare. The same is true of the Bible; the King James Bible in English is a different literary work from the Hebrew Old Testament or the Greek New Testament. (This is what makes the very idea of Biblical Fundamentalism — of the Bible as the incontrovertible Word of God — so silly on its face…)

Needless to say, all of this holds equally true for Final Fantasy VII. When that game was first translated into English, it became a different work from the Japanese original. And when it was translated again by Tsunamods, it became yet another work, one reflecting not only these latest translators’ own personal understandings and aesthetics but also the changed cultural values of its time, more than twenty years after the first translation was done.

Of course, we can attempt to simply enjoy the latest translation for what it is, as I was intermittently able to do when I could shut my historian’s conscience off. Yet that same conscience taunts me even now with questions that I may never be able to answer, given that I don’t expect to find the time and energy in my remaining decades to become fluent in Japanese. Lacking that fluency, all I am left with are suppositions. I strongly suspect that the first English translation of Final Fantasy VII yielded a work that was cruder and more simplistic than its Japanese source material. Yet I also suspect that the latest English translation has softened many of the same source material’s rough edges, sanding away some racism, misogyny, and homophobia to suit the expectations of a 21st-century culture that has thankfully made a modicum of progress in these areas. What I would like to know but don’t is exactly where all of the borders lie in this territory. (Although Tim Rogers’s video essays are worthy in their way, I find them rather frustrating in that they never quite seem to answer the questions I have, whilst spending a lot of time on details of grammar and the like that strike me as fairly trivial in the larger scheme of things.)

What I do know, however, is that the Tsunamods re-translation and voice acting, combined with the other tweaks, finally allowed me to unabashedly enjoy Final Fantasy VII. I was worried in the beginning that forgoing random encounters might leave my characters hopelessly under-leveled, but the combat as a whole is so unchallenging that I found having a bit less experience actually improved the game, by forcing me to employ at least a modicum of real strategy in some of the boss fights. I had a grand old time with my modified version of the game for the first seven or eight hours especially, when my party was still running around Midgar on the terrorist beat. Being no longer forced to gawk at the writing like a slow-motion train wreck, I could better appreciate the storytelling sophistication on display: the willingness of the plot to zig where conventional genre-narrative logic said it ought to zag, the refusal to shy away from the fact that AVALANCHE was, whatever the inherent justice of its cause, a gang of reckless terrorists who could and eventually did get lots and lots of innocent people killed.

After I carried the fight directly to the Shinra headquarters, I was introduced to the real villain of the story, a fellow named Sephiroth who used to be Cloud’s commanding officer in SOLDIER but had since transcended his humanity entirely through a complicated set of circumstances, and was now attempting to become a literal god at the expense of the planet and everyone else on it. Leaving Midgar and its comparatively parochial concerns behind, Cloud and his companions set off on Sephiroth’s trail, a merry chase across continents and oceans.

Wandering the world map.

This chase after Sephiroth fills the largest chunk of the game by far. Occasionally, dramatic revelations continued to leave me admiring its storytelling ambition. While the tragic death of Aerith at the hands of Sephiroth had perhaps been too thoroughly spoiled for me to have the impact it might otherwise have had, the gradual discovery that Cloud was not at all what he seemed to be — that he was in fact a profoundly unreliable narrator, a novelistic storytelling device seldom attempted in games — was shocking and at times even moving. Whenever the main plot kicked into gear for these or other reasons, I sat up and paid attention.

But a goodly portion of this last 80 percent of the game is spent meandering through lots and lots of disparate settings, from “rocket cities” to beach-side resort towns to a sprawling amusement park of all places, that have only a tangential relation to the real story and that I don’t tend to find as intrinsically interesting as Midgar. I often got restless and a bit bored in these places, with that all too familiar, creeping feeling that my time was being wasted. I’ve played and enjoyed plenty of Western RPGs whose watchword is “Go Forth and Explore,” but that approach didn’t work so well for me here. I found the game’s mechanics too simplistic to stand up on their own without the crutch of a compelling story, while the graphics, much-admired though they were by PlayStation gamers back in the day, were too hazy and samey in that early 3D sort of way to make the different areas stand out from one another in terms of atmosphere. Even the apparent non-linearity of the huge world map proved to be less than it seemed; there is actually only one really viable path through it, although there is a fair amount of optional content and Easter eggs for the truly dedicated to find. Being less dedicated, I soon began to wish for a way to further bastardize my version of the game, by turning off the plot-irrelevant bits in the same way I’d turned off the random encounters. Like a lot of RPGs of the Western stripe as well, Final Fantasy VII strikes me as far, far longer than it needs to be, an enjoyable 25-hour experience blown up to 50 hours or more, even without all those random encounters. I was more than ready for it to be over when I got to the end. The last fight was a doozie, what with my under-leveled characters, but it was nice to be pushed to the limit for once. And then it was all over.

What, then, do I think about Final Fantasy VII when all is said and done? For me, it’s a game that contains multitudes, one that resists blithe summation. Some of it is sublime, some of it is ridiculous. Sometimes it’s riveting, sometimes it’s exhausting. It certainly doesn’t achieve everything it aims for. But then again, how could it? It shoots for the moon, the sun, and the stars all at once when it comes to its story. It wants to move you so very badly that it’s perhaps inevitable that some of it just comes off as overwrought. Still, I’ll take its heartfelt earnestness over bro-dudes chortling about gibs and frags any day of the week, and all day on Sunday. “Can a computer make you cry?” asked another pioneering company almost a decade and a half before Final Fantasy VII was released. Square, it seems, was determined to provide a definitive affirmative answer to that question. And I must admit that the final scene, of ugly old Midgar now overrun with the beautiful fruits of the earth, did indeed leave a drop or two of moisture in the eyes of this nature lover, going a long way toward redeeming some of my earlier complaints. Whatever quibbles I may have with this game, its ultimate message that we humans can and must learn to live in harmony with nature rather than at odds with it is one I agree with, heart, mind, and soul.

My biggest problem with Final Fantasy VII — or rather with the version of it that I played to completion, which, as noted above, is not the same as the one Square created in Japanese — is that it tries to wed this story and message to a game, and said game isn’t always all that compelling. It’s not that there are no good ideas here; I do appreciate that Final Fantasy VII tries to give you a lot of different stuff to do, some of which, such as the action-based mini-games, I haven’t even mentioned here. (Suffice to say now that, while the mini-games won’t blow anyone away, they’re generally good enough for a few minutes’ change of pace.)

Still, and especially if you’re playing without mods, most of the gamey bits of this game involve combat, and the balance there is badly broken. Final Fantasy VII‘s equivalent of magic is a mystical substance called “materia,” which can be imbued with different spell-like capabilities and wielded by your characters. Intriguingly, the materia “levels up” with repeated usage, taking on new dimensions. But the balance of power is so absurdly tilted in favor of the player that you never really need to engage with these mechanics at all; there are credible reports of players making it all the way to the final showdown with Sephiroth without ever once even equipping any materia, just mashing that good old attack button. (To put this in terms that my fellow old-timers will understand: this is like playing all the way through, say, Pool of Radiance without ever casting a spell.) Now, you could say that this is such players’ loss and their failure, and perhaps you’d be partially correct. But the reality is that, if you give them the choice, most players will always take the path of least resistance, then complain about how bored they were afterward. It’s up to a game’s designer to force them to engage on a deeper level, thereby forcing them to have fun.

When I examine the history of this game’s development, I feel pretty sure I know why it came to be the way it is. Throwing lots and lots of bodies at a project may allow you to churn out reams of cut scenes and dialog in record time, but additional manpower cannot do much beyond a certain point to help with the delicate, tedious process of testing and balancing. What with a looming release date precluding more methodical balancing and the strong desire to make the game as accessible as possible so as to break the JRPG sub-genre for good and all in the West, a conscious decision was surely made to err on the side of easiness. In a way, I find it odd to be complaining about this here. I’m not generally a “hardcore” player at all; far more vintage games of the 1980s and 1990s are too hard than too easy for my taste. But this particular game’s balance is so absurdly out of whack that, well, here we are. I do detest mindless busywork, in games as in life, and if mashing that attack button over and over while waiting for a combat to end doesn’t qualify for that designation, I don’t know what does. If it couldn’t be balanced properly, I’d have preferred a version of Final Fantasy VII that played as a visual novel, without the RPG trappings at all. But commercial considerations dictated that that could never happen. So, again, here we are.[1]

As it was, I found my modified version of Final Fantasy VII intermittently gripping, for all that I never quite fell completely in love with it. It’s inherently condescending for any critic to tell a game’s fans why they love it despite its flaws, and I don’t really want to do that here. That said, it does occur to me that a lot of Final Fantasy VII‘s status in gaming culture is what we might call situational. This game was a phenomenon back in 1997, the perfect game coming at the perfect time, sweeping away all reservations on a tide of novelty and excitement. It was a communal event as much as a videogame, a mind-blower for millions of people. If some of what it was and did wasn’t actually as novel as Generation PlayStation believed it to be — and to be fair, some of it was genuinely groundbreaking by any standard — that didn’t really matter then and doesn’t matter now. Final Fantasy VII brought high-concept videogame storytelling into the mainstream. It didn’t do so perfectly, but it did so well enough to create memories and expectations that would last a lifetime.

Even the romance was perfectly attuned to the times, or rather to the ages of many players when they first met this game. The weirdness of Wall Market aside, Cloud and Aerith and Tifa live in that bracket that goes under the name of “Young Adult” on bookstore shelves: that precious time which we used to call the period of “puppy love” and which most parents still wish lasted much longer, when romance is still a matter of “girls and guys” exchanging Valentines and passing notes in class (or perhaps messages on TikTok these days), when sex — or at least sex with other people — is still more of a theoretical future possibility than a lived reality. (Yes, the PlayStation itself was marketed to a slightly older demographic than this one, but, as I noted in my last article, that made it hugely successful with the younger set as well, who always want to be doing what their immediate elders are.) I suspect that I too would have liked this game a lot more if I’d come to it when the girls around me at my school and workplace were still exotic, semi-unknowable creatures, and my teenage heart beat with tender feelings and earthier passions that I’d hardly begun to understand.

In short, the nostalgia factor is unusually strong with this one. Small wonder that so many of its original players continue to cherish it so. If that causes them to overvalue its literary worth a bit, sometimes claiming a gravitas for it not entirely in keeping with what is essentially a work of young-adult fiction… well, such is human nature when it comes to the things we cherish. For its biggest fans, Final Fantasy VII has transcended the bits and bytes on the CDs that Square shipped back in 1997. It doesn’t exist as a single creative artifact so much as an abstract ideal, or perhaps an idealized abstraction. Like the Bible, it has become a palimpsest of memory and translation and interpretation, a story to be told again and again in different ways. To wit: in 2020, Square began publishing a crazily expansive re-imagining of Final Fantasy VII, to be released as a trilogy of games rather than a single one. The first entry in the trilogy — the only one available as of this writing — gets the gang only as far as their departure from Midgar. By all indications, this first part has been a solid commercial success, although not a patch on the phenomenon its inspiration was in a vastly different media ecosystem.

As for me, coming to this game so many years later, bereft of all those personal connections to it: I’m happy I played it, happy to have familiarized myself with one of the most important videogames in history, and happy to have found a way to more or less enjoy it, even if I did have to break my usual rules to do so. I wouldn’t call myself a JRPG lover by any means, but I am JRPG curious. I can see a lot of potential in the game I played, if it was tightened up in the right ways. I look forward to giving Final Fantasy VIII a try; although it’s widely regarded as one of the black sheep of the Final Fantasy family, it seems to me that some of the qualities widely cited as its failings, such as its more realistic, less anime-stylized art, might just strike my aesthetic sensibilities as strengths. And I understand that Square finally got its act together and sprang for proper, professional-quality English translations beginning with this installment, so there’s that.

Now, to do something about those Haruki Murakami novels on my shelf…


Where to Get It: The original version of Final Fantasy VII can still be purchased from Steam as a digital download. If you’re an impatient curmudgeon like me, you may also want to install the 7th Heaven mod manager to tweak the game to your liking. For the record, the mods I wound up using were “OST Music Remastered” (for better music), “Echo-S 7” (for the better translation and voice acting), and “Gameplay Tweaks — Qhimm Catalog” (strictly to make my characters “always run”; I left everything else here turned off). With 7th Heaven alone installed, you can toggle random encounters off and on by pressing CONTROL-B while playing. Note that you need to do this each time you start the game up again.





Footnotes

1 The game’s tireless fan base has gone to great lengths, here as in so many places, to mitigate its failings by upping the difficulty in various ways. I didn’t investigate much in this area, deciding I had already given the game the benefit of enough retro-fitting with the mods I did employ.

Tomb Raider

If you have to stare at someone’s bum, it’s far better to look at a nice female bum than a bloke’s bum!

— Adrian Smith of Core Design

There was something refreshing about looking at the screen and seeing myself as a woman. Even if I was performing tasks that were a bit unrealistic… I still felt like, hey, this is a representation of me, as myself, as a woman. In a game. How long have we waited for that?

— gamer Nikki Douglas

Sure, she’s powerful and assertive. She takes care of herself, and she knows how to handle a gun. She’s a great role model for girls. But how many copies of Tomb Raider do you think they’d have sold if they’d made Lara Croft flat-chested?

— Charles Ardai, Computer Gaming World

It strikes me that Lara Croft must be the most famous videogame character in history if you take the word “character” literally. Her only obvious competition comes from the Nintendo stable — from Super Mario and Pac-Man and all the rest. But they aren’t so much characters as eternal mascots, archetypes out of time in the way of Mickey Mouse or Bugs Bunny. Lara, on the other hand, has a home, a reasonably coherent personal chronology, a reasonably fleshed-out personality — heck, she even has a last name!

Of course, Lara is by no means alone in any of these things among videogame stars. Nevertheless, for all the cultural inroads that gaming has made in recent decades, most people who don’t play games will still give you a blank stare if you try to talk to them about any of our similarly well-rounded videogame characters. Mention Solid Snake, Cloud, or Gordon Freeman to them and you’ll get nothing. But Lara is another story. After twenty games that have sold almost 100 million copies combined and three feature films whose box-office receipts approach $1 billion, everybody not living under a proverbial rock has heard of Lara Croft. Love her or hate her, she has become one of us in a way that none of her peers can match.



Lara’s roots reach back to the first wave of computer gaming in Britain, to the era when Sinclair Spectrums and Commodore 64s were the hottest machines on the market. In 1984, in the midst of this boom, Ian Stewart and Kevin Norburn founded the publisher Gremlin Graphics — later Gremlin Interactive — in the back room of a Sheffield software shop. Gremlin went on to become the Kevin Bacon of British game development: seemingly everybody who was anybody over the ensuing decades was associated with them at one time or another, or at the very least worked with someone who had been. This applies not least to Lara Croft, that most iconic woman in the history of British gaming.

Core Design, the studio that made her, was formed in 1986 as Gremlin Derby, around the talents of four young men from the same town who had just created the hit game Bounder using the Commodore 64s in their bedrooms. But not long after giving the four a real office to work in, the folks at Gremlin’s Sheffield headquarters began to realize that they should have looked before they leaped — that they couldn’t actually afford to be funding outside studios with their current revenue stream. (Such was the way of things in the topsy-turvy world of early British game development, when sober business expertise was not an overly plentiful commodity.) Rather than close the Derby branch they had barely had time to open, three Gremlin insiders — a sales executive named Jeremy Heath-Smith, the current manager of the Derby studio Greg Holmes, and the original Gremlin co-founder Kevin Norburn — cooked up a deal to take it over and run it themselves as an independent entity. They set up shop under the name of Core Design in 1988.

Over the years that followed, Core had its ups and downs: Heath-Smith bought out Holmes in 1990 and Norburn in 1992, both under circumstances that weren't entirely amicable. But the little studio had a knack for squeezing out a solid seller whenever one was really needed, such as Rick Dangerous and Chuck Rock. Although most of these games were made available for MS-DOS among other platforms, few of them had much in common with the high-concept adventure games, CRPGs, and strategy games that dominated among American developers at the time. They were rather direct descendants of 8-bit games like Bounder: fast-paced, colorful, modest in size and ambition, and shot through with laddish humor. By 1991, Core had begun porting their games to consoles like the Sega Genesis and Super Nintendo, with whose sensibilities they were perhaps a more natural fit. And indeed, the consoles soon accounted for the majority of their sales.

In late 1994, Jeremy Heath-Smith was invited to fly out to Japan to check out the two latest and greatest consoles from that country, both of which were due for a domestic Japanese release before the end of that year and an international rollout during the following one. The Sega Saturn and the Sony PlayStation were groundbreaking in a number of ways: not only did they use capacious CDs instead of cramped cartridges as their standard storage media, but they each included a graphics processing unit (GPU) for doing 3D graphics. At the time, id Software’s DOOM was in the vanguard of a 3D insurgency on personal computers, one that was sweeping away older, slower games like so much chaff in the breeze. The current generation of consoles, however, just didn’t have the horsepower to do a credible job of running games like that; they had been designed for another paradigm, that of 2D sprites moving across pixel-graphic backgrounds. The Saturn and the PlayStation would change all that, allowing the console games that constituted 80 to 90 percent of the total sales of digital games to join the 3D revolution as well. Needless to say, the potential payoff was huge.

Back at Core Design in Derby, Heath-Smith told everyone what he had seen in Japan, then asked for ideas for making maximum use of the new consoles’ capabilities. A quiet 22-year-old artist and designer named Toby Gard raised his hand: “I’ve got this idea of pyramids.” You would play a dashing archaeologist, he explained, dodging traps and enemies on the trail of ancient relics in a glorious 3D-rendered environment.

It must be said that it wasn't an especially fresh or unexpected idea in the broad strokes. Raiders of the Lost Ark had been a constant gaming touchstone almost from the moment it had first reached cinemas in 1981. Core's own Rick Dangerous had been essentially the same game as the one that Gard was now proposing, albeit implemented using 2D sprites rather than 3D graphics. (Its titular hero was a veritable clone of Raiders's own Indiana Jones, right down to his trademark whip and fedora; if you didn't read the box copy, you would assume it was a licensed game.)

Still, Gard was enthusiastic, and possessed of “immense talent” in the opinion of Heath-Smith. His idea certainly had the potential to yield an exciting 3D experience, and Heath-Smith had been around long enough to know that originality in the abstract was often overrated when it came to making games that sold. He gave Tomb Raider the green light to become Core’s cutting-edge showcase for the next-generation consoles, Core’s biggest, most expensive game to date. Which isn’t to say that he could afford to make it all that big or expensive by the standards of the American and Japanese studios: a team of just half a dozen people created Tomb Raider.

The Tomb Raider team. Toby Gard is third from left, Jeremy Heath-Smith second from right. Heather Gibson was the sole woman to work on the game — which, to be fair, was one more woman than worked on most games from this period.

The game would depart in a significant way from the many run-and-gun DOOM clones on personal computers by being a bit less bloody-minded, emphasizing puzzle-solving and platforming as much as combat. The developers quickly decided that the style of gameplay they had in mind demanded that they show the player's avatar onscreen from a behind-the-back view rather than going with the first-person viewpoint of DOOM — an innovative choice at the time, albeit one that several other studios were making simultaneously, with such diverse eventual results as Fade to Black, Die Hard Trilogy, Super Mario 64, and MDK. In the beginning, though, they had no inkling that it would be Lara Croft's bum the player would be staring at for hours. The star was to be Rick Dangerous or another of his ilk — i.e., just another blatant clone of Indiana Jones.

But Heath-Smith was seasoned enough to know that that sort of thing wouldn’t fly anymore in a world in which games were becoming an ever bigger and more visible mass-media phenomenon. “You must be insane,” he said to Toby Gard as soon as he heard about his intended Indiana clone. “We’ll get sued from here to kingdom come!” He told him to go back to the drawing board — literally; he was an artist, after all — and create a more clearly differentiated character.

So, Gard sat down at his desk to see what he could do. He soon produced the first sketches of Lara — Lara Cruz, as he called her in the beginning. Gard:

Lara was based on Indiana Jones, Tank Girl, and, people always say, my sister. Maybe subconsciously she was my sister. Anyway, she was supposed to be this strong woman, this upper-class adventurer. The rules at the time were, if you’re going to make a game, make sure the main character is male and make sure he’s American; otherwise it won’t sell in America. Those were the rules coming down from the marketing men. So I thought, “Ah, I know how to fix this. I’ll make the bad guys all American and the lead character female and as British as I can make her.”

She wasn’t a tits-out-for-the-lads type of character in any way. Quite the opposite, in fact. I thought that what was interesting about her was, she was this unattainable, austere, dangerous sort of person.

Sex appeal aside, Lara was in tune with the larger zeitgeist around her in a way that few videogame characters before her could match. Gard first sketched her during the fall of 1995, when Cool Britannia and Britpop were the rages of the age in his homeland, when Oasis and Blur were trash-talking one another and vying for the top position on the charts. It was suddenly hip to be British in a way it hadn't been since the Swinging Sixties. Bands like the aforementioned made a great point of singing in their natural accents — or, some would say, an exaggerated version of same — and addressing distinctly British concerns rather than lapsing into the typical Americanisms of rock and pop music. Lara was cut from the same cloth. Gard changed her last name to "Croft" when he decided "Cruz" just wasn't British enough, and created a defiantly blue-blooded lineage for her, making her the daughter of a Lord Henshingly Croft, complete with a posh public-school accent.

Jeremy Heath-Smith was not initially impressed. “Are you insane?” he asked Gard for the second time in a month. “We don’t do girls in videogames!” But Gard could be deceptively stubborn when he felt strongly about something, and this was one of those occasions. Heath-Smith remembers Gard telling him that “she’d be bendy. She’d do things that blokes couldn’t do.” Finally, he relented. “There was this whole movement of, females can really be cool, particularly from Japan,” he says.

And indeed, Lara was first drawn with a distinctly manga sensibility. Only gradually, as Gard worked her into the actual game, did she take on a more realistic style. Comparatively speaking, of course. We’ll come back to that…

An early concept sketch of Lara Croft.

Tomb Raider was becoming ever more important for Core. In the wake of the Sega Saturn and the Sony PlayStation, the videogames industry was changing quickly, in tandem with its customers’ expectations of what a new game ought to look like; there was a lot of space on one of those shiny new CDs, and games were expected to fill it. The pressures prompted a wave of consolidations in Britain, a pooling of a previously diffuse industry’s resources in the service of fewer but bigger, slicker, more expensive games. Core actually merged twice in just a couple of years: first with the US Gold publishing label (its name came from its original business model, that of importing American games into Britain) and then with Domark, another veteran of the 1980s 8-bit scene. Domark began trading under the name of Eidos shortly after making the deal, with Core in the role of its premier studio.

Eidos had as chairman of its board Ian Livingstone, a legend of British gaming in analog spaces: a co-founder of Games Workshop, the company behind Warhammer, and co-creator of the Fighting Fantasy line of paperback gamebooks that enthralled millions of youths during the 1980s. He went out to have a look at what Core had in the works. "I remember it was snowing," he says. "I almost didn't go over to Derby." But he did, and "I guess you could say it was love at first sight when I stepped through the door. Seeing Lara on screen."

With such a powerful advocate, Tomb Raider was elevated to the status of Eidos's showcase game for the Christmas of 1996, with a commensurate marketing budget. But that meant that it simply had to be a hit, a bigger one by far than anything Core had ever done before. And Core was getting some worrisome push-back from Eidos's American arm, expressing all the same conventional wisdom that Toby Gard had so carefully created Lara to defy: that she was too British, that the pronunciation of her first name didn't come naturally to American lips, that she was a girl, for Pete's sake. Cool Britannia wasn't really a thing in the United States; despite widespread predictions of a second musical British Invasion in the States to supersede the clapped-out Seattle grunge scene, Oasis had only partially broken through, Blur not at all, and the Spice Girls — the latest Britpop sensation — had yet to see their music even released Stateside. Eidos needed another way to sell Lara Croft to Americans.

It may have been around this time that there occurred an incident which Toby Gard would tell of frequently in the years immediately after Tomb Raider's release. He was, so the story goes, sitting at his computer tweaking his latest model of Lara when his mouse hand slipped, and her chest suddenly doubled or tripled in size. When a laughing Gard showed it to his co-workers in a "look what a silly thing I did!" sort of way, their eyes lit up and they told him to leave it that way. "The technology didn't allow us to make her [look] visually as we wanted, so it was more of a way of heightening certain things so it would give her some shape," claims Core's Adrian Smith.

Be that as it may, Eidos’s marketing team, eying that all-important American market that would make or break this game that would make or break their company, saw an obvious angle to take. They plastered Lara, complete with improbably huge breasts and an almost equally bulbous rear end, all over their advertising. “Sometimes, having a killer body just isn’t enough,” ran a typical tagline. “Hey, what’s a little temptation? Especially when everything looks this good. In the game, we mean.” As for the enemies Lara would have to kill, “Not everyone sees a bright light just before dying. Lucky stiffs.” (The innuendo around Lara was never subtle…)

This, then, was the way that Lara Croft greeted the public when her game dropped in the fall of 1996. And Toby Gard hated it. Giving every indication of having half fallen in love with his creation, he took the tarting up she was receiving at the hands of Eidos's marketers badly. He saw them rather as a young man might the underworld impresario who had convinced his girlfriend — or his sister? — to become a stripper. A suggestion that reached Core's offices to include a cheat code to remove Lara's clothing entirely was, needless to say, not well-received by Gard. "It's really weird when you see a character of yours doing these things," he says. "I've spent my life drawing pictures of things — and they're mine, you know?"

But of course they weren’t his. As is par for the course in the games industry, Gard automatically signed over all of the rights to everything he made at Core just as soon as he made it. He was not the final arbiter of what Lara did — or what was done to her – from here on out. So, he protested the only way he knew how: he quit.

Jeremy Heath-Smith, whose hardheaded businessman’s view of the world was the polar opposite of Gard’s artistic temperament, was gobsmacked by the decision.

I just couldn’t believe it. I remember saying, “Listen, Toby, this game’s going to be huge. You’re on a commission for this, you’re on a bonus scheme, you’re going to make a fortune. Don’t leave. Just sit here for the next two years. Don’t do anything. You’ll make more money than you’ve ever seen in your life.” I’m not arty, I’m commercial. I couldn’t understand his rationale for giving up millions of pounds for some artistic bloody stand. I just thought it was insanity.

Heath-Smith’s predictions of Tomb Raider‘s success — and with them the amount of money Gard was leaving on the table — came true in spades.

Suspecting every bit as strongly as Heath-Smith that they had a winner on their hands, Eidos had already flown a lucky flock of reporters all the way to Egypt in August of 1996 to see Tomb Raider in action for the first time, with the real Pyramids of Giza as a backdrop. By now, the Sega Saturn and the Sony PlayStation had been out for a year in North America and Europe, with the PlayStation turning into by far the bigger success, thanks both to Sony’s superior marketing and a series of horrific unforced errors on Sega’s part. Nevertheless, Tomb Raider appeared first on the Saturn, thanks to a deal Eidos had inked which promised Sega one precious month of exclusivity in return for a substantial cash payment. Rather than reviving the fortunes of Sega’s moribund console, Tomb Raider on the Saturn wound up serving mostly as a teaser for the PlayStation and MS-DOS versions that everyone knew were waiting in the wings.

The game still has qualities to recommend it today, although it certainly does show its age in some senses as well. The plot is barely comprehensible, a sort of Mad Libs of Raiders of the Lost Ark, conveyed in fifteen minutes' worth of cut scenes full of pseudo-mystical claptrap. The environments themselves, however, are possessed of a windy grandeur that requires no exposition, with vistas that can still cause you to pull up short from time to time. If nothing else, Tomb Raider makes a nice change of pace from the blood-splattered killing fields of the DOOM clones. In the first half of the game, combat is mostly with wildlife, and is relatively infrequent. You'll spend more of your time working out the straightforward but satisfying puzzles — locked doors and hidden keys, movable boulders waiting to be turned into staircases, that sort of thing — and navigating vertigo-inducing jumps. In this sense and many others, Tomb Raider is more of an heir to the fine old British tradition of 8-bit action-adventures than it is to the likes of DOOM. Lara is quite an acrobat, able to crouch and spring, flip forward and backward and sideways, swim, climb walls, grab ledges, and when necessary shoot an arsenal of weapons that expands in time to include shotguns and Uzis alongside her iconic twin thigh-holstered pistols.

Amidst all the discussion of Lara Croft’s appearance, a lot of people failed to notice the swath she cuts through some of the world’s most endangered species of wildlife. “The problem is that any animal that’s dangerous to humans we’ve already hunted to near extinction,” said Toby Gard. “Maybe we should have used non-endangered, harmless animals. Then you’d be asking me, ‘Why was Lara shooting all those nice bunnies and squirrels?’ You can’t win, can you?”

Unfortunately, Tomb Raider increasingly falls prey to its designers' less worthy instincts in its second half. As the story ups the stakes from just a treasure-hunting romp to yet another world-threatening videogame conspiracy, the environments grow less coherent and more nonsensical in rhythm with it, until Lara is battling hordes of mutant zombies inside what appears for all the world to be a pyramid made out of flesh and blood. And the difficulty increases to match, until gameplay becomes a matter of die-and-die-again until you figure out how to get that one step further, then rinse and repeat. This is particularly excruciating on the console versions, which strictly ration their save points. (The MS-DOS version, on the other hand, lets you save any time you like, which eases the pain considerably.) The final gauntlet you must run to escape from the last of the fifteen levels is absolutely brutal, a long series of tricky, non-intuitive moves that you have to time exactly right to avoid instant death, an exercise in rote yet split-second button mashing to rival the old Dragon's Lair game. It's no mystery why Tomb Raider ended up like this: its amount of content is limited, and it needed to stretch its playing time to justify a price tag of $50 or more. Still, it's hard not to think wistfully about what a wonderful little six- or seven-hour game it might have become under other circumstances, if it hadn't needed to fill fifteen or twenty hours instead.

Tomb Raider's other weaknesses are also in the predictable places for a game of this vintage, a time when designers were still trying to figure out how to make this style of game playable. ("Everyone is sitting down and realizing that it's bloody hard to design games for 3D," said Peter Molyneux in a contemporaneous interview.) The controls can be a little awkward, what with the way they keep changing depending on what Lara's actually up to. Ditto the distractingly flighty camera through which you view Lara and her environs, which can be uncannily good at finding exactly the angle you least want. Then, too, in the absence of a good auto-map or clear line of progression through each level, you might sometimes find orientation to be at least as much a challenge as any of the other, more deliberately placed obstacles to progress.

Games would slowly get better at this sort of thing, but it would take time, and it’s not really fair to scold Tomb Raider overmuch for failings shared by virtually all of the 3D action games of 1996. Tomb Raider is never less than a solidly executed game, and occasionally it becomes an inspired one; your first encounter with a Tyrannosaurus Rex (!) in a lost Peruvian valley straight out of Arthur Conan Doyle remains as shocking and terrifying today as it ever was.

As a purely technical feat, meanwhile, Tomb Raider was amazing in its day from first to last. The levels were bigger than any that had yet been seen outside the 2.5D Star Wars shooter Dark Forces. In contrast to DOOM and its many clones, in contrast even to id's latest 3D extravaganza Quake, Tomb Raider stood out as its own unique thing, and not just because of its third-person behind-the-back perspective. It just had a bit more finesse about it all the way around. Those other games all relied on big bazooka-toting lunks with physiques that put Arnold Schwarzenegger to shame. Even with those overgrown balloons on her chest, Lara managed to be lithe, nimble, potentially deadly in a completely different way. DOOM and Quake were a carpet-bombing attack; she was a precision-guided missile.

Sex appeal and genuinely innovative gameplay and technology all combined to make Lara Croft famous. Shelley Blond, who voiced Lara’s sharply limited amount of dialog in the game, tells of wandering into a department store on a visit to Los Angeles, and seeing “an enormous cutout of Lara Croft. Larger than live-size.” She made the mistake of telling one of the staff who she was, whereupon she was mobbed like a Beatle in 1964: “I was bright red and shaking. They all wanted pictures, and that was when I thought, ‘Shit, this is huge!'”

In a landmark moment for the coming out of videogames as a force in mainstream pop culture, id Software had recently convinced the hugely popular industrial-rock band Nine Inch Nails to score Quake. But that was nothing compared to the journey that Lara Croft now made in the opposite direction, from the gaming ghetto into the mainstream. She appeared on the cover of the fashion magazine The Face: “Occasionally the camera angle allows you a glimpse of her slanted brown eyes and luscious lips, but otherwise Lara’s always out ahead, out of reach, like the perfect girl who passes in the street.” She was the subject of feature articles in Time, Newsweek, and Rolling Stone. Her name got dropped in the most unlikely places. David James, the star goalkeeper for the Liverpool football club, said he was having trouble practicing because he’d rather be playing Tomb Raider. Rave-scene sensations The Prodigy used their addiction to the game as an excuse for delaying their new album. U2 commissioned huge images of her to show on the Jumbotron during their $120 million Popmart tour. She became a spokeswoman for the soft drink Lucozade and for Fiat cars, was plastered across mouse pads, CD-wallets, and lunch boxes. She became a kids’ action figure and the star of her own comic book. It really was as if people thought she was an actual person; journalists clamored to “interview” her, and Eidos was buried in fan mail addressed to her. “This was like the golden goose,” says Heath-Smith. “You don’t think it’s ever going to stop laying. Everything we touched turned gold. It was just a phenomenon.” Already in 1997, negotiations began for an eventual Tomb Raider feature film.

Most of all, Lara was the perfect mascot for the PlayStation. Sony’s most brilliant marketing stroke of all had been to pitch their console toward folks in their late teens and early twenties rather than children and adolescents, thereby legitimizing gaming as an adult pursuit, something for urban hipsters to do before and/or after an evening out at the clubs. (It certainly wasn’t lost on Sony that this older demographic tended to have a lot more disposable income than the younger ones…) Lara may have come along a year too late for the PlayStation launch, but better late than never. What hipster videogaming had been missing was its very own It Girl. And now it had her. Tomb Raider sold seven and a half million copies, at least 80 percent of them on the PlayStation.

That said, it did very well for itself on computers as well, especially after Core posted on their website a patch to make the game work with the new 3Dfx Voodoo chipset for hardware-accelerated 3D graphics on that platform. Tomb Raider drove the first wave of Voodoo adoption; countless folks woke up to find a copy of the game alongside a shiny new graphics card under the tree that Christmas morning. Eidos turned a £2.6 million loss in 1996 into a £14.5 million profit in 1997, thanks entirely to Lara. “Eidos is now the house that Lara built,” wrote Newsweek magazine.

There followed the inevitable sequels, which kept Lara front and center through the balance of the 1990s and beyond: Tomb Raider II in 1997, Tomb Raider III in 1998, Tomb Raider: The Last Revelation in 1999, Tomb Raider: Chronicles in 2000. These games were competently done for the most part, but didn’t stretch overmuch the template laid down by the first one; even the forthrightly non-arty Jeremy Heath-Smith admits that “we sold our soul” to keep the gravy train running, to make sure a new Tomb Raider game was waiting in stores each Christmas. Just as the franchise was starting to look a bit tired, with each successive game posting slowly but steadily declining sales numbers, the long-in-the-works feature film Lara Croft: Tomb Raider arrived in 2001 to bring her to a whole new audience and ensure that she became one of those rare pop-culture perennials.

By this time, a strong negative counter-melody had long been detectable underneath the symphony of commercial success. A lot of people — particularly those who weren’t quite ready to admit videogames into the same halls of culture occupied by music, movies, and books — had an all too clear image of who played Tomb Raider and why. They pictured a pimply teenage boy or a socially stunted adult man sitting on the couch in his parents’ basement with one hand on a controller and another in his pants, gazing in slack-jawed fascination at Lara’s gyrating backside, perhaps with just a trace of drool running down his spotty chin. And it must be admitted that some of Lara’s biggest fans didn’t do much to combat this image: the site called Nude Raider, which did what Toby Gard had refused to do by patching a naked version of Lara into the game, may just have been the most pathetic thing on the Internet circa 1997.

But other fans leaped to Lara’s defense as something more than just the world’s saddest masturbation aid. She was smart, she was strong, she was empowered, they said, everything feminist critics had been complaining for years that most women in games were not.

The problem, answered Lara’s detractors, was that she was still all too obviously crafted for the male gaze. She was, in other words, still a male fantasy at bottom, and not a terribly mature one at that, looking as she did like something a horny teenager who had yet to lay hands on a real girl might draw in his notebook. Her proportions — proudly announced by Eidos as 34D-24-35 — were obtainable by virtually no real woman, at least absent the services of a plastic surgeon. “If you genetically engineered a Lara-shaped woman,” noted PC Gaming World‘s (female) reviews editor Cal Jones, “she would die within around fifteen seconds, since there’s no way her tiny abdomen could house all her vital organs.” Violet Berlin, a popular technology commentator on British television, called Lara “a ’70s throwback from the days when pouting lovelies were always to be found propped up against any consumer icon advertised for men.”

Everyone was right in her or his own way, of course. Lara Croft truly was different from the videogame bimbos of the past, and the fact that millions of boys were lining up to become her — or at least to control her — was progress of some sort. But still… as soon as you looked at her, you knew which gender had drawn her. Even Toby Gard, who had given up millions in a purely symbolic protest against the way his managers wished to exploit her, talked about her in ways that were far from free of male gazing — that could start to sound, if we’re being honest, just a little bit creepy.

Lara was designed to be a tough, self-reliant, intelligent woman. She confounds all the sexist clichés apart from the fact that she’s got an unbelievable figure. Strong, independent women are the perfect fantasy girls — the untouchable is always the most desirable.

Some feminist linguists would doubtless make much of the unconscious slip from “women” to “girls” in this comment…

The Lara in the games was rather a cipher in terms of personality, which worked to her benefit in the mass media. She could easily be re-purposed to serve as anything from a feminist hero to a sex kitten, depending on what was needed at that juncture.

For every point there was a counterpoint. Some girls and women saw Lara as a sign of progress, even as an aspirational figure. Others saw her only as one more stereotype of female perfection created by and for males, one to which they could never hope to measure up. “It’s a well-known fact that most [male] youngsters get their first good look at the female anatomy through porn mags, and come away thinking women have jutting bosoms, airbrushed skin, and neatly trimmed body hair,” said Cal Jones. “Now, thanks to Lara, they also think women are super fit, agile gymnasts with enough stamina to run several marathons back to back. Cheers.”

On the other hand, the same male gamers had for years been seeing images of almost equally unattainable masculine perfection on their screens, all bulging biceps and chiseled abs. How was this different? Many sensed that it was different, somehow, but few could articulate why. Michelle Goulet of the website Game Girlz perhaps said it best: Lara was “the man’s ideal image of a girl, not a girl’s ideal image of a girl.” The inverse was not true of all those warrior hunks: they were “based on the body image that is ideal to a lot of guys, not girls. They are nowhere near my ideal man.” The male gaze, that is to say, was the arbiter in both cases. What to do about it? Goulet had some interesting suggestions:

My thoughts on this matter are pretty straightforward. Include females in making female characters. Find out what the ideal female would be for both a man and a woman and work with that. Respect the females the same as you would the males.

Respecting the female characters is hard when they look like strippers with guns and seem to be nothing more than an erection waiting to happen. Believing that the industry in general respects females is hard when you see ads with women tied up on beds. In my opinion, respect is what most girls are after, and I feel that if the gaming community had more respect for their female characters they would attract the heretofore elusive female market. This doesn’t mean that girls in games have to be some kind of new butch race. Femininity is a big part of being female. This means that girls should be girls. Ideal body images and character aspects that are ideal for females, from a female point of view. I would be willing to bet that guys would find these females more attractive than the souped-up bimbos we are used to seeing. If sexuality is a major selling point, and a major attraction for the male gamer, then, fine, throw in all the sexuality you want, but doing so should not preclude respect for females.

To sum up, I have to say I think the gaming industry should give guys a little more credit, and girls a lot more respect, and I hope this will move the tide in that direction.

I'm happy to say that the tide has indeed moved in that direction for Lara Croft at least since Michelle Goulet wrote those words in the late 1990s. It began in a modest way with that first Tomb Raider movie in 2001. Although Angelina Jolie wore prosthetic breasts when she played Lara, it was impossible to recreate the videogame character's outlandish proportions in their entirety. In order to maintain continuity with that film and a second one that came out in 2003, the Tomb Raider games of the aughts modeled their Laras on Jolie, resulting in a slightly more realistic figure. Then, too, Toby Gard returned to the franchise to work on 2007's Tomb Raider: Anniversary and 2008's Tomb Raider: Underworld, bringing some of his original vision of Lara with him.

But the real shift came when the franchise, which was once again fading in popularity by the end of the aughts, was rebooted in 2013, with a game that called itself simply Tomb Raider. Instead of pendulous breasts and booty mounted on spaghetti-thin legs and torso, it gave us a fit, toned, proportional Lara, a woman who looked like she had spent a lot of time and money at the local fitness center instead of the plastic surgeon’s office. If you ask this dirty old male gazer, she’s a thousand times more attractive than the old Lara, even as she’s a healthy, theoretically attainable ideal for a young woman who’s willing to put in some hard hours at the gym. This was proved by Alicia Vikander, the star of a 2018 Tomb Raider movie, the third and last to date; she looked uncannily like the latest videogame Lara up there on the big screen, with no prosthetics required.

Bravo, I say. If the original Lara Croft was a sign of progress in her way, the latest Lara is a sign that progress continued. If you were to say the new Lara is the one we should have had all along — within the limits of what the technology of the time would allow, of course — I wouldn’t argue with you. But still… better late than never.






(Sources: The books Grand Thieves and Tomb Raiders: How British Video Games Conquered the World by Magnus Anderson and Rebecca Levene; From Barbie to Mortal Kombat: Gender and Computer Games, edited by Justine Cassell and Henry Jenkins; Beyond Barbie and Mortal Kombat: New Perspectives on Gender and Gaming, edited by Yasmin B. Kafai, Carrie Heeter, Jill Denner, and Jennifer Y. Sun; Gender Inclusive Game Design: Expanding the Market by Sheri Graner Ray; The Making of Tomb Raider by Daryl Baxter; 20 Years of Tomb Raider: Digging Up the Past, Defining the Future by Meagan Marie; and A Gremlin in the Works by Mark James Hardisty. Computer Gaming World of August 1996, October 1996, January 1997, March 1997, and November 1997; PC Powerplay of July 1997; Next Generation of May 1996, October 1996, and June 1998; The Independent of April 18, 2004; Retro Gamer 20, 147, 163, and 245. Online sources include three pieces for the Game Studies journal, by Helen W. Kennedy, Janine Engelbrecht, and Esther MacCallum-Stewart. Plus two interviews with Toby Gard, by The Guardian's Greg Howson and Game Developer's David Jenkins.

The first three Tomb Raider games are available as digital purchases at GOG.com, as are the many games that followed those three.)

 


The Next Generation in Graphics, Part 3: Software Meets Hardware

The first finished devices to ship with the 3Dfx Voodoo chipset inside them were not add-on boards for personal computers, but rather standup arcade machines. That venerable segment of the videogames industry was enjoying its last lease on life in the mid-1990s; this was the last era when the graphics of the arcade machines were sufficiently better than those which home computers and consoles could generate as to make it worth getting up off the couch, driving into town, and dropping a quarter or two into a slot to see them. The Voodoo chips now became part and parcel of that, ironically just before they would do much to destroy the arcade market by bringing equally high-quality 3D graphics into homes. For now, though, they wowed players of arcade games like San Francisco Rush: Extreme Racing, Wayne Gretzky’s 3D Hockey, and NFL Blitz.

Still, Gary Tarolli, Scott Sellers, and Ross Smith were most excited by the potential of the add-on-board market. All too well aware of how the chicken-or-the-egg deadlock between game makers and players had doomed their earlier efforts with Pellucid and Media Vision, they launched an all-out charm offensive among game developers long before they had any actual hardware to show them. Smith goes so far as to call “connecting with the developers early on and evangelizing them” the “single most important thing we ever did” — more important, that is to say, than designing the Voodoo chips themselves, impressive as they were. Throughout 1995, somebody from 3Dfx was guaranteed to be present wherever developers got together to talk among themselves. While these evangelizers had no hardware as yet, they did have software simulations running on SGI workstations — simulations which, they promised, duplicated exactly the capabilities the real chips would have when they started arriving in quantity from Taiwan.

Our core trio realized early on that their task must involve software as much as hardware in another, more enduring sense: they had to make it as easy as possible to support the Voodoo chipset. In my previous article, I mentioned how their old employer SGI had created an open-source software library for 3D graphics, known as OpenGL. A team of programmers from 3Dfx now took this as the starting point of a slimmed-down, ultra-optimized MS-DOS library they called GLide; whereas OpenGL sported well over 300 individual function calls, GLide had less than 100. It was fast, it was lightweight, and it was easy to program. They had good reason to be proud of it. Its only drawback was that it would only work with the Voodoo chips — which was not necessarily a drawback at all in the eyes of its creators, given that they hoped and planned to dominate a thriving future market for hardware-accelerated 3D graphics on personal computers.

Yet that domination was by no means assured, for they were far from the only ones developing consumer-oriented 3D chipsets. One other company in particular gave every indication of being on the inside track to widespread acceptance. That company was Rendition, another small, venture-capital-funded startup that was doing all of the same things 3Dfx was doing — only Rendition had gotten started even earlier. It had actually been Rendition who announced a 3D chipset first, and they had been evangelizing it ever since every bit as tirelessly as 3Dfx.

The Voodoo chipset was technologically baroque in comparison to Rendition’s chips, which went under the name of Vérité. This meant that Voodoo should easily outperform them — eventually, once all of the logistics of East Asian chip fabricating had been dealt with and deals had been signed with board makers. In June of 1996, when the first Vérité-powered boards shipped, the Voodoo chipset quite literally didn’t exist as far as consumers were concerned. Those first Vérité boards were made by none other than Creative Labs, the 800-pound gorilla of the home-computer add-on market, maker of the ubiquitous Sound Blaster sound cards and many a “multimedia upgrade kit.” Such a partner must be counted as yet another early coup for Rendition.

The Vérité cards were followed by a flood of others whose slickly aggressive names belied their somewhat workmanlike designs: 3D Labs Permedia, S3 Virge, ATI 3D Rage, Matrox Mystique. And still Voodoo was nowhere.

What was everywhere was confusion; it was all but impossible for the poor, benighted gamer to make heads or tails of the situation. None of these chipsets were compatible with one another at the hardware level in the way that 2D graphics cards were; there were no hardware standards for 3D graphics akin to VGA, that last legacy of IBM’s era of dominance, much less the various SVGA standards defined by the Video Electronic Standards Association (VESA). Given that most action-oriented computer games still ran on MS-DOS, this was a serious problem.

For, being more of a collection of basic function calls than a proper operating system, MS-DOS was not known for its hardware agnosticism. Most of the folks making 3D chips did provide an MS-DOS software package for steering them, similar in concept to 3Dfx’s GLide, if seldom as optimized and elegant. But, just like GLide, such libraries worked only with the chipset for which they had been created. What was sorely needed was an intermediate layer of software to sit between games and the chipset-manufacturer-provided libraries, to automatically translate generic function calls into forms suitable for whatever particular chipset happened to exist on that particular computer. This alone could make it possible for one build of one game to run on multiple 3D chipsets. Yet such a level of hardware abstraction was far beyond the capabilities of bare-bones MS-DOS.
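Just to make the missing piece concrete, here is a minimal C++ sketch of what such an intermediate layer might have looked like: one generic drawing call that gets translated behind the scenes into whichever vendor's library happens to be present. The type and function names below are purely illustrative, not the real GLide or Vérité interfaces.

```cpp
#include <cstdio>

// A hypothetical triangle description; in reality each vendor's library
// had its own, mutually incompatible formats and function names.
struct Triangle { float x[3], y[3], z[3]; };

// Stubs standing in for the chipset-specific libraries of the day.
void voodooDrawTriangle(const Triangle&)   { std::puts("drawn via the Voodoo vendor library"); }
void veriteDrawTriangle(const Triangle&)   { std::puts("drawn via the Verite vendor library"); }
void softwareDrawTriangle(const Triangle&) { std::puts("drawn by the CPU in software"); }

enum class Chipset { Voodoo, Verite, None };

// The intermediate layer MS-DOS never provided: one generic call that a
// game could use everywhere, dispatched at run time to whatever
// hardware is actually installed.
void drawTriangle(Chipset chip, const Triangle& tri) {
    switch (chip) {
        case Chipset::Voodoo: voodooDrawTriangle(tri);   break;
        case Chipset::Verite: veriteDrawTriangle(tri);   break;
        case Chipset::None:   softwareDrawTriangle(tri); break;
    }
}

int main() {
    Triangle tri{};                       // a dummy triangle
    drawTriangle(Chipset::Voodoo, tri);   // the same game code...
    drawTriangle(Chipset::None, tri);     // ...regardless of the hardware underneath
}
```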

Absent a more reasonable solution, the only choice was to make separate versions of games for each of the various 3D chipsets. And so began the brief-lived, unlamented era of the 3D pack-in game. All of the 3D-hardware manufacturers courted the developers and publishers of popular software-rendered 3D games, dangling before them all sorts of enticements to create special versions that took advantage of their cards, more often than not to be included right in the box with them. Activision’s hugely successful giant-robot-fighting game MechWarrior 2 became the king of the pack-ins, with at least half a dozen different chipset-specific versions floating around, all paid for upfront by the board makers in cold, hard cash. (Whatever else can be said about him, Bobby Kotick has always been able to spot the seams in the gaming market where gold is waiting to be mined.)

It was an absurd, untenable situation; the game or games that came in the box were the only ones that the purchasers of some of the also-ran 3D contenders ever got a chance to play with their new toys. Gamers and chipset makers alike could only hope that, once Windows replaced MS-DOS as the gaming standard, their pain would go away.

In the meanwhile, the games studio that everyone with an interest in the 3D-acceleration sweepstakes was courting most of all was id Software — more specifically, id’s founder and tech guru, gaming’s anointed Master of 3D Algorithms, John Carmack. They all begged him for a version of Quake for their chipset.

And once again, it was Rendition that scored the early coup here. Carmack actually shared some of the Quake source code with them well before either the finished game or the finished Vérité chipset was available for purchase. Programmed by a pair of Rendition’s own staffers working with the advice and support of Carmack and Michael Abrash, the Vérité-rendered version of the game, commonly known as vQuake, came out very shortly after the software-rendered version. Carmack called it “the premier platform for Quake” — truly marketing copy to die for. Gamers too agreed that 3D acceleration made the original’s amazing graphics that much more amazing, while the makers of other 3D chipsets gnashed their teeth and seethed.

Quake with software rendering.

vQuake

Among these, of course, was the tardy 3Dfx. The first Voodoo cards appeared late, seemingly hopelessly so: well into the fall of 1996. Nor did they have the prestige and distribution muscle of a partner like Creative Labs behind them: the first two Voodoo boards rather came from smaller firms by the names of Diamond and Orchid. They sold for $300, putting them well up at the pricey end of the market — and, unlike all of the competition's cards, they required you to have a separate 2D graphics card in your computer as well. For all of these reasons, they seemed easy enough to dismiss as overpriced white elephants at first blush. But that impression lasted only until you got a look at them in action. The Voodoo cards came complete with a list of features that none of the competition could come close to matching in the aggregate: bilinear filtering, trilinear MIP-mapping, alpha blending, fog effects, accelerated light sources. If you don't know what those terms mean, rest assured that they made games look better and play faster than anything else on the market. This was amply demonstrated by those first Voodoo boards' pack-in title, an otherwise rather undistinguished, typical-of-its-time shooter called Hellbender. In its new incarnation, it suddenly looked stunning.

The Orchid Righteous 3D card, one of the first two to use the Voodoo chipset. (The only consumer category as fond of bro-dude phraseology like “extreme” and “righteous” as the makers of 3D cards was men’s razors.)

The battle lines were drawn between Rendition and 3Dfx. But sadly for the former, it quickly emerged that their chipset had one especially devastating weakness in comparison to its rival: its Z-buffering support left much to be desired. And what, you ask, is Z-buffering? Read on!

One of the non-obvious problems that 3D-graphics systems must solve is the need for objects in the foreground of a scene to realistically obscure those behind them. If, at the rendering stage, we were to simply draw the objects in whatever random order they came to us, we would wind up with a dog’s breakfast of overlapping shapes. We need to have a way of depth-sorting the objects if we want to end up with a coherent, correctly rendered scene.

The most straightforward way of depth-sorting is called the Painter’s Algorithm, because it duplicates the process a human artist usually goes through to paint a picture. Let’s say our artist wants to paint a still life of an apple sitting in front of a basket of other fruits. First she will paint the basket to her satisfaction, then paint the apple right over the top of it. Similarly, when we use a Painter’s Algorithm on the computer, we first sort the whole collection of objects into a hierarchy that begins with those that are farthest from our virtual camera and ends with those closest to it. Only after this has been done do we set about the task of actually drawing them to the screen, in our sorted order from the farthest away to the closest. And so we end up with a correctly rendered image.
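For the programmers in the audience, the whole approach boils down to a sort followed by a loop. Here's a minimal sketch in C++, with a made-up SceneObject type standing in for whatever a real renderer would actually draw:

```cpp
#include <algorithm>
#include <vector>

// A made-up stand-in for anything drawable, tagged with its distance
// from the virtual camera.
struct SceneObject {
    float depth;                        // distance from the camera
    void draw() const { /* rasterize this object (stub) */ }
};

// Painter's Algorithm: sort the whole scene back to front, then draw in
// that order, so that nearer objects are painted over farther ones.
void renderPainters(std::vector<SceneObject>& scene) {
    std::sort(scene.begin(), scene.end(),
              [](const SceneObject& a, const SceneObject& b) {
                  return a.depth > b.depth;   // farthest first
              });
    for (const SceneObject& obj : scene)
        obj.draw();                           // later (closer) objects overwrite earlier ones
}
```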

But, as so often happens in matters like this, the most logically straightforward way is far from the most efficient way of depth-sorting a 3D scene. When the number of objects involved is few, the Painter’s Algorithm works reasonably well. When the numbers get into the hundreds or thousands, however, it results in much wasted effort, as the computer ends up drawing objects that are completely obscured by other objects in front of them — i.e., objects that don’t really need to be drawn at all. Even more importantly, the process of sorting all of the objects by depth beforehand is painfully time-consuming, a speed bump that stops the rendering process dead until it is completed. Even in the 1990s, when their technology was in a laughably primitive stage compared to today, GPUs tended to emphasize parallel processing — i.e., staying constantly busy with multiple tasks at the same time. The necessity of sorting every object in a scene by depth before even getting properly started on rendering it rather threw all that out the window.

Enter the Z-buffer. Under this approach, every object is rendered right away as soon as it comes down the pipeline, used to build the appropriate part of the raster of colored pixels that, once completed, will be sent to the monitor screen as a single frame. But there comes an additional wrinkle in the form of the Z-buffer itself: a separate, parallel raster containing not the color of each pixel but its distance from the camera. Before the GPU adds an entry to the raster of pixel colors, it compares the distance of that pixel from the camera with the number in that location in the Z-buffer. If the current distance is less than the one already found there, it knows that the pixel in question should be overwritten in the main raster and that the Z-buffer raster should be updated with that pixel’s new distance from the camera. Ditto if the Z-buffer contains a null value, indicating no object has yet been drawn at that pixel. But if the current distance is larger than the (non-null) number already found there, the GPU simply moves on without doing anything more, confident in the knowledge that what it had wanted to draw should actually be hidden by what it has already drawn.
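Expressed as code, the per-pixel test is disarmingly simple. The following C++ sketch of the logic just described uses infinity rather than a null value to mark pixels where nothing has been drawn yet; the frame dimensions and the Pixel type are placeholders of my own, not anything from a real chipset's interface.

```cpp
#include <limits>
#include <vector>

constexpr int kWidth = 640, kHeight = 480;             // placeholder resolution

struct Pixel { unsigned char r, g, b; };

std::vector<Pixel> frameBuffer(kWidth * kHeight);       // the colors that will be sent to the screen
std::vector<float> zBuffer(kWidth * kHeight,            // distance from the camera of whatever has been
    std::numeric_limits<float>::infinity());            // drawn at each pixel; infinity = nothing yet

// Called for every pixel of every object, in whatever order the objects
// happen to come down the pipeline -- no sorting beforehand.
void plotPixel(int x, int y, float depth, Pixel color) {
    const int i = y * kWidth + x;
    if (depth < zBuffer[i]) {          // closer than anything drawn here so far?
        zBuffer[i]     = depth;        // remember the new nearest depth...
        frameBuffer[i] = color;        // ...and overwrite the color
    }
    // Otherwise the new pixel is hidden by what's already there: do nothing.
}
```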

There are plenty of occasions when the same pixel is drawn over twice — or many times — before reaching the screen even under this scheme, but it is nevertheless still vastly more efficient than the Painter’s Algorithm, because it keeps objects flowing through the pipeline steadily, with no hiccups caused by lengthy sorting operations. Z-buffering support was reportedly a last-minute addition to the Vérité chipset, and it showed. Turning depth-sorting on for 100-percent realistic rendering on these chips cut their throughput almost in half; the Voodoo chipset, by contrast, just said, “No worries!,” and kept right on trucking. This was an advantage of titanic proportions. It eventually emerged that the programmers at Rendition had been able to get Quake running acceptably on the Vérité chips only by kludging together their own depth-sorting algorithms in software. With Voodoo, programmers wouldn’t have to waste time with stuff like that.

But surprisingly, the game that blew open the doors for the Voodoo chipset wasn’t Quake or anything else from id. It was rather a little something called Tomb Raider, from the British studio Core Design, a game which used a behind-the-back third-person perspective rather than the more typical first-person view — the better to appreciate its protagonist, the buxom and acrobatic female archaeologist Lara Croft. In addition to Lara’s considerable assets, Tomb Raider attracted gamers with its unprecedentedly huge and wide-open 3D environments. (It will be the subject of my next article, for those interested in reading more about its massive commercial profile and somewhat controversial legacy.)

In November of 1996, when Tomb Raider had been out for less than a month, Core put a Voodoo patch for it up on their website. Gamers were blown away. "It's a totally new game!" gushed one on Usenet. "It was playable but a little jerky without the patch, but silky smooth to play and beautiful to look at with the patch." "The level of detail you get with the Voodoo chip is amazing!" enthused another. Or how about this for a ringing testimonial?

I had been playing the regular Tomb Raider on my PC for about two weeks before I got the patch, with about ten people seeing the game, and not really saying anything regarding how amazing it was. When I got the accelerated patch, after about four days, every single person who has seen the game has been in awe watching the graphics and how smooth [and] lifelike the movement is. The feel is different, you can see things much more clearly, it's just a more enjoyable game now.

Tomb Raider became the biggest hit of the 1996 holiday season, and tens if not hundreds of thousands of Voodoo-based 3D cards joined it under Christmas trees.

Tomb Raider with software rendering.

Tomb Raider with a Voodoo card.

In January of 1997, id released GLQuake, a new version of that game that supported the Voodoo chipset. In telling contrast to the Vérité-powered vQuake, which had been coded by Rendition’s programmers, GLQuake had been taken on by John Carmack as a personal project. The proof was in the pudding; this Quake ran faster and looked better than either of the previous ones. Running on a machine with a 200 MHz Intel Pentium processor and a Voodoo card, GLQuake could manage 70 frames per second, compared to 41 frames for the software-rendered version, whilst appearing much more realistic and less pixelated.

GLQuake

One last stroke of luck put the finishing touch on 3Dfx's destiny of world domination: the price of memory dropped precipitously, thanks to a number of new RAM-chip factories that came online all at once in East Asia. (The factories had been built largely to feed the memory demands of Windows 95, the straw that was stirring the drink of the entire computer industry.) The Voodoo chipset required 4 MB of memory to operate effectively — an appreciable quantity in those days, and a big reason why the cards that used it tended to cost almost twice as much as those based on the Vérité chips, despite lacking the added complications and expense of 2D support. But with the drop in memory prices, it suddenly became practical to sell a Voodoo card for under $200. Rendition could also lower their prices somewhat thanks to the memory windfall, of course, but at these lower price points the dollar difference wasn't as damaging to 3Dfx. After all, the Voodoo cards were universally acknowledged to be the class of the industry. They were surely worth paying a little bit of a premium for. By the middle of 1997, the Voodoo chipset was everywhere, the Vérité one left dead at the side of the road. "If you want full support for a gamut of games, you need to get a 3Dfx card," wrote Computer Gaming World.

These were heady times at 3Dfx, which had become almost overnight the most hallowed name in hardcore action gaming outside of id Software, all whilst making an order of magnitude more money than id, whose business model under John Carmack was hardly fine-tuned to maximize revenues. In a comment he left recently on this site, reader Captain Kal said that, when it comes to 3D gaming in the late 1990s, “one company springs to my mind without even thinking: 3Dfx. Yes, we also had 3D solutions from ATI, NVIDIA, or even S3, but Voodoo cards created the kind of dedication that I hadn’t seen since the Amiga days.” The comparison strikes me as thoroughly apropos.

3Dfx brought in a high-profile CEO named Greg Ballard, formerly of Warner Music and the videogame giant Capcom, to oversee a smashingly successful initial public offering in June of 1997. He and the three thirty-something founders were the oldest people at the company. “Most of the software engineers were [in their] early twenties, gamers through and through, loved games,” says Scott Sellers. “Would code during the day and play games at night. It was a culture of fun.” Their offices stood at the eighth hole of a golf course in Sunnyvale, California. “We’d sit out there and drink beer,” says Ross Smith. “And you’d have to dodge incoming golf balls a bit. But the culture was great.” Every time he came down for a visit, says their investing angel Gordon Campbell,

they’d show you something new, a new demo, a new mapping technique. There was always something. It was a very creative environment. The work hard and play hard thing, that to me kind of was Silicon Valley. You went out and socialized with your crew and had beer fests and did all that kind of stuff. And a friendly environment where everybody knew everybody and everybody was not in a hierarchy so much as part of the group or the team.

I think the thing that was added here was, it’s the gaming industry. And that was a whole new twist on it. I mean, if you go to the trade shows, you’d have guys that would show up at our booth with Dracula capes and pointed teeth. I mean, it was just crazy.

Gary Tarolli, Scott Sellers, and Greg Ballard do battle with a dangerous houseplant. The 1990s were wild and crazy times, kids…

While the folks at 3Dfx were working hard and playing hard, an enormously consequential advancement in the field of software was on the verge of transforming the computer-games industry. As I noted previously, in 1996 most hardcore action games were still being released for MS-DOS. In 1997, however, that changed in a big way. With the exception of only a few straggling Luddites, game developers switched over to Windows 95 en masse. Quake had been an MS-DOS game; Quake II, which would ship at the end of 1997, ran under Windows. The same held true for the original Tomb Raider and its 1997 sequel, as it did for countless others.

Gaming was made possible on Windows 95 by Microsoft’s DirectX libraries, which finally let programmers do everything in Windows that they had once done in MS-DOS, with only a slight speed penalty if any, all while giving them the welcome luxury of hardware independence. That is to say, all of the fiddly details of disparate video and sound cards and all the rest were abstracted away into Windows device drivers that communicated automatically with DirectX to do the needful. It was an enormous burden lifted off of developers’ shoulders. Ditto gamers, who no longer had to futz about for hours with cryptic “autoexec.bat” and “config.sys” files, searching out the exact combination of arcane incantations that would allow each game they bought to run optimally on their precise machine. One no longer needed to be a tech-head simply to install a game.

In its original release of September 1995, the full DirectX suite consisted of DirectDraw for 2D pixel graphics, DirectSound for sound and music, DirectInput for managing joysticks and other game-centric input devices, and DirectPlay for networked multiplayer gaming. It provided no support for doing 3D graphics. But never fear, Microsoft said: 3D support was coming. Already in February of 1995, they had purchased a British company called RenderMorphics, the creator of Reality Lab, a hardware-agnostic 3D library. As promised, Microsoft added Direct3D to the DirectX collection with the latter’s 2.0 release, in June of 1996.
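To give a sense of what this new world looked like from the programmer’s chair, here is a minimal sketch of a game claiming a full-screen DirectDraw display. It is a simplified illustration rather than code from any actual game of the era (the window handle comes from elsewhere, and the error handling is skeletal), but the salient point survives the simplification: nothing in it names a particular video card. The installed driver fills in those details on the game’s behalf.

```cpp
#include <windows.h>
#include <ddraw.h>

// Ask DirectDraw for an exclusive full-screen 640x480 display with one
// flippable back buffer. Whatever video card happens to be installed, its
// Windows driver does the real work; the game never has to know which chip
// it is talking to.
bool InitDisplay(HWND hwnd)
{
    LPDIRECTDRAW dd = NULL;
    if (FAILED(DirectDrawCreate(NULL, &dd, NULL)))
        return false;

    dd->SetCooperativeLevel(hwnd, DDSCL_EXCLUSIVE | DDSCL_FULLSCREEN);
    dd->SetDisplayMode(640, 480, 16);   // resolution and color depth

    DDSURFACEDESC desc;
    ZeroMemory(&desc, sizeof(desc));
    desc.dwSize = sizeof(desc);
    desc.dwFlags = DDSD_CAPS | DDSD_BACKBUFFERCOUNT;
    desc.ddsCaps.dwCaps = DDSCAPS_PRIMARYSURFACE | DDSCAPS_FLIP | DDSCAPS_COMPLEX;
    desc.dwBackBufferCount = 1;

    LPDIRECTDRAWSURFACE primary = NULL;
    return SUCCEEDED(dd->CreateSurface(&desc, &primary, NULL));
}
```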

But, as the noted computer scientist Andrew Tanenbaum once said, “the nice thing about standards is that you have so many to choose from.” For the next several years, Direct3D would compete with another library serving the same purpose: a complete, hardware-agnostic Windows port of SGI’s OpenGL, whose most prominent booster was no less leading a light than John Carmack. Direct3D would largely win out in the end among game developers despite Carmack’s endorsement of its rival, but we need not concern ourselves overmuch with the details of that tempest in a teacup here. Suffice to say that even the most bitter partisans on one side of the divide or the other could usually agree that both Direct3D and OpenGL were vastly preferable to the bad old days of chipset-specific 3D games.
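What that hardware agnosticism bought developers is easy to illustrate. The fragment below is a deliberately simple sketch rather than code from any real game: it draws one shaded triangle in the immediate-mode OpenGL style of the period, and the very same calls run unchanged whether the driver underneath is a software renderer, the cut-down “MiniGL” driver that 3Dfx shipped for Voodoo owners, or a full OpenGL implementation for some other card entirely.

```cpp
#include <GL/gl.h>

// Draw a single shaded triangle, 1997-style immediate mode. The OpenGL
// driver underneath decides how the work actually gets done in hardware.
void DrawFrame()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glBegin(GL_TRIANGLES);
    glColor3f(1.0f, 0.0f, 0.0f); glVertex3f(-0.5f, -0.5f, 0.0f);
    glColor3f(0.0f, 1.0f, 0.0f); glVertex3f( 0.5f, -0.5f, 0.0f);
    glColor3f(0.0f, 0.0f, 1.0f); glVertex3f( 0.0f,  0.5f, 0.0f);
    glEnd();

    // Under Windows, SwapBuffers() on the window's device context would
    // then put the finished frame onscreen.
}
```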

Unfortunately, 3Dfx, rather feeling their oats after all of their success, responded to these developments with the first of a series of bad decisions that would cause their time at the top of the 3D-graphics heap to be a relatively short one.

Like all of the others, the Voodoo chipset could be used under Windows with either Direct3D or OpenGL. But there were some features on the Voodoo chips that the current implementations of those libraries didn’t support. 3Dfx was worried, reasonably enough on the face of it, about a “least-common-denominator effect” which would cancel out the very real advantages of their 3D chipset and make one example of the breed more or less as good as any other. However, instead of working with the folks behind Direct3D and OpenGL to get support for the Voodoo chips’ special features into those libraries, they opted to release a Windows version of GLide, and to strongly encourage game developers to keep working with it instead of either of the more hardware-agnostic alternatives. “You don’t want to just have a title 80 percent as good as it could be because your competitors are all going to be at 100 percent,” they said pointedly. They went so far as to start speaking of Voodoo-equipped machines as a whole new platform unto themselves, separate from more plebeian personal computers.
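For contrast, here is roughly what the GLide path looked like, sketched from memory of the old GLide 2.x SDK; treat the exact names and arguments as approximations rather than gospel. Note how the very first order of business is to seize a 3Dfx board directly, with no pretense of running on anything else.

```cpp
#include <glide.h>

// Open the first Voodoo board for rendering, GLide 2.x-style. A rough
// sketch from memory of the SDK; exact signatures may differ slightly.
void RunOnVoodoo(void)
{
    grGlideInit();                 // start up the GLide library
    grSstSelect(0);                // pick the first 3Dfx subsystem found

    // A 640x480 full-screen context, double-buffered, on the Voodoo itself.
    grSstWinOpen(0,
                 GR_RESOLUTION_640x480,
                 GR_REFRESH_60Hz,
                 GR_COLORFORMAT_ABGR,
                 GR_ORIGIN_UPPER_LEFT,
                 2, 1);

    grBufferClear(0, 0, 0);        // clear color, alpha, and depth
    // ... grDrawTriangle() calls would build the scene here ...
    grBufferSwap(1);               // show the frame, waiting for one vsync

    grGlideShutdown();
}
```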

It was the talk and actions of a company that had begun to take its own press releases a bit too much to heart. But for a time 3Dfx got away with it. Developers coded for GLide in addition to or instead of Direct3D or OpenGL, because you really could do a lot more with it and because the cachet of the “certified” 3Dfx logo that using GLide allowed them to put on their boxes really was huge.

In March of 1998, the first cards with a new 3Dfx chipset, known as Voodoo2, began to appear. Voodoo2 boasted twice the overall throughput of its predecessor, and could handle a screen resolution of 800 × 600 instead of just 640 × 480; you could even join two of the new cards together for still better performance and higher resolutions. This latest chipset only seemed to cement 3Dfx’s position as the class of their field.

The bottom line reflected this. 3Dfx was, in the words of their new CEO Greg Ballard, “a rocket ship.” In 1995, they earned $4 million in revenue; in 1996, $44 million; in 1997, $210 million; and in 1998, their peak year, $450 million. And yet their laser focus on selling the Ferraris of 3D acceleration was blinding Ballard and his colleagues to the potential of 3D Toyotas, where the biggest money of all was waiting to be made.

Over the course of the second half of the 1990s, 3D GPUs went from being exotic pieces of kit known only to hardcore gamers to being just another piece of commodity hardware found in almost all computers. 3Dfx had nothing to do with this significant shift. Instead they all but ignored this so-called “OEM” (“Original Equipment Manufacturer”) side of the GPU equation: chipsets that weren’t the hottest or the sexiest on the market, but that were cheap and easy to solder right onto the motherboards of low-end and mid-range machines bearing such unsexy name plates as Compaq and Packard Bell. Ironically, Gordon Campbell had made a fortune with Chips & Technologies selling just such commodity-grade 2D graphics chipsets. But 3Dfx was obstinately set on flying above the OEM segment, determined to offer “premium” products only. “It doesn’t matter if 20 million people have one of our competitors’ chips,” said Scott Sellers in 1997. “How many of those people are hardcore gamers? How many of those people are buying games?” “I can guarantee that 100 percent of 3Dfx owners are buying games,” chimed in a self-satisfied-sounding Gary Tarolli.

The obvious question to ask in response was why it should matter to 3Dfx how many games — or what types of games — the users of their chips were buying, as long as they were buying gadgets that contained their chips. While 3Dfx basked in their status as the hardcore gamer’s favorite, other companies were selling many more 3D chips, admittedly at much less of a profit on a chip-per-chip basis, at the OEM end of the market. Among these was a firm known as NVIDIA, which had been founded on the back of a napkin in a Denny’s diner in 1993. NVIDIA’s first attempt to compete head to head with 3Dfx at the high end was underwhelming at best: released well after the Voodoo2 chipset, the RIVA TNT ran so hot that it required a noisy onboard cooling fan, and yet still couldn’t match the Voodoo2’s performance. By that time, however, NVIDIA was already building a lucrative business out of cheaper, simpler chips on the OEM side, even as they were gaining the wisdom they would need to mount a more credible assault on the hardcore-gamer market. In late 1998, 3Dfx finally seemed to be waking up to the fact that they would need to reach beyond the hardcore to continue their rise: they released a new chipset called Voodoo Banshee, which wasn’t quite as powerful as the Voodoo2 chips but could do conventional 2D as well as 3D graphics, meaning its owners would not be forced to buy a second video card just in order to use their computers.

But sadly, they followed this step forward with an absolutely disastrous mistake. You’ll remember that prior to this point 3Dfx had sold their chips only to other companies, who then incorporated them into add-on boards of their own design, in the same way that Intel sold microprocessors to computer makers rather than directly to consumers (aside from the build-your-own-rig hobbyists, that is). This business model had made sense for 3Dfx when they were cash-strapped and hadn’t a hope of building retail-distribution channels equal to those of the established board makers. Now, though, they were flush with cash, and enjoyed far better name recognition than the companies that made the boards which used their chips; even the likes of Creative Labs, who had long since dropped Rendition and were now selling plenty of 3Dfx boards, couldn’t touch them in terms of prestige. Why not cut out all these middlemen by manufacturing their own boards using their own chips and selling them directly to consumers with only the 3Dfx name on the box? They decided to do exactly that with their third state-of-the-art 3D chipset, the predictably named Voodoo3, which was ready in the spring of 1999.

Those famous last words apply: “It seemed like a good idea at the time.” With the benefit of hindsight, we can see all too clearly what a terrible decision it actually was. The move into the board market became, says Scott Sellers, the “anchor” that would drag down the whole company in a rather breathtakingly short span of time: “We started competing with what used to be our own customers” — i.e., the makers of all those earlier Voodoo boards. Then, too, 3Dfx found that the logistics of selling a polished consumer product at retail, from manufacturing to distribution to advertising, were much more complex than they had reckoned with.

Still, they might — just might — have been able to figure it all out and make it work, if only the Voodoo3 chipset had been a bit better. As it was, the new chipset was an upgrade to be sure, but not quite as much of one as everyone had been expecting. In fact, some now began to point out that even the Voodoo2 chips hadn’t been that great a leap: they too were better than their predecessors, yes, but that was more down to ever-falling memory prices and ever-improving chip-fabrication technologies than any groundbreaking innovations in their fundamental designs. It seemed that 3Dfx had started to grow complacent some time ago.

NVIDIA saw their opening and made the most of it. They introduced a new line of their own, called the TNT2, which outdid its 3Dfx competitor in at least one key metric: it could do 24-bit color, giving it almost 17 million shades of onscreen nuance, compared to the just over 65,000 afforded by the Voodoo3’s 16-bit output. For the first time, 3Dfx’s chips were not the unqualified, undisputed technological leaders. To make matters worse, NVIDIA had been working closely with Microsoft in exactly the way that 3Dfx had never found it in their hearts to do, ensuring that every last feature of their chips was well-supported by the increasingly dominant Direct3D libraries.

And then, as the final nail in the coffin, there were all those third-party board makers 3Dfx had so rudely jilted when they decided to take over that side of the business themselves. These had nowhere left to go but into NVIDIA’s welcoming arms. And needless to say, these business partners spurned were highly motivated to make 3Dfx pay for their betrayal.

NVIDIA was on a roll now. They soon came out with yet another new chipset, the GeForce 256, which had a “Transform & Lighting” (T&L) engine built in, meaning the graphics chip could now take over the geometry and lighting calculations that had previously been done by the CPU, a major conceptual advance. And again, the new technology was accessible right from the start through Direct3D, thanks to NVIDIA’s tight relationship with Microsoft. Meanwhile the 3Dfx chips still needed GLide to perform at their best. With those chips’ sales now plummeting, more and more game developers decided the oddball library just wasn’t worth the trouble anymore. By the end of 1999, a 3Dfx death spiral that absolutely no one had seen coming at the start of the year was already well along. NVIDIA was rapidly sewing up both the high end and the low end, leaving 3Dfx with nothing.
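To make the concept a little more concrete: in the fixed-function APIs of the day, a game described its transformations and its light sources with calls like those sketched below. This is generic OpenGL 1.x, nothing GeForce-specific, and simplified at that. Before hardware T&L, the driver ground through the per-vertex math implied by these calls on the host CPU; on a GeForce 256, the graphics chip did that work itself.

```cpp
#include <GL/gl.h>

// Fixed-function transform and lighting setup, OpenGL 1.x style. With a
// hardware T&L chip like the GeForce 256, the per-vertex math implied by
// these calls runs on the GPU instead of the host CPU.
void SetupTransformAndLighting(void)
{
    // Transform: position the object five units in front of the camera,
    // rotated 30 degrees around the vertical axis.
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(0.0f, 0.0f, -5.0f);
    glRotatef(30.0f, 0.0f, 1.0f, 0.0f);

    // Lighting: one white directional light, up and to the right.
    GLfloat position[] = { 1.0f, 1.0f, 1.0f, 0.0f };
    GLfloat diffuse[]  = { 1.0f, 1.0f, 1.0f, 1.0f };
    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    glLightfv(GL_LIGHT0, GL_POSITION, position);
    glLightfv(GL_LIGHT0, GL_DIFFUSE, diffuse);
}
```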

In 2000, NVIDIA continued to go from strength to strength. Their biggest challenger at the hardcore-gamer level that year was not 3Dfx, but rather ATI, who arrived on the scene with a new architecture known as Radeon. 3Dfx attempted to right the ship with a two-pronged approach: a Voodoo4 chipset aimed at the long-neglected budget market, and a Voodoo5 aimed at the high end. Both had potential, but the company was badly strapped for cash by now, and couldn’t afford to give them the launch they deserved. In December of 2000, 3Dfx announced that they had agreed to sell out to NVIDIA, who thought they had spotted some bits and bobs in their more recent chips that they might be able to make use of. And that, as they say, was that.

3Dfx was a brief-burning comet by any standard, a company which did everything right up to the instant when someone somewhere flipped a switch and it suddenly started doing everything wrong instead. But whatever regrets Gary Tarolli, Scott Sellers, and Ross Smith may have about the way it all turned out, they can rest secure in the knowledge that they changed not just gaming but computing in general forever. Their vanquisher NVIDIA had revenues of almost $27 billion last year, on the strength of GPUs which are as far beyond the original Voodoo chips as an F-35 is beyond the Wright Brothers’ flier, and which are at the forefront not just of 3D graphics but of a whole new trend toward “massively parallel” computing.

And yet even today, the 3Dfx name and logo can still send a little tingle of excitement running down the spines of gamers of a certain age, just as that of the Amiga can among some just slightly older. For a brief few years there, over the course of one of the most febrile, chaotic, and yet exciting periods in all of gaming history, having a Voodoo card in your computer meant that you had the best graphics money could buy. Most of us wouldn’t want to go back to the days of needing to constantly tinker with the innards of our computers, of dropping hundreds of dollars on the latest and the greatest and hoping that publishers would still be supporting it in six months, of poring over magazines trying to make sense of long lists of arcane bullet points that seemed like fragments of a particularly esoteric PhD thesis (largely because they originally were). No, we wouldn’t want to go back; those days were kind of ridiculous. But that doesn’t mean we can’t look back and smile at the extraordinary technological progression we were privileged to witness over such a disarmingly short period of time.






(Sources: the books Renegades of the Empire: How Three Software Warriors Started a Revolution Behind the Walls of Fortress Microsoft by Michael Drummond, Masters of DOOM: How Two Guys Created an Empire and Transformed Pop Culture by David Kushner, and Principles of Three-Dimensional Computer Animation by Michael O’Rourke. Computer Gaming World of November 1995, January 1996, July 1996, November 1996, December 1996, September 1997, October 1997, November 1997, and April 1998; Next Generation of October 1997 and January 1998; Atomic of June 2003; Game Developer of December 1996/January 1997 and February/March 1997. Online sources include “3Dfx and Voodoo Graphics — The Technologies Within” at The Overclocker, former 3Dfx CEO Greg Ballard’s lecture for Stanford’s Entrepreneurial Thought Leader series, the Computer History Museum’s “oral history” with the founders of 3Dfx, Fabian Sanglard’s reconstruction of the workings of the Vérité chipset and the Voodoo 1 chipset, “Famous Graphics Chips: 3Dfx’s Voodoo” by Dr. Jon Peddie at the IEEE Computer Society’s site, and “A Fallen Titan’s Final Glory” by Joel Hruska at the long-defunct Sudhian Media. Also, the Usenet discussions that followed the release of the 3Dfx patch for Tomb Raider and Nicol Bolas’s crazily detailed reply to the Stack Exchange question “Why Do Game Developers Prefer Windows?”.)

 
