Mr. Roberts Goes to Hollywood, Part 2: The Producer


This article tells part of the story of Chris Roberts.

With the Wing Commander movie having gone down in flames, there was nothing left for Chris Roberts and the rest of Digital Anvil to do but go back to making games. This undoubtedly pleased Microsoft, which had been waiting for some return on its generous investment in what it had thought was a new games studio for more than two years now. Yet Microsoft must have been considerably less pleased by the actual states of the game projects being undertaken by Digital Anvil. For they rather belied Roberts’s repeated assurances that doing the special effects for the movie wouldn’t affect the games at all. Of the five game projects that had been begun before the movies came calling, Robert Rodriguez’s Tribe had ended with his departure and Highway Knight had also been quietly abandoned. Two of the other projects — the real-time-strategy game Conquest and the crazily ambitious alternative-life-in-a-box Freelancer — were spinning their wheels with no firm timetable.

That did at least leave Starlancer to stand out as a rare example of good sense. At the height of his brother’s movie mania, Erin Roberts had flown to Britain, to place his Starlancer design documents in the hands of a new outfit called Warthog, located in the Robertses’ old hometown of Manchester. The first tangible product to result from Microsoft’s investment in Digital Anvil would thus come from a sub-contractor rather than from the studio itself.

Starlancer shipped in April of 2000, whereupon it became clear that, while Warthog had done a competent job with it, they hadn’t been able to make it feel fresh or exciting. “An interest-killing combination of ennui and déjà vu snakes through the whole endeavor,” wrote Computer Gaming World. In terms of presentation, it most resembled a higher-resolution version of Wing Commander II, the last game in the series before digitized human actors entered the picture. It too made do with straightforward mission briefings and the occasional computer-generated cutscene. By no means ought this to have been an automatically bad thing. Yet Starlancer lacked the spark that might have let it challenge the previous year’s Freespace 2 for the title of the 1990s space sim’s crowning glory. It sold like the afterthought it felt like.

In the meantime, Chris Roberts had picked up the pieces after the disappointment of the Wing Commander movie’s reception and unleashed his prodigious capacity for enthusiasm upon the Freelancer project. As he told gaming magazines and websites throughout 1999 and 2000, his goal was to create a “detailed, dynamic, living world” — or rather a galaxy, in which you could travel from planet to planet in your customized spaceship, doing just about anything you could imagine.

Freelancer is way beyond anything I’ve done in the Wing Commander universe. It’s going to be a fully functioning, living, breathing universe with a whole ecosystem. You can see the promise in something like Privateer, but this is geometrically [exponentially?] beyond that game. It’s like building a city. [?] Compared to Privateer, the scope, the dynamic universe  — it’s all 3D — is much more interesting. There’s much more intrigue the player can get involved in. Everything’s rules-based versus scripted. Commerce happens, trade happens, and piracy happens because of what’s going on in the game universe and not because of scripted events.

Freelancer could be played alone, but would well and truly come alive only when played online, as described by Computer Gaming World:

Freelancer’s multiplayer game will be a massively-multiplayer universe where thousands of players will be able to fly around and interact with each other in a variety of capacities. Digital Anvil envisions a dynamic, socially-oriented game that features the single-player game’s politics and clans as a backdrop. This multiplayer game will also permit you to ally with one of the main houses in the game, or go it alone.

Perhaps the coolest potential feature is the ability to own your own base…

Any of you reading this article who have been following the more recent career of Chris Roberts will readily recognize the themes here. Roberts is not a designer with a huge number of grand conceptual ideas, but once he has one he likes, he holds onto it like a dog does a bone.

Alas, by the summer of 2000 Microsoft was finally running out of patience. Seeing Digital Anvil’s lack of concrete progress toward finishing Freelancer as their fourth anniversary as a studio approached, the mega-corp was becoming restless. Even Erin Roberts seemed to be losing patience with his brother. With Chris’s acquiescence, he set up his own studio in Austin, called Fever Pitch Studios, to finish Digital Anvil’s long back-burnered real-time-strategy game Conquest. It would emerge in August of 2001 under the name of Conquest: Frontier Wars, the second Digital Anvil game that had had to leave its place of birth in order to come to fruition. It would prove no more successful than Starlancer, drowning in a sea of similar games.

Well before then, Microsoft reluctantly concluded that Chris Roberts, the whole reason it had invested so heavily in Digital Anvil in the first place, was the primary reason that the studio couldn’t finish a single game on its own. Not wanting to stir up a scandal in the year before the Xbox launch was to signal its even deeper commitment to games, it “offered” to buy Roberts out, a transaction which would give it a majority stake in the studio. On December 5, 2000, the press release went out: “Microsoft has reached a preliminary agreement to buy Digital Anvil. The acquisition will strengthen our commitment to producing top-quality PC and Xbox titles.” Roberts was to be given the face-saving ongoing role of “creative consultant” on Freelancer, but the reality was that he had been fired from his own company for his inability to keep to a schedule and hold to a plan. His time at Digital Anvil had resulted in one commercially failed and critically panned movie, plus two games that had had to be sub-contracted out to other developers in order to get them finished; both of them as well had been or would become commercial failures. Yet Chris Roberts walked away from Digital Anvil much wealthier than when he had gone in. He told the press that he would “take some time off to kind of rethink what I want to do in the interactive-entertainment field.” When he was done thinking, he would decide to go back to movies instead of games.

In the meantime, Microsoft installed a new management team down in Austin, with orders to sort through the unfocused sprawl that Freelancer had become and find out if there was a game in there that was worth saving. Perhaps surprisingly, they decided that there was, and turned the project over to a producer named Phil Wattenberger and a lead designer named Jörg Neumann, both Origin Systems alumni who had worked on the old Wing Commander games. At Microsoft’s behest, they steered Freelancer in a slightly more casual direction, making the player’s ship easily — in fact, optimally — controllable using a mouse alone. The mouse-driven approach had actually originated during Roberts’s tenure, but there it had been tied to a customizable and upgradable “Neuronet,” an onboard artificial intelligence that was supposed to let you vibe-sim your way to glory. That got jettisoned, as did many other similarly unwieldy complications. The massively-multiplayer living galaxy, for example, became a single-player or locally multiplayer one that wasn’t quite so living as once envisioned.

When it finally shipped in March of 2003, Freelancer garnered unexpectedly strong reviews; Computer Gaming World called it “the best Chris Roberts space sim Chris Roberts didn’t actually make.” But it wasn’t rewarded commensurately in the marketplace. Even with its newfound accessibility, it was hard for it to shake the odor of an anachronism of the previous decade among gamers in general; meanwhile the dwindling number of TIE Fighter and Freespace enthusiasts had a tendency to reject it for being irredeemably dumbed-down. Instead of marking the beginning of a new era for the space sim, it went down in history as a belated coda: the very last space sim to be put out by a major publisher with real promotional efforts and the hope — unrealized in this case — of relatively high sales behind it.

As for Digital Anvil: it was shut down by Microsoft once and for all in November of 2005, after completing just one more game, a painfully unoriginal Xbox shoot-em-up called Brute Force. Two games finished in almost nine years, neither of them strong sellers; the most remarkable thing about Digital Anvil is that Microsoft allowed it to continue for as long as it did.

By the time his games studio shuffled off this mortal coil, Chris Roberts had been living in Hollywood for a number of years. And he had found a way to do pretty well for himself there, albeit in a role that he had never anticipated going in.


The decade that Chris Roberts spent in Hollywood is undoubtedly the least understood period of his career today, among both his detractors and his partisans. It is no secret why: documentation of his activities during the decade in question is far thinner on the ground than during any other time. Roberts arrived in Hollywood as just another semi-anonymous striver, not as the “game god” who had given the world Wing Commander. No one in Tinsel Town was lining up to interview him, and no one in the press paid all that much attention to what he got up to. Still, we can piece together a picture of his trajectory in which we can have reasonable confidence, even if some of the details remain hazy.

Roberts moved to Hollywood in the spring of 2001 with his windfall from the Digital Anvil buyout burning a hole in his pocket. Notwithstanding the fiasco that had been Wing Commander: The Movie, he still harbored serious ambitions of becoming a director, probably assuming that his ability to finance at least part of the budget of any film he was placed in charge of would give him a leg up. He even brought a preliminary script to show around town. It was called The American Knight, a cinematic reinterpretation of another computer game: in this case, Origin Systems’s 1995 game Wings of Glory, which was itself yet another variation on the Wing Commander theme, dealing with the life of a World War I fighter ace in the air and on the ground. In an even more marked triumph of hope over experience, Roberts also nursed a dream of making a live-action Wing Commander television series. He founded a production company of his own, called Point of No Return Films, to further both of these agendas. January of 2002 found Point of No Return at the Sundance Film Festival; according to E! Online, they “threw an after-hours shindig that attracted 250 revelers, with Treach and De La Soul among them.” It really did help Roberts’s cause to have some money to splash around.

But Roberts soon found that the people he met in Hollywood knew Wing Commander, if they knew it at all, only as a misbegotten flop of a film. And they weren’t much more interested in his World War I movie. They were, on the other hand, always ready to talk backroom business with someone who had some number of millions in his pocket, as Roberts did. What followed was a gradual but inexorable pivot away from being a filmmaker and toward being a film enabler, one of those who secured the cash that the creative types needed to do their thing. A watershed was reached in March of 2002, when Point of No Return Films morphed into Ascendant Pictures, whose focus was to be “improving film value in foreign territories (presales), attracting top talent and film projects, and generating equity investment in films.” It wasn’t the romantic life of an auteur, but it did show that Chris Roberts was learning to talk the talk of back-office Hollywood, aided and abetted by a network of more experienced hands that he was assembling around him. Among them was a German immigrant named Ortwin Freyermuth, who would become the most important and enduring business partner of Roberts’s post-Origin career.

Ortwin Freyermuth, right, discusses a director’s cut of Das Boot with the film’s original editor Hannes Nikel circa 1997. Like Chris Roberts, Freyermuth really does love movies.

Freyermuth was renowned in the proverbial smoke-filled rooms of Hollywood for having pioneered an incredibly useful funding model for American films. It hinged on a peculiarity of German tax law that had been intended to encourage local film-making but instead wound up becoming a demonstration of the law of unintended consequences, played out on an international stage. The original rule, as implemented by the German Ministry of Finance in the 1970s, stated that any money that a German resident invested into a film production could be immediately deducted from his or her taxable income as if it was a total loss. It was hoped that this would encourage more well-heeled Germans to invest in homegrown movies, in order to combat the creeping mono-culture of Hollywood and ensure that Germans would have films to see that dealt with contemporary life in their own country. In time, this well-meaning measure would produce just the opposite result.

Enter Ortwin Freyermuth, a lawyer who enrolled at the University of California, Los Angeles, in the mid-1980s to study international copyright law. When he stumbled across the German law I’ve just described in the course of his studies, he noted with no small excitement what it didn’t say: that the films that were deemed eligible for the tax deduction had to be German films. He arranged to fund the 1990 movie The Neverending Story II almost exclusively with German money. This first experiment in the field was not so egregious compared to what would come later, given that the movie was also shot in Germany, albeit using mostly American actors. Then again, it was only a proof of concept. Freyermuth co-founded Capella Films thereafter to make German financing a veritable way of life for Hollywood. “In the best Hollywood tradition,” wrote Variety in 1994, “the company is rife with layers of relationships, both contractual and personal, here and abroad, such that an organizational chart, if one existed, would have more lines and intersections than fractal math.” Such byzantine structures, which had a way of obscuring realities upon which people might otherwise look askance, were standard operating procedure for Freyermuth.

The Freyermuth model spread throughout Hollywood as the 1990s wore on. It seemed like a win-win, both to those in California and to the Germans who were suddenly funding so many of their movies. In some cases, you could just borrow the money you wanted to invest, use your investment to reduce your taxable income dramatically, then pay off the loan from the returns a year or two later. And there was nothing keeping you from doing this over and over, year after year. Large private-equity funds emerged in Germany, pooling the contributions of hundreds of shareholders to invest them in movies, 80 percent of them made outside of the country. These Medienfonds became as ordinary as any other form of financial planning for Herr und Frau Deutschland. They were great for people on the verge of retirement: make an investment just before retiring, then enjoy the return afterward when your tax rate was lower. They were great for spreading out and reducing the tax liability that accompanied a major windfall, great for parents wishing to move money into the hands of their grown children without getting hit by high inheritance taxes. For Hollywood, meanwhile, they turned into a money spigot like no other. Insiders took to calling it “stupid German money,” because the people behind the spigot tended to take it in stride even if the films they were investing in never turned much of a profit. The real point of the investment was the tax relief; any additional profits that emerged were just gravy. The highest tax bracket in Germany at the time was about 51.5 percent. If you were in this tax bracket, then as long as you got at least half of your money back, you came out ahead.
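The break-even arithmetic behind that last claim is worth spelling out. Here is a purely illustrative sketch; the 51.5-percent rate comes from the sources above, while the function name and the round stake figure are my own inventions:

```python
# Illustrative only: the position of a top-bracket German investor who writes
# off a film stake in full, per the rules described above. The 51.5% rate is
# from the text; the stake amount and function name are hypothetical.
TOP_RATE = 0.515

def net_outcome(stake, fraction_recouped, rate=TOP_RATE):
    """Net result after deducting the full stake and recouping part of it."""
    tax_saved = stake * rate              # immediate write-off at the top rate
    recouped = stake * fraction_recouped  # whatever the film eventually returns
    return tax_saved + recouped - stake

# Recover just half the stake and you still come out slightly ahead:
print(net_outcome(100_000, 0.50))   # positive (about 1,500)
# True break-even sits at 1 - 0.515 = 48.5% recovery:
print(net_outcome(100_000, 0.485))  # essentially zero
```

Any recovery above 48.5 percent was pure profit for such an investor, which is why "getting at least half of your money back" was good enough.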

The sheer ubiquity of these media funds placed the German people’s elected representatives in Berlin in a delicate situation; a growing number of their own constituents were benefiting from the current state of the law. Nevertheless, in 1999 the Ministry of Finance made an attempt to stop the madness. It revised the rules to bring them into closer alignment with those that governed other, superficially similar European incentive schemes: to qualify, a film now had to either be made in Germany at least partially or have a German copyright owner. (A law of this sort in Luxembourg was the reason that the Wing Commander movie had been shot in that country.) But stupid German money was now too entrenched as a modus operandi for people on either side of the Atlantic to walk away from it without putting up a fight. Artful dodgers like Ortwin Freyermuth realized that they could sell the copyright to a Hollywood production to a German media fund, whilst inserting into the sales contract a right to buy it back at a future date for an agreed-upon price. Far from being hobbled by the change in law, they realized that they could use it to charge a premium for the tax relief they were providing to the citizens of Germany. For example, the Germans paid $94 million to Paramount Pictures for the copyright to the 2001 videogame adaptation Lara Croft: Tomb Raider. When they sold it back, the Germans were paid only $83.8 million. The tax benefits were so great that it was still worth it. By now, half of all the foreign money pouring into Hollywood was coming from the single country of Germany: $1.1 billion in 2004 alone.
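To see why the fund could accept a buy-back price below its purchase price, run the Tomb Raider figures quoted above through the same top-bracket logic. This is a rough sketch: the variable names are mine, and it assumes for illustration that the full purchase price was deductible at the 51.5-percent rate mentioned earlier.

```python
# Rough sketch of the post-1999 sale-and-buy-back structure, using the
# Tomb Raider figures quoted above (all amounts in millions of dollars).
# Assumes, for illustration, that the full purchase price was deductible
# at the 51.5% top marginal rate.
TOP_RATE = 0.515

price_paid = 94.0   # what the German fund paid Paramount for the copyright
buy_back = 83.8     # what Paramount later paid to reclaim it

trading_loss = buy_back - price_paid   # the premium the fund paid for tax relief
tax_saved = price_paid * TOP_RATE      # the write-off for top-bracket investors
net_result = tax_saved + trading_loss  # still well ahead overall

print(trading_loss)  # about -10.2
print(net_result)    # about +38.2
```

A roughly $10 million loss on the transaction itself, more than offset by roughly $48 million in tax relief: that is the sense in which the tax benefits "were so great that it was still worth it."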

Despite their ongoing popularity among the well-heeled classes, the media funds became more and more controversial in Germany as the young millennium wore on. Germany was, it was more and more loudly complained, effectively subsidizing Hollywood using money that ought to have been going to roads, schools, hospitals, and defense. Stefan Arndt, the producer of the rather wonderful German movies Run Lola Run and Good Bye Lenin!, noted that he had had to go outside his homeland to finance them because his fellow citizens all had their gazes fixed so firmly on Hollywood. “It’s crazy,” he said. “Every other country in the world ties strings to its film subsidies.” Even a group of hardcore Tolkien fans sleeping in line the night of the premiere of The Return of the King, the third film in Peter Jackson’s disproportionately German-funded Lord of the Rings trilogy, thought the situation a little bit absurd when they were told about it: “I don’t think that’s good, because I think that the three films carry themselves, that they put in enough money, that it doesn’t necessarily have to be financed with taxes.”

Whether we wish to see him as a devil tempting a young Faust named Chris Roberts, or just as a savvy man of business who found a mentee he deemed well worth his time, Ortwin Freyermuth showed our once and future game developer how this particular game was played. In April of 2004, Roberts was credited onscreen for the first time in a finished wide-release film as an executive producer. As if to underscore the transition he had made from creator to enabler, it was not a terribly Chris Roberts sort of movie. The Punisher was based on a Marvel Comics character, but it was no family-friendly superhero movie either. It was a grim, dark, and brutally violent revenge fantasy that made Dirty Harry look cute and cuddly. “At the end,” wrote the late great Roger Ebert in his review, “we feel battered down and depressed, emotions we probably don’t seek from comic-book heroes.” Whatever else you can say about Wing Commander, it does care deeply about the nobler human virtues which The Punisher submerges under fountains of blood, even if Chris Roberts is often irredeemably clumsy at presenting them.

Although The Punisher may have had a B-movie attitude, it wasn’t a B-movie, any more than Wing Commander had been. It was made for a budget of $33 million, with a cast that included John Travolta. (Admittedly, he sleepwalks through his performance as if he can barely be bothered to learn his lines, but one can’t have everything.) However joyless fuddy-duddies like yours truly and Roger Ebert may find movies like this, there was and is a market for them. The Punisher earned $20 million more than it had cost to make at the box office even before the long tail of cable-television showings and home-video rentals was factored into the equation.

Chris Roberts was off and running as a backstage Hollywood player. At the Sundance Film Festival in January of 2005, his name could be seen alongside those of George Clooney and Steven Soderbergh among the producer credits for The Jacket, an arty but flawed science-fiction film starring Adrien Brody, Keira Knightley, Kris Kristofferson, and the future Agent 007 Daniel Craig, with a soundtrack by Brian Eno. Again, these names are not the stuff of B-movies.

After The Jacket, Ascendant Pictures graduated from being an ancillary source of funding to becoming one of the primary production houses behind four reasonably high-profile independent features during 2005 and 2006. None of Lord of War, The Big White, Ask the Dust, or Lucky Number Slevin has gone down in film history as a deathless classic. Yet all of them could boast of A-list actors: Nicolas Cage, Jared Leto, Ethan Hawke, Robin Williams, Holly Hunter, Woody Harrelson, Colin Farrell, Salma Hayek, Donald Sutherland, Morgan Freeman, Ben Kingsley, and Bruce Willis can all be found amongst their casts.

As you have probably guessed, all of these films were funded primarily with German money. The aggregate return on them was middling at best. Lord of War and Lucky Number Slevin did pretty well; The Big White and Ask the Dust flopped miserably. As already noted, though, the fact that most of their investors were more concerned about the tax benefits than a more conventional return on investment made this less of an issue than it might otherwise have been. Then, too, like mutual funds on the conventional stock market, the German media funds put money into many movies in order to avoid a single point of failure. A film that became an unexpected hit could easily offset two or three duds.

Chris Roberts had arrived in the Hollywood inner circle — perhaps still the outer edge of the inner circle, but still. He had come a long way from that nerdy bedroom coder who had bumped into an artist from Origin Systems one day in an Austin games shop. Now he was living in a luxury condo in the Hollywood Hills, with one live-in girlfriend and a former one stalking him. (Oddly, it would be the latter whom he would wind up marrying.) I’ve been pretty hard on Roberts in these articles, and I’m afraid I’m going to have to be so again — harder than ever, in fact — before we’re finished. But two things he most definitely is not are stupid or lazy. I wrote at the outset of this pair of articles that few people have ever stretched so thin a thread of creative talent as far as he has. Let me amend that bit of snark now by acknowledging that he could never have done so if he wasn’t smart and driven in a very different sort of way. And let me make it crystal clear as well that nothing I’ve written about Roberts’s tenure in Hollywood so far should necessarily lead us to criticize him in any but the most tempered of ways. In exploiting a loophole in German tax law for all it was worth, he wasn’t doing anything that tons of others — a full-fledged cottage industry worth of them, on both sides of the Atlantic — weren’t also doing. But there’s more to the story in his case. Chris Roberts and Ortwin Freyermuth were actually near the center of one of the biggest financial scandals in modern German history, where dubious ethics crossed over into outright fraud.

Hollywood accounting is never simple. In that spirit, Ascendant Pictures spun off another company not long after its own founding. The wholly-owned subsidiary Rising Star Pictures was created to “closely cooperate with VIP Medienfonds Film and Entertainment”; this was the largest of all the German media funds, which collected almost half a billion Euros every year from its shareholders. Rising Star’s purpose was to be VIP’s anointed agent on the left side of the Atlantic, directing that fire hose of stupid German money around Hollywood. This meant the films of Ascendant, yes, but also those of others, to which Rising Star presumably charged a brokering fee. The final incarnation of Ascendant’s website, which is for some reason still extant, claims that Rising Star was involved in the funding of fourteen films in 2003 alone. A version of their site from March of 2005, accessible today via the Internet Archive’s Wayback Machine, heavily stresses the relationship with VIP, calling Rising Star the latter’s “primary placement agent.” This was a big, big deal, given the sheer quantity of money that VIP was taking in and paying out; more than $250 million came into Rising Star from VIP during 2003. The speed and scale of Chris Roberts’s rise in Hollywood becomes even more impressive when figures like these are taken into consideration.

Andreas Schmid

Unfortunately, Andreas Schmid, the head of VIP, was arrested for tax fraud in Cologne in October of 2005. It seemed that he had not been putting most of the money he collected into movies with even ostensibly German owners, as the law required. At regular intervals, Schmid dutifully gave his shareholders a list of films into which he claimed to have invested their contributions. In actuality, however, VIP used only 20 percent of their money for its advertised purpose of funding movies. Schmid deposited the remaining 80 percent into his bank, either parking it there to earn long-term interest or sending it elsewhere from there, to places where he thought he could get a higher rate of return. He then sent fake earnings reports to his shareholders. By defrauding both the government and his clients in this way, he could make a lot of money for himself and his partners in crime. There is reason to believe that Chris Roberts and Ortwin Freyermuth were among said partners, working the scam with him through Rising Star. I’ll return to that subject shortly.

For now, though, know that Schmid may have gotten so greedy because he knew the jig was soon to be up. Rumors were swirling in both Hollywood and Berlin throughout 2005 that the German Ministry of Finance had just about had enough of watching its tax money fly out of the country. The VIP Media scandal proved the last straw, if one was needed. In November of 2005, just one month after Schmid’s arrest, it was announced that blanket tax write-offs for film investments of any stripe were a thing of the past. Going forward, Hollywood would have to find another golden goose.

Even if they weren’t in on the fix, so to speak, the arrest of Schmid and the elimination of their primary funding mechanism could only have had a deleterious effect on Ascendant Pictures. Just when they had seemed to be hitting the big time, the ground had shifted beneath their feet. Those films that were already paid for by Germans could still be made, but there would be no more like them. The last Ascendant movie from the salad days to emerge from the pipeline was Outlander, their most expensive one ever and arguably also their worst one yet; not released until 2008 due to a whole host of difficulties getting it done, it managed to lose $40 million on a $47 million budget.

Deprived of the golden eggs, Ascendant blundered from lowlight to lowlight. They had to renege on a promise to Kevin Costner to line up the financing for a movie called Taming Ben Taylor, about “a grouchy, divorced man who refuses to sell his failing vineyard to the golf course next door.” Costner, who had been so excited about the movie that he had co-written the screenplay himself, sued Ascendant for $8 million for breach of contract; the case was settled in March of 2008 under undisclosed terms.

The first and only film that Ascendant helped to fund without German money only served to advertise how far down they had come in the world. Keeping with the golf theme, the low-rent Caddyshack ripoff Who’s Your Caddy?, which made Wing Commander look like Hamlet, was released in 2007 and failed to earn back its $7 million budget. It’s best remembered today for an anecdotal report that Bill Clinton loved it. By this point, Ascendant was little more than Chris Roberts and Ortwin Freyermuth; everyone else had jumped ship. (Freyermuth seems genuinely fond of Roberts. He has stuck with him through thick and thin.) The company would nominally continue to exist for another three years, but would shepherd no more movies to completion. Its final notices in the Hollywood trade press were in association with Black Water Transit, a locus of chaos, conflict, and dysfunction that culminated in a film so incoherent that it would never be released.

Over in Germany, Andreas Schmid was convicted and sentenced to six years in prison in November of 2007. Yet the fallout from the VIP scandal was still ongoing. Shortly after his conviction in criminal court, 250 former shareholders in his fund, from whom the German government was aggressively demanding the taxes they ought to have paid earlier, launched a civil lawsuit against Schmid and the UniCredit Bank of Munich, where he had been depositing the money he claimed was being used to fund movies. The case hinged on a single deceptively simple question: had the information that Schmid sent to his shareholders in the reports issued by his fund been knowingly falsified? Some of the documents from these court proceedings, which would be decided in favor of the plaintiffs on December 30, 2011, can be accessed online at the German Ministry of Justice. I’ve spent some time going over them in the hope of learning more about the role played by Roberts and Freyermuth.

It’s been a challenge because the documents in question are not the trial transcripts, transcripts of witness interviews, nor the detailed briefs one might wish to have. They are rather strictly procedural documents, used by the court to schedule its sessions, outline the arguments being made before it, and handle the other logistics of the proceedings. Nonetheless, they contain some tantalizing tidbits that point more in the direction of Roberts and Freyermuth as co-conspirators with Schmid than as his innocent victims. I’ll tell you now what I’ve been able to glean from them as a non-lawyer and non-accountant. I’ve also made them available for download from this site, for any readers who might happen to have a more nuanced command of the German language and German law than I do.

The claimants in the lawsuit show great interest in Ascendant’s daughter company Rising Star, which they believe had no legitimate reason for existing at all, a judgment which is confirmed by the court in a preliminary draft of the final ruling. A document dated June 27, 2008, contains the startling charge that Rising Star “never produced films, but were merely an intermediary layer used for concealment,”[1] citing emails written by Chris Roberts and Ortwin Freyermuth to Andreas Schmid between 2003 and 2005 that have been submitted into evidence. (Sadly, they are not included among these papers.) Another document, dated May 15, 2009, calls Rising Star “an artificially imposed layer.”[2] The final judgment concludes that Rising Star was an essential conduit of the fraud. What with Rising Star being “the primary placement agency for VIP,” as was acknowledged on the Ascendant website, all of the money passed through it. But instead of putting the entirety of the money into movies, it only used 20 percent of it for that purpose, funneling the rest of it back to the UniCredit Bank of Munich, Andreas Schmid’s co-defendant in the shareholder lawsuit. Even the 20 percent that stayed in Hollywood was placed with other production companies that took over the responsibility of overseeing the actual movies. Rising Star, in other words, was nothing but a shell company, a false front for getting the money from the investment fund into Schmid’s bank.

[1] Diese produzierten nie Filme, sondern waren lediglich eine zur Verschleierung eingeschaltete Zwischenebene.
[2] Eine künstlich dazwischen geschaltete Ebene.

Both Roberts and Freyermuth were interviewed at least once, presumably in the United States, by investigators from the Munich Public Prosecutor’s Office; this must have been done in the run-up to Schmid’s earlier, criminal trial. They were witnesses in that trial rather than defendants, yet the facts from their testimony that are cited here leave one wondering why that should have been the case. From a document dated May 15, 2009: “The structure provided by VIP was a ‘pro forma transaction,’ solely intended to achieve a certain tax advantage. This was also explained by witness Freyermuth.”[3] The claimants cite the testimony of Roberts and Freyermuth as evidence that “the fund managers therefore instructed their American partners to submit inflated estimates.”[4] Likewise, it is written that Roberts and Freyermuth confessed to a falsified “profit distribution for the film Perfume: The Story of a Murderer, which, according to the fund’s information, was 45 percent produced by VIP. In reality, the profit distribution did not correspond to the alleged 45-percent co-production share; it was significantly less favorable.”[5]

Even with the most open of minds, it is very hard to read statements like this and conclude that Chris Roberts and Ortwin Freyermuth were anything other than active, willing co-conspirators in a large-scale, concerted fraud perpetrated on German investors and ordinary taxpayers.

In a document dated May 17, 2010, it is stated that Freyermuth and Roberts are being summoned to appear as witnesses before the court, on the morning and afternoon respectively of July 16, 2010. But a report dated July 8, 2010, states that “the hearing scheduled for July 16, 2010, is cancelled after witness Freyermuth informed the court that he could not appear on such short notice, and the summons for witness Chris Roberts was returned to the court as undeliverable.”[6] On August 3, 2010, the court states that the two will be ordered to appear again, this time on September 20, 2010, with Freyermuth to be told to inform Roberts, who apparently still could not be reached, about the summons.[7] There the paper trail ends. It seems most likely that the two never did come to Munich to answer questions before the court.

Assuming all of this really is as bad as it looks, the final question we are left with is why and how Roberts and Freyermuth escaped prosecution. This question I cannot even begin to answer, other than to say that international prosecutions for financial malfeasance are notoriously difficult to coordinate and carry off. Perhaps the German authorities decided they had the ringleader in Andreas Schmid, and that was good enough. Perhaps Roberts and Freyermuth were given immunity in return for their testimony about the mechanics of the fraud in the United States. Or maybe there were some extenuating circumstances of which I am not aware, hard as it is to imagine what they might be.

In July of 2010, Roberts and Freyermuth sold Ascendant Pictures and all of its intellectual property to a film studio, film school, film distributor, real-estate developer, venture-capital house, and children’s charity — never put all your eggs in one basket! — called Bigfoot, located in, of all places, the Philippines. Roberts had left Hollywood some weeks or months before this transaction was finalized; hence the undeliverable court summons from Germany, addressed to the old Ascendant office. I do not know whether or how much he and Freyermuth ended up profiting personally from the VIP Media affair when all was said and done. I can only say that Roberts does not seem to have been a poor man when he moved back to Austin to think about his next steps in life.


Most of you probably know what Chris Roberts got up to after leaving Hollywood, but a brief précis may be in order by way of conclusion, given that it will be many years at best before we meet him again in these histories.

Man of good timing that he was, Roberts started looking for fresh opportunities just as the new Kickstarter crowd-funding platform was tempting dozens of figures from the old days of gaming to launch new projects. In 2012, he joined together with a number of his earlier business partners, from both Digital Anvil and Ascendant Pictures — Erin Roberts, Tony Zurovec, and Ortwin Freyermuth were all among them — to found Cloud Imperium Games and kick-start Star Citizen, the “virtual life in space” game that he had once thought Freelancer would become. Brilliantly executed from a promotional standpoint, it turned into the biggest crowd-funded game ever, raising hundreds of millions of dollars.

As of this writing, thirteen years later, Star Citizen is officially still in the early alpha stage of development, although it is actively played every day by tens of thousands of subscribers who are willing to pay for the privilege. A single-player variant called Squadron 42 — the Starlancer to Star Citizen’s Freelancer — was originally slated for release in 2014, and is thus now eleven years behind schedule. Cloud Imperium promises that it is coming soon. (If and when it finally does surface, it will include motion-captured footage, shot in 2015, of Mark Hamill, Gillian Anderson, Andy Serkis, and Gary Oldman.)

Having long since exhausted its initial rounds of crowd-funding, Cloud Imperium now pays its bills largely through pay-to-win schemes involving in-game spaceships and other equipment, often exorbitantly priced; Ars Technica reported in January of 2024 that buying the full hangar of ships would set you back a cool $48,000, almost enough to make you start looking around for a real spaceship in the deal. By any standard, the amount of money Cloud Imperium has brought in over the years is staggering. Assuming the whole thing doesn’t implode in the coming months, Star Citizen seems set to become the world’s first $1-billion videogame. While we wait, Wing Commander IV, the last game Chris Roberts actually finished, looks forward to its swift-approaching 30-year anniversary.

Naturally, all of this has made Cloud Imperium and Chris Roberts himself magnets for controversy. The loyal fans who continue to log on every day insist that the scale of what Star Citizen is trying to achieve is so enormous that the time and money being spent on it are unavoidable. Others accuse the game of being nothing but a giant scam, of a size and shameless audacity that would put a twinkle in even Andreas Schmid’s jaundiced eyes. Some of those who think the truth is most likely somewhere in between these extremes — a group that includes me — wonder if we should really be encouraging people to upload so much of their existence into a game in the first place. It seems to me that games that are meant to be enjoyed in the real world are healthier than those that set themselves up as a replacement for it.

Even if everything about Star Citizen is on the up-and-up, it’s difficult to avoid the conclusion that breathtaking incompetence has played as big a part as over-ambition in running up the budget and pushing out the timeline. I tend to suspect that some sort of spectacular collapse is more probable than a triumphant version 1.0 as the climax of the Star Citizen saga. But we shall see… we shall see. Either way, I have a feeling that Chris Roberts will emerge unscathed. Some guys just have all the luck, don’t they?



Did you enjoy this article? If so, please think about pitching in to help me make many more like it. You can pledge any amount you like.


Sources: Computer Gaming World of November 1999, August 2000, and May 2003; PC Gamer of November 2000; Los Angeles Times of August 14 2008; Der Spiegel of June 13 1993; Variety of February 24 1994 and November 13 2007; Los Angeles Daily News of March 5 2008; Billboard of April 19 2005, May 10 2005, September 20 2005, October 4 2005, and October 11 2005; Austin Business Journal of April 20 2001; Die Welt of December 6 2009; Deutsches Ärzteblatt of May 2 2003; New York Times of December 13 2004; Forbes of May 31 2019.

Online sources about games include a 2002 Wing Commander retrospective by the German website PC Player Forever; a 2000 GameSpot interview with Chris Roberts; Freelancer previews on ActionTrip and Games Domain; the old Freelancer News site; and the GameSpot review of Freelancer. Vintage reports of Digital Anvil’s acquisition by Microsoft can be found on GameSpot, IGN, Microsoft’s home page, and EuroGamer.

Online sources about movies include “Send in the Clowns (But Beware of Their Funny Money)” by Doug Richardson, Roger Ebert’s review of The Punisher, a profile of Ortwin Freyermuth at Alumniportal Deutschland, “How to Finance a Hollywood Blockbuster” and “Hollywood’s Big Loss” by Edward Jay Epstein at Slate, the current zombie version of Ascendant’s website and the more incriminating 2005 version, Bigfoot’s 2011-vintage website, E! Online’s report from the 2002 Sundance festival, “Medienfonds als ‘Stupid German Money’” by Dr. Matthias Kurp at Medienmaerkte.de, “Filmfonds für Reiche” at ansTageslicht.de, “Was sind Medienfonds?” at Investoren Beteiligung, and “Stupid German Money” by Günter Jagenburg at Deutschlandfunk. I made extensive use of the Wing Commander Combat Information Center, and especially its voluminous news archives that stretch all the way back to 1998.

As noted above, I’ve made the documents I found relating to Rising Star in the class-action lawsuit against Andreas Schmid available for local download. By all means, German speakers, dive in and tell me if you can find anything I’ve missed! I retrieved them from the official German Federal Gazette, or Bundesanzeiger.

My invaluable cheat sheet for this article, as for the last, was “The Chris Roberts Theory of Everything” by Nick Monroe from Gameranx.

But my superhero and secret weapon was our own stalwart commenter Busca, who used his far greater familiarity with the German Web and the German language to find most of the German-language sources shown above, and even provided some brief summaries of their content for orientation purposes. I owe him a huge debt of gratitude. Do note, however, that the buck stops with me as far as factual accuracy goes, and that all of the opinions and conclusions expressed in this article are strictly my own.

Footnotes

1 Diese produzierten nie Filme, sondern waren lediglich eine zur Verschleierung eingeschaltete Zwischenebene.
2 Eine künstlich dazwischen geschaltete Ebene.
3 Die von VIP vorgegebene Struktur sei ein „Pro-Forma-Geschäft“ gewesen, alleine mit der Zielsetzung einen gewissen Steuervorteil zu erreichen. Dies habe auch der Zeuge Freyermuth so erläutert.
4 Die Fondsverantwortlichen hätten deshalb ihre amerikanischen Partner veranlasst, überhöhte Schätzungen abzugeben.
5 Insoweit greift die Klageseite auf eine Gewinnverteilung (sog „waterfall“) für den Film „Das Parfum“ zurück, der nach den Fondsangaben zu 45 % von VIP 4 produziert worden sei (sog. Coproduktion). Tatsächlich habe die Gewinnverteilung keinesfalls dem angeblichen Co.-Produktionsanteil von 45 % entsprochen, sie sei wesentlich ungünstiger gewesen.
6 Der Termin vom 16. Juli 2010 wird aufgehoben, nachdem der Zeuge Freyermuth mitgeteilt hat, nicht so kurzfristig erscheinen zu können, und die Ladung des Zeugen Chris Roberts als unzustellbar wieder in den Gerichtseinlauf gekommen ist.
7 Zu diesem Termin sind die Zeugen Freyermuth und Roberts, letzterer über Freyermuth, zu laden.
 
 


Mr. Roberts Goes to Hollywood, Part 1: A Digital Anvil


This article tells part of the story of Chris Roberts.

What I’d really like to do is a game where you could travel from planet to planet — and there would be hundreds of planets — with full 3D action. You could go down and explore each planet in detail and interact with all sorts of live-action characters. Plus you could retool your ship with lots of different guns and engines.

The project would feature all the best elements of adventure and virtual reality, but with the same high production level of a Hollywood blockbuster. That means big-name stars and the look and quality of, say, Bladerunner. I guess my goal is to bring the superior production values of large Hollywood movies into the interactive realm — creating an environment that is really cool and fun and where you can spend hundreds of hours exploring a virtual universe that seems totally lifelike down to the smallest detail. Sort of a SimUniverse on steroids!

— Chris Roberts in early 1995, speaking from the department of The More Things Change…

One thing I believe I have learned during my 50-plus years on this planet is that flawed people are far more commonplace than genuinely, consciously bad ones. Given this, I try not to rush to attribute to malice aforethought that which can be explained by simple human weakness. I try to apply this rule when I weigh the surprising number of game developers who were well-nigh universally admired giants in their field during the twentieth century, only to become magnets for controversy in the twenty-first.

Thus I prefer to believe that Richard Garriott’s habit of lending his name to sketchy endeavors that never live up to expectations stems not from conscious grift but from a desire to still be seen as a gaming visionary, which is unfortunately accompanied by a reluctance to do the hard work that making really good games entails. Likewise, I think that Peter Molyneux’s habit of wildly over-promising stems not from his being “a pathological liar,” as journalist John Walker once infamously called him, but rather from a borderline pathological tendency to get high on his own supply. I’m prepared to come up with excuses for John Romero, for George Broussard, even for those two guys who have been trying to make a Space Quest successor — a dubiously necessary proposition in itself — for about fifteen years now. When you combine real but fairly venial character flaws with the eternal tendency of some fans to take their hobby just a little bit more seriously than it probably deserves, the result can be a toxic stew indeed.

Yet I must confess that one old warhorse from gaming’s younger days does give a degree of pause to my rationalizing. Few people have ever stretched so thin a thread of actual creative talent so far as has Chris Roberts. In the process, he’s amply demonstrated that his larger talents are for failing upward, and getting people to give him flabbergasting amounts of money while he’s at it. I’m not yet prepared to call him a conscious grifter, mind you, but I do think that there is a lot more plotting going on behind that seemingly guileless chipmunk smile of his than we might first suspect. Never fear: I’m not going to jump the chronology entirely to wade into the argument over whether Star Citizen, the most expensive game ever made even though it has not yet been made, was a giant scam from the start, a good-faith effort that later became a scam, or is still an honest endeavor thirteen years after its initial Kickstarter. What I do want to do is examine the period in Chris Roberts’s life between Wing Commander IV in 1996 and that first splashy Star Citizen Kickstarter of 2012. Who knows? Maybe doing so will help to explain some of what came later.


I have infinite respect for Chris Roberts, who wants to make interactive movies, but I can get a better cinematic experience by watching reruns of Diff’rent Strokes than by playing Wing Commander IV.

— Warren Spector, March 1997

In the summer of 1996, after it had become clear that Wing Commander IV was going to struggle just to earn back its development budget of more than $10 million, the management of its publisher Origin Systems sat down with Chris Roberts, the Wing Commander series’s creator and mastermind, to discuss the future of what had been the most popular franchise in computer gaming just a few years earlier. With a heritage like that behind it, the inhabitants of Origin’s executive suites weren’t yet ready to give up on Wing Commander completely. Yet they made it clear to Roberts that the next installment would have to scale back the budget and place less emphasis on the interactive-movie side of the experience and more on the space-combat side, in order to address a mounting chorus of complaints that the latter had been allowed to grow stale and rote in the last couple of installments while Roberts poured all of his energy into the former. Roberts thought for a few days about whether he was willing to continue to make Wing Commander games under his managers’ new terms, then turned in his resignation. No one could possibly have imagined at the time that Chris Roberts, who was not yet 30 years old, would still be one of the most prominent game developers in the world 30 years later, even though he would never manage to complete and ship another game of his own during that span of time. Our world is a deeply strange place sometimes.

That October, Roberts filed the necessary paperwork to found a company of his own with two other former Origin people: his brother Erin Roberts, who had just produced the poorly received Wing Commander spinoff Privateer 2: The Darkening, and Tony Zurovec, the programmer and designer behind the reasonably successful action-adventures Crusader: No Remorse and Crusader: No Regret. They called their new studio Digital Anvil. “I liked the idea of a name that could suggest Old World care and craftsmanship in the digital age,” said Roberts. “It’s like we’re hammering out fantastic experiences in our little forge.” By his account, their method of seeking funding was breathtaking in its naïveté: they got their hands on Bill Gates’s email address and simply wrote to him. Incredibly, they received a call the next day from Ed Fries, who had been tasked with making Microsoft a major player in games, one of the few software markets the most ruthless mega-corporation of the era had yet to conquer. He had been given serious money to spend to make that initiative a reality. Digital Anvil, in other words, had been lucky enough to strike while the iron was hot.

On February 19, 1997, a press release announced that Microsoft had signed Digital Anvil to “a multi-title publishing deal” which entailed “a significant investment” on its part — in fact, an investment that made Microsoft the owner of just short of half of the new company. The trio of founders set up shop in rather lavish fashion in downtown Austin, Texas, not far from Origin’s offices. They hired an initial staff of about 35 people, who got to enjoy such Microsoft-funded perks as an onsite state-of-the-art movie theater with Dolby Sound and leather seats. On paper at least, the staff of Digital Anvil made for a diverse and impressive group. Hidden amidst a galaxy of bright and eager faces out of the nearby University of Texas could be glimpsed Chief Technology Officer John Miles, whose Miles Sound System had long been the standard for audio programming among game developers, and Robert Rodriguez, a young filmmaker who had recently directed Quentin Tarantino’s script of From Dusk Till Dawn and was now being talked about as the burgeoning Austin film scene’s next Richard Linklater. “The parameters of the film world are pretty set,” said Rodriguez. “You’ve got to work with a two-hour chunk of time and things like that. Some of the stories I want to tell don’t fit within those slots.”

Rodriguez’s presence was arguably the first sign of the muddled priorities that would become a fact of life at Digital Anvil. Chris Roberts told the magazine Texas Monthly in the summer of 1997 that the studio had four games in the works: a real-time-strategy game called Conquest, a Mad Max-inspired driving game called Highway Knight, a hyper-ambitious multiplayer space sim called Freelancer, and Rodriguez’s amorphous project, which was called Tribe. (“The idea is, he will write a movie, possibly direct it, and then write a game.”) Another game in the pipeline that went unmentioned was Erin Roberts’s Starlancer, which was to be a linear space sim with a set-piece story line, an even more obvious successor to Wing Commander than was Freelancer. (Students of the Robertses’ later careers will recognize a kinship between Freelancer and Starlancer on the one hand and Star Citizen and its single-player companion Squadron 42 on the other.) That’s five games in all: it was quite the agenda for such a small studio. And then the movies came calling.

If Robert Rodriguez was a filmmaker who was tempted by the possibilities of games, Chris Roberts was the opposite, a game maker who seemed for all the world like he really wanted to be making movies; if Wing Commander III and IV had shone a spotlight on nothing else, it was this. While still working for Origin Systems, he’d come up with an outline for a non-interactive Wing Commander movie. He gave it to Kevin Droney, a screenwriter who had earlier turned the Mortal Kombat games into a movie, to make a proper script out of it, then sent it to Hollywood on a wing and a prayer: “It was my passion project, my baby.” It finally reached a hard-bitten Svengali of a producer named Todd Moyer. He pronounced it “pretty bad” — “basically, it was a C-rate Star Wars ripoff” — but his ears perked up when the agent who had sent it to him explained that Wing Commander was a hit series of computer games. “I’m not very reverential toward videogame creators,” Moyer confesses. “Games just don’t get me excited.” Or rather, they didn’t do so as creative productions in their own right; as product lines, Moyer saw them as a largely untapped opportunity for franchising: “Once you own [the] intellectual property, you can carve out very different deals for the creators and for a lot of people.” Chris Roberts fell under Moyer’s spell from the first meeting, which came right in the middle of all of the work to build out Digital Anvil. For he had no fonder dream than that of making a real Hollywood movie, and he definitely wasn’t going to let the games studio he was building at the same time get in its way. Moyer was telling him precisely what he most wanted to hear.

That said, it’s fair to ask who was really pulling the wool over whose eyes. For all that the movie industry had a well-earned reputation for all manner of financial trickery, it was expected to reveal as a matter of course and trade-union law how much each film had cost to make and how much it earned back in ticket sales. Budgets and sales figures, meanwhile, were regarded as trade secrets by game publishers, to be divulged only when doing so served their interests. It’s hard not to suspect that Chris Roberts benefited from this opacity, which required an insider’s perspective to begin to penetrate. Todd Moyer was no one’s idea of a babe in the woods; nor for that matter was Microsoft’s Ed Fries. Yet both were new to the games industry, and by all indications in a bigger hurry to sign deals than to do their due diligence. The culture of gaming moved fast in the 1990s. Describing Wing Commander as “a hit series of computer games” in 1997 wasn’t an outright lie, but it did neglect the salient fact that this series’s best days as a marketplace proposition were already well behind it, that the last couple of Wing Commander games hadn’t been hits at all. While the series certainly still had its fans, far more hardcore gamers in 1997 were excited about Quake and Warcraft II and Diablo than Wing Commander. In short, there was ample reason for the observant to question how much appetite there really was for a Wing Commander movie — or, now that we’re on the subject, for the new space sims that Digital Anvil proposed to craft in the image of Chris Roberts’s most famous creation.

Nevertheless, Todd Moyer took it upon himself to make the movie happen, just as Microsoft had agreed to fund the games. He sent Droney’s screenplay to some (uncredited) script doctors for some hasty revision. He judged the new version “only a little bit better” when it came back to him, but decided it was good enough for franchise work. He convinced a rather bemused-seeming Origin Systems to agree to license the Wing Commander name and characters in return for a small piece of any profits. He convinced 20th Century Fox — the house that built Star Wars, as Chris Roberts knew well — to agree to distribute the eventual film to theaters. He didn’t even blink when Roberts came to him with his one real demand: that he be allowed to direct the movie himself. “No one gave a shit about Chris Roberts as a director or not a director,” he says. “With these movies, at the right price, nobody cares who directs them.”

In the end, Moyer put together what journalist Jamie Russell describes as “a stunning deal — or rather series of deals — that jigsawed together money from all over. It began with a small domestic minimum guarantee from Fox and was followed by a Luxembourg tax incentive, some French investment, an Australian tax shelter, UK financing, and foreign sales.” The whole pot together came to almost $30 million — a relatively modest sum by Hollywood action-movie standards, but three times what Chris Roberts had had to hand when he shot the movie parts of Wing Commander IV.

Roberts and Moyer would have few kind words to say about one another in later years. “While Todd was good at doing deals, he didn’t give a damn or even know much about the creative process,” said Roberts in 2012. “As a first-time director, I really could have used the support of a proper creative producer who understood film-making and being on the set, rather than an ex-agent who couldn’t tell you the difference between a single or a master shot.” And yet for all the rancor that would follow the Wing Commander film becoming a laughingstock, it seems pretty clear from his subsequent career that Roberts was watching with keen eyes as Moyer scraped together funding for the movie in all sorts of head-scratching ways.

Indeed, even at this early juncture, Roberts was savvy enough to put together one eyebrow-raising arrangement of his own: he “hired” Digital Anvil, his own company, to provide the movie’s visual effects, thus funneling some substantial portion of that $30 million budget into his and his colleagues’ own coffers long before the movie ever made it into theaters. With this windfall, Digital Anvil doubled in size and announced to the world that they were now a cinematic special-effects house as well as a games studio. Chris Roberts insisted publicly that the two halves of the company were “entirely unrelated, except for me,” but nobody believed him. Coincidentally or not, John Miles and Robert Rodriguez both left Digital Anvil soon after. (Rodriguez would go on to become the marquee Hollywood director that Roberts had always dreamed of becoming, turning out hits such as Spy Kids and Sin City.) Microsoft, which had made its “significant investment” in Digital Anvil in the expectation that the studio would make games exclusively for it, could hardly have been pleased by the pivot into conventional film-making, but it showed remarkable patience and forbearance on the whole. Knowing that his mega-corp’s reputation as a ruthless monopolist preceded it, Ed Fries was determined to present a different face to the games industry, to show that Microsoft could be a good, supportive partner to the studios it took under its wing. An ugly lawsuit against Digital Anvil — even a justified one — would not have furthered that agenda. Once again, in other words, Chris Roberts got lucky.

The cast of the Wing Commander movie was brokered by Todd Moyer, in ways intended to protect the piebald interests of his many investors. In one of their first conversations, he had carefully explained to Chris Roberts that Mark Hamill, the star of the third and fourth Wing Commander games, was not adored by the general public for having once played Luke Skywalker in the same way that he was by the hardcore-gaming demographic. To John and Jane Doe, he was just a middle-aged curiosity for the “Where are they now?” file. The rewritten script offered up as the protagonist a fresh-faced space jockey who had just earned his wings, a perfect fit for a younger, up-and-coming actor. It turned out that Fox had just such an actor in mind: Freddie Prinze, Jr., a 21-year-old who had recently become regular cover fodder for the teen magazines, thanks to a star turn in I Know What You Did Last Summer, a slasher flick that earned $125 million at the box office in 1997. He would play an earlier incarnation of Christopher Blair, Mark Hamill’s old role. For his sidekick Todd “Maniac” Marshall, Fox proposed another product of the 1990s teen-horror craze: Matthew Lillard, who had played a serial killer in Scream. Other cast members were hand-picked to enhance the film’s appeal in foreign markets: David Suchet, known to a generation of British television viewers for his depiction of Agatha Christie’s fussy detective Hercule Poirot; Jürgen Prochnow, who had portrayed a U-Boat captain in the German classic Das Boot; Tchéky Karyo, a veteran French character actor whose CV included films like The Bear and La Femme Nikita. Betwixt and between all of the new faces, there was some talk of bringing back some of the supporting cast from Wing Commander III and IV — the most sustained discussions were held with Malcolm McDowell — but all of those negotiations ultimately fell through for one reason or another. When all was said and done, the cast for the movie overlapped not at all with the one from the games.

As a byproduct of the Luxembourg tax incentives that had helped to bring it into being, the entirety of the movie was shot on a sound stage there between February and April of 1998. The process was by most accounts a difficult one at times. Not only had Chris Roberts never received any formal training as a film director, but the cast and crew had three different mother tongues, with wildly varying levels of proficiency in the other two languages. Still, by no means was it a case of rank amateurs at every level. The set designer, for example, was Peter Lamont, who came in fresh off James Cameron’s Titanic, the biggest blockbuster in film history; the cinematographer was Thierry Arbogast, who had just performed that same task for The Fifth Element.

Once the shoot was finished, Chris Roberts returned to Austin with his reels of raw footage, to begin the work of splicing it together with the outer-space scenes being generated at Digital Anvil and turning it all into a proper movie. By December of 1998, he had a rough cut ready to go. In keeping with time-tested Hollywood tradition, Fox arranged for a handful of preview showings to ordinary members of the public. The feedback that came in was enough to tell the Fox executives, even if their own critical faculties could not, that they had a potential boat anchor — or maybe an anvil? — on their hands. They were left pondering what to do with this less-than-stellar take on outer-space adventure.

After hearing that Fox was considering condemning the movie to the memory hole of a direct-to-videotape release, Todd Moyer tried to buy Fox out of the picture so that he could shop Wing Commander elsewhere. But at the end of January of 1999, just when he thought the buy-out deal was done, he got a phone call from Tom Sherak, Fox’s head of distribution. As Moyer reported it to Jamie Russell decades later, their conversation went something like this:

“Todd, I’m not giving you the picture.”

“But we had a deal!”

“Good fucking luck. I’ll never sign the papers. I don’t give a shit. I’m not doing it. If you want to have a huge lawsuit, go ahead.”

“Tom, I’ve got to tell you…”

“No! It’s coming out in six weeks, and it’s going to have the Phantom Menace trailer on it.”

The Phantom Menace, George Lucas’s feverishly anticipated first prequel to his classic Star Wars trilogy, was scheduled to hit theaters in May of 1999. At the last minute, Fox had had the clever idea of attaching the second trailer for that movie — the first had come in November of 1998 — to the start of Wing Commander, making the latter the first place where the Star Wars faithful could catch this glimpse of what awaited them in a couple of months. Wing Commander was promptly slated for release in March of 1999, giving George Lucas and company just enough time to put the trailer together. It left no time, on the other hand, to mount a proper advertising campaign for Wing Commander. Nor did it leave Chris Roberts and company much time to try to fix the many infelicities that had been pointed out by the preview audiences.

The official Wing Commander world premiere took place on March 12. It was less than a gala affair, being held in Austin rather than Hollywood, with none of the cast in attendance; the actors in question were still saying polite things about the movie when forced into it, but quite obviously preferred to talk about something else. (Freddie Prinze, Jr., would grow less polite in later years, calling Wing Commander “a piece of shit” that he couldn’t stand to see or even think back on.) It appeared on 1500 screens across the country that same weekend, complete with the Star Wars trailer that Fox hoped would prove its secret weapon.

Alas, even this potent last-minute triage wasn’t enough to save the patient. Wing Commander brought in $5 million the first weekend, good for seventh place in the box-office listings. The reviews that appeared at the start of the following week were savage. Every critic in the land piled on to see who could come up with the best zinger. (Cinemax: “Filmed in Luxembourg(!), this low-flying turkey is an international co-production between the U.S., France, England, Germany, and Ireland. That pretty much spreads the blame as Wing Commander, in any language, goes down in computer-generated flames.” Entertainment Weekly: “It’s enough to make you wonder if the geniuses at Fox deliberately decided to release a movie this lifeless. They may have figured that everyone who showed up to see the new Star Wars trailer would be so bored by the main feature that they’d exit the theater screaming for a science-fiction movie that was actually fun.” SF Gate: “Wing Commander is the latest exhibit in the case to prove that Star Wars has wrecked American cinema.”) Perhaps in response to the reviews, more likely just as a result of natural gravity — most of the hardcore fans of the computer games presumably went out to see it right away — the movie earned just $2.2 million the next weekend, dropping to eleventh place. The third weekend, it was in fifteenth place with earnings of $1.1 million, and then it was out of American theaters and off the charts forever. A planned panoply of Wing Commander action figures, toy spaceships, backpacks, lunchboxes, tee-shirts, and Halloween costumes either never reached stores at all or were pulled from the shelves in short order. Star Wars this movie was not, in all sorts of ways.


Origin flew the teenage proprietors of the biggest Wing Commander fan site down to Austin for the premiere. (Aren’t they adorable, by the way?) They saw the movie four times in a single weekend — not a fate I would wish on anyone, but more power to them.

Chris Roberts at the premiere. Another fan in attendance wrote that “he seemed to be stressing that if he had had more money and time to spend on the movie, he would have made some changes.”

Richard Garriott at the premiere.

The general public was somewhat less enthused than our friends who saw the movie four times. These signs started to appear in theaters after it became a trend for patrons to buy a ticket, go in to watch the Star Wars trailer, then walk out and ask for their money back.



In light of the critical drubbing to which it was subjected and its modern-day status as a cinematic punchline, I watched Wing Commander: The Movie for the first time recently with, shall we say, considerable trepidation. My first reaction might serve as an argument for the value of low expectations: in many ways, it actually wasn’t as bad as I expected it to be.

The opening credits were snazzy and stylish, worthy of a far more respectable film. Even once the movie proper began, the production values and acting weren’t anywhere near as terrible as I had anticipated. This is not inexplicable: the belief shared by many fans that Wing Commander was an ultra-low-budget movie doesn’t hold water. As points of comparison, take the three vastly better received films which created and for a time cemented Freddie Prinze, Jr.’s standing as a teen heartthrob. I Know What You Did Last Summer, I Still Know What You Did Last Summer, and She’s All That all sported budgets well below that of Wing Commander; the last named, which was shot after Wing Commander but released before, had only one-third the budget of Chris Roberts’s film. Of course, none of these others were science-fiction films with a need for lots of fancy visual effects. Nonetheless, you don’t sign a heavyweight production designer like Peter Lamont, nor for that matter a potential star-in-the-making like Prinze, if you don’t have a certain level of connections and financial resources.

All of which is to say that, if you were to walk into a room where Wing Commander happened to be showing on the television, it wouldn’t jump out to you immediately as B-grade schlock in the way of, say, the notorious Plan 9 from Outer Space. The sets look good enough; the cinematography and sound design are perfectly professional; the acting doesn’t stand out for being awful either. In an ironic sort of way, all of this is a problem, for it means that Wing Commander manages to be just good enough to be merely boring and irritating rather than lovable in its sheer cluelessness.

My second big takeaway from watching the Wing Commander movie is closely related to my first: I was surprised at how similar it is to the computer games, after having heard legions of fans complain about just the opposite. There’s the same jarring bifurcation between the scenes of character interaction, which are shot like a conventional movie, and the ones depicting the action in outer space, which are completely computer-generated and, indeed, look very much like scenes from a game — a game, that is, made five to ten years after this movie was made. Likewise, there’s the same sense of a cast and crew of professionals doing their level best, knowing that what they’re creating is never going to be high art or even high entertainment, but feeling a craftsman’s responsibility to make the material come across as well as it possibly can. Nobody in film ever wants to be the weak link, even on a bad movie.

Rather than being awful on the face of it, then, Wing Commander is awful in a subtler way. Its problems all stem from the script, which doesn’t do the things that even popcorn-movie storytelling needs to do to be successful, and from its director’s baffling decisions about what parts of the script to leave in and leave out. A work of fiction — any work of fiction — is a clockwork mechanism beneath the surface. The author has to move her characters around in arbitrary ways to set up the plot beats her narrative requires. The art comes in making the mechanistic feel natural, even inevitable; at the risk of hopelessly muddling my metaphors, call it applying the flesh and sinew that are needed to conceal the bones of the story. In Wing Commander, said bones are poking out everywhere. The result feels so artificial that one is left looking for a stronger word than “contrived” to try to capture it.

Take the opening beats. The race of evil felines known as the Kilrathi attack a Terran Confederation flagship and secure — just to provide a note of contemporary relevance for those of us living in the third decade of the 21st century — an “AI” that can lead them to Earth, the location of which planet is for some reason unknown to them. This is an existential threat for the Terrans.

There’s just one ship that might be able to intercept the Kilrathi and report on their numbers and disposition before they make the jump to Earth: the outer-space aircraft carrier Tiger’s Claw. Unfortunately, it’s impossible for Terran High Command to tell this ship to do so because it is “beyond the reach of our communications.” (Presumably, the Tiger’s Claw’s radio will start working again before it’s time to send the report on the Kilrathi.) Luckily, a resupply vessel which can be reached is on its way out to the Tiger’s Claw. Even better, this resupply vessel is captained by one “Paladin,” some sort of special Terran “scout” who is only playing the role of the captain of an ordinary freighter. (What he or anyone else hopes to achieve by this deception is never explained.) Admiral Tolwyn, who stands at the head of the Terran High Command brain trust, such as it is, likes Paladin so much that he gave him his ring. (Isn’t that sweet?) Now, he needs only call up his favorite scout and tell him to tell the captain of the Tiger’s Claw to get a move on and intercept the Kilrathi.

Is this what he in fact does? No, reader, it is not. Instead Tolwyn remembers that the freighter happens to be ferrying a couple of young pilots fresh out of flight school over to the Tiger’s Claw. One of them is named Christopher Blair. Another Blair with whom he once served — now sadly deceased — was the kid’s father. “He was a good man,” Tolwyn says. On the basis of a zealous belief in eugenics, he elects not to convey the vital orders and intelligence to the grizzled special agent to whom he gave his ring but rather to the wet-behind-the-ears kid whom he’s never met.

It just goes on and on and on like this, with characters constantly making decisions that don’t make any sense. If you want your audience to become invested in your story, you have to provide them with a coherent internal logic that they can follow, no matter how outlandish your larger premise may be.

Another barrier to investment, likewise reflecting a bizarre lack of understanding of the fundamentals of this sort of fiction, is the yawning absence of a villain. Star Wars had Darth Vader; the best-ever Star Trek movie had Khan. Wing Commander has a few animatronic cats who spend less than five minutes onscreen and look absolutely appalling — and not in a good way — while they’re doing it; the Kilrathi are the one place where Wing Commander really does look like a B-movie through and through. To his credit, Chris Roberts was perceptive enough to see that it wouldn’t be a good idea to use the version of the Kilrathi from the games, actors in furry costumes who wound up looking more like cuddly department-store mascots or sports-team cheerleaders than a galaxy-enslaving force for evil. But what he was able to put in their place was not any better, as he also recognized. This explains why they got so little screen time: “The Kilrathi sucked and were basically cut out of the movie.”

A subtler, more aesthetically sensitive director might have spun our lack of eyes on the Kilrathi into a positive, turning their very mysteriousness into a sinister virtue in much the same way that the FreeSpace space sims did their evil aliens, the Shivans. Suffice to say that Chris Roberts was not such a director. The lack of an identifiable antagonist just emphasizes the sense of plot gears arbitrarily clanking around, oblivious to the requirements of compelling fiction. We see a lot of people fighting and dying, but we never know why or against whom or what. A popcorn movie without a villain just doesn’t work.

As for the heroes: this cast could have easily served the purpose if given a stronger script to work with. None of the young actors comes across as unlikable, but no actor could fully compensate for dialog as bad as this. “It takes balls — big balls, not ovaries — to keep track of four enemy fighters!” says Maniac, as the script desperately tries to interest us in a bantering will-they-or-won’t-they situation between him and one of the female pilots. Wing Commander is that guy at a party who thinks he’s hilarious and cool, whom everyone else just thinks is an annoying dweeb.

The image that springs to my mind now when I think back on Wing Commander: The Movie is one that nobody ever talks about. Early in the film, when he and Maniac are still aboard the tramp freighter, Blair has to plot a daredevil hyperspace jump because… Reasons. He does so, using what looks like a Casio calculator keyboard and some innate genetic talent that comes courtesy of his background as a “Pilgrim,” a whole other unnecessary and confusing thing in the script that I can’t be bothered to go into here. Anyway, he plots the jump, and just as it’s about to be made Maniac raises his hands above his head as if he’s riding a roller coaster. As he does so, you can see the most delicious expression on actor Matthew Lillard’s face: he looks all sorts of confused and bemused, as if wondering if this lame joke is really what he’s being asked to do here, even as he’s gamely trying to stay in character and look cocksure and pumped. He gets through the scene, the joke utterly fails to land… and Chris Roberts proceeds to put it in the final cut of his movie, no doubt sure that his audience will find it hilarious. It’s what the kids today call Cringe.

In a saner world, I would be able to end this article by telling you that all of the foregoing explains why Chris Roberts never got another sniff at a career in Hollywood. But he did, my friends… he did. Failing upwards is his superpower.


You might want to hold on tight, Maniac. It’s gonna be a rough ride.

Our principal cast of hot young pilots. From left to right, Saffron Burrows plays Lieutenant Commander “Angel” Deveraux; Ginny Holder is Lieutenant Rosie “Sassy” Forbes; Matthew Lillard is Todd “Maniac” Marshall; Freddie Prinze, Jr., is Lieutenant Christopher “Maverick” Blair. (Is a case of Top Gun envy involved?) Of the four, Lillard makes the best of the bad situation and delivers the most energetic performance. Prinze mostly just stands around looking conflicted and earnest. “I tried to make him young and confused,” Prinze said when asked what he wanted to bring to the character. Exactly what every action-movie lead should aspire to be, right?

Deveraux enforces discipline in her squadron by pulling out a gun and threatening to murder one of her pilots. None of her superiors aboard the Tiger’s Claw expresses any concern about this unhinged behavior. For all his obvious fascination with military culture, I’m not sure that Chris Roberts understands how it works.

Maniac and Sassy consummate their romantic relationship with a lot of clumsy thrashing about without ever actually taking off their clothes. Thank God for small mercies. I shudder to think what a real Chris Roberts-directed sex scene would be like.

Oddly, it’s the veteran David Suchet who delivers the worst performance of the cast, constantly swinging between equanimity and rage for no apparent reason. I’m not sure I’d put Hercule Poirot in charge of a starship anyway.

At one point, our World War II aircraft carrier in space suddenly turns into a submarine, complete with sonar pings and “Silence in the boat!” (never mind the soundless vacuum of space) and all the rest. Why? Because Chris Roberts thinks submarines are pretty cool too, that’s why. At least actor Jürgen Prochnow (left) had experience with this sort of thing…

Our space fighters, on the other hand, are decommissioned 1950s-era fighter jets when they’re at home in the hangar.

For the most part, the visual effects that were created by Digital Anvil while they were supposed to be making games for Microsoft aren’t terrible.

The special effects get themselves into serious trouble only when they’re blended with shots of the actors. Not coincidentally, videogames tended to have the same problem.

Do you prefer your Kilrathi plush, as in the games…

…or plastic, as in the movie? This is what is known as a Hobbes’s Choice. (There’s a dad joke in there for you old-school Wing Commander fans.)


There has to be someone else out there besides us. I hope they won’t be hostile, and I hope Earth is cool and doesn’t screw up first contact. No doubt our military will be there to greet them, defending the country. That’s not good. These aliens will come out, and they’re not going to be heavily armed because they’re not about that. We have to be mellow and peaceful. If that happens, it’ll be cool. But I don’t think it’ll happen that way. I think we’ll come hard, which is probably standard operating procedure. And that’s not a cool thing because we’ll probably get worked.

— Words of wisdom from Freddie Prinze, Jr., on the possibility of real extraterrestrial contact



Did you enjoy this article? If so, please think about pitching in to help me make many more like it. You can pledge any amount you like.


Sources: the book Generation Xbox: How Video Games Invaded Hollywood by Jamie Russell; Next Generation of March 1997; Computer Gaming World of May 1995 and June 1998; Starlog of May 1999; Austin Business Journal of March 2, 1997; Texas Monthly of September 1997.

Online sources include “Chris Roberts explains what went wrong on the Wing Commander film” by Ben Kuchera at Penny Arcade, a 1998 Games On Line interview with Chris Roberts, a 2012 Chris Roberts “Ask Me Anything” from Reddit, a Microsoft press release announcing the Digital Anvil investment, the 1999-vintage Dan’s Wing Commander: The Movie Page (including the proprietor’s story of attending the premiere), and a 2002 Wing Commander retrospective by the German website PC Player Forever. I made extensive use of the Wing Commander Combat Information Center, and especially its voluminous news archives that stretch all the way back to 1998.

My invaluable cheat sheet for this article was “The Chris Roberts Theory of Everything” by Nick Monroe from Gameranx.

 

Posted on November 21, 2025 in Digital Antiquaria, Interactive Fiction

 


Age of Empires (or, How Microsoft Got in on Games)

We don’t have a strategy to do a $200 game console that is a direct competitor to what Nintendo, Sega, and Sony are doing…

— Bill Gates, June 1996

It’s hard to overstate the scale of the real-time-strategy deluge of the late 1990s. For a period of several years, it seemed that every studio and publisher in the industry was convinced that duplicating the gameplay of Blizzard’s Warcraft and Westwood’s Command & Conquer franchises, those two most striking success stories in the business of computer games since Myst and DOOM, must surely be the digital equivalent of printing money. In the fall of 1997, Computer Gaming World magazine counted no fewer than 40 RTS’s slated for release during the coming Christmas season alone, to go along with the “nearly 20” that had already appeared with names other than Warcraft or Command & Conquer on their boxes. With no other obvious way of sorting through the jumble, the magazine chose simply to alphabetize the combatants in this “biggest clone war to hit the PC,” resulting in a list that began with 7th Legion and ended with Waterworld.

If those names don’t ring any bells with you today, you aren’t alone. While many of these games were competently made by genuinely enthusiastic developers, few mass movements in gaming have ever felt quite so anonymous. Although the drill of collecting resources, building up an army, and attacking your computerized or human enemies in real time struck a lot of people as a whole lot of fun — there was, after all, a reason that Warcraft and Command & Conquer had become so popular in the first place — it was hard for the creators of the next RTS generation to figure out what to do to set their games apart, whilst also staying within a strict set of design constraints that were either self-imposed or imposed upon them by their conservative publishers. Adventure games, CRPGs, and first-person shooters had all been the beneficiaries or victims of similar gluts in the past, but they had managed to explore a larger variety of fictional contexts if not always gameplay innovations. When it came to RTS’s, though, they all seemed to follow in the footsteps of either the high-fantasy Warcraft or the techno-futuristic Command & Conquer in their fictions as well as their gameplay. This can make even those members of the RTS Class of 1997 that are most fondly remembered today, such as the fantasy Myth or the science-fictional Total Annihilation, feel just a little generic to the uninitiated.

One game from this group, however, did stand out starkly from the crowd for the editors of Computer Gaming World, as it still does in the memories of gamers to this day. Whilst sticking to the tried and true in many of its mechanics, Age of Empires dared to try something different in terms of theme, mining its fiction from the real cultures of our planet’s ancient past. It played relatively straight with history, with no magic spells or aliens in sight. This alone was enough to make Age of Empires a welcome gust of fresh air in a sub-genre that was already sorely in need of it.

Yet there was also something else that made it stand out from the pack. Although its developer was an unknown outfit called Ensemble Studios — one of many that were springing up like toadstools after a rain to feed the real or perceived hunger among gamers for more, more, more RTS’s — its publisher was, of all companies, Microsoft, that one name in software that even your grandparents knew. The arrival of Age of Empires signaled a new era of interest and engagement with games by the most daunting single corporate power in the broader field of computing in general. If anyone still needed convincing that computer games were becoming mainstream entertainments in every sense of the phrase, this ought to have been enough to do the trick. For, whatever else one could say about Microsoft, it was not in the habit of exploring the nooks and crannies of the software market — not when there was a sprawling middle ground where it could plant its flag.



The man behind Ensemble Studios was one Tony Goodman, whose life’s direction had been set in the sixth grade, when his father, a professor of management science at Southern Methodist University in Dallas, Texas, brought home a terminal that could be used to connect to the university’s mainframe. “He would give me the same problems that he had given his students,” says Goodman. “My father would say, ‘Tony, I have a puzzle for you.’ Immediately, I was sucked in for the rest of the day. I always looked at the problems as puzzles. I loved puzzles and games, so I just couldn’t get enough. It came to me naturally. I remember saying, ‘This is it. This is what I’m going to do with the rest of my life!'”

In an ironic sense, Goodman’s career path would be the opposite of that of the typical game developer, who joins the world of more plebeian software development only after getting burnt out by the long hours and comparatively low pay in games. Long before starting Ensemble Studios, Goodman made a career for himself in the information-technology departments of the banking industry, specializing, like his father before him, in data-visualization tools and the like that could aid executive-level decision-making. Along the way, he learned much that he would later be able to apply to games — for, he says, good games have much in common with good software of any other stripe: “One of the most valuable things that I learned about developing software was that, for users to be productive, the software had to be fun to use. The key is to keep people entertained long enough to be rewarded. This also happens to be the fundamental dynamic of games and, indeed, all human experiences.”

In 1989, Tony Goodman and three partners formed Ensemble Corporation — not to be confused with Ensemble Studios — in his garage. Two years later, they released Command Center, a user-friendly front-end for Borland’s Paradox database system that could “automate queries, reports, forms, and graphics.” The company exploded from there, becoming a darling of the Forbes and Inc. set.

Throughout his years in business software, Goodman never lost touch with that younger version of himself who had been drawn to computers simply because he found them so wonderfully entertaining. He and his older brother Rick, who joined Ensemble Corporation as a programmer shortly after the release of Command Center, were lifelong board and computer gamers, watching at first-hand the aesthetic and technical evolution of the latter, parallel software industry. They found a kindred soul in another Ensemble programmer named Angelo Laudon, who, like them, could appreciate the higher salaries and profit margins in productivity software but nonetheless felt a longing to engage with his biggest passion. “We would talk about games until the early hours of the morning,” says Tony Goodman. “I loved the business of developing software, but I wanted to create products that everyone would tell their friends about. I wanted to create a pop-culture phenomenon. If you want to create software that people really want, developing videogames places you at the center of the universe.”

He realized that computer games had hit a watershed moment when Microsoft announced Windows 95, and with it DirectX, a software subsystem that would allow people to install and run even cutting-edge games as effortlessly as any other type of software, without the travails of the bespoke IRQ and DMA settings and memory managers that had been such a barrier to entry in the past. If he ever wanted to try to make games of his own, he knew, the time to get started was now, between the market’s expansion and the inevitable market saturation that would follow. Rick Goodman remembers how one day his brother

walks into work, assembles the team of database programmers, and says, “Would any of you guys rather be making games than database applications?”

I think people were caught off-guard. We were looking around the room, like, “Is this a trick question?” But I raised my hand, and Angelo Laudon raised his. Tony was serious. He said, “I’m going to pull you guys aside and we’ll make a game.” I thought that was awesome. I said, “Okay! What kind of game?” None of us had any idea.

For months thereafter, they continued to do their usual jobs during the day, then gathered again in the evening to hash through ideas and plans. During one of these sessions, Rick suddenly brought up a name that Tony hadn’t heard in a long, long time: Bruce Shelley, an older fellow with whom the brothers had played a lot of board games during their pre-teen and teenage years. Shelley worked in computer games now, said Rick — had in fact assisted Sid Meier with the design of Railroad Tycoon and Civilization. “Maybe — maybe — he’s not busy.”

And lo and behold, it turned out that he wasn’t. After finishing Civilization, Shelley had left Meier and his other colleagues at MicroProse Software in order to follow his new wife, a banking executive, to Chicago, where she’d secured a job that was far more lucrative than any that he’d ever held. He was writing gaming strategy guides out of his home office when Tony Goodman called him up one day out of the blue: “I hadn’t heard from him in fifteen years, and here he is with his own business in Dallas, doing software for banks, and he’s got guys who want to make computer games. We had these long conversations about what it takes to make a game. I told my wife, ‘I think this guy’s going to start a game company.’ And finally he did call me and say, ‘We are going to start a game company, and we want you to be involved.'” Shelley agreed to fly down to Dallas to talk it over.

But they still weren’t sure what kind of game they wanted to make. Then, as Shelley remembers, “One day one of the guys walked in with Warcraft. He said, ‘We’ve got to make this. We’ve got to make one of these. This is blowing the socks off the gaming world right now.'” It all came together quickly after that. Why not combine the hottest current trend in gaming with the last game Shelley had helped to make, which was already widely regarded as a hallowed classic? “The idea was, let’s take the ideas of Civilization — an historical game — and do a Warcraft/Command & Conquer-style RTS.”

This, then, was the guiding ethos of the project, the first line of any pitch document to a potential publisher: to combine the fast action of the typical RTS with at least some of the more expansive scope of Civilization. You would guide a tribe — in time, a full-fledged civilization — through the Paleolithic Age, the Neolithic Age, the Bronze Age, and the early stages of the Iron Age (where this particular voyage through history would end, leaving the table set for a sequel). Along the way, you would research a variety of technologies and build ever more impressive structures, some of which would not be strictly military in application, such as granaries and temples. There would even be a version of Wonders of the World, those grandest of all Civilization achievements, waiting to be built. But the whole experience would be compressed down into the typical RTS time frame of an hour or so, as opposed to the dozen or more hours it might take to get through a full game of MicroProse’s Civilization.

Initially titled Dawn of Man, the game evolved slowly but steadily betwixt and between the usual daily routine at Ensemble Corporation. The other Ensemble principals took Tony Goodman’s after-hours vanity project with a shrug. They didn’t really understand it, but he had worked hard for a long time and was entitled to it, they supposed, in the same way that other successful entrepreneurs were entitled to go out and buy themselves a Porsche.

When Tony Goodman started shopping the game to prospective publishers, it already looked and played decently well. He was growing more and more convinced that he had a winner on his hands. Yet even he was surprised at his good fortune when he made a cold call to Stuart Moulder, a middle manager at Microsoft’s relatively little-remarked games division, and captured the interest of the biggest fish in the software sea.

Historically speaking, Microsoft’s relationship to games had long been a tentative one. It was true that, in the very early days of the company, when it was known chiefly as a peddler of 8-bit BASIC implementations, Microsoft had published a fair number of games. (The most important of these was probably its ethically dodgy commercial version of Will Crowther and Don Woods’s classic Adventure, the game that lent its name to a whole genre.) Even after it signed the landmark deal to provide IBM’s first mass-market personal computer with an operating system — a deal that resulted in the ever-evolving PC standard that remains dominant to this day — Microsoft continued to dabble in games for a while. There was a good reason for this; it’s often forgotten today that IBM and Microsoft first envisioned that original IBM PC becoming a fixture in homes as well as offices. But when home users didn’t embrace the platform as rapturously as the partners had hoped, even as Corporate America took it to its bosom more quickly than they had ever dreamed, Microsoft abandoned games, thanks not only to the bigger profits that could be earned in operating systems and business software but also out of fear of the stigma that surrounded games and their makers in the more “serious” software circles of the 1980s. The one exception to Microsoft’s no-fun-allowed policy was — at least according to some people’s definition of “fun” — Flight Simulator, an early product for the IBM PC that turned into a minor cash cow for the company; like Microsoft’s operating systems and productivity packages, it was a program that people proved willing to buy all over again every few years, whenever it was updated to take advantage of the latest graphics cards and microprocessors. Its focus on the pedantic details of flying a real civilian airplane — the complications of VOR navigation systems and the insidious threat of carburetor ice were implemented, but absolutely no guns were to hand — presumably made it acceptable in Microsoft’s staid software lineup.

The release in 1990 of the comparatively approachable, user-friendly Windows 3.0 operating environment marked the moment when more conventional games began to become less anathema to Microsoft once again. An implementation of the hoary old card game Solitaire was among this latest Windows’s standard suite of software accessories. As easy to pick up as it was to put down, it became the perfect time killer or palate cleanser for hundreds of millions of office workers all over the world, enough to make it quite probably the most popular videogame ever in terms of sheer number of person-hours played. Microsoft went on to release four “Entertainment Packs” of similarly simple games for the Windows 3.x desktop, and to include the clever logic puzzler Minesweeper in 1992’s Windows 3.1. Microsoft was slowly loosening up; even Bill Gates confessed to a Minesweeper addiction.

The company now began to dabble in more ambitious games, the kind that could stand on their own rather than needing to be packaged a half-dozen to a box. There came a golf game for the corporate set, and then there came Space Simulator, an attempt to do for armchair astronauts what Flight Simulator had for so long been doing for armchair aviators. But the big shift came with Windows 95, the first (and arguably only) Microsoft operating system whose arrival would become a full-fledged pop-culture event. That old dream of the PC as a standard for the home as well as the office was coming true in spades by now; amidst the hype over multimedia and the World Wide Web, ordinary people were buying computers to use in their homes in unprecedented numbers. Microsoft was determined to serve their wishes and needs just as it had for so long been serving those of the corporate world. One result of this determination was DirectX, which allowed Microsoft’s customers to install and play audiovisually rich, immersive games without having to learn the arcane mantras of MS-DOS or memorize every detail of a computer’s hardware configuration. Another, initially less prominent result was a more empowered games division, which was for the first time given permission to blow through the musty vibes of office life and educational value that had clung to Microsoft’s earlier entertainment efforts and give the hardcore gamers what they really wanted.

At the same time, though, it should be understood that even by this point game publishing had not become a major priority at Microsoft. Far from it. There remained plenty of people inside the company who didn’t think getting into that business was a good idea at all, who feared that it would be perceived as a conflict of interest by the very extant game publishers Microsoft was trying to convince to embrace DirectX, or who thought the potential rewards just weren’t worth the distraction; after all, even if Microsoft managed to publish the most popular computer game in the world, those revenues would still pale in comparison to the Windows and Office juggernauts. Among the skeptics who did no more than tolerate the notion of Microsoft peddling games was Bill Gates himself.

The games division was in the keeping of one Tony Garcia at this time. One day a manager a rung below him on the hierarchy, a “talent scout” named Stuart Moulder whom he had explicitly tasked with finding hot “gamer’s games” to sway the naysayers and reinvigorate the division, knocked on his door to say that he’d just seen an RTS work-in-progress by a brand-new studio that was being bootstrapped out of a business-software maker. Yes, Moulder rushed to add, he understood that no part of that sentence sounded overly promising at first blush. But the game itself looked surprisingly good, he said. Really, really good. This could be the Big One they’d been waiting for.

So, Garcia invited the Dawn of Man crew to come up to Microsoft’s headquarters in Redmond, Washington, and show him what they had. And he too liked what he saw enough to want to put the Microsoft logo on it.

Microsoft was an infamously tough negotiator, but Tony Goodman was no slouch in that department either. “Negotiation is often about compromise,” he says. “However, negotiating with Microsoft is more often about leverage. Microsoft negotiates hard. They don’t respect you unless you do the same.” Goodman gained some of his needed leverage by showing the game to other publishers as well — Electronic Arts, Hasbro, even Discovery Channel Multimedia (who were attracted by the game’s interest in real history) — and showing Microsoft the letters they had sent him to express their very real interest. Meanwhile Microsoft’s marketing department had already come up with the perfect name for a game whose historical time frame extended well beyond the Dawn of Man: Age of Empires. Having invented the name, Microsoft insisted on owning the trademark. Goodman wasn’t able to move the beast from Redmond on this point, but he did secure a royalty rate and other contract terms that he could live with.

In February of 1996, Goodman’s moonlighting venture was transformed from a skunk works inside a business-software maker to a proper games studio at long last, via official articles of incorporation. That said, it wouldn’t do to exaggerate the degree of separation even now: Ensemble Studios was still run out of the office of Ensemble Corporation. It had about ten employees in the beginning. Angelo Laudon was listed as lead programmer and Rick Goodman as lead designer, despite the latter’s complete lack of experience in that field. Fortunately, Bruce Shelley had agreed to join up as well, coming down to Dallas about one week of every month and working from home in Chicago the rest of the time.

Soon after Age of Empires became a real project from a real studio, Tony Garcia left Microsoft. He was replaced by Ed Fries, a veteran member of the Office team who had programmed games for 8-bit Atari computers before starting at Microsoft in 1986. When he agreed to take this new job in games, he was told by his colleagues that he was committing career suicide: “Why would you leave Office, one of the most important parts of this company, to go work on something nobody cares about?”

For all their apparent differences in size and clout, Microsoft and Ensemble Corporation were in an oddly similar boat; both were specialists in other kinds of software who were trying to break into games. Or rather, a handful of passionate individuals within each of the companies was, while everyone else looked on with bemused indifference. In an odd sort of way, though, said indifference was the passionate individuals’ superpower. If the new RTS failed utterly, it wouldn’t show up on the ledgers of Microsoft or Ensemble Corporation as anything more than a slight blip on an otherwise healthy bottom line. This lack of existential stakes — an extreme rarity in an industry whose instability is legendary — was greatly to the game’s benefit. With no pressure to have it finished by such-and-such a date or else, the developers could fuss over it until they got every detail just exactly perfect. Sticking close to the RTS playbook even in his choice of metaphors, Rick Goodman describes time in game development as “a resource, like collecting wood. The more of it you have, the better off you are. We took a lot of time. A lot of time. Most companies would not have survived that length of time.”

During that time, the game got played. Over and over and over and over again, it got played, not only by the Ensemble crew but by lots of folks at Microsoft, including the experts at that company’s “usability laboratory.” Microsoft brought in people from the street who had never played an RTS before, who didn’t even know what those initials stood for, and had them run through the early tutorial missions to see if they communicated what they were supposed to. Rinse and repeat, rinse and repeat. Age of Empires was tested and tweaked no differently than it would have been if it were a $1000 mission-critical software application destined to be the fodder of corporate purchasing departments all over the world.

For this was to be a broad-spectrum computer game, beamed straight at the center of the mass market but wide and diffuse enough to capture an unusual variety of playing styles and priorities. Bruce Shelley has spoken often since of the value of putting “multiple gaming experiences within one box.”

To reach a broad audience, include a variety of game types and adjustable game parameters that combine in different ways to create a range of quite different gaming experiences, all within the same game. Examples of different gaming experiences with the Age of Empires games are multiplayer death matches, single-player campaigns, random-map games, cooperative-play games, and Wonder races. Victory conditions, map types, and level-of-difficulty settings are examples of parameters that can be adjusted to create different gaming experiences.

We want the smartest kid in junior-high school (a hardcore gamer) telling his or her friends that our game is his or her favorite right now. When those friends buy our game, they probably won’t be able to compete with the star, but by adjusting those parameters they can still find a type of game that suits them and have fun. The average kids and the smart kids can both enjoy our game, although they play quite different parts of it.

When we provide a variety of gaming experiences within the single box, we increase the number of people who can buy our game and be happy with it. Each of these satisfied customers becomes in turn a potential evangelist.

Although I wouldn’t directly equate being “hardcore” when it comes to games with being “smarter” than those who are not in the way that Shelley (perhaps inadvertently) does here, the larger point is well-taken. This was something that the industry in general was finally coming to realize by the latter 1990s, probably more belatedly than it ought to have done. By making it possible to play the same game in a variety of different ways, you could dramatically expand the size of that game’s audience. You did so by including varying difficulty levels and speed settings, to make the game as easy or hard, as relaxing or frenetic, as any particular player wished. And you did so by including different modes of play: story-driven campaigns, a single-player skirmish mode, online multiplayer contests. It might take additional time and money to make all of these things, especially if you were determined, as you ought to be, to make them all well, but it remained vastly cheaper than making a whole new game. Most older games dictate to you how you must play them; newer ones ask you how you would like to play them. And this has been, it seems to me, an immensely positive development on the whole, broadening immeasurably the quantity and types of people who are able to enjoy games — both each individual game that appears and gaming in the aggregate.

Certainly Age of Empires understood all of this; in addition to selectable difficulty levels and speed settings, it includes campaigns, pre-crafted singleton maps for single- or multiplayer sessions, randomly generated maps, even a scenario and campaign editor for those who want to turn their hobby into a truly creative pursuit. Anyone who has been reading these histories of mine for a while will surely know that the RTS is far from my favorite sub-genre of games. Yet even I found Age of Empires surprisingly easy to get along with. I turned the difficulty and speed down and approached the campaigns as an interactive whirlwind tour of the ancient world; as readers of this site’s companion The Analog Antiquarian are well aware, that is a subject I can never get enough of. I have a friend, on the other hand, who tells me that he can’t remember ever even starting a campaign back in the day, that he jumped right into multiplayer on Day One to engage in ferocious zero-sum contests with his friends and never looked back. And that’s fine too. Different strokes for different folks.

But since I am the person I am, I just have to say a bit more about the campaigns. There are actually four of them in all, chronicling the evolution of ancient Egypt, Greece, Babylon, and Japan. (An expansion pack that appeared about a year after the base game includes three more campaigns that deal exclusively with the rise and fall of Rome.) The campaigns were a labor of love for the lifetime history buff Bruce Shelley, as were the 40-plus pages in the manual dedicated to the twelve different playable civilizations, whose ranks include not only the aforementioned but also such comparatively obscure cultures as the Minoans, the Phoenicians, and even the Shang Chinese, all with strengths and weaknesses that stem from what we know — in some cases, what little we know — of their real-world inspirations.

“We really only needed one grand theme for a civilization that was historical enough to make people believe,” says Rick Goodman. “Like, they know Rome was good at X and the Greeks were good at Y.” For all that Age of Empires is no one’s idea of a studious exploration of history, it does have a little bit more on its mind than the likes of Warcraft or Command & Conquer. At its best, it can make you ponder where and how human civilization came to be, starting as it does with the bedrock resources, the food and wood and, yes, stone out of which everything that followed was built. I’m sure it must have sent at least a few of its young players scurrying to the library to learn a little more about our shared heritage. Perhaps it managed to spark an enduring passion for history in some of them.

The graphics style was an additional key to Age of Empires’s appeal. Bruce Shelley:

The sun is always shining in Age of Empires. It was always a bright, inviting world that you wanted to know more about. I’ve always had problems with dark, forbidding games. You’re crushing your audience — you’re really narrowing who is going to consider buying a game when you make it ugly, dark, and forbidding. Maybe it appeals to a certain audience, but…

When you set out to develop a PC game, the potential market is everyone on Earth who owns a PC. Once you begin making decisions about your game (gory, sci-fi, RTS, shooter), you begin losing potential customers who are not interested in your topic, genre, or style. Commercially successful games hold onto [a] significant share of that market because they choose a topic, genre, and style that connect with a broad audience. The acceptance of the PC into more world communities, different age groups, and by women means that games do not need to be targeted, and perhaps should not be targeted, solely to the traditional gaming audience of young males.

Age of Empires inevitably comes down to war in the end, as do most computerized depictions of history. But the violence is kept low-key in comparison to many another RTS bloodbath, and there is at least a nod in the direction of a non-conquest victory, an equivalent to sending a spaceship off to Alpha Centauri as a capstone to a game of Civilization: if you can build yourself a Wonder of the World in Age of Empires, then defend it for a period of time against all comers, you are declared the victor then and there. A “religious” victory can also be achieved, by collecting all of the religious artifacts on the map or holding all of its sacred sites for a period of 2000 years — about ten minutes in game time. There are even some nods toward diplomacy, although in practice becoming allies usually just means you’ve agreed not to fight each other quite yet.

I don’t want to overstate the scale of the game’s innovations. At the end of the day, Age of Empires remains an RTS in the classic mold, with far more in common with Warcraft and Command & Conquer than it has with Civilization. It’s an extremely well-made derivative work with a handful of fresh ideas, not a revolution from whole cloth. Its nods in the direction of Civilization are no more than that; it’s not, that is to say, the full-blown fusion that may have been Bruce Shelley’s original vision for it. Compressing into just one hour the first 10,000 to 12,000 years of human civilization, from the dawn of sedentary farming to the splendors of high antiquity, means that lots of the detail and texture that make the game called Civilization so compelling must get lost. Even if you’re a story guy like me, you’ll no longer be marveling that you’ve brought writing, irrigation, or religion to your little group of meeples after you’ve played your first map or two; those things will have become mere rungs on the ladder to the victory screen, the real point of the endeavor. In a rare lukewarm review, GameSpot’s T. Liam MacDonald put his finger on some of the places where Age of Empires’s aspirations toward Civilization don’t live up to the reality of its well-worn RTS template.

I wish that Age of Empires was what it claimed to be: Civilization with a Warcraft twist. Instead, it is Warcraft with a hint of Civilization. That’s all well and good, but it places it firmly in the action-oriented real-time combat camp, rather than in the high-minded empire-building [camp] of Civilization. The result is Warcraft in togas, with slightly more depth but a familiar feel.

I too must confess that I did eventually get bored with the standard RTS drill of collect, build, and attack that is the basis of almost every scenario. As the scenarios got harder, I gradually lost the will to put in the effort it would take to beat them; I wound up quitting without regrets about halfway through the second campaign, satisfied that I’d had my measure of fun and certain that life is too short to continue with entertainments of any type that you no longer find entertaining. Still, I won’t soon forget Age of Empires, and not just because its theme and atmosphere make it stand out so from the crowd. I would be the last person to deny that it’s an incredibly polished product from top to bottom, a game that was clearly fussed over and thought about to the nth degree. It exudes quality from its every virtual pore.


The Age of Empires intro movie displays some of the game’s contradictory impulses. The scenes of combat are neither better nor worse than those of any other game that attempts to make war seem glorious rather than terrible. Yet the weathered ancient stone raises other, more poignant thoughts about the cycles of life, time, and civilization. “For dust you are, and to dust you shall return.”

Each campaign follows the historical development of the civilization in question to whatever extent the demands of gameplay allow.



In commercial terms, Age of Empires was a perfect storm, a great game with wide appeal combined with a lot of marketing savvy and the international distributional muscle of the biggest software publisher in the world. The principals from Ensemble remember a pivotal demonstration to Bill Gates, whose reservations about Microsoft’s recent push into games were well-known to all of them. He emerged from his first first-hand encounter with Age of Empires calling it “amazing,” assuring it the full support of the Microsoft machine.

While Microsoft’s marketing department prepared an advertising campaign whose slick sophistication would make it the envy of the industry, Tony Goodman deployed a more personal touch, working the phones at the big gaming magazines. He wasn’t above using some psychological sleight-of-hand to inculcate a herd mentality.

I built relationships with the most recognized gaming magazines. I invested a lot of time with key editors, seeding the idea that Age of Empires was “revolutionary” and would become a “phenomenon.” They may not have believed me at first, but my goal wasn’t to convince them. My goal was to plant wondrous possibilities in their brains and create anticipation, like Christmas for kids.

When the early previews began appearing, they were using the terms that we seeded: “revolutionary” and “phenomenon.” These early opinions were then picked up and echoed by other publications, creating a snowball effect. Eventually, all the publications would get on board with this message, just so they didn’t look out of touch.

Sure enough, in the Computer Gaming World RTS roundup with which I opened this article, Age of Empires was given pride of place at the top of the otherwise alphabetized pile, alongside just one august companion: Starcraft, Blizzard’s long-awaited follow-up to Warcraft II, which was to try the science-fiction side of the usual RTS fantasy/science-fiction dichotomy on for size. As it happened, Starcraft would wind up slipping several months into 1998, leaving the coming yuletide season free to become the Christmas of Age of Empires.

So, while Age of Empires may not have quite lived up to its “revolutionary” billing in gameplay terms, it definitely did become a marketplace phenomenon after its release in October of 1997, demonstrating to everyone what good things can happen when a fun game with broad appeal is combined with equally broad and smart marketing. It doubled Microsoft’s own lifetime sales projections of about 400,000 units in its first three months; it would probably have sold considerably more than that, but Microsoft had under-produced based on those same sales predictions, leaving the game out of stock on many store shelves for weeks on end while the factories scrambled to take up the slack. Age of Empires recovered from those early travails well enough to sell 3 million units by 1999, grossing a cool $120 million. It left far behind even those other members of the RTS Class of 1997 that did very well for themselves by the conventional standards of the industry, such as Myth and Total Annihilation. In fact, Age of Empires and the franchise that it spawned came to overshadow even Command & Conquer, taking the latter’s place as the only RTS series capable of going toe-to-toe with Blizzard’s Warcraft and Starcraft.

And yet that is only a part of Age of Empires’s legacy — in a way, the smaller part. In the process of single-handedly accounting for half or more of the Microsoft games division’s revenue during the last couple of years of the 1990s, Age of Empires changed Microsoft’s attitude about games forever. The direct result of that shift in attitude would be a little product called the Xbox. “I believe there were two successes that had to happen at Microsoft in order for the Xbox console to happen,” says Stuart Moulder. “One was DirectX, which showed that we had the chops on the operating-system side to deliver technology that made it possible to build great games. Then, on the other side, we had to show that we had the ability as a first-party publisher to deliver a hit game aimed at core gamers — because that’s [the] people who buy and play console games.” Thanks to Age of Empires, gaming would be overlooked no more at Microsoft.





Sources: The book Gamers at Work by Morgan Ramsay; Computer Gaming World of October 1997, November 1997, and January 1998; Next Generation of June 1996; InfoWorld of April 22 1991.

Online sources include Soren Johnson’s interview with Bruce Shelley, Scott Stilphen’s interview with Ed Fries, David L. Craddock’s long ShackNews series on Microsoft’s gaming history (especially the chapter dealing directly with Age of Empires), Thomas Wilde’s profile of Ed Fries for GeekWire, Richard C. Moss’s history of Age of Empires for Ars Technica, a Microsoft press release from February of 1998, and T. Liam MacDonald’s vintage review of Age of Empires for GameSpot.

Finally, the box of documents that Bruce Shelley donated to the Strong Museum of Play was a valuable resource.

A “Definitive Edition” of the original Age of Empires is available as a digital purchase on Steam.



Doing Windows, Part 12: David and Goliath

Microsoft, intent on its mission to destroy Netscape, rolled out across the industry with all the subtlety and attendant goodwill of Germany invading Poland…

— Merrill R. Chapman

No one reacted more excitedly to the talk of Java as the dawn of a whole new way of computing than did the folks at Netscape. Marc Andreessen, whose head had swollen exactly as much as the average 24-year-old’s would upon being repeatedly called a great engineer, businessman, and social visionary all rolled into one, was soon proclaiming Netscape Navigator to be far more than just a Web browser: it was general-purpose computing’s next standard platform, possibly the last one it would ever need. Java, he said, generously sharing the credit for this development, was “as revolutionary as the Web itself.” As for Microsoft Windows, it was merely “a poorly debugged set of device drivers.” Many even inside Netscape wondered whether he was wise to poke the bear from Redmond so, but he was every inch a young man feeling his oats.

Just two weeks before the release of Windows 95, the United States Justice Department had ended a lengthy antitrust investigation of Microsoft’s business practices with a decision not to bring any charges. Bill Gates and his colleagues took this to mean it was open season on Netscape.

Thus, just a few weeks after the bravura Windows 95 launch, a war that would dominate the business and computing press for the next three years began. The opening salvo from Microsoft came in a weirdly innocuous package: Microsoft Plus! for Windows 95, a grab bag consisting mostly of slightly frivolous odds and ends that hadn’t made it into the main Windows 95 distribution — desktop themes, screensavers, sound effects, etc. But it also included the very first release of Microsoft’s own Internet Explorer browser, the fruit of the deal with Spyglass. After you put the Plus! CD into the drive and let the package install itself, Internet Explorer proved as hard to get rid of as a virus. For unlike all other applications, there appeared no handy “uninstall” option for Internet Explorer. Once it had its hooks in your computer, it wasn’t letting go for anything. And its preeminent mission in life there seemed to be to run roughshod over Netscape Navigator. It inserted itself in place of its arch-enemy in your file associations and everywhere else, so that it kept turning up like a bad penny every time you clicked a link. If you insisted on bringing up Netscape Navigator in its stead, you were greeted with the pointed “suggestion” that Internet Explorer was the better, more stable option.

Microsoft’s biggest problem at this juncture was that that assertion didn’t hold water; Internet Explorer 1.0 was only a modest improvement over the old NCSA Mosaic browser on whose code it was based. Meanwhile Netscape was pushing aggressively forward with its vision of the browser as a platform, a home for active content of all descriptions. Netscape Navigator 2.0, whose first beta release appeared almost simultaneously with Internet Explorer 1.0, doubled down on that vision by including an email and Usenet client. More importantly, it supported not only Java but a second programming language for creating active content on the Web — a language that would prove much more important to the evolution of the Web in the long run.

Even at this early stage — still four months before Sun would deign to grant Java its own 1.0 release — some of the issues with using it on the Web were becoming clear: namely, the weight of the virtual machine that had to be loaded and started before a Java applet could run, and said applet’s inability to communicate easily with the webpage that had spawned it. Netscape therefore decided to create something that lay between the static simplicity of vanilla HTML and the dynamic complexity of Java. The language called JavaScript would share much of its big brother’s syntax, but it would be interpreted rather than compiled, and would live in the same environment as the HTML that made up a webpage rather than in a sandbox of its own. In fact, it would be able to manipulate that HTML directly and effortlessly, changing the page’s appearance on the fly in response to the user’s actions. The idea was that programmers would use JavaScript for very simple forms of active content — like, say, a popup photo gallery or a scrolling stock ticker — and use Java for full-fledged in-browser software applications — i.e., your word processors and the like.

In contrast to Java, a compiled language walled off inside its own virtual machine, JavaScript is embedded directly into the HTML that makes up a webpage, using the handy “<script>” tag.
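The pattern is simple enough to sketch in a few lines. The snippet below is purely illustrative, not taken from any 1995 source, and it uses the later DOM method document.getElementById for clarity; Navigator 2.0’s own scripting interface was considerably more limited.

```html
<html>
  <body>
    <p id="greeting">Hello, static Web.</p>

    <!-- The script lives right alongside the markup it manipulates:
         no compilation step, no separate virtual machine to spin up. -->
    <script>
      document.getElementById("greeting").innerHTML = "Hello, dynamic Web.";
    </script>
  </body>
</html>
```

A Java applet performing the same trick would have meant compiling a class file, embedding it with an applet tag, and waiting for the virtual machine to start — exactly the overhead JavaScript was invented to sidestep.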

There’s really no way to say this kindly: JavaScript was (and is) a pretty horrible programming language by any objective standard. Unlike Java, which was the product of years of thought, discussion, and experimentation, JavaScript was the very definition of “quick and dirty” in a computer-science context. Even its principal architect Brendan Eich doesn’t speak of it like an especially proud parent; he calls it “Java’s dumb little brother” and “a rush job.” Which it most certainly was: he designed and implemented JavaScript from scratch in a matter of bare weeks.

What he ended up with would revolutionize the Web not because it was good, but because it was good enough, filling a craving that turned out to be much more pressing and much more satisfiable in the here and now than the likes of in-browser word processing. The lightweight JavaScript could be used to bring the Web alive, to make it a responsive and interactive place, more quickly and organically than the heavyweight Java. Once JavaScript had reached a critical mass in that role, it just kept on rolling with all the relentlessness of a Microsoft operating system. Today an astonishing 98 percent of all webpages contain at least a little bit of JavaScript in addition to HTML, and a cottage industry has sprung up to modify and extend the language — and attempt to fix the many infelicities that haunt the sleep of computer-science professors all over the world. JavaScript has become, in other words, the modern world’s nearest equivalent to what BASIC was in the 1980s, a language whose ease of use, accessibility, and populist appeal make up for what it lacks in elegance. These days we even do online word processing in JavaScript. If you had told Brendan Eich that that would someday be the case back in 1995, he would have laughed as loud and long at you as anyone.

Although no one could know it at the time, JavaScript also represents the last major building block to the modern Web for which Marc Andreessen can take a substantial share of the credit, following on from the “image” tag for displaying inline graphics, the secure sockets layer (SSL) for online encryption (an essential for any form of e-commerce), and to a lesser extent the Java language. Microsoft, by contrast, was still very much playing catch-up.

Nevertheless, on December 7, 1995 — the symbolism of this anniversary of the United States’s entry into World War II was lost on no one — Bill Gates gave a major address to the Microsoft faithful and assembled press, in which he made it clear that Microsoft was in the browser war to win it. In addition to announcing that his company too would bite the bullet and license Java for Internet Explorer, he said that the latter browser would no longer be a Windows 95 exclusive, but would soon be made available for Windows 3 and even MacOS as well. And everywhere it appeared, it would continue to sport the very un-Microsoft price tag of free, proof that this old dog was learning some decidedly new tricks for achieving market penetration in this new era of online software distribution. “When we say the browser’s free, we’re saying something different from other people,” said Gates, in a barbed allusion to Netscape’s shareware distribution model. “We’re not saying, ‘You can use it for 90 days,’ or, ‘You can use it and then maybe next year we’ll charge you a bunch of money.'” Netscape, whose whole business revolved around its browser, couldn’t afford to give Navigator away, a fact of which Gates was only too well aware. (Some pundits couldn’t resist contrasting this stance with Gates’s famous 1976 “Open Letter To Hobbyists,” in which he had asked, “Who can afford to do professional work for nothing?” Obviously Microsoft now could…)

Netscape’s stock price dropped by $28.75 that day. For Microsoft’s research budget alone was five times the size of Netscape’s total annual revenues, while the bigger company now had more than 800 people — twice Netscape’s total headcount — working on Internet Explorer alone. Marc Andreessen could offer only vague Silicon Valley aphorisms when queried about these disparities: “In a fight between a bear and an alligator, what determines the victor is the terrain” — and Microsoft, he claimed, had now moved “onto our terrain.” The less abstractly philosophical Larry Ellison, head of the database giant Oracle and a man who had had more than his share of run-ins with Bill Gates in the past, joked darkly about the “four stages” of Microsoft stealing someone else’s innovation. Stage 1: to “ridicule” it. Stage 2: to admit that, “yeah, there are a few interesting ideas here.” Stage 3: to make its own version. Stage 4: to make the world forget that the non-Microsoft version had ever existed.

Yet for the time being the Netscape tail continued to wag the Microsoft dog. A more interactive and participatory vision of the Web, enabled by the magic of JavaScript, was spreading like wildfire by the middle of 1996. You still needed Netscape Navigator to experience this first taste of what would eventually be labelled Web 2.0, a World Wide Web that blurred the lines between readers and writers, between content consumers and content creators. For if you visited one of these cutting-edge sites with Internet Explorer, it simply wouldn’t work. Despite all of Microsoft’s efforts, Netscape in June of 1996 could still boast of a browser market share of 85 percent. Marc Andreessen’s Sun Tzu-lite philosophy appeared to have some merit to it after all; his company was by all indications still winning the browser war handily. Even in its 2.0 incarnation, which had been released at about the same time as Gates’s Pearl Harbor speech, Internet Explorer remained something of a joke among Windows users, the annoying mother-in-law you could never seem to get rid of once she showed up.

But then, grizzled veterans like Larry Ellison had seen this movie before; they knew that it was far too early to count Microsoft out. That August, both Netscape and Microsoft released 3.0 versions of their browsers. Netscape’s was a solid evolution of what had come before, but contained no game changers like JavaScript. Microsoft’s, however, was a dramatic leap forward. In addition to Java support, it introduced JScript, a lightweight scripting language that just so happened to have the same syntax as JavaScript. At a stroke, all of those sites which hadn’t worked with earlier versions of Internet Explorer now displayed perfectly well in either browser.

With his browser itself more or less on a par with Netscape’s, Bill Gates decided it was time to roll out his not-so-secret weapon. In October of 1996, Microsoft began shipping Windows 95’s “Service Pack 2,” the second substantial revision of the operating system since its launch. Along with a host of other improvements, it included Internet Explorer. From now on, the browser would ship with every single copy of Windows 95 and be installed automatically as part of the operating system, whether the user wanted it or not. New Windows users would have to make an active choice and then an active effort to go to Netscape’s site — using Internet Explorer, naturally! — and download the “alternative” browser. Microsoft was counting on the majority of these users not knowing anything about the browser war and/or just not wanting to be bothered.

Microsoft employed a variety of carrots and sticks to pressure other companies throughout the computing ecosystem to give or at the bare minimum to recommend Internet Explorer to their customers in lieu of Netscape Navigator. It wasn’t above making the favorable Windows licensing deals it signed with big consumer-computer manufacturers like Compaq dependent on precisely this. But the most surprising pact by far was the one Microsoft made with America Online (AOL).

Relations between the face of the everyday computing desktop and the face of the Internet in the eyes of millions of ordinary Americans had been anything but cordial in recent years. Bill Gates had reportedly told Steve Case, his opposite number at AOL, that he would “bury” him with his own Microsoft Network (MSN). Meanwhile Case had complained long and loud about Microsoft’s bullying tactics to the press, to the point of mooting a comparison between Gates and Adolf Hitler on at least one occasion. Now, though, Gates was willing to eat crow and embrace AOL, even at the expense of his own MSN, if he could stick it to Netscape in the process.

For its part, AOL had come as far as it could with its Booklink browser. The Web was evolving too rapidly for the little development team it had inherited with that acquisition to keep up. Case grudgingly accepted that he needed to offer his customers one of the Big Two browsers. All of his natural inclinations bent toward Netscape. And indeed, he signed a deal with Netscape to make Navigator the browser that shipped with AOL’s turnkey software suite — or so Netscape believed. It turned out that Netscape’s lawyers had overlooked one crucial detail: they had never stipulated exclusivity in the contract. This oversight wasn’t lost on the interested bystander Microsoft, which swooped in immediately to take advantage of it. AOL soon announced another deal, to provide its customers with Internet Explorer as well. Even worse for Netscape, this deal promised Microsoft not only availability but priority: Internet Explorer would be AOL’s recommended, default browser, Netscape Navigator merely an alternative for iconoclastic techies (of which there were, needless to say, very few in AOL’s subscriber base).

What did AOL get in return for getting into bed with Adolf Hitler and “jilting Netscape at the altar,” as the company’s own lead negotiator would later put it? An offer that was impossible for a man with Steve Case’s ambitions to refuse, as it happened. Microsoft would put an AOL icon on the desktop of every new Windows 95 installation, where the hundreds of thousands of Americans who were buying a computer every month in order to check out this Internet thing would see it sitting there front and center, and know, thanks to AOL’s nonstop advertising blitz, that the wonders of the Web were just one click on it away. It was a stunning concession on Microsoft’s part, not least because it came at the direct cost of MSN, the very online network Bill Gates had originally conceived as his method of “burying” AOL. Now, though, no price was too high to pay in his quest to destroy Netscape.

Which raises the question of why he was so obsessed, given that Microsoft was making literally no money from Internet Explorer. The answer is rooted in all that rhetoric that was flying around at the time about the browser as a computing platform — about the Web effectively turning into a giant computer in its own right, floating up there somewhere in the heavens, ready to give a little piece of itself to anyone with a minimalist machine running Netscape Navigator. Such a new world order would have no need for a Microsoft Windows — perish the thought! But if, on the other hand, Microsoft could wrest the title of leading browser developer out of the hands of Netscape, it could control the future evolution of this dangerously unruly beast known as the World Wide Web, and ensure that it didn’t encroach on its other businesses.

That the predictions which prompted Microsoft’s downright unhinged frenzy to destroy Netscape were themselves wildly overblown is ironic but not material. As tech journalist Merrill R. Chapman has put it, “The prediction that anyone was going to use Navigator or any other browser anytime soon to write documents, lay out publications, build budgets, store files, and design presentations was a fantasy. The people who made these breathless predictions apparently never tried to perform any of these tasks in a browser.” And yet in an odd sort of way this reality check didn’t matter. Perception can create its own reality, and Bill Gates’s perception of Netscape Navigator as an existential threat to the software empire he had spent the last two decades building was enough to make the browser war feel like a truly existential clash for both parties, even if the only one whose existence actually was threatened — urgently threatened! — was Netscape. Jim Clark, Marc Andreessen’s partner in founding Netscape, makes the eyebrow-raising claim that he “knew we were dead” in the long run well before the end of 1996, when the Department of Justice declined to respond to an urgent plea on Netscape’s part to take another look at Microsoft’s business practices.

Perhaps the most surprising aspect of the conflict is just how long Netscape’s long run proved to be. It was in most respects David versus Goliath: Netscape in 1996 had $300 million in annual revenues to Microsoft’s nearly $9 billion. But whatever the disparities of size, Netscape had built up a considerable reservoir of goodwill as the vehicle through which so many millions had experienced the Web for the first time. Microsoft found this soft power oddly tough to overcome, even with a browser of its own that was largely identical in functional terms. A remarkable number of people continued to make the active choice to use Netscape Navigator instead of the passive one to use Internet Explorer. By October of 1997, one year after Microsoft brought out the big gun and bundled Internet Explorer right into Windows 95, its browser’s market share had risen as high as 39 percent — but it was Netscape that still led the way at 51 percent.

Yet Netscape wasn’t using those advantages it did possess all that effectively. It was not a happy or harmonious company: there were escalating personality clashes between Jim Clark and Marc Andreessen, and also between Andreessen and his programmers, who thought their leader had become a glory hound, too busy playing the role of the young dot.com millionaire to pay attention to the vital details of software development. Perchance as a result, Netscape’s drive to improve its browser in paradigm-shifting ways seemed to slowly dissipate after the landmark Navigator 2.0 release.

Netscape, so recently the darling of the dot.com age, was now finding it hard to make a valid case for itself merely as a viable business. The company’s most successful quarter in financial terms was the third of 1996 — just before Internet Explorer became an official part of Windows 95 — when it brought in $100 million in revenue. Receipts fell precipitously after that point, all the way down to just $18.5 million in the last quarter of 1997. By so aggressively promoting Internet Explorer as entirely and perpetually free, Bill Gates had, whether intentionally or inadvertently, instilled in the general public an impression that all browsers were or ought to be free, due to some unstated reason inherent in their nature. (This impression has never been overturned, as has been testified over the years by the failure of otherwise worthy commercial browsers like Opera to capture much market share.) Thus even the vast majority of those who did choose Netscape’s browser no longer seemed to feel any ethical compulsion to pay for it. Netscape was left in a position all too familiar to Web firms of the past and present alike: that of having immense name recognition and soft power, but no equally impressive revenue stream to accompany them. It tried frantically to pivot into back-end server architecture and corporate intranet solutions, but its efforts there were, as its bottom line will attest, not especially successful. It launched a Web portal and search engine known as Netcenter, but struggled to gain traction against Yahoo!, the leader in that space. Both Jim Clark and Marc Andreessen sold off large quantities of their personal stock, never a good sign in Silicon Valley.

Netscape Navigator was renamed Netscape Communicator for its 4.0 release in June of 1997. As the name would imply, Communicator was far more than just a browser, or even just a browser with an integrated email client and Usenet reader, as Navigator had been since version 2.0. Now it also sported an integrated editor for making your own websites from scratch, a real-time chat system, a conference caller, an appointment calendar, and a client for “pushing” usually unwanted content to your screen. It was all much, much too much, weighted down with features most people would never touch, big and bloated and slow and disturbingly crash-prone; small wonder that even many Netscape loyalists chose to stay with Navigator 3 after the release of Communicator. Microsoft had not heretofore been known for making particularly svelte software, but Internet Explorer, which did nothing but browse the Web, was a lean ballerina by comparison with the lumbering Sumo wrestler that was Netscape Communicator. The original Netscape Navigator had sprung from the hacker culture of institutional computing, but the company had apparently now forgotten one of that culture’s key dictums in its desire to make its browser a platform unto itself: the best programs are those that do only one thing, but do that one thing very, very well, leaving all of the other things to other programs.

Netscape Communicator. I’m told that there’s an actual Web browser buried somewhere in this pile. Probably a kitchen sink too, if you look hard enough.

Luckily for Netscape, Internet Explorer 4.0, which arrived three months after Communicator, violated the same dictum in an even more inept way. It introduced what Microsoft called the “Active Desktop,” which let it bury its hooks deeper than ever into Windows itself. The Active Desktop was — or tried to be —  Bill Gates’s nightmare of a Web that was impossible to separate from one’s local computer come to life, but with Microsoft’s own logo on it. Ironically, it blurred the distinction between the local computer and the Internet more thoroughly than anything the likes of Sun or Netscape had produced to date; local files and applications became virtually indistinguishable from those that lived on the Internet in the new version of the Windows desktop it installed in place of the old. The end result served mainly to illustrate how half-baked all of the prognostications about a new era of computing exclusively in the cloud really were. The Active Desktop was slow and clumsy and confusing, and absolutely everyone who was exposed to it seemed to hate it and rush to find a way to turn it off. Fortunately for Microsoft, it was possible to do so without removing the Internet Explorer 4 browser itself.

The dreaded Active Desktop. Surprisingly, it was partially defended on philosophical grounds by Tim Berners-Lee, not normally a fan of Microsoft. “It was ridiculous for a person to have two separate interfaces, one for local information (the desktop for their own computer) and one for remote information (a browser to reach other computers),” he writes. “Why did we need an entire desktop for our own computer, but only get little windows through which to view the rest of the planet? Why, for that matter, should we have folders on our desktop but not on the Web? The Web was supposed to be the universe of all accessible information, which included, especially, information that happened to be stored locally. I argued that the entire topic of where information was physically stored should be made invisible to the user.” For better or for worse, though, the public didn’t agree. And even he had to allow that “this did not have to imply that the operating system and browser should be the same program.”

The Active Desktop damaged Internet Explorer’s reputation, but arguably not as badly as Netscape’s had been damaged by the bloated Communicator. For once you turned off all that nonsense, Internet Explorer 4 proved to be pretty good at doing the rest of its job. But there was no similar method for trimming the fat from Netscape Communicator.

While Microsoft and Netscape, those two for-profit corporations, had been vying with one another for supremacy on the Web, another, quieter party had been looking on with great concern. Before the Web had become the hottest topic of the business pages, it had been an idea in the head of the mild-mannered British computer scientist Tim Berners-Lee. He had built the Web on the open Internet, using a new set of open standards; his inclination had never been to control his creation personally. It was to be a meeting place, a library, a forum, perhaps a marketplace if you liked — but always a public commons. When Berners-Lee formed the non-profit World Wide Web Consortium (W3C) in October of 1994 in the hope of guiding an orderly evolution of the Web that kept it independent of the moneyed interests rushing to join the party, it struck many as a quaint endeavor at best. Key technologies like Java and JavaScript appeared and exploded in popularity without giving the W3C a chance to say anything about them. (Tellingly, the word “JavaScript” never even appears in Berners-Lee’s 1999 book about his history with and vision for the Web, despite the scripting language’s almost incalculable importance to making it the dynamic and diverse place it had become by that point.)

From the days when he had been a mere University of Illinois student making a browser on the side, Marc Andreessen had blazed his own trail without giving much thought to formal standards. When the things he unilaterally introduced proved useful, others rushed to copy them, and they became de-facto standards. This was as true of JavaScript as it was of anything else. As we’ve seen, it began as a Netscape-exclusive feature, but was so obviously transformative to what the Web could do and be that Microsoft had no choice but to copy it, to incorporate its own implementation of it into Internet Explorer.

But JavaScript was just about the last completely new feature to be rolled out and widely adopted in this ad-hoc fashion. As the Web reached a critical mass, with Netscape Navigator and Internet Explorer both powering users’ experiences of it in substantial numbers, site designers had a compelling reason not to use any technology that only worked on the one or the other; they wanted to reach as many people as possible, after all. This brought an uneasy sort of equilibrium to the Web.

Nevertheless, the first instinct of both Netscape and Microsoft remained to control rather than to share the Web. Both companies’ histories amply demonstrated that open standards meant little to them; they preferred to be the standard. What would happen if and when one company won the browser war, as Microsoft seemed slowly to be doing by 1997, what with the trend lines all going in its favor and Netscape in veritable financial free fall? Once 90 percent or more of the people browsing the Web were doing so with Internet Explorer, Microsoft would be free to give its instinct for dominance free rein. With an army of lawyers at its beck and call, it would be able to graft onto the Web proprietary, patented technologies that no upstart competitor would be able to reverse-engineer and copy, and pragmatic website designers would no longer have any reason not to use them, if they could make their sites better. And once many or most websites depended on these features that were available only in Internet Explorer, that would be that for the open Web. Despite its late start, Microsoft would have managed to embrace, extend, and in a very real sense destroy Tim Berners-Lee’s original vision of a World Wide Web. The public commons would have become a Microsoft-branded theme park.

These worries were being bandied about with ever-increasing urgency in January of 1998, when Netscape made what may just have been the most audacious move of the entire dot.com boom. Like most such moves, it was born of sheer desperation, but that shouldn’t blind us to its importance and even bravery. First of all, Netscape made its browser free as in beer, finally giving up on even asking people to pay for the thing. Admittedly, though, this in itself was little more than an acceptance of the reality on the ground, as it were. It was the other part of the move that really shocked the tech world: Netscape also made its browser free as in freedom — it opened up its source code to all and sundry. “This was radical in its day,” remembers Mitchell Baker, one of the prime drivers of the initiative at Netscape. “Open source is mainstream now; it was not then. Open source was deep, deep, deep in the technical community. It never surfaced in a product. [This] was a very radical move.”

Netscape spun off a not-for-profit organization, led by Baker and called Mozilla, after a cartoon dinosaur that had been the company’s office mascot almost from day one. Coming well before the Linux operating system began conquering large swaths of corporate America, this was to be open source’s first trial by fire in the real world. Mozilla was to concentrate on the core code required for rendering webpages — the engine room of a browser, if you will. Then others — not least among them the for-profit arm of Netscape — would build the superstructures of finished applications around that sturdy core.

Alas, Netscape the for-profit company was already beyond saving. If anything, this move only hastened the end; Netscape had chosen to give away the one product it had that some tiny number of people were still willing to pay for. Some pundits talked it up as a dying warrior’s last defiant attempt to pass the sword to others, to continue the fight against Microsoft and Internet Explorer: “From the depths of Hell, I spit at thee!” Or, as Tim Berners-Lee put it more soberly: “Microsoft was bigger than Netscape, but Netscape was hoping the Web community was bigger than Microsoft.” And there may very well be something to these points of view. But regardless of the motivations behind it, the decision to open up Netscape’s browser proved both a landmark in the history of open-source software and a potent weapon in the fight to keep the Web itself open and free. Mozilla has had its ups and downs over the years since, but it remains with us to this day, still providing an alternative to the corporate-dominated browsers almost a quarter-century on, having outlived the more conventional corporation that spawned it by a factor of six.

Mozilla’s story is an important one, but we’ll have to leave the details of it for another day. For now, we return to the other players in today’s drama.

While Microsoft and Netscape were battling one another, AOL was soaring into the stratosphere, the happy beneficiary of Microsoft’s decision to give it an icon on the Windows 95 desktop in the name of vanquishing Netscape. In 1997, in a move fraught with symbolic significance, AOL bought CompuServe, its last remaining competitor from the pre-Web era of closed, proprietary online services. By the time Netscape open-sourced its browser, AOL had 12 million subscribers and annual profits — profits, mind you, not revenues — of over $500 million, thanks not only to subscription fees but to the new frontier of online advertising, where revenues and profits were almost one and the same. At not quite 40 years old, Steve Case had become a billionaire.

“AOL is the Internet blue chip,” wrote the respected stock analyst Henry Blodget. And indeed, for all of its association with new and shiny technology, there was something comfortingly stolid — even old-fashioned — about the company. Unlike so many of his dot.com compatriots, Steve Case had found a way to combine name recognition and a desirable product with a way of getting his customers to actually pay for said product. He liked to compare AOL with a cable-television provider; this was a comparison that even the most hidebound investors could easily understand. Real, honest-to-God checks rolled into AOL’s headquarters every month from real, honest-to-God people who signed up for real, honest-to-God paid subscriptions. So what if the tech intelligentsia laughed and mocked, called AOL “the cockroach of cyberspace,” and took an “@AOL.com” suffix on someone’s email address as a sign that they were too stupid to be worth talking to? Case and his shareholders knew that money from the unwashed masses spent just as well as money from the tech elites.

Microsoft could finally declare victory in the browser war in the summer of 1998, when the two browsers’ trend lines crossed one another. At long last, Internet Explorer’s popularity equaled and then rapidly eclipsed that of Netscape Navigator/Communicator. It hadn’t been clean or pretty, but Microsoft had bludgeoned its way to the market share it craved.

A few months later, AOL acquired Netscape through a stock swap that involved no cash, but was worth a cool $9.8 billion on paper — an almost comical sum in relation to the amount of actual revenue the purchased company had brought in during its lifetime. Jim Clark and Marc Andreessen walked away very, very rich men. Just as Netscape’s big IPO had been the first of its breed, the herald of the dot.com boom, Netscape now became the first exemplar of the boom’s unique style of accounting, which allowed people to get rich without ever having run a profitable business.

Even at the time, it was hard to figure out just what it was about Netscape that AOL thought was worth so much money. The deal is probably best understood as a product of Steve Case’s fear of a Microsoft-dominated Web; despite that AOL icon on the Windows desktop, he still didn’t trust Bill Gates any farther than he could throw him. In the end, however, AOL got almost nothing for its billions. Netscape Communicator was renamed AOL Communicator and offered to the service’s subscribers, but even most of them, technically unsophisticated though they tended to be, could see that Internet Explorer was the cleaner and faster and just plain better choice at this juncture. (The open-source coders working with Mozilla belatedly realized the same; they would wind up spending years writing a brand-new browser engine from scratch after deciding that Netscape’s just wasn’t up to snuff.)

Most of Netscape’s remaining engineers walked soon after the deal was made. They tended to describe the company’s meteoric rise and fall in the terms of a Shakespearean tragedy. “At least the old timers among us came to Netscape to change the world,” lamented one. “Getting killed by the Evil Empire, being gobbled up by a big corporation — it’s incredibly sad.” If that’s painting with rather too broad a brush — one should always run away screaming when a Silicon Valley denizen starts talking about “changing the world” — it can’t be denied that Netscape at no time enjoyed a level playing field in its war against Microsoft.

But times do change, as Microsoft was about to learn to its cost. In May of 1998, the Department of Justice filed suit against Microsoft for illegally exploiting its Windows monopoly in order to crush Netscape. The suit came too late to save the latter, but it was all over the news even as the first copies of Windows 98, the hotly anticipated successor to Windows 95, were reaching store shelves. Bill Gates had gotten his wish; Internet Explorer and Windows were now indissolubly bound together. Soon he would have cause to wish that he had not striven for that outcome quite so vigorously.

(Sources: the books Overdrive: Bill Gates and the Race to Control Cyberspace by James Wallace, The Silicon Boys by David A. Kaplan, Architects of the Web by Robert H. Reid, Competing on Internet Time: Lessons from Netscape and Its Battle with Microsoft by Michael Cusumano and David B. Yoffie, dot.con: The Greatest Story Ever Sold by John Cassidy, Stealing Time: Steve Case, Jerry Levin, and the Collapse of AOL Time Warner by Alec Klein, Fools Rush In: Steve Case, Jerry Levin, and the Unmaking of AOL Time Warner by Nina Munk, There Must be a Pony in Here Somewhere: The AOL Time Warner Debacle by Kara Swisher, In Search of Stupidity: Over Twenty Years of High-Tech Marketing Disasters by Merrill R. Chapman, Coders at Work: Reflections on the Craft of Programming by Peter Seibel, and Weaving the Web by Tim Berners-Lee. Online sources include “1995: The Birth of JavaScript” at Web Development History, the New York Times timeline of AOL’s history, and Mitchell Baker’s talk about the history of Mozilla, which is available on Wikipedia.)


Posted on December 23, 2022 in Digital Antiquaria, Interactive Fiction


Doing Windows, Part 11: The Internet Tidal Wave

On August 6, 1991, when Microsoft was still in the earliest planning stages of creating the operating system that would become known as Windows 95, an obscure British researcher named Tim Berners-Lee, working out of the Conseil Européen pour la Recherche Nucléaire (CERN) in Switzerland, put the world’s first publicly accessible website online. For years to come, these two projects would continue to evolve separately, blissfully unconcerned by if not unaware of one another’s existence. And indeed, it is difficult to imagine two computing projects with more opposite personalities. Mirroring its co-founder and CEO Bill Gates, Microsoft was intensely pragmatic and maniacally competitive. Tim Berners-Lee, on the other hand, was a classic academic, a theorist and idealist rather than a businessman. The computers on which he and his ilk built the early Web ran esoteric operating systems like NeXTSTEP and Unix, or at their most plebeian MacOS, not Microsoft’s mass-market workhorse Windows. Microsoft gave you tools for getting everyday things done, while the World Wide Web spent the first couple of years of its existence as little more than an airy proof of concept, to be evangelized by wide-eyed adherents who often appeared to have read one too many William Gibson novels. Forbes magazine was soon to anoint Bill Gates the world’s richest person, his reward for capturing almost half of the international software market; the nascent Web was nowhere to be found in the likes of Forbes.

Those critics who claim that Microsoft was never a visionary company — that it instead thrived by letting others innovate, then swooping in and taking over the markets thus opened — love to point to its history with the World Wide Web as Exhibit Number One. Despite having a role which presumably demanded that he stay familiar with all leading-edge developments in computing, Bill Gates by his own admission never even heard of the Web until April of 1993, twenty months after that first site went up. And he didn’t actually surf the Web for himself until another six months after that — perhaps not coincidentally, shortly after a Windows version of NCSA Mosaic, the user-friendly graphical browser that made the Web a welcoming place even for those whose souls didn’t burn with a passion for information theory, had finally been released.

Gates focused instead on a different model of online communication, one arguably more in keeping with his instincts than was the free and open Web. For almost a decade and a half by 1993, various companies had been offering proprietary dial-up services aimed at owners of home computers. These came complete with early incarnations of many of the staples of modern online life: email, chat lines, discussion forums, online shopping, online banking, online gaming, even online dating. They were different from the Web in that they were walled gardens that provided no access to anything that lay beyond the big mainframes that hosted them. Yet within their walls lived bustling communities whose citizens paid their landlords by the minute for the privilege of participation.

The 500-pound gorilla of this market had always been CompuServe, which had been in the business since the days when a state-of-the-art home computer had 16 K of memory and used cassette tapes for storage. Of late, however, an upstart service called America Online (AOL) had been making waves. Under Steve Case, its wunderkind CEO, AOL aimed its pitch straight at the heart of Middle America rather than the tech-savvy elite. Over the course of 1993 alone, it went from 300,000 to 500,000 subscribers. But that was only the beginning if one listened to Case. For a second Home Computer Revolution, destined to be infinitely more successful and long-lasting than the first, was now in full swing, powered along by the ease of use of Windows 3 and by the latest consumer-grade hardware, which made computing faster and more aesthetically attractive than it had ever been before. AOL’s quick and easy custom software fit in perfectly with these trends. Surely this model of the online future — of curated content offered up by a firm whose stated ambition was to be the latest big player in mass media as a whole; of a subscription model that functioned much like the cable television which the large majority of Americans were already paying for — was more likely to take hold than the anarchic jungle that was the World Wide Web. It was, at any rate, a model that Bill Gates could understand very well, and naturally gravitated toward. Never one to leave cash on the table, he started asking himself how Microsoft could get a piece of this action as well.

Steve Case celebrates outside the New York Stock Exchange on March 19, 1992, the day America Online went public.

Gates proceeded in his standard fashion: in May of 1993, he tried to buy AOL outright. But Steve Case, who nursed dreams of becoming a media mogul on the scale of Walt Disney or Jack Warner, turned him down flat. At this juncture, Russ Siegelman, a 33-year-old physicist-by-education whom Gates had made his point man for online strategy, suggested a second classically Microsoft solution to the dilemma: they could build their own online service that copied AOL in most respects, then bury their rival with money and sheer ubiquity. They could, Siegelman suggested, make their own network an integral part of the eventual Windows 95, make signing up for it just another step in the installation process. How could AOL possibly compete with that? It was the first step down a fraught road that would lead to widespread outrage inside the computer industry and one of the most high-stakes anti-trust investigations in the history of American business — but for all that, the broad strategy would prove very, very effective once it reached its final form. It had a ways still to go at this stage, though, targeting as it did AOL instead of the Web.

Gates put Siegelman in charge of building Microsoft’s online service, which was code-named Project Marvel. “We were not thinking about the Internet at all,” admits one of the project’s managers. “Our competition was CompuServe and America Online. That’s what we were focused on, a proprietary online service.” At the time, there were exactly two computers in Microsoft’s sprawling Redmond, Washington, campus that were connected to the Internet. “Most college kids knew much more than we did because they were exposed to it,” says the Marvel manager. “If I had wanted to connect to the Internet, it would have been easier for me to get into my car and drive over to the University of Washington than to try and get on the Internet at Microsoft.”

It came down to the old “not invented here” syndrome that dogs so many large institutions, as well as the fact that the Web and the Internet on which it lived were free, and Bill Gates tended to hold that which was free in contempt. Anyone who attempted to help him over his mental block — and there were more than a few of them at Microsoft — was greeted with an all-purpose rejoinder: “How are we going to make money off of free?” The biggest revolution in computing since the arrival of the first pre-assembled personal computers back in 1977 was taking place all around him, and Gates seemed constitutionally incapable of seeing it for what it was.

In the meantime, others were beginning to address the vexing question of how you made money out of free. On April 4, 1994, Marc Andreessen, the impetus behind the NCSA Mosaic browser, joined forces with Jim Clark, a veteran Silicon Valley entrepreneur, to found Netscape Communications for the purpose of making a commercial version of the Mosaic browser. A team of programmers, working without consulting the Mosaic source code so as to avoid legal problems, soon did just that, and uploaded Netscape Navigator to the Web on October 13, 1994. Distributed under the shareware model, with a $39 licensing fee requested but not demanded after a 90-day trial period was up, the new browser was installed on more than 10 million computers within nine months.

AOL’s growth had continued apace despite the concurrent explosion of the open Web; by the time of Netscape Navigator’s release, the service had 1.25 million subscribers. Yet Steve Case, no one’s idea of a hardcore techie, was ironically faster to see the potential — or threat — of the Web than was Bill Gates. He adopted a strategy in response that would make him, for a time at least, a superhero of the business press and the investor set. Instead of fighting the Web, AOL would embrace it — would offer its own Web browser to go along with its proprietary content, thereby adding a gate to its garden wall and tempting subscribers with the best of both worlds. As always for AOL, the whole package would be pitched toward neophytes, with a friendly interface and lots of safeguards — “training wheels,” as the tech cognoscenti dismissively dubbed them — to keep the unwashed masses safe when they did venture out into the untamed wilds of the Web.

But Case needed a browser of his own in order to execute his strategy, and he needed it in a hurry. He needed, in short, to buy a browser rather than build one. He saw three possibilities. One was to bring Netscape and its Navigator into the AOL fold. Another was a small company called Spyglass, a spinoff of the National Center for Supercomputing Applications (NCSA) which was attempting to commercialize the original NCSA Mosaic browser. And the last was a startup called Booklink Technologies, which was making a browser from scratch.

Netscape was undoubtedly the superstar of the bunch, but that didn’t help AOL’s cause any; Marc Andreessen and Jim Clark weren’t about to sell out to anyone. Spyglass, on the other hand, struck Case as an unimaginative Johnny-come-lately that was trying to shut the barn door long after the horse called Netscape had busted out. That left only Booklink. In November of 1994, AOL paid $30 million for the company. The business press scoffed, deeming it a well-nigh flabbergasting over-payment. But Case would get the last laugh.

While AOL was thus rushing urgently to “embrace and extend” the Web, to choose an ominous phrase normally associated with Microsoft, the latter was dawdling along more lackadaisically toward a reckoning with the Internet. During that same busy fall of 1994, IBM released OS/2 3.0, which was marketed as OS/2 Warp in the hope of lending it some much-needed excitement. By either name, it was the latest iteration of an operating system that IBM had originally developed in partnership with Microsoft, an operating system that had once been regarded by both companies as nothing less than the future of mainstream computing. But since the pair’s final falling out in 1991, OS/2 had become an irrelevancy in the face of the Windows juggernaut, winning a measure of affection only in some hacker circles and a few other specialized niches. Despite its snazzy new name and despite being an impressive piece of software from a purely technical perspective, OS/2 Warp wasn’t widely expected to change those fortunes before its release, and this lack of expectations proved well-founded afterward. Yet it was a landmark in another way, being the first operating system to include a Web browser as an integral component, in this case a program called Web Explorer, created by IBM itself because no one else seemed much interested in making a browser for the unpopular OS/2.

This appears to have gotten some gears turning in Bill Gates’s head. Microsoft already planned to include more networking tools than ever before in Windows 95. They had, for example, finally decided to bow to customer demand and build right into the operating system TCP/IP, the networking protocol that allowed a computer to join the Internet; Windows 3 required the installation of a third-party add-on for the same purpose. (“I don’t know what it is, and I don’t want to know what it is,” said Steve Ballmer, Gates’s right-hand man, to his programmers on the subject of TCP/IP. “[But] my customers are screaming about it. Make the pain go away.”) Maybe a Microsoft-branded Web browser for Windows 95 would be a good idea as well, if they could acquire one without breaking the bank.

Just days after AOL bought Booklink for $30 million, Microsoft agreed to give $2 million to Spyglass. In return, Spyglass would give Microsoft a copy of the Mosaic source code, which it could then use as the basis for its own browser. But, lest you be tempted to see this transaction as evidence that Gates’s opinions about the online future had already undergone a sea change by this date, know that the very day this deal went down was also the one on which he chose to publicly announce Microsoft’s own proprietary AOL competitor, to be known as simply the Microsoft Network, or MSN. At most, Gates saw the open Web at this stage as an adjunct to MSN, just as it would soon become to AOL. MSN would come bundled into Windows 95, he told the assembled press, so that anyone who wished to could become a subscriber at the click of a mouse.

The announcement caused alarm bells to ring at AOL. “The Windows operating system is what the dial tone is to the phone industry,” said Steve Case. He thus became neither the first nor the last of Gates’s rivals to hint at the need for government intervention: “There needs to be a level playing field on which companies compete.” Some pundits projected that Microsoft might sign up 20 million subscribers to MSN before 1995 was out. Others — the ones whom time would prove to have been more prescient — shook their heads and wondered how Microsoft could still be so clueless about the revolutionary nature of the World Wide Web.

AOL leveraged the Booklink browser to begin offering its subscribers Web access very early in 1995, whereupon its previously robust rate of growth turned downright torrid. By November of 1995, it would have 4 million subscribers. The personable and photogenic Steve Case became a celebrity in his own right, to the point of starring in a splashy advertising campaign for The Gap’s line of khakis; the man and the pants represented respectively the personification and the uniform of the trend in corporate America toward “business casual.” Meanwhile Case’s company became an indelible part of the 1990s zeitgeist. “You’ve got mail!,” the phrase AOL’s software spoke every time a new email arrived — something that was still very much a novel experience for many subscribers — was featured as a sample in a Prince song, and eventually became the name of a hugely popular romantic comedy starring Tom Hanks and Meg Ryan. CompuServe and AOL’s other old rivals in the proprietary space tried to compete by setting up Internet gateways of their own, but were never able to negotiate the transition from one era of online life to another with the same aplomb as AOL, and gradually faded into irrelevancy.

Thankfully for Microsoft’s shareholders, Bill Gates’s eyes were opened before his company suffered the same fate. At the eleventh hour, with what were supposed to be the final touches being put onto Windows 95, he made a sharp swerve in strategy. He grasped at last that the open Web was the here, the now, and the future, the first major development in mainstream consumer computing in years that hadn’t been more or less dictated by Microsoft — but be that as it may, the Web wasn’t going anywhere. On May 26, 1995, he wrote a memo to every Microsoft employee that exuded an all-hands-on-deck sense of urgency. Gates, the longstanding Internet agnostic, had well and truly gotten the Internet religion.

I want to make clear that our focus on the Internet is critical to every part of our business. The Internet is the most important single development to come along since the IBM PC was introduced in 1981. It is even more important than the arrival of [the] graphical user interface (GUI). The PC analogy is apt for many reasons. The PC wasn’t perfect. Aspects of the PC were arbitrary or even poor. However, a phenomena [sic] grew up around the IBM PC that made it a key element of everything that would happen for the next fifteen years. Companies that tried to fight the PC standard often had good reasons for doing so, but they failed because the phenomena overcame any weakness that [the] resistors identified.

Over the last year, a number of people [at Microsoft] have championed embracing TCP/IP, hyperlinking, HTML, and building clients, tools, and servers that compete on the Internet. However, we still have a lot to do. I want every product plan to try and go overboard on Internet features.

Everything changed that day. Instead of walling its campus off from the Internet, Microsoft put the Web at every employee’s fingertips. Gates himself sent his people lists of hot new websites to explore and learn from. The team tasked with building the Microsoft browser, who had heretofore labored in under-staffed obscurity, suddenly had all the resources of the company at their beck and call. The fact was, Gates was scared; his fear oozes palpably from the aggressive language of the memo above. (Other people talked of “joining” the Internet; Gates wanted to “compete” on it.)

But just what was he so afraid of? A pair of data points provides us with some clues. Three days before he wrote his memo, a new programming language and run-time environment had taken the industry by storm. And the day after he did so, a Microsoft executive named Ben Slivka sent out a memo of his own with Gates’s blessing, bearing the odd title of “The Web Is the Next Platform.” To understand what Slivka was driving at, and why Bill Gates took it as such an imminent existential threat to his company’s core business model, we need to back up a few years and look at the origins of the aforementioned programming language.


Bill Joy, an old-school hacker who had made fundamental contributions to the Unix operating system, was regarded as something between a guru and an elder statesman by 1990s techies, who liked to call him “the other Bill.” In early 1991, he shared an eye-opening piece of his mind at a formal dinner for select insiders. Microsoft was then on the ascendant, he acknowledged, but they were “cruising for a bruising.” Sticking with the automotive theme, he compared their products to the American-made cars that had dominated until the 1970s — until the Japanese had come along peddling cars of their own that were more efficient, more reliable, and just plain better than the domestic competition. He said that the same fate would probably befall Microsoft within five to seven years, when a wind of change of one sort or another came along to upend the company and its bloated, ugly products. Just four years later, people would be pointing to a piece of technology from his own company, Sun Microsystems, as the prophesied agent of Microsoft’s undoing.

Sun had been founded in 1982 to leverage the skills of Joy along with those of a German hardware engineer named Andy Bechtolsheim, who had recently built an elegant desktop computer inspired by the legendary Alto machines of Xerox’s Palo Alto Research Center. Over the remainder of the 1980s, Sun made a good living as the premier maker of Unix-based workstations: computers that were a bit too expensive to be marketed to even the most well-heeled consumers, but were among the most powerful of their day that could be fit onto or under a single desktop. Sun possessed a healthy antipathy for Microsoft, for all of the usual reasons cited by the hacker contingent: they considered Microsoft’s software derivative and boring, considered the Intel hardware on which it ran equally clunky and kludgy (Sun first employed Motorola chips, then processors of their own design), and loathed Microsoft’s intensely adversarial and proprietorial approach to everything it touched. For some time, however, Sun’s objections remained merely philosophical; occupying opposite ends of the market as they did, the two companies seldom crossed one another’s paths. But by the end of the decade, the latest Intel hardware had advanced enough to be comparable with that being peddled by Sun. And by the time that Bill Joy made his prediction, Sun knew that something called Windows NT was in the works, knew that Microsoft would be coming in earnest for the high-end-computing space very soon.

About six months after Joy played the oracle, Sun’s management agreed to allow one of their star programmers, a fellow named James Gosling, to form a small independent group in order to explore an idea that had little obviously to do with the company’s main business. “When someone as smart as James wants to pursue an area, we’ll do our best to provide an environment,” said Chief Technology Officer Eric Schmidt.

James Gosling

The specific “area” — or, perhaps better said, problem — that Gosling wanted to address was one that still exists to a large extent today: the inscrutability and lack of interoperability of so many of the gadgets that power our daily lives. The problem would be neatly crystallized almost five years later by one of the milquetoast jokes Jay Leno made at the Windows 95 launch, about how the VCR in even Bill Gates’s living room was still blinking “12:00” because he had never figured out how to set the thing’s clock. What if everything in your house could be made to talk together, wondered Gosling, so that setting one clock would set all of them — so that you didn’t have to have a separate remote control for your television and your VCR, each with about 80 buttons whose functions you didn’t understand and never, ever pressed. “What does it take to watch a videotape?” he mused. “You go plunk, plunk, plunk on all of these things in certain magic sequences before you can actually watch your videotape! Why is it so hard? Wouldn’t it be nice if you could just slide the tape into the VCR, [and] the system sort of figures it out: ‘Oh, gee, I guess he wants to watch it, so I ought to power up the television set.'”

But when Gosling and his colleagues started to ponder how best to realize their semi-autonomous home of the future, they tripped over a major stumbling block. While it was true that more and more gadgets were becoming “smart,” in the sense of incorporating programmable microprocessors, the details of their digital designs varied enormously. Each program to link each individual model of, say, VCR into the home network would have to be written, tested, and debugged from scratch. Unless, that is, the program could be made to run in a virtual machine.

A virtual machine is an imaginary computer which a real computer can be programmed to simulate. It permits a “write once, run everywhere” approach to software: once a given real computer has an interpreter for a given virtual machine, it can run any and all programs that have been or will be written for that virtual machine, albeit at some cost in performance.
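To make the idea concrete, here is a minimal sketch of such an interpreter, written in Java for thematic reasons. The instruction set below is invented purely for illustration and has nothing to do with the real JVM’s bytecode; the point is only that any host which implements this one loop can run the same “program” unchanged.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A toy stack-based virtual machine. Any real computer that implements
// this interpreter can run the same bytecode, unmodified: "write once,
// run everywhere," at the cost of interpreting rather than executing
// the program directly.
public class ToyVM {
    // Opcodes for our imaginary machine (hypothetical, not JVM bytecode)
    static final int PUSH = 0, ADD = 1, MUL = 2, HALT = 3;

    static int run(int[] bytecode) {
        Deque<Integer> stack = new ArrayDeque<>();
        int pc = 0; // program counter
        while (true) {
            switch (bytecode[pc++]) {
                case PUSH -> stack.push(bytecode[pc++]); // operand follows opcode
                case ADD  -> stack.push(stack.pop() + stack.pop());
                case MUL  -> stack.push(stack.pop() * stack.pop());
                case HALT -> { return stack.pop(); }
                default   -> throw new IllegalStateException("bad opcode");
            }
        }
    }

    public static void main(String[] args) {
        // Computes (2 + 3) * 4; the identical bytecode would run on
        // every gadget equipped with the interpreter above.
        int[] program = { PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, HALT };
        System.out.println(run(program)); // prints 20
    }
}
```

The interpreter itself must be ported to each new device, of course; but that is a one-time cost, after which every program ever written for the virtual machine runs there for free.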

Like almost every other part of the programming language that would eventually become known as Java, the idea of a virtual machine was far from new in the abstract. (“In some sense, I would like to think that there was nothing invented in Java,” says Gosling.) For example, a decade before Gosling went to work on his virtual machine, the Apple Pascal compiler was already targeting one that ran on the lowly Apple II, even as the games publisher Infocom was distributing its text adventures across dozens of otherwise incompatible platforms thanks to its Z-Machine.

Unfortunately, Gosling’s new implementation of this old concept proved unable to solve by itself the original problem for which it had been invented. Even Wi-Fi didn’t exist at this stage, much less the likes of Bluetooth. Just how were all of these smart gadgets supposed to actually talk to one another, to say nothing of pulling down the regular software updates which Gosling envisioned as another benefit of his project? (Building a floppy-disk drive into every toaster was an obvious nonstarter.) After reluctantly giving up on their home of the future, the team pivoted for a while toward “interactive television,” a would-be on-demand streaming system much like our modern Netflix. But Sun had no real record in the consumer space, and cable-television providers and other possible investors were skeptical.

While Gosling was trying to figure out just what this programming language and associated runtime environment he had created might be good for, the World Wide Web was taking off. In July of 1994, a Sun programmer named Patrick Naughton did something that would later give Bill Gates nightmares: he wrote a fairly bare-bones Web browser in Java, more for the challenge than anything else. A couple of months later there came the eureka moment: Naughton and another programmer named Jonathan Payne made it possible to run other Java programs, or “applets” as they would soon be known, right inside their browser. They stuck one of the team’s old graphical demos on a server and clicked the appropriate link, whereupon they were greeted with a screen full of dancing Coca-Cola cans. Payne found it “breathtaking”: “It wasn’t just playing an animation. It was physics calculations going on inside a webpage!”

In order to appreciate his awe, we need to understand what a static place the early Web was. HTML, the “language” in which pages were constructed, was an abbreviation for “Hypertext Markup Language.” In form and function, it was more akin to a typesetting specification than a Turing-complete programming language like C or Pascal or Java; the only form of interactivity it allowed for was the links that took the reader from static page to static page, while its only visual pizazz came in the form of static in-line images (themselves a relatively recent addition to the HTML specification, thanks to NCSA Mosaic). Java stood to change all that at a stroke. If you could embed programs running actual code into your page layouts, you could in theory turn your pages into anything you wanted them to be: games, word processors, spreadsheets, animated cartoons, stock-market tickers, you name it. The Web could almost literally come alive.

The potential was so clearly extraordinary that Java went overnight from a moribund project on the verge of cancellation to Sun’s top priority. Even Bill Joy, now living in blissful semi-retirement in Colorado, came back to Silicon Valley for a while to lend his prodigious intellect to the process of turning Java into a polished tool for general-purpose programming. There was still enough of the old-school hacker ethic left at Sun that management bowed to the developers’ demand that the language be made available for free to individual programmers and small businesses; Sun would make its money on licensing deals with bigger partners, who would pay for the Java logo on their products and the right to distribute the virtual machine. The potential of Java certainly wasn’t lost on Netscape’s Marc Andreessen, who had long been leading the charge to make the Web more visually exciting. He quickly agreed to pay Sun $750,000 for the opportunity to build Java into the Netscape Navigator browser. In fact, it was Andreessen who served as master of ceremonies at Java’s official coming-out party at a SunWorld conference on May 23, 1995 — i.e., three days before Bill Gates wrote his urgent Internet memo.

What was it that so spooked him about Java? On the one hand, it represented a possible if as-yet unrealized challenge to Microsoft’s own business model of selling boxed software on floppy disks or CDs. If people could gain access to a good word processor just by pointing their browsers to a given site, they would presumably have little motivation to invest in Microsoft Office, the company’s biggest cash cow after Windows. But the danger Java posed to Microsoft might be even more extreme. The most maximalist predictions, which were being trumpeted all over the techie press in the weeks after the big debut, had it that even Windows could soon become irrelevant courtesy of Java. This is what Microsoft’s own Ben Slivka meant when he said that “the Web is the next platform.” The browser itself would become the operating system from the perspective of the user, being supported behind the scenes only by the minimal amount of firmware needed to make it go. Once that happened, a new generation of cheap Internet devices would be poised to replace personal computers as the world now knew them. With all software and all of each person’s data being stored in the cloud, as we would put it today, even local hard drives might become passé. And then, with Netscape Navigator and Java having taken over the role of Windows, Microsoft might very well join IBM, the very company it had so recently displaced from the heights of power, in the crowded field of computing’s has-beens.

In retrospect, such predictions seem massively overblown. Officially labeled beta software, Java was in reality more like an alpha release at best at the time it was being celebrated as the Paris to Microsoft’s Achilles, being painfully crash-prone and slow. And even when it did reach a reasonably mature form, the reality of it would prove considerably less than the hype. One crippling weakness that would continue to plague it was the inability of a Java applet to communicate with the webpage that spawned it; applets ran in Web browsers, but weren’t really of them, being self-contained programs siloed off in a sandbox from their host environment. Meanwhile the prospects of applications like online word processing, or even online gaming in Java, were sharply limited by the fact that at least 95 percent of Web users were accessing the Internet on dial-up connections, over which even the likes of a single high-resolution photograph could take minutes to load. A word processor like the one included with Microsoft Office would require hours of downloading every time you wanted to use it, assuming it was even possible to create such a complex piece of software in the fragile young language. Java never would manage to entirely overcome these issues, and would in the end enjoy its greatest success in other incarnations than that of the browser-embedded applet.

Still, cooler-headed reasoning like this was not overly commonplace in the months after the SunWorld presentation. By the end of 1995, Sun’s stock price had more than doubled on the strength of Java alone, a product yet to see a 1.0 release. The excitement over Java probably contributed as well to Netscape’s record-breaking initial public offering in August. A cavalcade of companies rushed to follow in the footsteps of Netscape and sign Java distribution deals, most of them on markedly more expensive terms. Even Microsoft bowed to the prevailing winds on December 7 and announced a Java deal of its own. (BusinessWeek magazine described it as a “capitulation.”) That all of this was happening alongside the even more intense hype surrounding the release of Windows 95, an operating system far more expansive than any that had come out of Microsoft to date but one that was nevertheless of a very traditionalist stripe at bottom, speaks to the confusion of these go-go times when digital technology seemed to be going anywhere and everywhere at once.

Whatever fear and loathing he may have felt toward Java, Bill Gates had clearly made his peace with the fact that the Web was computing’s necessary present and future. The Microsoft Network duly debuted as an icon on the default Windows 95 desktop, but it was now pitched primarily as a gateway to the open Web, with just a handful of proprietary features; MSN was, in other words, little more than yet another Internet service provider, of the sort that were popping up all over the country like dandelions after a summer shower. Instead of the 20 million subscribers that some had predicted (and that Steve Case had so feared), it attracted only about 500,000 customers by the end of the year. This left it no more than one-eighth as large as AOL, which had by now completed its own deft pivot from proprietary online service of the 1980s type to the very face of the World Wide Web in the eyes of countless computing neophytes.

Yet if Microsoft’s first tentative steps onto the Web had proved underwhelming, people should have known from the history of the company — and not least from the long, checkered history of Windows itself — that Bill Gates’s standard response to failure and rejection was simply to try again, harder and better. The real war for online supremacy was just getting started.

(Sources: the books Overdrive: Bill Gates and the Race to Control Cyberspace by James Wallace, The Silicon Boys by David A. Kaplan, Architects of the Web by Robert H. Reid, Competing on Internet Time: Lessons from Netscape and Its Battle with Microsoft by Michael Cusumano and David B. Yoffie, dot.con: The Greatest Story Ever Sold by John Cassidy, Stealing Time: Steve Case, Jerry Levin, and the Collapse of AOL Time Warner by Alec Klein, Fools Rush In: Steve Case, Jerry Levin, and the Unmaking of AOL Time Warner by Nina Munk, and There Must be a Pony in Here Somewhere: The AOL Time Warner Debacle by Kara Swisher.)
