
Doing Windows, Part 12: David and Goliath

Microsoft, intent on its mission to destroy Netscape, rolled out across the industry with all the subtlety and attendant goodwill of Germany invading Poland…

— Merrill R. Chapman

No one reacted more excitedly to the talk of Java as the dawn of a whole new way of computing than did the folks at Netscape. Marc Andreessen, whose head had swollen exactly as much as the average 24-year-old’s would upon being repeatedly called a great engineer, businessman, and social visionary all rolled into one, was soon proclaiming Netscape Navigator to be far more than just a Web browser: it was general-purpose computing’s next standard platform, possibly the last one it would ever need. Java, he said, generously sharing the credit for this development, was “as revolutionary as the Web itself.” As for Microsoft Windows, it was merely “a poorly debugged set of device drivers.” Many even inside Netscape wondered whether he was wise to poke the bear from Redmond so, but he was every inch a young man feeling his oats.

Just two weeks before the release of Windows 95, the United States Justice Department had ended a lengthy antitrust investigation of Microsoft’s business practices with a decision not to bring any charges. Bill Gates and his colleagues took this to mean it was open season on Netscape.

Thus, just a few weeks after the bravura Windows 95 launch, a war that would dominate the business and computing press for the next three years began. The opening salvo from Microsoft came in a weirdly innocuous package: something called the “Windows Plus Pack,” which consisted mostly of slightly frivolous odds and ends that hadn’t made it into the main Windows 95 distribution — desktop themes, screensavers, sound effects, etc. But it also included the very first release of Microsoft’s own Internet Explorer browser, the fruit of the deal with Spyglass. After you put the Plus! CD into the drive and let the package install itself, Internet Explorer proved as hard to get rid of as a virus. For unlike all other applications, it offered no handy “uninstall” option. Once it had its hooks in your computer, it wasn’t letting go for anything. And its preeminent mission in life seemed to be to run roughshod over Netscape Navigator. It inserted itself in place of its arch-enemy in your file associations and everywhere else, so that it kept turning up like a bad penny every time you clicked a link. If you insisted on bringing up Netscape Navigator in its stead, you were greeted with the pointed “suggestion” that Internet Explorer was the better, more stable option.

Microsoft’s biggest problem at this juncture was that that assertion didn’t hold water; Internet Explorer 1.0 was only a modest improvement over the old NCSA Mosaic browser on whose code it was based. Meanwhile Netscape was pushing aggressively forward with its vision of the browser as a platform, a home for active content of all descriptions. Netscape Navigator 2.0, whose first beta release appeared almost simultaneously with Internet Explorer 1.0, doubled down on that vision by including an email and Usenet client. More importantly, it supported not only Java but a second programming language for creating active content on the Web — a language that would prove much more important to the evolution of the Web in the long run.

Even at this early stage — still four months before Sun would deign to grant Java its own 1.0 release — some of the issues with using it on the Web were becoming clear: namely, the weight of the virtual machine that had to be loaded and started before a Java applet could run, and said applet’s inability to communicate easily with the webpage that had spawned it. Netscape therefore decided to create something that lay between the static simplicity of vanilla HTML and the dynamic complexity of Java. The language called JavaScript would share much of its big brother’s syntax, but it would be interpreted rather than compiled, and would live in the same environment as the HTML that made up a webpage rather than in a sandbox of its own. In fact, it would be able to manipulate that HTML directly and effortlessly, changing the page’s appearance on the fly in response to the user’s actions. The idea was that programmers would use JavaScript for very simple forms of active content — like, say, a popup photo gallery or a scrolling stock ticker — and use Java for full-fledged in-browser software applications — i.e., your word processors and the like.

In contrast to Java, a compiled language walled off inside its own virtual machine, JavaScript is embedded directly into the HTML that makes up a webpage, using the handy “<script>” tag.
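To make the contrast concrete, here is a tiny, invented example of the sort of thing Netscape had in mind. It isn’t drawn from any actual 1995-era site, but it is true to the general shape of early JavaScript: a handful of lines that live right inside the page’s HTML and rewrite part of the page the moment the reader clicks a button.

```html
<html>
  <head>
    <script>
      // A small function living in the same environment as the page itself.
      // It reads one form field and writes a greeting into another.
      function greet(form) {
        form.output.value = "Hello, " + form.visitor.value + "!";
      }
    </script>
  </head>
  <body>
    <form>
      Your name: <input type="text" name="visitor">
      <input type="button" value="Greet me" onclick="greet(this.form)">
      <input type="text" name="output" size="40">
    </form>
  </body>
</html>
```

No virtual machine to load, no compiler, no sandbox: the script simply sits alongside the markup and pokes at it directly, which is exactly what made JavaScript so much lighter on its feet than Java.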

There’s really no way to say this kindly: JavaScript was (and is) a pretty horrible programming language by any objective standard. Unlike Java, which was the product of years of thought, discussion, and experimentation, JavaScript was the very definition of “quick and dirty” in a computer-science context. Even its principal architect Brendan Eich doesn’t speak of it like an especially proud parent; he calls it “Java’s dumb little brother” and “a rush job.” Which it most certainly was: he designed and implemented JavaScript from scratch in a matter of bare weeks.

What he ended up with would revolutionize the Web not because it was good, but because it was good enough, filling a craving that turned out to be much more pressing and much more satisfiable in the here and now than the likes of in-browser word processing. The lightweight JavaScript could be used to bring the Web alive, to make it a responsive and interactive place, more quickly and organically than the heavyweight Java. Once JavaScript had reached a critical mass in that role, it just kept on rolling with all the relentlessness of a Microsoft operating system. Today an astonishing 98 percent of all webpages contain at least a little bit of JavaScript in addition to HTML, and a cottage industry has sprung up to modify and extend the language — and attempt to fix the many infelicities that haunt the sleep of computer-science professors all over the world. JavaScript has become, in other words, the modern world’s nearest equivalent to what BASIC was in the 1980s, a language whose ease of use, accessibility, and populist appeal make up for what it lacks in elegance. These days we even do online word processing in JavaScript. If you had told Brendan Eich that that would someday be the case back in 1995, he would have laughed as loud and long at you as anyone.

Although no one could know it at the time, JavaScript also represents the last major building block to the modern Web for which Marc Andreessen can take a substantial share of the credit, following on from the “image” tag for displaying inline graphics, the secure sockets layer (SSL) for online encryption (an essential for any form of e-commerce), and to a lesser extent the Java language. Microsoft, by contrast, was still very much playing catch-up.

Nevertheless, on December 7, 1995 — the symbolism of this anniversary of the United States’s entry into World War II was lost on no one — Bill Gates gave a major address to the Microsoft faithful and assembled press, in which he made it clear that Microsoft was in the browser war to win it. In addition to announcing that his company too would bite the bullet and license Java for Internet Explorer, he said that the latter browser would no longer be a Windows 95 exclusive, but would soon be made available for Windows 3 and even MacOS as well. And everywhere it appeared, it would continue to sport the very un-Microsoft price tag of free, proof that this old dog was learning some decidedly new tricks for achieving market penetration in this new era of online software distribution. “When we say the browser’s free, we’re saying something different from other people,” said Gates, in a barbed allusion to Netscape’s shareware distribution model. “We’re not saying, ‘You can use it for 90 days,’ or, ‘You can use it and then maybe next year we’ll charge you a bunch of money.'” Netscape, whose whole business revolved around its browser, couldn’t afford to give Navigator away, a fact of which Gates was only too well aware. (Some pundits couldn’t resist contrasting this stance with Gates’s famous 1976 “Open Letter To Hobbyists,” in which he had asked, “Who can afford to do professional work for nothing?” Obviously Microsoft now could…)

Netscape’s stock price dropped by $28.75 that day. For Microsoft’s research budget alone was five times the size of Netscape’s total annual revenues, while the bigger company now had more than 800 people — twice Netscape’s total headcount — working on Internet Explorer alone. Marc Andreessen could offer only vague Silicon Valley aphorisms when queried about these disparities: “In a fight between a bear and an alligator, what determines the victor is the terrain” — and Microsoft, he claimed, had now moved “onto our terrain.” The less abstractly philosophical Larry Ellison, head of the database giant Oracle and a man who had had more than his share of run-ins with Bill Gates in the past, joked darkly about the “four stages” of Microsoft stealing someone else’s innovation. Stage 1: to “ridicule” it. Stage 2: to admit that, “yeah, there are a few interesting ideas here.” Stage 3: to make its own version. Stage 4: to make the world forget that the non-Microsoft version had ever existed.

Yet for the time being the Netscape tail continued to wag the Microsoft dog. A more interactive and participatory vision of the Web, enabled by the magic of JavaScript, was spreading like wildfire by the middle of 1996. You still needed Netscape Navigator to experience this first taste of what would eventually be labelled Web 2.0, a World Wide Web that blurred the lines between readers and writers, between content consumers and content creators. For if you visited one of these cutting-edge sites with Internet Explorer, it simply wouldn’t work. Despite all of Microsoft’s efforts, Netscape in June of 1996 could still boast of a browser market share of 85 percent. Marc Andreessen’s Sun Tzu-lite philosophy appeared to have some merit to it after all; his company was by all indications still winning the browser war handily. Even in its 2.0 incarnation, which had been released at about the same time as Gates’s Pearl Harbor speech, Internet Explorer remained something of a joke among Windows users, the annoying mother-in-law you could never seem to get rid of once she showed up.

But then, grizzled veterans like Larry Ellison had seen this movie before; they knew that it was far too early to count Microsoft out. That August, both Netscape and Microsoft released 3.0 versions of their browsers. Netscape’s was a solid evolution of what had come before, but contained no game changers like JavaScript. Microsoft’s, however, was a dramatic leap forward. In addition to Java support, it introduced JScript, a lightweight scripting language that just so happened to have the same syntax as JavaScript. At a stroke, all of those sites which hadn’t worked with earlier versions of Internet Explorer now displayed perfectly well in either browser.

With his browser itself more or less on a par with Netscape’s, Bill Gates decided it was time to roll out his not-so-secret weapon. In October of 1996, Microsoft began shipping Windows 95’s “Service Pack 2,” the second substantial revision of the operating system since its launch. Along with a host of other improvements, it included Internet Explorer. From now on, the browser would ship with every single copy of Windows 95 and be installed automatically as part of the operating system, whether the user wanted it or not. New Windows users would have to make an active choice and then an active effort to go to Netscape’s site — using Internet Explorer, naturally! — and download the “alternative” browser. Microsoft was counting on the majority of these users not knowing anything about the browser war and/or just not wanting to be bothered.

Microsoft employed a variety of carrots and sticks to pressure other companies throughout the computing ecosystem to give or at the bare minimum to recommend Internet Explorer to their customers in lieu of Netscape Navigator. It wasn’t above making the favorable Windows licensing deals it signed with big consumer-computer manufacturers like Compaq dependent on precisely this. But the most surprising pact by far was the one Microsoft made with America Online (AOL).

Relations between the face of the everyday computing desktop and the face of the Internet in the eyes of millions of ordinary Americans had been anything but cordial in recent years. Bill Gates had reportedly told Steve Case, his opposite number at AOL, that he would “bury” him with his own Microsoft Network (MSN). Meanwhile Case had complained long and loud about Microsoft’s bullying tactics to the press, to the point of mooting a comparison between Gates and Adolf Hitler on at least one occasion. Now, though, Gates was willing to eat crow and embrace AOL, even at the expense of his own MSN, if he could stick it to Netscape in the process.

For its part, AOL had come as far as it could with its Booklink browser. The Web was evolving too rapidly for the little development team it had inherited with that acquisition to keep up. Case grudgingly accepted that he needed to offer his customers one of the Big Two browsers. All of his natural inclinations bent toward Netscape. And indeed, he signed a deal with Netscape to make Navigator the browser that shipped with AOL’s turnkey software suite — or so Netscape believed. It turned out that Netscape’s lawyers had overlooked one crucial detail: they had never stipulated exclusivity in the contract. This oversight wasn’t lost on the interested bystander Microsoft, which swooped in immediately to take advantage of it. AOL soon announced another deal, to provide its customers with Internet Explorer as well. Even worse for Netscape, this deal promised Microsoft not only availability but priority: Internet Explorer would be AOL’s recommended, default browser, Netscape Navigator merely an alternative for iconoclastic techies (of which there were, needless to say, very few in AOL’s subscriber base).

What did AOL get in return for getting into bed with Adolf Hitler and “jilting Netscape at the altar,” as the company’s own lead negotiator would later put it? An offer that was impossible for a man with Steve Case’s ambitions to refuse, as it happened. Microsoft would put an AOL icon on the desktop of every new Windows 95 installation, where the hundreds of thousands of Americans who were buying a computer every month in order to check out this Internet thing would see it sitting there front and center, and know, thanks to AOL’s nonstop advertising blitz, that the wonders of the Web were just one click on it away. It was a stunning concession on Microsoft’s part, not least because it came at the direct cost of MSN, the very online network Bill Gates had originally conceived as his method of “burying” AOL. Now, though, no price was too high to pay in his quest to destroy Netscape.

Which raises the question of why he was so obsessed, given that Microsoft was making literally no money from Internet Explorer. The answer is rooted in all that rhetoric that was flying around at the time about the browser as a computing platform — about the Web effectively turning into a giant computer in its own right, floating up there somewhere in the heavens, ready to give a little piece of itself to anyone with a minimalist machine running Netscape Navigator. Such a new world order would have no need for a Microsoft Windows — perish the thought! But if, on the other hand, Microsoft could wrest the title of leading browser developer out of the hands of Netscape, it could control the future evolution of this dangerously unruly beast known as the World Wide Web, and ensure that it didn’t encroach on its other businesses.

That the predictions which prompted Microsoft’s downright unhinged frenzy to destroy Netscape were themselves wildly overblown is ironic but not material. As tech journalist Merrill R. Chapman has put it, “The prediction that anyone was going to use Navigator or any other browser anytime soon to write documents, lay out publications, build budgets, store files, and design presentations was a fantasy. The people who made these breathless predictions apparently never tried to perform any of these tasks in a browser.” And yet in an odd sort of way this reality check didn’t matter. Perception can create its own reality, and Bill Gates’s perception of Netscape Navigator as an existential threat to the software empire he had spent the last two decades building was enough to make the browser war feel like a truly existential clash for both parties, even if the only one whose existence actually was threatened — urgently threatened! — was Netscape. Jim Clark, Marc Andreessen’s partner in founding Netscape, makes the eyebrow-raising claim that he “knew we were dead” in the long run well before the end of 1996, when the Department of Justice declined to respond to an urgent plea on Netscape’s part to take another look at Microsoft’s business practices.

Perhaps the most surprising aspect of the conflict is just how long Netscape’s long run proved to be. It was in most respects David versus Goliath: Netscape in 1996 had $300 million in annual revenues to Microsoft’s nearly $9 billion. But whatever the disparities of size, Netscape had built up a considerable reservoir of goodwill as the vehicle through which so many millions had experienced the Web for the first time. Microsoft found this soft power oddly tough to overcome, even with a browser of its own that was largely identical in functional terms. A remarkable number of people continued to make the active choice to use Netscape Navigator instead of the passive one to use Internet Explorer. By October of 1997, one year after Microsoft brought out the big gun and bundled Internet Explorer right into Windows 95, its browser’s market share had risen as high as 39 percent — but it was Netscape that still led the way at 51 percent.

Yet Netscape wasn’t using those advantages it did possess all that effectively. It was not a happy or harmonious company: there were escalating personality clashes between Jim Clark and Marc Andreessen, and also between Andreessen and his programmers, who thought their leader had become a glory hound, too busy playing the role of the young dot.com millionaire to pay attention to the vital details of software development. Perchance as a result, Netscape’s drive to improve its browser in paradigm-shifting ways seemed to slowly dissipate after the landmark Navigator 2.0 release.

Netscape, so recently the darling of the dot.com age, was now finding it hard to make a valid case for itself merely as a viable business. The company’s most successful quarter in financial terms was the third of 1996 — just before Internet Explorer became an official part of Windows 95 — when it brought in $100 million in revenue. Receipts fell precipitously after that point, all the way down to just $18.5 million in the last quarter of 1997. By so aggressively promoting Internet Explorer as entirely and perpetually free, Bill Gates had, whether intentionally or inadvertently, instilled in the general public an impression that all browsers were or ought to be free, due to some unstated reason inherent in their nature. (This impression has never been overturned, as has been testified over the years by the failure of otherwise worthy commercial browsers like Opera to capture much market share.) Thus even the vast majority of those who did choose Netscape’s browser no longer seemed to feel any ethical compulsion to pay for it. Netscape was left in a position all too familiar to Web firms of the past and present alike: that of having immense name recognition and soft power, but no equally impressive revenue stream to accompany them. It tried frantically to pivot into back-end server architecture and corporate intranet solutions, but its efforts there were, as its bottom line will attest, not especially successful. It launched a Web portal and search engine known as Netcenter, but struggled to gain traction against Yahoo!, the leader in that space. Both Jim Clark and Marc Andreessen sold off large quantities of their personal stock, never a good sign in Silicon Valley.

Netscape Navigator was renamed Netscape Communicator for its 4.0 release in June of 1997. As the name would imply, Communicator was far more than just a browser, or even just a browser with an integrated email client and Usenet reader, as Navigator had been since version 2.0. Now it also sported an integrated editor for making your own websites from scratch, a real-time chat system, a conference caller, an appointment calendar, and a client for “pushing” usually unwanted content to your screen. It was all much, much too much, weighted down with features most people would never touch, big and bloated and slow and disturbingly crash-prone; small wonder that even many Netscape loyalists chose to stay with Navigator 3 after the release of Communicator. Microsoft had not heretofore been known for making particularly svelte software, but Internet Explorer, which did nothing but browse the Web, was a lean ballerina by comparison with the lumbering Sumo wrestler that was Netscape Communicator. The original Netscape Navigator had sprung from the hacker culture of institutional computing, but the company had apparently now forgotten one of that culture’s key dictums in its desire to make its browser a platform unto itself: the best programs are those that do only one thing, but do that one thing very, very well, leaving all of the other things to other programs.

Netscape Communicator. I’m told that there’s an actual Web browser buried somewhere in this pile. Probably a kitchen sink too, if you look hard enough.

Luckily for Netscape, Internet Explorer 4.0, which arrived three months after Communicator, violated the same dictum in an even more inept way. It introduced what Microsoft called the “Active Desktop,” which let it bury its hooks deeper than ever into Windows itself. The Active Desktop was — or tried to be —  Bill Gates’s nightmare of a Web that was impossible to separate from one’s local computer come to life, but with Microsoft’s own logo on it. Ironically, it blurred the distinction between the local computer and the Internet more thoroughly than anything the likes of Sun or Netscape had produced to date; local files and applications became virtually indistinguishable from those that lived on the Internet in the new version of the Windows desktop it installed in place of the old. The end result served mainly to illustrate how half-baked all of the prognostications about a new era of computing exclusively in the cloud really were. The Active Desktop was slow and clumsy and confusing, and absolutely everyone who was exposed to it seemed to hate it and rush to find a way to turn it off. Fortunately for Microsoft, it was possible to do so without removing the Internet Explorer 4 browser itself.

The dreaded Active Desktop. Surprisingly, it was partially defended on philosophical grounds by Tim Berners-Lee, not normally a fan of Microsoft. “It was ridiculous for a person to have two separate interfaces, one for local information (the desktop for their own computer) and one for remote information (a browser to reach other computers),” he writes. “Why did we need an entire desktop for our own computer, but only get little windows through which to view the rest of the planet? Why, for that matter, should we have folders on our desktop but not on the Web? The Web was supposed to be the universe of all accessible information, which included, especially, information that happened to be stored locally. I argued that the entire topic of where information was physically stored should be made invisible to the user.” For better or for worse, though, the public didn’t agree. And even he had to allow that “this did not have to imply that the operating system and browser should be the same program.”

The Active Desktop damaged Internet Explorer’s reputation, but arguably not as badly as Netscape’s had been damaged by the bloated Communicator. For once you turned off all that nonsense, Internet Explorer 4 proved to be pretty good at doing the rest of its job. But there was no similar method for trimming the fat from Netscape Communicator.

While Microsoft and Netscape, those two for-profit corporations, had been vying with one another for supremacy on the Web, another, quieter party had been looking on with great concern. Before the Web had become the hottest topic of the business pages, it had been an idea in the head of the mild-mannered British computer scientist Tim Berners-Lee. He had built the Web on the open Internet, using a new set of open standards; his inclination had never been to control his creation personally. It was to be a meeting place, a library, a forum, perhaps a marketplace if you liked — but always a public commons. When Berners-Lee formed the non-profit World Wide Web Consortium (W3C) in October of 1994 in the hope of guiding an orderly evolution of the Web that kept it independent of the moneyed interests rushing to join the party, it struck many as a quaint endeavor at best. Key technologies like Java and JavaScript appeared and exploded in popularity without giving the W3C a chance to say anything about them. (Tellingly, the word “JavaScript” never even appears in Berners-Lee’s 1999 book about his history with and vision for the Web, despite the scripting language’s almost incalculable importance to making it the dynamic and diverse place it had become by that point.)

From the days when he had been a mere University of Illinois student making a browser on the side, Marc Andreessen had blazed his own trail without giving much thought to formal standards. When the things he unilaterally introduced proved useful, others rushed to copy them, and they became de-facto standards. This was as true of JavaScript as it was of anything else. As we’ve seen, it began as a Netscape-exclusive feature, but was so obviously transformative to what the Web could do and be that Microsoft had no choice but to copy it, to incorporate its own implementation of it into Internet Explorer.

But JavaScript was just about the last completely new feature to be rolled out and widely adopted in this ad-hoc fashion. As the Web reached a critical mass, with Netscape Navigator and Internet Explorer both powering users’ experiences of it in substantial numbers, site designers had a compelling reason not to use any technology that only worked on the one or the other; they wanted to reach as many people as possible, after all. This brought an uneasy sort of equilibrium to the Web.

Nevertheless, the first instinct of both Netscape and Microsoft remained to control rather than to share the Web. Both companies’ histories amply demonstrated that open standards meant little to them; they preferred to be the standard. What would happen if and when one company won the browser war, as Microsoft seemed slowly to be doing by 1997, what with the trend lines all going in its favor and Netscape in veritable financial free fall? Once 90 percent or more of the people browsing the Web were doing so with Internet Explorer, Microsoft would be free to give its instinct for dominance free rein. With an army of lawyers at its beck and call, it would be able to graft onto the Web proprietary, patented technologies that no upstart competitor would be able to reverse-engineer and copy, and pragmatic website designers would no longer have any reason not to use them, if they could make their sites better. And once many or most websites depended on these features that were available only in Internet Explorer, that would be that for the open Web. Despite its late start, Microsoft would have managed to embrace, extend, and in a very real sense destroy Tim Berners-Lee’s original vision of a World Wide Web. The public commons would have become a Microsoft-branded theme park.

These worries were being bandied about with ever-increasing urgency in January of 1998, when Netscape made what may just have been the most audacious move of the entire dot.com boom. Like most such moves, it was born of sheer desperation, but that shouldn’t blind us to its importance and even bravery. First of all, Netscape made its browser free as in beer, finally giving up on even asking people to pay for the thing. Admittedly, though, this in itself was little more than an acceptance of the reality on the ground, as it were. It was the other part of the move that really shocked the tech world: Netscape also made its browser free as in freedom — it opened up its source code to all and sundry. “This was radical in its day,” remembers Mitchell Baker, one of the prime drivers of the initiative at Netscape. “Open source is mainstream now; it was not then. Open source was deep, deep, deep in the technical community. It never surfaced in a product. [This] was a very radical move.”

Netscape spun off a not-for-profit organization, led by Baker and called Mozilla, after a cartoon dinosaur that had been the company’s office mascot almost from day one. Coming well before the Linux operating system began conquering large swaths of corporate America, this was to be open source’s first trial by fire in the real world. Mozilla was to concentrate on the core code required for rendering webpages — the engine room of a browser, if you will. Then others — not least among them the for-profit arm of Netscape — would build the superstructures of finished applications around that sturdy core.

Alas, Netscape the for-profit company was already beyond saving. If anything, this move only hastened the end; Netscape had chosen to give away the one product it had that some tiny number of people were still willing to pay for. Some pundits talked it up as a dying warrior’s last defiant attempt to pass the sword to others, to continue the fight against Microsoft and Internet Explorer: “From the depths of Hell, I spit at thee!” Or, as Tim Berners-Lee put it more soberly: “Microsoft was bigger than Netscape, but Netscape was hoping the Web community was bigger than Microsoft.” And there may very well be something to these points of view. But regardless of the motivations behind it, the decision to open up Netscape’s browser proved both a landmark in the history of open-source software and a potent weapon in the fight to keep the Web itself open and free. Mozilla has had its ups and downs over the years since, but it remains with us to this day, still providing an alternative to the corporate-dominated browsers almost a quarter-century on, having outlived the more conventional corporation that spawned it by a factor of six.

Mozilla’s story is an important one, but we’ll have to leave the details of it for another day. For now, we return to the other players in today’s drama.

While Microsoft and Netscape were battling one another, AOL was soaring into the stratosphere, the happy beneficiary of Microsoft’s decision to give it an icon on the Windows 95 desktop in the name of vanquishing Netscape. In 1997, in a move fraught with symbolic significance, AOL bought CompuServe, its last remaining competitor from the pre-Web era of closed, proprietary online services. By the time Netscape open-sourced its browser, AOL had 12 million subscribers and annual profits — profits, mind you, not revenues — of over $500 million, thanks not only to subscription fees but to the new frontier of online advertising, where revenues and profits were almost one and the same. At not quite 40 years old, Steve Case had become a billionaire.

“AOL is the Internet blue chip,” wrote the respected stock analyst Henry Blodget. And indeed, for all of its association with new and shiny technology, there was something comfortingly stolid — even old-fashioned — about the company. Unlike so many of his dot.com compatriots, Steve Case had found a way to combine name recognition and a desirable product with a way of getting his customers to actually pay for said product. He liked to compare AOL with a cable-television provider; this was a comparison that even the most hidebound investors could easily understand. Real, honest-to-God checks rolled into AOL’s headquarters every month from real, honest-to-God people who signed up for real, honest-to-God paid subscriptions. So what if the tech intelligentsia laughed and mocked, called AOL “the cockroach of cyberspace,” and took an “@AOL.com” suffix on someone’s email address as a sign that they were too stupid to be worth talking to? Case and his shareholders knew that money from the unwashed masses spent just as well as money from the tech elites.

Microsoft could finally declare victory in the browser war in the summer of 1998, when the two browsers’ trend lines crossed one another. At long last, Internet Explorer’s popularity equaled and then rapidly eclipsed that of Netscape Navigator/Communicator. It hadn’t been clean or pretty, but Microsoft had bludgeoned its way to the market share it craved.

A few months later, AOL acquired Netscape through a stock swap that involved no cash, but was worth a cool $9.8 billion on paper — an almost comical sum in relation to the amount of actual revenue the purchased company had brought in during its lifetime. Jim Clark and Marc Andreessen walked away very, very rich men. Just as Netscape’s big IPO had been the first of its breed, the herald of the dot.com boom, Netscape now became the first exemplar of the boom’s unique style of accounting, which allowed people to get rich without ever having run a profitable business.

Even at the time, it was hard to figure out just what it was about Netscape that AOL thought was worth so much money. The deal is probably best understood as a product of Steve Case’s fear of a Microsoft-dominated Web; despite that AOL icon on the Windows desktop, he still didn’t trust Bill Gates any farther than he could throw him. In the end, however, AOL got almost nothing for its billions. Netscape Communicator was renamed AOL Communicator and offered to the service’s subscribers, but even most of them, technically unsophisticated though they tended to be, could see that Internet Explorer was the cleaner and faster and just plain better choice at this juncture. (The open-source coders working with Mozilla belatedly realized the same; they would wind up spending years writing a brand-new browser engine from scratch after deciding that Netscape’s just wasn’t up to snuff.)

Most of Netscape’s remaining engineers walked soon after the deal was made. They tended to describe the company’s meteoric rise and fall in the terms of a Shakespearean tragedy. “At least the old timers among us came to Netscape to change the world,” lamented one. “Getting killed by the Evil Empire, being gobbled up by a big corporation — it’s incredibly sad.” If that’s painting with rather too broad a brush — one should always run away screaming when a Silicon Valley denizen starts talking about “changing the world” — it can’t be denied that Netscape at no time enjoyed a level playing field in its war against Microsoft.

But times do change, as Microsoft was about to learn to its cost. In May of 1998, the Department of Justice filed suit against Microsoft for illegally exploiting its Windows monopoly in order to crush Netscape. The suit came too late to save the latter, but it was all over the news even as the first copies of Windows 98, the hotly anticipated successor to Windows 95, were reaching store shelves. Bill Gates had gotten his wish; Internet Explorer and Windows were now indissolubly bound together. Soon he would have cause to wish that he had not striven for that outcome quite so vigorously.

(Sources: the books Overdrive: Bill Gates and the Race to Control Cyberspace by James Wallace, The Silicon Boys by David A. Kaplan, Architects of the Web by Robert H. Reid, Competing on Internet Time: Lessons from Netscape and Its Battle with Microsoft by Michael Cusumano and David B. Yoffie, dot.con: The Greatest Story Ever Sold by John Cassidy, Stealing Time: Steve Case, Jerry Levin, and the Collapse of AOL Time Warner by Alec Klein, Fools Rush In: Steve Case, Jerry Levin, and the Unmaking of AOL Time Warner by Nina Munk, There Must be a Pony in Here Somewhere: The AOL Time Warner Debacle by Kara Swisher, In Search of Stupidity: Over Twenty Years of High-Tech Marketing Disasters by Merrill R. Chapman, Coders at Work: Reflections on the Craft of Programming by Peter Seibel, and Weaving the Web by Tim Berners-Lee. Online sources include “1995: The Birth of JavaScript” at Web Development History, the New York Times timeline of AOL’s history, and Mitchell Baker’s talk about the history of Mozilla, which is available on Wikipedia.)

 

Doing Windows, Part 11: The Internet Tidal Wave

On August 6, 1991, when Microsoft was still in the earliest planning stages of creating the operating system that would become known as Windows 95, an obscure British researcher named Tim Berners-Lee, working out of the Conseil Européen pour la Recherche Nucléaire (CERN) in Switzerland, put the world’s first publicly accessible website online. For years to come, these two projects would continue to evolve separately, blissfully unconcerned by if not unaware of one another’s existence. And indeed, it is difficult to imagine two computing projects with more opposite personalities. Mirroring its co-founder and CEO Bill Gates, Microsoft was intensely pragmatic and maniacally competitive. Tim Berners-Lee, on the other hand, was a classic academic, a theorist and idealist rather than a businessman. The computers on which he and his ilk built the early Web ran esoteric operating systems like NeXTSTEP and Unix, or at their most plebeian MacOS, not Microsoft’s mass-market workhorse Windows. Microsoft gave you tools for getting everyday things done, while the World Wide Web spent the first couple of years of its existence as little more than an airy proof of concept, to be evangelized by wide-eyed adherents who often appeared to have read one too many William Gibson novels. Forbes magazine was soon to anoint Bill Gates the world’s richest person, his reward for capturing almost half of the international software market; the nascent Web was nowhere to be found in the likes of Forbes.

Those critics who claim that Microsoft was never a visionary company — that it instead thrived by letting others innovate, then swooping in and taking over the markets thus opened — love to point to its history with the World Wide Web as Exhibit Number One. Despite having a role which presumably demanded that he stay familiar with all leading-edge developments in computing, Bill Gates by his own admission never even heard of the Web until April of 1993, twenty months after that first site went up. And he didn’t actually surf the Web for himself until another six months after that — perhaps not coincidentally, shortly after a Windows version of NCSA Mosaic, the user-friendly graphical browser that made the Web a welcoming place even for those whose souls didn’t burn with a passion for information theory, had finally been released.

Gates focused instead on a different model of online communication, one arguably more in keeping with his instincts than was the free and open Web. For almost a decade and a half by 1993, various companies had been offering proprietary dial-up services aimed at owners of home computers. These came complete with early incarnations of many of the staples of modern online life: email, chat lines, discussion forums, online shopping, online banking, online gaming, even online dating. They were different from the Web in that they were walled gardens that provided no access to anything that lay beyond the big mainframes that hosted them. Yet within their walls lived bustling communities whose citizens paid their landlords by the minute for the privilege of participation.

The 500-pound gorilla of this market had always been CompuServe, which had been in the business since the days when a state-of-the-art home computer had 16 K of memory and used cassette tapes for storage. Of late, however, an upstart service called America Online (AOL) had been making waves. Under Steve Case, its wunderkind CEO, AOL aimed its pitch straight at the heart of Middle America rather than the tech-savvy elite. Over the course of 1993 alone, it went from 300,000 to 500,000 subscribers. But that was only the beginning if one listened to Case. For a second Home Computer Revolution, destined to be infinitely more successful and long-lasting than the first, was now in full swing, powered along by the ease of use of Windows 3 and by the latest consumer-grade hardware, which made computing faster and more aesthetically attractive than it had ever been before. AOL’s quick and easy custom software fit in perfectly with these trends. Surely this model of the online future — of curated content offered up by a firm whose stated ambition was to be the latest big player in mass media as a whole; of a subscription model that functioned much like the cable television which the large majority of Americans were already paying for — was more likely to take hold than the anarchic jungle that was the World Wide Web. It was, at any rate, a model that Bill Gates could understand very well, and naturally gravitated toward. Never one to leave cash on the table, he started asking himself how Microsoft could get a piece of this action as well.

Steve Case celebrates outside the New York Stock Exchange on March 19, 1992, the day America Online went public.

Gates proceeded in his standard fashion: in May of 1993, he tried to buy AOL outright. But Steve Case, who nursed dreams of becoming a media mogul on the scale of Walt Disney or Jack Warner, turned him down flat. At this juncture, Russ Siegelman, a 33-year-old physicist-by-education whom Gates had made his point man for online strategy, suggested a second classically Microsoft solution to the dilemma: they could build their own online service that copied AOL in most respects, then bury their rival with money and sheer ubiquity. They could, Siegelman suggested, make their own network an integral part of the eventual Windows 95, make signing up for it just another step in the installation process. How could AOL possibly compete with that? It was the first step down a fraught road that would lead to widespread outrage inside the computer industry and one of the most high-stakes anti-trust investigations in the history of American business — but for all that, the broad strategy would prove very, very effective once it reached its final form. It had a ways still to go at this stage, though, targeting as it did AOL instead of the Web.

Gates put Siegelman in charge of building Microsoft’s online service, which was code-named Project Marvel. “We were not thinking about the Internet at all,” admits one of the project’s managers. “Our competition was CompuServe and America Online. That’s what we were focused on, a proprietary online service.” At the time, there were exactly two computers in Microsoft’s sprawling Redmond, Washington, campus that were connected to the Internet. “Most college kids knew much more than we did because they were exposed to it,” says the Marvel manager. “If I had wanted to connect to the Internet, it would have been easier for me to get into my car and drive over to the University of Washington than to try and get on the Internet at Microsoft.”

It came down to the old “not invented here” syndrome that dogs so many large institutions, as well as the fact that the Web and the Internet on which it lived were free, and Bill Gates tended to hold that which was free in contempt. Anyone who attempted to help him over his mental block — and there were more than a few of them at Microsoft — was greeted with an all-purpose rejoinder: “How are we going to make money off of free?” The biggest revolution in computing since the arrival of the first pre-assembled personal computers back in 1977 was taking place all around him, and Gates seemed constitutionally incapable of seeing it for what it was.

In the meantime, others were beginning to address the vexing question of how you made money out of free. On April 4, 1994, Marc Andreessen, the impetus behind the NCSA Mosaic browser, joined forces with Jim Clark, a veteran Silicon Valley entrepreneur, to found Netscape Communications for the purpose of making a commercial version of the Mosaic browser. A team of programmers, working without consulting the Mosaic source code so as to avoid legal problems, soon did just that, and uploaded Netscape Navigator to the Web on October 13, 1994. Distributed under the shareware model, with a $39 licensing fee requested but not demanded after a 90-day trial period was up, the new browser was installed on more than 10 million computers within nine months.

AOL’s growth had continued apace despite the concurrent explosion of the open Web; by the time of Netscape Navigator’s release, the service had 1.25 million subscribers. Yet Steve Case, no one’s idea of a hardcore techie, was ironically faster to see the potential — or threat — of the Web than was Bill Gates. He adopted a strategy in response that would make him for a time at least a superhero of the business press and the investor set. Instead of fighting the Web, AOL would embrace it — would offer its own Web browser to go along with its proprietary content, thereby adding a gate to its garden wall and tempting subscribers with the best of both worlds. As always for AOL, the whole package would be pitched toward neophytes, with a friendly interface and lots of safeguards — “training wheels,” as the tech cognoscenti dismissively dubbed them — to keep the unwashed masses safe when they did venture out into the untamed wilds of the Web.

But Case needed a browser of his own in order to execute his strategy, and he needed it in a hurry. He needed, in short, to buy a browser rather than build one. He saw three possibilities. One was to bring Netscape and its Navigator into the AOL fold. Another was a small company called Spyglass, a spinoff of the National Center for Supercomputing Applications (NCSA) which was attempting to commercialize the original NCSA Mosaic browser. And the last was a startup called Booklink Technologies, which was making a browser from scratch.

Netscape was undoubtedly the superstar of the bunch, but that didn’t help AOL’s cause any; Marc Andreessen and Jim Clark weren’t about to sell out to anyone. Spyglass, on the other hand, struck Case as an unimaginative Johnny-come-lately that was trying to shut the barn door long after the horse called Netscape had busted out. That left only Booklink. In November of 1994, AOL paid $30 million for the company. The business press scoffed, deeming it a well-nigh flabbergasting over-payment. But Case would get the last laugh.

While AOL was thus rushing urgently to “embrace and extend” the Web, to choose an ominous phrase normally associated with Microsoft, the latter was dawdling along more lackadaisically toward a reckoning with the Internet. During that same busy fall of 1994, IBM released OS/2 3.0, which was marketed as OS/2 Warp in the hope of lending it some much-needed excitement. By either name, it was the latest iteration of an operating system that IBM had originally developed in partnership with Microsoft, an operating system that had once been regarded by both companies as nothing less than the future of mainstream computing. But since the pair’s final falling out in 1991, OS/2 had become an irrelevancy in the face of the Windows juggernaut, winning a measure of affection only in some hacker circles and a few other specialized niches. Despite its snazzy new name and despite being an impressive piece of software from a purely technical perspective, OS/2 Warp wasn’t widely expected to change those fortunes before its release, and this lack of expectations proved well-founded afterward. Yet it was a landmark in another way, being the first operating system to include a Web browser as an integral component, in this case a program called Web Explorer, created by IBM itself because no one else seemed much interested in making a browser for the unpopular OS/2.

This appears to have gotten some gears turning in Bill Gates’s head. Microsoft already planned to include more networking tools than ever before in Windows 95. They had, for example, finally decided to bow to customer demand and build TCP/IP, the networking protocol that allowed a computer to join the Internet, right into the operating system; Windows 3 had required the installation of a third-party add-on for the same purpose. (“I don’t know what it is, and I don’t want to know what it is,” said Steve Ballmer, Gates’s right-hand man, to his programmers on the subject of TCP/IP. “[But] my customers are screaming about it. Make the pain go away.”) Maybe a Microsoft-branded Web browser for Windows 95 would be a good idea as well, if they could acquire one without breaking the bank.

Just days after AOL bought Booklink for $30 million, Microsoft agreed to give $2 million to Spyglass. In return, Spyglass would give Microsoft a copy of the Mosaic source code, which it could then use as the basis for its own browser. But, lest you be tempted to see this transaction as evidence that Gates’s opinions about the online future had already undergone a sea change by this date, know that the very day this deal went down was also the one on which he chose to publicly announce Microsoft’s own proprietary AOL competitor, to be known as simply the Microsoft Network, or MSN. At most, Gates saw the open Web at this stage as an adjunct to MSN, just as it would soon become to AOL. MSN would come bundled into Windows 95, he told the assembled press, so that anyone who wished to could become a subscriber at the click of a mouse.

The announcement caused alarm bells to ring at AOL. “The Windows operating system is what the dial tone is to the phone industry,” said Steve Case. He thus became neither the first nor the last of Gates’s rivals to hint at the need for government intervention: “There needs to be a level playing field on which companies compete.” Some pundits projected that Microsoft might sign up 20 million subscribers to MSN before 1995 was out. Others — the ones whom time would prove to have been more prescient — shook their heads and wondered how Microsoft could still be so clueless about the revolutionary nature of the World Wide Web.

AOL leveraged the Booklink browser to begin offering its subscribers Web access very early in 1995, whereupon its previously robust rate of growth turned downright torrid. By November of 1995, it would have 4 million subscribers. The personable and photogenic Steve Case became a celebrity in his own right, to the point of starring in a splashy advertising campaign for The Gap’s line of khakis; the man and the pants represented respectively the personification and the uniform of the trend in corporate America toward “business casual.” Meanwhile Case’s company became an indelible part of the 1990s zeitgeist. “You’ve got mail!,” the words AOL’s software spoke every time a new email arrived — something that was still very much a novel experience for many subscribers — was featured as a sample in a Prince song, and eventually became the name of a hugely popular romantic comedy starring Tom Hanks and Meg Ryan. CompuServe and AOL’s other old rivals in the proprietary space tried to compete by setting up Internet gateways of their own, but were never able to negotiate the transition from one era of online life to another with the same aplomb as AOL, and gradually faded into irrelevancy.

Thankfully for Microsoft’s shareholders, Bill Gates’s eyes were opened before his company suffered the same fate. At the eleventh hour, with what were supposed to be the final touches being put onto Windows 95, he made a sharp swerve in strategy. He grasped at last that the open Web was the here, the now, and the future, the first major development in mainstream consumer computing in years that hadn’t been more or less dictated by Microsoft — but be that as it may, the Web wasn’t going anywhere. On May 26, 1995, he wrote a memo to every Microsoft employee that exuded an all-hands-on-deck sense of urgency. Gates, the longstanding Internet agnostic, had well and truly gotten the Internet religion.

I want to make clear that our focus on the Internet is critical to every part of our business. The Internet is the most important single development to come along since the IBM PC was introduced in 1981. It is even more important than the arrival of [the] graphical user interface (GUI). The PC analogy is apt for many reasons. The PC wasn’t perfect. Aspects of the PC were arbitrary or even poor. However, a phenomena [sic] grew up around the IBM PC that made it a key element of everything that would happen for the next fifteen years. Companies that tried to fight the PC standard often had good reasons for doing so, but they failed because the phenomena overcame any weakness that [the] resistors identified.

Over the last year, a number of people [at Microsoft] have championed embracing TCP/IP, hyperlinking, HTML, and building clients, tools, and servers that compete on the Internet. However, we still have a lot to do. I want every product plan to try and go overboard on Internet features.

Everything changed that day. Instead of walling its campus off from the Internet, Microsoft put the Web at every employee’s fingertips. Gates himself sent his people lists of hot new websites to explore and learn from. The team tasked with building the Microsoft browser, who had heretofore labored in under-staffed obscurity, suddenly had all the resources of the company at their beck and call. The fact was, Gates was scared; his fear oozes palpably from the aggressive language of the memo above. (Other people talked of “joining” the Internet; Gates wanted to “compete” on it.)

But just what was he so afraid of? A pair of data points provides us with some clues. Three days before he wrote his memo, a new programming language and run-time environment had taken the industry by storm. And the day after he did so, a Microsoft executive named Ben Slivka sent out a memo of his own with Gates’s blessing, bearing the odd title of “The Web Is the Next Platform.” To understand what Slivka was driving at, and why Bill Gates took it as such an imminent existential threat to his company’s core business model, we need to back up a few years and look at the origins of the aforementioned programming language.


Bill Joy, an old-school hacker who had made fundamental contributions to the Unix operating system, was regarded as something between a guru and an elder statesman by 1990s techies, who liked to call him “the other Bill.” In early 1991, he shared an eye-opening piece of his mind at a formal dinner for select insiders. Microsoft was then on the ascendant, he acknowledged, but they were “cruising for a bruising.” Sticking with the automotive theme, he compared their products to the American-made cars that had dominated until the 1970s — until the Japanese had come along peddling cars of their own that were more efficient, more reliable, and just plain better than the domestic competition. He said that the same fate would probably befall Microsoft within five to seven years, when a wind of change of one sort or another came along to upend the company and its bloated, ugly products. Just four years later, people would be pointing to a piece of technology from his own company Sun Microsystems as the prophesied agent of Microsoft’s undoing.

Sun had been founded in 1982 to leverage the skills of Joy along with those of a German hardware engineer named Andy Bechtolsheim, who had recently built an elegant desktop computer inspired by the legendary Alto machines of Xerox’s Palo Alto Research Center. Over the remainder of the 1980s, Sun made a good living as the premier maker of Unix-based workstations: computers that were a bit too expensive to be marketed to even the most well-heeled consumers, but were among the most powerful of their day that could be fit onto or under a single desktop. Sun possessed a healthy antipathy for Microsoft, for all of the usual reasons cited by the hacker contingent: they considered Microsoft’s software derivative and boring, considered the Intel hardware on which it ran equally clunky and kludgy (Sun first employed Motorola chips, then processors of their own design), and loathed Microsoft’s intensely adversarial and proprietorial approach to everything it touched. For some time, however, Sun’s objections remained merely philosophical; occupying opposite ends of the market as they did, the two companies seldom crossed one another’s paths. But by the end of the decade, the latest Intel hardware had advanced enough to be comparable with that being peddled by Sun. And by the time that Bill Joy made his prediction, Sun knew that something called Windows NT was in the works, knew that Microsoft would be coming in earnest for the high-end-computing space very soon.

About six months after Joy played the oracle, Sun’s management agreed to allow one of their star programmers, a fellow named James Gosling, to form a small independent group in order to explore an idea that had little obviously to do with the company’s main business. “When someone as smart as James wants to pursue an area, we’ll do our best to provide an environment,” said Chief Technology Officer Eric Schmidt.

James Gosling

The specific “area” — or, perhaps better said, problem — that Gosling wanted to address was one that still exists to a large extent today: the inscrutability and lack of interoperability of so many of the gadgets that power our daily lives. The problem would be neatly crystallized almost five years later by one of the milquetoast jokes Jay Leno made at the Windows 95 launch, about how the VCR in even Bill Gates’s living room was still blinking “12:00” because he had never figured out how to set the thing’s clock. What if everything in your house could be made to talk together, wondered Gosling, so that setting one clock would set all of them — so that you didn’t have to have a separate remote control for your television and your VCR, each with about 80 buttons whose functions you didn’t understand and never, ever pressed? “What does it take to watch a videotape?” he mused. “You go plunk, plunk, plunk on all of these things in certain magic sequences before you can actually watch your videotape! Why is it so hard? Wouldn’t it be nice if you could just slide the tape into the VCR, [and] the system sort of figures it out: ‘Oh, gee, I guess he wants to watch it, so I ought to power up the television set.'”

But when Gosling and his colleagues started to ponder how best to realize their semi-autonomous home of the future, they tripped over a major stumbling block. While it was true that more and more gadgets were becoming “smart,” in the sense of incorporating programmable microprocessors, the details of their digital designs varied enormously. Each program to link each individual model of, say, VCR into the home network would have to be written, tested, and debugged from scratch. Unless, that is, the program could be made to run in a virtual machine.

A virtual machine is an imaginary computer which a real computer can be programmed to simulate. It permits a “write once, run everywhere” approach to software: once a given real computer has an interpreter for a given virtual machine, it can run any and all programs that have been or will be written for that virtual machine, albeit at some cost in performance.
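To make that abstraction concrete, here is a deliberately tiny sketch in C: a toy interpreter for an imaginary stack machine with a handful of made-up instructions, nothing remotely like Sun's actual design. The point is simply that any real computer able to run this loop can run any program written for the imaginary machine, at the cost of interpreting each instruction as it goes.

```c
#include <stdio.h>

/* A toy "virtual machine": an imaginary stack computer with four
   made-up instructions. Any real computer that implements this loop
   can run any program written for the imaginary one. */
enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

static void run(const int *program) {
    int stack[64], sp = 0, pc = 0;
    for (;;) {
        switch (program[pc++]) {
        case OP_PUSH:  stack[sp++] = program[pc++];       break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp];  break;
        case OP_PRINT: printf("%d\n", stack[sp - 1]);     break;
        case OP_HALT:  return;
        }
    }
}

int main(void) {
    /* "Bytecode" for: print 2 + 3 */
    int program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
    run(program);
    return 0;
}
```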

Like almost every other part of the programming language that would eventually become known as Java, the idea of a virtual machine was far from new in the abstract. (“In some sense, I would like to think that there was nothing invented in Java,” says Gosling.) For example, a decade before Gosling went to work on his virtual machine, the Apple Pascal compiler was already targeting one that ran on the lowly Apple II, even as the games publisher Infocom was distributing its text adventures across dozens of otherwise incompatible platforms thanks to its Z-Machine.

Unfortunately, Gosling’s new implementation of this old concept proved unable to solve by itself the original problem for which it had been invented. Even Wi-Fi didn’t exist at this stage, much less the likes of Bluetooth. Just how were all of these smart gadgets supposed to actually talk to one another, to say nothing of pulling down the regular software updates which Gosling envisioned as another benefit of his project? (Building a floppy-disk drive into every toaster was an obvious nonstarter.) After reluctantly giving up on their home of the future, the team pivoted for a while toward “interactive television,” a would-be on-demand streaming system much like our modern Netflix. But Sun had no real record in the consumer space, and cable-television providers and other possible investors were skeptical.

While Gosling was trying to figure out just what this programming language and associated runtime environment he had created might be good for, the World Wide Web was taking off. In July of 1994, a Sun programmer named Patrick Naughton did something that would later give Bill Gates nightmares: he wrote a fairly bare-bones Web browser in Java, more for the challenge than anything else. A couple of months later there came the eureka moment: Naughton and another programmer named Jonathan Payne made it possible to run other Java programs, or “applets” as they would soon be known, right inside their browser. They stuck one of the team’s old graphical demos on a server and clicked the appropriate link, whereupon they were greeted with a screen full of dancing Coca-Cola cans. Payne found it “breathtaking”: “It wasn’t just playing an animation. It was physics calculations going on inside a webpage!”

In order to appreciate his awe, we need to understand what a static place the early Web was. HTML, the “language” in which pages were constructed, was an abbreviation for “Hypertext Markup Language.” In form and function, it was more akin to a typesetting specification than a Turing-complete programming language like C or Pascal or Java; the only form of interactivity it allowed for was the links that took the reader from static page to static page, while its only visual pizazz came in the form of static in-line images (themselves a relatively recent addition to the HTML specification, thanks to NCSA Mosaic). Java stood to change all that at a stroke. If you could embed programs running actual code into your page layouts, you could in theory turn your pages into anything you wanted them to be: games, word processors, spreadsheets, animated cartoons, stock-market tickers, you name it. The Web could almost literally come alive.

The potential was so clearly extraordinary that Java went overnight from a moribund project on the verge of the chopping block to Sun’s top priority. Even Bill Joy, now living in blissful semi-retirement in Colorado, came back to Silicon Valley for a while to lend his prodigious intellect to the process of turning Java into a polished tool for general-purpose programming. There was still enough of the old-school hacker ethic left at Sun that management bowed to the developers’ demand that the language be made available for free to individual programmers and small businesses; Sun would make its money on licensing deals with bigger partners, who would pay for the Java logo on their products and the right to distribute the virtual machine. The potential of Java certainly wasn’t lost on Netscape’s Marc Andreessen, who had long been leading the charge to make the Web more visually exciting. He quickly agreed to pay Sun $750,000 for the opportunity to build Java into the Netscape Navigator browser. In fact, it was Andreessen who served as master of ceremonies at Java’s official coming-out party at a SunWorld conference on May 23, 1995 — i.e., three days before Bill Gates wrote his urgent Internet memo.

What was it that so spooked him about Java? On the one hand, it represented a possible if as-yet unrealized challenge to Microsoft’s own business model of selling boxed software on floppy disks or CDs. If people could gain access to a good word processor just by pointing their browsers to a given site, they would presumably have little motivation to invest in Microsoft Office, the company’s biggest cash cow after Windows. But the danger Java posed to Microsoft might be even more extreme. The most maximalist predictions, which were being trumpeted all over the techie press in the weeks after the big debut, had it that even Windows could soon become irrelevant courtesy of Java. This is what Microsoft’s own Ben Slivka meant when he said that “the Web is the next platform.” The browser itself would become the operating system from the perspective of the user, being supported behind the scenes only by the minimal amount of firmware needed to make it go. Once that happened, a new generation of cheap Internet devices would be poised to replace personal computers as the world now knew them. With all software and all of each person’s data being stored in the cloud, as we would put it today, even local hard drives might become passé. And then, with Netscape Navigator and Java having taken over the role of Windows, Microsoft might very well join IBM, the very company it had so recently displaced from the heights of power, in the crowded field of computing’s has-beens.

In retrospect, such predictions seem massively overblown. Officially labeled beta software, Java was in reality more like an alpha release at best at the time it was being celebrated as the Paris to Microsoft’s Achilles; it was painfully crash-prone and slow. And even when it did reach a reasonably mature form, the reality of it would prove considerably less than the hype. One crippling weakness that would continue to plague it was the inability of a Java applet to communicate with the webpage that spawned it; applets ran in Web browsers, but weren’t really of them, being self-contained programs siloed off in a sandbox of their own. Meanwhile the prospects of applications like online word processing, or even online gaming in Java, were sharply limited by the fact that at least 95 percent of Web users were accessing the Internet on dial-up connections, over which even the likes of a single high-resolution photograph could take minutes to load. A word processor like the one included with Microsoft Office would require hours of downloading every time you wanted to use it, assuming it was even possible to create such a complex piece of software in the fragile young language. Java never would manage to entirely overcome these issues, and would in the end enjoy its greatest success in other incarnations than that of the browser-embedded applet.

Still, cooler-headed reasoning like this was not overly commonplace in the months after the SunWorld presentation. By the end of 1995, Sun’s stock price had more than doubled on the strength of Java alone, a product yet to see a 1.0 release. The excitement over Java probably contributed as well to Netscape’s record-breaking initial public offering in August. A cavalcade of companies rushed to follow in the footsteps of Netscape and sign Java distribution deals, most of them on markedly more expensive terms. Even Microsoft bowed to the prevailing winds on December 7 and announced a Java deal of its own. (BusinessWeek magazine described it as a “capitulation.”) That all of this was happening alongside the even more intense hype surrounding the release of Windows 95, an operating system far more expansive than any that had come out of Microsoft to date but one that was nevertheless of a very traditionalist stripe at bottom, speaks to the confusion of these go-go times when digital technology seemed to be going anywhere and everywhere at once.

Whatever fear and loathing he may have felt toward Java, Bill Gates had clearly made his peace with the fact that the Web was computing’s necessary present and future. The Microsoft Network duly debuted as an icon on the default Windows 95 desktop, but it was now pitched primarily as a gateway to the open Web, with just a handful of proprietary features; MSN was, in other words, little more than yet another Internet service provider, of the sort that were popping up all over the country like dandelions after a summer shower. Instead of the 20 million subscribers that some had predicted (and that Steve Case had so feared), it attracted only about 500,000 customers by the end of the year. This left it no more than one-eighth as large as AOL, which had by now completed its own deft pivot from proprietary online service of the 1980s type to the very face of the World Wide Web in the eyes of countless computing neophytes.

Yet if Microsoft’s first tentative steps onto the Web had proved underwhelming, people should have known from the history of the company — and not least from the long, checkered history of Windows itself — that Bill Gates’s standard response to failure and rejection was simply to try again, harder and better. The real war for online supremacy was just getting started.

(Sources: the books Overdrive: Bill Gates and the Race to Control Cyberspace by James Wallace, The Silicon Boys by David A. Kaplan, Architects of the Web by Robert H. Reid, Competing on Internet Time: Lessons from Netscape and Its Battle with Microsoft by Michael Cusumano and David B. Yoffie, dot.con: The Greatest Story Ever Sold by John Cassidy, Stealing Time: Steve Case, Jerry Levin, and the Collapse of AOL Time Warner by Alec Klein, Fools Rush In: Steve Case, Jerry Levin, and the Unmaking of AOL Time Warner by Nina Munk, and There Must be a Pony in Here Somewhere: The AOL Time Warner Debacle by Kara Swisher.)

 
 


Doing Windows, Part 10: Chicago

(As the name would indicate, this article marks a belated continuation of my series about the life and times of Microsoft Windows. But, because any ambitious dive into history such as this site has become is doomed to be a tapestry of stories rather than a single linear one, this article and the next couple of them will also pull on some of the other threads I’ve left dangling — most obviously, my series on the origins of the Internet and the World Wide Web, on the commercial online networks of the early personal-computing era, and on the shareware model for selling software online and the changes it wrought in the culture of gaming in particular. You might find some or all of the aforementioned worthwhile to read before what follows. Or just dive in and see how you go; it’s all good.)

For the vast majority of us in the PC software business, it’s important to realize that systems such as Windows 95 will be important and that systems such as Windows NT won’t be. Evolutionary changes are much easier for the market to accept. For a revolutionary upset to be accepted, it must be an order of magnitude better than what it seeks to replace. Not 25 percent or 33 percent better, but at least ten times better. Otherwise, change had better be gradual, like Windows 95. Products such as NT speak to too small a niche to be interesting. And even the NT sales that do occur don’t lead anywhere: right now I’m running on a network with an NT server, but no software is ever likely to be bought for that server. It sits in a closet that no one touches for weeks at a time. This is not the sort of platform on which to base your fortune.

If you’re choosing platforms for which to develop software, remember that what ultimately matters is not technical excellence but market penetration. The two rarely go hand-in-hand. This is not simply a matter of bowing to the foolish whims of the market, however: market penetration leads to standardization, and standards have tangible benefits that are more important than the coolest technical feature. Yes, Windows 95 still uses MS-DOS; no, it’s not a pure Win32 system; no, it’s not particularly integrated; no, it hasn’t been rewritten from the ground up; and yes, it is lacking some nice features found in Windows NT or OS/2. But none of these compromises will hurt Windows 95’s chances for success, and some will actually help make Windows 95 a success. Windows 95 will be the standard desktop-computing platform for the next five years, and that by itself is worth far more than the coolest technology.

— Andrew Schulman, 1994

In July of 1992, Microsoft hosted the first Windows NT Professional Developers Conference in San Francisco. The nearly 5000 hand-picked attendees were each given a coveted pre-release “developer’s version” of Windows NT (“New Technology”), the company’s next-generation operating system. “The major operating systems of today, DOS and Windows, were designed eight to twelve years ago, so they lie way behind our current hardware capabilities,” said one starry-eyed Microsoft partner. “We’ve now got bigger disks, displays, and memory, and faster CPUs than ever before. As a true 32-bit operating system, Windows NT exploits the power of the 32-bit chip.” Unlike Microsoft’s current 3.1 version of Windows and its predecessors, which were balanced precariously on the narrow foundation of MS-DOS like an elephant atop a light pole, Windows NT owed nothing to the past, and performed all the better for it.

But what follows is not the story of Windows NT.

It is rather the story of another operating system that was publicly mentioned for the very first time in passing at that same conference, an operating system whose user base over the course of the 1990s would eclipse that of Windows NT by a margin of about 50 to 1. Microsoft was calling it “Chicago” in 1992. The name derived from “Cairo,” a code name for a projected future version of Windows NT. “We wanted something between Seattle” — Microsoft’s home metropolitan area, which presumably stood for the current status quo — “and Cairo in terms of functionality,” said a Microsoft executive later. “The less ambitious picked names closer to Seattle — like Spokane for a minor upgrade, all the way to London for something closer to Cairo.” Chicago seemed like a suitable compromise — a daunting distance to travel, but not too daunting. The world would come to know the erstwhile Chicago three years later as Windows 95. It would become the most ballyhooed new operating system in the entire history of computing, even as it remained a far more compromised, less technically impressive piece of software architecture than Windows NT.

Why did Microsoft split their efforts along these two divergent paths? One answer lay in the wildly different hardware that was used to run their operating systems. Windows NT was aimed at the latest and the greatest, while Chicago was aimed at the everyday computers that everyday people tended to have in their offices and homes. But another reason was just as important. Microsoft had gotten to where they were by the beginning of the 1990s — to the position of the undisputed dominant force in personal computing — not by always or even usually having the best or most innovative products, but rather by being always the safe choice. “No one ever got fired for buying IBM,” ran an old maxim among corporate purchasing managers; in this new era, the same might be said about Microsoft. Part of being safe was placing a heavy emphasis on backward compatibility, thus ensuring that the existing software an individual or organization had gotten to know and love would continue to run on their shiny new Microsoft operating system. In the context of the early 1990s, this meant, for better or for worse, continuing to build at least one incarnation of Windows on top of MS-DOS, so that it could continue to run even a program written for the original IBM PC from 1981. Windows NT broke that compatibility in the name of power and performance — but, if getting that 1985-vintage version of WordPerfect up and running was more important to you than such distractions, Microsoft still had you covered.

Which isn’t to say, of course, that Microsoft wouldn’t have preferred for you to give up your hoary old favorites and enter fully into the brave new Windows world of mice and widgets. They had struggled for most of the 1980s to make Windows into a place where people wanted to live and work, and had finally broken through at the dawn of the new decade, with the release of Windows 3.0 in 1990 and 3.1 in 1992. The old stars of MS-DOS productivity software — names like the aforementioned WordPerfect, as well as Lotus, Borland, and others — were scrambling to adapt their products to a Windows-driven marketplace, even as Microsoft, whose ambitions for domination knew few bounds, was driving aggressively into the gaps with their own Microsoft Office lineup, which was tightly integrated with the operating system in ways that their competitors found difficult to duplicate. (This was due not least to Microsoft’s ability to take advantage of so-called “undocumented APIs,” hidden features and shortcuts provided by Windows which the company neglected to tell its competitors about — an underhanded trick that was an open secret in the software industry.) By  the summer of 1993, when Windows NT officially debuted with very little fanfare in the consumer press, Windows 3.x had sold 30 million copies in three years, and was continuing to sell at the healthy clip of 1.5 million copies per month. Windows had become the face of computing as the majority of people knew it, the MS-DOS command line a dusty relic of a less pleasant past.

With, that is, one glaring exception that is of special interest to us: Windows 3 had never caught on for hardcore gaming, and never would. Games were played on Windows 3, mind you. In fact, they were played extensively. Microsoft Solitaire, which was included with every copy of Windows, is almost certainly the single most-played computer game in history, having served as a distraction for hundreds of millions of bored office workers and students all over the world from 1990 until the present day. Some other games, generally of the sort that weren’t hugely demanding in hardware terms and that boasted a fair measure of casual appeal, did almost equally well. Myst, for example, sold an astonishing 5 million or more copies for Windows 3, while Microsoft’s own “Entertainment Packs,” consisting mostly of more simple time fillers much like Solitaire, also did very well for themselves.

But then there were the hardcore gamers, the folks who considered gaming an active hobby rather than a passive distraction, who waited eagerly for each new issue of Computer Gaming World to arrive in their mailbox and spent hundreds or thousands of dollars every year keeping the “rigs” in their bedrooms up to date, in much the same way that a previous generation of mostly young men had tinkered endlessly with the hot rods in their garages. The people who made games for this group told Microsoft, accurately enough, that Windows as it was currently constituted just wouldn’t do for their purposes. It was too inflexible in its assumptions about the user interface and much else, and above all just too slow. They loved the idea of a runtime environment that would let them forget about the idiosyncrasies of 1000 different graphics and sound cards, thanks to the magic of integrated device drivers. But it had to be flexible, and it had to be fast — and Windows 3 was neither of those things. Microsoft admitted in one of their own handbooks that “game graphics under Windows make slug racing look exciting.”

One big issue that game developers had with Windows 3 for a long time was that it was a 16-bit operating system in a world where even the most ordinary off-the-shelf computer hardware had long since gone 32 bit. The largest address that can be expressed in 16 bits is 65,535, which works out to just 64 kilobytes of reachable memory. A 16-bit program can therefore only allocate memory in discrete segments of no more than 64 K. This became more and more of a problem as games grew more complex in terms of logic and especially graphics and sound. MS-DOS was also 16-bit, but, being far simpler, it was much easier to hack. The tools known as “32-bit DOS extenders” did just that, giving game developers a way of using 32-bit processors to their maximum potential more or less transparently, with a theoretical upper limit of fully 4 GB per memory segment. (This was, needless to say, much, much more memory than anyone actually had in their computers in the early 1990s.) Ironically, Windows 3 itself depended on a 32-bit DOS extender to be able to run on top of MS-DOS, but it didn’t extend all of its benefits to the applications it hosted. That finally changed, however, in July of 1993, when Microsoft released an add-on called “Win32S” that made it possible to run 32-bit applications in Windows 3 (including many applications written for Windows NT).
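The arithmetic behind those limits is easy to verify for yourself. The little C program below just prints the address ranges involved; it is an illustration of the numbers, not of how any particular DOS extender actually worked.

```c
#include <stdio.h>

int main(void) {
    /* A 16-bit offset can distinguish 2^16 addresses (0 through 65,535)... */
    unsigned long long seg16 = 1ULL << 16;
    /* ...while a 32-bit offset can distinguish 2^32 of them. */
    unsigned long long seg32 = 1ULL << 32;

    printf("16-bit segment limit: %llu bytes (%llu K)\n",
           seg16, seg16 / 1024);
    printf("32-bit flat limit:    %llu bytes (%llu GB)\n",
           seg32, seg32 / (1024ULL * 1024 * 1024));
    return 0;
}
```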

That was one problem more or less solved. But another one was the painfully slow Windows graphics libraries that served as the intermediary between applications software and the bare metal of the machine. These were impossible to bypass by design; one of the major points of Windows was to provide a buffer between applications and the hardware, to enable features such as multitasking, virtual memory, and a consistent look and feel from program to program. But game developers saw only how slow the end result was. The only way they could consider coding for Windows was if Microsoft could provide libraries that were as fast — or at least 90 percent as fast — as banging the bare metal in MS-DOS.

In the meantime, game developers would continue to write for vanilla MS-DOS and to sweat the details of all those different graphics and sound cards for themselves, and the hardcore gamers would have to continue to spend hours tweaking memory settings and IRQ addresses in order to get each new game they bought up and running just exactly perfectly. Admittedly, some gamers did consider this almost half the fun, a talent for it as much a badge of honor as a high score in Warcraft; boys do love their technological toys, after all. Still, it was obvious to any sensible observer that the games industry as a whole would be better served by a universal alternative to the current bespoke status quo. Hardcore gamers made up a relatively small proportion of the people using computers, but they were a profitable niche, what with their voracious buying habits, and they were also trail blazers and influencers in their fashion. It would seem that Microsoft had a vested interest in keeping them happy.

Windows NT might sound like the logical place for such early adopters to migrate, but this was not Microsoft’s view. “Serious” users of computers in corporate and institutional environments — the kind at which Windows NT was primarily targeted — had a long tradition of looking down on computers that happened to be good at playing games, and this attitude had by no means disappeared entirely by the early 1990s. In short, Microsoft had no wish to muddy the waters surrounding their most powerful operating system with a bunch of scruffy gamers. Games of all stripes were to be left to the consumer-grade operating systems, meaning the current Windows 3 and the forthcoming Chicago. And even there, they seemed to be a dismayingly low priority for Microsoft in the eyes of the people who made them and played them.

This doesn’t mean that there was no progress whatsoever. By very early in 1994, a young Microsoft programmer named Chris Hecker, working virtually alone, had put together a promising system called WinG, which let Windows games and other software render graphics surprisingly quickly to a screen buffer, with a minimum of interference from the heretofore over-officious operating system.

Hecker knew exactly what game to target as a proof of concept for WinG: DOOM, id Software’s first-person shooter, which had recently risen up from the shareware underground to complete the remaking of a broad swath of gamer culture in the image of id’s fast-paced, ultra-violent aesthetic. If DOOM could be made to run well under WinG, that would lend the system an instant street cred that no other demonstration could possibly have equaled. So, Hecker called up John Carmack, the man behind the DOOM engine. A skeptical Carmack said he didn’t have time to learn the vagaries of WinG and do the port, even assuming it was possible, whereupon Hecker said that he would do it himself if Carmack would just give him the DOOM source — under the terms of a strict confidentiality agreement, of course. Carmack agreed, and Hecker did the job in a single frenzied weekend. (It doubtless helped that Carmack’s DOOM code, which has long since been released to the entire world, is famously clean and readable, and thus eminently portable.)

Hecker brought WinDOOM, as he called it, to the Computer Game Developers Conference in April of 1994, the place where the leading lights of the industry gathered to talk shop among themselves. When he showed them DOOM running at full speed on Windows, just four months after it had become a sensation on MS-DOS, they were blown away. “WinG could usher in a whole new era for computer-based entertainment,” wrote Computer Gaming World breathlessly in their report from the conference. “As a result of this effort, we should expect to see universal installation routines, hardware independence, and an end to the memory-configuration haze that places a minimum technical-expertise barrier over our hobby and keeps out the novice user.”

Microsoft officially released WinG as a Windows 3 add-on in September of 1994, but it never quite lived up to its glowing advance billing. Hecker was a lone-wolf coder, and by some reports at least a decidedly difficult one to work with. Microsoft insiders from the time characterize WinG more as a “hack” than a polished piece of software engineering. Hecker “was able to take a piece of shit called Windows and make games work on it,” says Rick Segal, a Microsoft executive who was then in charge of “multimedia evangelism.” “He strapped a jet engine on a Beechcraft and got the thing in the air.” But when developers started trying to work with it in the real world, “the wings came off first, followed by the rest of the plane.” That’s perhaps overstating the case: WinG combined with Win32S was used to bring a few dozen games to Windows more or less satisfactorily between 1994 and 1997, from strategy games like Colonization to adventure games like Titanic: Adventure Out of Time. WinG was not so much a defective tool as a sharply limited one. While it gave developers a way of getting graphics onto the screen reasonably quickly, it gave them no help with the other pressing problems of sound, joysticks and other controllers, and networking in a game context.

Many of Microsoft’s initiatives during this period were organized by and around their team of “evangelists,” charismatic bright sparks who were given a great deal of freedom and a substantial discretionary budget in the cause of advancing the company’s interests and “fucking the competition,” as it was put by the evangelist for WinG, an unforgettable character named Alex St. John. St. John was a 350-pound grizzly bear of a man who had spent much of his childhood in the wilds of Alaska, and still sported a lumberjack’s beard and a backwoods sartorial sense; in the words of one horrified Microsoft marketing manager, he “looked like a bomb going off.” Shambling onto the stage, the living antithesis of the buttoned-down Microsoft rep that everybody expected, he told his audiences of gamers and game developers that he knew just what they thought of Windows. Then he showed them a clip of a Windows logo being blown away by a shotgun. “The gamers loved it,” says Rick Segal. “They thought they had someone who had their interests at heart.”

St. John soon decided that his constituency deserved something much, much better than WinG. His motivations were at least partly personal. He had come to loathe Chris Hecker, who was intense in a quieter, more penetrating way that didn’t mix well with St. John’s wild-man persona; St. John was therefore looking for a way to freeze Hecker out. But he was also sincere in his belief that WinG just didn’t go far enough toward making Windows a viable platform for hardcore gaming. With Chicago on the horizon, now was the perfect time to change that. He thundered at his bosses that games were a $5 billion market already, and they were just getting started. Windows’s current ineptitude at running them threatened Microsoft’s share in not only that market but the many other consumer-computing spaces that surrounded it. At some point, game developers would say farewell to antiquated MS-DOS. If Microsoft didn’t provide them with a viable alternative, somebody else would.

He rallied two programmers by the names of Craig Eisler and Eric Engstrom to his cause. In attitude and affect, the trio seemed a better fit for the unruly halls of id Software than those of Microsoft. They ran around terrorizing their colleagues with plastic battle axes, and gave their initiative the rather tasteless name of The Manhattan Project — a name their managers found especially inappropriate in light of Japan’s importance in gaming. But they remained unapologetic: “The Manhattan Project changed the world, for good or bad,” shrugged Eisler. “And we really like nuclear explosions.”

As I just noted, St. John’s title of evangelist afforded him a considerable degree of latitude and an equally considerable financial war chest. Relying more on the absence of any definitive rejection from the higher-ups than on any real affirmation of their schemes, the trio wrote the first lines of code for their new, fresh-from-the-ground-up tools for Windows gaming on December 24, 1994. (The date was characteristic of these driven young men, who barely noticed a family holiday such as Christmas.) St. John was determined to have something to show the industry at the next Computer Game Developers Conference in less than four months.

Alex St. John, Craig Eisler, and Eric Engstrom prepare to run amok.

WinG was also still alive at this point, under the stewardship of the hated Chris Hecker — but not for long. Disney had released a CD-ROM tie-in to The Lion King, the year’s biggest movie, just in time for that Christmas of 1994. It proved a debacle; hundreds of thousands of children unwrapped the box on Christmas morning, pushed the shiny disc eagerly into the family computer… and found out that it just wouldn’t work, no matter how long Mom and Dad fiddled with it. The Internet lit up with desperate parents of sobbing children, and news of the crisis soon reached USA Today and Billboard, who declared Disney’s “Animated Storybook” to be 1994’s Grinch: the game that had ruined Christmas.

Although the software used WinG, that was neither the only nor the worst source of its problems. (That honor goes to its support for 16-bit sound cards only, as stipulated in tiny print on the box, at a time when many or most people still had 8-bit sound cards and the large majority of computer owners had no idea whether they had the one or the other.) Nevertheless, the disaster was laid at the feet of WinG inside the games industry, creating an overwhelming consensus that a far more comprehensive solution was needed if games were ever to move en masse from MS-DOS to Windows. Alex St. John shed no tears: “I was happy to be proven right about WinG’s inadequacy.” The WinG name was hopelessly tainted now, he argued. Chris Hecker was moved to another project, an event which marked the end of active development on WinG. When it came to Windows gaming in the long term, it was now the Manhattan Project or bust.

By the spring of 1995, St. John had managed to assemble a team of about a dozen programmers, mostly contractors with something to prove rather than full-time Microsoft employees. They settled on the label of “Direct” for their suite of libraries, a reference to the way that they would let game programmers get right down to making cool things happen quickly, without having to mess around with all of the usual Windows cruft. DirectDraw would do what WinG had done only better, letting programmers draw on the screen where, how, and when they would; DirectSound would give the same level of flexible control over the sound hardware; DirectInput would provide support for joysticks and the like; and DirectPlay would be in some ways the most forward-looking piece of all, providing a complete set of tools for online multiplayer gaming. The collection as a whole would come to be known as DirectX. St. John, a man not prone to understatement, told Computer Gaming World that “the PC game market has been suppressed for two major reasons: difficulty with installation and configuration, and lack of significant new hardware innovation for games, because developers have had to code so intimately to the metal that it has become a nightmare to introduce new hardware and get it widely adopted. We’re going to bring all the benefits of device independence to games, and none of the penalties that have discouraged them from using APIs.”

It’s understandable if many developers greeted such broad claims with suspicion. But plenty of them became believers in April of 1995, when Alex St. John crashed into the Computer Game Developers Conference like a force of nature. Founded back in 1988 by Chris Crawford, one of gaming’s most prominent philosophers, the CGDC had heretofore been a fairly staid affair, a domain of gray lecture halls and earnest intellectual debates over the pressing issues of the day. “My job was to see DirectX launched successfully,” says St. John. “I concluded that if we set up a session or a suite at the conference itself, no one would come. Microsoft would have to do something so spectacular that it couldn’t be ignored.” So, he rented out the entirety of the nearby Great America amusement park and invited everyone to come out on the day after the conference ended for rides and fun — and, oh, yes, also a presentation of this new thing called DirectX. When he took the stage, the well-lubricated crowd started mocking him with a chant of “DOS! DOS! DOS!” But the chanting ceased when St. John pulled up a Windows port of a console hit called Bubsy, running at 83 frames per second. It became clear then and there that the days of MS-DOS as the primary hardcore-gaming platform were as numbered as those of the old, hype-immune, comfortably collegial CGDC — both thanks to Alex St. John.

St. John and company had never intended to make a version of DirectX for Windows 3; it was earmarked for Chicago, or rather Windows 95, the now-finalized name for Microsoft’s latest consumer operating system. And indeed, most of us old-timer gamers still remember the switch to Windows 95 as the time when we began to give up our MS-DOS installations and have our fun as well as get our work done under Windows. But for all that DirectX couldn’t exist outside of Windows 95, it wasn’t quite of Windows 95. It wasn’t included with the initial version of the operating system that finally shipped, a year behind schedule, in August of 1995; the first official release of DirectX didn’t appear until a month later. “DirectX was built to be parasitic,” says St. John. “It was carried around in games, not the operating system.” What he means is that he arranged to make it possible for game publishers to distribute the libraries free of charge on their installation CDs. When a game was installed, it checked to see whether DirectX was already on the computer, and if so whether the version there was as new as or newer than the one on the CD; if the answer to either of these questions was no, the latest version of DirectX was installed alongside the game it enabled. In an era when Internet connectivity was still spotty and online operating-system updates still a new frontier, this approach doubtless saved game makers many, many thousands of tech-support calls.
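In rough outline, the check a game’s setup program performed amounted to the comparison sketched below. I have made up the helper names and the CD path purely for illustration; they are not real Microsoft APIs, and only the compare-then-install logic is the point.

```c
#include <stdio.h>

/* Hypothetical stand-ins for what a mid-1990s installer actually did;
   these stubs just pretend an older DirectX (version 2) is present. */
static int GetInstalledDirectXVersion(void) { return 2; }  /* 0 = absent */
static int RunDirectXSetup(const char *redist) {
    printf("Running %s...\n", redist);
    return 0;
}

/* Install the DirectX shipped on the game's CD only if the machine has
   nothing as new or newer -- the "parasitic" model described above. */
static int EnsureDirectX(int versionOnCD, const char *redist) {
    int installed = GetInstalledDirectXVersion();
    if (installed >= versionOnCD) {
        printf("DirectX %d already present; skipping.\n", installed);
        return 0;
    }
    printf("Upgrading from version %d to version %d.\n",
           installed, versionOnCD);
    return RunDirectXSetup(redist);
}

int main(void) {
    /* The drive letter and path are illustrative placeholders. */
    return EnsureDirectX(3, "D:\\REDIST\\DXSETUP.EXE");
}
```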

Now, though, we should have a look at some of the new features that were an integral part of Windows 95 from the start. Previous versions of Windows were more properly described as operating environments than full-fledged operating systems; one first installed MS-DOS, then installed Windows on top of that, starting it up via the MS-DOS command line. Windows 95, on the other hand, presented itself to the world as a self-contained entity; one could install it to an entirely blank hard drive, and could boot into it without ever seeing a command line. Yet the change really wasn’t as dramatic as it appeared. Unlike Windows NT, Windows 95 still owed much to the past, and was still underpinned by MS-DOS; the elephant balanced on a light pole had become a blue whale perched nimbly up there on one fin. Microsoft had merely become much more thorough in their efforts to hide this fact.

And we really shouldn’t scoff at said efforts. Whatever its underpinnings, Windows 95 did a very credible job of seeming like a seamless experience. Certainly it was by far the most approachable version of Windows ever. It had a new interface that was a vast improvement over the old one, and it offered countless other little quality-of-life enhancements to boot. In fact, it stands out today as nothing less than the most dramatic single evolutionary leap in the entire history of Windows, setting in place a new usage paradigm that has been shifted only incrementally in all the years since. A youngster of today who has been raised on Windows 10 or 11 would doubtless find Windows 95 a bit crude and clunky in appearance, but would be able to get along more or less fine in it without any coaching. This is much less true in relation to Windows 3 and its predecessors. Tellingly, whenever Microsoft has tried to change the Windows 95 interface paradigm too markedly in the decades since, users have complained so loudly that they’ve been forced to reverse course.

Windows 95 may still have been built on MS-DOS, but 32-bit applications were now the standard, the ability to run 16-bit software relegated to a legacy feature in the name of Microsoft’s all-important backward compatibility. (Microsoft went to truly heroic lengths in the service of the latter, to the extent of special-casing a raft of popular programs: “If you’re running this specific program, do this.” An awful kludge, but needs must…) Another key technical feature, from which tens of millions of people would benefit without ever realizing they were doing so, was “Plug and Play,” which made installing new hardware a mere matter of plugging it in, turning on the computer, and letting the operating system do the rest; no more fiddling about with an alphabet soup of IRQ, DMA, and port settings, trying to hit upon the magic combination that actually worked. Equally importantly, Windows 95 introduced preemptive multitasking in place of the old cooperative model, meaning the operating system would no longer have to depend upon the willingness of individual programs to yield time to others, but could and would hold them to its own standards. At a stroke, all kinds of scenarios — like, say, rendering 3D graphics in the background while doing other work (or play) in the foreground — became much more practical.

A Quick Tour of Windows 95


One of the simplest but most effective ways that Microsoft concealed the still-extant MS-DOS underpinnings of Windows 95 and made it seem like its own, self-contained thing was giving it a graphical boot screen.

It seems almost silly to exhaustively explicate Windows 95’s interface, given that it’s largely the one we still see in Windows today. Nevertheless, I started a tradition in the earlier articles in this series that I might as well continue. So, note that the old “Program Manager” master window has been replaced by a Mac-like full-screen desktop, with a “Start” Menu of all installed applications at the bottom left, a task bar at the bottom center for switching among running applications, and quick-access icons and the clock at the bottom right. Window-manipulation controls too have taken on the form we still know today, with minimize, maximize, and close buttons all clustered at the top right of each window.

Plug And Play was one of the most welcome additions to Windows 95. Instead of manually fiddling with esoteric settings, you just plugged in your hardware and let Windows do it all for you.

Microsoft bent over backward to make Windows 95 friendly and approachable for the novice. What experienced users found annoying and condescending, new users genuinely appreciated. That said, the hand-holding would only get more belabored in the future, trying the patience of even many non-technical users. (Does anyone remember Clippy?)

In keeping with its role in the zeitgeist, the Windows 95 CD-ROM included a grab bag of random pop-culture non sequiturs, such as a trailer for the movie Rob Roy and a Weezer music video.

While Windows 95 made a big point of connectivity and did include a built-in TCP/IP stack for getting onto the Internet, it initially sported no Web browser. But that would soon change, with consequences that would reverberate from Redmond, Washington, to Washington, D.C., from Silicon Valley to Brussels.

The most obvious drawback to Windows’s hybrid architecture was its notorious instability; the “Blue Screen of Death” became an all too familiar sight for users. System crashes tended to stem from those places where the new rubbed up against the old — from the point of contact, if you will, between the blue whale’s flipper and the light pole.



Windows 95 stretched the very definition of what should constitute an operating system; it was the first version of Windows on which you could do useful things without installing a single additional application, thanks to built-in tools like WordPad (a word processor more full-featured than many of the commercially available ones of half a decade earlier) and Paint (as the name would imply, a paint program, and a surprisingly good one at that). Some third-party software publishers, suddenly faced with the prospect of their business models going up in smoke, complained vociferously to the press and to the government about this bundling. Nonetheless, the lines between operating systems and applications had been blurred forever.

Indeed, this was in its way the most revolutionary of all aspects of Windows 95, an operating system that otherwise still had one foot rooted firmly in the past. That didn’t much matter to most people because it was a new piece of software engineering second, a flashy new consumer product first. Well before the launch, a respected tech journalist named Andrew Schulman told how “the very name Windows 95 suggests this product will play a leading role” in “the movement from a technology-based into a consumer-product-based industry.”

If a Windows program queries the GetVersion function in Windows 95, it will get back 4.0 as the answer; a DOS program will get back the answer 7.0. But in its marketing, Microsoft has decided to trade in the nerdy major.minor version-numbering scheme (version x.0 had always given the company trouble anyway) for a new product-naming scheme based on that used by automobile manufacturers and vineyards. Windows 95 isn’t foremost a technology or an operating system; it’s a product. It is targeted not at developers or end users but at consumers.
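For the curious, querying that number from a Win32 program really did come down to a call to the GetVersion API, roughly as in the minimal sketch below, which follows the documented way of unpacking the result; note that on modern Windows the same call is deprecated and may report something other than the true version.

```c
#include <windows.h>
#include <stdio.h>

int main(void) {
    /* GetVersion packs the version into a DWORD: the low byte of the low
       word is the major version, the next byte the minor version. On
       Windows 95 a Win32 program sees 4.0, just as Schulman describes. */
    DWORD v = GetVersion();
    DWORD major = (DWORD)(LOBYTE(LOWORD(v)));
    DWORD minor = (DWORD)(HIBYTE(LOWORD(v)));
    printf("Reported Windows version: %lu.%lu\n", major, minor);
    return 0;
}
```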

In that spirit, Microsoft hired Brian Eno, a famed composer and producer of artsy rock and ambient music, to provide the now-iconic Windows 95 startup theme. Eno:

The thing from the agency said, “We want a piece of music that is inspiring, universal, blah-blah, da-da-da, optimistic, futuristic, sentimental, emotional,” this whole list of adjectives, and then at the bottom it said, “And it must be 3.25 seconds long.”

I thought this was funny, and an amazing thought to actually try to make a little piece of music. It’s like making a tiny little jewel.

In fact, I made 84 pieces. I got completely into this world of tiny, tiny little pieces of music. Then when I’d finished that and I went back to working with pieces that were like three minutes long, it seemed like oceans of time…

Ironically, Eno created this, his most-heard single composition, on an Apple Macintosh. “I’ve never used a PC in my life,” he said in 2009. “I don’t like them.”


On a more populist musical note, Microsoft elected to make the Rolling Stones tune “Start Me Up” the centerpiece of their unprecedented Windows 95 advertising blitz. By one report, they paid as much as $12 million to license the song, so enamored were they of its synergy with the new Windows 95 “Start” menu, apparently failing to notice in their excitement that the song is actually a feverish plea for sex. “[Mick] Jagger was half kidding” when he named that price, claimed an anonymous source. “But Microsoft was in a big hurry, so they took the deal, unlike anything else in the software industry, where they negotiate to death.” Of course, Microsoft was careful not to include in their commercials the main chorus of “You make a grown man cry.” (Much less the fade-out chorus of “You make a dead man come.”)


Microsoft spent more than a quarter of a billion dollars in all on the Windows 95 launch, making it by a veritable order of magnitude the most lavish product launch to that point in the history of the computer industry. One newspaper said the campaign was “how the Ten Commandments would have been launched, if only God had had Bill Gates’s money.” The goal was to make Windows, as journalist James Wallace put it, “the most talked-about consumer product since New Coke” — albeit one that would hopefully enjoy a better final fate. Both goals were achieved. If you had told an ordinary American on the street even five years earlier that a new computer operating system, of all things, would shortly capture the pop-culture zeitgeist so thoroughly, she would doubtless have looked at you like you had three heads. But now it was 1995, and here it was. The Cold War was over, the War on Terror not yet begun, the economy booming, and the wonders of digital technology at the top of just about everyone’s mind; the launch of a new operating system really did seem like just about the most important thing going on in the world at the time.

The big day was to be August 24, 1995. Bill Gates made 29 separate television appearances in the week leading up to it. A 500-foot banner was unfurled from the top floor of a Toronto skyscraper, while hundreds of spotlights served to temporarily repaint the Empire State Building in the livery of Windows 95. Even the beloved Doonesbury comic strip was co-opted, turning into a thinly veiled Windows 95 advertisement for a week. Retail stores all over the continent stayed open late on the evening of August 23, so that they could sell the first copies of Windows 95 to eager customers on the stroke of midnight. (“Won’t it be available tomorrow?” asked one baffled journalist of the people standing in line.) There were reports that some impressionable souls got so caught up in the hype that they turned up and bought a copy even though they didn’t own a computer on which to run it.

But the excitement’s locus was Microsoft’s Redmond, Washington, campus, which had been turned into a carnival grounds for the occasion, with fifteen tents full of games and displays and even a Ferris wheel to complete the picture. From here the proceedings were telecast live to millions of viewers all over the world. Gates took the stage at 11:00 AM with a surprise sparring partner: comedian Jay Leno, host of The Tonight Show, the country’s most popular late-night talk show. He worked the crowd with his broad everyman humor; this presentation was most definitely not aimed at the nerdy set. His jokes are as fine a time capsule of the mid-1990s as you’ll find. “To give you an idea of how powerful Windows 95 is, it is able to keep track of all O.J.’s alibis at once,” said Leno. Gates wasn’t really so much smarter than the rest of us; Leno had visited his house and found his VCR’s clock still blinking 12:00. As for Windows 95, it was like a good date: “smart, user-friendly, and under $100.” The show ended with Microsoft’s entire senior management team displaying their dubious dance moves up there onstage to the strains of “Start Me Up.” “It was the coolest thing I’ve ever been a part of,” gushed Gates afterward.

Bill Gates and Jay Leno onstage.

Windows 95 sold 1 million copies in its first four days, 30 million copies in its first seven months, 65 million copies in its first sixteen months. (For the record, this last figure was 15 million more copies than the best-selling album of all time, thus cementing the operating system’s place in pop-culture as well as technology history.) By the beginning of 1998, when talk turned to its successor Windows 98, it boasted an active user base three and a half times larger than that of Windows 3.

And by that same point in time, the combination of Windows 95 and DirectX had remade the face of gaming. A watershed moment arrived already just one year after the debut of Windows 95, when Microsoft used DirectX to make the first-ever Windows version of their hugely popular Flight Simulator, for almost a decade and a half now the company’s one really successful hardcore gamer’s game. From that moment on, DirectX was an important, even integral part of Microsoft’s corporate strategy. As such, it was slowly taken out of the hands of Alex St. John, Craig Eisler, and Eric Engstrom, whose bro-dude antics, such as hiring a Playboy Playmate to choose from willing male “slaves” at one industry party and allowing the sadomasochistic shock-metal band GWAR to attend another with an eight-foot tall anthropomorphic vagina and penis in tow, had constantly threatened to erupt into scandal if they should ever escape the ghetto of the gaming press and make it into the mainstream. Whatever else one can say about these three alpha-nerds, they changed gaming forever — and changed it for the better, as all but the most hidebound MS-DOS Luddites must agree. By the time Windows 98 hit the scene, vanilla MS-DOS was quite simply dead as a gaming platform; all new computer games for a Microsoft platform were Windows games, coming complete with quick and easy one-click installers that made gaming safe even for those who didn’t know a hard drive from a RAM chip. The DirectX revolution, in other words, had suffered the inevitable fate of all successful revolutions: that of becoming the status quo.

St. John’s inability to play well with others got him fired in 1997, while Eisler and Engstrom grew up and mellowed out a bit and moved into Web technologies at Microsoft. (The Web-oriented software stack they worked on, which never panned out to the extent they had hoped, was known as Chrome; it seems that everything old truly is new again at some point.)

Speaking of the Internet: what did Windows 95 mean for it, and vice versa? I must confess that I’ve been deliberately avoiding that question until now, because it has such a complicated answer. For if there was one tech story that could compete with the Windows 95 launch in 1995, it was surely that of the burgeoning World Wide Web. Just two weeks before Bill Gates enjoyed the coolest day of his life, Netscape Communications held its initial public offering, ending its first day as a publicly traded company worth a cool $2.2 billion in the eyes of stock buyers. Some people were saying even in the midst of all the hype coming out of Redmond that Microsoft and Windows 95 were computing’s past, a new era of simple commodity appliances connecting to operating-system-agnostic networks its future. Microsoft’s efforts to challenge this wisdom and compete on this new frontier were just beginning to take shape at the time, but they would soon become the company’s overriding obsession, with well-nigh earthshaking stakes for everyone involved with computers or the Web.

(Sources: the books Renegades of the Empire: How Three Software Warriors Started a Revolution Behind the Walls of Fortress Microsoft by Michael Drummond, Dungeons and Dreamers: The Rise of Computer Game Culture from Geek to Chic by Brad King and John Borland, Overdrive: Bill Gates and the Race to Control Cyberspace by James Wallace, The Silicon Boys by David A. Kaplan, Show-stopper!: The Breakneck Race to Create Windows NT and the Next Generation at Microsoft by G. Pascal Zachary, Masters of DOOM: How Two Guys Created an Empire and Transformed Pop Culture by David Kushner, Unauthorized Windows 95: A Developer’s Guide to the Foundations of Windows “Chicago” by Andrew Schulman, Undocumented Windows: A Programmer’s Guide to Reserved Microsoft Windows API Functions by Andrew Schulman, David Maxey, and Matt Pietrek, and Windows Internals: The Implementation of the Windows Operating Environment by Matt Pietrek; Computer Gaming World of August 1994, June 1995, and September 1995; Game Developer of August/September 1995; InfoWorld of March 15 1993; Mac Addict of April 2000; Windows Magazine of April 1996; PC Magazine of November 8 2005. Online sources include an Ars Technica piece on Microsoft’s efforts to keep Windows compatible with earlier software, a Usenet thread about the Lion King CD-ROM debacle which dates from Christmas Day 1994, a Music Network article about Brian Eno’s Windows 95 theme, an SFGate interview with Eno, and Chris Hecker’s overview of WinG for Game Developer. I owe a special thanks to Ken Polsson for his personal-computing chronology, which has been invaluable for keeping track of what happened when and pointing me to sources during the writing of this and other articles. Finally, I owe a lot to Nathan Lineback for the histories, insights, comparisons, and images found at his wonderful online “GUI Gallery.”)

 

Mastodon Feed

I’ve set up an account on Mastodon for posting new articles and (very rarely) other newsworthy tidbits. You can access it at https://oldbytes.space/@DigiAntiquarian. For the time being at least, this will exist in addition to rather than instead of my Twitter feed. I’ll only delete the latter if things go completely off the rails on Twitter. Like a lot of people, I’m in wait-and-see mode right now, but it never hurts to be prepared, right?

I’m new to Mastodon and still struggling just a bit to wrap my head around it, so do let me know if I’ve done anything horribly wrong.

 

The Pandora Directive

When we started out with Mean Streets, we wanted a vintage, hard-boiled detective from the 1930s and 1940s. You know, the Humphrey Bogart, Raymond Chandler classic character. But since then, we’ve changed our picture of [our detective] Tex [Murphy]. We want to see some human vulnerability. We don’t want the superhero. Too much of the videogame genre is just these invincible characters. They aren’t real; they don’t have texture; they don’t have any kind of fabric to their personality. It’s not very interesting really, dealing strictly with such a one-dimensional character.

For us, the idea was to make this person seem more real. Whether he’s fumbling around or whatever — okay, let’s give him a talent, but let’s put a few defects in his character. He’s still a good guy, but he screws up a lot and says the wrong thing. He’s really from a different time period. We set it in the future because we wanted to give it the gadgets and get it out of today. So we take this man out of time. The general focus of Tex is this: I’m this guy who’s got these problems, who tries to date women but has a hard time with it, and ends up dating the wrong women. If someone good actually likes Tex, well, he figures there must be something wrong with them.

But now, to take Tex down three different paths… this is very interesting.

— Chris Jones in 1995, speaking about plans for The Pandora Directive

Under a Killing Moon, the first interactive full-motion-video film noir to feature the perpetually down-on-his-luck detective-out-of-time Tex Murphy, didn’t become one of that first tier of mid-1990s adventure games that sold over a million copies, captured mainstream headlines, and fomented widespread belief in a new era of interactive mainstream entertainment. It did become, however, a leading light of the second tier, selling almost half a million copies for its Salt Lake City-based developer and publisher Access Software over the course of the year after its release in late 1994. Such numbers were enough to establish Tex Murphy as something more than just a sideline to Links, Access’s enormously profitable series of golf simulations. Indeed, they made a compelling case for a sequel, especially in light of the fact that the second game ought to cost considerably less to make than the $5 million that had been invested in the first, what with the sequel being able to reuse an impressive game engine whose creation had eaten up a good chunk of that budget. The sequel was officially underway already by the beginning of 1995.

The masterminds of the project were once again Chris Jones and Aaron Conners — the former being the man who had invented the character of Tex Murphy and who still played him onscreen when not moonlighting as Access’s chief financial officer (or vice versa), the latter being the writer who had breathed new life into him for Under a Killing Moon. The sequel was to be an outlier in the novelty-driven world of game development, representing a creative and writerly evolution rather than a technological one. For the fact was that the free-roaming 3D adventuring engine used in Under a Killing Moon was still very nearly unique.

Conners concocted a new script, called The Pandora Directive, that was weightier and just plain bigger than what had come before; it was projected to require about half again as much time to play through. It took place in the same post-apocalyptic future and evinced the same Raymond Chandler-meets-Blade Runner aesthetic, but it also betrayed a marked new source of inspiration: the hit television series The X-Files, whose murky postmodern vision of sinister aliens and labyrinthine government conspiracies was creeping into more and more games during this era. Conners was forthright about its influence in interviews, revealing at the same time something of the endearingly gawky wholesomeness of The Pandora Directive‘s close-knit, largely Mormon developers, which sometimes sat a little awkwardly alongside the subject matter of their games. Watching The X-Files in secret, away from the prying eyes of spouses and children, was about as edgy as this bunch ever got in their personal lives.

Everyone else in the development team is a family man, and X-Files is a little heavy for the kids. So they all ask me to record it. I bring it in and we watch it during lunch. I really like the show. It’s been nice because we watch carefully to see what they do with music and lighting to portray a mood. Their production is closer to what we do than to a cinematic feature — tighter budget, working faster. So we found the show very informative.

If anything, The X-Files‘s influence on The Pandora Directive‘s plot is a little too on-the-nose. Like its television inspiration, the game revolves around the UFO that allegedly crashed in Roswell, New Mexico, in 1947, the wellspring of a thousand overlapping conspiracy theories in both real life and fiction. If the one in the game is ultimately less intricately confounding than its television counterpart, that is only, one senses, because Conners had less space to develop his mythology. It’s all complete nonsense, of course, but The Pandora Directive is hardly the only game to mine escapist fun from the overwrought fever dreams of the conspiracy theorists.

But Jones and Conners were as eager to experiment with the form of their game as its content. Like the vast majority of adventure games, Under a Killing Moon had draped only a thin skein of player agency over a plot whose broad beats were as fixed as those of any traditional novel or film. You could tinker with the logistical details, in other words — maybe do certain things in a different order from some other player — but the overall arc of the story remained fixed. Conners’s script for The Pandora Directive aimed to change that, at least partially. While your path to unraveling the conspiracy would remain mostly set in stone, you would be able to determine Tex’s moral arc, if you will. If you played him as a paragon of virtue, he might just win true love and find himself on the threshold of an altogether better life by the end of the game. Cut just a few ethical corners here and there, and Tex would finish the game more or less where he’d always been, just about managing to keep his head above water and make ends meet, in a financial and ethical sense. But play him as a complete jerk, and he’d wind up dissipated and alone. Chris Jones on the bad path, which was clearly a challenge for this particular group of people to implement:

It’s a gradual fall. Bit by bit. Bad decision here, another there. As Tex sees it and the surroundings begin to change, he realizes that he blew it. His opportunity’s gone; this other girl is dead because of his mistake. And that darkens the character. So if each step gets you just a little darker, then it’s believable. It starts to have a real texture to it. That’s what we’re trying to do. Tex makes choices, tripping down the dark path, and starts to question himself: “Do I want to save myself? Or maybe this is what I want.” And then eventually you’re trapped. And that’s when it gets very interesting. We start to give Tex some options where the player will say, “Whoa! Can I make this choice?” By the end of the game, he turns into a real cynical bastard. If he chooses to stay on the darker side — each choice is just a shade of gray really, but all those shades of gray add up to a pretty dark character by the time you’re done. Just like in life.

I’m a bit uncomfortable about the way the [dark] path turns out. That was never my vision of what Tex could be. On the other hand, we have this medium which allows you to do such a thing. It is our competitive advantage over movies and television to be able to say to our audience, “Sit in this seat, make different decisions, and see how it turns out.” If we can pull it off with our characterizations and acting… well, now, that’s a very powerful medium. And so I feel like we have a responsibility to do that, to provide these kinds of choices. As I said, as an actor, I feel uncomfortable with this portrayal of Tex. But I feel it would shortchange people who buy the game to say, no, this is Tex, do it my way. If you’re kind of leaning down the dark path, take it and see what happens. You become the character. I’m in your hands.

In addition to the artistic impulse behind it, the more broad-brushed interactivity was intended to mitigate one of the most notable weaknesses of adventure games as a commercial proposition: the fact that they cost as much as or more than other types of games to buy, but, unlike them, were generally interesting to play through only once. Jones and Conners hoped that their players would want to experience their game two or three times, in order to explore the possibility space of Tex’s differing moral arcs.

They implemented a user-selectable difficulty level, another rarity in adventure games, for the same reason. The “Entertainment” level gives access to a hint system; the “Game Players” level removes that, whilst also removing some in-game nudges, adding some red herrings to throw you off the scent, making some of the puzzles more complex, and adding time limits here and there. Again, Jones and Conners imagined that many players would want to go through the story once at Entertainment level, then try to beat the game on Game Players.

But the most obvious way that Access raised the bar over Under a Killing Moon was the cast and crew that they hired for the cinematic cut scenes and dialogs that intersperse your first-person explorations as Tex Murphy. While the first game had employed such Hollywood actors as Margot Kidder, Brian Keith, and Russell Means, and even the voice of James Earl Jones as its narrator, it had done so only as an afterthought, once Jones and Conners had already built the spine of their game around local Salt Lake City worthies. This time they chose to invest a good part of the money they had saved from their tools budget into not just “real” actors but a real, professional director.

The official resumé which Access’s press releases provided for Adrian Carr, the man chosen for the latter role, was written on a curve typical of interactive movies, treating the five episodes he had directed of the cheesy children’s show Mighty Morphin Power Rangers, his most prominent American credit to date, like others might a prestigious feature film. “He has directed, written, and/or edited work in almost every genre, from features to documentaries, television drama to commercials,” Access wrote breathlessly. All kidding aside, Carr really was an experienced journeyman, who had directed two low-budget features in his native Australia and edited a number of films for Hollywood. He had never seriously played a computer game in his life, but that didn’t strike Jones and Conners as a major problem; they were confident that they had a handle on that side of the house. Carr was brought in not least because professional actors of the sort that he’d seen before on television or movie screens tended to intimidate Chris Jones, who’d directed Under a Killing Moon himself. He “didn’t know how to handle them exactly,” allowed Conners.

Whatever the initial impulse behind it, it proved to be a very smart move. Adrian Carr may not have been the film industry’s ideal of an auteur, but he was more than capable of giving The Pandora Directive a distinctive look that wasn’t just an artifact of the technology behind the production — a look which, once again, stemmed principally from The X-Files, from that television show’s way of portraying its shadowy conspiracies using an equally shadowy visual aesthetic. Carr:

We started lighting darker, and putting in Venetian blinds and shadows and reflections to create texture. And the poor people who render the backgrounds moaned, “But it’s so dark!” And I’d say, “But it’s a movie!”

This has been one of my contributions, I guess. The technicians have been learning about mood. Like when Tex comes home and the room is only lit from outside, or there’s just one lamp on — see, guys, the murkiness is actually good, it creates a certain texture for the mood that we want.

Gordon Fitzpatrick (Kevin McCarthy) and Tex Murphy (Chris Jones).

The cast as a whole remained a mixture of amateurs and professionals; those returning characters that had been played by locals in the previous game were still played by them in this one. Among these was Tex Murphy himself, played by Chris Jones, the man whom everyone agreed really was Tex in some existential sense; he probably wasn’t much of an actor in the abstract, probably would have been a disaster in any other role, but he was just perfect for this one. Likewise, Tex’s longtime crush Chelsee Bando and the other misfits that surround his office on Chandler Avenue all came back, making up in enthusiasm for what they lacked in acting-school credentials.

Chelsee Bando (Suzanne Barnes) is literally the girl next door; she runs a newspaper stand just outside of Tex’s office and apartment.

On the other hand, none of the professional actors make a return appearance. (I must admit that I sorely miss the dulcet tones of James Earl Jones.) The cast instead includes Tanya Roberts, who was one of Charlie’s Angels (in the show’s last season only), a Bond girl, and a Playboy centerfold during the earlier, more successful years of her career; here she plays Regan Madsen, the sultry femme fatale who may just be able to make Tex forget his unrequited love for Chelsee. Also present is Kevin McCarthy, who had been a Hollywood perennial with his name in every casting director’s Rolodex for almost half a century by the time this game was made, with his role in the 1956 B-movie classic Invasion of the Body Snatchers standing out as the one real star turn on his voluminous resumé; here he plays Dr. Gordon Fitzpatrick, the former Roswell scientist who draws Tex into the case. And then there’s John Agar, who first captured international headlines back in 1945, when he married the seventeen-year-old former child star Shirley Temple. He never got quite that much attention again, but he did put together another long and fruitful career as a Hollywood supporting player; he appears here as Thomas Malloy, another would-be Roswell whistleblower.

Tex Murphy and Regan Madsen (Tanya Roberts).

But the Pandora Directive actor that absolutely everyone remembers is Barry Corbin, in the role of Jackson Cross, the government heavy who is prepared to shut down any and all investigations into the goings-on at Roswell by any and all means necessary. Whereas McCarthy and Agar built careers out of being handsome but not overly memorable presences on the screen — a quality which served them well in their multitude of supporting roles, in which they were expected to be competent enough to fulfill their character’s purpose but never so brilliant as to overshadow the real stars — Corbin was and is a character actor of a different, delightfully idiosyncratic type, with a look, voice, and affect so singular that millions of viewers who have never learned his name nevertheless recognize him as soon as he appears on their screen: “Oh, it’s that guy again…” At the time of The Pandora Directive, he was just coming off his most longstanding and, to my mind anyway, defining role: that of the former astronaut Maurice Minnifield, town patriarch of Cicely, Alaska, in the weird and beautiful television show Northern Exposure. In Corbin’s capable hands, Maurice became a living interrogation of red-blooded American manhood of the stoic John Wayne stripe, neglecting neither its nobility nor its toxicity, its comedy nor its pathos.

Give Barry Corbin a great script, and he’ll knock the delivery out of the park (to choose a sports metaphor of which Maurice Minnifield would approve).

Although The Pandora Directive didn’t give Corbin an opportunity to embody a character of such well-nigh Shakespearean dimensions, it did give him a chance to have some fun. For unlike Maurice Minnifield, Jackson Cross is exactly what he appears to be on the surface: a villain’s villain of the first order. Corbin delighted in chewing up the scenery and spitting it in the face of the hapless Chris Jones — a.k.a. Tex Murphy. Jones, revealing that he wasn’t completely over his inferiority complex when it came to professional actors even after giving up the director’s job:

It’s already a little intimidating to work with people who are just consummate professionals. Then the first scene we shot together, Tex was supposed to be grilled by Barry’s character, Jackson Cross. I’m sitting in this chair, and he just came up and scared the hell out of me. Really, he looked through me and I just melted. Fortunately, that’s what my character was supposed to do. I truly felt like I was going to die if I didn’t answer him right. It was frightening.

The Pandora Directive couldn’t offer Corbin writing on the level of Northern Exposure at its best, but he clearly had fun with it anyway.

The Pandora Directive was released with high hopes all around on July 31, 1996, about 21 months after Under a Killing Moon. It shipped on no fewer than six CDs, two more than its predecessor, a fairly accurate gauge of its additional scope and playing time.

Alas, its arrival coincided with the year of reckoning for the interactive movie as a viable commercial proposition. Despite the improved production values and the prominent placement of Barry Corbin’s unmistakable mug on the box, The Pandora Directive sold only about a third as many copies as Under a Killing Moon. Instead of pointing the way toward a new generation of interactive mainstream entertainment, it was doomed to go down in media history as an oddball artifact that could only have been created within a tiny window of time in the mid-1990s. Much like Jane Jensen in the case of The Beast Within, Chris Jones and Aaron Conners were afforded exactly one opportunity in their careers to make an interactive movie on such a scale and with such unfettered freedom as this, before the realities of a changing games industry sharply limited their options once again.

Small wonder, then, that both still speak of The Pandora Directive in wistful tones today. Developed in an atmosphere of overweening optimism, it is and will probably always remain The One for them, the game that came closest to realizing their dreams for the medium, having been created at a time when a merger of Silicon Valley and Hollywood still seemed like a real possibility, glittering and beckoning just over the far horizon.

And how well does The Pandora Directive stand up today, divorced from its intended role as a lodestar for this future of media that never came to be? Pretty darn well for such an undeniable period piece, I would say, with only a few reservations. If I could only choose one of them, I think I would be forced to go with Under a Killing Moon over this game, just because The Pandora Directive can occasionally feel a bit smothered under the weight of its makers’ ambitions, at the expense of some of its predecessor’s campy fun. That said, it’s most definitely a close-run thing; this game too has a lot to recommend it.

Certainly there’s more than a whiff of camp about it as well. As the video clip just above amply attests, not even the talented actors in the cast were taking their roles overly seriously. In fact, just like Under a Killing Moon, this game leaves me in a bit of a pickle as a critic. I’ve dinged quite a few other games on this site for “lacking the courage of their convictions,” as I’ve tended to put it, for using comedy as a crutch, a fallback position when they can’t sustain their drama for reasons of acting, writing, or technology — or, most commonly, all three. I can’t in good faith absolve The Pandora Directive of that sin, any more than I can Under a Killing Moon. And yet it doesn’t irritate me here like it usually does. I think this is because there’s such a likeability to these Tex Murphy games. They positively radiate creative joy and generosity; one never doubts for a moment that they were made by nice people. And niceness is, as I’ve also written from time to time on this site, a very undervalued quality, in art as in life. The Tex Murphy games are just good company, the kind you’re happy to invite into your home. Playing them is like watching a piece of community theater put on by your favorite neighbors. You want them to succeed so badly that you end up willing them over the rough patches with the power of your imagination.

The archetypal Access Software story for me involves a Pandora Directive character named Archie Ellis, a hapless young UFO researcher who, in the original draft of the script, stepped where he didn’t belong and got himself killed in grisly fashion by Jackson Cross. Barry Corbin “just dominates that scene,” said Aaron Conners later. “It was like we let this evil essence into the studio.” Everyone was shocked by what had been unleashed: “The mood on the set was just so oppressive.” So, Conners scurried off to doctor the script, to give the player some way to save poor little Archie, feeling as he did that what he had just witnessed was simply too “traumatic” to leave as an inevitability. You can call this an abject failure on his part to stick to his dramatic guns, but it’s hard to dislike him or his game for it, any more than you can, say, make yourself dislike Steve Meretzky for bringing the lovable little robot Floyd back to life at the end of Planetfall, thereby undercutting what had been the most compelling demonstration to date of the power of games to move as well as entertain their players.

I won’t belabor the finer points of The Pandora Directive‘s gameplay and interface here because they don’t depart at all from Under a Killing Moon. The first-person 3D exploration, which lets you move freely about a space, looking up and down and peering into and over things, remains as welcome as ever; I would love it if more adventure games had been done in this style. And once again there are a bevy of set-piece puzzles to solve, from piecing torn-up notes back together to manipulating alien mechanisms. Nothing ever outstays its welcome. On the contrary, The Pandora Directive does a consistently great job of switching things up: cut scenes yield to explorations, set-piece visual puzzles yield to dialog menus. There are even action elements here, especially if you choose the Game Players mode; your furtive wanderings through the long-abandoned Roswell complex itself, dodging the malevolent alien entity who now lives there, are genuinely frightening. This 3D space and one or two others are far bigger than anything we saw in the last game, just as the puzzle chains have gotten longer and knottier. And yet there’s still nothing unfair in this game, even in Game Players mode; it’s eminently soluble if you pay attention to the details and apply yourself, and contains no hidden dead ends. Say it with me one more time: the folks who made this game were just too nice to mistreat their players in the way of so many other adventure games.

Exploring Tex’s bedroom in first-person 3D. His choice of wall art is… interesting.

I would like to write a few more words about the game’s one big formal innovation, letting the player determine Tex’s moral arc. Jones and Conners deserve a measure of credit for even attempting such a thing in the face of technological restrictions that militated emphatically against it. Live-action video clips filled a huge amount of space and cost a lot of money to produce, such that to offer a game with branching paths, thus leaving a good chunk of the content on the CDs unseen by many players, must have cut Chris Jones’s accountant’s heart to the quick. Points for effort, then.

As tends to be the case with many such experiments, however, I’m not sure how much it truly adds to the player’s final experience. One of the big problems here is the vagueness of the dialog choices you’re given. Rather than showing you exactly what Tex will say, the menus offer options like “insensitive but cheerful” or “pretend nothing’s wrong,” which are open to quite a range of overlapping interpretations. In not making Tex’s next line of dialog explicit, the designers were trying to solve another problem, that of the inherent anticlimax of clicking on a sentence and then listening to Tex dutifully parrot it back. Unfortunately, though, the two solutions conflict with one another. Far too often, you click an option thinking it means one thing, only to realize that Tex has taken it in a completely different way. This doesn’t have to happen very often before the vision of Tex the game is depicting has diverged in a big way from the one you’re trying to inhabit, making the whole exercise rather moot if not actively frustrating.

Aaron Conners himself admitted that “95 percent of the people who play will end up on the B path,” meaning the one where Tex doesn’t prove himself to be a paragon of virtue and thus doesn’t get his lady love Chelsee — not yet, anyway — but doesn’t fall into a complete moral abyss either. And indeed, this was the ending I saw after trying to play as a reasonably standup guy. (For what it’s worth, I didn’t succeed in saving poor Archie either.) Predictably enough, the Internet was filled within days of the game’s release with precise instructions on how to hack your way through the thicket of dialog choices to arrive at the best ending (or the worst one, for that matter). All of which is well and good — fans gotta fan, after all — but is nevertheless the polar opposite of the organic experience that Jones and Conners intended. Why bother adding stuff that only 5 percent of your players will see without reading the necessary choices from a walkthrough? The whole thing strikes me as an example — thankfully, a rare one — of Jones and Conners rather outsmarting themselves, delivering a feature which sounded better in interviews about the amazing potential of interactive movies than it works in practice.

Then again, the caveat and saving grace which ought to be attached to my complaints is that none of it really matters all that much when all is said and done; the B ending that most of us will see is arguably the truest to the spirit of the Tex Murphy character anyway. And today, of course, you can pick and choose in the dialogs in whatever way feels best to you, then go hunt down the alternate endings on YouTube if you like.

It may sound strange to say in relation to a game about sinister government conspiracies, set in a post-apocalyptic dystopia, but playing The Pandora Directive today feels like taking a trip back to a less troubled time. It’s just about the most 1990s thing ever, thanks not only to its passé use of video clips of real actors but to its X-Files-derived visual aesthetic, its subject matter (oh, for a time when the most popular conspiracy theories were harmless fantasies about aliens!), and even the presence of Barry Corbin, featured player in one of the decade’s iconic television programs. So, go play it, I say; go revel in its Mormon niceness. The post-millennial real world, more complicated and vexing than a thousand Roswell conspiracy theories, will still be here waiting for you when you return.

(Sources: the book The Pandora Directive: The Official Strategy Guide by Rick Barba; Electronic Entertainment of July 1995; Computer Gaming World of March 1996 and December 1996; Retro Gamer 160. Online sources include the now-defunct Unofficial Tex Murphy Web Site and a documentary film put out by Chris Jones and Aaron Conners in recent years.

The Pandora Directive is available from GOG.com as a digital purchase.)

 
