
A Tale of the Mirror World, Part 1: Calculators and Cybernetics

Back in my younger days, when the thought of sleeping for nights on end in campground tents and hostel cots awakened a spirit of adventure instead of a premonition of an aching back, I used to save up my vacation time and undertake a big backpacker-style journey every summer. In 2002, this habit took me to Russia.

I must confess that I found St. Petersburg and Moscow a bit of a disappointment. They just struck me as generic big cities of the sort that I’d seen plenty of in my life. While I’m sure they have their unique qualities, much of what I saw there didn’t look all that distinct from what one could expect to see in any of dozens of major European cities. What I was looking for was the Russia — or, better said, the Soviet Union — of my youth, that semi-mythical Mirror World of fascination and nightmare.

I could feel myself coming closer to my goal as soon as I quit Moscow to board the Trans-Siberian Railroad for the long, long journey to Vladivostok. As everyone who lived in Siberia was all too happy to tell me, I was now experiencing the real Russia. In the city of Ulan-Ude, closed to all outsiders until 1991, I found the existential goal I hadn’t consciously known I’d been seeking. From the central square of Ulan-Ude, surrounded on three sides by government offices still bearing faded hammers and sickles on their facades, glowered a massive bust of Vladimir Lenin. I’d later learn that at a weight of 42 tons the bust was the largest such ever built in the Soviet Union, and that it had been constructed in 1971 as one of the last gasps of the old tradition of Stalinist monumentalism. But the numbers didn’t matter on that scorching-hot summer day when I stood in that square, gazing up in awe. In all my earlier travels, I’d never seen a sight so alien to me. This was it, my personal Ground Zero of the Mirror World, where all the values in which I’d been indoctrinated as a kid growing up deep in the heart of Texas were flipped. Lenin was the greatest hero the world had ever known, the United States the nation of imperialist oppression… it was all so wrong, and because of that it was all so right. I’ve never felt so far from home as I did on that day — and this feeling, of course, was exactly the reason I’d come.

I’m a child of the 1980s, the last decade during which the Soviet Union was an extant power in the world. The fascination which I still felt so keenly in 2002 had been a marked feature of my childhood. Nothing, after all, gives rise to more fascination than telling people that something is forbidden to them, as the Kremlin did by closing off their country from the world. Certainly I wasn’t alone in jumping after any glimpse I could get behind the Iron Curtain.

Thus the bleakly alluring version of Moscow found in Martin Cruz Smith’s otherwise workmanlike crime novel Gorky Park turned it into a bestseller, and then a hit film a couple of years later. (I remember the film well because it was the first R-rated movie my parents ever allowed me to see; I remember being intrigued and a little confused by my first glimpse of bare breasts on film — as if the glimpse behind the Iron Curtain wasn’t attraction enough!) And when David Willis, an American journalist who had lived several years in Moscow, purported to tell his countrymen “how Russians really live” in a book called Klass, it too became a bestseller. Even such a strident American patriot as Tom Clancy could understand the temptation of the Mirror World. In Red Storm Rising, his novel of World War III, straitlaced intelligence officer Robert Toland gets a little too caught up in the classic films of Sergei Eisenstein.

The worst part of the drive home was the traffic to the Hampton Roads tunnel, after which things settled down to the usual superhighway ratrace. All the way home, Toland’s mind kept going over the scenes from Eisenstein’s movie. The one that kept coming back was the most horrible of all, a German knight wearing a crusader’s cross tearing a Pskov infant from his mother’s breast and throwing him — her? — into a fire. Who could see that and not be enraged? No wonder the rabble-rousing song “Arise, you Russian People” had been a genuinely popular favorite for years. Some scenes cried out for bloody revenge, the theme for which was Prokofiev’s fiery call to arms. Soon he found himself humming the song. A real intelligence officer you are … Toland smiled to himself, thinking just like the people you’re supposed to study … defend our fair native land … za nashu zyemlyu chestnuyu!

“Excuse me, sir?” the toll collector asked.

Toland shook his head. Had he been singing aloud? He handed over the seventy-five cents with a sheepish grin. What would this lady think, an American naval officer singing in Russian?

Those involved with computers were likewise drawn to the Mirror World. When Byte magazine ran a modest piece buried hundreds of pages deep in their November 1984 issue on a Soviet personal computer showing the clear “influence” of the Apple II, it became the second most popular article in the issue according to the magazine’s surveys. Unsurprisingly in light of that reception, similar tantalizing glimpses behind the Iron Curtain became a regular part of the magazine from that point forward. According to the best estimates of the experts, the Soviets remained a solid three years behind the United States in their top-end chip-fabrication capabilities, and much further behind than that in their ability to mass-produce dependable computers that could be sold for a reasonable price. If the rudimentary Soviet computers Byte described had come from anywhere else, in other words, no one would have glanced at them twice. Yet the fact that they came from the Mirror World gave them the attraction that clung to all glimpses into that fabled land. For jaded veterans grown bored with an American computer industry that was converging inexorably from the Wild West that had been its early days toward a few standard, well-defined — read, boring — platforms, Soviet computers were the ultimate exotica.

Before the end of the 1980s, an odd little game of falling blocks would ride this tidal wave of Soviet chic to become by some measures the most popular videogame of all time. An aura of inscrutable otherness clung to Tetris, which the game’s various publishers — its publication history is one of the most confusing in the history of videogames — were smart enough to tie in with the sense of otherness that surrounded the entirety of the Soviet Union, the game’s unlikely country of origin, in so many Western minds. Spectrum Holobyte, the most prominent publisher of the game on computers, wrote the name in Cyrillic script on the box front, subtitled it “the Soviet Challenge,” and commissioned background graphics showing iconic — at least to Western eyes — Soviet imagery, from Cosmonauts in space to the “Red Machine” hockey team on the ice. As usual, Nintendo cut more to the chase with their staggeringly successful Game Boy version: “From Russia with Fun!”

Tetris mania was at its peak as the 1990s began. The walls were coming down between West and East, both figuratively and literally, thanks to Mikhail Gorbachev’s impossibly brave choice to let his empire go — peacefully. Western eyes peered eagerly eastward, motivated now not only by innocent if burning curiosity but by the possibilities for tapping those heretofore untapped markets. Having reached this very point here in this blog’s overarching history of interactive entertainment and matters related, let’s hit pause long enough to join those first Western discoverers now in exploring the real story of computing in the Mirror World.


 

In the very early days of computing, before computer science was a recognized discipline in which you could get a university degree, the most important thinkers in the nascent field tended to be mathematicians. It was, for instance, the British mathematician Alan Turing who laid much of the groundwork for modern computer science in the 1930s, then went on to give many of his theories practical expression as part of the Allied code-breaking effort that did so much to win World War II. And it was the mathematics department of Cambridge University that built the EDSAC in 1949, the first practical stored-program computer in the sense that we understand the term today.

The strong interconnection between mathematics and early work with computers should have left the Soviet Union as well-equipped for the dawning age as any nation. Russia had a long, proud tradition of mathematical innovation, dating back through centuries of Czarist rule. The list of major Russian mathematicians included figures like Nikolai Lobachevsky, the pioneer of non-Euclidean geometry, and Sofia Kovalevskaya, who developed equations for the rotation of a solid body around a fixed point. Even Joseph Stalin’s brutal purges of the 1930s, which strove to expunge anyone with the intellectual capacity to articulate a challenge to his rule, failed to kill the Russian mathematical tradition. On the contrary, Leonid Kantorovich in 1939 discovered the technique of linear programming ten years before American mathematicians would do the same, while Andrey Kolmogorov did much fundamental work in probability theory and neural-network modeling over a long career that spanned from the 1920s through the 1980s. Indeed, in the decades following Stalin’s death, Soviet mathematicians in general would continue to solve fundamental problems of theory. And Soviet chess players — the linkage between mathematics and chess is almost as pronounced in history as that between mathematics and computers — would remain the best in the world, at least if the results of international competitions were any guide.

But, ironically in light of all this, it would be an electrical engineer named Sergei Alexeevich Lebedev rather than a mathematician who would pioneer Soviet computing. Lebedev was 46 years old in 1948 when he was transferred from his cushy position at the Lenin State Electrical Institute in Moscow to the relative backwater of Kiev, where he was to take over as head of the Ukrainian Academy of Sciences’ Electrotechnical Institute. There, free from the scrutiny of Moscow bureaucrats who neither understood nor wanted to understand the importance of the latest news of computing coming out of Britain and the United States, Lebedev put together a small team to build a Small Electronic Calculating Machine; in Russian its acronym was MESM. Unlike the team of scientists and engineers who detonated the Soviet Union’s first atomic bomb in 1949, Lebedev developed the MESM without the assistance of espionage; he had access to the published papers of figures like Alan Turing and the Hungarian émigré mathematician John von Neumann, but no access to schematics or inside information about the machines on which they were working.

Lebedev had to build the MESM on a shoestring. Just acquiring the vacuum tubes and magnetic drums he needed in a backwater city of a war-devastated country was a major feat in itself, one that called for the skills of a junk trader as much as it did those of an electrical engineer. Seymour Goodman, one of the more notable historians of Soviet computing, states that “perhaps the most incredible aspect of the MESM was that it was successfully built at all. No electronic computer was ever built under more difficult conditions.” When it powered up for the first time in 1951, the MESM was not only the first stored-program computer in the Soviet Union but the first anywhere in continental Europe, trailing Britain by just two years and the United States by just one — a remarkable achievement by any standard.

Having already shown quite a diverse skill set in getting the MESM made at all, Lebedev proved still more flexible after it was up and running. He became the best advocate for computing inside the Soviet Union, a sort of titan of industry in a country that officially had no room for such figures. Goodman credits him with playing the role that a CEO would have played in the West. He even managed to get a script written for a documentary film to “advertise” his computer’s capabilities throughout the Soviet bureaucracy. In the end, the film never got made, but then it really wasn’t needed. The Soviet space and nuclear-weapons programs, not to mention the conventional military, all had huge need of the fast calculations the MESM could provide. At the time, the nuclear-weapons program was using what they referred to as calculator “brigades,” consisting of 100 or more human calculators, mostly young women, who worked eight-hour shifts with mechanical devices to crank out solutions to hugely complicated equations. Already by 1950, an internal report had revealed that the chief obstacle facing Soviet nuclear scientists wasn’t the theoretical physics involved but rather an inability to do the math necessary to bring theory to life fast enough.

Within months of his machine going online, Lebedev was called back to Moscow to become the leader of the Institute for Precision Mechanics and Computing Technology — or ITMVT in the Russian acronym — of the Soviet Academy of Sciences. There Lebedev proceeded to develop a series of machines known as the BESM line, which, unlike the one-off MESM, were suitable for — relatively speaking — production in quantity.

But Lebedev soon had rivals. Contrary to the image the Kremlin liked to project of a unified front — of comrades in communism all moving harmoniously toward the same set of goals — the planned economy of the Soviet Union was riddled with as much in-fighting as any other large bureaucracy. “Despite its totalitarian character,” notes historian Nikolai Krementsov, “the Soviet state had a very complex internal structure, and the numerous agents and agencies involved in the state science-policy apparatus pursued their own, often conflicting policies.” Thus very shortly after the MESM became operational, the second computer to be built in the Soviet Union (and continental Europe as well), a machine called the M-1 which had been designed by one Isaak Semyenovich Bruk, went online. If Lebedev’s achievement in building the MESM was remarkable, Bruk’s achievement in building the M-1, again without access to foreign espionage — or for that matter the jealously guarded secrets of Lebedev’s rival team — was equally so. But Bruk lacked Lebedev’s political skills, and thus his machine proved a singular achievement rather than the basis for a line of computers.

A much more dangerous rival was a computer called Strela, or “Arrow,” the brainchild of one Yuri Yakovlevich Bazilevskii in the Special Design Bureau 245 — abbreviated SKB-245 in Russian — of the Ministry of Machine and Instrument Construction in Moscow. The BESM and Strela projects, funded by vying factions within the Politburo, spent several years in competition with one another, each project straining to monopolize scarce components, both for its own use and, just as importantly, to keep them out of the hands of its rival. It was a high-stakes war that was fought in deadly earnest, and its fallout could be huge. When, for instance, the Strela people managed to buy up the country’s entire supply of cathode-ray tubes for use as memory, the BESM people were forced to use less efficient and less reliable mercury delay lines instead. As anecdotes like this attest, Bazilevskii was every bit Lebedev’s equal at the cutthroat game of bureaucratic politicking, even managing to secure from his backers the coveted title of Hero of Socialist Labor a couple of years before Lebedev.

The Strela computer. Although it’s hard to see it here, it was described by its visitors as a “beautiful machine in a beautiful hall,” with hundreds of lights blinking away in impressive fashion. Many bureaucrats likely chose to support the Strela simply because it looked so much like the ideal of high technology in the popular imagination of the 1950s.

During its first official trial in the spring of 1954, the Strela solved in ten hours a series of equations that would have taken a single human calculator about 100,000 days. And the Strela was designed to be a truly mass-produced computer, to be cranked out in the thousands in identical form from factories. But, as so often happened in the Soviet Union, the reality behind the statistics which Pravda trumpeted so uncritically was somewhat less flattering. The Strela “worked very badly” according to one internal report; according to another it “very often failed and did not work properly.” Pushed by scientists and engineers who needed a reliable computer in order to get things done, the government decided in the end to go ahead with the BESM instead of the Strela. Ironically, only seven examples of the first Soviet computer designed for true mass-production were ever actually produced. Sergei Lebedev was now unchallenged as the preeminent voice in Soviet computing, a distinction he would enjoy until his death in 1974.

The first BESM computer. It didn’t look as nice as the Strela, but it would prove far more capable and reliable.

Like so much other Soviet technology, Soviet computers were developed in secrecy, far from the prying eyes of the West. In December of 1955, a handful of American executives and a few journalists on a junket to the Soviet Union became the first to see a Soviet computer in person. A report of the visit appeared in the New York Times of December 11, 1955. It helpfully describes an early BESM computer as an “electronic brain” — the word “computer” was still very new in the popular lexicon — and pronounces it equal to the best American models of same. In truth, the American delegation had fallen for a bit of a dog-and-pony show. Soviet computers were already lagging well behind the American models that were now being churned out in quantities Lebedev could only dream of by companies like IBM.

Sergei Lebedev’s ITMVT. (Sorry for the atrocious quality of these images. Clear pictures of the Mirror World of the 1950s are hard to come by.)

In May of 1959, during one of the periodic thaws between West and East, a delegation of seven American computer experts from business and government was invited to spend two weeks visiting most of the important hubs of computing research in the Soviet Union. They were met at the airport in Moscow by Lebedev himself; the Soviets were every bit as curious about the work of their American guests as said Americans were about theirs. The two most important research centers of all, the American delegation learned, were Lebedev’s ITMVT and the newer Moscow Computing Center of the Soviet Academy of Sciences, which was coming to play a role in software similar to that which the ITMVT played in hardware. The report prepared by the delegation is fascinating for the generalized glimpses it provides into the Soviet Mirror World of the 1950s as much as it is for the technical details it includes. Here, for instance, is its description of the ITMVT’s physical home:

The building itself is reminiscent more of an academic building than an industrial building. It is equipped with the usual offices and laboratory facilities as well as a large lecture hall. Within an office the decor tends to be ornate; the entrance door is frequently padded on both sides with what appeared to be leather, and heavy drapery is usually hung across the doorway and at the windows. The ceiling height was somewhat higher than that of contemporary American construction, but we felt in general that working conditions in the offices and in the laboratories were good. There appeared to be an adequate amount of room and the workers were comfortably supplied with material and equipment. The building was constructed in 1951. Many things testified to the steady and heavy usage it has received. In Russian tradition, the floor is parqueted and of unfinished oak. As in nearly every building, there are two sets of windows for weather protection.

The Moscow Computing Center

And here’s how a Soviet programmer had to work:

Programmers from the outside who come to the [Moscow] Computing Center with a problem apply to the scientific secretary of the Computing Center. He assigns someone from the Computing Center to provide any assistance needed by the outside programmer. In general an operator is provided for each machine, and only programmers with specific permission can operate the machine personally. Normally a programmer can expect only one code check pass per day at a machine; with a very high priority he might get two or three passes.

A programmer is required to submit his manuscript in ink. Examples of manuscripts which we saw indicated that often a manuscript is written in pencil until it is thought to be correct, and then redone in ink. The manuscript is then key-punched twice, and the two decks compared, before being sent to the machine. The output cards are handled on an off-line printer.

Other sections describe the Soviet higher-education system (“Every student is required to take 11 terms of ideological subjects such as Marxism-Leninism, dialectical materialism, history of the Communist Party, political economy, and economics.”); the roles of the various Academies of Sciences (“The All Union Academy of Sciences of the USSR and the 15 Republican Academies of Sciences play a dominant role in the scientific life of the Soviet Union.”); the economics of daily life (“In evaluating typical Russian salaries it must be remembered that the highest income tax in the Soviet Union is 13 percent and that all other taxes are indirect.”); the resources being poured into the new scientific and industrial center of Novosibirsk (“It is a general belief in Russia that the future of the Soviet Union is closely allied with the development of the immense and largely unexplored natural resources of Siberia.”).

But of course there are also plenty of pages devoted to technical discussion. What’s most surprising about these is the lack of the hysteria that had become so typical of Western reports of Soviet technology in the wake of the Sputnik satellite of 1957 and the beginning of the Space Race which it heralded. It was left to a journalist from the New York Times to ask the delegation upon their return the money question: who was really ahead in the field of computers? Willis Ware, a member of the delegation from the Rand Corporation and the primary architect of the final report, replied that the Soviet Union had “a wealth of theoretical knowledge in the field,” but “we didn’t see any hardware that we don’t have here.” Americans had little cause to worry; whatever their capabilities in the fields of aerospace engineering and nuclear-weapons delivery, it was more than clear that the Soviets weren’t likely to rival even IBM alone, much less the American computer industry as a whole, anytime soon. With that worry dispensed with, the American delegation had felt free just to talk shop with their Soviet counterparts in what would prove the greatest meeting of Eastern and Western computing minds prior to the Gorbachev era. The Soviets responded in kind; the visit proved remarkably open and friendly.

One interesting fact gleaned by the Americans during their visit was that, in addition to all the differences born of geography and economy, the research into computers conducted in the East and the West had also heretofore had markedly different theoretical scopes. For all that so much early Western research had been funded by the military for such plebeian tasks as code-breaking and the calculation of artillery trajectories, and for all that so much of that research had been conducted by mathematicians, the potential of computers to change the world had always been understood by the West’s foremost visionaries as encompassing far more than a faster way to do complex calculations. Alan Turing, for example, had first proposed his famous Turing Test of artificial intelligence all the way back in 1950.

But in the Soviet Union, where the utilitarian philosophy of dialectical materialism was the order of the day, such humanistic lines of research were, to say the least, not encouraged. Those involved with Soviet computing had to be, as they themselves would later put it, “cautious” about the work they did and the way they described that work to their superiors. The official view of computers in the Soviet Union during the early and mid-1950s hewed to the most literal definition of the word: they were electronic replacements for those brigades of human calculators cranking out solutions to equations all day long. Computers were, in other words, merely a labor-saving device, not a revolution in the offing; being a state founded on the all-encompassing ideology of communist revolution, the Soviet Union had no use for other, ancillary revolutions. Even when Soviet researchers were allowed to stray outside the realm of pure mathematics, their work was always expected to deliver concrete results that served very practical goals in fairly short order. For example, considerable effort was put into a program for automatically translating texts between languages, thereby to better bind together the diverse peoples of the sprawling Soviet empire and its various vassal states. (Although the translation program was given a prominent place in that first 1955 New York Times report about the Soviets’ “electronic brain,” one has to suspect that, given how difficult a task automated translation is even with modern computers, it never amounted to much more than a showpiece for use under carefully controlled conditions.)

And yet even by the time the American delegation arrived in 1959 all of that was beginning to change, thanks to one of the odder ideological alliances in the history of the twentieth century. In a new spirit of relative openness that was being fostered by Khrushchev, the Soviet intelligentsia was becoming more and more enamored with the ideas of an American named Norbert Wiener, yet another of those wide-ranging mathematicians who were doing so much to shape the future. In 1948, Wiener had described a discipline he called “cybernetics” in a book of the same name. The book bore the less-than-enticing subtitle Control and Communication in the Animal and the Machine, making it sound rather like an engineering text. But if it was engineering Wiener was practicing, it was social engineering, as became more clear in 1950, when he repackaged his ideas into a more accessible book with the title The Human Use of Human Beings.

Coming some 35 years before William Gibson and his coining of the term “cyberspace,” Norbert Wiener marks the true origin point of our modern mania for all things “cyber.” That said, his ideas haven’t been in fashion for many years, a fact which might lead us to dismiss them from our post-millennial perch as just another musty artifact of the twentieth century and move on. In actuality, though, Wiener is well worth revisiting, and with an eye to more than dubious linguistic trends. Cybernetics as a philosophy may be out of fashion, but cybernetics as a reality is with us a little more every day. And, most pertinently for our purposes today, we need to understand a bit of what Wiener was on about if we hope to understand what drove much of Soviet computing for much of its existence.

“Cybernetics” is one of those terms which can seem to have as many definitions as definers. It’s perhaps best described as the use of machines not just to perform labor but to direct labor. Wiener makes much of the increasing numbers of machines even in his time which incorporated a feedback loop — machines, in other words, that were capable of accepting input from the world around them and responding to that input in an autonomous way. An example of such a feedback loop can be something as simple as an automatic door which opens when it senses people ready to step through it, or as complex as the central computer in charge of all of the functions of an automated factory.
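
To make the feedback loop a bit more concrete, here is a minimal sketch of the automatic-door example in Python. It is purely illustrative; the sensor, the timings, and the door itself are all hypothetical stand-ins, but the shape of the loop is the point: sense the world, decide, act, then sense the results of that action.

# A toy feedback loop in the spirit of Wiener's automatic door.
# Everything here (the sensor, the door, the timings) is a hypothetical stand-in.

import random
import time

def read_sensor():
    # Stand-in for a pressure mat or photocell: is someone waiting at the door?
    return random.random() < 0.3

class Door:
    def __init__(self):
        self.is_open = False

    def actuate(self, should_open):
        # Only report a change of state, as a real motor would only move on one.
        if should_open != self.is_open:
            self.is_open = should_open
            print("door opens" if should_open else "door closes")

def control_loop(door, cycles=20):
    quiet_ticks = 0
    for _ in range(cycles):
        person_detected = read_sensor()   # input from the surrounding world
        if person_detected:
            quiet_ticks = 0
            door.actuate(True)            # autonomous response to that input
        else:
            quiet_ticks += 1
            if quiet_ticks >= 3:          # close again after a quiet interval
                door.actuate(False)
        time.sleep(0.1)                   # the loop simply runs, over and over

control_loop(Door())

Swap the door for a furnace and the sensor for a thermostat, or for the instruments of an automated steel mill, and the structure stays the same; that structural sameness is precisely what made cybernetics feel so universal to its enthusiasts.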

At first blush, the idea of giving computers autonomous control over the levers of power inevitably conjures up all sorts of dystopian visions. Yet Wiener himself was anything but a fan of totalitarian or collectivist governments. Invoking in The Human Use of Human Beings the popular metaphor of the collectivist society as an ant colony, he goes on to explore the many ways in which humans and ants are in fact — ideally, at any rate — dissimilar, thus seemingly exploding the “from each according to his ability, to each according to his need” founding principle of communism.

In the ant community, each worker performs its proper functions. There may be a separate caste of soldiers. Certain highly specialized individuals perform the functions of king and queen. If man were to adopt this community as a pattern, he would live in a fascist state, in which ideally each individual is conditioned from birth for his proper occupation: in which rulers are perpetually rulers, soldiers perpetually soldiers, the peasant is never more than a peasant, and the worker is doomed to be a worker.

This aspiration of the fascist for a human state based on the model of the ant results from a profound misapprehension both of the nature of the ant and of the nature of man. I wish to point out that the very physical development of the insect conditions it to be an essentially stupid and unlearning individual, cast in a mold which cannot be modified to any great extent. I also wish to show how these physiological conditions make it into a cheap mass-produced article, of no more individual value than a paper pie plate to be thrown away after it is used. On the other hand, I wish to show that the human individual, capable of vast learning and study, which may occupy almost half his life, is physically equipped, as the ant is not, for this capacity. Variety and possibility are inherent in the human sensorium — and are indeed key to man’s most noble flights — because variety and possibility belong to the very structure of the human organism.

While it is possible to throw away this enormous advantage that we have over the ants, and to organize the fascist ant-state with human material, I certainly believe that this is a degradation of man’s very nature, and economically a waste of the great human values which man possesses.

I am afraid that I am convinced that a community of human beings is a far more useful thing than a community of ants, and that if the human being is condemned and restricted to perform the same functions over and over again, he will not even be a good ant, not to mention a good human being. Those who would organize us according to personal individual functions and permanent individual restrictions condemn the human race to move at much less than half-steam. They throw away nearly all our human possibilities and, by limiting the modes in which we may adapt ourselves to future contingencies, they reduce our chances for a reasonably long existence on this earth.

Wiener’s vision departs markedly from the notion, popular already in science fiction by the time he wrote those words, of computers as evil overlords. In Wiener’s cybernetics, computers will not enslave people but give them freedom; the computers’ “slaves” will themselves be machines. Together computers and the machines they control will take care of all the boring stuff, as it were, allowing people to devote themselves to higher purposes. Wiener welcomes the “automatic age” he sees on the horizon, even as he is far from unaware of the disruptions the period of transition will bring.

What can we expect of its economic and social consequences? In the first place, we can expect an abrupt and final cessation of the demand for the type of factory labor performing purely repetitive tasks. In the long run, the deadly uninteresting nature of the repetitive task may make this a good thing and the source of leisure necessary for man’s full cultural development.

Be that as it may, the intermediate period of the introduction of the new means will lead to an immediate transitional period of disastrous confusion.

In terms of cybernetics, we’re still in this transitional period today, with huge numbers of workers accustomed to “purely repetitive tasks” cast adrift in this dawning automatic age; this explains much about recent political developments over much of the world. But of course our main interest right now isn’t contemporary politics, but rather how a fellow who so explicitly condemned the collectivist state came to be regarded as something of a minor prophet by the Soviet bureaucracy.

Wiener’s eventual acceptance in the Soviet Union is made all the more surprising by the Communist Party’s first reaction to cybernetics. In 1954, a year after Stalin’s death, the Party’s official Brief Philosophical Dictionary still called cybernetics “a reactionary pseudo-science originating in the USA after World War II and spreading widely in other capitalistic countries as well.” It was “in essence aimed against materialistic dialectics” and “against the scientific Marxist understanding of the laws of societal life.” Seemingly plucking words at random from a grab bag of adjectives, the dictionary concluded that “this mechanistic, metaphysical pseudo-science coexists very well with idealism in philosophy, psychology, and sociology” — the word “idealism” being a kiss of death under Soviet dogma.

In 1960, six years after the Soviets condemned cybernetics as an “attempt to transform toilers into mere appendices of the machine, into a tool of production and war,” Norbert Wiener lectures the Leningrad Mathematical Society. A colleague who visited the Soviet Union at the same time said that Wiener was “wined and dined everywhere, even in the privacy of the homes of the Russian scientists.” He died four years later, just as the influence of cybernetics was reaching a peak in the Soviet Union.

Still, when stripped of its more idealistic, humanistic attributes, there was much about cybernetics which held immense natural appeal for Soviet bureaucrats. Throughout its existence, the Soviet Union’s economy had been guided, albeit imperfectly at best, by an endless number of “five-year plans” that attempted to control its every detail. Given this obsession with economic command and control and the dispiriting results it had so far produced, the prospect of information-management systems — namely, computers — capable of aiding decision-making, or perhaps even in time of making the decisions, was a difficult enticement to resist; never mind how deeply antithetical the idea of computerized overlords making the decisions for human laborers was to Norbert Wiener’s original conception of cybernetics. Thus cybernetics went from being a banned bourgeois philosophy during the final years of Stalin’s reign to being a favorite buzzword during the middle years of Khrushchev’s. In December of 1957, the Soviet Academy of Sciences declared their new official position to be that “the use of computers for statistics and planning must have an absolutely exceptional significance in terms of its efficiency. In most cases, such use would make it possible to increase the speed of decision-making by hundreds of times and avoid errors that are currently produced by the unwieldy bureaucratic apparatus involved in these activities.”

In October of 1961, the new Cybernetics Council of the same body published an official guide called Cybernetics in the Service of Communism — essentially Norbert Wiener with the idealism and humanism filed off. Khrushchev may have introduced a modicum of cultural freedom to the Soviet Union, but at heart he was still a staunch collectivist, as he made clear:

In our time, what is needed is clarity, ideal coordination, and organization of all links in the social system both in material production and in spiritual life.

Maybe you think there will be absolute freedom under communism? Those who think so don’t understand what communism is. Communism is an orderly, organized society. In that society, production will be organized on the basis of automation, cybernetics, and assembly lines. If a single screw is not working properly, the entire mechanism will grind to a halt.

Soviet ambitions for cybernetics were huge, and in different circumstances might have led to a Soviet ARPANET going online years before the American version. It was envisioned that each factory and other center of production in the country would be controlled by its own computer, and that each of these computers would in turn be linked together into “complexes” reporting to other computers, all of which would send their data yet further up the chain, culminating in a single “unified automated management system” directing the entire economy. The system would encompass tens of thousands of computers, spanning the width and breadth of the largest country in the world, “from the Pacific to the Carpathian foothills,” as academician Sergei Sobolev put it. Some more wide-eyed prognosticators said that in time the computerized cybernetic society might allow the government to eliminate money from the economy entirely, long a cherished dream of communism. “The creation of an automated management system,” wrote proponent Anatolii Kitov, “would mean a revolutionary leap in the development of our country and would ensure a complete victory of socialism over capitalism.” With growth in the Soviet Union’s industrial output slowing every year between 1959 and 1964 while the equivalent Western figures skyrocketed, socialism needed all the help it could get.
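
To give a sense of the shape of what was being proposed, here is a tiny illustrative sketch in Python. It is emphatically not the actual Soviet design, which never got beyond paper; the class names, the factories, and the tonnage figures are all invented, but the idea of reports rolling up from factories to regional complexes to a single national node is the one described above.

# An illustrative sketch (not the actual Soviet design) of the proposed hierarchy:
# factories report to regional "complexes," and the complexes report in turn to a
# single national management system that sees only the rolled-up totals.

class Factory:
    def __init__(self, name, output_tons):
        self.name = name
        self.output_tons = output_tons

class Complex:
    # A regional complex aggregates the figures of the factories beneath it.
    def __init__(self, name, factories):
        self.name = name
        self.factories = factories

    def report(self):
        return sum(f.output_tons for f in self.factories)

class UnifiedSystem:
    # The top of the chain works only from the regional reports passed up to it.
    def __init__(self, complexes):
        self.complexes = complexes

    def national_total(self):
        return sum(c.report() for c in self.complexes)

# Invented example data, for illustration only.
kiev = Complex("Kiev", [Factory("Steel Plant No. 1", 120), Factory("Steel Plant No. 2", 95)])
urals = Complex("Urals", [Factory("Instrument Works", 40)])
print(UnifiedSystem([kiev, urals]).national_total())   # prints 255

Multiply the toy numbers by tens of thousands of machines and every sector of the economy, and you have the scale of what the planners were envisioning.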

In May of 1962, in an experiment trumpeted as the first concrete step toward socialism’s glorious cybernetic future, a computer located in Kiev poured steel in a factory located hundreds of kilometers away in Dniprodzerzhynsk (known today as Kamianske). A newspaper reporter was inspired to wax poetic:

In ancient Greece the man who steered ships was called Kybernetes. This steersman, whose name is given to one of the boldest sciences of the present — cybernetics — lives on in our own time. He steers the spaceships and governs the atomic installations, he takes part in working out the most complicated projects, he helps to heal humans and to decipher the writings of ancient peoples. As of today he has become an experienced metallurgist.

Some of the Soviets’ cybernetic thinking was even more astonishing than their plans for binding the country in a web of telecommunications long before “telecommunications” was a word in popular use. Driverless cars and locomotives were seriously discussed, and experiments with the latter were conducted in the Moscow subway system. (“Experiments on the ‘auto-pilot’ are being concluded. This device, provided with a program for guiding a train, automatically decreases and increases speed at corresponding points along its route, continually selecting the most advantageous speed, and stops the train at the required points.”) Serious attention was given to a question that still preoccupies futurists today: that of the role of human beings in a future of widespread artificially intelligent computers. The mathematician Kolmogorov wrote frankly that such computers could and inevitably would “surpass man in his development” in the course of time, and even described a tipping point that we still regard as seminal today: the point when artificial intelligence begins to “breed,” to create its own progeny without the aid of humans. At least some within the Soviet bureaucracy seemed to welcome humanity’s new masters; proposals were batted around to someday replace human teachers and doctors with computers. Sergei Sobolev wrote that “in my view the cybernetic machines are people of the future. These people will probably be much more accomplished than we, the present people.” Soviet thinking had come a long way indeed from the old conception of computers as nothing more than giant calculators.

But the Soviet Union was stuck in a Catch-22 situation: the cybernetic command-and-control network its economy supposedly needed in order to spring to life was made impossible to build by said economy’s current moribund state. Some skeptical planners drew pointed comparisons to the history of another sprawling land: Egypt. While the Pharaohs of ancient Egypt had managed to build the Pyramids, the cybernetics skeptics noted, legend held that they’d neglected everything else so much in the process that a once-fertile land had become a desert. Did it really make sense to be thinking already about building a computer network to span the nation when 40 percent of villages didn’t yet boast a single telephone within their borders? By the same token, perhaps the government should strive for the more tangible goal of placing a human doctor within reach of every citizen before thinking about replacing all the extant human doctors with some sort of robot.

A computer factory in Kiev, circa 1970. Note that all of the assembly work is still apparently done by hand.

The skeptics probably needn’t have worried overmuch about their colleagues’ grandiose dreams. With its computer industry in the shape it was, it was doubtful whether the Soviet Union had any hope of building its cybernetic Pyramids even with all the government will in the world.

In November of 1964, another American delegation was allowed a glimpse into the state of Soviet computing, although the Cuban Missile Crisis and other recent conflicts meant that their visit was much shorter and more restricted than the one of five and a half years earlier. Regardless, the Americans weren’t terribly impressed by the factory they were shown. It was producing computers at the rate of about seven or eight per month, and the visitors estimated its products to be roughly on par with an IBM 704 — a model that IBM had retired four years before. It was going to be damnably hard to realize the Soviet cybernetic dream with this trickle of obsolete machines; estimates were that about 1000 computers were currently operational in the Soviet Union, as compared to 30,000 in the United States. The Soviets were still struggling to complete the changeover from first-generation computer hardware, characterized by its reliance on vacuum tubes, to the transistor-based second generation. The Americans had accomplished this changeover years before; indeed, they were well on their way to an integrated-circuit-based third generation. Looking at a Soviet transistor, the delegation said it was roughly equivalent to an American version of same from 1957.

But when the same group visited the academics, they were much more impressed, noting that the Soviets “were doing quite a lot of very good and forward-thinking work.” Thus was encapsulated what would remain the curse of Soviet computer science: plenty of ideas, plenty of abstract know-how, and a dearth of actual hardware to try it all out on. The reports of the Soviet researchers ooze frustration with their lot in life. Their computers break down “each and every day,” reads one, “and information on a tape lasts without any losses no longer than one month.”

Their American visitors were left to wonder just why it was that the Soviet Union struggled so mightily to build a decent computing infrastructure. Clearly the Soviets weren’t complete technological dunces; this was after all the country that had detonated an atomic bomb years before anyone had dreamed it could, that had shocked the world by putting the first satellite and then the first man into space, that was even now giving the United States a run for its money to put a man on the moon.

The best way to address the Americans’ confusion might be to note that exploding atomic bombs and launching things into space encompassed a series of individual efforts responsive to brilliant individual minds, while the mass-production of the standardized computers that would be required to realize the cybernetics dream required a sort of infrastructure-building at which the Soviet system was notoriously poor. The world’s foremost proponent of collectivism was, ironically, not all that good at even the most fundamental long-term collectivist projects. The unstable Soviet power grid was only one example; the builders of many Soviet computer installations had to begin by building their own power plant right outside the computer lab just to get a dependable electrical supply.

The Soviet Union was a weird mixture of backwardness and forwardness in terms of technology, and the endless five-year plans only exacerbated its issues by emphasizing arbitrary quotas rather than results that mattered in the real world. Stories abounded of factories that produced lamp shades in only one color because that was the easiest way to make their quota, or that churned out uselessly long, fat nails because the quota was given in kilograms rather than in numbers of individual pieces. The Soviet computer industry was exposed to all these underlying economic issues. It was hard to make computers to rival those of the West when the most basic electrical components that went into them had failure rates dozens of times higher than their Western equivalents. Whether a planned economy run by computers could have fixed these problems is doubtful in the extreme, but at any rate the Soviet cyberneticists would never get a chance to try. It was the old chicken-or-the-egg conundrum. They thought they needed lots of good computers to build a better economy — but they knew they needed a better economy to build lots of good computers.

As the 1960s became the 1970s, these pressures would lead to a new approach to computer production in the Soviet Union. If they couldn’t beat the West’s computers with their homegrown designs, the Soviets decided, then they would just have to clone them.

(Sources: the academic-journal articles “Soviet Computing and Technology Transfer: An Overview” by S.E. Goodman, “MESM and the Beginning of the Computer Era in the Soviet Union” by Anne Fitzpatrick, Tatiana Kazakova, and Simon Berkovich, “S.A. Lebedev and the Birth of Soviet Computing” by G.D. Crowe and S.E. Goodman, “The Origin of Digital Computing in Europe” by S.E. Goodman, “Strela-1, The First Soviet Computer: Political Success and Technological Failure” by Hiroshi Ichikawa, and “InterNyet: Why the Soviet Union Did Not Build a Nationwide Computer Network” by Slava Gerovitch; studies from the Rand Corporation entitled “Soviet Cybernetics Technology I: Soviet Cybernetics, 1959-1962” and “Soviet Computer Technology — 1959”; the January 1970 issue of Rand Corporation’s Soviet Cybernetics Review; the books Stalinist Science by Nikolai Krementsov, The Human Use of Human Beings by Norbert Wiener, Red Storm Rising by Tom Clancy, and From Newspeak to Cyberspeak: A History of Soviet Cybernetics by Slava Gerovitch; The New York Times of December 11 1955, December 2 1959, and August 28 1966; Scientific American of October 1970; Byte of November 1984, February 1985, and October 1987.)

 


Memos from Digital Antiquarian Corporate Headquarters, June 2017 Edition

From the Publications Department:

Those of you who enjoy reading the blog in ebook format will be pleased to hear that Volume 12 in that ongoing series is now available, full of articles centering roughly on the year 1990. As usual, the ebook is entirely the work of Richard Lindner. Thank you, Richard!

From the Security Department:

A few days ago, a reader notified me of an alarming development: he was getting occasional popup advertisements for a shady online betting site when he clicked article links within the site. Oddly enough, the popups were very intermittent; in lots of experimenting, I was only able to get them to appear on one device — an older iPad, for what it’s worth — and even then only every tenth or twelfth time I tapped a link. But investigation showed that there was indeed some rogue JavaScript that was causing them. I’ve cleaned it up and hardened that part of the site a bit more, but I remain a little concerned in that I haven’t identified precisely how someone or something got access to the file that was tampered with in the first place. If anything suspicious happens during your browsing, please do let me know. I don’t take advertisements of any sort, so any that you see on this site are by definition a security breach of some sort. In the meantime, I’ll continue to scan the site daily in healthily paranoid fashion. The last thing I want is a repeat of the Great Handbag Hack of 2012. (Do note, however, that none of your Patreon or PayPal information is stored on the site, and the database containing commenters’ email addresses has remained uncompromised — so nothing to worry too much over.)

From the Scheduling Department:

I’ve had to skip publishing an article more weeks than I wanted to this year. First I got sick after coming home from my research trip to the Strong Museum in Rochester, New York. Then we moved (within Denmark) from Odense to Aarhus, and I’m sure I don’t need to tell most of you what a chaotic process that can be. Most recently, I’ve had to do a lot more research than usual for my next subject; see the next two paragraphs for more on that. In a couple of weeks my wife and I are going to take a little holiday, which means I’m going to have to take one more bye week in June. After that, though, I hope I can settle back into the groove and start pumping out a reliable article every week for a while. Thanks for bearing with me!

From the Long-Term-Planning Department:

I thought I’d share a taste of what I plan to cover in the context of 1991 — i.e., until I write another of these little notices to tell you the next ebook is available. If you prefer that each new article be a complete surprise, you’ll want to skip the next paragraph.

(Spoiler Alert!)

I’ve got a series in the works for the next few weeks covering the history of computing in the Soviet Union, culminating in East finally meeting West in the age of Tetris. I’m already very proud of the articles that are coming together on this subject, and hope you’re going to find this little-known story as fascinating as I do. Staying with the international theme, we’ll then turn our attention to Britain for a while; in that context, I’m planning articles on the great British tradition of open-world action-adventures, on the iconic software house Psygnosis, and finally on Psygnosis’s most enduring game, Lemmings. Then we’ll check in with the Amiga 3000 and CDTV. I’m hoping that Bob Bates and I will be able to put together something rather special on Timequest. Then some coverage of the big commercial online services that predated the modern World Wide Web, along with the early experiments with massively multiplayer games which they fostered. We’ll have some coverage of the amateur text-adventure scene; 1991 was a pretty good year there, with some worthy but largely forgotten games released. I may have more to say about the Eastgate school of hypertext, in the form of Sarah Smith’s King of Space, if I can get the thing working and if it proves worthy of writing about. Be that as it may, we’ll definitely make time for Corey Cole’s edutainment classic The Castle of Dr. Brain and other contemporary doings around Sierra. Then we’ll swing back around to Origin, with a look at the two Worlds of Ultima titles — yes, thanks to your recommendations I’ve decided to give them more coverage than I’d originally planned — and Wing Commander II. We’ll wrap up 1991 with Civilization, a game which offers so much scope for writing that it’s a little terrifying. I’m still mulling over how best to approach that one, but I’m already hugely looking forward to it.

(End Spoilers)

From the Accounting Department:

I’ve seen a nice uptick in Patreon participation in recent months, for which I’m very grateful. Thank you to every reader who’s done this writer the supreme honor of paying for the words I scribble on the (virtual) page, whether you’ve been doing so for years or you just signed up yesterday.

If you’re a regular reader who hasn’t yet taken the plunge, please do think about supporting these serious long-form articles about one of the most important cultural phenomena of our times by signing up as a Patreon subscriber or making a one-time donation via the links to the right. Remember that I can only do this work thanks to the support of people just like you.

See you Friday! Really, I promise this time…

 

The Many Faces of Middle-earth, 1954-1989

The transformation of J.R.R. Tolkien’s The Lord of the Rings from an off-putting literary trilogy — full of archaic diction, lengthy appendixes, and poetry, for God’s sake — into some of the most bankable blockbuster fodder on the planet must be one of the most unlikely stories in the history of pop culture. Certainly Tolkien himself must be about the most unlikely mass-media mastermind imaginable. During his life, he was known to his peers mostly as a philologist, or historian of languages. The whole Lord of the Rings epic was, he once admitted, “primarily linguistic in inspiration, and was begun in order to provide the necessary background history” for the made-up languages it contained. On another occasion, he called the trilogy “a fundamentally religious and Catholic work.” That doesn’t exactly sound like popcorn-movie material, does it?

So, what would this pipe-smoking, deeply religious old Oxford don have made of our modern takes on his work, of CGI spellcraft and 3D-rendered hobbits mowing down videogame enemies by the dozen? No friend of modernity in any of its aspects, Tolkien would, one has to suspect, have been nonplussed at best, outraged at worst. But perhaps — just perhaps, if he could contort himself sufficiently — he might come to see all this sound and fury as at least as much validation as betrayal of his original vision. In writing The Lord of the Rings, he had explicitly set out to create a living epic in the spirit of Homer, Virgil, Dante, and Malory. For better or for worse, the living epics of our time unspool on screens rather than on the page or in the chanted words of bards, and come with niceties like copyright and trademark attached.

And where those things exist, so exist also the corporations and the lawyers. It would be those entities rather than Tolkien or even any of his descendants who would control how his greatest literary work was adapted to screens large, small, and in between. Because far more people in this modern age of ours play games and watch movies than read books of any stripe — much less daunting doorstops like The Lord of the Rings trilogy — this meant that Middle-earth as most people would come to know it wouldn’t be quite the same land of myth that Tolkien himself had created so laboriously over so many decades in his little tobacco-redolent office. Instead, it would be Big Media’s interpretations and extrapolations therefrom. In the first 48 years of its existence, The Lord of the Rings managed to sell a very impressive 100 million copies in book form. In only the first year of its existence, the first installment of Peter Jackson’s blockbuster film trilogy was seen by 150 million people.

To understand how The Lord of the Rings and its less daunting predecessor The Hobbit were transformed from books authored by a single man into a palimpsest of interpretations, we need to understand how J.R.R. Tolkien lost control of his creations in the first place. And to begin to do that, we need to cast our view back to the years immediately following the trilogy’s first issuance in 1954 and 1955 by George Allen and Unwin, who had already published The Hobbit with considerable success almost twenty years earlier.

During its own early years, The Lord of the Rings didn’t do anywhere near as well as The Hobbit had, but did do far better than its publisher or its author had anticipated. It sold at least 225,000 copies (this and all other sales figures given in this article refer to sales of the trilogy as a whole, not to sales of the individual volumes that made up the trilogy) in its first decade, the vast majority of them in its native Britain, despite being available only in expensive hardcover editions and despite being roundly condemned, when it was noticed at all, by the very intellectual and literary elites that made up its author’s peer group. In the face of their rejection by polite literary society, the books sold mostly to existing fans of fantasy and science fiction, creating some decided incongruities; Tolkien never quite seemed to know how to relate to this less mannered group of readers. In 1957, the trilogy won the only literary prize it would ever be awarded, becoming the last recipient of the brief-lived International Fantasy Award, which belied its hopeful name by being a largely British affair. Tolkien, looking alternately bemused and uncomfortable, accepted the award, shook hands and signed autographs for his fans, smiled for the cameras, and got the hell out of there just as quickly as he could.

The books’ early success, such as it was, was centered very much in Britain; the trilogy only sold around 25,000 copies in North America during the entirety of its first decade. It enjoyed its first bloom of popularity there only in the latter half of the 1960s, ironically fueled by two developments that its author found thoroughly antithetical. The first was a legally dubious mass-market paperback edition published in the United States by Ace Books in 1965; the second was the burgeoning hippie counterculture.

Donald Wollheim, senior editor at Ace Books, had discovered what he believed to be a legal loophole giving him the right to publish the trilogy, thanks to the failure of Houghton Mifflin, Tolkien’s American hardcover publisher, to properly register their copyright to it in the United States. Never a man prone to hesitation, he declared that Houghton Mifflin’s negligence had effectively left The Lord of the Rings in the public domain, and proceeded to publish a paperback edition without consulting Tolkien or paying him anything at all. Condemned by the resolutely old-fashioned Tolkien for taking the “degenerate” form of the paperback as much as for the royalties he wasn’t paid, the Ace editions nevertheless sold in the hundreds of thousands in a matter of months. Elizabeth Wollheim, daughter of Donald and herself a noted science-fiction and fantasy editor, has characterized the instant of the appearance of the Ace editions of The Lord of the Rings in October of 1965 as the “Big Bang” that led to the modern cottage industry in doorstop fantasy novels. Along with Frank Herbert’s Dune, which appeared the same year, they obliterated almost at a stroke the longstanding tradition in publishing of genre novels as concise works coming in at under 250 pages.

Even as these cheap Ace editions of Tolkien became a touchstone of what would come to be known as nerd culture, they were also seized on by a very different constituency. With the Summer of Love just around the corner, the counterculture came to see in the industrialized armies of Sauron and Saruman the modern American war machine they were protesting, in the pastoral peace of the Shire the life they saw as their naive ideal. The Lord of the Rings became one of the hippie movement’s literary totems, showing up in the songs of Led Zeppelin and Argent, and, as later memorably described by Peter S. Beagle in the most famous introduction to the trilogy ever written, even scrawled on the walls of New York City’s subways (“Frodo lives!”). Beagle’s final sentiments in that piece could stand in very well for the counterculture’s as a whole: “We are raised to honor all the wrong explorers and discoverers — thieves planting flags, murderers carrying crosses. Let us at last praise the colonizers of dreams.”

If Tolkien had been uncertain how to respond to the earnest young science-fiction fans who had started showing up at his doorstep seeking autographs in the late 1950s, he had no shared frame of reference whatsoever with these latest readers. He was a man at odds with his times if ever there was one. On the rare occasions when contemporary events make an appearance in his correspondence, it always reads as jarring. Tolkien comes across as a little confused by it all, never quite able to get the language right. For example, in a letter from 1964, he writes that “in a house three doors away dwells a member of a group of young men who are evidently aiming to turn themselves into a Beatle Group. On days when it falls to his turn to have a practice session the noise is indescribable.” Whatever the merits of the particular musicians in question, one senses that the “noise” of “Beatle Group” music wouldn’t have suited Tolkien one bit in any scenario. And as for Beagle’s crack about “murderers carrying crosses,” it will perhaps suffice to note that his introduction was published only after Tolkien, the devout Catholic, had died. Like the libertarian conservative Robert Heinlein, whose Stranger in a Strange Land became another of the counterculture’s totems, Tolkien suffered the supreme irony of being embraced as a pseudo-prophet by a group whose sociopolitical worldview was almost the diametrical opposite of his own. As the critic Leonard Jackson has noted, it’s decidedly odd that the hippies, who “lived in communes, were anti-racist, were in favour of Marxist revolution and free love” should choose as their favorite “a book about a largely racial war, favouring feudal politics, jam-full of father figures, and entirely devoid of sex.”

Note the pointed reference to these first Ballantine editions of The Lord of the Rings as the “authorized” editions.

To what extent Tolkien was even truly aware of his works’ status with the counterculture is something of an open question, although he certainly must have noticed the effect it had on his royalty checks after the Ace editions were forced off the market, to be replaced by duly authorized Ballantine paperbacks. In the first two years after issuing the paperbacks, Ballantine sold almost 1 million copies of the series in North America alone.

In October of 1969, smack dab in the midst of all this success, Tolkien, now 77 years old and facing the worry of a substantial tax bill in his declining years, made one of the most retrospectively infamous deals in the history of pop culture. He sold the film rights to The Hobbit and The Lord of the Rings to the Hollywood studio United Artists for £104,602 and a fixed cut of 7.5 percent of any profits that might result from cinematic adaptations. And along with film rights went “merchandising rights.” Specifically, United Artists was given rights to the “manufacture, sale, and distribution of any and all articles of tangible personal property other than novels, paperbacks, and other printed published matter.” All of these rights were granted “in perpetuity.”

What must have seemed fairly straightforward in 1969 would in decades to come turn into a Gordian Knot involving hundreds of lawyers, all trying to resolve once and for all just what part of Tolkien’s legacy he had retained and what part he had sold. In the media landscape of 1969, the merchandising rights to “tangible personal property” which Tolkien and United Artists had envisioned must have been limited to toys, trinkets, and souvenirs, probably associated with any films United Artists should choose to make based on Tolkien’s books. Should the law therefore limit the contract to its signers’ original intent, or should it be read literally? If the law chose the latter course, Tolkien had unknowingly sold off the videogame rights to his work before videogames even existed in anything but the most nascent form. Or did he really? Should videogames, being at their heart intangible code, really be lumped even by the literalists into the rights sold to United Artists? After all, the contract explicitly reserves “the right to utilize and/or dispose of all rights and/or interests not herein specifically granted” to Tolkien. This question only gets more fraught in our modern age of digital distribution, when games are often sold with no tangible component at all. And then what of tabletop games? They’re quite clearly neither novels nor paperbacks, but they might be, at least in part, “other printed published matter.” What precisely did that phrase mean? The contract doesn’t stipulate. In the absence of any clear pathways through this legal thicket, the history of Tolkien licensing would become that of a series of uneasy truces occasionally  erupting into open legal warfare. About the only things that were clear were that Tolkien — soon, his heirs — owned the rights to the original books and that United Artists — soon, the person who bought the contract from them — owned the rights to make movies out of them. Everything else was up for debate. And debated it would be, at mind-numbing length.

It would, however, be some time before the full ramifications of the document Tolkien had signed started to become clear. In the meantime, United Artists began moving forward with a film adaptation of The Lord of the Rings that was to have been placed in the hands of the director and screenwriter John Boorman. Boorman worked on the script for years, during which Tolkien died and his literary estate passed into the hands of his heirs, most notably his third son and self-appointed steward of his legacy Christopher Tolkien. The final draft of Boorman’s script compressed the entire trilogy into a single 150-minute film, and radically changed it in terms of theme, character, and plot to suit a Hollywood sensibility. For instance, Boorman added the element of sex that was so conspicuously absent from the books, having Frodo and Galadriel engage in a torrid affair after the Fellowship comes to Lothlórien. (Given the disparity in their sizes, one does have to wonder about the logistics, as it were, of such a thing.) But in the end, United Artists opted, probably for the best, not to let Boorman turn his script into a movie. (Many elements from the script would turn up later in Boorman’s Arthurian epic Excalibur.)

Of course, it’s unlikely that literary purity was foremost on United Artists’ minds when they made their decision. As the 1960s had turned into the 1970s and the Woodstock generation had gotten jobs and started families, Tolkien’s works had lost some of their trendy appeal, retaining their iconic status only among fantasy fandom. Still, the books continued to sell well; they would never lose the status they had acquired, almost from the moment the Ace editions had been published, of being the bedrock of modern fantasy fiction, something everyone with even a casual interest in the genre had to at least attempt to read. Not being terribly easy books, they defeated plenty of these would-be readers, who went off in search of the more accessible, more contemporary-feeling epic-fantasy fare so many publishers were by now happily providing. Yet even among the readers it rebuffed, The Lord of the Rings retained the status of an aspirational ideal.

In 1975, a maverick animator named Ralph Bakshi, who had heretofore been best known for Fritz the Cat, the first animated film to earn an X rating, came to United Artists with a proposal to adapt The Lord of the Rings into a trio of animated features that would be relatively inexpensive in comparison to Boorman’s plans for a live-action epic. United Artists didn’t bite, but did signify that they might be amenable to selling the rights they had purchased from Tolkien if Bakshi could put together a few million dollars to make it happen. In December of 1976, following a string of proposals and deals too complicated and imperfectly understood to describe here, a hard-driving music and movie mogul named Saul Zaentz wound up owning the whole package of Tolkien rights that had previously belonged to United Artists. He intended to use his purchase first to let Bakshi make his films and thereafter for whatever other opportunities might happen to come down the road.

Saul Zaentz, seated at far left, with Creedence Clearwater Revival.

Saul Zaentz had first come to prominence back in 1967, when he’d put together a group of investors to buy a struggling little jazz label called Fantasy Records. His first signing as the new president of Fantasy was Creedence Clearwater Revival, a rock group he had already been managing. Whether due to Zaentz’s skill as a talent spotter or sheer dumb luck, it was the sort of signing that makes a music mogul rich for life. Creedence promptly unleashed eleven top-ten singles and five top-ten albums over the course of the next three and a half years, the most concentrated run of hits of any 1960s band this side of the Beatles. And Zaentz got his fair share of all that filthy lucre — more than his fair share, his charges eventually came to believe. When the band fell apart in 1972, much of the cause was infighting over matters of business. The other members came to blame Creedence’s lead singer and principal songwriter John Fogerty for convincing them to sign a terrible contract with Zaentz that gave away rights to their songs to him for… well, in perpetuity, actually. And as for Fogerty, he of course blamed Zaentz for all the trouble. Decades of legal back and forth followed the breakup. At one point, Zaentz sued Fogerty on the novel legal theory of “self-plagiarization”: the songs Fogerty was now writing as a solo artist, went the brief, were too similar to the ones he used to write for Creedence, all of whose copyrights Zaentz owned. While his lawyers pleaded his case in court, Fogerty vented his rage via songs like “Zanz Kant Danz,” the story of a pig who, indeed, can’t dance, but will happily “steal your money.”

I trust that this story gives a sufficient impression of just what a ruthless, litigious man now owned adaptation rights to the work of our recently deceased old Oxford don. But whatever else you could say about Saul Zaentz, he did know how to get things done. He secured financing for the first installment of Bakshi’s animated Lord of the Rings, albeit on the condition that he cut the planned three-film series down to two. Relying heavily on rotoscoping to give his cartoon figures an uncannily naturalistic look, Bakshi finished the film for release in November of 1978. Regarded as something of a cult classic among certain sectors of Tolkien fandom today, in its own day the film was greeted with mixed to poor reviews. The financial picture is equally muddled. While it’s been claimed, including by Bakshi himself, that the movie was a solid success, earning some $30 million on a budget of a little over $4 million, the fact remains that Zaentz was unable to secure funding for the sequel, leaving poor Frodo, Sam, and Gollum forever in limbo en route to Mount Doom. It is, needless to say, difficult to reconcile a successful first film with this refusal to back a second. But regardless of the financial particulars, The Lord of the Rings wouldn’t make it back to the big screen for more than twenty years, until the enormous post-millennial Peter Jackson productions that well and truly, once and for all, broke Middle-earth into the mainstream.

Yet, although the Bakshi adaptation was the only Tolkien film to play in theaters during this period, it wasn’t actually the only Tolkien film on offer. In November of 1977, a year before the Bakshi Lord of the Rings made its bow, a decidedly less ambitious animated version of The Hobbit had played on American television. The force behind it was Rankin/Bass Productions, who had previously been known in television broadcasting for holiday specials such as Rudolph the Red-Nosed Reindeer. Their take on Tolkien was authorized not by Saul Zaentz but by the Tolkien estate. Being shot on video rather than film and then broadcast rather than shown in theaters, the Rankin/Bass Hobbit was not, legally speaking, a “movie” under the terms of the 1969 contract. Nor was it a “tangible” product, thus making it fair game for the Tolkien estate to authorize without involving Zaentz. That, anyway, was the legal theory under which the estate was operating. They even authorized a sequel to the Rankin/Bass Hobbit in 1980, which rather oddly took the form of an adaptation of The Return of the King, the last book of The Lord of the Rings. A precedent of dueling licenses, authorizing different versions of what to casual eyes at least often seemed to be the very same things, was thus established.

But these flirtations with mainstream visibility came to an end along with the end of the 1970s. After the Ralph Bakshi and Rankin/Bass productions had all had their moments in the sun, The Lord of the Rings was cast back into its nerdy ghetto, where it remained more iconic than ever. Yet the times were changing in some very important ways. From the moment he had clear ownership of the rights Tolkien had once sold to United Artists, Saul Zaentz had taken to interpreting their compass in the broadest possible way, and had begun sending his lawyers after any real or alleged infringers who grew large enough to come to his attention. This marked a dramatic change from the earliest days of Tolkien fandom, when no one had taken any apparent notice of fannish appropriations of Middle-earth, to such an extent that fans had come to think of all use of Tolkien’s works as fair use. In that spirit, in 1975 a tiny game publisher called TSR, incubator of an inchoate revolution called Dungeons & Dragons, had started selling a non-Dungeons & Dragons strategy game called Battle of the Five Armies that was based on the climax of The Hobbit. In late 1977, Zaentz sent them a cease-and-desist letter demanding that the game be immediately taken off the market. And, far more significantly in the long run, he also demanded that all Tolkien references be excised from Dungeons & Dragons. It wasn’t really clear that Zaentz ought to have standing to sue, given that Battle of the Five Armies and especially Dungeons & Dragons consisted of so much of the “printed published matter” that was supposedly reserved to the Tolkien estate. But, hard charger that he was, Zaentz wasn’t about to let such niceties stop him. He was establishing legal precedent, and thereby cementing his position for the future.

The question of just how much influence Tolkien had on Dungeons & Dragons has been long obscured by this specter of legal action, which gave everyone on the TSR side ample reason to be less than entirely forthcoming. That said, certain elements of Dungeons & Dragons — most obviously the “hobbit” character class found in the original game — undeniably walked straight off the pages of Tolkien and into those of Gary Gygax’s rule books. At the same time, though, the mechanics of Dungeons & Dragons had, as Gygax always strenuously asserted, much more to do with the pulpier fantasy stories of Jack Vance and Robert E. Howard than they did with Tolkien. Ditto the game’s default personality, which hewed more to the “a group of adventurers meet in a bar and head out to bash monsters and collect treasure” modus operandi of the pulps than it did to Tolkien’s deeply serious, deeply moralistic, deeply tragic universe. You could play a more “serious” game of Dungeons & Dragons even in the early days, and some presumably did, but you had to bend the mechanics to make them fit. The more light-hearted tone of The Hobbit might seem better suited, but wound up being a bit too light-hearted, almost as much fairy tale as red-blooded adventure fiction. Some of the book’s episodes, like Bilbo and the dwarves’ antics with the trolls near the beginning of the story, verge on cartoon slapstick, with none of the swashbuckling swagger of Dungeons & Dragons. I love it dearly — far more, truth be told, than I love The Lord of the Rings — but not for nothing was The Hobbit conceived and marketed as a children’s novel.

Gygax’s most detailed description of the influence of Tolkien on Dungeons & Dragons appeared in the March 1985 issue of Dragon magazine. There he explicated the dirty little secret of adapting Tolkien to gaming: that the former just wasn’t all that well-suited for the latter without lots of sweeping changes.

Considered in the light of fantasy action adventure, Tolkien is not dynamic. Gandalf is quite ineffectual, plying a sword at times and casting spells which are quite low-powered (in terms of the D&D game). Obviously, neither he nor his magic had any influence on the games. The Professor drops Tom Bombadil, my personal favorite, like the proverbial hot potato; had he been allowed to enter the action of the books, no fuzzy-footed manling would have needed to undergo the trials and tribulations of the quest to destroy the Ring. Unfortunately, no character of Bombadil’s power can enter the games either — for the selfsame reasons! The wicked Sauron is poorly developed, virtually depersonalized, and at the end blows away in a cloud of evil smoke… poof! Nothing usable there. The mighty Ring is nothing more than a standard ring of invisibility, found in the myths and legends of most cultures (albeit with a nasty curse upon it). No influence here, either…

What Gygax gestures toward here but doesn’t quite touch is that The Lord of the Rings is at bottom a spiritual if not overtly religious tale, Middle-earth a land of ineffable unknowables. It’s impossible to translate that ineffability into the mechanistic system of causes and effects required by a game like Dungeons & Dragons. For all that Gygax is so obviously missing the point of Tolkien’s work in the extract above — rather hilariously so, actually — it’s also true that no Dungeon Master could attempt something like, say, Gandalf’s transformation from Gandalf the Grey to Gandalf the White without facing a justifiable mutiny from the players. Games — at least this kind of game — demand knowable universes.

Gygax claimed that Tolkien was ultimately far more important to the game’s commercial trajectory than he was to its rules. He noted, accurately, that the trilogy’s popularity from 1965 on had created an appetite for more fantasy, in the form of both books and things that weren’t quite books. It was largely out of a desire to ride this bandwagon, Gygax claimed, that Chainmail, the proto-Dungeons & Dragons which TSR released in 1971, promised players right there on the cover that they could use it to “refight the epic struggles related by J.R.R. Tolkien, Robert E. Howard, and other fantasy writers.” Gygax said that “the seeming parallels and inspirations are actually the results of a studied effort to capitalize on the then-current ‘craze’ for Tolkien’s literature.” Questionable though it is how “studied” his efforts really were in this respect, it does seem fairly clear that the biggest leg-up Tolkien gave to Gygax and his early design partner Dave Arneson was in giving so many potential players a taste for epic fantasy in the first place.

At any rate, we can say for certain that, beyond prompting a grudge in Gary Gygax against all things Tolkien — which, like most Gygaxian grudges, would last the rest of its holder’s life — Zaentz’s legal threat had a relatively modest effect on the game of Dungeons & Dragons. Hobbits were hastily renamed “halflings,” a handful of other references were scrubbed away or obfuscated, and life went on.

More importantly for Zaentz, the case against TSR and a few other even smaller tabletop-game publishers had now established the precedent that this field was within his licensing purview. In 1982, Tolkien Enterprises, the umbrella corporation Zaentz had created to manage his portfolio, authorized a three-employee publisher called Iron Crown Enterprises, heretofore known for the would-be Dungeons & Dragons competitor Rolemaster, to adapt their system to Middle-earth. Having won the license by simple virtue of being the first publisher to work up the guts to ask for it, Iron Crown went on to create Middle-earth Role Playing. The system rather ran afoul of the problem we’ve just been discussing: that, inspiring though so many found the setting in the broad strokes, the mechanics — or perhaps lack thereof — of Middle-earth just didn’t lend themselves all that well to a game. Unsurprisingly in light of this, Middle-earth Role Playing acquired a reputation as a “game” that was more fun to read, in the form of its many lengthy and lovingly detailed supplements exploring the various corners of Middle-earth, than it was to actually play; some wags took to referring to the line as a whole as Encyclopedia Middle-earthia. Nevertheless, it lasted more than fifteen years, was translated into twelve languages, and sold over 250,000 copies in English alone, thereby becoming one of the most successful tabletop RPGs ever not named Dungeons & Dragons.

But by no means was it all smooth sailing for Iron Crown. During the game’s early years, which were also its most popular, they were very nearly undone by an episode that serves to illustrate just how dangerously confusing the world of Tolkien licensing could become. In 1985, Iron Crown decided to jump on the gamebook bandwagon with a line of paperbacks they initially called Tolkien Quest, but quickly renamed to Middle-earth Quest to tie it more closely to their extant tabletop RPG. Their take on the gamebook was very baroque in comparison to the likes of Choose Your Own Adventure or even Fighting Fantasy; the rules for “reading” their books took up thirty pages on their own, and some of the books included hex maps for plotting your movements around the world, thus rather blurring the line between gamebook and, well, game. Demian Katz, who operates the definitive Internet site devoted to gamebooks, calls the Middle-earth Quest line “among the most complex gamebooks ever published,” and he of all people certainly ought to know. Whether despite their complexity or because of it, the first three volumes in the line were fairly successful for Iron Crown — and then the legal troubles started.

The Tolkien estate decided that Iron Crown had crossed a line with their gamebooks, encroaching on the literary rights to Tolkien which belonged to them. Whether the gamebooks truly were more book or game is an interesting philosophical question to ponder — particularly so given that they were such unusually crunchy iterations on the gamebook concept. Questions of philosophical taxonomy aside, though, they certainly were “printed published matter” that looked for all the world like everyday books. Tolkien Enterprises wasn’t willing to involve themselves in a protracted legal showdown over something as low-stakes as a line of gamebooks. Iron Crown would be on their own in this battle, should they choose to wage it. Deciding the potential rewards weren’t worth the risks of trying to convince a judge who probably wouldn’t know Dungeons & Dragons from Mazes and Monsters that these things which looked like conventional paperback books were actually something quite different, Iron Crown pulled the line off the market and destroyed all copies as part of a settlement agreement. The episode may have cost them as much as $2.5 million. A few years later, the ever-dogged Iron Crown would attempt to resuscitate the line after negotiating a proper license with the Tolkien estate — no mean feat in itself; Christopher Tolkien in particular is famously protective of that portion of his father’s legacy which is his to protect — but by then the commercial moment of the gamebook in general had passed. The whole debacle would continue to haunt Iron Crown for a long, long time. In 2000, when they filed for Chapter 11 bankruptcy, they would state that the debt they had been carrying for almost fifteen years from the original gamebook settlement was a big part of the reason.

By that point, the commercial heyday of the tabletop RPG was also long past. Indeed, already by the time that Iron Crown and Tolkien Enterprises had inked their first licensing deal back in 1982, computer-based fantasies, in the form of games like Zork, Ultima, and Wizardry, were threatening to eclipse the tabletop varieties that had done so much to inspire them. Here, perhaps more so even than in tabletop RPGs, the influence of Tolkien was pervasive. Designers of early computer games often appropriated Middle-earth wholesale, writing what amounted to interactive Tolkien fan fiction. The British text-adventure house Level 9, for example, first made their name with Colossal Adventure, a re-implementation of Will Crowther and Don Woods’s original Adventure with a Middle-earth coda tacked onto the end, thus managing the neat trick of extensively plagiarizing two different works in a single game. There followed two more Level 9 games set in Middle-earth, completing what they were soon proudly advertising, in either ignorance or defiance of the concept of copyright, as their Middle-earth Trilogy.

But the most famous constant devotee and occasional plagiarist of Tolkien among the early computer-game designers was undoubtedly Richard Garriott, who had discovered The Lord of the Rings and Dungeons & Dragons, the two influences destined more than any other to shape the course of his life, within six months of one another during his teenage years. Garriott called his first published game Akalabeth, after Tolkien’s Akallabêth, the name of a chapter in The Silmarillion, a posthumously published book of Middle-earth legends. The word means “downfall” in one of Tolkien’s invented languages, but Garriott chose it simply because he thought it sounded cool; his game otherwise had little to no explicit connection to Middle-earth. Regardless, the computer-game industry wouldn’t remain small enough that folks could get away with this sort of thing for very long. Akalabeth soon fell out of print, superseded by Garriott’s more complex series of Ultima games that followed it, while Level 9 was compelled to scrub the erstwhile Middle-earth Trilogy free of Tolkien and re-release it as the Jewels of Darkness Trilogy.

In the long run, the influence of Tolkien on digital games would prove subtler but also even more pervasive than these earliest forays into blatant plagiarism would imply. Richard Garriott may have dropped the Tolkien nomenclature from his subsequent games, but he remained thoroughly inspired by the example of Tolkien, that ultimate fantasy world-builder, when he built the world of Britannia for his Ultima series. Of course, there were obvious qualitative differences between Middle-earth and Britannia. How could there not be? One was the creation of an erudite Oxford don, steeped in a lifetime’s worth of study of classical and medieval literature; the other was the creation of a self-described non-reader barely out of high school. Nowhere is the difference starker than in the area of language, Tolkien’s first love. Tolkien invented entire languages from scratch, complete with grammars and pronunciation charts; Garriott substituted a rune for each letter in the English alphabet and seemed to believe he had done something equivalent. Garriott’s clumsy mishandling of Elizabethan English, meanwhile, all “thees” and “thous” in places where the formal “you” should be used, is enough to make any philologist roll over in his grave. But his heart was in the right place, and despite its creator’s limitations Britannia did take on a life of its own over the course of many Ultima iterations. If there is a parallel in computer gaming to what The Lord of the Rings and Middle-earth came to mean to fantasy literature, it must be Ultima and its world of Britannia.

In addition to the unlicensed knock-offs that were gradually driven off the market during the early 1980s and the more abstracted homages that replaced them, there was also a third category of Tolkien-derived computer games: that of licensed products. The first and only such licensee during the 1980s was Melbourne House, a book publisher turned game maker located in far-off Melbourne, Australia. Whether out of calculation or happenstance, Melbourne House approached the Tolkien estate rather than Tolkien Enterprises in 1982 to ask for a license. They were duly granted the right to make a text-adventure adaptation of The Hobbit, under certain conditions, very much in character for Christopher Tolkien, that were intended to ensure respect for The Hobbit’s status as a literary work; most notably, they would be required to include a paperback copy of the novel with the game. In a decision he would later come to regret, Saul Zaentz elected to cede this ground to the Tolkien estate without a fight, apparently deeming a computer game intangible enough to make quibbling over it a risky proposition. Another uneasy, tacit, yet surprisingly enduring precedent was thus set: Tolkien Enterprises would have control of Tolkien tabletop games, while the Tolkien estate would have control of Tolkien videogames. Zaentz’s cause for regret would come as he watched the digital-gaming market explode into tens and then hundreds of times the size of the tabletop market.

In fact, that first adaptation of The Hobbit played a role in that very process. The game became a sensation in Europe — playing it became a rite of passage for a generation of gamers there — and a substantial hit in the United States as well. It went on to become almost certainly the best-selling single text adventure ever made, with worldwide sales that may have exceeded half a million units. I’ve written at length about the Hobbit text adventure earlier, so I’ll refer you back to that article rather than describe its bold innovations and weird charm here. Otherwise, suffice to say that The Hobbit’s success proved, if anyone still doubted it, that licenses in computer games worked in commercial terms, no matter how much some might carp about the lack of originality they represented.

Still, Melbourne House appears to have had some trepidation about tackling the greater challenge of adapting The Lord of the Rings to the computer. The reasons are understandable: the simple quest narrative that was The Hobbit — the book is actually subtitled There and Back Again — read like a veritable blueprint for a text adventure, while the epic tale of spiritual, military, and political struggle that was The Lord of the Rings represented, to say the least, a more substantial challenge for its would-be adapters. Melbourne House’s first anointed successor to The Hobbit thus became Sherlock, a text adventure based on another literary property entirely. They didn’t return to Middle-earth until 1986, four years after The Hobbit, when they made The Fellowship of the Ring into a text adventure. Superficially, the new game played much like The Hobbit, but much of the charm was gone, with quirks that had seemed delightful in the earlier game now just seeming annoying. Even had The Fellowship of the Ring been a better game, by 1986 it was getting late in the day for text adventures — even text adventures like this one with illustrations. Reviews were lukewarm at best. Nevertheless, Melbourne House kept doggedly at the task of completing the story of Frodo and the One Ring, releasing The Shadows of Mordor in 1987 and The Crack of Doom in 1989. All of these games went largely unloved in their day, and remain so in our own.

In a belated attempt to address the formal mismatch between the epic narrative of The Lord of the Rings and the granular approach of the text adventure, Melbourne House released War in Middle-earth in 1988. Partially designed by Mike Singleton, and drawing obvious inspiration from his older classic The Lords of Midnight, it was a strategy game which let the player refight the entirety of the War of the Ring, on the level of both armies and individual heroes. The Lords of Midnight had been largely inspired by Singleton’s desire to capture the sweep and grandeur of The Lord of the Rings in a game, so in a sense this new project had him coming full circle. But, just as Melbourne House’s Lord of the Rings text adventures had lacked the weird fascination of The Hobbit, War in Middle-earth failed to rise to the heights of The Lords of Midnight, despite enjoying the official license the latter had lacked.

As the 1980s came to a close, then, the Tolkien license was beginning to rival the similarly demographically perfect Star Trek license for the title of the most misused and/or underused — take your pick — in computer gaming. Tolkien Enterprises, normally the more commercially savvy and aggressive of the two Tolkien licensers, had ceded that market to the Tolkien estate, who seemed content to let Melbourne House dawdle along with an underwhelming and little-noticed game every year or two. At this point, though, another computer-game developer would pick up the mantle from Melbourne House and see if they could manage to do something less underwhelming with it. We’ll continue with that story next time.

Before we get to that, though, we might take a moment to think about how different things might have been had the copyrights to Tolkien’s works been allowed to expire with their creator. There is some evidence that Tolkien himself held to this as the fairest course. In the late 1950s, in a letter to one of the first people to approach him about making a movie out of The Lord of the Rings, he expressed his wish that any movie made during his lifetime not deviate too far from the books, citing as an example of what he didn’t want to see the 1950 movie of H. Rider Haggard’s Victorian adventure novel King Solomon’s Mines and the many liberties it took with its source material. “I am not Rider Haggard,” he wrote. “I am not comparing myself with that master of Romance, except in this: I am not dead yet. When the film of King Solomon’s Mines was made, it had already passed, one might say, into the public property of the imagination. The Lord of the Rings is still the vivid concern of a living person, and is nobody’s toy to play with.” Can we read into this an implicit assumption that The Lord of the Rings would become part of “the public property of the imagination” after its own creator’s death? If so, things turned out a little differently than he thought they would. A “property of the imagination” Middle-earth has most certainly become. It’s the “public” part that remains problematic.

(Sources: the books Designers & Dragons Volume 1 and Volume 2 by Shannon Appelcline, Tolkien’s Triumph: The Strange History of The Lord of the Rings by John Lennard, The Frodo Franchise: The Lord of the Rings and Modern Hollywood by Kristin Thompson, Unfiltered: The Complete Ralph Bakshi by John M. Gibson, Playing at the World by Jon Peterson, and Dungeons and Dreamers: The Rise of Computer Game Culture from Geek to Chic by Brad King and John Borland; Dragon Magazine of March 1985; Popular Computing Weekly of December 30 1982; The Times of December 15 2002. Online sources include Janet Brennan Croft’s essay “Three Rings for Hollywood” and The Hollywood Reporter‘s archive of a 2012 court case involving Tolkien’s intellectual property.)

 


The View from the Trenches (or, Some Deadly Sins of CRPG Design)

From the beginning of this project, I’ve worked to remove the nostalgia factor from my writing about old games, to evaluate each game strictly on its own merits and demerits. I like to think that this approach has made my blog a uniquely enlightening window into gaming history. Still, one thing my years as a digital antiquarian have taught me is that you tread on people’s nostalgia at your peril. Some of what I’ve written here over the years has certainly generated its share of heat as well as light, not so much among those of you who are regular readers and commenters — you remain the most polite, thoughtful, insightful, and just plain nice readers any writer could hope to have — as among the ones who fire off nasty emails from anonymous addresses, who post screeds on less polite sites to which I’m occasionally pointed, or who offer up their drive-by comments right here every once in a while.

A common theme of these responses is that I’m not worthy of writing about this stuff, whether because I wasn’t there at the time — actually, I was, but whatever — or because I’m just not man enough to take my lumps and power through the really evil, unfair games. This rhetoric of inclusion and exclusion is all too symptomatic of the uglier sides of gaming culture. Just why so many angry, intolerant personalities are so attracted to computer games is a fascinating question, but must remain a question for another day. For today I will just say that, even aside from their ugliness, I find such sentiments strange. As far as I know, there’s zero street cred to be gained in the wider culture from being good at playing weird old videogames — or for that matter from being good at playing videogames of any stripe. What an odd thing to construct a public persona around. I’ve made a job out of analyzing old games, and even I sometimes want to say, “Dude, they’re just old games! Really, truly, they’re not worth getting so worked up over.”

That said, there do remain some rays of light amidst all this heat. It’s true that my experience of these games today — of playing them in a window on this giant monitor screen of mine, or playing them on the go on a laptop — must be in some fairly fundamental ways different from the way the same games were experienced all those years ago. One thing that gets obviously lost is the tactile, analog side of the vintage experience: handling the physical maps and manuals and packages (I now reference that stuff as PDF files, which isn’t quite the same); drawing maps and taking notes using real pen and paper (I now keep programs open in separate windows on that aforementioned giant monitor for those purposes); listening to the chuck-a-chunk of disk drives loading in the next bit of text or scenery (replacing the joy of anticipation is the instant response of my modern supercomputer). When I allow myself to put on my own nostalgia hat, just for a little while, I recognize that all these things are intimately bound up with my own memories of playing games back in the day.

And I also recognize that the discrepancies between the way I play now and the way I played back then go even further. Some of the most treasured of vintage games weren’t so much single works to be played and completed as veritable lifestyle choices. Ultima IV, to name a classic example, was huge enough and complicated enough that a kid who got it for Christmas in 1985 might very well still be playing it by the time Ultima V arrived in 1988; rinse and repeat for the next few entries in the series. From my jaded perspective, I wouldn’t brand any of these massive CRPGs as especially well-designed in the sense of being reasonably soluble games that can be completed in a reasonable amount of time, but then that wasn’t quite what most of the people who played them way back when were looking for in them. Actually solving the games became almost irrelevant for a kid who wanted to live in the world of Britannia.

I get that. I really do. No matter how deep a traveler in virtual time delves into the details of any era of history, there are some things he can never truly recapture. Were I to try, I would have to go away to spend a year or two disconnected from the Web and playing no other game — or at least no other CRPG — than the Ultima I planned to write about next. That, as I hope you can all appreciate, wouldn’t be a very good model for a blog like this one.

When I think in the abstract about this journey through gaming history I’ve been on for so long now, I realize that I’ve been trying to tell at least three intertwining stories.

One story is a critical design history of games. When I come to a game I judge worthy of taking the time to write about in depth — a judgment call that only becomes harder with every passing year, let me tell you — I play it and offer you my thoughts on it, trying to judge it not only in the context of our times but also in the context of its own times, and in the context of its peers.

A second story is that of the people who made these games, and how they went about doing so — the inevitable postmortems, as it were.

Doing these first two things is relatively easy. What’s harder is the third leg of the stool: what was it like to be a player of computer games all those years ago? Sometimes I stumble upon great anecdotes in this area. For instance, did you know about Clancy Shaffer?

In impersonal terms, Shaffer was one of the slightly dimmer stars among the constellation of adventure-game superfans — think Roe Adams III, Shay Addams, Computer Gaming World‘s indomitable Scorpia — who parlayed their love of the genre and their talent for solving games quickly into profitable sidelines if not full-on careers as columnists, commentators, play-testers, occasionally even design consultants; for his part, Shaffer contributed his long experience as a player to the much-loved Sir-Tech title Jagged Alliance.

Most of the many people who talked with Shaffer via post, via email, or via telephone assumed he was pretty much like them, an enthusiastic gamer and technology geek in his twenties or thirties. One of these folks, Rich Heimlich, has told of a time when a phone conversation turned to the future of computer technology in the longer view. “Frankly,” said Shaffer, “I’m not sure I’ll even be here to see it.” He was, he explained to his stunned interlocutor, 84 years old. He credited his hobby for the mental dexterity that caused so many to assume he was in his thirties at the oldest. Shaffer believed he had stayed mentally sharp by puzzling his way through so many games, and he needed only to look at the schedule of upcoming releases in a magazine to have something to look forward to in life. Many of his friends who, like him, had retired twenty years before were dead or senile, a situation Shaffer blamed on their having failed to find anything to do with themselves after leaving the working world behind.

Shaffer died in 2010 at age 99. Only after his passing, after reading his obituary, did Heimlich and other old computer-game buddies realize what an extraordinary life Shaffer had actually led, encompassing an education from Harvard University, a long career in construction and building management, 18 patents in construction engineering, an active leadership role in the Republican party, a Golden Gloves championship in heavyweight boxing, and a long and successful run as a yacht racer and sailor of the world’s oceans. And yes, he had also loved to play computer games, parlaying that passion into more than 500 published articles.

But great anecdotes like this one from the consumption side of the gaming equation are the exception rather than the rule, not because they aren’t out there in spades in theory — I’m sure there have been plenty of other fascinating characters like Clancy Shaffer who have also made a passion for games a part of their lives — but because they rarely get publicized. The story of the players of vintage computer games is that of a huge, diffuse mass of millions of people whose individual stories almost never stretch beyond their immediate families and friends.

The situation becomes especially fraught when we try to zero in on the nitty-gritty details of how games were played and judged in their day. Am I as completely out of line as some have accused me of being in harping so relentlessly on the real or alleged design problems of so many games that others consider to be classics? Or did people back in the day, at least some of them, also get frustrated and downright angry at betrayals of their trust in the form of illogical puzzles and boring busywork? I know that I certainly did, but I’m only one data point.

One would think that the magazines, that primary link between the people who made games and those who played them, would be the best way of finding out what players were really thinking. In truth, though, the magazines rarely provided skeptical coverage of the games industry. The companies whose games they were reviewing were of course the very same companies that were helping to pay their bills by buying advertising — an obvious conflict of interest if ever there was one. More abstractly but no less significantly, there was a sense among those who worked for the magazines and those who worked for the game publishers that they were all in this together, living as they all were off the same hobby. Criticizing individual games too harshly, much less entire genres, could damage that hobby, ultimately damaging the magazines as much as the publishers. Thus when the latest heavily hyped King’s Quest came down the pipe, littered with that series’s usual design flaws, there was little incentive for the magazines to note that this monarch had no clothes.

So, we must look elsewhere to find out what average players were really thinking. But where? Most of the day-to-day discussions among gamers back in the day took place over the telephone, on school playgrounds, on computer bulletin boards, or on the early commercial online services that preceded the World Wide Web. While Jason Scott has done great work snarfing up a tiny piece of the online world of the 1980s and early 1990s, most of it is lost, presumably forever. (In this sense at least, historians of later eras of gaming history will have an easier time of it, thanks to archive.org and the relative permanence of the Internet.) The problem of capturing gaming as gamers knew it thus remains one without a comprehensive solution. I must confess that this is one reason I’m always happy when you, my readers, share your experiences with this or that game in the comments section — even, or perhaps especially, when you disagree with my own judgments on a game.

Still, relying exclusively on first-hand accounts from decades later to capture what it was like to be a gamer in the old days can be problematic in the same way that it can be problematic to rely exclusively on interviews with game developers to capture how and why games were made all those years ago: memories can fade, personal agendas can intrude, and those rose-colored glasses of nostalgia can be hard to take off. Pretty soon we’re calling every game from our adolescence a masterpiece and dumping on the brain-dead games played by all those stupid kids today — and get off my lawn while you’re at it. The golden age of gaming, like the golden age of science fiction, will always be twelve or somewhere thereabouts. All that’s fine for hoisting a beer with the other old-timers, but it can be worse than useless for doing serious history.

Thankfully, every once in a while I stumble upon another sort of cracked window into this aspect of gaming’s past. As many of you know, I’ve spent a couple of weeks over the last couple of years trolling through the voluminous (and growing) game-history archives of the Strong Museum of Play. Most of this material, hugely valuable to me though it’s been and will doubtless continue to be, focuses on the game-making side of the equation. Some of the archives, though, contain letters from actual players, giving that unvarnished glimpse into their world that I so crave. Indeed, these letters are among my favorite things in the archives. They are, first of all, great fun. The ones from the youngsters are often absurdly cute; it’s amazing how many liked to draw pictures to accompany their missives.

But it’s when I turn to the letters from older writers that I’m gratified and, yes, made to feel a little validated when I read that people were in fact noticing that games weren’t always playing fair with them. I’d like to share a couple of the more interesting letters of this type with you today.

We’ll begin with a letter from one Wes Irby of Plano, Texas, describing what he does and especially what he doesn’t enjoy in CRPGs. At the time he sent it to the Questbusters adventure-game newsletter in October of 1988, Irby was a self-described “grizzled computer adventurer” of age 43. Shay Addams, Questbusters’s editor, found the letter worthy enough to spread around among publishers of CRPGs. (Perhaps tellingly, he didn’t choose to publish it in his newsletter.)

Irby titles his missive “Things I Hate in a Fantasy-Role-Playing Game.” Taken on its own, it serves very well as a companion piece to a similar article I once wrote about graphic adventures. But because I just can’t shut up, and because I can’t resist taking the opportunity to point out places where Irby is unusually prescient or insightful, I’ve inserted my own comments into the piece; they appear in italics in the text that follows. Otherwise, I’ve only cleaned up the punctuation and spelling a bit here and there. The rest is Irby’s original letter from 1988.


I hate rat killing!!! In Shard of Spring, I had to kill dozens of rats, snakes, kobolds, and bats before I could get back to the tower after a Wind Walk to safety. In Wizardry, the rats were Murphy’s ghosts, which I pummeled for hours when developing a new character. Ultima IV was perhaps the ultimate rat-killing game of all time; hour upon hour was spent in tedious little battles that I could not possibly lose and that offered little reward for victory. Give me a good battle to test my mettle, but don’t sentence me to rat killing!

Amen. The CRPG genre became the victim of an expectation which took hold early on that the games needed to be really, really long, needed to consume dozens if not hundreds of hours, in order for players to get their money’s worth. With disk space precious and memory space even more so on the computers of the era, developers had to pad out their games with a constant stream of cheap low-stakes random encounters to reach that goal. Amidst the other Interplay materials hosted at the Strong archive are several mentions of a version of Wasteland, prepared specially for testers in a hurry, in which the random encounters were left out entirely. That’s the version of Wasteland I’d like to play.

I hate being stuck!!! I enjoy the puzzles, riddles, and quests as a way to give some story line to the real heart of the game, which is killing bad guys. Just don’t give me any puzzles I can’t solve in a couple of hours. I solved Rubik’s Cube in about thirty hours, and that was nothing compared to some of the puzzles in The Destiny Knight. The last riddle in Knight of Diamonds delayed my completion (and purchase of the sequel) for nearly six months, until I made a call to Sir-Tech.

I haven’t discussed the issue of bad puzzle design in CRPGs to the same extent as I have the same issue in adventure games, but suffice to say that just about everything I’ve written in the one context applies equally in the other. Certainly riddles remain among the laziest and most problematic forms of puzzle in either genre: they require almost no programming effort to implement, yet they rely by definition on intuition and external cultural knowledge. Riddles aren’t puzzles at all really; the answer either pops into your head right away or it doesn’t, meaning the riddle turns into either a triviality or a brick wall. A good puzzle, by contrast, is one you can experiment with on your way to the correct solution. And as for the puzzles in The Bard’s Tale II: The Destiny Knight… much more on them a little later.

Perhaps the worst aspect of being stuck is the clue-book dilemma. Buying a clue book is demeaning. In addition, buying clue books could encourage impossible puzzles to boost the aftermarket for clue books. I am a reformed game pirate (that is how I got hooked), and I feel it is just as unfair for a company to charge me to finish the game I bought as it was for me to play the games (years ago) without paying for them. Multiple solutions, a la Might and Magic, are very nice. That game also had the desirable feature of allowing you to work on several things simultaneously so that being stuck on one didn’t bring the whole game to a standstill.

Here Irby brings up an idea I’ve also touched on once or twice: that the very worst examples of bad design can be read as not just good-faith disappointments but actual ethical lapses on the part of developers and publishers. Does selling consumers a game with puzzles that are insoluble except through hacking or the most tedious sort of brute-force approaches equate to breaching good faith by knowingly selling them a defective product? I tend to feel that it does.

As part of the same debate, the omnipresent clue books became a locus of much dark speculation and conspiracy theorizing back in the day. Did publishers, as Irby suggests, intentionally release games that couldn’t be solved without buying the clue book, thereby picking up additional sales? The profit margins on clue books, not incidentally, tended to be much higher than those enjoyed by the games themselves. Still, the answer is more complicated than the question might suggest. Based on my research into the industry of the time, I don’t believe that any publishers or developers made insoluble games with the articulated motive of driving clue-book sales. To the extent that there was an ulterior motive surrounding the subject of clue books, it was that the clue books would allow them to make money off some of the people who pirated their games. (Rumors — almost certainly false, but telling by their very presence — occasionally swirled around the industry about this or that popular title whose clue-book sales had allegedly outstripped the number of copies of the actual game which had been sold.) Yet the fact does remain that even the hope of using clue books as a way of getting money out of pirates required games that would be difficult enough to cause many pirates to go out and buy the book. The human mind is a funny place, and the clue-book business likely did create certain almost unconscious pressures on game designers to design less soluble games.

I hate no-fault life insurance! If there is no penalty, there is no risk, there is no fear — translate that to no excitement. The adrenaline actually surged a few times during play of the Wizardry series when I encountered a group of monsters that might defeat me. In Bard’s Tale II, death was so painless that I committed suicide several times because it was the most expedient way to return to the Adventurer’s Guild.

When you take the risk of loss out of the game, it might as well be a crossword puzzle. The loss of possessions in Ultima IV and the loss of constitution in Might and Magic were tolerable compromises. The undead status in Phantasie was very nice. Your character was unharmed except for the fact that no further advancement was possible. Penalties can be too severe, of course. In Shard of Spring, loss of one battle means all characters are permanently lost. Too tough.

Here Irby hits on one of the most fraught debates in CRPG design, stretching from the days of the original Wizardry to today: what should be the penalty for failure? There’s no question that the inability to save inside the dungeon was one of the defining aspects of Wizardry, the game that did more than any other to popularize the budding genre in the very early 1980s. Exultant stories of escaping the dreaded Total Party Loss by the skin of one’s teeth come up again and again when you read about the game. Andrew Greenberg and Bob Woodhead, the designers of Wizardry, took a hard-line stance on the issue, insisting that the lack of an in-dungeon save function was fundamental to an experience they had carefully crafted. They went so far as to issue legal threats against third-party utilities designed to mitigate the danger.

Over time, though, the mainstream CRPG industry moved toward the save-often, save-anywhere model, leaving Wizardry’s approach only to a hardcore sub-genre known as roguelikes. It seems clear that the change had some negative effects on encounter design; designers, assuming that players were indeed saving often and saving everywhere, felt they could afford to worry less about hitting players with impossible fights. Yet it also seems clear that many or most players, given the choice, would gladly trade the exhilaration of escaping near-disasters in Wizardry for protection from the consequences of the disasters they fail to escape. The best solution, it seems to me, is to make limited or unlimited saving a player-selectable option. Failing that, it strikes me as better to err on the side of generosity; after all, hardcore players can still capture the exhilaration and anguish of an iron-man mode by simply imposing their own rules for when they allow themselves to save. All that said, the debate will doubtless continue to rage.

I hate being victimized. Loss of life, liberty, etc., in a situation I could have avoided through skillful play is quite different from a capricious, unavoidable loss. The Amulet of Skill in Knight of Diamonds was one such situation. It was not reasonable to expect me to fail to try the artifacts I found — a fact I soon remedied with my backup disk!!! The surprise attacks of the mages in Wizardry were another such example. Each of the Wizardry series seems to have one of these, but the worst was the teleportation trap on the top level of Wizardry III, which permanently encased my best party in stone.

Beyond rather putting the lie to some of Greenberg and Woodhead’s claims of having exhaustively balanced the Wizardry games, these criticisms again echo those I’ve made in the context of adventure games. Irby’s examples are the CRPG equivalents of the dreaded adventure-game Room of Sudden Death — except that in CRPGs like Wizardry with perma-death, their consequences are much more dire than just having to go back to your last save.

I hate extraordinary characters! If everyone is extraordinary then extraordinary becomes extra (extremely) ordinary and uninteresting. The characters in Ultima III and IV and Bard’s Tale I and II all had the maximum ratings for all stats before the end of the game. They lose their personalities that way.

This is one of Irby’s subtler complaints, but also I think one of his most insightful. Characters in CRPGs are made interesting, as he points out, through a combination of strengths and weaknesses. I spent considerable time in a recent article describing how the design standards of SSI’s “Gold Box” series of licensed Dungeons & Dragons CRPGs declined over time, but couldn’t find a place for the example of Pools of Darkness, the fourth and last game in the series that began with Pool of Radiance. Most of the fights in Pools of Darkness are effectively unwinnable if you don’t have “extraordinary” characters, in that they come down to quick-draw contests to find out whether your party or the monsters can fire off devastating area-effect magic first. Your entire party needs to have a maxed-out dexterity score of 18 to hope to consistently survive these battles. Pools of Darkness thus rewards cheaters and punishes honest players; it represents a cruel betrayal of players who had played through the entire series honestly to that point, without availing themselves of character editors or the like. CRPGs should strive not to make the extraordinary ordinary, and they should certainly not demand extraordinary characters that the player can only come by through cheating.

There are several more features which I find undesirable, but are not sufficiently irritating to put them in the “I hate” category. One such feature is the inability to save the game in certain places or situations. It is miserable to find yourself in a spot you can’t get out of (or don’t want to leave because of the difficulty in returning) at midnight (real time). I have continued through the wee hours on occasion, much to my regret the next day. At other times it has gotten so bad I have dozed off at the keyboard. The trek from the surface to the final set of riddles in Ultima IV takes nearly four hours. Without the ability to save along the way, this doesn’t make for good after-dinner entertainment. Some of the forays in the Phantasie series are also long and difficult, with no provision to save. This problem is compounded when you have an old machine like mine that locks up periodically. Depending on the weather and the phase of the moon, sometimes I can’t rely on sessions that average over half an hour.

There’s an interesting conflict here, which I sense the usually insightful Irby may not have fully grasped, between his demand that death have consequences in CRPGs and his belief that he should be able to save anywhere. At the same time, though, it’s not an irreconcilable conflict. Roguelikes have traditionally let you save anywhere by quitting the game, but they delete the save as soon as you resume play, thus making it impossible to use later on as a fallback position.
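For the technically curious, that trick is simple enough to sketch in a few lines of C. Everything below (the file name, the contents of the game state) is invented for illustration; the essential point is only that the save file exists while the program is not running and is deleted the moment play resumes.

    #include <stdio.h>
    #include <stdlib.h>

    /* A hypothetical, stripped-down game state; real roguelikes serialize far more. */
    struct GameState { int dungeon_level; int hit_points; };

    #define SAVE_FILE "rogue.sav"   /* invented file name, for illustration only */

    /* On quit: write the state to disk and exit. */
    static void save_and_quit(const struct GameState *gs)
    {
        FILE *f = fopen(SAVE_FILE, "wb");
        if (f) {
            fwrite(gs, sizeof *gs, 1, f);
            fclose(f);
        }
        exit(0);
    }

    /* On launch: restore the state if a save exists, then delete the file at once,
       so it can never be reloaded later as a fallback after a disaster. */
    static int resume_if_possible(struct GameState *gs)
    {
        FILE *f = fopen(SAVE_FILE, "rb");
        if (!f)
            return 0;                      /* no save found; start a new game */
        size_t ok = fread(gs, sizeof *gs, 1, f);
        fclose(f);
        remove(SAVE_FILE);                 /* the crucial step */
        return ok == 1;
    }

    int main(void)
    {
        struct GameState gs = { 1, 20 };
        if (!resume_if_possible(&gs))
            puts("Starting a new game.");
        /* ...play... */
        save_and_quit(&gs);
    }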

Still, it should always raise a red flag when a game’s designers claim that something which just happened to be the easier choice from a technical perspective was in fact a considered design choice. This skepticism should definitely be applied to Wizardry. Were the no-save dungeons that were such an integral part of the Wizardry experience really a considered design choice or a (happy?) accident arising from technical affordances? It’s very difficult to say this many years on. What is clear is that saving state in any sort of comprehensive way was a daunting challenge for 8-bit CRPGs spread over multiple disk sides. Wizardry and The Bard’s Tale didn’t really even bother to try; literally the only persistent data in these games and many others like them is the state of your characters, meaning not only that the dungeons are completely reset every time you enter them but that it’s possible to “win” them repeatedly by killing the miraculously resurrected big baddie again and again. The 8-bit Ultima games did a little better, saving the state of the world map but not that of the cities or the dungeons. (I’ve nitpicked the extreme cruelty of Ultima IV’s ending, which Irby also references, enough on earlier occasions that I won’t belabor it any more here.) Only quite late in the day for the 8-bit CRPG did games like Wasteland work out ways to create truly, comprehensively persistent environments — in the case of Wasteland, by rewriting all of the data on each disk side on the fly as the player travels around the world (a very slow process, particularly in the case of the Commodore 64 and its legendarily slow disk drive).
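Something like the following sketch in C conveys the spirit of that write-back approach. The record layout and file name are my own inventions for illustration, not anything taken from Wasteland itself; the idea is simply that each map is re-saved to its slot on disk as soon as the player changes it, so that dead monsters stay dead and opened safes stay open.

    #include <stdio.h>

    /* Invented on-disk layout: one fixed-size record per map, stored in a single
       world file at offset map_id * sizeof(struct MapRecord). */
    struct MapRecord {
        unsigned char tiles[32][32];   /* terrain and object codes */
        unsigned char flags[64];       /* doors opened, monsters killed, and so on */
    };

    /* Write a (possibly modified) map back into its slot before the player moves on,
       so that the change is permanent rather than forgotten on the next visit. */
    static int persist_map(FILE *world, long map_id, const struct MapRecord *rec)
    {
        if (fseek(world, map_id * (long)sizeof *rec, SEEK_SET) != 0)
            return -1;
        return fwrite(rec, sizeof *rec, 1, world) == 1 ? 0 : -1;
    }

    int main(void)
    {
        struct MapRecord rec = { 0 };
        FILE *world = fopen("world.dat", "r+b");   /* hypothetical world file */
        if (world) {
            persist_map(world, 3, &rec);           /* write map #3 back to disk */
            fclose(world);
        }
        return 0;
    }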

Tedium is a killer. In Bard’s Tale there was one battle with 297 berserkers that always took fifteen or twenty minutes with the same results (this wasn’t rat-killing because the reward was significant and I could lose, maybe). The process of healing the party in the dungeon in Wizardry and the process of identifying discovered items in Shard of Spring are laborious. How boring it was in Ultima IV to stand around waiting for a pirate ship to happen along so I could capture it. The same can be said of sitting there holding down a key in Wasteland or Wrath of Denethenor while waiting for healing to occur. At least give me a wait command so I can read a book until something interesting happens.

I’m sort of ambivalent toward most aspects of mapping. A good map is satisfying and a good way to be sure nothing has been missed. Sometimes my son will use my maps (he hates mapping) in a game and find he is ready to go to the next level before his characters are. Mapping is a useful way to pace the game. The one irritating aspect of mapping is running off the edge of the paper. In Realms of Darkness mapping was very difficult because there was no “locater” or “direction” spell. More bothersome to me, though, was the fact that I never knew where to start on my paper. I had the same problem with Shard of Spring, but in retrospect that game didn’t require mapping.

Mapping is another area where the technical affordances of the earliest games had a major effect on their designs. The dungeon levels in most 8-bit CRPGs were laid out on grids of a consistent number of squares across and down; such a template minimized memory usage and simplified the programmer’s task enormously. Unrealistic though it was, it was also a blessing for mappers. Wizardry, a game that was oddly adept at turning its technical limitations into player positives, even included sheets of graph paper of exactly the right size in the box. Later games like Dungeon Master, whose levels sprawl everywhere, run badly afoul of the problem Irby describes above — that of maps “running off the edge of the paper.” In the case of Dungeon Master, it’s the one glaring flaw in what could otherwise serve as a masterclass in designing a challenging yet playable dungeon crawl.
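Here is a back-of-the-envelope illustration in C of just how cheap such a fixed grid is. The 20-by-20 dimensions and the bit-packed wall flags are assumptions chosen to evoke a Wizardry-style maze, not a reconstruction of any game’s actual data format.

    #include <stdint.h>
    #include <stdio.h>

    /* One dungeon level as a fixed 20x20 grid. Each cell packs its four walls
       into the low bits of a single byte, so a whole level fits in 400 bytes. */
    #define GRID   20
    #define WALL_N 0x01
    #define WALL_E 0x02
    #define WALL_S 0x04
    #define WALL_W 0x08

    typedef uint8_t Level[GRID][GRID];

    static int has_wall(Level lvl, int x, int y, uint8_t side)
    {
        return (lvl[y][x] & side) != 0;
    }

    int main(void)
    {
        Level lvl = { 0 };
        lvl[0][0] = WALL_N | WALL_W;       /* box in the top-left corner square */
        printf("One level occupies %zu bytes\n", sizeof(Level));
        printf("North wall at (0,0)? %d\n", has_wall(lvl, 0, 0, WALL_N));
        return 0;
    }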

I don’t like it when a program doesn’t take advantage of my second disk drive, and I would feel that way about my printer if I had one. I don’t like junk magic (spells you never use), and I don’t like being stuck forever with the names I pick on the spur of the moment. A name that struck my fancy one day may not on another.

Another problem similar to “junk magic” that only really began to surface around the time that Irby was writing this letter is junk skills. Wasteland is loaded with skills that are rarely or never useful, along with others that are essential, and there’s no way for the new player to identify which are which. It’s a more significant problem than junk magic usually is because you invest precious points into learning and advancing your skills; there’s a well-nigh irreversible opportunity cost to your choices. All of what we might call the second generation of Interplay CRPGs, which began with Wasteland, suffer at least somewhat from this syndrome. Like the sprawling dungeon levels in Dungeon Master, it’s an example of the higher ambitions and more sophisticated programming of later games impacting the end result in ways that are, at best, mixed in terms of playability.

I suppose you are wondering why I play these stupid games if there is so much about them I don’t like. Actually, there are more things I do like, particularly when compared to watching Gilligan’s Island or whatever the current TV fare is. I suppose it would be appropriate to mention a few of the things I do like.

In discussing the unavoidably anachronistic experience we have of old games today, we often note how many other games are at our fingertips — a luxury a kid who might hope to get one new game every birthday and Christmas most definitely didn’t enjoy. What we perhaps don’t address as much as we should is how much the entertainment landscape in general has changed. It can be a little tough even for those of us who lived through the 1980s to remember what a desert television was back then. I remember a television commercial — and from the following decade at that — in which a man checked into a hotel of the future, and was told that every movie ever made was available for viewing at the click of a remote control. Back then, this was outlandish science fiction. Today, it’s reality.

I like variety and surprises. Give me a cast of thousands over a fixed party anytime. Of course, the game designer has to force the need for multiple parties on me, or I will stick with the same group throughout because that is the best way to “win” the game. The Minotaur Temple in Phantasie I and the problems men had in Portsmouth in Might and Magic and the evil and good areas of Wizardry III were nice. More attractive are party changes for strategic reasons. What good are magic users in no-magic areas or a bard in a silent room? A rescue mission doesn’t need a thief and repetitive battles with many small opponents don’t require a fighter that deals heavy damage to one bad guy.

I like variety and surprises in the items found, the map, the specials encountered, in short in every aspect of the game. I like figuring out what things are and how they work. What a delight the thief’s dagger in Wizardry was! The maps in Wasteland are wonderful because any map may contain a map. The countryside contains towns and villages, the towns contain buildings, some buildings contain floors or secret passages. What fun!!!

I like missions and quests to pursue as I proceed. Some of these games are so large that intermediate goals are necessary to keep you on track. Might and Magic, Phantasie, and Bard’s Tale do a good job of creating a path with the “missions.” I like self-contained clues about the puzzles. In The Return of Heracles the sage was always there to provide an assist (for money, of course)  if you got stuck. The multiple solutions or sources of vital information in Might and Magic greatly enhanced the probability of completing the missions and kept the game moving.

I like the idea of recruiting new characters, as opposed to starting over from scratch. In Galactic Adventures your crew could be augmented by recruiting survivors of a battle, provided they were less experienced than your leader. Charisma (little used in most games) could impact recruiting. Wasteland provides for recruiting of certain predetermined characters you encounter. These NPCs can be controlled almost like your characters and will advance with experience. Destiny Knight allows you to recruit (with a magic spell) any of the monsters you encounter, and requires that some specific characters be recruited to solve some of the puzzles, but these NPCs can’t be controlled and will not advance in level, so they are temporary members. They will occasionally turn on you, an interesting twist!!!

I like various skills, improved by practice or training for various characters. This makes the characters unique individuals, adding to the variety. This was implemented nicely in both Galactic Adventures and Wasteland.

Eternal growth for my characters makes every session a little different and intriguing. If the characters “top out” too soon that aspect of the game loses its fascination. Wizardry was the best at providing continual growth opportunities because of the opportunity to change class and retain some of the abilities of the previous class. The Phantasie series seemed nicely balanced, with the end of the quest coming just before/as my characters topped out.

Speaking of eternal, I have never in all of my various adventures had a character retire because of age. Wizardry tried, but it never came into play because it was cheaper to heal at the foot of the stairs while identifying loot (same trip or short run to the dungeon for that purpose). Phantasie kept up with age, but it never affected play. I thought Might and Magic might, but I found the Fountain of Youth. The only FRPG I have played where you had to beat the clock is Tunnels of Doom, a simple hack-and-slash on my TI 99/4A that takes about ten hours for a game. Of course, it is quite different to spend ten hours and fail because the king died than it is to spend three months and fail by a few minutes. I like for time to be a factor to prevent me from being too conservative.

This matter of time affecting play really doesn’t fit into the “like” or the “don’t like” because I’ve never seen it effectively implemented. There are a couple of other items like that on my wish list. For example, training of new characters by older characters should take the place of slugging it out with Murphy’s ghost while the newcomers watch from the safety of the back row.

The placing of time limits on a game sounds to me like a very dangerous proposal. It was tried in 1989, the year after Irby wrote this letter, by The Magic Candle, a game that I haven’t played but that is quite well-regarded by the CRPG cognoscenti. That game was, however, kind enough to offer three difficulty levels, each with its own time limit, and the easiest level was generous enough that most players report that time never became a major factor. I don’t know of any game, even from this much crueler era of game design in general, that was cruel enough to let you play 100 hours or more and then tell you you’d lost because the evil wizard had finished conquering the world, thank you very much. Such an approach might have been more realistic than the alternative, where the evil wizard cackles and threatens occasionally but doesn’t seem to actually do much, but, as Sid Meier puts it, fun ought to trump realism every time in game design.

A very useful feature would be the ability to create my own macro consisting of a dozen or so keystrokes. Set up Control-1 through Control-9 and give me a simple way to specify the keystrokes to be executed when one is pressed.

Interestingly, this exact feature showed up in Interplay’s CRPGs very shortly after Irby wrote this letter, beginning with the MS-DOS version of Wasteland in March of 1989. And we do know that Interplay was one of the companies to which Shay Addams sent the letter. Is this a case of a single gamer’s correspondence being responsible for a significant feature in later games? The answer is likely lost forever to the vagaries of time and the inexactitude of memory.
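For anyone wondering what Irby is asking for in implementation terms, here is a toy version in C: a table of user-defined macros, one per Ctrl-number key, each played back through the game’s normal key handler. The sizes, names, and demo keystrokes are all invented; this shows the general idea, not Interplay’s code.

    #include <stdio.h>
    #include <string.h>

    #define NUM_MACROS 9           /* Ctrl-1 through Ctrl-9 */
    #define MAX_KEYS   12          /* "a dozen or so keystrokes", per Irby */

    static char macros[NUM_MACROS][MAX_KEYS + 1];

    /* Record a macro: the player types the keystroke sequence once. */
    static void define_macro(int slot, const char *keys)
    {
        if (slot >= 0 && slot < NUM_MACROS) {
            strncpy(macros[slot], keys, MAX_KEYS);
            macros[slot][MAX_KEYS] = '\0';
        }
    }

    /* Play a macro back by feeding each stored key to the usual input handler. */
    static void play_macro(int slot, void (*handle_key)(char))
    {
        if (slot >= 0 && slot < NUM_MACROS)
            for (const char *k = macros[slot]; *k; k++)
                handle_key(*k);
    }

    static void demo_handler(char c) { printf("key: %c\n", c); }

    int main(void)
    {
        define_macro(0, "aadf");           /* say, attack, attack, defend, flee */
        play_macro(0, demo_handler);       /* what pressing Ctrl-1 would trigger */
        return 0;
    }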

A record of sorts of what has happened during the game would be nice. The chevron in Wizardry and the origin in Phantasie are the most I’ve ever seen done with this. How about a screen that told me I had 93 sessions, 4 divine interventions (restore backup), completed 12 quests, raised characters from the dead 47 times, and killed 23,472 monsters? Cute, huh?

Another crazily prescient proposal. These sorts of meta-textual status screens would become commonplace in CRPGs in later years. In this case, though, “later years” means much later. Thus, rather than speculating on whether he actively drove the genre’s future innovations, we can credit Irby this time merely with predicting them.

One last suggestion for the manufacturers: if you want that little card you put in each box back, offer me something I want. For example, give me a list of all the other nuts in my area code who have purchased this game and returned their little cards.

Enough of this, Wasteland is waiting.


With some exceptions — the last suggestion, for instance, would be a privacy violation that would make even the NSA raise an eyebrow — I agree with most of Irby’s positive suggestions, just as I do with his complaints. It strikes me as I read through his letter that my own personal favorite among 8-bit CRPGs, Pool of Radiance, manages to avoid most of Irby’s pitfalls while implementing much from his list of desirable features — further confirmation of just what remarkable pieces of work that game and, to an only slightly lesser extent, its sequel Curse of the Azure Bonds really were. I hope Wes Irby got a chance to play them.

I have less to say about the second letter I’d like to share with you, and will thus present it without in-line commentary. This undated letter was sent directly to Interplay by its writer: Thomas G. Gutheil, an associate professor at the Harvard Medical School Department of Psychiatry, on whose letterhead it’s written. Its topic is The Bard’s Tale II: The Destiny Knight, a game I’ve written about only in passing but one with some serious design problems in the form of well-nigh insoluble puzzles. Self-serving though it may be, I present Gutheil’s letter to you today as one more proof that players did notice the things that were wrong with games back in the day — and that my perspective on them today therefore isn’t an entirely anachronistic one. More importantly, Gutheil’s speculations are still some of the most cogent I’ve ever seen on how bad puzzles make their way into games in the first place. For this reason alone, it’s eminently worthy of being preserved for posterity.


I am writing you a combination fan letter and critique in regard to the two volumes of The Bard’s Tale, of which I am a regular and fanatic user.

First, the good news: this is a TERRIFIC game, and I play it with addictive intensity, approximately an hour almost every day. The richness of the graphics, the cute depictions of the various characters, monsters, etc., and rich complexity and color of the mazes, tasks, issues, as well as the dry wit that pervades the program, make it a superb piece and probably the best maze-type adventure product on the market today. I congratulate you on this achievement.

Now, the bad news: the one thing I feel represents a defect in your program (and I only take your time to comment on it because it is so central) and one which is perhaps the only area where the Wizardry series (of which I am also an avid player and expert) is superior, is the notion of the so-called puzzles, a problem which becomes particularly noticeable in the “snares of death” in the second scenario. In all candor, speaking as an old puzzle taker and as a four-time grand master of the Boston Phoenix Puzzle Contest, I must say that these puzzles are simply too personal and idiosyncratic to be fair to the player. I would imagine you are doing a booming business in clue books since many of the puzzles are simply not accomplishable otherwise without hours of frustrating work, most of it highly speculative.

Permit me to try to clarify this point, since I am aware of the sensitive nature of these comments, given that I would imagine you regard the puzzles as being the “high art” of the game design. There should be an organic connection between the clues and the puzzles. For example, in Wizardry (sorry to plug the competition), there is a symbolic connection between the clue and its function. As one simplistic example, at the simplest level a bear statuette gets you through a gate guarded by a bear, a key opens a particular door, and a ship-in-a-bottle item gets you across an open expanse of water.

Let me try to contrast this with some of the situations in your scenarios. You may recall that in one of the scenarios the presence of a “winged one” in the party was necessary to get across a particular chasm. The Winged One introduces himself to the party as one of almost a thousand individual wandering creatures that come and offer to join the party, to be attacked, or to be left in peace. This level of dilution and the failure to separate out the Winged One in some way makes it practically unrecallable much later on when you need it, particularly since there are several levels of dungeon (and in real life perhaps many interposing days and weeks) between the time you meet the Winged One (who does not stand out among the other wandering characters in any particular way) and the time you actually need him. Even if (as I do) you keep notes, there would be no particular reason to record this creature out of all. Moreover, to have this added character stuck in your party for long periods of time, when you could instead have the many-times more effective demons, Kringles, and salamanders, etc., would seem strategically self-defeating and therefore counter-intuitive for the normal strategy of game play AS IT IS ACTUALLY PLAYED.

This is my point: in many ways your puzzles in the scenarios seem to have been designed by someone who is not playing the game in the usual sequence, but designed as it were from the viewpoint of the programmer, who looks at the scenario “from above” — that is, from omniscient knowledge. In many situations the maze fails to take into account the fact that parties will not necessarily explore the maze in the predictable direct sequence you have imagined. The flow of doors and corridors does not appropriately guide a player so that they will take the puzzles in a meaningful sequence. Thus, when one gets a second clue before a first clue, only confusion results, and it is rarely resolved as the play advances.

Every once in a while you do catch on, and that is when something like the rock-scissors-paper game is invoked in your second scenario. That’s generally playing fair, although not everyone has played that game or would recognize it in the somewhat cryptic form in which it is presented. Thus the player does not gain the satisfaction of use of intellect in problem solving; instead, it’s the frustration of playing “guess what I’m thinking” with the author.

Despite all of the above criticism, the excitement and the challenge of playing the game still make it uniquely attractive; as you have no doubt caught on, I write because I care. I have had to actively fight the temptation to simply hack my way through the “snares of death” by direct cribbing from the clue books, so that I could get on to the real interest of the game, which is working one’s way through the dungeons and encountering the different items, monsters, and challenges. I believe that this impatience with the idiosyncratic (thus fundamentally unfair) design of these puzzles represents an impediment, and I would be interested to know if others have commented on this. Note that it doesn’t take any more work for the programmer, but merely a shift of viewpoint to make the puzzles relevant and fair to the reader and also proof against being taken “out of order,” which largely confuses the meaning. A puzzle that is challenging and tricky is fair; a puzzle that is idiosyncratically cryptic may not be.

Thank you for your attention to this somewhat long-winded letter; it was important to me to write. Given how much I care for this game and how devoted I am to playing it and to awaiting future scenarios, I wanted to call your attention to this issue. You need not respond personally, but I would of course be interested in any of your thoughts on this.


I conclude this article as a whole by echoing Gutheil’s closing sentiments; your feedback is the best part of writing this blog. I hope you didn’t find my musings on the process of doing history too digressive, and most of all I hope you found Wes Irby and Thomas Gutheil’s all too rare views from the trenches as fascinating as I did.

 


From Wingleader to Wing Commander

No one at Origin had much time to bask in the rapturous reception accorded to Wingleader at the 1990 Summer Consumer Electronics Show. Their end-of-September deadline for shipping the game was now barely three months away, and there remained a daunting amount of work to be done.

At the beginning of July, executive producer Dallas Snell called the troops together to tell them that crunch time was beginning in earnest; everyone would need to work at least 55 hours per week from now on. Most of the people on the project only smiled bemusedly at the alleged news flash. They were already working those kinds of hours, and knew all too well that a 55-hour work week would probably seem like a part-timer’s schedule before all was said and done.

Dallas Snell

At the beginning of August, Snell unceremoniously booted Chris Roberts, the project’s founder, from his role as co-producer, leaving him with only the title of director. Manifesting a tendency anyone familiar with his more recent projects will immediately recognize, Roberts had been causing chaos on the team by approving seemingly every suggested addition or enhancement that crossed his desk. Snell, the brutal pragmatist in this company full of dreamers, appointed himself as Warren Spector’s new co-producer. His first action was to place a freeze on new features in favor of getting the game that currently existed finished and out the door. Snell:

The individuals in Product Development are an extremely passionate group of people, and I love that. Everyone is here because, for the most part, they love what they’re doing. This is what they want to do with their lives, and they’re very intense about it and very sensitive to your messing around with what they’re trying to accomplish. They don’t live for getting it done on time or having it make money. They live to see this effect or that effect, their visions, accomplished.

It’s always a continual antagonistic relationship between the executive producer and the development teams. I’m always the ice man, the ogre, or something. It’s not fun, but it gets the products done and out. I guess that’s why I have the room with the view. Anyway, at the end of the project, all of Product Development asked me not to get that involved again.

One problem complicating Origin’s life enormously was the open architecture of MS-DOS, this brave new world they’d leaped into the previous year. Back in the Apple II days, they’d been able to write their games for a relatively static set of hardware requirements, give or take an Apple IIGS running in fast mode or a Mockingboard sound card. The world of MS-DOS, by contrast, encompassed a bewildering array of potential hardware configurations: different processors, different graphics and sound cards, different mice and game controllers, different amounts and types of memory, different floppy-disk formats, different hard-disk capacities. For a game like Wingleader, which surfed the bleeding edge of all this technology while trying at the same time to offer at least a modicum of playability on older setups, all of this variance was the stuff of nightmares. Origin’s testing department was working 80-hour weeks by the end, and, as we’ll soon see, the final result would still leave plenty to be desired from a quality-control perspective.

As the clock was ticking down toward release, Origin’s legal team delivered the news that it probably wouldn’t be a good idea after all to call the game Wingleader — already the company’s second choice for a name — thanks to a number of existing trademarks on the similar “Wingman.” With little time to devote to yet another naming debate, Origin went with their consensus third choice of Wing Commander, which had lost only narrowly to Wingleader in the last vote. This name finally stuck. Indeed, today it’s hard to imagine Wing Commander under any other name.

The game was finished in a mad frenzy that stretched right up to the end; the “installation guide” telling how to get it running was written and typeset from scratch in literally the last five hours before the whole project had to be packed into a box and shipped off for duplication. That accomplished, everyone donned their new Wing Commander baseball caps and headed out to the front lawn for Origin’s traditional ship-day beer bash. There Robert Garriott climbed onto a picnic table to announce that all of Chris Roberts’s efforts in creating by far the most elaborate multimedia production Origin had ever released had been enough to secure him, at long last, an actual staff job at the company. “As of 5 P.M. this afternoon,” said Garriott, “Chris is Origin’s Director of New Technologies. Congratulations, Chris, and welcome to the Origin team.” The welcome was, everyone had to agree, more than a little belated.

We’ll turn back to Roberts’s later career at Origin in future articles. At this point, though, this history of the original Wing Commander must become the story of the people who played it rather than that of the people who created it. And, make no mistake, play it people did. Gamers rushed to embrace what had ever since that Summer CES show been the most anticipated title in the industry. Roberts has claimed that Wing Commander sold 100,000 copies in its first month, a figure that would be ridiculous if applied to just about any other computer game of the era, but which might just be ridiculous enough to be true in the case of Wing Commander. While hard sales figures for the game or the franchise it would spawn have never to my knowledge been made public, I can feel confident enough in saying that sales of the first Wing Commander soared into the many, many hundreds of thousands of units. The curse of Ultima was broken; Origin didn’t just have a game which had become a hit in spite of Ultima‘s long shadow, they had a game which threatened to do the unthinkable — to overshadow Ultima in their product catalog. Certainly all indications are that Wing Commander massively outsold Ultima VI, possibly by a factor of two to one or more. It would take a few years, until the release of Doom in 1993, for any other name to begin to challenge that of Wing Commander as the most consistent money spinner in American computer gaming.

But why should that have been? Why should this particular game of all others have become such a sensation? Part of the reason must be serendipitous timing. During the 1990s as in no decade before or since, the latest developments in hardware would drive sales of games that could show them off to best effect, and Wing Commander set the stage for this trend. Released at a time when 80386-based machines with expanded memory, sound cards, and VGA graphics were just beginning to enter American homes in numbers, Wing Commander took advantage of all those things like no other game on the market. It benefited enormously from this singularity among those who already owned the latest hardware setups, while causing yet many more jealous gamers who hadn’t heretofore seen a need to upgrade to invest in hot machines of their own — the kind of virtuous circle to warm any capitalist’s heart.

Yet there was also something more going on with Wing Commander than just a cool-looking game for showing off the latest hardware, else it would have suffered the fate of the slightly later bestseller Myst: that of being widely purchased but very rarely played in earnest. Unlike the coolly cerebral Myst, Wing Commander was a crowd-pleaser from top to bottom, with huge appeal, even beyond its spectacular audiovisuals, to anyone who had ever thrilled to the likes of a Star Wars film. It was, in other words, computerized entertainment for the mainstream rather than for a select cognoscenti. Just as all but the most incorrigible snobs could have a good time at a Star Wars showing, few gamers of any stripe could resist the call of Wing Commander. In an era when the lines of genre were being drawn more and more indelibly, one of the most remarkable aspects of Wing Commander‘s reception is the number of genre lines it was able to cross. Whether they normally preferred strategy games or flight simulators, CRPGs or adventures, everybody wanted to play Wing Commander.

At a glance, Chris Roberts’s gung-ho action movie of a game would seem to be rather unsuited for the readership of Computer Gaming World, a magazine that had been born out of the ashes of the tabletop-wargaming culture of the 1970s and was still beholden most of all to computer games in the old slow-paced, strategic grognard tradition. Yet the magazine and its readers loved Wing Commander. In fact, they loved Wing Commander as they had never loved any other game before. After reaching the number-one position in Computer Gaming World‘s readers’ poll in February of 1991, it remained there for an unprecedented eleven straight months, attaining already in its second month on top the highest aggregate score ever recorded for a game. When it was finally replaced at number one in January of 1992, the replacement was none other than the new Wing Commander II. Wing Commander I then remained planted right there behind its successor at number two until April, when the magazine’s editors, needing to make room for other games, felt compelled to “retire” it to their Hall of Fame.

In other places, the huge genre-blurring success of Wing Commander prompted an identity crisis. Shay Addams, adventure-game solver extraordinaire, publisher of the Questbusters newsletter and the Quest for Clues series of books, received so many requests to cover Wing Commander that he reported he had been “on the verge of scheduling a brief look” at it. But in the end, he had decided a little petulantly, it “is just a shoot-em-up-in-space game in which the skills necessary are vastly different from those required for completing a quest. (Then again, there is always the possibility of publishing Simulationbusters.)” The parenthetical may have sounded like a joke, but Addams apparently meant it seriously – or, at least, came to mean it seriously. The following year, he started publishing a sister newsletter to Questbusters called Simulations!. It’s hard to imagine him making such a decision absent the phenomenon that was Wing Commander.

So, there was obviously much more to Wing Commander than a glorified tech demo. If we hope to understand what its secret sauce might have been, we need to look at the game itself again, this time from the perspective of a player rather than a developer.

One possibility can be dismissed immediately. The “space combat simulation” part of the game — i.e., the game part of the game — is fun today and was graphically spectacular back in 1990, but it’s possessed of neither huge complexity nor the sort of tactical or strategic interest that would seem to be required of a title that hoped to spend eleven months at the top of the Computer Gaming World readers’ charts. Better graphics and embodied approach aside, it’s a fairly commonsense evolution of Elite‘s combat engine, complete with inertia and sounds in the vacuum of space and all the other space-fantasy trappings of Star Wars. If we hope to find the real heart of the game’s appeal, it isn’t here that we should look, but rather to the game’s fiction — to the movie Origin Systems built around Chris Roberts’s little shoot-em-up-in-space game.

Wing Commander casts you as an unnamed young pilot, square-jawed and patriotic, who has just been assigned to the strike carrier Tiger’s Claw, out on the front lines of humanity’s war against the vicious Kilrathi, a race of space-faring felines. (Cat lovers should approach this game with caution!) Over the course of the game, you fly a variety of missions in a variety of star systems, affecting the course of the wider war as you do so in very simple, hard-branching ways. Each mission is introduced via a briefing scene, and concluded, if you make it back alive, with a debriefing. (If you don’t make it back alive, you at least get the rare pleasure of watching your own funeral.) Between missions, you can chat with your fellow pilots and a friendly bartender in the Tiger’s Claw‘s officers’ lounge, play on a simulator in the lounge that serves as the game’s training mode, and keep track of your kill count along with that of the other pilots on the squadron blackboard. As you fly missions and your kill count piles up, you rise through the Tiger’s Claw‘s hierarchy from an untested rookie to the steely-eyed veteran on whom everyone else in your squadron depends. You also get the chance to fly several models of space-borne fighters, each with its own flight characteristics and weapons loadouts.

A mission briefing.

The inspirations for Wing Commander as a piece of fiction aren’t hard to find in either the game itself or the many interviews Chris Roberts has given about it over the years. Leaving aside the obvious influence of Star Wars on the game’s cinematic visuals, Wing Commander fits most comfortably into the largely book-bound sub-genre of so-called “military science fiction.” A tradition which has Robert Heinlein’s 1959 novel Starship Troopers as its arguable urtext, military science fiction is less interested in the exploration of strange new worlds, etc., than it is in the exploration of possible futures of warfare in space.

There isn’t much doubt where Wing Commander‘s historical inspiration lies.

Because worldbuilding is hard and extrapolating the nitty-gritty details of future modes of warfare is even harder, much military science fiction is built out of thinly veiled stand-ins for the military and political history of our own little planet. So, for example, David Weber’s long-running Honor Harrington series transports the Napoleonic Wars into space, while Joe Haldeman’s The Forever War — probably the sub-genre’s best claim to a work of real, lasting literary merit — is based largely on the author’s own experiences in Vietnam. Hewing to this tradition, Wing Commander presents a space-borne version of the grand carrier battles which took place in the Pacific during World War II — entirely unique events in the history of human warfare and, as this author can well attest, sheer catnip to any young fellow with a love of ships and airplanes and heroic deeds and things that go boom. Wing Commander shares this historical inspiration with another of its obvious fictional inspirations, the fun if terminally cheesy 1978 television series Battlestar Galactica. (Come to think of it, much the same description can be applied to Wing Commander.)

Sparkling conversationalists these folks aren’t.

Wing Commander is also like Battlestar Galactica in another respect: it’s not so much interested in constructing a detailed technological and tactical framework for its vision of futuristic warfare — leave that stuff to the books! — as it is in choosing whatever thing seems coolest at any given juncture. We know nothing really about how or why any of the stuff in the game works, just that it’s our job to go out and blow stuff up with it. Nowhere is that failing, if failing it be, more evident than in the very name of the game. “Wing Commander” is a rank in the Royal Air Force and the air forces of various Commonwealth nations, denoting an officer in charge of several squadrons of aircraft. It’s certainly not an appropriate designation for the role you play here, that of a rookie fighter pilot who commands only a single wingman. This Wing Commander is called Wing Commander strictly because it sounds cool.

In time, Origin’s decision to start hiring people to serve specifically in the role of writer would have a profound effect on the company’s games, but few would accuse this game, one of Origin’s first with an actual, dedicated “lead writer,” of being deathless fiction. To be fair to Jeff George, it does appear that he spent the majority of his time drawing up the game’s 40 missions, serving in a role that would probably be dubbed “scenario designer” or “level designer” today rather than “writer.” And it’s not as if Chris Roberts’s original brief gave him a whole lot to work with. This is, after all, a game where you’re going to war against a bunch of anthropomorphic house cats. (Our cat told me she thought about conquering the galaxy once or twice, but she wasn’t sure she could fit it into the three hours per day she spends awake.) The Kilrathi are kind of… well, there’s just no getting around it, is there? The whole Kilrathi thing is pretty stupid, although it does allow your fellow pilots to pile on epithets like “fur balls,” “fleabags,” and, my personal favorite, “Killie-cats.”

Said fellow pilots are themselves a collection of ethnic stereotypes so over-the-top that they would verge on the offensive if it weren’t so obvious that Origin just didn’t have a clue. Spirit is Japanese, so of course she suffixes every name with “-san” or “-sama” even when speaking English, right? And Angel is French, so of course she says “bonjour” a lot, right? Right?

My second favorite Wing Commander picture comes from the manual rather than the game proper. Our cat would look precisely this bitchy if I shoved her into a spacesuit.

Despite Chris Roberts’s obvious and oft-stated desire to put you into an interactive movie, there’s little coherent narrative arc to Wing Commander, even by action-movie standards. Every two to four missions, the Tiger’s Claw jumps to some other star system and some vague allusion is made to the latest offensive or defensive operation, but there’s nothing to really hang your hat on in terms of a clear unfolding narrative of the war. A couple of cut scenes do show good or bad events taking place elsewhere, based on your performance in battle — who knew one fighter pilot could have so much effect on the course of a war? — but, again, there’s just not enough detail to give a sense of the strategic situation. One has to suspect that Origin didn’t know what was really going on any better than the rest of us.

My favorite Wing Commander pictures, bar none. What I love best about these and the picture above is the ears on the helmets. And what I love best about the ears on the helmets is that there’s no apparent attempt to be cheeky or funny in placing them there. (One thing this game is totally devoid of is deliberate humor. Luckily, there’s plenty of non-deliberate humor to enjoy.) Someone at Origin said, “Well, they’re cats, so they have to have space in their helmets for their ears, right?” and everyone just nodded solemnly and went with it. If you ask me, nothing illustrates Wing Commander‘s charming naivete better than this.

In its day, Wing Commander was hugely impressive as a technological tour de force, but it’s not hard to spot the places where it really suffered from the compressed development schedule. There’s at least one place, for example, where your fellow pilots talk about an event that hasn’t actually happened yet, presumably due to last-minute juggling of the mission order. More serious are the many and varied glitches that occur during combat, from sound drop-outs to the occasional complete lock-up. Most bizarrely of all to our modern sensibilities, Origin didn’t take the time to account for the speed of the computer running the game. Wing Commander simply runs flat-out all the time, as fast as the hosting computer can manage. This delivered a speed that was just about perfect on a top-of-the-line 80386-based machine of 1990, but that made it effectively unplayable on the next generation of 80486-based machines that started becoming popular just a couple of years later; this game was definitely not built with any eye to posterity. Wing Commander would wind up driving the development of so-called “slowdown” programs that throttled back later hardware to keep games like this one playable.
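The missing ingredient was a pacing loop of the sort that soon became standard practice. A minimal sketch in C, using nothing fancier than the standard clock() function, might look like the following; the target frame rate is an arbitrary assumption, and this illustrates the general technique rather than anything Origin or the later slowdown utilities actually wrote.

    #include <time.h>

    #define TARGET_FPS 18.0     /* assumed target; pick whatever the game is tuned for */

    /* Busy-wait until the current frame has lasted at least 1/TARGET_FPS seconds.
       A real game would poll input or sleep here instead of spinning. */
    static void wait_for_next_frame(clock_t frame_start)
    {
        const double frame_len = 1.0 / TARGET_FPS;
        while ((double)(clock() - frame_start) / CLOCKS_PER_SEC < frame_len)
            ;   /* spin */
    }

    int main(void)
    {
        for (int frame = 0; frame < 100; frame++) {
            clock_t start = clock();
            /* update_world(); render();  -- the game's real work would go here */
            wait_for_next_frame(start);
        }
        return 0;
    }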

Still, even today Wing Commander remains a weirdly hard nut to crack in this respect. For some reason, presumably involving subtle differences between real and emulated hardware, it’s impossible to find an entirely satisfactory speed setting for the game in the DOSBox emulator. A setting which seems perfect when flying in open space slows down to a crawl in a dogfight; a setting which delivers a good frame rate in a dogfight is absurdly fast when fewer other ships surround you. The only apparent solution to the problem is to adjust the DOSBox speed settings on the fly as you’re trying not to get shot out of space by the Kilrathi — or, perhaps more practically, to just find something close to a happy medium and live with it. One quickly notices when reading about Wing Commander the wide variety of opinions about its overall difficulty, from those who say it’s too easy to those who say it’s way too hard to those who say it’s just right. I wonder whether this disparity is down to the fact that, thanks to the lack of built-in throttling, everyone is playing a slightly different version of the game.
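For those fighting this battle in DOSBox today, the relevant knob lives in the [cpu] section of the dosbox.conf file, and the default Ctrl-F11 and Ctrl-F12 hotkeys nudge the cycle count down and up while the game is running. A starting point might look like the snippet below; the specific numbers are only guesses to experiment from, since, as noted above, no single value seems to suit every situation in the game.

    [cpu]
    core=normal
    cputype=auto
    # A fixed cycle count behaves more predictably for Wing Commander than
    # cycles=auto or cycles=max; the value itself is a guess to tune by ear.
    cycles=fixed 7000
    # Ctrl-F12 raises and Ctrl-F11 lowers the count by these increments.
    cycleup=500
    cycledown=500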

The only thing worse than being a cat lover in this game is being a pacifist. And everyone knows cats don’t like water, Shotglass… sheesh.

It becomes clear pretty quickly that the missions are only of a few broad types, encompassing patrols, seek-and-destroy missions, and escort missions (the worst!), but the context provided by the briefings keeps things more interesting than they might otherwise be, as does the variety of spacecraft you get to fly and fight against. The mission design is pretty good, although the difficulty does ebb and spike a bit more than it ideally might. In particular, one mission found right in the middle of the game — the second Kurosawa mission, for those who know the game already — is notorious for being all but impossible. Chris Roberts has bragged that the missions in the finished game “were exactly the ones that Jeff George designed on paper — we didn’t need to do any balancing at all!” In truth, I’m not sure the lack of balancing isn’t a bug rather than a feature.

Um, yes. I’m standing here, aren’t I? Should this really be a judgment call?

Roberts’s decision to allow you to take your lumps and go on even when you fail at a mission was groundbreaking at the time. Yet, having made this very progressive decision, he then proceeded to implement it in the most regressive way imaginable. When you fail in Wing Commander, the war as a whole goes badly, thanks again to that outsize effect you have upon it, and you get punished by being forced to fly against even more overwhelming odds in inferior fighters. Imagine, then, what it’s like to play Wing Commander honestly, without recourse to save games, as a brand new player. Still trying to get your bearings as a rookie pilot, you don’t perform terribly well in the first two or three missions. In response, your commanding officer delivers a constant drumbeat of negative feedback, while the missions just keep getting harder and harder at what feels like an almost exponential pace, ensuring that you continue to suck every time you fly. By the time you’ve failed at 30 missions and your ineptitude has led to the Tiger’s Claw being chased out of the sector with its (striped?) tail between its legs, you might just need therapy to recover from the experience.

What ought to happen, of course, is that failing at the early missions should see you assigned to easier rather than harder ones that give you a chance to practice your skills — no matter the excuse; Origin could make something up on the fly, as they so obviously did with so much of the game’s fiction. Experienced, hardcore players could still have their fun by trying to complete the game in as few missions as possible, while newcomers wouldn’t have to feel like battered spouses. Or, if such an elegant solution wasn’t possible, Origin could at least have given us player-selectable difficulty levels.
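To make the alternative concrete, here is a toy sketch in C of a mission table with separate success and failure branches, in which failure routes you to an easier mission rather than a harder one. The mission names and difficulty numbers are entirely made up; this is the shape of the fix, not a reconstruction of Wing Commander’s actual mission tree.

    #include <stdio.h>

    /* A toy branching campaign: each mission names its successor for a win and
       for a loss. Routing losses to an easier remedial mission lets a struggling
       rookie recover instead of spiraling into ever-harder assignments. */
    struct Mission {
        const char *name;
        int difficulty;        /* 1 = milk run, 5 = near-suicide */
        int on_success;        /* index of the next mission after a win */
        int on_failure;        /* index of the next mission after a loss */
    };

    static const struct Mission campaign[] = {
        { "Patrol Alpha",    2, 1, 2 },
        { "Strike Beta",     4, 3, 2 },
        { "Milk Run",        1, 1, 2 },   /* the easier remedial mission */
        { "Final Assault",   5, 3, 2 },
    };

    int main(void)
    {
        int current = 0;
        int won[] = { 0, 1, 1, 1 };        /* a rookie fumbles the opener, then recovers */
        for (int i = 0; i < 4; i++) {
            const struct Mission *m = &campaign[current];
            printf("Flying %s (difficulty %d)\n", m->name, m->difficulty);
            current = won[i] ? m->on_success : m->on_failure;
        }
        return 0;
    }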

As it is, the only practical way to play as a newcomer is to ignore all of Origin’s exhortations to play honestly and just keep reloading until you successfully complete each mission; only in this way can you keep the escalating difficulty manageable. (The one place where I would recommend that you take your lumps and continue is in the aforementioned second Kurosawa mission. Losing here will throw you briefly off-track, but the missions that follow aren’t too difficult, and it’s easier to play your way to victory through them than to try to beat Mission Impossible.) This approach, it should be noted, drove Chris Roberts crazy; he considered it nothing less than a betrayal of the entire premise around which he’d designed his game. Yet he had only himself to blame. Like much in Wing Commander, the discrepancy between the game Roberts intended to design and the one he actually designed speaks to the lack of time to play it extensively before its release, and thereby to shake all these problems out.

And yet. And yet…

Having complained at such length about Wing Commander, I find myself at something of an impasse, in that my overall verdict on the game is nowhere near as negative as these complaints would imply. It’s not even a case of Wing Commander being, like, say, most of the Ultima games, a groundbreaking work in its day that’s a hard sell today. No, Wing Commander is a game I continue to genuinely enjoy despite all its obvious problems.

In writing about all these old games over the years, I’ve noticed that those titles I’d broadly brand as classics and gladly recommend to contemporary players tend to fall into two categories. There are games like, say, The Secret of Monkey Island that know exactly what they’re trying to do and proceed to do it all almost perfectly, making all the right choices; it’s hard to imagine how to improve these games in any but the tiniest of ways within the context of the technology available to their developers. And then there are games like Wing Commander that are riddled with flaws, yet still manage to be hugely engaging, hugely fun, almost in spite of themselves. Who knows, perhaps trying to correct all the problems I’ve spent so many words detailing would kill something ineffably important in the game. Certainly the many sequels and spinoffs to the original Wing Commander correct many of the failings I’ve described in this article, yet I’m not sure any of them manage to be a comprehensively better game. Like so many creative endeavors, game design isn’t a zero-sum game. Much as I loathe the lazy critic’s cliché “more than the sum of its parts,” it feels hard to avoid it here.

It’s true that many of my specific criticisms have an upside to serve as a counterpoint. The fiction may be giddy and ridiculous, but it winds up being fun precisely because it’s so giddy and ridiculous. This isn’t a self-conscious homage to comic-book storytelling of the sort we see so often in more recent games from this Age of Irony of ours. No, this game really does think this stuff it’s got to share with you is the coolest stuff in the world, and it can’t wait to get on with it; it lacks any form of guile just as much as it does any self-awareness. In this as in so many other senses, Wing Commander exudes the personality of its creator, helps you to understand why it was that everyone at Origin Systems so liked to have this high-strung, enthusiastic kid around them. There’s an innocence about the game that leaves one feeling happy that Chris Roberts was steered away from his original plans for a “gritty” story full of moral ambivalence; one senses that he wouldn’t have been able to do that anywhere near as well as he does this. Even the Kilrathi enemies, silly as they are, take some of the sting out of war; speciesist though the sentiment may be, at least it isn’t people you’re killing out there. Darned if the fiction doesn’t win me over in the end with its sheer exuberance, all bright primary emotions to match the bright primary colors of the VGA palette. Sometimes you’re cheering along with it, sometimes you’re laughing at it, but you’re always having a good time. The whole thing is just too gosh-darned earnest to annoy me like most bad writing does.

Even the rogue’s gallery of ethnic stereotypes that is your fellow pilots doesn’t grate as much as it might. Indeed, Origin’s decision to include lots of strong, capable women and people of color among the pilots should be applauded. Whatever else you can say about Wing Commander, its heart is almost always in the right place.

Winning a Golden Sun for “surviving the destruction of my ship.” I’m not sure, though, that “sacrificing my vessel” was really an act of bravery, under the circumstances. Oh, well, I’ll take whatever hardware they care to give me.

One thing Wing Commander understands very well is the value of positive reinforcement — the importance of, as Sid Meier puts it, making sure the player is always the star of the show. In that spirit, the kill count of even the most average player will always advance much faster on the squadron’s leader board than that of anyone else in the squadron. As you play through the missions, you’re given promotions and occasionally medals, the latter delivered amidst the deafening applause of your peers in a scene lifted straight from the end of the first Star Wars film (which was in turn aping the Nuremberg Rally shown in Triumph of the Will, but no need to think too much about that in this giddy context). You know at some level that you’re being manipulated, just as you know the story is ridiculous, but you don’t really care. Isn’t this feeling of achievement a substantial part of the reason that we play games?

Another thing Wing Commander understands — or perhaps stumbled into accidentally thanks to the compressed development schedule — is the value of brevity. Thanks to the tree structure that makes it impossible to play all 40 missions on any given run-through, a typical Wing Commander career spans no more than 25 or 30 missions, most of which can be completed in half an hour or so, especially if you use the handy auto-pilot function to skip past all the point-to-point flying and just get to the places where the shooting starts. (Personally, I prefer the more organic feel of doing all the flying myself, but I suspect I’m a weirdo in this as in so many other respects.) The relative shortness of the campaign means that the game never threatens to run the flight engine’s rather limited box of tricks into the ground. It winds up leaving you wanting more rather than trying your patience. For all these reasons, and even with all its obvious problems, technical and otherwise, Wing Commander remains good fun today.

None of which means, of course, that any self-respecting digital antiquarian can afford to neglect its importance to gaming history. The first blockbuster of the 1990s and the most commercially dominant franchise in computer gaming until Doom arrived in 1993 to shake everything up yet again, Wing Commander can be read as either cause or symptom of the changing times. There was a sense even in 1990 that Wing Commander’s arrival, coming so appropriately at the beginning of a new decade, marked a watershed moment, and time has only strengthened that impression. Chris Crawford, this medium’s eternal curmudgeon — every creative field needs one to serve as a corrective to the hype-merchants — has accused Wing Commander of nothing less than ruining the culture of gaming for all time. By raising the bar so high on ludic audiovisuals, runs his argument, Wing Commander dramatically increased the financial investment necessary to produce a competitive game. This in turn made publishers, reluctant to risk all that capital on anything but a sure bet, more conservative in the sorts of projects they were willing to approve, causing more experimental games with only niche appeal to disappear from the market. “It became a hit-driven industry,” Crawford says. “The whole marketing strategy, economics, and everything changed, in my opinion, much for the worse.”

There’s some truth to this assertion, but it’s also true that publishers had been growing more conservative and budgets had been creeping upward for years before Wing Commander. By 1990, Infocom’s literary peak was years in the past, as were Activision’s experimental period and Electronic Arts’s speculations on whether computers could make you cry. In this sense, then, Wing Commander can be seen as just one more point on a trend line, not the dramatic break which Crawford would claim it to be. Had it not come along when it did to raise the audiovisual bar, something else would have.

Where Wing Commander does feel like a cleaner break with the past is in its popularizing of the use of narrative in a traditionally non-narrative-driven genre. This, I would assert, is the real source of the game’s appeal, then and now. The shock and awe of seeing the graphics and hearing the sound and music for the first time inevitably faded even back in the day, and today of course the whole thing looks garish and a little kitschy with those absurdly big pixels. And certainly the space-combat game alone wasn’t enough to sustain obsessive devotion back in the day, while today the speed issues can at times make it more than a little exasperating to actually play Wing Commander at all. But the appeal of, to borrow from Infocom’s old catch-phrase, waking up inside a story — waking up inside a Star Wars movie, if you like — and being swept along on a rollicking, semi-interactive ride is, it would seem, eternal. It may not have been the reason most people bought Wing Commander in the early 1990s — that had everything to do with those aforementioned spectacular audiovisuals — but it was the reason they kept playing it, the reason it remained the single best computer game in the country according to Computer Gaming World’s readers for all those months. Come for the graphics and sound, stay for the story. The ironic aspect of all this is that, as I’ve already noted, Wing Commander’s story barely qualified as a story at all by the standards of conventional fiction. Yet, underwhelming though it was on its own merits, it worked more than well enough in providing structure and motivation for the individual missions.

The clearest historical antecedent to Wing Commander must be the interactive movies of Cinemaware, which had spent the second half of the 1980s trying, with somewhat mixed success, to combine cinematic storytelling with modes of play that departed from traditional adventure-game norms. John Cutter, a designer at Cinemaware, has described how Bob Jacob, the company’s founder and president, reacted to his first glimpse of Wing Commander: “I don’t think I’ve ever seen him look so sad.” With his company beginning to fall apart around him, Jacob had good reason to feel sad. He least of all would have imagined Origin Systems — they of the aesthetically indifferent CRPG epics — as the company that would carry the flag of cinematic computer gaming forward into the new decade, but the proof was right there on the screen in front of him.

There are two accounts, both of them true in their way, to explain how the adventure game, a genre that in the early 1990s was perhaps the most vibrant and popular in computer gaming, ended the decade an irrelevancy to gamers and publishers alike. One explanation, which I’ve gone into a number of times already on this blog, focuses on a lack of innovation and, most of all, a lack of good design practices among far too many adventure developers; these lacks left the genre identified primarily with unfun pixel hunts and illogical puzzles in the minds of far too many players. But another, more positive take on the subject says that adventure games never really went away at all: their best attributes were rather merged into other genres. Did adventure games disappear or did they take over the world? As in so many cases, the answer depends on your perspective. If you focus on the traditional mechanics of adventure games — exploring landscapes and solving puzzles, usually non-violently — as their defining attributes, the genre did indeed go from thriving to all but dying in the course of about five years. If, on the other hand, you choose to see adventure games more broadly as games where you wake up inside a story, it can sometimes seem like almost every game out there today has become, whatever else it is, an adventure game.

Wing Commander was the first great proof that many more players than just adventure-game fans love story. Players love the way a story can make them feel a part of something bigger as they play, and, more prosaically but no less importantly, they love the structure it can give to their play. One of the dominant themes of games in the 1990s would be the injection of story into genres which had never had much use for it before: the unfolding narrative of discovery built into the grand-strategy game X-COM, the campaign modes of the real-time-strategy pioneers Warcraft and Starcraft, the plot that gave meaning to all the shooting in Half-Life. All of these are among the most beloved titles of the decade, spawning franchises that remain more than viable to this day. One has to assume this isn’t a coincidence. “The games I made were always about narrative because I felt that was missing for me,” says Chris Roberts. “I wanted that sense of story and progression. I felt like I wasn’t getting that in games. That was one of my bigger drives when I was making games, was to get that, that I felt like I really wanted and liked from other media.” Clearly many others agreed.

(Sources: the books Wing Commander I and II: The Ultimate Strategy Guide by Mike Harrison and Game Design Theory and Practice by Richard Rouse III; Retro Gamer 59 and 123; Questbusters of July 1989, August 1990, and April 1991; Computer Gaming World of September 1989 and November 1992; Amiga Computing of December 1988. Online sources include documents hosted at the Wing Commander Combat Information Center, US Gamer‘s profile of Chris Roberts, The Escapist‘s history of Wing Commander, Paul Dean’s interview with Chris Roberts, and Matt Barton’s interview with George “The Fat Man” Sanger. Last but far from least, my thanks to John Miles for corresponding with me via email about his time at Origin, and my thanks to Casey Muratori for putting me in touch with him.

Wing Commander I and II can be purchased in a package together with all of their expansion packs from GOG.com.)
