
Eliza, Part 3

The most obvious legacy of Eliza is the legions of similar chatterbots which have followed, right up to the present day. But what does Eliza mean to the history of interactive narrative? Or, put another way: why did I feel the need to backtrack and shoehorn it in now?

One answer is kind of blindingly obvious. When someone plays Eliza she enters into a text-based dialog with a computer program. Remind you of something? Indeed, if one took just a superficial glance at an Eliza session and at a session of Adventure one might assume that both programs are variations on the same premise. This is of course not the case; while Eliza is “merely” a text-generation engine, with no deeper understanding, Adventure and its descendants allow the player to manipulate a virtual world through textual commands, and so cannot get away with pretending to understand the way that Eliza can. Still, it’s almost certain that Will Crowther would have been aware of Eliza when he began to work on Adventure, and its basic mode of interaction may have influenced him. Lest I be accused of stretching Eliza‘s influence too far, it’s also true that almost all computer / human interaction of the era was in the form of a textual dialog; command-line interfaces ruled the day, after all. The really unique element shared by Eliza and Adventure was the pseudo-natural-language form of that interaction. Just on that basis Eliza stands as an important forerunner to full-fledged interactive fiction.

But to just leave it at that, as I’m afraid I kind of did when I wrote my little history of IF a number of years ago now, is to miss most of what makes Eliza such a fascinating study. At a minimum, the number of scholars who have been drawn to Eliza despite having little or no knowledge of or interest in its place in the context of IF history points to something more. Maybe we can tease out what that might be by looking at Eliza‘s initial reception, and at Joseph Weizenbaum’s reaction to it.

Perhaps the first person to interact extensively with Eliza was Weizenbaum’s secretary: “My secretary, who had watched me work on the program for many months and therefore surely knew it to be merely a computer program, started conversing with it. After only a few interchanges with it, she asked me to leave the room.” Her reaction was not unusual; Eliza became something of a sensation at MIT and the other university campuses to which it spread, and Weizenbaum an unlikely minor celebrity. Mostly people just wanted to talk with Eliza, to experience this rare bit of approachable fun in a mid-1960s computing world that was all Business (IBM) or Quirky Esoterica (the DEC hackers). Some, however, treated the program with a seriousness that seems a bit baffling today. There were even suggestions that it might be useful for actual psychotherapy. Carl Sagan, later of Cosmos fame, was a big fan of this rather horrifying idea, which a trio of authors consisting of a psychiatrist, a computer scientist, and a statistician actually managed to get published as a serious article in The Journal of Nervous and Mental Disease:

Further work must be done before the program will be ready for clinical use. If the method proves beneficial, then it would provide a therapeutic tool which can be made widely available to mental hospitals and psychiatric centers suffering a shortage of therapists. Because of the time-sharing capabilities of modern and future computers, several hundred patients an hour could be handled by a computer system designed for this purpose. The human therapist, involved in the design and operation of the system, would not be replaced, but would become a much more efficient man since his efforts would no longer be limited to the one-to-one patient-therapist ratio as now exists.

Weizenbaum’s reaction to all of this has become almost as famous as the Eliza program itself. When he saw people like his secretary engaging in lengthy heart-to-hearts with Eliza, it… well, it freaked him the hell out. The phenomenon Weizenbaum was observing was later dubbed “the Eliza effect” by Sherry Turkle, which she defined as the tendency “to project our feelings onto objects and to treat things as though they were people.” In computer science and new media circles, the Eliza effect has become shorthand for a user’s tendency to assume based on its surface properties that a program is much more sophisticated, much more intelligent, than it really is. Weizenbaum came to see this as not just personally disturbing but as dangerous to the very social fabric, an influence that threatened the ties that bind us together and, indeed, potentially threatened our very humanity. Weizenbaum’s view, in stark contrast to those of people like Marvin Minsky and John McCarthy at MIT’s own Artificial Intelligence Laboratory, was that human intelligence, with its affective, intuitive qualities, could never be duplicated by the machinery of computing — and that we tried to do so at our peril. Ten years on from Eliza, he laid out his ideas in his magnum opus, Computer Power and Human Reason, a strong push-back against the digital utopianism that dominated in many computing circles at the time.

Weizenbaum wrote therein of his students at MIT, which was of course all about science and technology. He said that they “have already rejected all ways but the scientific to come to know the world, and [they] seek only a deeper, more dogmatic indoctrination in that faith (although that word is no longer in their vocabulary).” He certainly didn’t make too many friends among the hackers when he described them like this:

Bright young men of disheveled appearance, often with sunken glowing eyes, can be seen sitting at computer consoles, their arms tensed and waiting to fire their fingers, already poised to strike, at the buttons and keys on which their attention seems to be as riveted as a gambler’s on the rolling dice. When not so transfixed, they often sit at tables strewn with computer printouts over which they pore like possessed students of a cabbalistic text. They work until they nearly drop, twenty, thirty hours at a time. Their food, if they arrange it, is brought to them: coffee, Cokes, sandwiches. If possible, they sleep on cots near the printouts. Their rumpled clothes, their unwashed and unshaven faces, and their uncombed hair all testify that they are oblivious to their bodies and the world in which they move.

Although Weizenbaum claimed to be basing this description at least to some extent on his own experiences of becoming too obsessed with his work, there’s some evidence that his antipathy for the hardcore hackers at MIT was already partially in place even before Eliza. It’s worth noting that Weizenbaum chose to write Eliza not on the hackers’ beloved DEC, but rather on a big IBM 7094 mainframe located in another part of MIT’s campus; according to Steven Levy, Weizenbaum had “rarely interacted with” the hardcore hacker contingent.

Still, I’m to a large degree sympathetic with Weizenbaum’s point of view. Having watched a parade of young men come through his classes who could recite every assembler opcode on the PDP but had no respect for or understanding of aesthetics, of history, of the simple good fellowship two close friends find over a bottle of wine, he pleads for balance, for a world where those with the knowledge to create and employ technology are also possessed of humanity and wisdom. It’s something we could use more of in our world of Facebook “friends” and Twitter “conversations.” I feel like Weizenbaum every time I wander over to Slashdot and its thousands of SLNs — Soulless Little Nerds, whose (non-videogame) cultural interests extend no further than Tolkien and superheroes, who think that Sony’s prosecution of a PlayStation hacker is the human-rights violation of our times. It’s probably the reason I ended up studying the humanities in university instead of computer science; the humanities people were just so much more fun to talk with. I’m reminded of Watson’s initial description of his new roommate Sherlock Holmes’s character in A Study in Scarlet:

1. Knowledge of literature — nil.
2. Knowledge of philosophy — nil.
3. Knowledge of astronomy — nil.
4. Knowledge of politics — feeble.
5. Knowledge of botany — variable. Well up in belladonna, opium and poisons generally. Knows nothing of practical gardening.
6. Knowledge of geology — practical, but limited. Tells at a glance different soils from each other. After walks, has shown me splashes upon his trousers and told me by their color and consistence in what part of London he has received them.
7. Knowledge of chemistry — profound.
8. Knowledge of anatomy — accurate, but unsystematic.
9. Knowledge of sensational literature — immense. He appears to know every detail of every horror perpetrated in the century.
10. Plays the violin well.
11. Is an expert singlestick player, boxer, and swordsman.
12. Has a good practical knowledge of English law.

No wonder Watson moved out and Arthur Conan Doyle started adjusting his hero’s character pretty early on. Who’d want to live with this guy?

All that aside, I also believe that, at least in his strong reaction to the Eliza effect itself, Weizenbaum was missing something pretty important. He believed that his parlor trick of a program had induced “powerful delusional thinking in quite normal people.” But that’s kind of an absurd notion, isn’t it? Could his own secretary, who, as he himself stated, had “watched [Weizenbaum] work on the program for many months,” really believe that in those months he had, working all by himself, created sentience? I’d submit that she was perfectly aware that Eliza was a parlor trick of one sort or another, but that she willingly surrendered to the fiction of a psychotherapy session. It’s no great insight to state that human beings are eminently capable of “believing” two contradictory things at once, nor that we willingly give ourselves over to fictional worlds we know to be false all the time. Doing so is in the very nature of stories, and we do it every time we read a novel, see a movie, play a videogame. Not coincidentally, the rise of the novel and of the movie were both greeted with expressions of concern that were not all that removed from those Weizenbaum expressed about Eliza.

There are of course a million philosophical places we could go with these ideas, drawing from McLuhan and Baudrillard and a hundred others, but we don’t want to entirely derail this little series on computer-game history, do we? So, let’s stick to Eliza and look at what Sherry Turkle wrote of the way that people actively helped along the fiction of a psychotherapy session:

As one becomes experienced with the ways of Eliza, one can direct one’s remarks either to “help” the program make seemingly pertinent responses or to provoke nonsense. Some people embark on an all-out effort to “psych out” the program, to understand its structure in order to trick it and expose it as a “mere machine.” Many more do the opposite. I spoke with people who told me of feeling “let down” when they had cracked the code and lost the illusion of mystery. I often saw people trying to protect their relationships with Eliza by avoiding situations that would provoke the program into making a predictable response. They didn’t ask questions that they knew would “confuse” the program, that would make it “talk nonsense.” And they went out of their way to ask questions in a form that they believed would provoke a lifelike response. People wanted to maintain the illusion that Eliza was able to respond to them.

If we posit, then, that Eliza‘s interactors were knowingly suspending their disbelief and actively working to maintain the fiction of a psychotherapy session, the implications are pretty profound, because now we have people in the mid-1960s already seriously engaging with a digital “interactive fiction” of sorts. We see here already the potential and the appeal of the computer as a storytelling medium, not as a tool to create stories from whole cloth. Eliza‘s interlocutors are engaging with a piece of narrative art generated by a very human artist, Weizenbaum himself (not that he would likely have described himself in those terms). This is what story writers and story readers have always done. Unlike Weizenbaum, I would consider the reception of Eliza not a cause for concern but a cause for excitement and anticipation. “If you think Eliza is exciting,” we might say to that secretary, “just wait until the really good stuff hits.” Hell, I get retroactive buzz just thinking about it.

And that buzz is the real reason why I wanted to talk about Eliza.

 


Eliza, Part 2

Just to be sure we understand what Eliza does and doesn’t do, I thought it might be instructive to look at an actual conversation from under the hood. What follows is an only slightly modified version of the sample run included in the July/August, 1977, issue of Creative Computing that introduced the BASIC Eliza. (Specifically, I changed the original reference to an IMSAI 8080 to a Tandy in keeping with this blog’s recent theme.) It’s a much less compelling example than the famous transcript I included in my last post, which is partly down to the acknowledged inferiority of this version of Eliza and partly down to Creative Computing choosing to interact the way a person more typically might — i.e., by trying to take the piss out of the program just a bit rather than playing along with the psychologist / patient relationship. In that sense I’d call it a more honest reflection of Eliza‘s capabilities and limitations, and of the average user’s experience with it.

At the heart of the program is a routine that searches each input for one of a group of text sequences. In order of priority, they are:

1. “CAN YOU”
2. “CAN I”
3. “YOU ARE”
4. “YOU’RE”
5. “I DON’T”
6. “I FEEL”
7. “WHY DON’T YOU”
8. “WHY CAN’T I”
9. “ARE YOU”
10. “I CAN’T”
11. “I AM”
12. “I’M ”
13. “YOU ”
14. “I WANT”
15. “WHAT”
16. “HOW”
17. “WHO”
18. “WHERE”
19. “WHEN”
20. “WHY”
21. “NAME”
22. “CAUSE”
23. “SORRY”
24. “DREAM”
25. “HELLO”
26. “HI ”
27. “MAYBE”
28. “ NO”
29. “YOUR”
30. “ALWAYS”
31. “THINK”
32. “ALIKE”
33. “YES”
34. “FRIEND”
35. “COMPUTER”

As soon as one of these sequences is found, it is taken as the key sequence for the input as a whole, and no more searching is done. In other words, “COMPUTER” could be taken as the key sequence only if the input text does not contain any of the previous 34 sequences.
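For the programmers in the audience, the scan is simple enough to sketch in just a few lines. What follows is a loose modern rendition in Python rather than the 1977 BASIC; the function and variable names are all my own invention, but the logic is as just described.

# The 35 key sequences from the table above, in priority order. Note the
# significant spaces in entries like "I'M ", "YOU ", "HI ", and " NO".
KEYWORDS = [
    "CAN YOU", "CAN I", "YOU ARE", "YOU'RE", "I DON'T", "I FEEL",
    "WHY DON'T YOU", "WHY CAN'T I", "ARE YOU", "I CAN'T", "I AM",
    "I'M ", "YOU ", "I WANT", "WHAT", "HOW", "WHO", "WHERE", "WHEN",
    "WHY", "NAME", "CAUSE", "SORRY", "DREAM", "HELLO", "HI ", "MAYBE",
    " NO", "YOUR", "ALWAYS", "THINK", "ALIKE", "YES", "FRIEND", "COMPUTER",
]

def find_key(user_input):
    """Return (key, remainder) for the highest-priority match, or None."""
    text = user_input.upper()
    for key in KEYWORDS:                       # earlier in the list = higher priority
        pos = text.find(key)                   # a blind substring search: "YES"
        if pos >= 0:                           # will happily match inside "EYES"
            return key, text[pos + len(key):]  # discard everything through the key
    return None                                # no match at all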

RUN
HI! I'M ELIZA. WHAT'S YOUR PROBLEM?
? I AM AN IBM 370
DID YOU COME HERE BECAUSE YOU ARE AN IBM 370

When Eliza finds a key sequence, it responds with a set sequence of its own. In this case, it has latched onto the sequence “I AM” in the input. Having done so, it automatically throws away all of the input through that point, leaving just “AN IBM 370.” It then looks up the appropriate response pattern, which is “DID YOU COME HERE BECAUSE YOU ARE…”, followed by the remainder of the original input.
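In terms of the little Python sketch above, generating the reply is nothing more than a table lookup and a string concatenation. The response patterns below are the ones quoted in this walkthrough; the table itself is again my own scaffolding, and for the moment I simply take the first response for each key. (The program actually cycles through several responses per key, a wrinkle we’ll get to shortly.)

# A fragment of a hypothetical response table. The four patterns for
# "I AM" are the four we will see over the course of this conversation.
RESPONSES = {
    "I AM": ["DID YOU COME HERE BECAUSE YOU ARE",
             "HOW LONG HAVE YOU BEEN",
             "DO YOU BELIEVE IT IS NORMAL TO BE",
             "DO YOU ENJOY BEING"],
    "I WANT": ["WHAT WOULD IT MEAN TO YOU IF YOU GOT"],
    # ...and so on for the other 33 keys...
}

def reply(user_input):
    match = find_key(user_input)
    if match is None:
        # the real program cycles through seven noncommittal responses
        return "SAY, DO YOU HAVE ANY PSYCHOLOGICAL PROBLEMS?"
    key, remainder = match
    return RESPONSES[key][0] + remainder       # "...BECAUSE YOU ARE" + " AN IBM 370"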

(Oh, and just so everyone gets the joke: an IBM 370 was an extremely large and powerful mainframe computer of the era; thus its repeated talking down to the little Radio Shack microcomputer.)

? YES I WANT A NEW LINEPRINTER
WHAT WOULD IT MEAN TO YOU IF YOU GOT A NEW LINEPRINTER

This time there are two possible matches: “YES” and “I WANT.” However, “I WANT” has higher priority, so Eliza throws away all of the input text through that point, which incidentally includes the “YES.” The appropriate response to “I WANT” is “WHAT WOULD IT MEAN TO YOU IF YOU GOT…”, followed by the remainder of the input.

? ABOUT 1800 LPM
SAY, DO YOU HAVE ANY PSYCHOLOGICAL PROBLEMS?

We’ve given Eliza nothing to latch onto here; no sequence matches at all. Therefore it falls back on a canned response for such situations.

? WHAT WOULD YOU KNOW I'M A 370 AND YOU'RE A TANDY
WHAT MAKES YOU THINK I AM A TANDY

There are a few matches this time, but “YOU’RE” has the highest priority, so Eliza throws everything away up to that point and appends the remainder to the appropriate response, “WHAT MAKES YOU THINK I AM…”

? IT SAYS THAT ON YOUR FRONT PANEL
WHY ARE YOU CONCERNED ABOUT MY FRONT PANEL

Key: “YOUR”
Result: “WHY ARE YOU CONCERNED ABOUT MY…” + “FRONT PANEL”

? LOOK YOU BROUGHT UP THE SUBJECT
WE WERE DISCUSSING YOU-- NOT ME.

Not all response sequences incorporate the original input. In this case, the key “YOU ” results in the canned response “WE WERE DISCUSSING YOU-- NOT ME.”

? I'M FIXING A HOLE...
DID YOU COME HERE BECAUSE YOU ARE FIXING A HOLE...

The key sequence this time is “I’M “. Eliza turns to the same response sequence for both “I AM” and “I’M “, as one might expect.

? SORRY THAT WAS A TEN YEAR OLD SOFTWARE BUG
PLEASE DON'T APOLOGIZE

Key: “SORRY”
Result: “PLEASE DON’T APOLOGIZE”

(Presumably, this is — leaving aside the obvious Beatles reference — an in-joke having to do with the IBM 370. If anyone reading this knows what exactly is being referred to here, by all means enlighten us in the comments.)

? I AM ALWAYS APOLOGIZING
HOW LONG HAVE YOU BEEN ALWAYS APOLOGIZING

Yet again “I AM” is the key sequence. However, notice that the response is not quite the same. For most keys, Eliza cycles through several response sequences, hopefully masking somewhat the automated nature of the whole process. In the case of “I AM,” there are four possible responses. (Due to an idiosyncrasy in the program, synonyms such as “I AM” and “I’M” are each run through their own cycle even though they draw from the same list of responses.)
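In sketch form, the cycling just means keeping a separate position counter for each key. Giving every key its own counter, rather than one counter per shared response list, is precisely what produces the idiosyncrasy mentioned above. Continuing my hypothetical Python rendition:

# Each key advances its own counter, so "I AM" and "I'M " cycle
# independently even when they share a single list of responses.
next_index = {key: 0 for key in RESPONSES}

def pick_response(key):
    choices = RESPONSES[key]
    i = next_index[key]
    next_index[key] = (i + 1) % len(choices)   # wrap around after the last one
    return choices[i]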

? SINCE MY FIRST COLD START
WHAT DOES THAT SUGGEST TO YOU?

Once again we have given Eliza nothing to go on, so it falls back on the second of seven possible noncommittal responses.

? YOU'RE THE SHRINK
DOES IT PLEASE YOU TO BELIEVE I AM THE SHRINK

Our second use of “YOU’RE” results in the second of four possible responses to that sequence.

? DON'T YOU KNOW?
OH, I KNOW?

And our second use of “YOU ” results in the second of three possible responses to that sequence.

? I AM BEGINNING TO WONDER ABOUT YOU
DO YOU BELIEVE IT IS NORMAL TO BE BEGINNING TO WONDER ABOUT I

Here we have the first real breakdown, and it’s an illustrative failure indeed. Eliza discovers yet another use of “I AM,” our third now, and so turns to the third response, “DO YOU BELIEVE IT IS NORMAL TO BE…” Based on what we’ve seen so far, we might expect the full reply to be, “DO YOU BELIEVE IT IS NORMAL TO BE BEGINNING TO WONDER ABOUT YOU,” which is obviously inappropriate. To account for situations like this, Eliza always attempts to modify the input-text fragments that it reprints, by changing first-person pronoun references to second-person and vice versa, and attempting to do some simplistic to-be verb conjugation. To be precise, it replaces sequences in the input text like so:

“ AM ” <--> “ ARE ”
“WAS ” <--> “WERE ”
“ I ” <--> “ YOU ”
“MY ” <--> “YOUR ”
“ I’VE ” <--> “ YOU’VE ”
“ I’M ” <--> “ YOU’RE ”

Eliza has in fact been searching reprinted fragments all along for these sequences, but has not found any until now. And now that it does, it gets it wrong, falling afoul of the fact that in English the second-person pronoun is the same in both the nominative and the accusative, but the first-person pronoun is different. Since it lacks any structural understanding of the English language and can only replace sequences of characters with other sequences, it replaces “YOU” with “I” rather than “ME.”
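The whole mechanism can be sketched as a single left-to-right pass over the reprinted fragment, swapping in both directions as it goes. Once more this is my own Python approximation rather than the BASIC original, but it blunders in exactly the same way:

# The six swap pairs from the table above. Scanning left to right and
# consuming each match as we go keeps a swapped-in " YOU " from being
# immediately swapped back to " I ".
SWAPS = [(" AM ", " ARE "), ("WAS ", "WERE "), (" I ", " YOU "),
         ("MY ", "YOUR "), (" I'VE ", " YOU'VE "), (" I'M ", " YOU'RE ")]

def conjugate(fragment):
    text = " " + fragment + " "                # pad so words at the edges can match
    result, i = "", 0
    while i < len(text):
        for a, b in SWAPS:
            if text[i:i + len(a)] == a:        # try each pair in both directions
                result += b
                i += len(a)
                break
            if text[i:i + len(b)] == b:
                result += a
                i += len(b)
                break
        else:                                  # no pair matched here; copy one
            result += text[i]                  # character and move on
            i += 1
    return result.strip()

# conjugate("BEGINNING TO WONDER ABOUT YOU") returns
# "BEGINNING TO WONDER ABOUT I" -- there is simply no rule that produces "ME".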

? HA HA HA YOU CAN'T EVEN TALK PROPERLY
YOU'RE NOT REALLY TALKING ABOUT ME, ARE YOU?

Key: “YOU ”
Result: “YOU’RE NOT REALLY TALKING ABOUT ME, ARE YOU?” (response 3 of 3)

? IS THERE ANYONE ELSE IN THE ROOM?
I SEE

Key: none
Result: Noncommittal response 3 of 7.

? COMPUTERS DON'T HAVE EYES
ARE YOU SURE?

Here we have another “failure” of sorts that this time actually works within the context of the conversation. Eliza takes the sequence “YES” from the word “EYES” as its key, resulting in the first of three possible responses to that sequence. Considering the non-sequitur that might have resulted, Eliza just dodged a bullet.

? ABOUT COMPUTERS?
DO COMPUTERS WORRY YOU?

Key: “COMPUTER”
Result: “DO COMPUTERS WORRY YOU?” (response 1 of 7)

? SHOULD THEY AFTER ALL I AM A 370
DO YOU ENJOY BEING A 370

Key: “I AM”
Result: “DO YOU ENJOY BEING” + “A 370” (response 4 of 4)

…and so on. I’m sure you get the gist of it by now. Having beaten this particular dead horse into the ground, I’ll wrap up this topic next time by talking about what the experience of Eliza ultimately means — to me, to Joseph Weizenbaum, and maybe to you.

 


Eliza, Part 1

In 1965 a professor at MIT, Joseph Weizenbaum, introduced a computerized conversation simulator he called ELIZA. The most famous ELIZA transcript was published soon after in an article Weizenbaum wrote for the January, 1966, issue of Communications of the ACM. It is a record of a conversation between the program and a young woman; the program’s responses in the text below are in all caps.

Men are all alike.
IN WHAT WAY
They’re always bugging us about something or other.
CAN YOU THINK OF A SPECIFIC EXAMPLE
Well, my boyfriend made me come here.
YOUR BOYFRIEND MADE YOU COME HERE
He says I’m depressed much of the time.
I AM SORRY TO HEAR YOU ARE DEPRESSED
It’s true. I am unhappy.
DO YOU THINK COMING HERE WILL HELP YOU NOT TO BE UNHAPPY
I need some help, that much seems certain.
WHAT WOULD IT MEAN TO YOU IF YOU GOT SOME HELP
Perhaps I could learn to get along with my mother.
TELL ME MORE ABOUT YOUR FAMILY
My mother takes care of me.
WHO ELSE IN YOUR FAMILY TAKES CARE OF YOU
My Father.
YOUR FATHER
You are like my father in some ways.
WHAT RESEMBLANCE DO YOU SEE
You are not very aggressive but I think you don’t want me to notice that.
WHAT MAKES YOU THINK I AM NOT VERY AGGRESSIVE
You don’t argue with me.
WHY DO YOU THINK I DON’T ARGUE WITH YOU
You are afraid of me.
DOES IT PLEASE YOU TO BELIEVE I AM AFRAID OF YOU
My father is afraid of everybody.
WHAT ELSE COMES TO MIND WHEN YOU THINK OF YOUR FATHER
Bullies.
DOES THAT HAVE ANYTHING TO DO WITH THE FACT THAT YOUR BOYFRIEND MADE YOU COME HERE

It is often assumed that “Eliza” is simply the name of the psychotherapist with which the human “patient” converses, but this is not in fact the case. ELIZA is actually the name of a generalized conversation system of Weizenbaum’s devising, being a reference to a working-class character in George Bernard Shaw’s play Pygmalion who learns to speak in an upper-class diction in spite of her origins. The metaphor, then, is that the computer has traded in the inflexible diction of a programming language for everyday natural language. Different “scripts” could be plugged into ELIZA to represent different characters. The first of these to be developed by Weizenbaum, as well as the one that generated the transcript above and the one that everyone remembers today, was called simply “Doctor.” In the film Rebel at Work Weizenbaum describes the process that led him to this rather brilliant character choice:

“And then all of a sudden it came to me: the psychiatrist. The psychiatrist asks questions in response to what the patient says. It may be partially or totally irrelevant, but the patient will interpret his words in terms of his own frame of mind. The patient assumes that the psychiatrist knows something, that he understands, that there is some sense to his words. ‘I don’t know what it is yet, but it’s not nonsense.’ And that’s how it started — then came ELIZA.

‘Well,’ says the psychiatrist, ‘perhaps… what does this remind you of?’

‘Hmm, very clever!’ thinks the patient. ‘This is a psychiatrist who really knows what I feel. I’m going to continue working with him.'”

As Weizenbaum was careful to describe in his article, in no sense does ELIZA actually understand anything its interlocutor enters. It is simply an elaborate text-generation engine, which searches for patterns in the entered text which can serve as hooks to be manipulated and recombined into its responses. The genius of the “Doctor” script is that this is also essentially what a psychotherapist often does during a session, at least from the perspective of the layman. Weizenbaum did prepare at least a few other ELIZA scripts, such as (keeping with the mental health theme) one for a paranoid schizophrenic, but these apparently did not have quite the same magic, and aren’t much remembered today. UPDATE: Actually, as Nick points out in the comments below, we have no evidence that Weizenbaum developed any scripts other than “Doctor.”

Even if we confine ourselves to “Doctor,” the famous script I included above is something of a best-case scenario. Weizenbaum, usually quite sober about these things, was stretching the truth considerably when he called it a “typical conversation” in his article. There inevitably comes a point in any ELIZA session that continues for any length of time when the program says something that clearly reveals it to be the elaborate parlor trick that it really is. Such breakdowns are at least as common as the several surprisingly apropos responses in the transcript above.

Weizenbaum wrote ELIZA in Lisp, a somewhat esoteric programming language developed at MIT for artificial intelligence and natural language processing applications. UPDATE: Make that MAD-SLIP, which originated at the University of Michigan. See Nick’s comment below for more details. However, his detailed ACM article served the same purpose as did Don Woods’s meticulously commented Adventure source code of ten years later, making the porting of ELIZA to other platforms and languages a relatively straightforward task. In the process, Weizenbaum’s original concept of a generalized conversation engine was forgotten, and ELIZA the system became Eliza the female psychotherapist. Creative Computing published a version in BASIC by Jeff Shrager and Steve North in its July/August, 1977, issue. In North’s words, “Although the program is an inferior imitation of the original, it does work.” Its limitations in comparison with Weizenbaum’s original derive from being written in BASIC and from the necessity of running in just 16 K of RAM. It’s nevertheless impressive in its way for what it is, and would serve as a springboard for countless sequels and derivations over the next decade. It seemed no one could own a microcomputer in the 1970s or 1980s without having some sort of Eliza variant somewhere in their software collection.

If you’d like to try out this version of Eliza on a virtual TRS-80, you can do so using the SDLTRS emulator and this state file.

1. Make sure the Level 2 ROM file and the NewDOS boot disk are in the emulator’s root directory, and that the state file is in some known location.
2. Start the emulator.
3. Turn your caps-lock on.
4. Press ALT-L to load a state.
5. Navigate to the state file and select it.

You’ll find yourself at a BASIC READY prompt, from which you can LIST the program, edit it, and of course RUN it. (Yes, it is very, very slow; such is life when doing lots of string processing in BASIC on a 1.78 MHz machine.) Type “SHUT” at any prompt to quit the program — and remember, you must have your caps lock on for it to “understand” you.

Finally, for those who know how to deal with such things, I’ve also made available the tokenized TRS-80 BASIC file of Eliza.

So, having talked about what ELIZA is we can soon get to the more interesting questions of how it works and what it means — and why I felt compelled to backtrack this way in the first place.

Postscript (June 17, 2011):

I’ve grown disenchanted with the SDLTRS emulator, and decided to use the one included with the MESS project from now on. Here’s a state file for use with that emulator. See my recently revised post on emulating the TRS-80 for more details on how to get a virtual TRS-80 working under MESS.

 
