The most obvious legacy of Eliza is the legions of similar chatterbots which have followed, right up to the present day. But what does Eliza mean to the history of interactive narrative? Or, put another way: why did I feel the need to backtrack and shoehorn it in now?
One answer is kind of blindingly obvious. When someone plays Eliza she enters into a text-based dialog with a computer program. Remind you of something? Indeed, if one took just a superficial glance at an Eliza session and at a session of Adventure one might assume that both programs are variations on the same premise. This is of course not the case; while Eliza is “merely” a text-generation engine, with no deeper understanding, Adventure and its descendants allow the player to manipulate a virtual world through textual commands, and so cannot get away with pretending to understand the way that Eliza can. Still, it’s almost certain that Will Crowther would have been aware of Eliza when he began to work on Adventure, and its basic mode of interaction may have influenced him. Lest I be accused of stretching Eliza’s influence too far, it’s also true that almost all computer / human interaction of the era was in the form of a textual dialog; command-line interfaces ruled the day, after all. The really unique element shared by Eliza and Adventure was the pseudo-natural language element of that interaction. Just on that basis Eliza stands as an important forerunner to full-fledged interactive fiction.
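For the curious, the pattern-transformation trick at Eliza’s heart can be sketched in a few lines of modern Python. This is a loose illustration of the general technique, not Weizenbaum’s original MAD-SLIP code, and the keywords and templates here are my own invented examples:

```python
import re

# Eliza-style keyword rules: each pattern captures a fragment of the user's
# input, and the response template echoes it back with pronouns swapped.
# Weizenbaum's real script had ranked keywords, decomposition rules, and a
# memory stack; this sketch shows only the core reflection trick.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones in a captured fragment."""
    return " ".join(REFLECTIONS.get(word.lower(), word)
                    for word in fragment.split())

def respond(line: str) -> str:
    """Return an Eliza-style response to one line of user input."""
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            return template.format(reflect(match.group(1)))
    # No keyword matched: fall back to a content-free prompt, much as
    # Eliza does when its script finds nothing to latch onto.
    return "Please go on."
```

Feed it “I am unhappy about my job” and it answers “How long have you been unhappy about your job?” without understanding a word of either sentence, which is the whole point: the apparent empathy is string surgery.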
But to just leave it at that, as I’m afraid I kind of did when I wrote my little history of IF a number of years ago now, is to miss most of what makes Eliza such a fascinating study. At a minimum, the number of scholars who have been drawn to Eliza despite having little or no knowledge of or interest in its place in the context of IF history points to something more. Maybe we can tease out what that might be by looking at Eliza’s initial reception, and at Joseph Weizenbaum’s reaction to it.
Perhaps the first person to interact extensively with Eliza was Weizenbaum’s secretary: “My secretary, who had watched me work on the program for many months and therefore surely knew it to be merely a computer program, started conversing with it. After only a few interchanges with it, she asked me to leave the room.” Her reaction was not unusual; Eliza became something of a sensation at MIT and the other university campuses to which it spread, and Weizenbaum an unlikely minor celebrity. Mostly people just wanted to talk with Eliza, to experience this rare bit of approachable fun in a mid-1960s computing world that was all Business (IBM) or Quirky Esoterica (the DEC hackers). Some, however, treated the program with a seriousness that seems a bit baffling today. There were even suggestions that it might be useful for actual psychotherapy. Carl Sagan, later of Cosmos fame, was a big fan of this rather horrifying idea, which a trio of authors consisting of a psychiatrist, a computer scientist, and a statistician actually managed to get published as a serious article in The Journal of Nervous and Mental Diseases:
Further work must be done before the program will be ready for clinical use. If the method proves beneficial, then it would provide a therapeutic tool which can be made widely available to mental hospitals and psychiatric centers suffering a shortage of therapists. Because of the time-sharing capabilities of modern and future computers, several hundred patients an hour could be handled by a computer system designed for this purpose. The human therapist, involved in the design and operation of the system, would not be replaced, but would become a much more efficient man since his efforts would no longer be limited to the one-to-one patient-therapist ratio as now exists.
Weizenbaum’s reaction to all of this has become almost as famous as the Eliza program itself. When he saw people like his secretary engaging in lengthy heart-to-hearts with Eliza, it… well, it freaked him the hell out. The phenomenon Weizenbaum was observing was later dubbed “the Eliza effect” by Sherry Turkle, which she defined as the tendency “to project our feelings onto objects and to treat things as though they were people.” In computer science and new media circles, the Eliza effect has become shorthand for a user’s tendency to assume based on its surface properties that a program is much more sophisticated, much more intelligent, than it really is. Weizenbaum came to see this as not just personally disturbing but as dangerous to the very social fabric, an influence that threatened the ties that bind us together and, indeed, potentially threatened our very humanity. Weizenbaum’s view, in stark contrast to those of people like Marvin Minsky and John McCarthy at MIT’s own Artificial Intelligence Laboratory, was that human intelligence, with its affective, intuitive qualities, could never be duplicated by the machinery of computing — and that we tried to do so at our peril. Ten years on from Eliza, he laid out his ideas in his magnum opus, Computer Power and Human Reason, a strong push-back against the digital utopianism that dominated in many computing circles at the time.
Weizenbaum wrote therein of his students at MIT, which was of course all about science and technology. He said that they “have already rejected all ways but the scientific to come to know the world, and [they] seek only a deeper, more dogmatic indoctrination in that faith (although that word is no longer in their vocabulary).” He certainly didn’t make too many friends among the hackers when he described them like this:
Bright young men of disheveled appearance, often with sunken glowing eyes, can be seen sitting at computer consoles, their arms tensed and waiting to fire their fingers, already poised to strike, at the buttons and keys on which their attention seems to be riveted as a gambler’s on the rolling dice. When not so transfixed, they often sit at tables strewn with computer printouts over which they pore like possessed students of a cabbalistic text. They work until they nearly drop, twenty, thirty hours at a time. Their food, if they arrange it, is brought to them: coffee, Cokes, sandwiches. If possible, they sleep on cots near the printouts. Their rumpled clothes, their unwashed and unshaven faces, and their uncombed hair all testify that they are oblivious to their bodies and the world in which they move.
Although Weizenbaum claimed to be basing this description at least to some extent on his own experiences of becoming too obsessed with his work, there’s some evidence that his antipathy for the hardcore hackers at MIT was already partially in place even before Eliza. It’s worth noting that Weizenbaum chose to write Eliza not on the hackers’ beloved DEC, but rather on a big IBM 7094 mainframe located in another part of MIT’s campus; according to Steven Levy, Weizenbaum had “rarely interacted with” the hardcore hacker contingent.
Still, I’m to a large degree sympathetic with Weizenbaum’s point of view. Having watched a parade of young men come through his classes who could recite every assembler opcode on the PDP but had no respect for or understanding of aesthetics, of history, of the simple good fellowship two close friends find over a bottle of wine, he pleads for balance, for a world where those with the knowledge to create and employ technology are also possessed of humanity and wisdom. It’s something we could use more of in our world of Facebook “friends” and Twitter “conversations.” I feel like Weizenbaum every time I wander over to Slashdot and its thousands of SLNs — Soulless Little Nerds, whose (non-videogame) cultural interests extend no further than Tolkien and superheroes, who think that Sony’s prosecution of a Playstation hacker is the human-rights violation of our times. It’s probably the reason I ended up studying the humanities in university instead of computer science; the humanities people were just so much more fun to talk with. I’m reminded of Watson’s initial description of his new roommate Sherlock Holmes’s character in A Study in Scarlet:
1. Knowledge of literature — nil.
2. Knowledge of philosophy — nil.
3. Knowledge of astronomy — nil.
4. Knowledge of politics — feeble.
5. Knowledge of botany — variable. Well up in belladonna, opium and poisons generally. Knows nothing of practical gardening.
6. Knowledge of geology — practical, but limited. Tells at a glance different soils from each other. After walks, has shown me splashes upon his trousers and told me by their color and consistence in what part of London he has received them.
7. Knowledge of chemistry — profound.
8. Knowledge of anatomy — accurate, but unsystematic.
9. Knowledge of sensational literature — immense. He appears to know every detail of every horror perpetrated in the century.
10. Plays the violin well.
11. Is an expert singlestick player, boxer, and swordsman.
12. Has a good practical knowledge of English law.
No wonder Watson moved out and Arthur Conan Doyle started adjusting his hero’s character pretty early on. Who’d want to live with this guy?
All that aside, I also believe that, at least in his strong reaction to the Eliza effect itself, Weizenbaum was missing something pretty important. He believed that his parlor trick of a program had induced “powerful delusional thinking in quite normal people.” But that’s kind of an absurd notion, isn’t it? Could his own secretary, who, as he himself stated, had “watched [Weizenbaum] work on the program for many months,” really believe that in those months he had, working all by himself, created sentience? I’d submit that she was perfectly aware that Eliza was a parlor trick of one sort or another, but that she willingly surrendered to the fiction of a psychotherapy session. It’s no great insight to state that human beings are eminently capable of “believing” two contradictory things at once, nor that we willingly give ourselves over to fictional worlds we know to be false all the time. Doing so is in the very nature of stories, and we do it every time we read a novel, see a movie, play a videogame. Not coincidentally, the rise of the novel and of the movie were both greeted with expressions of concern that were not all that removed from those Weizenbaum expressed about Eliza.
There’s of course a million philosophical places we could go with these ideas, drawing from McLuhan and Baudrillard and a hundred others, but we don’t want to entirely derail this little series on computer-game history, do we? So, let’s stick to Eliza and look at what Sherry Turkle wrote of the way that people actively helped along the fiction of a psychotherapy session:
As one becomes experienced with the ways of Eliza, one can direct one’s remarks either to “help” the program make seemingly pertinent responses or to provoke nonsense. Some people embark on an all-out effort to “psych out” the program, to understand its structure in order to trick it and expose it as a “mere machine.” Many more do the opposite. I spoke with people who told me of feeling “let down” when they had cracked the code and lost the illusion of mystery. I often saw people trying to protect their relationships with Eliza by avoiding situations that would provoke the program into making a predictable response. They didn’t ask questions that they knew would “confuse” the program, that would make it “talk nonsense.” And they went out of their way to ask questions in a form that they believed would provoke a lifelike response. People wanted to maintain the illusion that Eliza was able to respond to them.
If we posit, then, that Eliza’s interactors were knowingly suspending their disbelief and actively working to maintain the fiction of a psychotherapy session, the implications are pretty profound, because now we have people in the mid-1960s already seriously engaging with a digital “interactive fiction” of sorts. We see here already the potential and the appeal of the computer as a storytelling medium, not as a tool to create stories from whole cloth. Eliza’s interlocutors are engaging with a piece of narrative art generated by a very human artist, Weizenbaum himself (not that he would likely have described himself in those terms). This is what story writers and story readers have always done. Unlike Weizenbaum, I would consider the reception of Eliza not a cause for concern but a cause for excitement and anticipation. “If you think Eliza is exciting,” we might say to that secretary, “just wait until the really good stuff hits.” Hell, I get retroactive buzz just thinking about it.
And that buzz is the real reason why I wanted to talk about Eliza.
Bob Reeves
June 21, 2011 at 7:23 pm
I had Eliza on my personal computer in the ’80s and used to time how long it would take her to flunk the Turing test by saying something “machinelike” if I sincerely typed in things I’d say to a person. Sometimes right away; but surprisingly often, she could go a good long time sounding like a reasonable, attentive Rogerian therapist. It’s still an impressive program, remembering its limitations.
Sig
June 23, 2011 at 4:55 am
The fact that therapists seriously considered augmenting their work with Eliza may say more about therapy than it does about Eliza. Probably best not to think about that.
Almost entirely off topic, I really enjoyed “Let’s Tell a Story Together.” I don’t even remember how I stumbled upon it, but it was the catalyst that started me playing IF last year (I missed it the first time around), so thank you very much for that.
Sean
June 24, 2011 at 1:32 pm
Did you mean “There’s of course a million philosophical places we could [go] with these ideas”?
Jimmy Maher
June 24, 2011 at 1:36 pm
As a matter of fact I did. Thanks!
Gary
June 24, 2011 at 1:52 pm
I think Turkle’s first name is Sherry.
Jimmy Maher
June 24, 2011 at 2:16 pm
…and the hits just keep on coming. :)
Thanks!
Joe
June 24, 2011 at 5:54 pm
A great article!
I especially think the connection between interactive fiction and psychiatric dialogue is interesting. I don’t find the idea of using a computer program in a clinical setting as repugnant as you do, though.
Dr. Chandra
June 24, 2011 at 5:54 pm
“Who’d want to live with this guy?” is precisely why we want to develop AIs, so that I can be free to be who I am and have a (virtual) friend to talk to and be patient with me and answer my questions about those things I don’t know about…
Harry Kaplan
June 26, 2011 at 11:05 pm
TELL ME MORE ABOUT YOUR QUESTIONS.
Nate
October 12, 2011 at 11:17 pm
First let me say I’m really enjoying this blog. A nostalgic trip down 8K memory lane with some new insights from history.
“The really unique element shared by Eliza and Adventure was the pseudo-natural language element of that interaction.”
While you probably don’t mean that Eliza and Adventure were the *only* pseudo-natural language programs of the era, one might get that impression, which unless I’ve misread my history isn’t at all the case. Natural language was an active area of human-machine interaction.
Have you covered Terry Winograd’s SHRDLU of 1968-1970 yet? I think there’s a logical development from SHRDLU to ADVENT – even more so, perhaps, than ELIZA.
Jimmy Maher
October 13, 2011 at 1:41 pm
No, I didn’t mean that ONLY Eliza and Adventure used natural-language-style input. I can see how the word “unique” could indeed imply that they were, well, unique — a poor choice of words on my part.
SHRDLU is a very interesting program, and one I was aware of without having studied it in any depth. Thanks for bringing it to the attention of me and my readers again.
Nathan Segerlind
February 17, 2012 at 1:04 am
This a great series and I’m thoroughly enjoying it.
The quip about how you chose to be a humanities major resonated with me…..
I had had to go all the way through the PhD process and into postdoctoral research at a Very Elite Institution before I had the rather shaking realization that the historians were much more fun to talk to than my Very Serious Hard Science Crowd.
Nathan Laws
May 12, 2012 at 10:29 pm
Could you give a reference to Weizenbaum’s comment about his students at MIT? I’d like to be able to quote that.
Jimmy Maher
May 13, 2012 at 7:28 am
It’s drawn from Weizenbaum’s book, Computer Power and Human Reason.
Carl Read
July 8, 2016 at 8:47 am
The suspension of disbelief applies to most computer games that try to model the real world, I think. I remember a good air combat simulation that I enjoyed a lot – until I somehow landed on the enemy’s aircraft carrier and was informed my plane was being refueled. That killed it for me!
Ben Bilgri
January 24, 2018 at 7:35 pm
“eminently capable” > “imminently capable” ?
Phenomenal blog.
Jimmy Maher
January 25, 2018 at 11:58 am
Thanks!
Will Moczarski
May 6, 2019 at 5:11 am
we go could with these ideas -> could go
Jimmy Maher
May 7, 2019 at 9:01 am
Thanks!
Jeff Nyman
April 11, 2021 at 10:09 am
Interesting article on this. A few points. Regarding this:
“Weizenbaum was missing something pretty important. He believed that his parlor trick of a program had induced ‘powerful delusional thinking in quite normal people.’ But that’s kind of an absurd notion, isn’t it?”
No, it’s absolutely not. And even if it seemed so at the time, we now have the benefit of history which demonstrably and empirically shows us it was not an absurd notion.
We actually see many delusional effects with our technology, where people can anthropomorphize something, as just one example. Where people think they have so many “friends” when, in fact, they just have people who they connected with via the click of a button. We have people who think that they are actually saying something profound based on the number of “likes” they have, when those “likes” may have been largely automatic (and in some cases, bot-driven).
This is very different from “surrender[ing] to the fiction” or even the notion of “willingly give ourselves over to fictional worlds.” That’s actually a key point: people are dealing with types of fictions (“friends on my Facebook”, “likes on my feed”) as being indicative of a reality (actual friends; people who actually read and agree with something I said). But that association tends to crumble when, say, people actually find out who their friends are; or when gamers get out of their MMO chat groups and engage in a broader conversational reality.
So if we posit, as you state, that “interactors were knowingly suspending their disbelief and actively working to maintain the fiction of a psychotherapy session”, well — that is a form of delusional thinking. But then how is that different from a book or movie? Because the person is not trying to *influence* the book or movie. It’s a passive experience; it requires engaged attention, but not as much interaction. But with a computer program like ELIZA, you are effectively working with it to give it an aspect it simply lacks. You allow it its own form of agency that you would not with a movie or a book.
This is very, very similar, psychologically speaking and to use just one example, to people who are in battered situations and avoid trying to do things they know will set the other person off. You try very hard not to shatter the notion (delusion) that all is okay by making sure you don’t expose the underlying issue (saying the wrong thing leads to problems). Obviously that situation I just mentioned is very fraught but it happens in so many more subtle ways as well, such as when kids try to “fit in” with a group by only doing what the group seems to want, by not saying something that the group would find “uncool”, etc.
So I think this bit from the article is correct:
“We see here already the potential and the appeal of the computer as a storytelling medium,…”
Sure. But we also have to see the very real danger that Weizenbaum was seeing and also realize that he was completely accurate.
So to this statement:
“I would consider the reception of Eliza not a cause for concern but a cause for excitement and anticipation.”
Well, it probably has to be both. History has shown us this, without question and without doubt. As with most things in life — and something ELIZA would never understand — a binary either/or is rarely the case; it’s usually a spectrum.
Chris Billows
May 24, 2021 at 6:01 pm
I agree with much of what you wrote but don’t agree with your statement:
> Because the person is not trying to *influence* the book or movie. It’s a passive experience; it requires engaged attention, but not as much interaction. But with a computer program like ELIZA, you are effectively working with it to give it an aspect it simply lacks. You allow it its own form of agency that you would not with a movie or a book.
Engagement with passive media like books and movies is far more interactive than you give credit. The participants of passive media accept they are witnessing something they can’t influence. Yet a re-reading or re-watching of a book or show allows the participants to see new layers or meanings. It’s this practice that leads to fan role-taking and the creation of fandoms which become highly interactive including fan-fiction and cosplay.
We as humans need to update our ethical frameworks to understand how to become virtuous cyborgs.
Jeff Nyman
August 16, 2021 at 8:58 am
Fair point. I agree that it’s more interactive than is often given credit. But it’s still far less interactive than how people interact with a game or with something that changes as they interact with it.
Even that last point requires clarification. Games only “change” up to a point. Eventually you have seen all the game has to offer (such as hitting all text variations). So the game isn’t really changing, of course; it’s more just your interactions that allow it to seem to change how it behaves with you. That same thing is not what happens with a book or a movie (unless you choose to skip pages or fast forward).
My main point was that people tried to engage with the ELIZA system in a way they simply could not with a book or a film. The (relatively small) amount of interaction allowed is what made this possible. As the game industry evolved, and as the level of interactions became much wider, we see how the game industry outpaced much of the entertainment industry when it came to mind-share and revenue.
For sure there are “interactions” with books where we do interact with the book in a different way based on ourselves; how we’ve grown, what we’ve learned, etc. But that’s still what the person brings to the book, not what the book itself provides as a mode of interaction. With games, the opposite can, and usually is, the case.
A good case in point here would be the Dark Souls series, where you not only have to “unlearn” much you have learned about game playing but also have to figure out that the story and the lore will only come about by the items you find and read about. And even then, you often have to draw lots of connections between the text of various items. You also have to figure things out by looking at the world around you and understanding the context (such as why the bells are connected the way they are).
That level of interaction is very different from any form of interaction that a movie or book will get you. But we do see some series that come close, like the German time travel series “Dark”, which absolutely rewards repeated viewings. So there is interaction in both cases, to be sure, and thus I take your point. But I would still maintain that the level of interaction people engage in with both is far, far different.
SpookyFM
July 30, 2024 at 3:49 pm
Thank you for your fascinating writings which I have been enjoying for years now! (I may have commented earlier but I’m not sure, so I wanted to preface my comment in this way.)
As a psychologist (even though my field isn’t clinical psychology), I have a (certainly minor) nit to pick: The “group of psychologists” whose article you quote really isn’t one. The article is “A computer method of psychotherapy: Preliminary communication” by Kenneth M. Colby, James B. Watt and John P. Gilbert, from the second 1966 issue of The Journal of Nervous and Mental Diseases.
Colby was a psychiatrist (i.e., a medical doctor, not a psychologist). Watt and Gilbert are a bit harder to find, but Watt worked in computer sciences, and Gilbert may have been a statistician, although he may also have been in mathematical psychology. But they certainly weren’t a “group of psychologists”.
Just my professional pride, I guess ;)
Jimmy Maher
August 1, 2024 at 2:49 pm
Thanks! I edited the article to hopefully more accurately reflect the reality. (I would like to revisit some of these topics someday in the light of ChatGPT and the like, but I haven’t yet found a place to slot it.)