
A Web Around the World, Part 7: Computers On the Wire

The world’s first digital network actually predates the world’s first computer, in the sense that we understand the word “computer” today.

It began with a Bell Labs engineer named George Stibitz, who worked on the electro-mechanical relays that were used to route telephone calls. One evening in late 1937, he took a box of parts home with him and started to put together on his kitchen table a contraption that distinctly resembled the one that Claude Shannon had recently described in his MIT master’s thesis. By the summer of the following year, it worked well enough that Stibitz took it to the office to show it around. In a testament to the spirit of freewheeling innovation that marked life at Bell Labs, his boss promptly told him to take a break from telephone switches and see if he could turn it into a truly useful calculating machine. The result emerged fifteen months later as the Complex Computer, made from some 450 telephone relays and many other off-the-shelf parts from telephony’s infrastructure. It was slow, as all machines of its electro-mechanical ilk inevitably were: it took about one minute to multiply two eight-digit numbers together. And it was not quite as capable as the machine Shannon had described in print: it had no ability to make decisions at branch points, only to perform rote calculations. But it worked.

It is a little unclear to what extent the Complex Computer was derived from Shannon’s paper. Stibitz gave few interviews during his life. To my knowledge he never directly credited Shannon as his inspiration, but neither was he ever quizzed in depth about the subject. It strikes me as reasonable to grant that his initial explorations may have been entirely serendipitous, but one has to assume that he became aware of the Shannon paper after the Complex Computer became an official Bell Labs project; the paper was, after all, being widely disseminated and discussed at that time, and even the most cursory review of existing literature would have turned it up.

At any rate, another part of the Complex Computer project was unquestionably original. Stibitz’s managers wanted to make the machine available to Bell and AT&T employees working all over the country. At first glance, this would have to entail making a lot more Complex Computers, at considerable cost, even though the individual offices that received them would only need to make use of them occasionally. Might there be a better way, Stibitz wondered? Might it be possible to let the entire country share a single machine instead?

Stibitz enlisted a more experienced switching engineer named Samuel B. Williams, who figured out how to connect the Complex Computer to a telegraph line. By this point, telegraphy’s old manually operated Morse keys had long since been replaced by teletype machines that looked and functioned like typewriters, doing the grunt work of translating letters into Morse Code for the operator; similarly, the various arcane receiving mechanisms of old had given way to the teleprinter.

The world’s first digital network made its debut in September of 1940, at a meeting of the American Mathematical Society that was held at Dartmouth College in New Hampshire. The attendees were given the chance to type out mathematical problems on the teletype, which sent them up the line as Morse Code to the Complex Computer installed at Bell Labs’s facilities in New York City. The latter translated the dots and dashes of Morse Code into numbers, performed the requested calculations, and sent the results back to Dartmouth, where they duly appeared on the teleprinter. The tectonic plates subtly shifted on that sunny September afternoon, while the assembled mathematicians nodded politely, with little awareness of the importance of what they were witnessing. The computer networks of the future would be driven by a binary code known as ASCII rather than Morse Code, but the principle behind them would be the same.

As it happened, Stibitz and Williams never took their invention much further; it never did become a part of Bell’s everyday operations. The war going on in Europe was already affecting research priorities everywhere, and was soon to make the idea of developing a networked calculating device simply for the purpose of making civilian phone networks easier to install and repair seem positively quaint. In fact, the Complex Computer was destined to go down in history as the last of its breed: the last significant blue-sky advance in American computing for a long time to come that wasn’t driven by the priorities and the funding of the national-security state.

That reality would give plenty of the people who worked in the field pause, for their own worldviews would not always be in harmony with those of the generals and statesmen who funded their projects in the cause of winning actual or hypothetical wars, with all the associated costs in human suffering and human lives. Nevertheless, as a consequence of this (Faustian?) bargain, the early-modern era of computers and computer networks in the United States is almost the polar opposite of that of telegraphy and telephony in an important sense: rather than being left to the private sphere, computing at the cutting edge became a non-profit, government-sponsored activity. The ramifications of this were and remain enormous, yet have become so embedded in the way we see computing writ large that we seldom consider them. Government funding explains, for example, why the very concept of a modern digital computer was never locked up behind a patent like the telegraph and the telephone were. Perhaps it even explains in a roundabout way why the digital computer has no single anointed father figure, no equivalent to a Samuel Morse or Alexander Graham Bell — for the people who made computing happen were institutionalists, not lone-wolf inventors.

Most of all, though, it explains why the World Wide Web, when it finally came to be, was designed to be open in every sense of the word, easily accessible from any computer that implements its well-documented protocols. Even today, long after the big corporations have moved in, a spirit of egalitarianism and idealism underpins the very technical specifications that make the Internet go. Had the moment when the technology was ripe to create an Internet not corresponded with the handful of decades in American history when the federal government was willing and able to fund massive technological research projects of uncertain ultimate benefit, the world we live in would be a very different place.


Programming ENIAC.

There is plenty of debate surrounding the question of the first “real” computer in the modern sense of the word, with much heated sentiment on display from the more committed partisans. Some point to the machines built by Konrad Zuse in Nazi Germany in the midst of World War II, others to the ones built by the British code breakers at Bletchley Park around the same time. But the consensus, establishment choice has long been and still remains the American “Electronic Numerical Integrator and Computer,” or ENIAC. It was designed primarily by the physicist John Mauchly and the electrical engineer J. Presper Eckert at the University of Pennsylvania, and was funded by the United States Army for the purpose of calculating the ideal firing trajectories of artillery shells. Because building it was largely a process of trial and error from the time that the project was officially launched on June 1, 1943, it is difficult to give a precise date when ENIAC “worked” for the first time. It is clear, however, that it wasn’t able to do the job the Army expected of it until after the war that had prompted its creation was over. ENIAC wasn’t officially accepted by the Army until July of 1946.

ENIAC’s claim to being the first modern computer rests on the fact that it was the first machine to combine two key attributes: it was purely electrical rather than electro-mechanical — no clanking telephone relays here! — and it was Turing complete. The latter quality requires some explanation.

First defined by the British mathematician and proto-computer scientist Alan Turing in the 1930s, the phrase “Turing complete” describes a machine that is able to store numerical data in internal memory of some sort, perform calculations and transformations upon that data, and make conditional jumps in the program it is running based upon the results. Anyone who has ever programmed a computer of the present day is familiar with branching decision points such as BASIC’s “if, then” construction — if such-and-such is the case, then do this — as well as loops such as its “for, next” construction, which are used to repeat sections of a program multiple times. The ability to write such statements and see them carried out means that one is working on a Turing-complete computer. ENIAC was the first purely electrical computer that could deal with the contemporary equivalent of “if, then” and “for, next” statements, and thus the patriarch of the billions more that would follow.
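
To make the distinction concrete, here is the same pair of constructs rendered in modern Python rather than BASIC. This is merely an illustration of what Turing completeness buys you, not anything ENIAC itself could run:

```python
# The two constructs that lift a machine from rote calculation to
# Turing-complete computation: a conditional branch and a loop.

total = 0
for n in range(1, 11):   # the equivalent of BASIC's "for, next": repeat ten times
    if n % 2 == 0:       # the equivalent of BASIC's "if, then": branch on the data
        total += n       # only the even numbers are accumulated
print(total)             # prints 30
```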

That said, there are ways in which ENIAC still fails to match our expectations of a computer — not just quantitatively, in the sense that it was 80 feet long, 8 feet tall, weighed 30 tons, and yet could manage barely one half of one percent of the instructions per second of an Apple II from the dawn of the personal-computing age, but qualitatively, in the sense that ENIAC just didn’t function like we expect a computer to do.

For one thing, it had no real concept of software. You “programmed” ENIAC by physically rewiring it, a process that generally consumed far more time than did actually running the program thus created. The room where it was housed looked like nothing so much as a manual telephone exchange from the old days, albeit on an enormous scale; it was a veritable maze of wires and plugboards. Perhaps we shouldn’t be surprised to learn, then, that its programmers were mostly women, next-generation telephone operators who wandered through the machine’s innards with clipboards in their hands, remaking their surroundings to match the schematics on the page.

Another distinction between ENIAC and what came later is more subtle, but in its way even more profound. If you were to ask the proverbial person on the street what distinguishes a computer program from any other form of electronic media, she would probably say something about its “interactivity.” The word has become inescapable, the defining adjective of the computer age: “interactive fiction,” “interactive learning,” “interactive entertainment,” etc. And yet ENIAC really wasn’t so interactive at all. It operated under what would later become known as the “batch-processing” model. After programming it — or, if you like, rewiring it — you fed it a chunk of data, then sat back and waited however long it took for the result to come out the metaphorical other side of the pipeline. And then, if you wished, you could feed it some more data, to be massaged in exactly the same way. Ironically, this paradigm is much closer to the literal meaning of the word “computer” than the one with which we are familiar; ENIAC was a device for computing things. No more and no less. This made it useful, but far from the mind-expanding anything machine that we’ve come to know as the computer.

Thus the story of computing in the decade or two after ENIAC is largely that of how these two paradigms — programming by rewiring and batch processing — were shattered to yield said anything machine. The first paradigm fell away fairly quickly, but the second would persist for years in many computing contexts.


John von Neumann

In November of 1944, when ENIAC was still very much a work in progress, it was visited by John von Neumann. After immigrating to the United States from Hungary more than a decade earlier, von Neumann had become one of the most prominent intellectuals in the country, an absurdly accomplished mathematician and all-around genius for all seasons, with deep wells of knowledge in everything from atomic physics to Byzantine history. He was, writes computer historian M. Mitchell Waldrop, “a scientific superstar, the very Hollywood image of what a scientist ought to be, up to and including that faint, delicious touch of a Middle European accent.” A man who hobnobbed routinely with the highest levels of his adopted nation’s political as well as scientific establishment, he was now attached to the Manhattan Project that was charged with creating an atomic bomb before the Nazis could manage to do so. He came to see ENIAC in that capacity, to find out whether it or a machine like it might be able to help him and his colleagues with the fiendishly complicated calculations that were part and parcel of their work.

Truth be told, he was somewhat underwhelmed by what he saw that day. He was taken aback by the laborious rewiring that programming ENIAC entailed, and judged the machine to be far too balky and inflexible to be of much use on the Manhattan Project.

But discussion about what the next computer after ENIAC ought to be like was already percolating, so much so that Mauchly and Eckert had given the unfunded, entirely hypothetical machine a catchy acronym: EDVAC, for “Electronic Discrete Variable Automatic Computer.” Von Neumann decided to throw his own hat into the ring, to offer up his own proposal for what EDVAC should be. Written in the odd moments left over from his day job in the New Mexico desert, the resulting document laid out five abstract components of any computer. There must be a way of inputting data and a way of outputting it. There must be memory for storing the data, and a central arithmetic unit for performing calculations upon it. And finally, there must be a central control unit capable of executing programmed instructions and making conditional jumps.

But the paper’s real stroke of genius was its description of a new way of carrying out this programming, one that wouldn’t entail rewiring the computer. It should be possible, von Neumann wrote, to store not only the data a program manipulated in memory but the program itself. This way new programs could be input just the same way as other forms of data. This approach to computing — the only one most of us are familiar with — is sometimes called a “von Neumann machine” today, or simply a “stored-program computer.” It is the reason that, writes M. Mitchell Waldrop, the anything machine sitting on your desk today “can transform itself into the cockpit of a fighter jet, a budget projection, a chapter of a novel, or whatever else you want” — all without changing its physical form one iota.

Von Neumann began to distribute his paper, labeled a “first draft,” in late June of 1945, just three weeks before the Manhattan Project conducted the first test of an atomic bomb. The paper ignited a brouhaha that will ring all too familiar to readers of earlier articles in this series. Mauchly and Eckert had already resolved to patent EDVAC in order to exploit it for commercial purposes. They now rushed to do so, whilst insisting that the design had included the stored-program idea from the start, that von Neumann had in fact picked it up from them. Von Neumann himself begged to differ, saying it was all his own conception and filing a patent application of his own. Then the University of Pennsylvania entered the fray as well, saying it automatically owned any invention conceived by its employees as part of their duties. The whole mess was yet further complicated by the fact that the design of ENIAC, from which much of EDVAC was derived, had been funded by the Army, and was still considered classified.

Thus the three-way dispute wound up in the hands of the Army’s lawyers, who decided in April of 1947 that no one should get a patent. They judged that von Neumann’s paper constituted “prior disclosure” of the details of the design, effectively placing it in the public domain. The upshot of this little-remarked decision was that, in contrast to the telegraph and telephone among many other inventions, the abstract design of a digital electronic stored-program computer was to be freely available for anyone and everyone to build upon right from the start.[1]

Mauchly and Eckert had left the University of Pennsylvania in a huff by the time the Army’s lawyers made their decision. Without its masterminds, the EDVAC project suffered delay after delay. By the time it was finally done in 1952, it did sport stored programs, but its thunder had been stolen by other computers that had gotten there first.


The Whirlwind computer in testing, circa 1950. Jay Forrester is second from left, Robert Everett the man standing by his side.

The first stored-program computer to be actually built was known as the Manchester Mark I, after the University of Manchester in Britain that was its home. It ran its first program in April of 1949, a landmark moment in the proud computing history of Britain, which stretches back to such pioneers as Charles Babbage and Ada Lovelace. But this series of articles is concerned with how the World Wide Web came to be, and that is primarily an American story prior to its final stages. So, I hope you will forgive me if I continue to focus on the American scene. More specifically, I’d like to turn to the Whirlwind, the first stored-program all-electrical computer to be built in the United States — and, even more importantly, the first to break away from the batch-processing paradigm.

The Whirlwind had a long history behind it by the time it entered regular service at MIT in April of 1951. It had all begun in December of 1944, when the Navy had asked MIT to build it a new flight simulator for its trainees, one that could be rewired to simulate the flight characteristics of any present or future model of aircraft. The task was given to Jay Forrester, a 26-year-old engineering graduate student who would never have been allowed near such a project if all of his more senior colleagues hadn’t been busy with other wartime tasks. He and his team struggled for months to find a way to meet the Navy’s expectations, with little success. Somewhat to his chagrin, the project wasn’t cancelled even after the war ended. Then, one afternoon in October of 1945, in the course of a casual chat on the front stoop of Forrester’s research lab, a representative of the Navy brass mentioned ENIAC, and suggested that a digital computer like that one might be the solution to his problems. Forrester took the advice to heart. “We are building a digital computer!” he barked to his bewildered team just days later.

Forrester’s chief deputy Robert Everett would later admit that they started down the road of what would become known as “real-time computing” only because they were young and naïve and had no clue what they were getting into. For all that it was the product of ignorance as much as intent, the idea was nevertheless an audacious conceptual leap for computing. A computer responsible for running a flight simulator would have to do more than provide one-off answers to math problems at its own lackadaisical pace. It would need to respond to a constant stream of data about the state of the airplane’s controls, to update a model of the world in accord with that data, and provide a constant stream of feedback to the trainee behind the controls. And it would need to do it all to a clock, fast enough to give the impression of real flight. It was a well-nigh breathtaking explosion of the very idea of what a computer could be — not least in its thoroughgoing embrace of interactivity, its view of a program as a constant feedback loop of input and output.
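
In modern terms, the shape of what the Whirlwind team was groping toward looks something like the sketch below. Every name in it (read_controls, update_model, drive_displays) is a hypothetical stand-in invented for illustration; nothing of the sort ran on the Whirlwind itself:

```python
# A minimal sketch of a real-time feedback loop, the conceptual heart of the
# Whirlwind's flight-simulation mission: sample input, update a model of the
# world, drive output, and do it all to a fixed clock.

import time

TICK = 1 / 30  # thirty updates per second, fast enough to feel continuous

def read_controls():
    return {"stick": 0.5}       # placeholder for sampling the trainee's controls

def update_model(state, controls):
    state["pitch"] += controls["stick"] * TICK  # advance the simulated world
    return state

def drive_displays(state):
    pass                        # placeholder for feedback to the trainee

state = {"pitch": 0.0}
for _ in range(90):             # three simulated seconds of "flight"
    start = time.monotonic()
    state = update_model(state, read_controls())
    drive_displays(state)
    # sleep away whatever remains of this tick, so the loop keeps real time
    time.sleep(max(0.0, TICK - (time.monotonic() - start)))
print(state)
```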

The project gradually morphed from a single-purpose flight simulator to an even more expansive concept, an all-purpose digital computer that would be able to run a variety of real-time interactive applications. Like ENIAC before it, the machine which Forrester and Everett dubbed the Whirlwind was built and tested in stages over a period of years. In keeping with its real-time mission statement, it ended up doing seven times as many instructions per second as ENIAC, mostly thanks to a new type of memory — known as “core memory” — invented by Forrester himself for the project.

In the midst of these years of development, on August 29, 1949, the Soviet Union tested its first atomic bomb, creating panic all over the Western world; most intelligence analysts had believed that the Soviets were still years away from such a feat. The Cold War began in earnest on that day, as all of the post-World War II dreams of a negotiated peace based on mutual enlightenment gave way to the terrifying brinkmanship of mutually assured destruction. The stakes of warfare had shifted overnight; a single bomb dropped from a single Soviet aircraft could now spell the end of millions of American lives. Desperate to protect the nation against this ghastly new reality, the Air Force asked Forrester whether the Whirlwind could be used to provide a real-time picture of American airspace, to become the heart of a control center which kept track of friendlies and potential enemies 24 hours per day. As it happened, the project’s other sponsors had been growing impatient and making noises about cutting their funding, so Forrester had every motivation to jump on this new chance; flight simulation was entirely forgotten for the time being. On April 20, 1951, as its first official task, the newly commissioned Whirlwind successfully tracked two fighter planes in real time.

Satisfied with that proof of concept, the Air Force offered to lavishly fund a Project Lincoln that would build upon what had been learned from the Whirlwind, with the mission of protecting the United States from Soviet bombers at any cost — almost literally, given the sum of money the Air Force was willing to throw at it. It began in November of 1951, with Forrester in charge.

Whatever its implications about the gloomy state of the world, Project Lincoln was a truly visionary technological project, enough so as to warm the cockles of even a peacenik engineer’s heart. Soviet bombers, if they came someday, were expected to come in at low altitudes in order to minimize their radar exposure. This created a tremendous logistical problem. Even if the Air Force built enough radar stations to spot all of the aircraft before they reached their targets — a task it was willing to undertake despite the huge cost of it — there would be very little time to coordinate a response. Enter the Semi-Automatic Ground Environment (SAGE); it was meant to provide that rapid coordination, which would be impossible by any other means. Data from hundreds of radar stations would pour into its control centers in real time, to be digested by a computer and displayed as a single comprehensible strategic map on the screens of operators, who would then be able to deploy fighters and ground-based antiaircraft weapons as needed in response, with nary a moment’s delay.

All of this seems old hat today, but it was unprecedented at the time. It would require computers whose power would have to dwarf even that of the Whirlwind. And it would also require something else: each computer would need to be networked to all the radar stations in its sector, and to its peers in other control centers. This was a staggering task in itself. To appreciate why Jay Forrester and his people thought they had a ghost of a chance of bringing it off, we need to step back from the front lines of the Cold War for a moment and check in with an old friend.


Claude Shannon in middle age, after he had become a sort of all-purpose public intellectual for the press to trot out for big occasions. He certainly looked the part…

Claude Shannon had left MIT to work for Bell Labs on various military projects during World War II, and had remained there after the end of the war. Thus when he published the second earthshaking paper of his career in 1948, he did so in the pages of the Bell System Technical Journal.

“A Mathematical Theory of Communication” belies its name to some extent, in that it can be explained in its most basic form without recourse to any mathematics at all. Indeed, it starts off so simply as to seem almost childish. Shannon breaks the whole of communication — of any act of communication — into seven elements, six of them proactive or positive, the last one negative. In addition to the message itself, there are the “source,” the person or machine generating the message; the “transmitter,” the device which encodes the message for transport and sends it on its way; the “channel,” the medium over which the message travels; the “receiver,” which decodes the message at the other end; and the “destination,” the person or machine which accepts and comprehends the message. And then there is “noise”: any source of entropy that impedes the progress of the message from source to destination or garbles its content. Let’s consider a couple of examples of Shannon’s framework in action.

One of the oldest methods of human communication is direct speech. Here the source is a person with something to say, the transmitter the mouth with which she speaks, the channel the air through which the resulting sound waves travel, the receiver the ear of a second person, and the destination that second person herself. Noise in the system might be literal background or foreground noise such as another person talking at the same time, or a wind blowing in the wrong direction, or sheer distance.

We can break telegraphy down in the same way. Here the source is the operator with a message to send, the transmitter his Morse key or teletype, the channel the wire over which the Morse Code travels, the receiver an electromagnet-actuated pencil or a teleprinter, and the destination the human operator at the other end of the wire. Noise might be static on the line, or a poor signal caused by a weak battery or something else, or any number of other technical glitches.

But if we like we can also examine the process of telegraphy from a greater remove. We might prefer to think of the source as the original source of the message — say, a soldier overseas who wants to tell his fiancée that he loves her. Here the telegraph operator who sends the message is in a sense a part of the transmitter, while the operator who receives the message is indeed a part of the receiver. The girl back home is of course the destination. When using this scheme, we consider the administration of telegraph stations and networks also to be a part of the overall communications process. In this conception, then, strictly human mistakes, such as a message dropped under a desk and overlooked, become a part of the noise in the system. Shannon provides us, in other words, with a framework for conceptualizing communication at whatever level of granularity might happen to suit our current goals.

Notably absent in all of this is any real concern over the content of the message being sent. Shannon treats content with blithe indifference, not to say contempt. “The ‘meaning’ of a message is generally irrelevant,” he writes. “The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. [The] semantic aspects of communication are irrelevant to the engineering problem.” Rather than content or meaning, Shannon is interested in what he calls “information,” which is related to the actual meaning of the message but not quite the same thing. It is rather the encoded form the meaning takes as it passes down the channel.

And here Shannon clearly articulated an idea of profound importance, one which network engineers had been groping toward for some time: any channel is ultimately capable of carrying any type of content — text, sound, still or moving images, computer code, you name it. It’s just a matter of having an agreed-upon protocol for the transmitter and receiver to use to package it into information at one end and then unpack it at the other.

In practical terms, however, some types of content take longer to send over any given channel than others; while a telegraph line could theoretically be used to transmit video, it would take so long to send even a single frame using its widely spaced dots and dashes that it is effectively useless for the purpose, even though it is perfectly adequate for sending text as Morse Code. Some forms of content, that is to say, are denser than others, require more information to convey. In order to quantify this, one needs a unit for measuring quantities of information itself. This Shannon provides, in the form of a single on-or-off state — a yes or a no, a one or a zero. “The units may be called binary digits,” he writes, “or, more briefly, bits.”

And so a new word entered the lexicon. An entire universe of meaning can be built out of nothing but bits if you have enough of them, as our modern digital world proves. But some types of channel can send more bits per second than others, which makes different channels more or less suitable for different types of content.
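
A back-of-the-envelope exercise shows what that means in practice. The figures below are chosen purely for illustration:

```python
# Distinguishing one of N equally likely possibilities takes log2(N) bits of
# information, which lets us compare the demands of different kinds of content.

import math

print(math.log2(8))                    # picking one symbol out of 8 -> 3.0 bits

# A 100-character telegram drawn from a 26-letter alphabet, naively encoded:
print(100 * math.ceil(math.log2(26)))  # -> 500 bits

# A single crude 200x200 video frame at 8 bits per pixel:
print(200 * 200 * 8)                   # -> 320000 bits, for just one frame
```

At a telegraph line’s few dozen bits per second, the telegram goes through in seconds, while that single frame of video would take hours; hence the practical uselessness of the channel for the latter purpose.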

There is still one more thing to consider: the noise that might come along to corrupt the information as it travels from transmitter to receiver. A message intended for a human is actually quite resistant to noise, for our human minds are very good at filling in gaps and working around mistakes in communication. A handful of garbled characters seldom destroys the meaning of a textual message for us, and we are equally adept at coping with a bad telephone connection or a static-filled television screen. Having a lot of noise in these situations is certainly not ideal, but the amount of entropy in the system has to get pretty extreme before the process of communication itself breaks down completely.

But what of computers? Shannon was already looking forward to a world in which one computer would need to talk directly to another, with no human middleman. Computers cannot use intuition and experience to fill in gaps and correct mistakes in an information stream. If they are to function, they need every single message to reach them in its original, pristine state. But, as Shannon well realized, some amount of noise is a fact of life with any communications channel. What could be done?

What could be done, Shannon wrote, was to design error correction into a communication protocol. The transmitter could divide the information to be sent into packets of fixed length. After sending a packet, it could send a checksum, a number derived from performing a series of agreed-upon calculations on the bits in the packet. The receiver at the other end of the line would then be expected to perform the same set of calculations on the information it had received, and compare it with the transmitter’s checksum. If the numbers matched, all must be well; it could send an “okay” back to the transmitter and wait on the next packet. But if the numbers didn’t match, it knew that noise on the channel must have corrupted the information. So, it would ask the transmitter to try sending the last packet again. It was in essence the same principle as the one that had been employed on Claude Chappe’s optical-telegraph networks of 150 years earlier.
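
A toy version of such a protocol takes only a few lines of modern Python. The packet size, the checksum calculation, and the noise model below are simplified stand-ins of my own choosing, not anything Shannon specified:

```python
# A sketch of a stop-and-wait protocol: fixed-length packets, a checksum after
# each one, and re-transmission whenever the receiver's sum disagrees.

import random

PACKET_SIZE = 4

def checksum(packet):
    return sum(packet) % 256        # an agreed-upon calculation on the bits

def noisy_channel(packet):
    packet = list(packet)
    if random.random() < 0.2:       # now and then, noise corrupts a byte
        packet[random.randrange(len(packet))] ^= 0xFF
    return packet

def transmit(data):
    received = []
    for i in range(0, len(data), PACKET_SIZE):
        packet = data[i:i + PACKET_SIZE]
        while True:
            arrived = noisy_channel(packet)
            if checksum(arrived) == checksum(packet):
                received += arrived  # sums match: send an "okay," move on
                break
            # sums differ: ask the transmitter to send this packet again
    return received

data = list(b"ATTACK AT DAWN..")
assert transmit(data) == data
print("message delivered intact")
```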

To be sure, there were parameters in the scheme to be tinkered with on a situational basis. Larger packets, for example, would be more efficient on a relatively clean channel that gave few problems, smaller ones on a noisy channel where re-transmission was often necessary. Meanwhile, the larger the checksum and the more intense the calculations done to create it, the more confident one could be that the information really had been received correctly, that the checksums didn’t happen to match by mere coincidence. But this extra insurance came with a price of its own, in the form of the extra computing horsepower required to generate the more complex checksums and the extra time it took to send them down the channel. It seemed that success in digital communications was, like success in life, a matter of making wise compromises.

Two years after Shannon published his paper, another Bell Labs employee by the name of R.W. Hamming published “Error Detecting and Error Correcting Codes” in the same journal. It made Shannon’s abstractions concrete, laying out in careful detail the first practical algorithms for error detection and correction on a digital network, using checksums that would become known as “Hamming codes.”
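
The flavor of Hamming’s contribution is easy to convey with the smallest member of the family, a Hamming(7,4) code: four data bits protected by three parity bits, arranged so that the receiver can not merely detect a single flipped bit but locate and repair it. The sketch below is a modern illustration of the idea, not a transcription of Hamming’s paper:

```python
# Hamming(7,4): each parity bit covers the codeword positions whose binary
# index includes it, so the three parity checks spell out the error's address.

def encode(d):  # d = four data bits [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]  # positions 1..7

def decode(c):  # c = seven received bits, possibly with one flipped
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]  # check over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]  # check over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]  # check over positions 4,5,6,7
    pos = s1 + 2 * s2 + 4 * s3      # zero means clean; else the bad position
    if pos:
        c[pos - 1] ^= 1             # flip the corrupted bit back
    return [c[2], c[4], c[5], c[6]] # recover the four data bits

codeword = encode([1, 0, 1, 1])
codeword[4] ^= 1                    # simulate noise flipping one bit in transit
print(decode(codeword))             # -> [1, 0, 1, 1], corrected
```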

Even before Hamming’s work came along to complement it, Shannon’s paper sent shock waves through the nascent community of computing, whilst inventing at a stroke a whole new field of research known as “information theory.” The printers of the Bell System Technical Journal, accustomed to turning out perhaps a few hundred copies for internal distribution through the company, were swamped by thousands of requests for that particular issue. Many of those involved with computers and/or communications would continue to speak of the paper and its author with awe for the rest of their lives. “It was like a bolt out of the blue, a really unique thing,” remembered a Bell Labs researcher named John Pierce. “I don’t know of any other theory that came in a complete form like that, with very few antecedents or history.” “It was a revelation,” said MIT’s Oliver Selfridge. “Around MIT the reaction was, ‘Brilliant! Why didn’t I think of that?’ Information theory gave us a whole conceptual vocabulary, as well as a technical vocabulary.” Word soon spread to the mainstream press. Fortune magazine called information theory that “proudest and rarest [of] creations, a great scientific theory which could profoundly and rapidly alter man’s view of the world.” Scientific American proclaimed it to encompass “all of the procedures by which one mind may affect another. [It] involves not only written and oral speech, but also music, the pictorial arts, the theatre, the ballet, and in fact all human behavior.” And that was only the half of it: in the midst of their excitement, the magazine’s editors failed to even notice its implications for computing.

And those implications were enormous. The fact was that all of the countless digital networks of the future would be built from the principles first described by Claude Shannon. Shannon himself largely stepped away from the table he had so obligingly set. A playful soul who preferred tinkering to writing or working to a deadline, he was content to live off the prestige his paper had brought him, accepting lucrative seats on several boards of directors and the like. In the meantime, his theories were about to be brought to vivid life by Project Lincoln.


The Lincoln Lab complex, future home of SAGE research, under construction.

In their later years, many of the mostly young people who worked on Project Lincoln would freely admit that they had had only the vaguest notion of what they were doing during those halcyon days. Having very little experience with the military or aviation among their ranks, they extrapolated from science-fiction novels, from movies, and from old newsreel footage of the command-and-control posts whence the Royal Air Force had guided defenses during the Battle of Britain. Everything they used in their endeavors had to be designed and made from whole cloth, from the input devices to the display screens to the computers behind it all, which were to be manufactured by a company called IBM that had heretofore specialized in strictly analog gadgets (typewriters, time clocks, vote recorders, census tabulators, cheese slicers). Fortunately, they had effectively unlimited sums of money at their disposal, what with the Air Force’s paranoid sense of urgency. The government paid to build a whole new complex to house their efforts, at Laurence G. Hanscom Airfield, about fifteen miles away from MIT proper. The place would become known as Lincoln Lab, and would long outlive Project Lincoln itself and the SAGE system it made; it still exists to this day.

AT&T — who else? — was contracted to set up the communications lines that would link all of the individual radar stations into control centers scattered all over the country, and in turn link the latter together with one another; it was considered essential not to have a single main control center which, if knocked out of action, could take the whole system down with it. The lines AT&T provided were at bottom ordinary telephone connections, for nothing better existed at the time. No matter; an engineer named John V. Harrington took Claude Shannon’s assertion that all information is the same in the end to heart. He made something called a “modulator/de-modulator”: a gadget which could convert a stream of binary data into a waveform and send it down a telephone line when it was playing the role of transmitter, or convert one of these waveforms back into binary data when it was playing the role of receiver, all at the impressive rate of 1300 bits per second. Its name was soon shortened to “modem,” and bits-per-second to “baud,” borrowing a term that had earlier been applied to the dots and dashes of telegraphy. Combined with the techniques of error correction developed by Shannon and R.W. Hamming, Harrington’s modems would become the basis of the world’s first permanent wide-area computer network.
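
The principle behind Harrington’s gadget is easy to demonstrate in miniature: give each binary value its own tone, play the tones down the line, and let the receiver judge which tone it heard. The sketch below uses simple frequency-shift keying as an illustrative stand-in; it makes no attempt to reproduce the actual line coding of the SAGE modems:

```python
# A toy modulator/de-modulator: each bit becomes one of two tones on the
# simulated line, and the receiver correlates against both tones to decide.

import math

RATE = 8000          # samples per second on our simulated telephone line
BAUD = 1300          # bits per second, the figure quoted for the SAGE modems
F0, F1 = 1000, 2000  # tone frequencies standing for a 0 bit and a 1 bit
N = RATE // BAUD     # samples devoted to each bit

def modulate(bits):
    samples = []
    for b in bits:
        f = F1 if b else F0
        samples += [math.sin(2 * math.pi * f * t / RATE) for t in range(N)]
    return samples

def demodulate(samples):
    bits = []
    for i in range(0, len(samples), N):
        chunk = samples[i:i + N]
        def power(f):  # how strongly does this chunk resemble the given tone?
            c = sum(s * math.cos(2 * math.pi * f * t / RATE) for t, s in enumerate(chunk))
            q = sum(s * math.sin(2 * math.pi * f * t / RATE) for t, s in enumerate(chunk))
            return c * c + q * q
        bits.append(1 if power(F1) > power(F0) else 0)
    return bits

print(demodulate(modulate([1, 0, 1, 1, 0])))  # -> [1, 0, 1, 1, 0]
```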

At a time when the concept of software was just struggling into existence as an entity separate from computer hardware, the SAGE system would demand programs an order of magnitude more complex than anyone had ever attempted before — interactive programs that must run indefinitely and respond constantly to new stimuli, not mere algorithms to be run on static sets of data. In the end, SAGE would employ more than 800 individual programmers. Lincoln Lab created the first tools to separate the act of programming from the bare metal of the machine itself, introducing assemblers that could do some of the work of keeping track of registers, memory locations, and the like for the programmer, to allow her to better concentrate on the core logic of her task. Lincoln Lab’s official history of the project goes so far as to boast that “the art of computer programming was essentially invented for SAGE.”

In marked contrast to later years, programmers themselves were held in little regard at the time; hardware engineers ruled the roost. With no formal education programs in the discipline yet in existence, Lincoln Lab was willing to hire anyone who could get a security clearance and pass a test of basic reasoning skills. A substantial percentage of them wound up being women.

Among the men who came to program for SAGE was Severo Ornstein, a geologist who would go on to a notable career in computing over the following three decades. In his memoir, he captures the bizarre mixture of confusion and empowerment that marked life with SAGE, explaining how he was thrown in at the deep end as soon as he arrived on the job.

It seemed that not only was an operational air-defense program lacking, but the overall system hadn’t yet been fully designed. The OP SPECS (Operational Specifications) which defined the system were just being written, and, with no more background in air defense than a woodchuck, I was unceremoniously handed the task of writing the Crosstelling Spec. What in God’s name was Crosstelling? The only thing I knew about it was that it came late in the schedule, thank heavens, after everything else was finished.

It developed that the country was divided into sectors, and that the sectors were in turn divided into sub-sectors (which were really the operational units) with a Direction Center at the heart of each. Since airplanes, especially those that didn’t belong to the Air Force (or even the U.S.), could hardly be forbidden from crossing between sub-sectors, some coordination was required for handing over the tracking of planes, controlling of interceptors, etc., between the sub-sectors. This function was called Crosstelling, a name inherited from an earlier manual system in which human operators followed the tracks of aircraft on radar screens and coordinated matters by talking to one another on telephones. Now it had somehow fallen to me to define how this coordination should be handled by computers, and then to write it all down in an official OP SPEC with a bright-red cover stamped SECRET.

I was horrified. Not only did I feel incapable of handling the task, but what was to become of a country whose Crosstelling was to be specified by an ignoramus like me? My number-two daughter was born at about that time, and for the first time I began to fear for my children’s future…

In spite of it all, SAGE more or less worked out in the end. The first control center became officially operational at last in July of 1958, at McGuire Air Force Base in New Jersey. It was followed by 21 more of its kind over the course of the next three and a half years, each housing two massive IBM computers; the second was provided for redundancy, to prevent the survival of the nation from being put at risk by a blown vacuum tube. These computers could communicate with radar stations and with their peers on the network for the purpose of “Crosstelling.” The control centers went on to become one of the iconic images of the Cold War era, featuring prominently in the likes of Dr. Strangelove.[2] SAGE remained in service until the early 1980s, by which time its hardware was positively neolithic but still did the job asked of it.

Thankfully for all of us, the system was never subjected to a real trial by fire. Would it have actually worked? Most military experts are doubtful — as, indeed, were many of the architects of SAGE after all was said and done. Severo Ornstein, for his part, says bluntly that “I believe SAGE would have failed utterly.” During a large-scale war game known as Operation Sky Shield which was carried out in the early 1960s, SAGE succeeded in downing no more than a fourth of the attacking enemy bombers. All of the tests conducted after that fiasco were, some claim, fudged to one degree or another.

But then, the fact is that SAGE was already something of a white elephant on the day the very first control center went into operation; by that point the principal nuclear threat was shifting from bombers to ballistic missiles, a form of attack the designers had not anticipated and against which their system could offer no real defense. For all its cutting-edge technology, SAGE thus became a classic example of a weapon designed to fight the last war rather than the next one. Historian Paul N. Edwards has noted that the SAGE control centers were never placed in hardened bunkers, which he believes constitutes a tacit admission on the part of the Air Force that they had no chance of protecting the nation from a full-on Soviet first nuclear strike. “Strategic Air Command,” he posits, “intended never to need SAGE warning and interception; it would strike the Russians first. After SAC’s hammer blow, continental air defenses would be faced only with cleaning up a weak and probably disorganized counter-strike.” There is by no means a consensus that SAGE could have managed to coordinate even that much of a defense.

But this is not to say that SAGE wasn’t worth it. Far from it. Bringing so many smart people together and giving them such an ambitious, all-encompassing task to accomplish in such an exciting new field as computing could hardly fail to yield rich dividends for the future. Because so much of it was classified for so long, not to mention its association with passé Cold War paranoia, SAGE’s role in the history of computing — and especially of networked computing — tends to go underappreciated. And yet many of our most fundamental notions about what computing is and can be were born here. Paul N. Edwards credits SAGE and its predecessor the Whirlwind computer with inventing:

  • magnetic-core memory
  • video displays
  • light guns [what we call light pens today]
  • the first effective algebraic computer language
  • graphic display techniques
  • simulation techniques
  • synchronous parallel logic (digits transmitted simultaneously rather than serially through the computer)
  • analog-to-digital and digital-to-analog conversion techniques
  • digital data transmission over telephone lines
  • duplexing
  • multiprocessing
  • networks (automatic data exchange among different computers)

Readers unfamiliar with computer technology may not appreciate the extreme importance of these developments to the history of computing. Suffice it to say that much-evolved versions of all of them remain in use today. Some, such as networking and graphic displays, comprise the very backbone of modern computing.

M. Mitchell Waldrop elaborates in a more philosophical mode:

SAGE planted the seeds of a truly powerful idea, the notion that humans and computers working together could be far more effective than either working separately. Of course, SAGE by itself didn’t get us all the way to the modern idea of personal computers being used for personal empowerment; the SAGE computers were definitely not “personal,” and the controllers could use them only for that one, tightly constrained task of air defense. Nonetheless, it’s no coincidence that the basic setup still seems so eerily familiar. An operator watching his CRT display screen, giving commands to a computer via a keyboard and a handheld light gun, and sending data to other computers via a digital communications link: SAGE may not have been the technological ancestor of the modern PC, mouse, and network, but it was definitely their conceptual and spiritual ancestor.

So, ineffective though it probably was as a means of national defense, the real legacy of SAGE is one of swords turning into plowshares. Consider, for example, its most direct civilian progeny.


SAGE in operation. For a quarter of a century, hundreds of Air Force personnel were to be found sitting in antiseptic rooms like this one at any given time, peering at their displays in case something showed up there. It’s one way to make a living…

One day in the summer of 1953, long before any actual SAGE computers had been built, a senior IBM salesman named R. Blair Smith, who was privy to the project, chanced to sit next to another Smith on a flight from Los Angeles to New York City. This other Smith was none other than Cyrus Rowlett Smith, the president of American Airlines.

Blair Smith had caught the computer fever, and believed that computers could be very useful for airline reservations. Being a salesman, he didn’t hesitate to tell his seatmate all about this as soon as he learned who he was. He was gratified to find his companion receptive. “Now, Blair,” said Cyrus Smith just before their airplane landed, “our reservation center is at LaGuardia Airport. You go out there and look it over. Then you write me a letter and tell me what I should do.”

In his letter, Blair Smith envisioned a network that would bind together booking agents all over the country, allowing them to search to see which seats were available on which flights and to reserve them instantly for their customers. Blair Smith:

We didn’t know enough to call it anything. Later on, the word “Sabre” was adopted. By the way, it was originally spelled SABER — the only precedent we had was SAGE. SAGE was used to detect incoming airplanes. Radar defined the perimeter of the United States and then the information was signaled into a central computer. The perimeter data was then compared with what information they had about friendly aircraft, and so on. That was the only precedent we had. When the airline system was in research and development, they adopted the code name SABER for “Semi-Automatic Business Environment Research.” Later on, American Airlines changed it to Sabre.

Beginning in 1960, Sabre was gradually rolled out over the entire country. It became the first system of its kind, an early harbinger of the world’s networked future. Spun off as an independent company in 2000, it remains a key part of the world’s travel infrastructure today, when the vast majority of the reservations it accepts come from people sitting behind laptops and smartphones.

Sabre and other projects like it led to the rise of IBM as the virtually unchallenged dominant force in business computing from the middle of the 1950s until the end of the 1980s. But even as systems like Sabre were beginning to demonstrate the value of networked computing in everyday life, another, far more expansive vision of a networked world was taking shape in the clear blue sky of the country’s research institutions. The computer networks that existed by the start of the 1960s all operated on the “railroad” model of the old telegraph networks: a set of fixed stations joined together by fixed point-to-point links. What about a computer version of a telephone network instead — a national or international network of computers all able to babble happily together, with one computer able to call up any other any time it wished? Now that would really be something…

(Sources: the books A Brief History of the Future: The Origins of the Internet by John Naughton, From Gutenberg to the Internet: A Sourcebook on the History of Information Technology edited by Jeremy M. Norman, The Information by James Gleick, The Dream Machine by M. Mitchell Waldrop, The Closed World: Computers and the Politics of Discourse in Cold War America by Paul N. Edwards, Project Whirlwind: The History of a Pioneer Computer by Kent C. Redmond and Thomas M. Smith, From Whirlwind to MITRE: The R&D Story of the SAGE Air Defense Computer by Kent C. Redmond and Thomas M. Smith, The SAGE Air Defense System: A Personal History by John F. Jacobs, A History of Modern Computing (2nd ed.) by Paul E. Ceruzzi, Computing in the Middle Ages by Severo M. Ornstein, and Robot: Mere Machine to Transcendent Mind by Hans Moravec. Online sources include Lincoln Lab’s history of SAGE and the Charles Babbage Institute’s interview with R. Blair Smith.)

Footnotes
1 Inevitably, that wasn’t quite the end of it. Mauchly and Eckert continued their quest to win the patent they thought was their due, and were finally granted it at the rather astonishingly late date of 1964, by which time they were associated with the Sperry Rand Corporation, a maker of mainframes and minicomputers. But this victory only ignited another legal battle, pitting Sperry Rand against virtually every other company in the computer industry, who were not eager to start paying one of their competitors a royalty on every single computer they made. The patent was thrown out once and for all in 1973, primarily on the familiar premise that von Neumann’s paper constituted prior disclosure.
2 That film’s title character was partially based on John von Neumann, who after his work on the Manhattan Project and before his untimely death from cancer in 1957 became as strident a Cold Warrior as they came. “I believe there is no such thing as saturation,” he once told his old Manhattan Project boss Robert Oppenheimer. “I don’t think any weapon can be too large.” Many have attributed his bellicosity to his pain at seeing the Iron Curtain come down over his homeland of Hungary, separating him from friends and family forever.
 


A Web Around the World, Part 6: Routing Calls

The telegraph networks of the late nineteenth century functioned much like the railroad networks with which they were so closely associated in the minds of the public. Each pair of Morse keys and receivers was connected to exactly one other pair via a fixed “track.” Messages traveled from station to station through the network like railroad passengers. A telegram sent from Smalltown, USA, would first be sent up the line to a larger hub station, where it would be dropped into the “outgoing” basket of another line connected to the same station, which would take it to its next stop. And so on and so on, until it reached its final destination.

But the telephone wasn’t conducive to this approach. Alexander Graham Bell’s dream of being “able to chat pleasantly with friends in Europe while sitting in his Boston home” would require a different sort of network model, one more akin to the roads that would soon be built to handle automobile traffic. It would need to be possible for a message to steer its own way down a multitude of highways and byways to reach one of thousands or millions of individual addresses accessible on the network. And each message would need to do so at the same time that many other messages were doing the same thing, using the same roads. Network engineers would never again have it so easy as they had in the days when the telegraph was the only game in town.

Indeed, in contrast to this puzzle of dynamic routing, the invention of the telephone itself would soon seem a fairly minor challenge to have overcome. This new problem was too difficult, diffuse, and abstract to be solved in one eureka moment, or even a dozen of them. The worldwide telecommunications network that came into existence by the middle of the twentieth century was instead the result of steady incremental progress over the course of the decades, guided by people whose names have not found a place in history textbooks alongside those of Samuel Morse, Alexander Graham Bell, and Thomas Alva Edison. Yet the worldwide web these institutional inventors slowly pieced together was in its way more remarkable than any of the aforementioned men’s discrete creations. And it was also both the necessary precursor to and the medium of the computer-communications networks that would follow in the second half of the twentieth century.


The New Haven District Telephone Company’s exchange was the first of its type, heralding as much as the telephone itself a new era in communications.

The first system for letting any one telephone on a large network communicate with any other came into being in New Haven, Connecticut, on January 28, 1878. It was operated by the New Haven District Telephone Company, a spinoff of Bell Telephone, and connected 21 founding subscribers using a very simple, very physical method. The wire from each telephone on the network ran to a central exchange manned by a human operator. When you picked up your home phone to make a call, you were thus immediately connected to this individual. You told him which other subscriber you wished to speak to — the concept of phone numbers did not yet exist — whereupon he cranked a magneto to cause a bell to ring at the other end of your desired interlocutor’s line. If the individual in question picked up, the operator then linked your two telephones together using a patch cable.

It may strike us as a crude arrangement today. Certainly it was beset by obvious practical problems (what happened when more people tried to make calls than the operator could handle?) and privacy concerns (the operator could tell if a call was finished only by periodically listening in). Yet in the absence of any alternatives it spread like wildfire. The world’s second telephone exchange opened just three days after the first; by the end of 1878 there were several dozen of them in the United States, and a ringer had become an essential piece of telephony’s standard equipment. By the beginning of 1881, there were only nine cities with a population over 10,000 in the United States which didn’t boast at least one telephone exchange.

An early telephone exchange manned by boys, circa 1880. Such a place was called the “operating room” in telephony parlance, creating some amusing connotations.

The first exchange operators were, in the words of John Brooks,

an instant and memorable disaster. The lads, most of them in their late teens, who manned the telephone exchanges were simply too impatient and high-spirited for the job, which, in view of the imperfections of the equipment and inexperience of the subscribers, demanded above all patience and calm. They were given to lightening the tedium of their work by roughhousing, shouting constantly at each other, and swearing frequently at the customers.

Southwestern Bell historian David G. Park shares a typical anecdote:

In Little Rock, [Arkansas,] a prominent saloon keeper rang up and told one of the boy operators, fifteen-year-old Ashley Peay, “Connect me with my telephone at home. I want to talk to my wife.”

Ashley replied, “Your wife is talking to someone else.”

“What do you mean, my wife is talking to someone else?” the saloon keeper growled.

“I mean your line is busy,” Ashley snapped.

The saloon keeper wasn’t accustomed to being turned down by fifteen-year-old boys. “Get my wife on the line right now!” he shouted.

Young Peay’s reaction was to say, “Aw, shut up,” or words to that effect, and yank the connection.

The boy went on to handle other calls. Suddenly he was seized from behind, lifted from the floor, and shaken up and down by a furious saloon keeper. Just as the man was about to fling Peay through a glass window onto the street below, a man in the office came to the operator’s rescue.

Incidents like these occurred throughout the country…

But soon the telephone exchanges hit upon a solution: they replaced the boys with girls, who were not only more demure but also willing to work for even lower wages. A newspaper article listed the job requirements:

The physical requirements of girls who are given positions in the telephone exchange are almost as stringent as those insisted upon in men enlisting in the army. To become a “hello” girl, the applicant must be not more than 30 years old [and] not less than five feet six inches tall. Her sight must be good, her hearing excellent, her voice soft, her perception quick, and her temper agile.

Every girl’s sight and hearing is tested and her height is measured before she is hired. Tall, slim girls with long arms are preferred for work on the switchboards. Fat, short girls occupy too much room and are not able to reach all of the six feet of space allocated to each operator.

With regard to nationality, it is said that girls of Irish parentage make the best operators.

The Little Rock, Arkansas, telephone exchange circa 1920, long after the unruly boys had been replaced with girls.

Almost from the very beginning, then, the job of telephone operator was seen as a female occupation, joining the jobs of schoolteacher and nanny in the eyes of the broader culture as another transitory way station for women between the onset of adulthood and marriage. The standard pay of between $1.00 and $1.50 per day reflected this. Those numbers would go up with inflation, but the other parameters of the job would remain the same for well over a century, for as long as it existed. Meanwhile the realization that female voices tend to be less threatening and more soothing in the ears of both genders would become even more embedded in the culture. (When was the last time a computer, smartphone, or GPS gadget spoke to you in a male voice?)

The systems and processes that drove the telephone exchanges improved steadily after 1878, even as the core model of a subscriber asking an operator to manually route his call via a patch wire and a switchboard remained in place for a surprisingly long time. The first telephone numbers appeared as early as 1879, and quickly became commonplace, what with the way they eased the burden on the operators’ memory and provided telephony’s customers with at least an impression of anonymity. In December of 1887, the first Switchboard Conference was held in New York City. Tellingly, it devoted as much time to social engineering as it did to the technical side of telephony. Many a hand was wrung over the tendency of operators to say, “They won’t answer,” rather than “They don’t answer,” in the case of a call that wasn’t picked up, what with the former’s intimation of neglectful intent. And it was agreed that operators should employ short rather than long rings when placing a call because “a short ring excites the curiosity of the subscriber.”

It wasn’t that no one was interested in an automated alternative to manual exchanges. The latter were inherently inefficient; a rule of thumb said that one operator was required during peak hours for every 100 telephone subscribers on a network, constituting an enormous financial drain on service providers even given the minimal salaries they paid to these employees. Despite this ample incentive, the problem kept engineers stymied for years. It was first partially solved by, of all people, an undertaker living in Kansas City, Missouri. Coming along in the last decade of the nineteenth century, Almon B. Strowger was one of the last of the breed of maverick independent inventors-cum-entrepreneurs who had built the telegraphy and telephony industries in earlier decades, a breed soon to give way once and for all to the corporate institutionalists.


Almon B. Strowger

That said, Strowger conformed to no one’s stereotype of the genius inventor. Already 50 years old at the time of his achievement, he was a crotchety character whose irascibility verged on paranoia. The stage was set for his stroke of genius when he became convinced that the operators at his local telephone exchange had it in for him, and were deliberately misrouting his calls or not even bothering to place them. (If the anecdotes about his personality are anything to go by, there was perhaps another reason that so few people wanted to talk to him…) One of the operators was the wife of his principal rival in the undertaking business; he believed she was routing his potential customers’ calls to her husband’s establishment instead of his own.

So, he set out to remove the human operator from the equation altogether. His pique and grievance became the impetus behind the first workable automated switching system in the field of telephony.

Imagine a telephone whose cable terminates in a rotating electro-mechanical switch or relay, which looks rather like a windshield wiper. There is a button on the telephone. Every time the user presses it, a pulse of current goes down the line which causes the wiper to rotate one step, making a connection with a different receiving telephone. When the user has pressed the button a number of times corresponding to the “phone number” of the person she wishes to call, she presses a second button to cause that phone to ring, and proceeds to have a conversation. When she sets her phone down again, a switch is triggered that resets the system, dropping the wiper back to its home position in preparation for the next call. This is the Strowger system in its most basic form. Routing is still based on changing the physical connections between wires, but those physical changes are themselves now driven by electricity. For this reason, we call it an “electro-mechanical” design.

A very basic single-stage Strowger switch.

A network of more than ten or so nodes would be irredeemably tedious for the end-user of such a system, what with all the button-pressing it would require. But, crucially, the system could also be expanded by wiring more relays into it, and adding more buttons to the individual phones to control them. The system which Strowger first publicly demonstrated, for example, used two relay/button combinations to accommodate up to 100 phones, each with a unique two-digit number; the user tapped out the tens digit on one button, the ones digit on the other. In principle, the system could be extended to infinity by wiring yet more relays and buttons into the circuit.
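
For readers who like to see the mechanics spelled out, here is a minimal sketch in Python of such a two-stage selector. I should stress that this is only a thought experiment in code: the names and structure are my own illustrative inventions, not anything drawn from Strowger’s patent.

class StrowgerSelector:
    """Two rotary wipers: one stepped by 'tens' pulses, one by 'ones' pulses."""

    def __init__(self):
        self.tens = 0
        self.ones = 0

    def pulse_tens(self, count):
        # Each press of the first button sends a pulse down the line
        # that advances the tens wiper one step.
        self.tens = (self.tens + count) % 10

    def pulse_ones(self, count):
        # Each press of the second button advances the ones wiper.
        self.ones = (self.ones + count) % 10

    def ring(self):
        # The ring button connects to whichever line the wipers now
        # rest upon and rings that subscriber's bell.
        print(f"Ringing subscriber {self.tens * 10 + self.ones:02d}")

    def hang_up(self):
        # Setting the phone down drops both wipers back to their home
        # positions, ready for the next call.
        self.tens = self.ones = 0

switch = StrowgerSelector()
switch.pulse_tens(4)   # tap the first button four times...
switch.pulse_ones(2)   # ...and the second button twice to reach 42
switch.ring()          # -> Ringing subscriber 42
switch.hang_up()

Each added stage multiplies the number of reachable subscribers by ten, which is exactly why telephone numbers would grow longer as networks grew larger.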

Strowger was awarded a patent for his invention on March 10, 1891, and formed his own company soon after to exploit it. The first fully automated telephone exchange opened in La Porte, Indiana, on November 3, 1892. It was billed as the “girl-less, cuss-less, and wait-less telephone.” Strowger’s company would continue in the exchange business until 1983, first under the name of the Strowger Automatic Telephone Exchange Company and then as simply Automatic Electric.

But automated telephone exchanges would remain the exception to the rule for a long time after 1892; most people understandably preferred speaking a number to a fellow human being over pecking out long strings of digits manually and hoping for the best. Not until the 1920s would automated exchanges come to outnumber the manual ones, relegating the job of telephone operator to that of an occasional provider of information or extra help rather than the essential conduit of every single call. The key breakthrough that finally led to automated telephony’s widespread acceptance was the replacement of Strowger’s push buttons with spring-loaded dials; such “rotary phones” would remain the standard for decades to come, and would continue to function into the 1980s and beyond.

Rotary telephones like this one replaced buttons with a spring-loaded dial that sent the necessary bursts of electricity to move the switching relays at the exchange as it spun back to its resting position.



In the meantime, telephony made do with the manual exchanges. All of their inefficiencies and infelicities were thoroughly outweighed by the magic of the telephone itself. By the turn of the century, 1.4 million telephones were in service in the United States, and 25,000 or more girls and women were employed as operators. The impact of the telephone was different in nature from that of the telegraph, but no less socially significant. While it perhaps didn’t have the same immediate transformative effect on big business and international diplomacy, it was a vastly more democratic instrument, making a far more tangible change in the lives of its millions of individual users. The telegraph was a service, and thus to a large extent an abstraction; the telephone was a personally empowering technology, one you could literally hold in your hand.

Like the smartphones and tablets of our own day, telephones were condemned by certain segments of the intelligentsia, for destroying the old art of letter writing and for being a nuisance and a distraction from the truly important things in life; one article called them “an unmitigated domestic curse,” only good for “the exchange of twaddle between foolish women.” In another uncanny harbinger of more recent history, local newspapers fretted that telephones would slake the public’s thirst for their articles, columns, and calendars. (Unlike our more recent history, such fears would prove largely unfounded in this case.)

But the people couldn’t get enough of the telephone. American Bell — as Bell Telephone was now known, having adopted the new name in 1880 — was rather surprised to discover that the allegedly backward, rural areas of the country actually took to the telephone more readily than many of the nation’s urban centers. Farmers and particularly farmers’ wives, some of whom had heretofore been accustomed to going months at a time without talking to anyone outside their household, jumped on the telephone like a Titanic survivor on a lifeboat. The rural exchanges fostered a welcome new sense of community, becoming deeply embedded in the lives of the people they served, spreading news and gossip to all and sundry. Before Siri and “Hey, Google!,” there was the friendly local telephone operator to play the role of personal assistant, as captured in one housewife’s dialog from a gently satirical magazine article: “Oh, Central! Ring me up in fifteen minutes, so I don’t forget to take the bread out of the oven.” “Central, ring me up half an hour before the 2:17 train in the morning. See if it’s late before you call, please.”


For all the social changes it wrought, telephony extended its range much more slowly than telegraphy had. Cyrus Field’s transatlantic telegraph line had been completed just 22 years after the first telegraph line of any stripe was placed in service. The first transatlantic phone call, by contrast, didn’t take place until January 7, 1927, almost precisely 50 years after Roswell C. Downer had become the first person to have a telephone installed in his home. The delay was down to the nature of the two technologies.

The electrification of the Western world was in full swing at the turn of the century, to telephony’s immense benefit: hand-cranked magnetos and discrete batteries disappeared as companies like American Bell began to flood their networks with current from the grid. But the complex waveforms of telephony required much more power than a telegraph signal to travel an equivalent distance, due to a phenomenon known as attenuation: the tendency of a waveform to shed its peaks and valleys of amplitude and collapse toward uniformity as it travels farther and farther. Attenuation is in fact the same phenomenon in the broad strokes as the “signal retardation” which dogged the early days of undersea telegraphy, but it was never really an issue in terrestrial telegraphy, what with its staccato on-off approach to signaling. It could, however, play havoc with a sound waveform on a wire. The only way anyone knew of to fight attenuation was to add more power to the circuit, which in turn required thicker and thicker cables made of pure copper. This made the telephone into a peculiarly localized technology for instantaneous communication; it could and did foster a new sense of togetherness within communities, but struggled to reach between them. For decades, the American telephone network writ large was actually a bunch of local networks, connected to their peers if at all by just one or two long-distance lines.
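
To put some rough numbers on the problem: in the simplest textbook model, a signal’s amplitude decays exponentially with distance, at a rate set by the properties of the wire. The Python sketch below illustrates the principle only; its constants are invented for the sake of the example, not real line parameters of the era.

import math

def received_amplitude(sent, miles, alpha_per_mile):
    # Exponential attenuation: what is left of the amplitude after
    # the signal has traveled the given distance.
    return sent * math.exp(-alpha_per_mile * miles)

# A thick copper cable (low attenuation constant) versus a thinner,
# cheaper one (high attenuation constant), over a 300-mile line:
for label, alpha in [("thick copper", 0.005), ("thin wire", 0.02)]:
    remaining = received_amplitude(1.0, 300, alpha)
    print(f"{label}: {remaining:.1%} of the original amplitude remains")

A telegraph could tolerate such losses, since its receiver needed only to distinguish current from no current; a voice waveform had to arrive with its shape intact, which is why brute-force power and thick copper were for so long the only answers.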

Although the market for local telephone service became much more competitive after the expiration of the first of Alexander Graham Bell’s telephone patents in 1891, American Bell remained the 800-pound gorilla. The Bell executives had realized even well before that date that long-distance telephony was an area where their superior resources combined with their head start in the telephone business could allow them to sustain their monopoly without leaning on the crutch of patent law. Accordingly, American Bell on February 28, 1885, had formed a new subsidiary to specialize in long-distance telephony, with a name destined to outlive even that of its parent: the American Telephone and Telegraph Company, better known then and now as AT&T.[1]

The thick, custom-made cables that AT&T employed were expensive to buy and string up, and could only carry one call at a time. These realities were reflected in the prices AT&T charged its subscribers: a ten-minute call over the 292-mile line from Boston to New York City — the longest and most celebrated line on the network at the turn of the century — cost $2 during the day or $1 at night. These were prices that only bankers and investors and other members of the well-heeled set could afford. Long-distance telephony would continue to be their prerogative alone for quite some time to come. Everyone else would have to rely on the telegraph or the even more old-fashioned medium of the hand-written paper missive for their long-distance communications needs. And needless to say, there was little point in thinking about a transatlantic telephone line while the length of even a terrestrial line was limited to 300 miles at the outside.

Rather than crossing the Atlantic, telephony’s overarching goal became to bridge the continent — to string a single telephone cable from the East to the West Coast. In addition to its practical utility, it would be an achievement of immense symbolic significance, a sort of telephonic parallel to the famous driving of the golden spike that had marked the completion of the transcontinental railroad in 1869.

One milestone came courtesy of a Serbian immigrant named Mihajlo Pupin. In 1900, he patented something called a loading coil, which, when placed at intervals along a telephone wire, could greatly reduce if not entirely eliminate a signal’s attenuation by magnetically increasing its inductance, or resistance to change. But there were limits to what loading coils could do. In combination with a very thick cable, they were enough to get a signal from New York City to Denver, but the signal could be coaxed no further. What was needed was an equivalent to Samuel Morse’s old telegraphic concept of the repeater: a way of actively boosting a signal as it traveled down a wire. Unfortunately, the simple system of discrete circuits joined by electromagnetic switches which Morse had proposed, and which had indeed become commonplace on telegraph lines by now, was useless for telephony, being unable to preserve the character of an audio waveform.

Then, in 1906, a researcher named Lee De Forest proposed something he called an audion: a vacuum tube that functioned as the world’s first self-contained audio amplifier. The vacuum tube was a technology that would become hugely important outside as well as inside of telephony in the decades to come. The engineers at AT&T realized that it should be possible to install these audions — or simply repeaters, as they would quickly become known — along a terrestrial telephone line to make the voices it carried travel absolutely any distance. The details turned out to be a little more complicated than they first appeared, as generally happens in any form of engineering, but AT&T found a way to make it work at last. The company’s marketers came up with the perfect way to mark the occasion.

Alexander Graham Bell, center, prepares to make the first transcontinental phone call.

On January 25, 1915, a 67-year-old Alexander Graham Bell, stouter and grayer than once upon a time but still bursting with his old Scottish bonhomie, picked up a telephone before assembled press and public in New York City. “Hoy! Hoy!” he said in his booming brogue. (From the first days of his invention until the end of his own days, Bell loathed the standard telephonic greeting of “Hello.”) “Mr. Watson? Are you there? Do you hear me?”

In front of another assemblage in San Francisco, Bell’s old friend and helper Thomas A. Watson answered him. “Yes, Mr. Bell. I hear you perfectly. Do you hear me well?”

“Yes, your voice is perfectly distinct,” said Bell. “It is as clear as if you were in New York.”

Inevitably, Bell was soon cajoled into repeating those famous first words ever spoken into a working telephone: “Mr. Watson, come here. I want to see you.” Whereupon Watson noted that, instead of seven seconds, the journey would now take him seven days. It may not have been a transatlantic link quite yet, but it did feel like a culmination of sorts.



Alexander Graham Bell and Thomas Watson weren’t the only ones on the line that memorable day. Theodore N. Vail, the erstwhile mastermind of Bell Telephone’s successful legal campaign against Western Union, had returned after a lengthy hiatus to serve as president of the company once again in 1907. He listened in to the historic conversation from a telephone on Jekyll Island, Georgia, where he was convalescing from the heart and kidney afflictions that would kill him in 1920.

But before his death, Vail established a new research-and-development division unlike any seen before in corporate America, a place designed to bring the best engineers in the country together and give them carte blanche to solve problems that the world might not even know it had yet. It would become known as Bell Labs, at first informally and then officially, and it would do much to shape the course of not just communications but the entirety of technology — not least the field of computing — over the balance of the twentieth century.

On its home turf of telephony, Bell Labs steadily improved the state of the art of automated switching and developed techniques for multiplexing, so that calls could be routed together along trunk lines instead of always requiring a wire of their own. And it devised ways to integrate Italian inventor Guglielmo Marconi’s technology of wireless radio with the network, in order to bridge gaps where wired telephony simply wouldn’t serve. Because no one had yet found a way of installing repeaters on an undersea cable, a transatlantic connection would have to depend on these new techniques of “radiotelephony.”

The call of January 7, 1927, was a curiously muted affair in contrast to the completion of the first transatlantic telegraph cable or even the first transcontinental phone call, involving no greater luminaries than Walter S. Gifford, Vail’s successor as president of American Bell and AT&T, and Evelyn P. Murray, the head of the British mail service, which held a government-granted monopoly over telephony in that country. Nevertheless, it was a landmark moment; while Alexander Graham Bell’s dream of easy, casual conversation across an ocean was still decades away from fulfillment, a conversation was at least possible now, four and a half years after his death. Wireless links such as the one which facilitated this conversation would remain a vital part of the telephone networks of the future, whether in the form of conventional radio waves, microwave beams, or satellite feeds. “Distance doesn’t mean anything anymore,” said one of the engineers behind the first transatlantic call. “We are on the verge of a very high-speed world.” Truer words were never spoken.



Outside of telephony, the Bell Labs boffins created the first motion-picture projector with audio as well as video, and saw it used in 1927’s The Jazz Singer, that harbinger of a new era of cinema. That same year — a banner one in its history — Bell Labs conducted the first American demonstration of television, starring Secretary of Commerce (and future President) Herbert Hoover. Two years later, it broadcast television for the first time in color. AT&T and American Bell might very well have extended their telephone empire to television in the next decade, had the Great Depression not intervened to put the damper on the consumer economy.

As it was, the fallout from the stock-market crash of late 1929 slowed the march of technology, but could hardly turn back the hands of time. By that point there were more than 15 million telephones in service under the auspices of American Bell alone. Their numbers dropped for a while in the aftermath of the crash, but relatively modestly. By 1937, there were more telephones than ever in the United States and, indeed, around the world.

A review of the literature surrounding the telephone during the decade provides yet more evidence that the concerns surrounding the trendy communications mediums of our own age are not as unique as we might like to think. It seems that worries about communications technologies leading to a dumbing-down of the populace and egotism running rampant did not begin with Facebook and Instagram. A sociological study of 1000 telephone conversations, for example, revealed with horror that only 2240 separate words were used in the course of all of them, which amounted to no more than 10 percent of the words heretofore considered fairly commonplace in English. Worse, the most frequently used words of all were “I” and “me.”

On a more positive note, the telephone was promoted — perchance a bit excessively — as the Great Leveler which would allow the proverbial little people to communicate directly with the movers and shakers of the world, just as Twitter and its ilk sometimes are today. An Ohioan with the delightfully folksy name of Abe Pickens took this lesson to heart, attempting to call up Francisco Franco, Benito Mussolini, Neville Chamberlain, Emperor Hirohito, and Adolf Hitler among others to give them a piece of his mind. He reportedly did manage to get himself connected directly to Hitler at one point, but Pickens spoke no German and Hitler spoke no English; the baffled Führer quickly fobbed his interlocutor off on an aide. Sadly, Pickens did not succeed in preventing World War II.

Even by this late date, the telephone had not yet annihilated its more static predecessor the telegraph. Western Union’s tacit bargain with Bell Telephone of 1878 — you take telephony, we’ll take telegraphy — could still be construed as a wise move on the part of both, in that both companies were still hugely powerful and hugely profitable. The field of journalism remained completely in thrall to telegraphy, as did large swaths of government and business. During the war to come, telegraphy would provide a precious lifeline to loved ones back home for countless soldiers serving in faraway places where telephones couldn’t reach. Still, the telegraph had now become a legacy technology, destined only for stagnation and gradual decline. The future lay in telephony.

This sprawling amalgamation of transmitters, receivers, lines, switches, and gates was one of the wonders of its world — so wondrous that it can still inspire awe when we step back to really think about it today. You could pick up a phone at any arbitrary location and, by dialing some numbers and perhaps talking with an operator or two, make a connection with any arbitrary other phone elsewhere in your country — or in many cases elsewhere on your continent or even planet. And then you could chat with the person who answered that other phone as if the two of you were sitting together in the same parlor. If you ask me, this is still amazing.

The technological web which allowed such interconnections was arguably the most complex thing yet created by human ingenuity — so complex that no one fully understood all of its nooks and crannies. The fact that it actually worked was flabbergasting; the fact that it did so less than a century after Samuel Morse had first figured out how to send single bursts of electric current down a single wire was nothing short of mind-blowing. When we look at it today, when we think about its bustling dynamism, its little packets of conversation and meaning flying to and fro, it’s easy to see it as a sort of massive cyber-organic computer, doing the work of the world. If most contemporary people weren’t discussing the telephone network in those terms, it was because half of the analogy literally didn’t yet exist for them: the concept of an “anything machine” in the form of a programmable computer, while by no means a new one in some academic and intellectual circles, was still a foreign one to the general public.

But it wasn’t foreign to a young man named Claude Shannon.


Anything but a stuffy academic, Claude Shannon was one of the archetypes of the playful hacker spirit which would fully emerge at MIT during the postwar years. “When researchers at the Massachusetts Institute of Technology or Bell Laboratories had to leap aside to let a unicycle pass,” writes James Gleick in The Information, “that was Claude Shannon.”

Shannon had grown up on a farm in rural Michigan, tinkering with homemade telegraphs that repurposed barbed-wire fences for communication. After taking a bachelor’s degree in electrical engineering and mathematics from the University of Michigan, he came to the Massachusetts Institute of Technology as a 20-year-old prodigy in 1936, having been personally recruited by Dean of Engineering Vannevar Bush to work on the Differential Analyzer, a 100-ton semi-programmable analog calculating machine designed to relieve the grunt work of solving complex mathematical problems. Inside Shannon’s fecund mind, the Differential Analyzer collided with his abiding interest in telegraphy and telephony and his memories of a class he had taken in Michigan on symbolic logic, and out popped “A Symbolic Analysis of Relay and Switching Circuits,” a paper which has been called “the most important master’s thesis of the twentieth century.”

Within his thesis, Shannon presented a plan for an electro-mechanical computer built around the digital logic of ones and zeroes — a machine far more flexible than the likes of the Differential Analyzer, yet one that required only the off-the-shelf equipment of telephony rather than the many bespoke wheels and gears of its gargantuan steampunk inspiration. Shannon’s pivotal insight was that switches on a circuit could not only route information but constitute information: an open switch could indicate a one, a closed switch a zero, and everything else could be built up from there. Abstract logic could be rendered concrete in circuitry: “Any operation that can be completely described in a finite number of steps using the words ‘if,’ ‘or,’ ‘and,’ etc., can be done automatically with relays.” I should hasten to clarify that the only way to reprogram one of Shannon’s hypothetical computers was to physically rewire it — effectively to remake it into a brand new machine. And again, it was still at bottom an electro-mechanical rather than a purely electrical device. Still, it was a major milestone on the road to the modern digital computer.
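
Shannon’s scheme is easy to replay in a modern idiom. The little Python sketch below follows the convention of his thesis — an open switch is a one, a closed switch a zero — under which switches in series combine like an OR and switches in parallel like an AND. The helper names are mine, not Shannon’s, and the whole thing is offered as an illustration rather than a reconstruction of his circuits.

def series(a, b):
    # A series circuit is blocked if either switch is open: OR.
    return a | b

def parallel(a, b):
    # A parallel circuit is blocked only if both switches are open: AND.
    return a & b

def negate(a):
    # A relay's break contact opens when its coil is energized: NOT.
    return 1 - a

# "Any operation that can be completely described in a finite number of
# steps using the words 'if,' 'or,' 'and,' etc., can be done
# automatically with relays." For instance, a one-bit binary adder:
def half_adder(x, y):
    carry = parallel(x, y)                                  # x AND y
    total = parallel(series(x, y), negate(parallel(x, y)))  # x XOR y
    return total, carry

for x in (0, 1):
    for y in (0, 1):
        total, carry = half_adder(x, y)
        print(f"{x} + {y} -> sum {total}, carry {carry}")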

The technologies of telephony would continue to be repurposed to suit the needs of the burgeoning field of computing in the years that followed. The vacuum tubes that served American Bell so well for so long, for example, found a new application at the heart of the first programmable digital computers of the postwar era. And that technology in turn gave way to another one first developed for telephony: the transistor, which was invented at Bell Labs in 1947 and went on to become, as John Brooks wrote in 1976, “the key to modern electronics,” facilitating everything from hearing aids to the Moon landing. The transistor also lay behind the first wave of truly widespread institutional computing, over the two decades prior to the arrival of personal computers on the scene in the late 1970s.

But these developments, important though they are, are not the main reason I’ve chosen to tell the story of the analog technologies of the telegraph and telephone on a site about the history of digital culture. I’ve rather done so because computer engineers did more than borrow from the tool kits of the electrical-communications infrastructure of their day: they also came to borrow the existing communication networks themselves. This was the result of an insight which seems so self-evident as to be almost banal once it has been grasped, but which took the brilliant mind of Claude Shannon to appreciate and articulate for the first time: the fact that an electric current which could carry the dots and dashes of Morse code or the sound of a human voice could be made to carry any kind of information. This simple realization was the key that opened the door to the Internet.

(Sources: the books Alexander Graham Bell and the Conquest of Solitude by Robert V. Bruce, Telephone: The First Hundred Years by John Brooks, Good Connections: A Century of Service by the Men and Women of Southwestern Bell by David G. Park Jr., From Gutenberg to the Internet: A Sourcebook on the History of Information Technology edited by Jeremy M. Norman, The Information by James Gleick, The Dream Machine by M. Mitchell Waldrop, and The Practical Telephone Exchange Handbook by Joseph Poole. Online sources include Bob’s Old Phones by Bob Estreich, “Telephone History” by Tom Farley, “Telephone Switches” by Mark Csele, “The Strowger Telecomms Page” of SEG Communications, and “Today in History: The First Transatlantic Phone Call” by Priscilla Escobedo for UTA Libraries.)

Footnotes

1 Even at the time of its inception, the name behind the acronym was anomalous if not meaningless, given that AT&T had no holdings in telegraphy; AT&T was content to leave that monopoly to Western Union. The name is perhaps best explained as a warning shot across Western Union’s bows, in case it should ever feel tempted to reenter the telephone market…
 
 


A Web Around the World, Part 5: Selling the Telephone

Our history textbooks tell us that Alexander Graham Bell and his assistant Thomas A. Watson built and tested the world’s first working telephone on March 10, 1876. This statement is, broadly speaking, correct. Yet it can obscure what a crude instrument that first telephone really was, with its one end terminating in a tuning fork dunked in a bowl of pungent liquid, its other in a metal reed that functioned as the most rudimentary imaginable form of speaker. The device was unidirectional, which made holding an actual conversation over it an impossibility. If you breathed in when you leaned down to talk into the transmitting diaphragm, you would be rewarded with a lungful of fumes and a coughing fit. And as you used the telephone an ugly black deposit on the exposed wire in the bowl gradually ruined the connection, unless and until you scraped the toxic gunk away with a knife. The whole contraption looked and acted more like something from Dr. Frankenstein’s laboratory than a tool of modern communications.

Certainly Gardiner Greene Hubbard was thoroughly unimpressed with what he saw when he visited his protege’s workshop on March 13: he “seemed rather skeptical,” according to Bell’s laconic diary entry. Even now the telephone continued to strike him as a pointless distraction from the lucrative field of telegraphy. Seeing that they had probably lost the race to create a viable multiplex telegraph that improved on Joseph B. Stearns’s duplex design, Hubbard and Bell had recently agreed to pivot to what they called an “autograph” telegraph, which smacks of nothing so much as the fax machines of our own recent past. In an inadvertent echo of Samuel Morse’s original conception of the telegraph as a sort of electronic printing press, the autograph telegraph would allow an entire document to be “typeset” electronically and sent down the wire, using multiplexing to increase the transmission speed. To be sure, the idea was visionary in its way, but it was also most likely unachievable in the context of 1876, especially by one of Bell’s modest technical skills. At any rate, progress on it had been painfully slow. Yet Hubbard’s heart remained set on it.

Hubbard wrote to Bell shortly after his visit that he should devote himself exclusively to the autograph telegraph: “If you would work steadily on one thing [emphasis original] until you had perfected it, you would soon make it a success. While you are flying from one thing to another you may accidentally accomplish something, but you probably will never perfect anything.” Then he brought out his big gun: he persuaded his daughter Mabel to write to the lovelorn Bell that she could never think of marrying him until he had honored his agreement with her father to create the autograph telegraph. Bell was devastated. “I want to marry you, darling, because I love you,” he wrote in reply. “I wish to feel that you would marry me for the same reason.”

The ruthless pressure Hubbard was applying wasn’t quite enough to get Bell to abandon telephony altogether. But, not knowing how to package up his variable-resistance transmitter in some way that didn’t involve a lung-scalding bowl of acidulated water, he did lose faith on that front, returning to his older researches into the possibilities of unpowered magnetic-induction transmission. Within weeks, he and Watson had built a magnetic-induction telephone that could also transmit intelligible speech. Continuing with this method, which required no messy bowls of acidulated water and easily permitted a bi-directional conversation over a single wire, struck him as the most reasonable way forward. Bell would spend the rest of his fairly brief-lived career as an inventor in the fields of telegraphy and telephony chasing down the blind alleys of the autograph telegraph and the magnetic-induction telephone, never returning to his stroke of genius of March 10, 1876.


Much of the 1876 Philadelphia World’s Fair was devoted to the wonders of technology. Here we see the Machinery Hall, where a colossal Corliss steam engine dwarfs the full-size locomotives lined up in front of it. The telephone, the most important of all the technologies to make their debut at the fair, was seen only by a select few and attracted little press attention at the time.

The period between the American Civil War and World War II was the heyday of the World’s Fairs, international exhibitions of science, invention, and industry on a lavish scale. The very first World’s Fair to be held in the United States took place from May 10 to November 10 of 1876. It was presented in honor of the nation’s centennial in Philadelphia, the city where the Declaration of Independence had been signed. Hubbard used his connections to secure Bell a slot at a by-invitation-only demonstration of the latest techniques in telegraphy, which was to take place on June 25.

The day in question proved a brutally hot one; the air inside the temporary auditorium that had been erected on the fairgrounds was stifling. With no commercial record and no name recognition, the Bell Patent Association was relegated to the very last presentation of a long program of them. By the time Alexander Graham Bell took the stage, following such men of distinction as Elisha Gray, the audience of scientific, business, and political luminaries — among them was none other than William Thomson, the principal technical architect of the first transatlantic telegraph cable — was positively lethargic. While 2000 miles to the west Lieutenant Colonel George Custer was launching his ill-fated attack at Little Big Horn, Bell droned on about multiplex telegraphy and the autograph telegraph to a bored audience who had already heard enough of that sort of thing on this day. Then, just before he finished, he said that he would like to demonstrate another invention that was still “in embryo.”

Showing a flair for showmanship which his presentation to this point had never so much as hinted at, Bell invited Thomson to join him onstage, seating him before a table on which lay something that looked for all the world like a useless lump of iron. He told his august guinea pig to press the lump to his ear, then ran to a room behind the stage where its twin lay hidden. He began to declaim into it the famous soliloquy from Hamlet — “To be or not to be, that is the question” — in his dulcet Scottish brogue, itself a tribute to his family’s tradition of research in elocution. Onstage, Thomson’s face lit up in astonishment. Forgetting himself completely in the moment, the distinguished scientist jumped up and ran off like a schoolboy in search of Bell, leaving the audience perplexed as to what was going on here.

Bell’s next guinea pig made it clear to everyone. Emperor Pedro II of Brazil was something of a celebrity throughout the Americas, a portly, jolly man who looked and acted rather like Santa Claus, whose down-to-earth humanity belied his majestic station. “Dom Pedro,” as he was known, pushed the lump ever tighter to his ear and screwed up his face in concentration. Then he leaped up from his seat. “I hear! I hear!” he shouted in his broken English. Then, in Portuguese: “My God! It talks!” The room erupted in pandemonium. Forgetting about the heat and the long day stretching up to this point, the audience detained Bell for hours; every single one of them insisted on having his own chance to try out Bell’s magical telephone. The reaction finally convinced Hubbard that it was the telephone rather than Bell’s experiments in telegraphy that could make them both a fortune. He forgot everything he had ever said about his protege’s misplaced priorities. From this day forward, it would be full speed ahead on the telephone alone.

When he returned home to Britain, William Thomson said that the telephone had been the “most wonderful thing” he had seen at the Centennial Exhibition. Still not grasping that Bell’s invention was so revolutionary as to deserve a name of its own, he called it “the greatest marvel hitherto achieved by the electric telegraph,” noting as well that it had been “obtained by appliances of quite a homespun and rudimentary character.” (“I have never quite forgiven Sir William for that last sentence,” Thomas Watson would later remark with a wink.) But the public at large was slower to catch on, largely because not a single member of the mainstream press had attended the telephone’s coming-out party; journalists had all assumed that the day would contain nothing but incremental, fairly plebeian improvements on the existing technologies of telegraphy, interesting for those in the trade no doubt but hardly riveting for the general reader.

Still, word that something kind of amazing was afoot did slowly begin to spread. On August 3, Hubbard arranged to borrow a five-mile stretch of existing telegraph line connecting the towns of Mount Pleasant and Brantford in Ontario, and Bell conducted the first demonstration of his telephone to use outdoor wires that crossed a non-trivial distance. On October 9, again using a borrowed telegraph line, Bell and Watson had the first two-way conversation at a distance, speaking across the Charles River that separates Boston from Cambridge. On November 27, they communicated over the sixteen miles that separate Boston from Salem; they were able to extend the range this far by shifting from electromagnetic transmitters, relying upon a residual electrical charge from a battery, to more powerful permanent magnets that had no need at all for a battery.

On January 30, 1877, Bell was awarded a second telephony patent, one that much more fully described his design for a magnetic-induction telephone than had the previous one. By now the press was well and truly onto the story. “Professor Bell,” wrote the Boston Herald after the November 27 test, “doubts not that he will ultimately be able to chat pleasantly with friends in Europe while sitting comfortably in his Boston home.”

But such accommodating journalism was rare. Taking their lead from Western Union and the other established powers in the telegraph industry, most reporters treated the telephone as a novel curiosity rather than a supplement to — much less a threat to — the extant telegraph network. And there was in truth ample reason for skepticism. Even with the best permanent magnets Bell and Watson could find, the voices that came down their wires were whisper-faint. Ironically given Bell’s lifelong dedication to helping the deaf participate in the society around them, they were audible and decipherable only by people like him with excellent hearing. A comparison with that first transatlantic telegraph cable of 1858 is apt: these first telephones worked after a fashion, but they didn’t work all that well. In practice, most people tended to spend most of their time screaming “What did you say?” into them; the wonder the telephone initially provoked tended to shade with disarming speed into rank frustration. In his personal journal, Thomas Watson didn’t shy away from acknowledging the magnetic-induction telephone’s infelicities: it “would talk moderately well over a short line, but the apparatus was delicate and complicated and didn’t talk distinctly enough for practical use.”

Hubbard too showed signs of losing heart. At one point in late 1876, he reportedly asked Western Union whether they would be interested in buying Bell’s telephone lock, stock, and barrel for $100,000. He was turned down flat.

Bell lacked the requisite patience for the sort of slow, plodding laboratory work that might have improved his telephone, but he still needed to bring some money in for himself and Hubbard if he was to win the hand of the fair Mabel. So, he found an alternative to which his personality was more naturally suited: he hit the traveling-exhibition circuit with Watson in tow, crisscrossing the Northeast through much of the first half of 1877 like a boffinish P.T. Barnum. After his magic-lantern slideshow — the nineteenth century’s equivalent to Microsoft PowerPoint — he would present telephonic performances by brass bands, string quartets, opera singers, or church organs — the louder the racket they could make, the better — while his audience strained their ears to make sense of what they were hearing, or thought they heard. The disembodied human voices especially were fraught with sinister implications for many of those who assembled. In fact, the delicious thrill they provoked was doubtless a big part of the reason that audiences paid good money for a ticket; the seances of Spiritualism were becoming all the rage in the broader culture at the time. The Providence Star noted that it was “difficult, hearing the sounds out of the mysterious box, to wholly resist the notion that the powers of darkness are somehow in league with it.” “Had the hall been darkened,” wrote the Manchester Union, “we really believe some [from the audience] would have left unceremoniously.” The Boston Advertiser called the demonstration “weird”; the New York Herald declared it “almost supernatural.”


A Bell magnetic-induction “box” telephone from 1877. The cone mounted on the end served as both transmitter and receiver, necessitating some dexterous juggling on the part of the user.

The proprietors of the telephone are now prepared to furnish telephones for the transmission of articulate speech through instruments not more than twenty miles apart. Conversation can easily be carried on after slight practice and with occasional repetition of a word or sentence. On first listening to the telephone, though the sound is perfectly audible, the articulation seems to be indistinct. But after a few trials the ear becomes accustomed to the peculiar sound.

— The first advertisement for the Bell telephone, May 1877

By the late spring of 1877, Bell and Watson’s roadshow showed signs of running out of steam. It seemed they had to put up or shut up: the partners needed either to make a serious attempt to commercialize the telephone or just move on with their lives. After much debate, they chose the former course. That May, they signed their first customer, an enterprising banker named Roswell C. Downer, who paid for a telephone line connecting his home with his office. This harbinger of the modern condition was followed by no fewer than 600 more of his ilk by August 1. All of the connections were point-to-point setups linking one telephone to exactly one other telephone. But one decision the partners made would prove crucial to the eventual development of a more flexible telephone network: they leased telephones rather than sold them to their customers, and retained ownership and control of the cables binding them together as well. To state the case in modern terms, the telephone industry was to be a service rather than a hardware provider.

Each of these early telephones looked like a block of wood with a hole on one end and some wire sticking out the other. After shouting into the hole, one then had to shift it quickly to one’s ear to catch the response. “When replying to communication from another, do not speak too promptly,” pleaded the instruction manual. “Much trouble is caused from both parties speaking at the same time. When you are not speaking, you should be listening.” Being completely unpowered, these first telephones had no ability to ring; if someone didn’t happen to be standing at the other end when you shouted down the line, you were just out of luck. They were so heavy that using them was a veritable workout; Thomas Watson described the experience as akin to holding a suitcase up to one’s ear for minutes at a time. And yet there was a reasonably substantial group of people willing to pay for the dream of being in instant voice communication with others a considerable distance away, however circumscribed the reality of the telephone in service might have been.

The summer of 1877 was an exciting one for Alexander Graham Bell. On July 9, the Bell Telephone Company was formed, superseding the old Bell Patent Association. Two days later, he was finally allowed to marry Mabel. And on August 1, the Bell Telephone Company issued its first 5000 shares: 1497 of them to the mostly silent partner Thomas Sanders; 1497 to the young woman who was now known as Mabel Bell; 1387 to Gardiner Hubbard; 499 to Thomas A. Watson; 100 to Hubbard’s wife; ten to Hubbard’s brother; and all of ten shares to Bell himself, who in the throes of his newlywed bliss had signed all of the rest that he had coming over to his wife.

Shortly thereafter, Alexander Graham and Mabel Bell sailed for Britain, both to enjoy an extended honeymoon — it was Bell’s first return to his homeland since his emigration seven years before — and to act as ambassadors for the telephone on the other side of the Atlantic. In the latter capacity, they demonstrated it to Queen Victoria on January 14, 1878. There were some problems getting the connection going over the borrowed telegraph wire, and the queen’s attention began to wander. But suddenly Bell heard through the gadget the voice of a woman he had hired to sing “Kathleen Mavourneen,” one of the queen’s favorite ballads. In his excitement, he reached out and grabbed her by the arm. Everyone in the room gasped — but Queen Victoria didn’t even seem to notice, merely pressed the box to her ear and listened with a rapt expression. She wrote in her diary that night that Bell’s telephone was “most extraordinary.”

The audience with the queen created a widespread frisson of excitement over the telephone in Britain the likes of which had ironically not yet been seen in its birth country, where its thunder had recently been stolen by the announcement of Thomas Edison’s phonograph. Toy telephones became popular on Britain’s high streets. “Wherever you go,” wrote Mabel Bell in a letter back home to her mother, “on newspaper stands, at news stores, stationers, photographers, toy shops, fancy-goods shops, you see the eternal little black box with red face, and the word ‘Telephone’ in large black letters. Advertisements say that 700,000 have been sold in a few weeks.” If Bell Telephone could have leased anywhere near as many of the real thing back in the United States, everyone involved would have been thrilled.

But the harsh truth was that, even as the Bells were doing their public relations overseas, the company that bore their name was floundering in the domestic market. Many or most of the customers who had initially signed up in such gratifying numbers were dissatisfied by the underwhelming reality of their telephones, and no number of patiently pedantic instruction manuals was going to get them to love a device that just didn’t work all that well. Worst of all, there was now a formidable competitor about to enter the field with a telephone much better than the one being peddled by Bell, thanks to the inventive genius of one Thomas Edison.


Thomas Edison at about age 30, when he was active in telegraphy and telephony and also in the process of inventing the phonograph.

Thomas Alva Edison was born in Ohio on February 11, 1847, the seventh and last child of parents who had just been driven out of British Canada for backing an insurrection against the provincial government there. When his father wasn’t rebelling, he was an odd-jobber and striver whose schemes never quite seemed to pan out. His mother was a former schoolteacher; almost all of the limited education Edison received came from her in the family home. Already at age twelve, he started riding the rails, working as a newsboy on trains. Showing the same entrepreneurial streak as his father but demonstrating more ability to turn his schemes into profits, he soon became a full-fledged mobile shopkeeper, buying snacks, books, and magazines cheap and selling them at a mark-up to passengers. He even published his own newspaper for a time from a compartment on the train with the help of an old printing press he had acquired. But it was the telegraph houses that stood everywhere the trains traveled that really captured the teenage Edison’s interest.

He happened to be sitting on a station platform one day when he saw a young boy wander onto the tracks in front of an approaching locomotive. Edison leaped to the rescue, successfully. The boy’s father happened to be the telegraph master at the station. The grateful man agreed to teach Edison some of the basics of telegraphy, and also lent him a number of books on the subject. Edison studied the description of Morse Code found therein with fanatical dedication — “about eighteen hours a day” was his own claim later in life — and got his first paying gig as a telegraph operator in Stratford Junction, Ontario, at the age of sixteen.

He quickly became a star among the telegraph fraternity. The speed with which he could decode messages coming down the wire became legendary; if one of his colleagues was sick, he could do that colleague’s job as well as his own, decoding two separate messages from two skilled senders simultaneously. And, because even brainy boys will be boys, he became equally legendary for his practical jokes. One of his favorites combined a wet floor with an induction coil to give his fellow operators a very unpleasant electrical shock as soon as they sat down in front of their Morse keys.

As that anecdote would indicate, Edison was fast becoming more than just a skilled end-user of the telegraph. He was fascinated by electrical technology in the abstract in a way that Alexander Graham Bell would never be; he lived and breathed it, experimenting and tinkering endlessly whenever he wasn’t on duty in a telegraph house. He applied for his first patent at age 21, for an automated vote recorder that he imagined would be used by the United States Congress; each representative need only push either the aye or the nay button installed at his seat, and the results would be automatically tabulated and displayed on a big dial mounted on the wall. But no one in the capital proved to be interested in it — because, as it was belatedly explained to Edison, the slow, inefficient method of voting that was currently used was actually an essential part of the legislative process, providing as it did ample opportunities to lobby, whip, and negotiate for votes. He took away from the experience a lesson that would never leave him: an inventor who wishes to be successful must ask what the people want, and invent that thing instead of the thing that makes him feel clever. With this lesson in hand, Edison would go on to become history’s archetype of the commercially successful inventor.

Though he was rough-hewn in demeanor and largely uneducated in anything other than the vagaries of mechanisms and circuits, Edison nonetheless displayed a peculiar ability to talk to moneyed men of business and win their support. In 1869, he retired from his career as a telegraph operator and became a sort of telegraphy consultant, helping his clients to improve their systems and processes. In 1874, he scored his first major triumph as an inventor of things that people really wanted, and crushed the first telegraphy dream of Alexander Graham Bell in the process: he patented a quadruplex telegraph with the ability to double again the throughput of Joseph B. Stearns’s recently introduced duplex system. Unlike Bell’s design, which stamped each of its signals with a unique frequency on the sending end and used these as a way to sort them out again on the receiving end, Edison’s system combined Stearns’s innovations with differing electrical polarities that served as another way of separating signals from one another. Most importantly, his system consistently worked, which was more than could ever be said for Bell’s.

The quadruplex telegraph catapulted him into the next stage of his career. In the spring of 1876, Edison moved into his soon-to-be-famous laboratory in Menlo Park, New Jersey, from which he would invent so many of the trappings of our modern world. Later that year, as we’ve seen, press notices about Bell’s magnetic-induction telephone began to appear. Edison had been very hard of hearing since boyhood, which meant that Bell’s invention as currently constituted was useless to him. So, he decided to make a better telephone, one that even he would be able to use without difficulty.

There no longer remained any mystery about the best theoretical approach to such a goal. Clearly the key to a louder telephone was the use of a variable-resistance transmitter instead of one that relied on magnetic induction; this Bell himself had demonstrated before losing heart. Bell had given up because he didn’t know of any substances other than acidulated water whose resistance could be made to vary in tandem with the vibrations of a diaphragm that was being struck by sound waves issuing from a human mouth. But Edison was possessed of both a much wider range of electrical knowledge and a methodical patience which eluded the high-strung Bell.

Edison made his own experimental telephone, and throughout most of 1877 used it to test many solid or semi-solid materials, keeping careful notes on the results. He tried paper, lead, copper, graphite, felt, and platinum among other substances, all of them in countless forms, combinations, and configurations, conducting over 2000 separate trials in all. In the end, he wound up back at the very first material he had tested: carbon, in the form of lampblack — i.e., residual soot scraped from a lamp or chimney. Lampblack was, he judged, as close as a solid could come to the properties of acidulated water.

Edison’s final design for a variable-resistance transmitter used a cone with a thin metal diaphragm installed at its base, much like Bell’s extant telephones. But instead of a magnet, his diaphragm rested atop a sealed container of lampblack, through which a powered electrical circuit flowed. As the diaphragm vibrated in rhythm with the user’s words, its movements varied the resistance of this circuit to create a facsimile of the sound wave in electrical current — just as had the acidulated water in Bell’s experiment of the previous year, but in a far more practical and reliable way. An electromagnet and diaphragm, designed by a prolific telegraph engineer and occasional associate of Edison named George Phelps, served as a receiver at the other end of the line in lieu of Bell’s metal reed, giving much better fidelity. Edison’s telephone did have the disadvantage of being unidirectional; a two-way conversation required two wires, each fitted with its own transmitter and receiver. Then again, such a setup meant that the user no longer needed to keep moving the telephone between mouth and ear; she could speak and listen at the same time, and do the latter without straining her ears.
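
The principle lends itself to a back-of-the-envelope model. The Python sketch below assumes, purely for illustration, a linear relationship between sound pressure and the carbon’s resistance; the voltage, resistance, and coupling numbers are all invented:

```python
import numpy as np

# Toy model of a carbon (lampblack) transmitter: sound pressure on the
# diaphragm compresses the carbon, lowering its resistance, and Ohm's law
# turns that into a varying current. All values are illustrative.
V = 6.0    # battery voltage on the powered circuit (assumed)
R0 = 50.0  # resting resistance of the lampblack button, in ohms (assumed)
k = 20.0   # pressure-to-resistance coupling (assumed)

t = np.linspace(0.0, 0.01, 1000)              # ten milliseconds of "speech"
pressure = 0.4 * np.sin(2 * np.pi * 440 * t)  # a stand-in for the voice

R = R0 - k * pressure  # compression lowers resistance
I = V / R              # the current now mirrors the sound wave

print(f"current swings between {I.min():.4f} and {I.max():.4f} amperes")
```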

All told, it was a tremendous breakthrough, one with the potential to increase not only the volume but also the range of the telephone. Edison applied for a patent on his variable-resistance transmitter as early as April 27, 1877, when he was still very much in the process of inventing it. After much back and forth, the patent was finally granted in February of 1878. By the time it was, Edison himself had become famous, thanks not to the telephone but to the phonograph, which he had been working on concurrently with his experiments in telephony.


An early Western Union telephone. The user spoke into the round piece on the left, whilst holding the star-shaped receiver on the right up to her ear.

Already two months before the final patent on Edison’s transmitter was issued, Western Union cut a deal with the inventor for the right to use it, forming a new subsidiary called the American Speaking Telephone Company to put it into service. A David-and-Goliath fight was now in the offing between Bell Telephone and Western Union. The latter corporation was in many ways a model for the other great trusts in this emerging Gilded Age of American business; for all intents and purposes it owned telegraphy writ large in the United States, as it seemed it now intended to own telephony. To make that happen, it had a market capitalization of $41 million (the equivalent of $1.4 billion in 2022 currency), net annual profits of more than $3 million, and established rights-of-way for its wires in every corner of the nation. And now it had a telephone that was by any objective standard vastly superior to the one being peddled by its puny rival.

The telephones which Western Union began leasing to customers in 1878 were the first in commercial service to be recognizable as such things to modern eyes, having separate attachments for talking and listening. A variable-resistance telephone of course required a powered circuit; in lieu of expensive and maintenance-heavy batteries, the end-users provided this power via elbow grease, by cranking from time to time a magneto attached to the telephones. It was a small price to pay for a device that was ergonomically superior and, most importantly of all, louder than anything that Bell Telephone could offer. For the first time, it was possible to have something resembling an ordinary conversation using these telephones.

Justifiably unnerved by these developments, Gardiner Hubbard asked a businessman named Theodore N. Vail to take over as head of Bell Telephone. Only 32 years old at the time he agreed to do so, Vail had, like Thomas Edison, gotten his start as an ordinary telegraph operator. But his genius ran in the direction of finance and management rather than the nuts and bolts of technology itself. He left his prestigious and well-paid post as head of the Railway Mail Service largely because he was bored with it and wanted a challenge. Whatever else one could say about it, Bell Telephone certainly qualified on that front.

After arriving at the company’s recently opened headquarters in New York City, Vail sat down to consider what he had gotten himself into. He realized that victory in the war with Western Union would have to come through the courts; as matters currently stood, Bell Telephone had no chance of winning via the free market alone. The patent situation was confusing to say the least. Alexander Graham Bell had patented the first working telephone, but had mentioned the principle of variable resistance that could make the telephone truly usable only in an addendum hand-scrawled in the margin of that patent. And now Thomas Edison instead of Bell had patented the carbon transmitter that was the key to a practical variable-resistance telephone, suitable for mass production and deployment. It seemed that Bell Telephone and Western Union each owned half of the telephone. This clearly wouldn’t do at all.

So, Vail had Thomas Watson troll through the records at the patent office, looking for some way out of this impasse. In an incredible stroke of luck, he found just what they needed. On April 14, 1877 — thirteen days before Edison had filed a patent application for his variable-resistance transmitter — a German immigrant, janitor, and amateur inventor named Emile Berliner had filed for a patent caveat on a variable-resistance transmitter of his own. It used a different approach than Edison’s design: a wire inside it was only loosely screwed onto its terminal, which allowed the point of contact to vibrate in tandem with the diaphragm mounted above the wire, thus varying the resistance of the circuit. Berliner’s design was, everyone could agree, nowhere near as effective as Edison’s finalized carbon transmitter — but it had come first, and that was the important thing. Vail tracked down Berliner, who was still pushing a broom for a living, and hired him at a generous salary in return for the rights to his patent caveat. Vail’s intention was never to put Berliner’s transmitter into production, but rather to create a plausible legal argument that the principle of the variable transmitter, like all of the other aspects of a practical telephone, was now the sole intellectual property of the Bell Telephone Company. The new variable-resistance telephones which Bell began sending to its customers as soon as it had acquired the rights to Berliner’s transmitter actually cloned Edison’s carbon-transmitter design.

On September 12, 1878, the Bell Telephone Company filed for an injunction against Western Union’s telephones in the Circuit Court of the United States for the District of Massachusetts. Following some preliminary skirmishing, Western Union, whose telegraphy business still dwarfed that of telephony, decided on November 10, 1879, that the telephony sideshow just wasn’t worth the trouble. It agreed to give up all of its claims to telephone technology and to get out of the telephone business altogether in return for 20 percent of all of its rival’s earnings from telephony for the lifetime of the patents around which the whole conflict had revolved. It was the Gilded Age in a nutshell: one established monopolist politely made space for another, would-be monopolist in a related but separate field.

But it wasn’t the end of the disputes over the origins of the telephone. Far from it: over the course of the following decade, Bell Telephone beat back some 600 separate legal challenges to its monopoly — including one from Elisha Gray, striking out on his own from Western Electric, the company he had co-founded. The record of court filings came to fill 149 thick volumes. One of the cases went as far as the Supreme Court in March of 1888, where it was won by Bell Telephone by the thinnest of possible margins: the vote was four to three in favor of the validity of the Bell patents. By then, however, the point verged on becoming moot: Bell Telephone now had a well-nigh unassailable head start over any potential competition, and the patents were due to expire in a few years anyway.

Alexander Graham Bell himself regarded the realities of the telephone business with ever-increasing distaste, and felt himself ever more estranged from the enterprise that bore his name. And to a large extent, the feeling was mutual: he had disappointed and angered his ostensible partners in Bell Telephone by, as they saw it, deserting them in their time of greatest need. He had been entirely absent from the American scene from August of 1877 until September of 1878, when he grudgingly agreed to return from Britain to act as a witness in court. “Business is hateful to me at all times,” he wrote to Gardiner Hubbard on one occasion. “I am sick of the telephone and [wish to] have done with it altogether, excepting as a plaything to amuse my leisure moments,” he wrote on another. “Why should it matter to the world who invented the telephone, so long as the world gets the benefit of it?” he wrote on yet a third occasion. “I have not kept up with the literature of telephonic research,” he remarked dismissively when he did finally turn up in person for the legal proceedings. These were not the messages which the men behind a company girding for the battle for its life — a company with the petulant messenger’s name on the marquee — wished to hear.

Alexander Graham and Mabel Bell gradually cashed out of said company between 1879 and 1883. They were left wealthy, but not extraordinarily so. Ditto Sanders, Hubbard, and Watson, all of whom likewise sold most of their shares in Bell Telephone before the company was ten years old. “No mighty, self-perpetuating fortunes came out of telephony in America,” noted the historian John Brooks in 1975. “No counterpart to a Ford, Rockefeller, or Duke now survives as a ‘telephone heir.'” But this shouldn’t be construed to imply that the telephone didn’t make an enormous amount of money for Bell Telephone and others in the decades after its founders left the scene.

Alexander Graham Bell continued for the rest of his life to split his time between invention — he dabbled with somewhat mixed results in everything from medical technology to aviation, nautical transport to cinema — and his passion for improving the lot of the deaf. Mabel Bell provided a suitable epitaph when he died in 1922 at the age of 75: “He is big enough to stand as he is, very imperfect, lacking in things that are lovely in other men, but a good big man all the same…” It is true that, in a juster or at least more painstakingly accurate world, we might all agree to call the telephone a joint triumph, to be credited not only to Bell but to Edison, Gray, and perhaps some worthy others whose names have appeared not at all or only in passing in these articles. But history in the world we do have doesn’t like to become muddied with so many equivocations. Thus it has chosen to credit the telephone to Alexander Graham Bell alone. And, if one man must be chosen, he is as good a choice as any.

(Sources: the books Alexander Graham Bell and the Conquest of Solitude by Robert V. Bruce, Alexander Graham Bell: The Life and Times of the Man Who Invented the Telephone by Edwin S. Grosvenor, Reluctant Genius: Alexander Graham Bell and the Passion for Invention by Charlotte Gray, Telephone: The First Hundred Years by John Brooks, and The Wizard of Menlo Park: How Thomas Alva Edison Invented the Modern World by Randall E. Stross. Online sources include Bob’s Old Phones by Bob Estreich and “George M. Phelps” by John Casale on his website Telegraph History.)

A Web Around the World, Part 4: From Telegraphy to Telephony

For ten very odd days during the late summer of 1866, the entire world directed its attention toward the tiny Newfoundland fishing village of Heart’s Content, population about 100 souls. Then the Great Eastern sailed again, and the telegraph house there became just another unnoticed part of the world’s communications infrastructure, one of those thousands upon thousands of installations that no one thinks about until they stop working. The once wondrous Atlantic telegraph cable itself joined the same category not long after, almost as soon as the Great Eastern completed the final part of its assignment for the year: that of fishing the broken cable from the previous year up out of the ocean’s depths and completing its run to Newfoundland. Thus well before the end of 1866, there were two Atlantic cables in service, the second providing additional bandwidth and, just as importantly, redundancy in the case of a break in the first. Never since has the link between the two continents been severed.

The Anglo-American Telegraph Company’s final bill for this permanent remaking of the time scale of international diplomacy, business, and journalism came to £2.5 million, equivalent to about £320 million or $430 million in 2022 currency; this total includes all of the earlier failed attempts to lay the cable, but ignores the costs to American and British taxpayers entailed by the loaning of the Niagara and the Agamemnon and many other forms of government support. Thanks more to Cyrus Field’s stubbornness than any grand design, the transatlantic cable had become an international infrastructure project more expensive than any yet undertaken in the history of the world. And yet in the long term the cost of the cable was paltry in comparison to how much it did to change the way all of the people of the world viewed themselves in relation to the rest of their planet.

In the shorter term, however, this latest, working transatlantic cable was greeted with fewer ecstatic poems and joyful jubilees than the sadly muddled one of 1858 had enjoyed. The reaction was especially muted in the United States. Perhaps the long years of war that separated the two events had made those old dreams of a new epoch of international harmony seem hopelessly quaint, or perhaps the impatient Americans just thought it was high time already that this cable they’d been hearing about for so long started working properly. One of the few eloquent exceptions to the rule of blasé acceptance was provided by a prominent New York attorney named William Maxwell Evarts. He noted the inscription on the base of a statue of Christopher Columbus in Madrid: “There was one world. He said, ‘Let there be two.’ And there were two.” Now, said Evarts, Field had dared to reverse Columbus: “There were two worlds. He said, ‘Let there be one.’ And there was one.”

In lieu of more windy speeches, the working transatlantic telegraph prompted “a commercial revolution in America,” as Henry Field puts it — prompted a whole new era of globalized trade which has changed more in magnitude than in character in all the years since:

Every morning, as [Cyrus] Field went to his office [in New York City], he found laid on his desk at nine o’clock the quotations on the Royal Exchange at twelve! Lombard Street and Wall Street talked with each other as two neighbors across the way. This soon made an end of the tribe of speculators who calculated on the fact that nobody knew at a particular moment the state of the market on the other side of the sea, a universal ignorance by which they profited by getting advances. Now everybody got them as soon as they, for the news came with the rising of each day’s sun, and the occupation of a class that did much to demoralize trade on both sides of the ocean was gone.

The same restoration of order was seen in the business of importations, which had been hitherto almost a matter of guess-work. A merchant who wished to buy silks in Lyons sent his orders months in advance, and of course somewhat at random, not knowing how the market might turn, so that when the costly fabrics arrived he might find that he had ordered too many or too few. A China merchant sent his ship round the world for a cargo of tea, which returned after a year’s absence bringing not enough to supply the public demand, leaving him in vexation at the thought of what he might have made “if he had known,” or, what was still worse, bringing twice too much, in which case the unsold half remained on his hands. This was a risk against which he had to be insured, as much as against fire or shipwreck. And the only insurance he could have was to take reprisals by an increased charge on his unfortunate customers.

This double risk was now greatly reduced if not entirely removed. The merchant need no longer send out orders a year beforehand, nor order a whole shipload of tea when he needed only a hundred chests, since he could telegraph his agent for what he wanted and no more. With this opportunity for getting the latest intelligence, the element of uncertainty was eliminated and the importer no longer did business at a venture. Buying from time to time, so as to take advantage of low markets, he was able to buy cheaper, and of course to sell cheaper. It would be a curious study to trace the effect of the cable upon the prices of all foreign goods. A New York merchant who has been himself an importer for forty years tells me that the saving to the American people cannot be less than many millions every year.

That said, it was the well-heeled who most directly benefited from the Atlantic cables in their early months and years. For all of William Thomson’s work, the bandwidth of each of them was still limited to little more than twelve words per minute, making them a precious resource indeed. The initial going rate for sending a message between continents was a rather staggering £1 or $7.50 per word, at a time when a skilled craftsman’s weekly wage might be around $10.

But that was merely the curse of the early adopter, something with which a technology-mad world would become all too familiar over the century and a half to come. In time, the pressure of competition combined with ever-improving cables and systems brought the price down dramatically. The Anglo-American Telegraph Company’s first competitor entered the ring as early as 1869, when a French cooperative laid a cable of its own from Brest to Newfoundland and then on to Boston. By 1875, a transatlantic telegram cost a slightly more manageable $1 per word; by 1892, the price was down to 25¢ per word — still a stretch for the average American or European to use for private correspondence, but cheap enough for markets, businesses, governments, and news organizations to use very profitably, given their economies of scale. Soon “the wire” was synonymous with news itself.

By 1893, no fewer than ten transatlantic telegraph cables were in service, all of them transmitting at several times the speed of the cables of 1866; just seven years later, the total was fifteen. Other undersea cables pulled India, Australia, China, and Japan into this first worldwide web. It was now possible to send a message from any reasonably sized city in the world all the way around the world, until it made it back to its starting point from the opposite direction just a few hours later.

Henry Field again, writing in 1893:

The morning news comes after a night’s repose, and we are wakened gently to the new day which has dawned upon the world. That which serves to such an end, which is a connecting link between countries and races of men, is not a mere material thing, an iron chain, lying cold and dead in the icy depths of the Atlantic. It is a living, fleshly bond between severed portions of the human family, thrilling with life, along which every human impulse runs swift as the current in human veins, and will run forever. Free intercourse between nations, as between individuals, leads to mutual kindly offices that make those who at once give and receive feel that they are not only neighbors but friends. Hence the “mission” of submarine telegraphy is to be the minister of peace.

Sentiments like these had once again become commonplace even in the United States by the end of the nineteenth century, as the memories of civil war faded. It was now widely believed that the developed world at least had become too intimately intertwined, thanks largely to the telegraph, to ever seriously contemplate war again. The bloody twentieth century to come would prove such sentiments sadly naïve, but it was a nice thought while it lasted. (Internet idealists would of course be slowly and painfully disabused of much the same sentiments a century later; human technology, it seems, cannot so easily overcome human nature.)

By the time the century turned, the machines and men who had created this revolution in communications were mostly gone.

The Great Eastern, that colossal white elephant that had finally found a purpose with the laying of the first transatlantic cables, continued in its new role for some time thereafter, laying three further cables across the Atlantic and still more of them in the Indian Ocean, the Pacific, and the Mediterranean. But its new career was ended by the completion of the CS Faraday, the first ship designed from the hull up for the purpose, in 1874; this vessel could lay cables far more cheaply and efficiently. Cast adrift on the waters of life once more with no clear purpose, the Great Eastern spent some time as a floating concert hall and tourist attraction, even becoming at one point a mobile billboard sailing up and down the Mersey. Its glory days now a distant memory, the rusting hulk was sold for scrap in 1888.

The Great Eastern near the end of its days, when it was reduced to serving as a floating billboard for Lewis’s department stores.

Charles Bright died the same year at age 55, after a high-profile public career as a proponent of electrical technology in all its forms and a three-year stint in the House of Commons.

William Thomson was blessed with a longer, even more spectacular career that encompassed a diverse variety of achievements in the theoretical and applied sciences, from atomic physics to geology, as well as five years spent as the president of the Royal Society. In 1891, Queen Victoria ennobled him, making him Lord Kelvin, after the river that flowed through the University of Glasgow where he taught and researched. He didn’t die until 1907 at age 83, whereupon he was given a funeral in Westminster Abbey commensurate with his status as the grand old man of British science. A system for measuring temperature on an absolute thermodynamic scale, which he had first begun working on well before the transatlantic cable, became known after his death as “the kelvin scale” by the universal consensus of the international scientific community.

His erstwhile arch-rival Wildman Whitehouse, on the other hand, shrank from public life after it became clear to everyone that he had been wrong and Thomson had been right about the best design for the first Atlantic cable. When Whitehouse died in 1890 at age 73, the event went entirely unremarked.

Cyrus Field was made richer than ever for a while by the transatlantic telegraph. He splashed his millions around Wall Street both in the hope of making more millions and out of that spirit of idealism that was such an indelible part of the man’s character. For example, he funded much of the construction of New York City’s “El” lines of elevated trains, the precursor to its current subway system, by all indications out of a simple conviction that the people of the city deserved better than “crowded to suffocation” streetcars. Prone as he was to prioritize his ideals over his pocketbook, he gradually fell back out of the first rank of Gilded Age money men. He died in 1892 at age 72, whereupon he was buried behind the family church in Stockbridge, Massachusetts. His unremarkable gravestone bears an epitaph that is as straightforward as the man himself:

Cyrus West Field, to whose courage, energy, and perseverance the world owes the Atlantic telegraph.

Samuel Morse, that brilliant but deeply flawed original motivating force behind the telegraph, left behind a more mixed legacy. Even as Field had been struggling to make the transatlantic telegraph a reality, Morse had taken to occupying himself mostly with litigation of one form or another; cases brought by him reached the Supreme Court on no fewer than fifteen separate occasions. When Morse died in 1872 at the age of 80, his private reputation inside a telegraph industry that publicly eulogized him wasn’t much better than that of the typical patent troll of today, thanks to his meanness about payments and credit. Thankfully, his telegraph patents had expired eleven years earlier, which had served to draw the worst of his venom. Morse’s design for the telegraph itself as well as for the Morse key and Morse Code had thus been freed to take on a life of their own independent of their inventor, as all important inventions eventually must.



In addition to changing the world in the here and now, those same inventions paved the way for the next stage in the evolution of the global village. What that stage might entail had begun to show itself one day in May of 1846, when the telegraph was still a curiosity and the idea of a transatlantic telegraph still a pipe dream. On the day in question, Joseph Henry — the most respected American theoretical scientist of telegraphy, whose advocacy had been so crucial for winning support for Morse’s design — happened to be visiting Philadelphia, where he was invited to witness a mechanical “speaking figure” created by a German immigrant named Joseph Faber. The automaton could, it was claimed, literally speak in recognizable English. Henry always took a certain ironic pleasure in revealing the fraud behind inventions that seemed too good to be true, a species to which he surely must have suspected Faber’s speaking figure to belong. But what he saw and heard that day instead thrilled him in a different way.

The astute German’s contraption took the physical form of a Turkish-looking boy sitting cross-legged on a table. Faber made it “talk” by forcing air through a mechanical replica of the human mouth, tongue, glottis, and larynx, which could be reconfigured on the fly to produce any of sixteen elementary sounds. By “playing” it on a repurposed organ keyboard, Faber could indeed bring his puppet to produce labored but basically comprehensible English speech. Joseph Henry was entranced — not so much by the puppet itself, which he rightly judged to be no more nor less than a clever parlor trick, but by the potential of combining mechanical speech with telegraphy. “The keys,” he noted, “could be worked by means of electromagnets, and with a little contrivance, not difficult to execute, words might be spoken at one end of the telegraph line which had their origin at the other.” It was the world’s first documented inkling of the possibility of a telephone — a tool for “distant speaking,” as opposed to the “distant writing” of the telegraph. That tool, when it came, would transmit the speech of real humans rather than a synthetic version of it, but Henry’s words were nonetheless prescient.

Many of the others who saw Faber’s automaton were less thrilled. The very idea of the human voice being reproduced mechanically had an occult aura about it in the mid-nineteenth century. It thus comes as little surprise that the legendary showman and conman P.T. Barnum, who specialized in all things uncanny and disturbing, recruited Faber and his artificial boy for one of his traveling exhibitions. In this capacity, the two made their way across the Atlantic to London’s Egyptian Hall. The description provided by one witness who saw them there sounds almost like an extract from a macabre tale by Edgar Allan Poe or H.P. Lovecraft:

The exhibitor, Professor Faber, was a sad-faced man, dressed in respectable well-worn clothes that were soiled by contact with tools, wood, and machinery. The room looked like a laboratory and workshop, which it was. The professor was not too clean, and his hair and beard sadly wanted the attention of a barber. I had no doubt that he slept in the same room as the figure — his scientific Frankenstein monster — and I felt the secret influence of an idea that the two were destined to live and die together. The professor, with a slight German accent, put his wonderful toy in motion. He explained its action: it was not necessary to prove the absence of deception. The keyboard, touched by the professor, produced words which slowly and deliberately in a hoarse sepulchral voice came from the mouth of the figure, as if from the depths of a tomb. It wanted little imagination to make the very few visitors believe that the figure contained an imprisoned human — or half human — being, bound to speak slowly when tormented by the unseen power outside.

As a crowning display, the head sang a sepulchral version of “God Save the Queen,” which suggested inevitably God save the inventor. This extraordinary effect was achieved by the professor working two keyboards — one for the words and one for the music. Never probably before or since has the national anthem been so sung. Sadder and wiser, I and the few visitors crept slowly from the place, leaving the professor with his one and only treasure — his child of infinite labour and unmeasurable sorrow.

Joseph Faber with his “Euphonia,” or speaking machine.

Alas, Joseph Faber met a fate worthy of an Edgar Allan Poe protagonist. Exploited and underpaid like all of P.T. Barnum’s entourage of curiosities, he committed suicide in 1850 on the squalid streets of London’s East End.

Before he did so, however, there came to his room in the Egyptian Hall one open-minded visitor who was more fascinated than appalled by the performance: a Scottish phonetician named Alexander Melville Bell, who had spent most of his life studying the mechanisms of speech in the cause of teaching the deaf to communicate with the hearing. This man’s son, who was still in the womb when his father saw Faber’s automaton, would go on to create a different form of mechanical speech, making his family name virtually synonymous with the telephone.


The young Alexander Graham Bell.

Alec is a good fellow and, I have no doubt, will make an excellent husband. He is hot-headed but warm-hearted — sentimental, dreamy, and self-absorbed, but sincere and unselfish. He is ambitious to a fault, and is apt to let enthusiasm run away with judgment. I have told you all the faults I know in him, and this catalogue is wonderfully short.

— Gardiner Greene Hubbard, writing to his daughter Mabel on the subject of Alexander Graham Bell

When a 23-year-old Alexander Graham Bell fetched up on North American shores from his hometown of Edinburgh on August 1, 1870, he resembled a sullen, lovesick adolescent more than a brilliant inventor. Earlier that year, his elder brother had died of tuberculosis. Devastated by grief, disappointed at the cool reception his techniques for teaching the deaf to read lips and to enunciate understandable speech in return had garnered in Britain, his father Alexander Melville had opted for a fresh start in Canada. The younger Alexander had initially agreed to join his father, mother, and widowed sister-in-law in the adventure, but almost immediately regretted it, thanks not least to a girl in Edinburgh whom he hoped to marry. But his pointed hints about his change of heart availed him nothing; his father didn’t let him off from his promise. On the passage over, young Alexander filled his journal with petulant musings about how “a man’s own judgment should be the final appeal in all that relates to himself. Many men do this or that because someone else has thought it right.”

But he wasn’t a malcontent by nature, and he soon made the best of things in the New World. Like his father, he would always consider his true life’s calling to be improving the plight of the deaf. Their dedication had a common source: Eliza Bell, Alexander Graham Bell’s mother, was herself so hard of hearing as to be effectively deaf. In April of 1871, her son became a teacher at the School for Deaf Mutes in Boston. A kindly, generous man at bottom, he approached his work there with an altruistic zeal. “My feelings and sympathies are every day more and more aroused,” he wrote home to his family. “It makes my very heart ache to see the difficulties the little children have to contend with.”

He wasn’t just empathetic; he was also effective. By combining instruction in elocution and lip-reading with sign language, he did wonders for many of his students’ ability to engage with the hearing world around them. He wrote articles for prestigious journals, and earned the reputation of something of a miracle worker among the wealthy families of New England, who clamored to employ him as a private tutor for their hearing-impaired children.

Worthy though Bell’s work as a teacher of the deaf was, it would seem to be far removed from the telegraph and other marvels of the burgeoning new Age of Electricity. But there was another side to Alexander Graham Bell. His interest in elocution in the abstract had led from an early age to an interest in the biological mechanisms of human speech, and possible ways of artificially reproducing them. When he was just sixteen, he and his now-deceased brother had made a crude duplicate of a human soft palate and tongue out of wood, rubber, and cotton; by manipulating it in just the right way whilst blowing through an attached tube, they could get it to say a few simple words like “Mama.” One day when they were playing with it on the stairwell outside the family apartment, a neighbor poked her head out to see “what was wrong with the baby”; they viewed this as a triumph. Now the boys’ focus shifted to the family dog. They trained it to growl on cue while they manipulated the poor, patient animal’s mouth and throat — and out came some semi-recognizable facsimile of, “How are you, grandmama?”

Needless to say, the young Alexander had listened to his father’s stories of Joseph Faber’s talking automaton with rapt attention. A fellow phonetician told him that another German scientist and inventor, Hermann von Helmholtz, had recently written a book on the possibility of synthetic speech. It explained how vowel sounds could be generated by passing electrical currents through different combinations of tuning forks. The operator sat behind a keyboard not dissimilar to the one used by Faber, and like him pressed different combinations of keys to make different sounds; the big difference was that, while Faber’s puppet was powered by compressed air, Helmholtz’s gadget was entirely electrical. But Bell didn’t read German, and so could do little more than look at the diagrams of Helmholtz’s device that were included in the book. This led to an important misunderstanding: whereas in reality each tuning fork was connected to the master keyboard via its own wire, Bell thought that one wire passed through all of the forks, and that it was the characteristics of the current on that wire — more specifically, its frequency — that caused some of them to ring out while others remained silent. “The notion was entirely mistaken,” writes the historian of telephony John Brooks, “but the mistake was an accident of destiny.”

Bell’s destiny became manifest on October 18, 1872, when he opened a Boston newspaper to see an article about the “duplex telegraphy” system of a local man named Joseph B. Stearns. An important advance in the state of the art of electrical communication in its own right, duplex telegraphy allowed one to send separate messages simultaneously in opposite directions along a single telegraph wire. In a world where telegraph congestion was becoming a major issue, this was a hugely significant gain in efficiency. Being quite fast and cheap to retrofit onto existing telegraph lines in busy areas, Stearns’s system would soon become commonplace. But already other inventors were beginning to think about how to go even farther: how to send even more messages simultaneously down a single wire. Oddly enough, Alexander Graham Bell, teacher of the deaf, became one of these.

Joseph Stearns’s ingenious system for duplex telegraphy, which inspired Alexander Graham Bell’s initial investigations in the field. B in the diagram above is an iron bar. The wire running from the local battery (b) is split in two. Both of these wires are wound around the bar, but only wire 1 goes on to connect with the station at the other end of the line; wire 2 runs directly to ground. An electromagnetic switch is connected to the bar at N, and to the other side of this switch is connected the receiving apparatus. Because a locally generated signal passes evenly through the bar, the bar does not become magnetically unbalanced, and thus does not activate this switch. But a signal originating from the other station passes through only half of the bar, magnetizing it and tripping the switch, which allows the signal to go on to the receiving apparatus.
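
The cancellation at the heart of the scheme reduces to a small truth table. The sketch below is a loose logical model of that balance, not a circuit simulation; the winding currents and the magnetization threshold are invented for illustration:

```python
# Loose logical model of Stearns's differential duplex. The receiving switch
# responds only to a net imbalance of magnetization between the two windings.
def receiver_clicks(local_keyed: bool, remote_keyed: bool) -> bool:
    # A locally keyed current splits evenly between the two windings and
    # cancels out; a remote current arrives through winding 1 alone and
    # unbalances the bar. All current values here are illustrative.
    winding1 = (0.5 if local_keyed else 0.0) + (1.0 if remote_keyed else 0.0)
    winding2 = 0.5 if local_keyed else 0.0
    return abs(winding1 - winding2) > 0.25

for local in (False, True):
    for remote in (False, True):
        state = "clicks" if receiver_clicks(local, remote) else "stays silent"
        print(f"local key {'down' if local else 'up'}, "
              f"remote key {'down' if remote else 'up'} -> receiver {state}")
```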

Bell’s idea was to pass the signal from each of several Morse keys attached to a single wire through a device known as a rheotome, which interrupted the flow of an electrical current at a user-adjustable speed, causing it to “vibrate” at a distinct frequency — akin to a distinct pitch when thought of in acoustic terms, as Bell most assuredly did. At the far end of the line would be a set of steel reeds attuned to each of these frequencies via tension screws, so that they would resonate and become magnetized only when their matching frequency reached them. These reeds, in combination with electromagnetic switches which they would trigger, would serve to sort out all of the different frequencies coming down the same wire, matching each Morse key at the sending end with the appropriate receiver by means of its unique electro-acoustic thumbprint. By the end of 1873, however, Bell had gotten only as far as being able to produce audible, simultaneous tones on his receiving reeds by pressing different Morse keys; he had done little more than duplicate the functionality of Hermann von Helmholtz’s vowel-speaking machine, albeit by wiring his reeds serially rather than in parallel like Helmholtz’s tuning forks.
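
In modern terms, Bell was groping toward what we now call frequency-division multiplexing. Here is a minimal sketch of the idea, with made-up frequencies, and with a simple correlation standing in for the mechanical resonance of a tuned reed:

```python
import numpy as np

fs = 8000                      # samples per second (assumed)
t = np.arange(0, 0.1, 1 / fs)  # a tenth of a second of line time

# Each Morse key gets its own "rheotome" frequency; 1 means the key is down.
keys = {400: 1, 600: 0, 900: 1}

# All of the chopped currents ride down the one wire as a single summed signal.
line = sum(down * np.sin(2 * np.pi * f * t) for f, down in keys.items())

# A reed tuned to frequency f responds only to energy near f; correlating
# against a reference tone is a crude stand-in for that mechanical resonance.
for f in keys:
    response = abs(np.dot(line, np.sin(2 * np.pi * f * t))) / len(t)
    print(f"{f} Hz reed: {'sounding' if response > 0.1 else 'silent'}")
```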

Nevertheless, in January of 1874 Bell, still a loyal Briton despite his residency in the United States, wrote to the British Superintendent of Telegraphs explaining that he believed himself to be on the verge of an important breakthrough in the emerging field of multiplex telegraphy, one which he wished to offer to Her Majesty’s government free of charge. The reply was coldly impersonal, not to say uninterested: “If you will submit your invention it will be considered, on the understanding, however, that the department is not bound to secrecy in the matter, nor to indemnify you for any loss or expense you may incur in the furtherance of your object, and that in the event of your method of telegraphy appearing to be both original and useful, all questions of remuneration shall rest entirely with the postmaster-general.” Bell understandably took this as “almost a personal affront,” and decided to turn to private industry in the United States instead. The whole incident thus became another of those hidden, fateful linchpins of history. In so rudely rejecting its citizen inventor, the British government ensured that the telephone, like the telegraph before it, would go down in history as a product of the American can-do spirit.

Then again, the British government’s skepticism about this amateur inventor working so far outside of his usual field would scarcely have been questioned by any reasonable person at the time. Bell was not deeply versed in the vagaries of electricity, and his progress always seemed to be a matter of two steps forward, one step back — or the inverse.

Still, his experiments were intriguing enough that he attracted a pair of patrons, both of whose deaf children he had taught. Thomas Sanders was a wealthy leather merchant, while Gardiner Greene Hubbard was a prominent lawyer and public-spirited scion of old Boston wealth. Of the two, Hubbard would take the more active role, becoming at some times a vital source of moral support for Bell and at others a vexing micromanager. Their relationship was further complicated by the fact that Bell was desperately in love with Hubbard’s deaf daughter — and his own former student — Mabel.

Sanders and Hubbard joined their charge in forming the Bell Patent Association. They provided him with his first proper workshop and hired a part-time assistant to join him, a young machinist named Thomas A. Watson. Bell and Watson became fast friends despite their differences in socioeconomic status, their rapport taking on something of the flavor of another famous pairing which involves the name of Watson; instead of “Elementary, my dear Watson,” Bell’s catchphrase became, “Watson, we are on the verge of a great discovery!” And yet their demonstrable progress remained damnably slow. Even with the help of his assistant, who had many of the practical skills he lacked, Bell just couldn’t seem to get his “harmonic telegraph” to work reliably.

Everyone involved was keenly aware that Bell was not the only person in hot pursuit of further advances in multiplex telegraphy. Among his competition were the distinguished electrical engineer Elisha Gray, co-founder of the Western Electric Manufacturing Company, the chief equipment supplier to the Western Union colossus that had come to dominate virtually all American telegraphy, and a young whiz kid named Thomas Edison. Bell was in a race, one that he felt himself to be losing to these men of vastly greater experimental know-how, who lived and breathed electric current in a way that he never would. Trying to keep up nearly killed him; he was still spending his days teaching the deaf students he couldn’t bear to abandon, even as he spent every evening in his laboratory.

From the perspective of today, it may seem that Bell was missing the forest for the trees as he continued to fashion ever more baroque devices for combining and then separating signals of different frequencies running down the same wire. He understood well that an electrical waveform could theoretically be made into an exact duplicate of a sound wave; all of his work was contingent on the similarities between the two. Yet it took him a long time to fully embrace a goal which seems obvious to us: that of transmitting sound electrically as a purpose unto itself, a revolutionary advance to which any potential incremental advances in multiplex telegraphy couldn’t hold a candle.

There was one central problem which prevented Bell from making that leap: he knew how to create an electrical waveform that captured only half of the data encoded by a sound wave in the real world. His circuits were all powered by an external battery, providing direct current at a fixed amplitude. He could vary the frequency of this current using a rheotome, but he had no way of changing its amplitude. In other words, he could transmit a sound’s pitch (or frequency) but not its volume (or amplitude). This meant that he could mimic uniform tones in electric current, but not the complexities of, say, human speech.

Using a rheotome, Bell could transmit uniform sounds of low (left) or high (right) pitch.

He couldn’t, however, transmit a more complex waveform like the one above.
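
The gap between the two kinds of signal is easy to see in a toy model. Below, a rheotome-style current — fixed in amplitude, adjustable only in its rate of interruption — is set beside a speech-like wave whose loudness rises and falls; all of the frequencies involved are arbitrary:

```python
import numpy as np

fs = 8000
t = np.arange(0, 0.05, 1 / fs)

# What the rheotome could produce: current switched on and off at an
# adjustable rate. The pitch can be varied, but the amplitude never can.
rheotome = np.sign(np.sin(2 * np.pi * 500 * t))

# What speech looks like: a mix of pitches under an envelope of changing
# loudness. Without a way to vary amplitude, this is out of reach.
envelope = 0.5 * (1 + np.sin(2 * np.pi * 8 * t))
speech_like = envelope * (np.sin(2 * np.pi * 300 * t)
                          + 0.5 * np.sin(2 * np.pi * 700 * t))

print("rheotome amplitude: always", rheotome.max())
print(f"speech-like amplitude: anywhere from {speech_like.min():.2f} "
      f"to {speech_like.max():.2f}")
```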

June 2, 1875, was a miserably hot day in Boston. Bell and Watson were working in a rather desultory fashion on their harmonic telegraph in their cramped laboratory; their progress of late had been as slow as ever. Bell was on the sending end in one room, Watson on the receiving end in the other, and, as usual, the thing wasn’t working correctly; one reed on the receiving end stubbornly refused to sound. So, they shut down the battery, and Watson started plucking the recalcitrant reed to make sure it was free to move as it should.

Because the system would need to be able to send messages in both directions, it was equipped with both rheotomes and receiving reeds on each of its ends. But, because they weren’t in use at the moment, the reeds on Bell’s end had been left untuned. And it was these latter that now gave Alexander Graham Bell one of the shocks of his life: he found that he could see and faintly hear the reeds on his side vibrate in time with Watson’s plucking, even with no power flowing through the circuit. He realized that a residual magnetism in Watson’s reed must be creating a faint electrical signal of its own on the wire. And, crucially, this signal varied not just in pitch but in amplitude. It seemed that one counterintuitive trick to sending sound down a wire was to remove the amplitude-obscuring battery from the circuit entirely. “I have accidentally made a discovery of the greatest importance,” Bell wrote in a letter to Hubbard. “I have succeeded today in transmitting signals without any battery whatsoever!” The harmonic telegraph was momentarily forgotten in favor of this new possibility.

Bell sketched for Watson a design that used identical devices on each end of a wire for both sending and receiving the spoken word. They consisted of a single untuned metal reed, an electromagnet, and a thin diaphragm. If one spoke into one of them shortly after power had been supplied to the wire — i.e., when the electromagnets still retained some residual magnetism — the resulting vibrations of the diaphragm ought to induce a very faint electrical signal of the same character as the sound wave that had caused the vibration. At the other end of the wire, this signal would be translated back into sound when it caused the reed to vibrate.
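
A rough model shows why the induced signal was doomed to faintness: with no battery on the line, the only energy available is whatever the voice itself can induce through the residual magnetism. The coupling constant below is a made-up stand-in for that residual field:

```python
import numpy as np

fs = 8000
t = np.arange(0, 0.02, 1 / fs)

# Diaphragm displacement driven directly by the voice (arbitrary units).
displacement = 1e-3 * np.sin(2 * np.pi * 300 * t)

# The induced EMF is proportional to the rate of change of magnetic flux,
# scaled by the vanishingly weak residual magnetism of the unpowered coil.
residual_coupling = 1e-4  # invented stand-in for that weak coupling
emf = residual_coupling * np.gradient(displacement, 1 / fs)

print(f"peak induced signal: {np.abs(emf).max():.2e} (arbitrary units)")
```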

Alexander Graham Bell’s very first attempt at a telephone, using unpowered magnetic induction. It was later given the rather morbid nickname of the “gallows telephone,” after its resemblance to an execution gallows when turned on end.

Experts who have looked at the design since have concluded that it is workable in principle. In practice, however, it stubbornly refused to function properly. Bell and Watson just couldn’t seem to get the fine-tuning right; they could get it to transmit some form of sound, but not comprehensible speech. The Achilles heel of the “magnetic induction” method of sound transmission was the vanishing faintness of the signals it produced. Even under perfect conditions, a human voice could reach the other end of a wire as the barest whisper, audible only to a person with very keen hearing — and the slightest technical infelicity would mean it couldn’t even manage that much.

Faced with this latest setback, and with his harmonic telegraph also seemingly going nowhere, Alexander Graham Bell came very close to giving up on electrical invention altogether. He and Watson were both utterly frazzled, having worked themselves to the bone in recent months. Gardiner Hubbard remained enthusiastic about telegraphy, but was less interested in telephony, and didn’t hesitate to tell Bell this. Bell himself now believed that his harmonic telegraph stood little chance against its competition even if he could get it working — by now Thomas Edison had already patented a design for a telegraph capable of sending four messages simultaneously down the same wire — but he hesitated to say as much to his prospective father-in-law. Instead he prevaricated, devoting more time and energy once again to his teaching. Needless to say, this too displeased Hubbard. “I have been sorry to see how little interest you seem to take in telegraph matters,” he wrote to Bell that fall. “Your whole course has been a very great disappointment to me, and a sore trial.” What Bell and Hubbard didn’t know, but would doubtless have been even more consternated to learn, was that Elisha Gray had also turned away from multiplex telegraphy in the wake of Edison’s patent and begun pursuing the possibility of telephony.

What time Bell did spend on his electrical pursuits during the second half of 1875 was largely devoted to preparing a patent application for his inventions, even though none of them quite worked yet. Hubbard helped him to file it, on February 14, 1876. Incredibly, just a few hours later on that same day Gray filed a “caveat” — a claim of primacy submitted before a formal patent application — detailing his plans for a “speaking telephone.” Had the order been reversed, the history of the telephone might have gone much differently, with the name of “Gray” replacing that of “Bell” in the annals of invention.

But as it was, Bell’s own patent application, which was approved on March 7, 1876, would go on to become one of the most valuable and controversial in American history. To say it buries the lede is an understatement: rather than Gray’s speaking telephone, it promises only “improvements in telegraphy,” never even using the word “telephone.” And rather than the transmission of intelligible speech, it promises only the transmission of “vocal or other sounds” — which was accurate enough, considering that this was all that Bell and Watson had managed to date by even the most generous possible interpretation.

Still, the patent filing did reinvigorate the young inventor and his assistant: they returned to their laboratory and began working in earnest again. The day after his patent was approved, Bell was futzing about alone when he did something that seems almost inexplicable on the face of it, being out of keeping with all of his experiments to date. First he attached a battery to a wire. He then split one end of the wire into two leads, running one of them to a tuning fork and dropping the other into a dish of water. At the other end of the wire he attached one of his metal reeds, but left it untuned so that it would vibrate freely in response to any signal. He tapped the tuning fork to make it vibrate and dipped one of its arms into the dish of water, whereupon he was rewarded with a “faint sound” from the reed. Excited now, he added some sulfuric acid to the water to make it a better conductor, then repeated the experiment. Sure enough, the sound from the reed got louder. He attached the lead in the water to a submerged ribbon of brass, and the sound got louder still.

What was happening here? The liquid in the dish and the metal of the tuning fork both being conductive, they were serving to bridge the two leads, allowing the current from the battery to flow between them. But the vibrations of the tuning fork were varying the resistance of the circuit, which in turn made the current flowing along it rise and fall in step with those vibrations, capturing their amplitude as well as their frequency. This “variable resistance” method of transmitting a sound wave was far superior to the unpowered magnetic induction Bell had been relying on earlier, which had been able to create the merest trace of a signal on the line. This signal, by contrast, was stronger to begin with, and could be further amplified to whatever extent one desired simply by using more and/or larger batteries. It was the great breakthrough on the road to a practical, usable telephone. Bell immediately went in search of Watson.
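
Even the crudest model makes the decisive advantage over magnetic induction plain: because the signal rides on battery current, adding battery scales it up directly. As before, every number here is invented for illustration:

```python
import numpy as np

fs = 8000
t = np.arange(0, 0.01, 1 / fs)

# The vibrating fork changes how much conductor sits in the acidulated
# water, and with it the resistance of the powered circuit.
R = 100.0 + 10.0 * np.sin(2 * np.pi * 440 * t)  # ohms, illustrative

# Unlike the unpowered induction scheme, the resulting signal scales with
# the battery behind it: a stronger battery means a stronger signal.
for V in (1.0, 6.0, 24.0):
    I = V / R
    print(f"{V:>5.1f} V battery -> current swing {I.max() - I.min():.5f} A")
```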

Two days later, all was in readiness for the pivotal test. Watson, who had by now taken on a chronicler’s role for the duo’s adventures not far removed from that of his literary namesake, describes the scene:

I had made Bell a new transmitter, in which a wire, attached to a diaphragm, touched acidulated water contained in a metal cup, both included in a circuit through the battery and receiving telephone. The depth of the wire in the acid and consequently the resistance of the circuit was varied as the voice made the diaphragm vibrate, which made the galvanic current undulate in speech form.

At the other end of the wire was of course an untuned metal reed, waiting to receive whatever electrical signal came down the wire and turn it back into sound waves.

Bell’s crude sketch of his first “liquid transmitter” telephone.

Bell took his spot at the transmitting station, while Watson went to the receiving station behind a closed door in the adjacent room. And then Watson heard the canonical first words ever spoken into a working telephone: “Mr. Watson, come here. I want to see you.”

I rushed into his room and found he had upset the acid of a battery over his clothes. He forgot the accident in the joy of his new transmitter when I told him how plainly I had heard his words.

The two men spent hours running between the rooms testing out their contraption, which did indeed work — not perfectly, mind you, but vastly more reliably than anything they had created to date. In an inadvertent homage to poor Joseph Faber, Bell concluded the evening’s festivities by singing “God Save the Queen” into the wire. “This is a great day with me,” he wrote. “I feel that I have at last struck the solution of a great problem — and the day is coming when telegraph [sic] wires will be laid on to houses just like water or gas — and friends converse with each other without leaving home.” The words were prescient. Alexander Graham Bell, elocutionist and teacher of the deaf, working alone except for one talented assistant, had invented the telephone before anyone else.

Or had he?

In the very near future, individuals and courts would come to speculate endlessly about where the sudden burst of insight that a sound wave could be transmitted on a powered wire by varying the circuit’s resistance had actually come from. The possibility is mentioned in Bell’s patent application, but only as a last-minute, hand-scrawled notation in the margin. Elisha Gray’s patent caveat, by contrast, includes not only the principle but a detailed description of how a transmitter very similar to the one Bell employed might be made, right down to a diaphragm with a lead dangling into a dish of acidulated water. Bell himself wrote in a letter to his father that he had become friendly with the clerk who had accepted both documents, and continued to talk with him regularly while his own patent was going through the approval process. Did the clerk let slip these details of Gray’s design, or possibly even allow Bell to look at the document itself? Did he let Bell add that crucial note to the margin of his own patent application after its submission? (Bell did later acknowledge that he was allowed to “clarify” some other terms that the patent office deemed too vague in the first draft.) All of these things would soon be insinuated in court.

Elisha Gray, the man who some insist deserves at least equal credit with Alexander Graham Bell for the invention of the telephone.

Alexander Graham Bell’s personal papers did provide some exculpatory evidence after they were donated to the Library of Congress in 1976. Bell’s notes show that he was thinking about the potential of using variable resistance to transmit sound as early as May 4, 1875, and even conducted some experiments in that direction shortly thereafter. Likewise, he did tinker with “liquid transmitters” from time to time prior to that fateful date of March 8, 1876. Still, he never thought to combine a transmitter using acidulated water with the principle of variable resistance until suspiciously close to the moment that Elisha Gray submitted a detailed plan for doing so to a man with whom Bell later had several fairly long conversations. The evidence is highly circumstantial, to be sure, but it is hard to discount entirely for all that. Historians have combed through all of the relevant papers thoroughly without finding any more definitive smoking gun pointing one way or the other. It seems that the truth of the matter will never be known with complete certainty.

On the other hand, if we judge that the credit for an invention should go to the first person to make a working version of it, full stop, then we can comfortably declare Alexander Graham Bell to be the inventor of the telephone; there is no suggestion that Gray actually built the telephone he designed on paper prior to Bell’s first successful test on March 10, 1876. The whole controversy serves to remind us that any remotely modern technology is a mishmash of ideas and discoveries, and the order and primacy of the whole is not always as clear as we might wish.

At any rate, the telephone was now a reality. And now that it was invented, it needed to be put into service.

(Sources: the books The Victorian Internet by Tom Standage, Power Struggles: Scientific Authority and the Creation of Practical Electricity Before Edison by Michael B. Schiffer, Lightning Man: The Accursed Life of Samuel F.B. Morse by Kenneth Silverman, A Thread across the Ocean: The Heroic Story of the Transatlantic Telegraph by John Steele Gordon, The Story of the Atlantic Telegraph by Henry M. Field, Alexander Graham Bell and the Conquest of Solitude by Robert V. Bruce, Alexander Graham Bell: The Life and Times of the Man Who Invented the Telephone by Edwin S. Grosvenor, Reluctant Genius: Alexander Graham Bell and the Passion for Invention by Charlotte Gray, Telephone: The First Hundred Years by John Brooks, and American Telegraphy and Encyclopedia of the Telegraph by William Maver, Jr. Online sources include History of the Atlantic Cable & Undersea Communications and “Joseph Faber and the Euphonia Talking Device” at History Computer.)

 

A Web Around the World, Part 3: …Try, Try Again

A major financial panic struck the United States in August of 1857, just as the Niagara was making the first attempt to lay the Atlantic cable. Cyrus Field had to mortgage his existing businesses heavily just to keep them going. But he was buoyed by one thing: as the aftershocks of the panic spread to Europe, packet steamers took to making St. John’s, Newfoundland, their first port of call in the Americas for the express purpose of passing the financial news they carried to the island’s telegraph operators so that it could reach Wall Street as quickly as possible. It had taken the widespread threat of financial ruin, but Frederick Gisborne’s predictions about the usefulness of a Newfoundland telegraph were finally coming true. Now just imagine if the line could be extended all the way across the Atlantic…

While he waited for the return of good weather to the Atlantic, Field sought remedies for everything that had gone wrong with the first attempt to lay a telegraph cable across an ocean. The Niagara‘s chief engineer, a man named William Everett, had examined Charles Bright’s paying-out mechanism with interest during the last expedition, and come up with a number of suggestions for improving it. Field sought and was granted Everett’s temporary release from the United States Navy, and brought him to London to redesign the machine. The result was actually simpler in most ways, being just one-fourth of the weight and one-third of the size of Bright’s design. But it incorporated a critical new feature: the brake now set and released itself automatically in response to the level of tension on the cable. “It seemed to have the intelligence of a human being, to know when to hold on and when to let go,” writes Henry Field. In reality, it was even better than a human being, in that it never got tired and never let its mind wander; no longer would a moment’s inattention on the part of a fallible human operator be able to wreck the whole project.
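In modern terms, what Everett had built was a mechanical feedback controller: the brake applied itself firmly when the cable paid out too freely, and released as the tension climbed toward dangerous levels. The sketch below illustrates only that control logic; the thresholds and the linear response are illustrative assumptions of mine, not Everett’s actual specifications.

```python
# A rough sketch of a self-regulating paying-out brake as a
# feedback loop. All numbers are illustrative assumptions, not
# Everett's actual specifications.

BREAKING_STRAIN = 3000.0  # lbs: assumed danger point for the cable
EASY_TENSION = 1500.0     # lbs: assumed comfortable paying-out tension

def brake_setting(tension: float) -> float:
    """Return a braking force from 0.0 (fully released) to 1.0
    (fully set), easing off as the cable's tension rises."""
    if tension >= BREAKING_STRAIN:
        return 0.0  # let go entirely rather than snap the cable
    if tension <= EASY_TENSION:
        return 1.0  # hold fast so the cable cannot run out freely
    # Between the two thresholds, release the brake proportionally.
    return (BREAKING_STRAIN - tension) / (BREAKING_STRAIN - EASY_TENSION)
```

The virtue Henry Field was praising lies in the shape of this loop: the brake’s setting is a pure function of the tension, recomputed continuously, with no tired or distracted human in the path between measurement and response.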

Charles Bright accepted the superseding of his original design with good grace; he was an engineer to the core, the new paying-out machine was clearly superior to the old one, and so there wasn’t much to discuss in his view. There was ongoing discord, however, between two more of Cyrus Field’s little band of advisors.

Wildman Whitehouse and William Thomson had been competing for Field’s ear for quite some time now. At first the former had won out, largely because he told Field what he most wished to hear: that a transatlantic telegraph could be made to work with an unusually long but otherwise fairly plebeian cable, using bog-standard sending and receiving mechanisms. But Field was a thoughtful man, and of late he’d begun losing faith in the surgeon and amateur electrical experimenter. He was particularly bothered by Whitehouse’s blasé attitude toward the issue of signal retardation.

Meanwhile Thomson was continuing to whisper contrary advice in his ear. He said that he still thought it would be best to use a thicker cable like the one he had originally proposed, but, when informed that there just wasn’t money in the budget for such a thing, he believed he could get even Whitehouse’s design to work more efficiently. His scheme exploited the fact that even a heavily retarded signal probably wouldn’t become completely uniform: the current at the far end of the wire would still be full of subtle rises and falls where the formerly discrete dots and dashes of Morse Code had been. Thomson had been working on a new, ultrasensitive galvanometer, which ingeniously employed a lamp, a magnet, and a tiny mirror to detect the slightest variation in current amplitude. Two operators would work together to translate a signal on the receiving end of the cable: one, trained to interpret the telltale patterns of reflected light bobbing up and down in front of him, would translate them into the dots and dashes of Morse Code and call them out to his partner. Over the strident objections of Whitehouse, Field agreed to install the system, and also agreed to give Thomson access to the enormous spools of existing cable that were now warehoused in Plymouth, England, waiting for the return of spring. Thomson meticulously tested the cable one stretch at a time, and convinced Field to let him cut out those sections where its conductivity was worst.
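To put Thomson’s insight into modern terms: the long undersea cable acted something like a low-pass filter, smearing the sender’s crisp on-and-off keying into gentle swells of current at the receiver. The toy simulation below, using an arbitrary smoothing constant and a deliberately crude first-order filter, is meant only to show why a sufficiently sensitive instrument could still recover the pattern; it makes no claim to model the real cable’s electrical behavior.

```python
# Toy illustration of signal retardation on a long cable, modeled
# as a first-order low-pass filter. The smoothing constant is an
# arbitrary assumption; the real cable was far more complex.

def smear(keyed: list[float], alpha: float = 0.05) -> list[float]:
    """Pass an on/off keyed signal through a first-order low-pass
    filter, returning the blurred current seen at the far end."""
    received, level = [], 0.0
    for sample in keyed:
        level += alpha * (sample - level)  # drift toward the input
        received.append(level)
    return received

# A dot, a pause, and a dash, at ten samples per Morse time unit.
keyed = [1.0] * 10 + [0.0] * 10 + [1.0] * 30 + [0.0] * 10
received = smear(keyed)

# The received current never returns to crisp on/off levels, but
# its subtle rises and falls still trace the original keying --
# just the sort of wiggle a mirror galvanometer could make visible.
for i in range(0, len(keyed), 10):
    print(f"sample {i:2d}: sent {keyed[i]:.0f}, received {received[i]:.2f}")
```

Thomson’s mirror galvanometer was, in effect, the amplifier that made those residual wiggles legible: a feeble current twisting a tiny mirror translates into a large, easily read movement of a reflected beam of light.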

The United States and Royal Navies agreed to lend the Atlantic Telegraph Company the same two vessels as last time for a second attempt at laying the cable. To save time, however, it was decided that the ships would work simultaneously: they would sail to the middle of the Atlantic, splice their cables together there, then each head toward a separate continent. So, in April of 1858, the Niagara and the Agamemnon arrived in Plymouth to begin the six-week process of loading the cable. They sailed together from there on June 10. Samuel Morse elected not to travel with the expedition this time, but Charles Bright, William Thomson, Cyrus Field and his two brothers, and many of the other principals were aboard one or the other ship.

They had been told that “June was the best month for crossing the Atlantic,” as Henry Field writes. They should be “almost sure of fair weather.” On the contrary, on June 13 the little fleet sailed into the teeth of one of the worst Atlantic storms of the nineteenth century. The landlubbers aboard had never imagined that such a natural fury as this could exist. For three days, the ships were lashed relentlessly by the wind and waves. With 1250 tons of cable each on their decks and in their holds, both the Niagara and the Agamemnon rode low in the water and were a handful to steer under the best of circumstances; now they were in acute danger of foundering, capsizing, or simply breaking to pieces under the battering.

The Agamemnon was especially hard-pressed: bracing beams snapped below decks, and the hull sprang leaks in multiple locations. “The ship was almost as wet inside as out,” wrote a horrified Times of London reporter who had joined the expedition. The crew’s greatest fear was that one of the spools of cable in the hold would break loose and punch right through the hull; they fought a never-ending battle to secure the spools against each successive onslaught. While they were thus distracted, the ship’s gigantic coal hampers gave way instead, sending tons of the filthy stuff skittering everywhere, injuring many of the crew. That the Agamemnon survived the storm at all was thanks to masterful seamanship on the part of its captain, who remained awake on the bridge for 72 hours straight, plotting how best to ride out each wave.

An artist’s rendering of the Agamemnon in the grip of the storm, as published in the Illustrated London News.

Separated from one another by the storm, the two ships met up again on June 25 smack dab in the middle of an Atlantic Ocean that was once again so tranquil as to “seem almost unnatural,” as Henry Field puts it. The men aboard the Niagara were shocked at the state of the Agamemnon; it was so badly battered and so covered in coal dust that it looked more like a garbage scow than a proud Royal Navy ship of the line. But no matter: it was time to begin the task they had come here to carry out.

So, the cables were duly spliced on June 26, and the process of laying them began — with far less ceremony than last time, given that there were no government dignitaries on the scene. The two ships steamed away from one another, the Niagara westward toward Newfoundland, the Agamemnon eastward toward Ireland, with telegraph operators aboard each ship constantly testing the tether that bound them together as they went. They had covered a combined distance of just 40 miles when the line suddenly went dead. Following the agreed-upon protocol in case of such an eventuality, both crews cut their end of the cable, letting it drop uselessly into the ocean, then turned around and steamed back to the rendezvous point; neither crew had any idea what had happened. Still, the break had at least occurred early enough that there ought still to be enough cable remaining to span the Atlantic. There was nothing for it but to splice the cables once more and try again.

This time, the distance between the ships steadily increased without further incident: 100 miles, 200 miles, 300 miles. “Why not lay 2000 [miles]!” thought Henry Field with a shiver of excitement. Then, just after the Agamemnon had made a routine splice from one spool to the next, the cable snapped in the ship’s wake. Later inspection would reveal that that section of it had been damaged in the storm. Nature’s fury had won the day after all. Again following protocol for a break this far into the cable-laying process, the two ships sailed separately back to Britain.

It was a thoroughly dejected group of men who met soon after in the offices of the Atlantic Telegraph Company. Whereas last year’s attempt to lay the cable had given reason for guarded optimism in the eyes of some of them, this latest attempt seemed an unadulterated fiasco. The inexplicable loss of signal the first time this expedition had tried to lay the cable was in its way much more disconcerting than the second, explicable disaster of a physically broken cable, as our steadfast Times of London reporter noted: “It proves that, after all that human skill and science can effect to lay the wire down with safety has been accomplished, there may be some fatal obstacle to success at the bottom of the ocean, which can never be guarded against, for even the nature of the peril must always remain as secret and unknown as the depths in which it is encountered.” The task seemed too audacious, the threats to the enterprise too unfathomable. Henry Field:

The Board was called together. It met in the same room where, six weeks before, it had discussed the prospects of the expedition with full confidence of success. Now it met as a council of war is summoned after a terrible defeat. When the Directors came together, the feeling — to call it by the mildest name — was one of extreme discouragement. They looked blankly in each other’s faces. With some, the feeling was almost one of despair. Sir William Brown of Liverpool, the first Chairman, wrote advising them to sell the cable. Mr. Brooking, the Vice-Chairman, who had given more time than any other Director, sent in his resignation, determined to take no further part in an undertaking which had proved hopeless, and to persist in which seemed mere rashness and folly.

Most of the members of the board assumed they were meeting only to deal with the practical matter of winding up the Atlantic Telegraph Company. But Cyrus Field had other ideas. When everyone was settled, he stood up to deliver the speech of his life. He told the room that he had talked to the United States and Royal Navies, and they had agreed to extend the loan of the Niagara and the Agamemnon for a few more weeks, enough to make one more attempt to lay the cable. And he had talked to his technical advisors as well, and they had agreed that there ought to be just enough cable left to span the Atlantic if everything went off without a hitch. Even if the odds against success were a hundred to one, why not try one more time? Why not go down swinging? After all, the money they stood to recoup by selling a second-hand telegraph cable wasn’t that much compared to what had already been spent.

It is a tribute to his passion and eloquence that his speech persuaded this roomful of very gloomy, very pragmatic businessmen. They voted to authorize one more attempt to create an electric bridge across the Atlantic.

The Niagara and the poor, long-suffering Agamemnon were barely given time to load coal and provisions before they sailed again, on July 17, 1858. This time the weather was propitious: blue skies and gentle breezes the whole way to the starting point. On July 29, after conducting tests to ensure that the entirety of the remaining cable was still in working order, they began the laying of it once more. Plenty of close calls ensued in the days that followed: a passing whale nearly entangled itself in the cable, then a passing merchant ship nearly did the same; more sections of cable turned up with storm-damaged insulation aboard the Agamemnon and had to be cut away, to the point that it was touch and go whether Ireland or the end of the last spool would come first. And yet the telegraph operators aboard each of the ships remained in contact with one another day after day as they crept further and further apart.

At 1:45 AM on August 6, the Niagara dropped anchor in Newfoundland at a point some distance west of St. John’s, in Trinity Bay, where a telegraph house had already been built to receive the cable. One hour later, the telegraph operator aboard the ship received a message from the Agamemnon that it too had made landfall, in Ireland. Cyrus Field’s one-chance-in-a-hundred gamble had apparently paid off.

Shouting like a lunatic, Field burst upon the crew manning the telegraph house, who had been blissfully asleep in their bunks. At 6:00 AM, the men spliced the cable that had been carried over from the Niagara with the one that went to St. John’s and beyond. Meanwhile, on the other side of the ocean, the crew of the Agamemnon was doing the same with a cable that stretched from the backwoods of southern Ireland to the heart of London. “The communication between the Old and the New World [has] been completed,” wrote the Times of London reporter.


The (apparently) successful laying of the cable in 1858 sparked an almost religious fervor, as shown in this commemorative painting by William Simpson, in which the Niagara is given something very like a halo as it arrives in Trinity Bay.

The news of the completed Atlantic cable was greeted with elation everywhere it traveled. Joseph Henry wrote in a public letter to Cyrus Field that the transatlantic telegraph would “mark an epoch in the advancement of our common humanity.” Scientific American wrote that “our whole country has been electrified by the successful laying of the Atlantic telegraph,” and Harper’s Monthly commissioned a portrait of Field for its cover. Countless cities and towns on both sides of the ocean held impromptu jubilees to celebrate the achievement. Ringing church bells, booming cannon, and 21-gun rifle salutes were the order of the day everywhere. Men who had or claimed to have sailed aboard the Niagara or the Agamemnon sold bits and pieces of leftover cable at exorbitant prices. Queen Victoria knighted the 26-year-old Charles Bright, and said she only wished Cyrus Field was a British citizen so she could do the same for him. On August 16, she sent a telegraph message to the American President James Buchanan and was answered in kind; this herald of a new era of instantaneous international diplomacy brought on yet another burst of public enthusiasm.

Indeed, the prospect of a worldwide telegraph network — for, with the Atlantic bridged, could the Pacific and all of the other oceans be far behind? — struck many idealistic souls as the facilitator of a new era of global understanding, cooperation, and peace. Once we allow for the changes that took place in rhetorical styles over a span of 140 years, we find that the most fulsome predictions of 1858 have much in common with those that would later be made with regard to the Internet and its digital World Wide Web. “The whole earth will be belted with electric current, palpitating with human thoughts and emotions,” read the hastily commissioned pamphlet The Story of the Telegraph.[1] “It is impossible that old prejudices and hostilities should longer exist, while such an instrument has been created for the exchange of thoughts between all the nations of the earth.” Indulging in a bit of peculiarly British wishful thinking, the Times of London decided that “the Atlantic telegraph has half undone the Declaration of 1776, and has gone far to make us once again, in spite of ourselves, one people.” Others found prose woefully inadequate for the occasion, and could give proper vent to their feelings only in verse.

‘Tis done! The angry sea consents,
The nations stand no more apart,
With clasped hands the continents
Feel throbbings of each other’s heart.

Speed, speed the cable; let it run
A loving girdle round the earth,
Till all the nations ‘neath the sun
Shall be as brothers of one hearth;

As brothers pledging, hand in hand,
One freedom for the world abroad,
One commerce over every land,
One language and one God.

But one fact was getting lost — or rather was being actively concealed — amidst all the hoopla: the Atlantic cable was working after a fashion, but it wasn’t working very well. Even William Thomson’s new galvanometer struggled to make sense of a signal that grew weaker and more diffuse by the day. To compensate, the operators were forced to transmit more and more slowly, until the speed of communication became positively glacial. Queen Victoria’s 99-word message to President Buchanan, for example, took sixteen and a half hours to send — a throughput of all of one word every ten minutes. (The arithmetic is spelled out after the exchange below.) The entirety of another day’s traffic consisted of:

Repeat please.

Please send slower for the present.

How?

How do you receive?

Send slower.

Please send slower.

How do you receive?

Please say if you can read this.

Can you read this?

Yes.

How are signals?

Do you receive?

Please send something.

Please send Vs and Bs.

How are signals?
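Spelled out, and assuming the Queen’s message occupied the line more or less continuously, that figure works out as follows:

\[
\frac{99\ \text{words}}{16.5\ \text{hours} \times 60\ \text{minutes/hour}} = \frac{99\ \text{words}}{990\ \text{minutes}} = 0.1\ \text{words per minute},
\]

or exactly one word every ten minutes.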

Cyrus Field managed to keep these inconvenient facts secret for some time while his associates scrambled fruitlessly for a solution. When Thomson could offer him no miracle cure, he turned back to Wildman Whitehouse. Insisting that there was no problem with his cable design which couldn’t be solved by more power, Whitehouse hooked it up to giant induction coils to try to force the issue. Shortly after he did so, on September 1, the cable failed completely. Thomson and others were certain that Whitehouse had burned right through the cable’s insulation with his high-voltage current, but of course it is impossible to know for sure. Still, that didn’t stop Field from making an irrevocable break with Whitehouse; he summarily fired him from the company. In response, Whitehouse went on a rampage in the British press, denouncing the “frantic fooleries of the Americans in the person of Cyrus W. Field”; he would soon publish a book giving his side of the story, filled with technical conclusions which history has demonstrated to be wrong.

On October 20, with all further recourse exhausted, Field bit the bullet and announced to the world that his magic thread was well, truly, and hopelessly severed. The press at both ends of the cable turned on a dime. The Atlantic Telegraph Company and its principal face were now savaged with the same enthusiasm with which they had so recently been praised. Many suspected loudly that it had all been an elaborate fraud. “How many shares of stock did Mr. Field sell in August?” one newspaper asked. (The answer: exactly one share.) The Atlantic Telegraph Company remained nominally in existence after the fiasco of 1858, but it would make no serious plans to lay another cable for half a decade.

Cyrus Field himself was, depending on whom you asked, either a foolish dreamer or a cynical grifter. His financial situation too was not what it once had been. His paper business had suffered badly in the panic of 1857; then came a devastating warehouse fire in 1860, and he sold it shortly thereafter at a loss. In April of 1861, the American Civil War, the product of decades of slowly building tension between the country’s industrial North and the agrarian, slave-holding South, finally began in earnest. Suddenly the paeans to universal harmony which had marked a few halcyon weeks in August of 1858 seemed laughable, and the moneyed men of Wall Street turned their focus to engines of war instead of peace.

Yet the British government at least was still wondering in its stolid, sluggish way how a project to which it had contributed considerable public resources, which had in fact nearly gotten one of Her Majesty’s foremost ships of the line sunk, had wound up being so useless. The same month that the American Civil War began, it formed a commission of inquiry to examine both this specific failure and the future prospects for undersea telegraphy in general. The commission numbered among its members none other than Charles Wheatstone, along with William Cooke one of the pair of inventors who had set up the first commercial telegraph line in the world. It read its brief very broadly, and ranged far afield to address many issues of importance to a slowly electrifying world. Most notably, it defined the standardized units of electrical measurement that we still use today: the watt, the volt, the ohm, and the ampere.

But much of its time was taken up by a war of words between Wildman Whitehouse and William Thomson, each of whom presented his case at length and in person. While Whitehouse laid the failure of the first transatlantic telegraph at the feet of a wide range of factors that had nothing to do with his cable but much to do with the gross incompetence of the Atlantic Telegraph Company in laying and operating it, Thomson argued that the choice of the wrong type of cable had been the central, precipitating mistake from which all of the other problems had cascaded. In the end, the commission found Thomson’s arguments more convincing, agreeing that “the heavier the cable, the greater its durability.” Its final conclusions, delivered in July of 1863, were simultaneously damning toward many of the specific choices of the Atlantic Telegraph Company and optimistic that a transatlantic telegraph should be possible, given much better planning and preparation. The previous failures were, it said, “due to causes which might have been guarded against had adequate preliminary investigation been made.” Nevertheless, “we are convinced that this class of enterprise may prove as successful as it has hitherto been disastrous.”

Meanwhile, even in the midst of the bloodiest conflict in American history, all Cyrus Field seemed to care about was his once and future transatlantic telegraph. Graduating from the status of dreamer or grifter, he now verged on becoming a laughingstock in some quarters. In New York City, for example, “he addressed the Chamber of Commerce, the Board of Brokers, and the Corn Exchange,” writes Henry Field, “and then he went almost literally door to door, calling on merchants and bankers to enlist their aid. Even of those who subscribed, a large part did so more from sympathy and admiration of his indomitable spirit than from confidence in the success of the enterprise.” One of his marks labeled him with grudging admiration “the most obstinately determined man in either hemisphere.” Yet in the course of some five years of such door-knocking, he managed to raise pledges amounting to barely one-third of the purchase price of the first Atlantic cable — never mind the cost of actually laying it. This was unsurprising, in that there lay a huge unanswered question at the heart of any renewal of the enterprise: a cable much thinner than the one which almost everyone except Wildman Whitehouse now agreed was necessary had dangerously overburdened two of the largest ships in the world, very nearly with tragic results for one of them. And yet, in contrast to the 2500 tons of Whitehouse’s cable, Thomson’s latest design was projected to weigh 4000 tons. How on earth was it to be laid?

But Cyrus Field’s years in the wilderness were not to last forever. In January of 1864, in the course of yet another visit to London, he secured a meeting with Thomas Brassey, one of the most famous of the new breed of financiers who were making fortunes from railroads all over the world. Field wrote in a letter immediately after the meeting that “he put me through such a cross-examination as I had never before experienced. I thought I was in the witness box.” (He doesn’t state in his letter whether he noticed the ironic contrast with the way this whole adventure had begun exactly one decade earlier, when it had been Frederick Gisborne who had come with hat in hand to his own stateroom for an equally skeptical cross-examination.)

It seems that Field passed the test. Brassey agreed to put some of his money and, even more importantly, his sterling reputation as one of the world’s foremost men of business behind the project. And just like that, things started to happen again. “The wheels were unloosed,” writes Henry Field, “and the gigantic machinery began to revolve.” The money poured in; the transatlantic telegraph was on again. Cyrus Field placed an order for a thick, well-insulated cable matching Thomson’s specifications. The only problem remaining was the same old one of how to actually get it aboard a ship. But, miraculously, Thomas Brassey believed he had a solution for that problem too.

During the previous decade, Isambard Kingdom Brunel, arguably the greatest steam engineer of the nineteenth century, had designed and overseen the construction of what he intended as his masterpiece: an ocean liner called the Great Eastern, which displaced a staggering 19,000 tons, could carry 4000 passengers, and could sail from Britain to Australia without ever stopping for coal. It was 693 feet long and 120 feet wide, with ten steam engines producing up to 10,000 horsepower and delivering it through both paddle wheels and a screw propeller. And, most relevantly for Brassey and Field, it could carry up to 7000 tons of cargo in its hold.

T.G. Dutton’s celebratory 1859 rendering of the Great Eastern.

Alas, its career to date read like a Greek tragedy about the sin of hubris. The Great Eastern almost literally killed its creator; undone by the stresses involved in getting his “Great Babe” built, Brunel died at the age of only 53 shortly after it was completed in 1859. During its sea trials, the ship suffered a boiler explosion that killed five men. And once it entered service, those who had paid to build it discovered that it was just too big: there just wasn’t enough demand to fill its holds and staterooms, even as it cost a fortune to operate. “Her very size was against her,” writes Henry Field, “and while smaller ships, on which she looked down with contempt, were continually flying to and fro across the sea, this leviathan could find nothing worthy of her greatness.” The Great Eastern developed the reputation of an ill-starred, hard-luck ship. Over the course of its career, it was involved in ten separate ship-to-ship collisions. In 1862, it ran aground outside New York Harbor; it was repaired and towed back to open waters only at enormous effort and expense, further burnishing its credentials as an unwieldy white elephant. Eighteen months later, the Great Eastern was retired from service and put up for sale. A financier named Daniel Gooch bought the ship for just £25,000, less than its value as scrap metal. And indeed, scrapping it for profit was quite probably foremost on his mind at the time.

But then Thomas Brassey came calling on his friend, asking what it would cost to acquire the ship for the purpose of laying the transatlantic cable. Gooch agreed to loan the Great Eastern to him in return for £50,000 in Atlantic Telegraph Company stock. And so Cyrus Field’s project acquired the one ship in the world that was actually capable of carrying Thomson’s cable. One James Anderson, a veteran captain with the Cunard Line, was hired to command it.

Observing the checkered record of the Atlantic Telegraph Company in laying working telegraph cables to date, Brassey and his fellow investors insisted that the latest attempt be subcontracted out to the recently formed Telegraph Construction and Maintenance Company, the entity which also provided the cable itself. During the second half of 1864, the latter company extensively modified the Great Eastern for the task before it. Intended as it was for a life lived underwater, the cable was to be stored aboard the ship immersed in water tanks in order to prevent its vital insulation from drying out and cracking.

Then, from January to July of 1865, the Great Eastern lay at a dock in Sheerness, England, bringing about 20 miles of cable per day onboard. The pendulum had now swung again with the press and public: the gargantuan ship became a place of pilgrimage for journalists, politicians, royalty, titans of industry, and ordinary folks, all come to see the progress of this indelible sign of Progress in the abstract. Cyrus Field was so caught up in the excitement of an eleven-year-old dream on the cusp of fulfillment that he hardly noticed either the Southern surrender that effectively ended the American Civil War on April 9, 1865, or the shocking assassination of the victorious President Abraham Lincoln just a few days later.

On July 15, the Great Eastern put to sea at last, laden with the 4000 tons of cable plus hundreds more tons of dead weight in the form of the tanks of water that were used to store it. Also aboard was a crew of 500 men, but only a small contingent of observers from the Atlantic Telegraph Company, among them the Field brothers and William Thomson. Due to its deep draft, the Great Eastern had to be very cautious when sailing near land; witness its 1862 grounding in New York Harbor. Therefore a smaller steamer, the Caroline, was enlisted to bring the cable ashore on the treacherous southern coast of Ireland and to lay the first 23 miles of it from there. On the evening of July 23, the splice was made and the Great Eastern took over responsibility for the rest of the journey.

So, the largest ship in the world made its way westward at an average speed of a little over six knots. Cyrus Field, who was prone to seasickness, noted with relief how different an experience it was to sail on a behemoth like this one even in choppy seas. He and everyone else aboard were filled with optimism, and with good reason on the whole; this was a much better planned, better thought-through expedition than those of the Niagara and the Agamemnon. Each stretch of cable was carefully tested before it fell off the stern of the ship, and a number of stretches were discarded for failing to meet Thomson’s stringent standards. Then, too, William Everett’s paying-out mechanism had been improved such that it could now reel cable back in again if necessary; this capability was needed twice, when stretches of cable turned out not to be as water-resistant as they ought to have been despite all of Thomson’s efforts.

The days went by, filled with minor snafus to be sure, but nothing that hadn’t been anticipated. The stolid and stable Great Eastern, writes Henry Field, “seemed as if made by Heaven to accomplish this great work of civilization.” And the cable itself continued to work even better than Thomson had said it would; the link with Ireland remained rock-solid, with a throughput to which Whitehouse’s cable could never have aspired.

At noon on August 2, the Great Eastern was well ahead of schedule, already almost two-thirds of the way to Newfoundland, when a fault was detected in the stretch of cable just laid. This was annoying, but nothing more than that; it had, after all, happened twice before and been dealt with by pulling the bad stretch out of the water and discarding it. But in the course of hauling it back in this time, an unfortunate burst of wind and current spelled disaster: the cable was pulled taut by the movement of the ship and snapped.

Captain Anderson had one gambit left — one more testament to the Telegraph Construction and Maintenance Company’s determination to plan for every eventuality. He ordered the huge grappling hook with which the Great Eastern had been equipped to be deployed over the side. It struck the naïve observers from the Atlantic Telegraph Company as an absurd proposition; the ocean here was two and a half miles deep — so deep that it took the hook two hours just to touch bottom. The ship steamed back and forth across its former course all night long, dragging the hook patiently along the ocean floor. Early in the morning, it caught on something. The crew saw with excitement that, as the grappling machinery pulled the hook gently up, its apparent weight increased. This was consistent with a cable, but not with anything else that anyone could conceive. But in the end, the increasing weight of it proved too much. When the hook was three quarters of a mile above the ocean floor, the rope snapped. Two more attempts with fresh grappling hooks ended the same way, until there wasn’t enough rope left aboard to touch bottom.

It had been a noble attempt, and had come tantalizingly close to succeeding, but there was nothing left to do now but mark the location with a buoy and sail back to Britain. “We thought you went down!” yelled the first journalist to approach the Great Eastern when it reached home. It seemed that, in the wake of the abrupt loss of communication with the ship, a rumor had spread that it had struck an iceberg and sunk.



Although the latest attempt to lay a transatlantic cable had proved another failure, one no longer had to be a dyed-in-the-wool optimist like Cyrus Field to believe that the prospects for a future success were very, very good. The cable had outperformed expectations by delivering a clear, completely usable signal from first to last. The final sticking point had not even been the cable’s own tensile strength but rather that of the ropes aboard the Great Eastern. Henry Field:

This confidence appeared at the first meeting of directors. The feeling was very different from that after the return of the first expedition of 1858. So animated were they with hope, and so sure of success the next time, that all felt that one cable was not enough, they must have two, and so it was decided to take measures not only to raise the broken end of the cable and to complete it to Newfoundland, but also to construct and lay an entirely new one, so as to have a double line in operation the following summer.

Nothing was to be left to chance next time around. William Thomson worked with the Telegraph Construction and Maintenance Company to make the next cable even better, incorporating everything that had been learned on the last expedition plus all the latest improvements in materials technology. The result was even more durable, whilst weighing about 10 percent less. The paying-out mechanism was refined further, with special attention paid to the task of pulling the cable in again without breaking it. And the Great Eastern too got a refit that made it even more suited to its new role in life. Its paddle wheels were decoupled from one another so each could be controlled separately; by spinning one forward and one backward, the massive ship could be made to turn in its own length, an improvement in maneuverability which should make grappling for a lost cable much easier. Likewise, twenty miles of much stronger grappling rope was taken onboard. Meanwhile the Atlantic Telegraph Company was reorganized and reincorporated as the appropriately trans-national Anglo-American Telegraph Company, with an initial capitalization of £600,000.

This time the smaller steamer William Corry laid the part of the cable closest to the Irish shore. On Friday, July 13, 1866, the splice was made and the Great Eastern took over. The weather was gray and sullen more often than not over the following days, but nothing seemed able to dampen the spirit of optimism and good cheer aboard; many a terrible joke was made about “shuffling off this mortal coil.” As they sailed along, the crew got a preview of the interconnected world they were so earnestly endeavoring to create: the long tether spooling out behind the ship brought them up-to-the-minute news of the latest stock prices on the London exchange and debates in Parliament, as well as dispatches from the battlefields of the Third Italian War of Independence, all as crystal clear as the weather around them was murky.

The Great Eastern maintained a slightly slower pace this time, averaging about five knots, because some felt that the difficulties of the previous expedition had partly resulted from rushing things a bit too much. Whether due to the slower speed or all of the other improvements in equipment and procedure, the process did indeed go even more smoothly; the ship never failed to cover at least 100 miles — usually considerably more — every day. The Great Eastern sailed unperturbed beyond the point where it had lost the cable last time. By July 26, after almost a fortnight of steady progress, the excitement had reached a fever pitch, as the seasoned sailors aboard began to sight birds and declared that they could smell the approaching land.

The following evening, they reached their destination. “The Great Eastern,” writes Henry Field, “gliding in as if she had done nothing remarkable, dropped her anchor in front of the telegraph house, having trailed behind her a chain of 2000 miles, to bind the Old World to the New.” A different telegraph house had been built in Trinity Bay to receive this cable, in a tiny fishing village with the delightful name of Heart’s Content. The entire village, dressed in their Sunday best, rowed out to greet a ship larger by almost an order of magnitude than any that had ever entered their bay.

The Great Eastern in Trinity Bay, 1866. This photograph does much to convey the sheer size of the ship. The three vessels lying alongside it are all oceangoing ships in their own right.

But there was one more fly in the ointment. When he came ashore, Cyrus Field learned that the underwater telegraph line he had laid between Newfoundland and Cape Breton ten years before had just given up the ghost. So, there was a little bit more work to be done. He chartered a coastal steamer to take onboard eleven miles of Thomson’s magic cable from the Great Eastern and use it to repair the vital span; such operations in relatively shallow water like this had by now become routine, a far cry from the New York, Newfoundland, and London Telegraph Company’s wild adventure of 1855. While he waited for that job to be completed, Field hired another steamer to bring news of his achievement to the mainland along with a slew of piping-hot headlines from Europe to serve as proof of it. It was less dramatic than an announcement via telegraph, but it would have to do.

Thus word of the completion of the first truly functional transatlantic telegraph cable, an event which took place on July 27, 1866, didn’t reach the United States until July 29. It was the last delay of its kind. Two separate networks had become one, two continents sewn together using an electric thread; the full potential of the telegraph had been fulfilled. The first worldwide web, the direct ancestor and prerequisite of the one we know today, was a reality.

(Sources: the books The Victorian Internet by Tom Standage, Power Struggles: Scientific Authority and the Creation of Practical Electricity Before Edison by Michael B. Schiffer, Lightning Man: The Accursed Life of Samuel F.B. Morse by Kenneth Silverman, A Thread across the Ocean: The Heroic Story of the Transatlantic Telegraph by John Steele Gordon, and The Story of the Atlantic Telegraph by Henry M. Field. Online sources include “Heart’s Content Cable Station” by Jerry Proc, Distant Writing: A History of the Telegraph Companies in Britain between 1838 and 1868 by Steven Roberts, and History of the Atlantic Cable & Undersea Communications.)

Footnotes
1 No relation to the much more comprehensive history of the endeavor which Henry Field would later write under the same title.