In 1936, the British logician Alan Turing imagined a universal computing machine. In the wake of World War II, at Princeton's Institute for Advanced Study, a team of mathematicians and engineers built one.
The machine stood roughly the size of four refrigerators. People called it Maniac, for Mathematical and Numerical Integrator and Computer. At its heart was "a 32-by-32-by-40 bit matrix of high-speed, random-access memory—the nucleus of all things digital ever since," writes George Dyson in a new book, Turing's Cathedral (Pantheon Books).
How that computer came to be, he says, is the story of "a deal with the devil." Mathematicians built a machine that helped create the hydrogen bomb. In exchange, they got a new breed of computer that enabled incredible scientific progress.
Son of the physicist Freeman Dyson, who arrived at the institute in 1948, George Dyson played with scrapped parts from the computer project as a boy. He went on to an unusual career that included dropping out of high school; running a kayak business; and writing books about various topics that piqued his interest, such as Russian kayaks, computers, and nuclear weapons. The Chronicle reached Mr. Dyson at his home in Bellingham, Wash. An edited version of the conversation follows.
Q. Turing's Cathedral took you 10 years to write—which, as you point out in the book, is longer than it took to build the computer that your story chronicles. What led you to this project, and why does the story matter today?
A. We didn't really have a coherent common creation myth of how this began. If you ask most people where this digital universe started, you'll get answers ranging from Apple to IBM to Charles Babbage—no clear lineage of how we ended up where so much of what we do is actually controlled by this world of numbers. The other answer is simply that it was a very interesting story. It turns out that the people who really made this happen led extremely fascinating lives.
Q. One reviewer called the lead character of your book—the Hungarian-American mathematician John von Neumann—"the Steve Jobs of early computers." Tell me about him and the other early "hackers" on his team.
A. As we all should know, Steve Jobs did not actually invent anything. He didn't invent the Apple computer. It was a completely standard existing design, and the operating system it uses is Unix; there's nothing new there. He just had this genius for putting it all together and marketing it. And in a way that's true of von Neumann also. Most of these fundamental ideas about computing were in the air. But he was the guy who grabbed them and had the connections to put it all together and essentially market it, to the government and to IBM, in a way that nobody else did.
The Institute for Advanced Study had no laboratories, and was sort of the peak of the ivory tower. People were there to think great ideas, but not to build things. So von Neumann brought these engineers into this theoretical paradise against very strong objections: We don't want dirty engineers bringing wires and soldering guns and machine tools. Physically, they came into the building where all these great historians of classical art and so on were working, so there was great animosity. The great tradition at the institute, which still goes on today, is tea at three o'clock. Oppenheimer [director from 1947 to 1966] said that tea is where we explain to each other what we do not understand. And the computer people started showing up at tea and stealing sugar. There's a wonderful memorandum from the director, complaining to von Neumann that he needs to restrain his computer people and keep them under adult supervision.
The other important thing is that very few of these people were actually Americans. Most were imported from Eastern Europe and England. In today's world, they would have a hard time getting visas. They were just scraping by from one of what we would now call postdoc appointments to another. In today's world, we would send them back. You don't have a full-time professorship—sorry, you gotta go back to Poland.
Q. Technically, what was the novelty of the machine they built?
A. This was not in any way the first computer. What's important is that this digital world, in a technical sense, is an address matrix of numbers. That's how you can go to a Web site and get any document you want. We have this massive global index of digital material. And this was the beginning of it. This small group of people in Princeton created this address matrix that was 32-by-32-by-40 bits. That tiny nucleus still exists—just greatly expanded—and it's the matrix in which we all live now. That's the source of this digital world we live in. Not so much the machine itself but simply the abstract idea that everything would be given a numerical location. Sort of like the ZIP-code system. We take for granted now that every address has a ZIP code, but that was a very new idea at that time.
It was one of the first machines that could run software, and with this one machine you could do anything. In a very few years, they solved a number of big, important problems without changing the machine at all, just by changing the codes. It could do this because it had, essentially, a memory that could be expanded indefinitely. Even though the machine had a very small amount of memory—in modern terms, 5 kilobytes—you could add to that address space by punching out on cards or writing to tape. It was a fundamentally new idea that you could build this very small machine that could do an infinite number of things and remember an infinite number of things. Essentially, it was Alan Turing's idea. But they made it physically real.
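The stored-program idea Dyson describes here—instructions and data living side by side, as numbers at numeric addresses in one memory, so that changing the numbers changes the program without changing the machine—can be sketched in a few lines of Python. This is a toy illustration only; the opcodes below are invented for the example and are not the institute machine's actual order code.

```python
# Toy stored-program machine: one flat memory of numbers holds
# both the instructions and the data they operate on.
# (Invented opcodes for illustration, not the IAS machine's real ones.)

def run(memory):
    """Execute instruction pairs (opcode, address) from address 0 until HALT."""
    acc, pc = 0, 0                      # accumulator and program counter
    while True:
        op, addr = memory[pc], memory[pc + 1]
        pc += 2
        if op == 0:                     # HALT: stop, return accumulator
            return acc
        elif op == 1:                   # LOAD: acc = memory[addr]
            acc = memory[addr]
        elif op == 2:                   # ADD: acc += memory[addr]
            acc += memory[addr]
        elif op == 3:                   # STORE: memory[addr] = acc
            memory[addr] = acc

# The same machine runs a different program if you put different
# numbers in memory. Addresses 0-7 hold code; 8-10 hold data.
mem = [1, 8,     # LOAD  from address 8
       2, 9,     # ADD   from address 9
       3, 10,    # STORE to  address 10
       0, 0,     # HALT
       2, 3, 0]  # data: 2, 3, and a slot for the result
result = run(mem)   # computes 2 + 3 and stores it at address 10
```

The point of the sketch is the one Dyson makes: the hardware never changes; only the numbers in the address matrix do, and because code is itself just numbers in that matrix, one small machine can do an unbounded variety of things.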
Q. How does the hydrogen bomb come into this?
A. The hydrogen bomb was the driver behind this, because that was the big problem that desperately needed answering and that required a completely different kind of computer. It was just too big a problem to run on any of the existing machines. We forget the time pressure that was going on. In August 1949 the Soviets exploded their first atomic bomb. That was a little bit like Sputnik—it was a big shock in the United States. We thought we were way in the lead with nuclear weapons. At that time nobody knew whether it was possible to build a hydrogen bomb that would work. But if it was possible, it would be very, very frightening if the Russians pulled ahead and built hydrogen bombs. And so that question, which was really an abstract question of mathematical physics, suddenly had huge political implications. That was really why von Neumann was able to get so much support from the government to build this machine: it was the machine that could answer the question of whether a hydrogen bomb was possible or not. You could answer that by essentially running the physics in this artificial digital universe. It was created to simulate the physics of the real universe. Of course, once you did that to answer the bomb question, you had a machine that could answer all kinds of other questions.
Q. What was it like coming of age in that environment, around the computer project? And how did that upbringing affect your own trajectory?
A. I was born the year after it became operational. It shut down when I was 4 or 5. But it was very much a mythical presence there at this institute where my father worked. So the remains of it were still there, and I grew up playing with pieces of it that had been scrapped and put in a barn. It was just fascinating. A kid like me who liked real things and machines and stuff that you could take apart—nothing else at that institute was accessible to a child at all.
You could very much differentiate the people. It's sort of like how a cat can spot cat people. Some of these great famous people loved children, like Edward Teller [physicist who championed the hydrogen bomb]. And some were very stuck up and didn't like children. Einstein had died when I was 2 years old, but he was still sort of mythical: They preserved his office, and Helen Dukas, his secretary, was still working, sorting out all his papers. She did not have children, so she sort of adopted my younger sisters, and was really their babysitter. I made her life difficult by interrupting her a lot.
It left me with a very skewed view of what higher education was. This institute—it had no preconceived requirements of what it took to be invited there, other than that you had some problem that you wanted to work on. I never finished high school. I tried to finish, and very carefully fulfilled all the academic requirements in three years so I could leave. I was just bored. And then there was sort of an evil, authoritarian assistant principal who decided that I didn't have four years of physical education and therefore could not graduate. My older sister got married in Vancouver; I went up there for the wedding, saw a job on a boat, and became a boat builder. Never looked back. I took it for granted that it was completely normal to work on whatever interested you.
Q. What was the role of openness in spreading this work?
A. The decision was made not to patent anything, and to publish all the blueprints and specifications—everything that nowadays would be most secret and proprietary was put in the open and published and freely circulated. That's really why this particular design became the one that, if you take apart any microprocessor today, you'll find is essentially an exact functional copy of this original institute machine. That decision was, on the one hand, very altruistic. On the other hand, it was extremely good for IBM. And the dark side of that is that von Neumann was actually being paid as a private consultant for IBM, so you could say there was really a conflict of interest.
What I think is most frightening, and most the concern of your audience, is that universities are among the worst actors in the situation today. Universities are putting all these proprietary controls on their own work. In most universities now, if a professor wants to do research that has any conceivable practical results, they sign patent agreements, and the university pursues the patents and tries to license them. Maybe it's to the good of the endowment of the university, but is it to the good of research and science? It's not at all clear.
Q. At the end of the book, you seem a bit down on the future.
A. We have to be eternally vigilant. Our job, as it was with nuclear energy or drugs or any of these amazing technologies with power, is to make sure they are used for the good of humanity, not for evil. In the case of computers particularly, the jury is still out. And the story I try to tell is how this computer very specifically originated from what can be very accurately described as a deal with the devil that von Neumann made: if the mathematicians built this machine that could help build the hydrogen bomb, which could destroy all life on earth, they would then get a machine that could make amazing progress in science. We haven't used the hydrogen bombs. It's sort of like we escaped. And it seems that we got all these wonderful things from computers. We all have our iPads and our iPhones and things we never dreamed of. But we have to be careful, because computers likewise can be a tool for liberating people or for controlling people. That was seen at the beginning, by people like Norbert Wiener [American mathematician], as a real threat. That could still turn out to be true. Tomorrow a totalitarian government could take over the world and control everybody through computers. I'm not saying that's going to happen, but we need to be aware of that.