Despite the ostensibly significant etymological link between "humanities" and "humane," humanities scholars, for some reason, love to eat their young. And in the event that market conditions make no young available, those scholars will happily start to eat themselves.
Over the past year or so, Stanley Fish has occasionally devoted his New York Times blog to the notion that, as he put it recently, higher education is "distinguished by the absence" of a relationship between its activities and any "measurable effects in the world." He has singled out the humanities for lacking what he called "instrumental value," writing that "the value of the humanities cannot be validated by some measure external" to the peculiar obsessions of scholars. The humanities, Fish claimed, do not have an extrinsic utility—an instrumental value—and therefore cannot increase economic productivity, fashion an informed citizenry, sharpen moral perceptions, or reduce prejudice and discrimination.
Unsurprisingly, many rose to the bait, and for a month or so, the professing classes bickered about the usefulness of the humanities. This argument always reappears during the recurring, if increasingly frequent, periods of public suspicion of humanities professors and their research. There seems to be an unstated (or, on occasion, quite loudly stated) feeling that humanities professors are somehow ... what ... trying to get away with something; that they are ... how shall we say ... trying to put one over on us. This sentiment reached its logical apex last year in an article in The New York Times titled "In Tough Times, the Humanities Must Justify Their Worth."
(Given that these "tough times" were almost single-handedly caused by graduates of our nation's business colleges, it seems that they, if anyone, should have to "justify their worth." But maybe it's just me.)
So when Fish claimed that the benefits of humanities research were limited to the researcher or the classroom, and that the public should therefore not have to "subsidize my moments of aesthetic wonderment," he was drill-baby-drilling into the zeitgeist quite nicely.
The real issue, as Fish concedes, is not whether art, music, history, or literature has instrumental value, but whether academic research into those subjects has such value. Few would claim that art and literature have no intrinsic worth, and very few would claim that they possess no measurable utility. Students at Harvard Medical School, for instance, like students at a growing number of medical schools across the country, now take art courses. Studying works of art, researchers believe, makes students more observant, more open to complexity, and more-flexible thinkers—in short, better doctors.
And bioethicists, working at least in part out of the discipline of philosophy, have been at the forefront of applied humanities since 1974, when Congress created the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. Since then, and especially after President Clinton established the National Bioethics Advisory Commission, in 1995, bioethicists have played a crucial role in policy debates arising in medicine, biotechnology, and the law.
While teaching Picasso or applying Kant probably involves some scholarly mediation, those examples do not prove the usefulness of humanities research so much as they prove the usefulness of the subjects of that research. Medical-school art classes aren't concerned with scholarship on Monet's paintings; they are concerned with Monet's actual paintings. So the question is this: Can academic analyses of art, philosophy, literature, or history—that is, academic research in the humanities—have recognizable utility?
In fact, humanities research already has instrumental value. That value, however, is rarely immediate or predictable. Consider the following examples:
In the 1970s and 1980s, the computer scientist Donald E. Knuth was struck by how designing computer software was essentially an aesthetic act, analogous to writing literature. As detailed in his book Literate Programming (1992), his work on computer languages was shaped by his engagement with the humanities. Knuth wrote two computer programming systems—WEB and, with Silvio Levy, CWEB—in part because he sought a language that would allow "a person to express programs in a 'stream of consciousness'" style.
"Stream of consciousness" was a phrase first used by William James, in 1890, to describe the flow of perception in the human mind. It was later adopted by literary critics like Melvin J. Friedman, author of the 1955 book Stream of Consciousness: A Study in Literary Method, who used the term to explain the unedited forms of interior monologue common in modernist novels of the 1920s. However, by his own acknowledgment, Knuth's innovations were most clearly influenced by the work of the Belgian computer scientist Pierre-Arnoul de Marneffe, who was in turn inspired by Arthur Koestler's 1967 book, The Ghost in the Machine, on the structure of complex organisms. And that book took its title and its point of departure from a key piece of 20th-century humanities research, Gilbert Ryle's The Concept of Mind (1949), which challenged Cartesian dualism.
There is, then, a visible legacy of utility that begins with research into Descartes and leads to important innovations in computer science.
Research in history and literary studies has also shaped the world of national intelligence. When the Office of Strategic Services (OSS, predecessor of the CIA) was established, in 1942, its director, William J. Donovan, staffed the agency with humanities professors. More than 50 historians alone were hired to develop the OSS's analytical methods. These scholars adopted the framework of humanities research—the footnote, the endnote, the bibliography, cross- and counter-indexing—to give order and form to the practice of intelligence analysis. That, in turn, enabled the OSS to do things like compile a list of foreign targets in order of importance on less than a day's notice.
James J. Angleton, who became chief of counterintelligence for the CIA, understood that the interpretive skills he had cultivated by studying works of literary scholarship like I.A. Richards's Practical Criticism (1929) and William Empson's Seven Types of Ambiguity (1930) could help create new methods of intelligence synthesis and information management. Research methods developed by humanities scholars, in sum, essentially invented the science of intelligence analysis.
What unites those stories is not that they exemplify times when humanities research has had instrumental value, but rather times when it has had unintended instrumental value. Those scholars did not intend, nor could they have anticipated, the applied value of their work. The application of their work, in other words, was never the point of their work. After all, scholars weren't studying Shakespeare with an eye toward establishing the CIA. Instead, research in the humanities, like research in all disciplines, is valuable precisely because we never know where new knowledge will lead us.
This principle was demonstrated recently by scientists at Los Alamos National Laboratory, who put together a graph demonstrating the complex web of influences among researchers in different disciplines. They tracked the reading patterns of nearly 100,000 online scholarly journals, charting when a researcher in one academic field cited an article in another academic field. The resulting graph—a "clickstream map of science"—looks like a wheel in which the hub is composed of humanities and social-science journals, and the rim is made up mostly of natural-science publications. The spokes are formed by journals from emerging interdisciplinary fields like alternative energy, human geography, and biodiversity. The graph suggests that the humanities act as a bridge among disciplines, sparking new ideas and areas of research. As the study's authors conclude, their findings correct the "underrepresentation of the social sciences and humanities" in outcomes of scientific research. So even if, as Stanley Fish argues, the humanities provide pleasures of "aesthetic wonderment," that's obviously not all they do.
One might reasonably point out that all of the historical examples above are from humanities research performed in the early to middle part of the 20th century. ("Where's the new stuff? Where's the frozen-yogurt technology inspired by Foucault's The History of Sexuality?") But that's part of the point. It takes decades to make sense of the present, and it is only now that the unintended contributions of 20th-century humanities research can be discerned. Forty years from now, people might look back on the 1980s and 1990s as a golden age of inadvertently useful humanities research. We simply don't know how or if such research might yet acquire use value.
Consequently, the issue of utility and intended outcomes is a thorny one. Fish hedges by claiming that he's talking about only a "direct and designed" relationship between humanities research and any ultimate instrumental value. But no research project, in any discipline, has a direct and immediate relationship between the academic procedures characterizing it as research and any eventual, extracurricular effects it might have. If it did, the researcher wouldn't be doing research, but rather something else entirely, like policy implementation or asset management. So if one wants to claim that humanities research has no immediately obvious nonacademic utility, I suppose that claim is basically correct. But making that argument is like hurdling a toothpick. The yawning gap between intended outcomes and eventual use value is one common to all research, regardless of discipline. That's what makes it research. We don't know what we don't know, and we also don't know how—if at all—what we learn might be used in the future.
This is nothing new. Examples of research with unclear instrumental value abound, in all disciplines. Scientists at the University of British Columbia have found that working in front of a blue wall (and not a red one) improves creative thinking. Scholars of business and sociology at the University of Nebraska and at Washburn University have discovered that female golfers often feel unwelcome on golf courses, in part because of the misperception that they are slow players. And civil engineers at the University of California at Davis have found that the kind of vehicle one purchases is determined partly by lifestyle considerations—status seekers, they learned, are more likely to buy expensive cars, while family-oriented people are more likely to buy minivans. Viewed superficially, that sort of research may seem obvious, or at least devoid of instrumental value. But its real usefulness is, paradoxically, that we don't yet fully know how it will be useful.
Americans have long appreciated the virtues of pure research. In an address to Congress advocating the establishment of the National Aeronautics and Space Administration, in 1958, President Eisenhower said the new agency would be necessary in part for national security—the Soviet Union had launched Sputnik the year before—but also for more-abstract reasons. He indicated that NASA would be a boon because of the "compelling urge of man to explore the unknown"; because it would increase "national prestige," which would then lead to additional "science and exploration"; and because it would create further "opportunities for scientific observation and experimentation which will add to our knowledge."
Even if you assume that Eisenhower's downplaying the military potential was a rhetorical ploy, it is striking how abstract his justification for NASA was—how rooted in pure research, ambiguity, and the pleasures of unknown outcomes—and how much that basic logic has persisted to this day. NASA costs a tremendous amount of money. But ask any fourth grader, or any adult, for that matter, about the purpose of NASA, about what it has produced, and you will very likely get some mumblings about "the effect of gravity on tomato seeds," or something about "satellite technology," or maybe just that "Tang is delicious." (Contrary to popular myth, NASA didn't actually develop Tang, Teflon, or Velcro, three useful inventions for which it is commonly credited.) But in the post-cold-war era, the point of NASA is not to acquire some questionable data about floating tomato seeds; the point of NASA is to learn new things. We go into space because we learn stuff, not because we get stuff. NASA is our greatest monument to pure research, and it is foolish to suggest that its importance can be determined on the basis of particular utilitarian outcomes.
We can't know the ultimate instrumental value of research in advance. But we perform that research anyway, because we have decided that, on balance, it is good to learn new things, whether or not they eventually lead to new technologies or other useful things. All researchers, NASA scientists and poetry scholars alike, possess an essential cluelessness about the ultimate outcomes of their work. Common to the act of research across all disciplines is the core principle of the unknown outcome: We don't know exactly what we're going to find out—and that is precisely the point. After all, if we knew in advance precisely how a research project would be useful, why would we need to do it at all?