Over the past 15 years, the humanities have undergone dizzying changes. Scholars are now blogging, learning to code, writing collaboratively, and mining vast digital libraries. Many of these changes are bound up with computers, and observers often characterize them collectively as “digital humanities.” But so far, digital humanities hasn’t become a separate field or even a distinct school of thought. The term is a loose label for a series of social and intellectual changes taking place in humanistic disciplines.
Disciplinary change is always controversial, and the attack on DH has become a recognized genre. Timothy Brennan’s recent article, “The Digital-Humanities Bust,” is the latest of these pieces. What troubles him about DH, he says, is that it harbors “an epistemology.”
Brennan is willing to accept that computers can help with strictly linguistic problems: “compiling concordances,” for example, or “deciphering Mayan stelae.” But he dismisses the idea that they can help address the core questions of the humanities. He writes:
The interpretive problems that computers solve are not the ones that have long stumped critics. On the contrary, the technology demands that it be asked only what it can answer, changing the questions to conform to its own limitations. These turn out to be not very interesting.
Why can computers advance other fields of knowledge, but not the humanities? Brennan’s central assumption is that humanists, in particular literary critics, have little to gain from numbers. Our core questions are purely “interpretive,” and a computer’s mode of expressing those questions could only confine them or transform them into “empty signifiers,” in Brennan’s words.
This assumption is based on an oddly selective history of the humanities. It obviously doesn’t hold true for economic or social history. But even literary study — admittedly an interpretive discipline — has long turned to numbers to address questions of degree and difference.
For instance, as Nicholas Dames has shown, Victorian critics used numbers to measure and understand the physiology of reading. In the middle of the 20th century, scholars gathered quantitative evidence about publishing in order to connect the history of literary form to material changes in production and distribution. By the 1980s, feminist scholars like Gaye Tuchman were building regression models to explain the unequal distribution of Victorian literary fame. For that more complex question of degree, they needed computers.
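Tuchman's approach can be made concrete with a small sketch. The records and column names below are invented for illustration; only the logic is hers: a logistic regression asking whether gender predicts literary recognition once productivity is held constant.

```python
# A hedged sketch of Tuchman-style regression. Each row is an invented
# Victorian novelist; the model estimates how gender and productivity
# relate to the odds of literary recognition.
import pandas as pd
import statsmodels.formula.api as smf

authors = pd.DataFrame({
    "recognized": [1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0],  # listed in a reference work?
    "is_male":    [1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0],
    "novels":     [9, 8, 3, 5, 2, 7, 6, 4, 3, 7, 5, 2],  # books published
})

# Logistic regression: log-odds of recognition as a function of
# gender, holding productivity constant.
model = smf.logit("recognized ~ is_male + novels", data=authors).fit(disp=0)
print(model.params)
```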
Much of what is now happening under the aegis of digital humanities continues and expands those projects. Scholars are still grappling with familiar human questions; it is just that technology helps them address the questions more effectively and often on a larger scale. We can now confirm, for instance, that the inequalities Gaye Tuchman described in the Victorian era continued to worsen all the way to the 1970s. (Men and women were writing English-language fiction in roughly equal numbers in the 1860s; by the 1960s, the ratio was three to one.)
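The arithmetic behind that comparison is easy to reproduce. Here is a minimal sketch, with a handful of invented records standing in for real bibliographic metadata, of how one might compute the ratio of male- to female-authored fiction by decade.

```python
# Invented records standing in for bibliographic metadata: one row
# per work of fiction, with publication year and author gender.
import pandas as pd

books = pd.DataFrame({
    "year":   [1861, 1864, 1866, 1868, 1962, 1965, 1968, 1969],
    "gender": ["f", "m", "m", "f", "m", "m", "m", "f"],
})

books["decade"] = (books["year"] // 10) * 10
counts = pd.crosstab(books["decade"], books["gender"])  # titles per decade, by gender
counts["m_to_f"] = counts["m"] / counts["f"]            # ratio of men to women
print(counts)
```

Run over real metadata for tens of thousands of titles, the same few lines of grouping and division yield the trend described above.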
The range of questions that can be explored with quantitative methods is much wider than Brennan acknowledges. A Ph.D. student in classics at the University of Iowa, Caitlin Marley, is using textual analysis to study emotion in Cicero’s works and letters. Computers allow her to measure the sentiment of the language Cicero uses with particular correspondents and to visualize those patterns, providing a new understanding of Roman communication networks and of a figure whom people have been studying for more than 2,000 years. Similar analysis has shown the effect of social networks on the exile of clerics in late antiquity.
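A toy version of that kind of per-correspondent sentiment analysis might look like the sketch below. The letters, recipients, and lexicon are invented placeholders; real work on Cicero would use Latin texts and a purpose-built sentiment lexicon.

```python
# A toy per-correspondent sentiment analysis. The lexicon and the
# "letters" are invented; the point is the shape of the method:
# score each letter, then aggregate by recipient.
from collections import defaultdict

lexicon = {"joy": 1, "dear": 1, "hope": 1, "grief": -1, "anger": -1}

letters = [
    ("Atticus", "dear friend my hope and joy return"),
    ("Atticus", "grief weighs on me yet hope remains"),
    ("Brutus",  "anger and grief fill these days"),
]

scores = defaultdict(list)
for recipient, text in letters:
    score = sum(lexicon.get(word, 0) for word in text.split())
    scores[recipient].append(score)

# Average sentiment per correspondent.
for recipient, vals in scores.items():
    print(recipient, sum(vals) / len(vals))
```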
For a different example, we need only look at the cover of The Chronicle Review a week before Brennan’s critique. Andrew Piper and Chad Wellmon’s “How the Academic Elite Reproduces Itself” analyzed contemporary evidence about publishing patterns in the humanities in order to raise questions about institutional hierarchy and inequality.
Critics of digital humanities will very likely counter that these examples use computers to study social questions (How many letters were sent? How many articles were published?), not purely interpretive ones. What can computers possibly have to say about the life inside books? Here, skepticism is understandable, because change has been rapid. It used to be true, as Brennan fiercely insists, that 1,700 occurrences of the word “whale” could mean nothing more or less than “the appearance of the word ‘whale’ 1,700 times.”
But as statistical models get better at describing complex, blurry relationships, scholars have learned to do more than count single words. They now use words to build models that represent phenomena like genre, character, and literary judgment. This kind of modeling cannot be reduced to mere word-counting: Models of literary patterns are shaped by social evidence, as well as by the cultural knowledge of the scholars who build them.
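To make the distinction concrete, here is a minimal sketch, with an invented four-snippet corpus, of a model that weighs every word as evidence about genre at once, rather than tallying a single keyword.

```python
# A minimal genre model built from word counts. The tiny corpus and
# labels are invented; real models are trained on thousands of texts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "the detective examined the corpse for clues",
    "a clue led the inspector to the murderer",
    "the lovers met secretly beneath the moon",
    "her heart ached with longing and devotion",
]
labels = ["mystery", "mystery", "romance", "romance"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)        # each text becomes a word-count vector
model = LogisticRegression().fit(X, labels)

# The model weighs every word at once, not just one keyword.
test = vectorizer.transform(["the inspector found a clue"])
print(model.predict(test))                 # expected: ['mystery']
```

Counting “whale” 1,700 times tells us one thing; a model like this lets thousands of words jointly bear on a question, which is why it can capture blurry phenomena like genre.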
In his discussion of Hoyt Long and Richard Jean So’s article “Literary Pattern Recognition” (2016), Brennan acknowledges that models of genre might identify formal features that distinguish haiku from other short poems, but he dismisses this as “pointless” because, to his mind, “We already knew that.”
We agree. Rediscovering 5-7-5 would be a pointless exercise. But defining the haiku form was not even remotely the point of that article. Reading it more carefully, we learn that the authors stage the very theoretical conversation that Brennan defers — a conversation between computer-aided interpretation of literary form and more-familiar critical practices. The authors situate their computational model of the haiku within a longer history of interpretive practice. At the same time, they argue that their model supplements older practices by extending the literary critic’s ability to read haiku across different geographical and social contexts in ways that blur any fixed definition of the form. The payoff is an expanded history of the haiku that challenges our thinking on how literary forms circulate, where they circulate, and why — all questions “that have long stumped critics.”
The approach developed in “Literary Pattern Recognition” has since been used to explore the nature of fiction, the life cycles of literary genres, and the gendered assumptions that shape the representation of character. Those questions, long central to literary history, can now be explored on a new scale. Nor is literary history the only field in which computers are making a difference. Geographic information systems have enhanced social-justice initiatives by unveiling the spatial patterns of lynching, urban segregation, and “white flight.” Expanding the scope of our knowledge needn’t make scholars less critical of power.
“Digital humanities” describes not only new research methods; the term also covers new forms of public outreach that have affected museums, journalism, and libraries as much as academic departments. The recent map-a-thons that used open-source mapping to aid the Red Cross with hurricane relief were not just building “smooth or attractive delivery systems,” as Brennan puts it, for the ideas of professors. On the contrary, critical literacy in information technology allows humanists to teach and collaborate with the public in unprecedented and meaningful ways.
Critiques of disciplinary change, digital or otherwise, are always useful. But the most valuable criticism of digital approaches to date has come from scholars who try to understand the methods they are criticizing. Neither technophilic enthusiasm nor uninformed caricature contributes much to this goal. What we need is a conversation that recognizes the continuity of past and present, and that situates new approaches to culture in a longer history of human curiosity and theoretical debate.