Andrew Piper and Chad Wellmon observe that a small subset of elite universities is disproportionately represented in the most prestigious journals in the literary humanities. This “epistemic inequality,” they write, “would surely be as undesirable as economic inequality. In fact, most of us would presume a relationship between the two.” No doubt they are closely related: We might begin with the fact that elite graduate programs pay better and require less teaching than their nonelite peers, thereby allowing more time for research. Such advantages ramify, and they surely have much to do with the end result: Ph.D.s from the top 10 graduate programs “account for just over half of all articles published.”
But Piper and Wellmon’s point isn’t simply that elite advantages in publication distribute prestige inequitably — it’s that they produce a damaged body of knowledge. “By limiting the circulation of ideas to a precious few institutional frameworks,” they suggest, the academy limits its ability “to create and share different kinds of knowledge, new kinds of knowledge, and more diverse kinds of knowledge.”
This is where the problem becomes most interesting, and arguably most urgent. How is humanities scholarship deficient, and how might greater institutional diversity correct its deficiencies? The authors decline to address these questions — in part because they forgo the necessary tools of interpretation and critical evaluation. Their emphasis on difference and novelty reflects a distrust of judgments of quality, which are, they write, “contaminated by the very networks of influence and patronage that produce [them].”
True enough, but can we do without them? If “scholarly notions of quality … are themselves products of the norms, practices, and values that organize the system,” then it is necessary to ask which norms are wanting and which are sound. If a leading journal like PMLA is deformed by its biases, we need to understand the nature of the deformation. But such critiques would necessarily involve judgments of value. Fixing our biases — and improving our work — requires more qualitative thinking about what makes scholarship significant, not less.
Instead, Piper and Wellmon offer data, or rather the idea of it. “The university is a technology,” they write. “Let’s treat it like one.” One can call any institution one likes — a town hall, an AA meeting, a tri-county soccer league — a “technology,” though it’s not clear what’s gained. What the authors seem to mean is that, as a “technology,” the university’s chief product — research — can be assessed algorithmically. They imagine “a new form of algorithmic openness, in which computation is used not as an afterthought or means of searching for things that have already been selected and sorted, but instead as a form of forethought, as a means of generating more diverse ecosystems of knowledge.” The technologism of this utopia has a slightly ominous ring: Whose forethought, exactly? It’s hard to say how their prescriptions might be institutionalized, but at first glance, as Stanley Fish observes, all of them appear liable to produce bizarre incentives.
More worrying, however, is the possibility that the deliberative work of judgment might be replaced by quantification: by citation numbers, by downloads or page visits, or by such brave new measures as “citational novelty” or indices of “public concern.” Piper and Wellmon dismiss what they call the “ideal of incalculability,” which they consider a mystification meant to preserve entrenched power structures. But the opposite of calculation is not superstition. And even if “incalculability” is sometimes invoked tactically to preserve an existing “concentration of power,” it remains a real, even indispensable, value for the humanities. Humanistic knowledge is not easily counted, nor is it in any simple way progressive or cumulative: What literary critics and historians and philosophers know arises through the collective work of argument, interpretation, and evaluation.
Piper and Wellmon know all this, of course. But they are dangerously cavalier about the threat that the enthusiasm for “data” poses. It is true, as they write, that notions of “epistemic quality” in the humanities are produced by institutional norms, practices, and values. Of course they are; so is humanistic knowledge tout court. That is why we cannot simply hand the task of filtering knowledge to computers. Piper and Wellmon might say that they know perfectly well that algorithms are no better than the people designing and using them. We worry that they are sometimes worse.
Elite institutional dominance is probably bad for the quality of humanities scholarship, but we need to know how. Algorithms will not specify those effects, or save us from them. Instead, they will be abused by administrators to produce program evaluations, and they will compel graduate students and early-career researchers to shape their work to the perceived constraints and desires of the new algorithms. They will almost certainly be a force for conformity, directing rather than enabling research agendas.
Moreover, the alleged need to replace the folk-knowledge of the discipline with a set of algorithms suggests a rather dim view of the basic competence of humanists to know what, in their own fields, matters. Isn’t this why humanists tend, quite rightly, to reject the importance of citation rankings and impact factors? We are less sanguine than Piper and Wellmon are about the innocence of algorithmic mediation, and we are not encouraged by their cheery invocation of Britain’s Research Excellence Framework, which, by tying funding to assessment outcomes, risks exacerbating epistemic and economic inequality alike.
The great promise of the digital humanities lies in the new horizons of interpretation opened up by new kinds of evidence. But numbers cannot tell us which interpretations matter. Scholars cannot take their cues from algorithms. When, for instance, feminist critics launched recovery projects around neglected women authors — projects that indeed “generat[ed] more diverse ecosystems of knowledge” — they didn’t need a quantitative rubric to tell them to do it. And they certainly don’t need numbers to tell them that the work is important. Such work, we suspect, would be far less compelling if performed in response to a filter for “citational novelty” rather than out of a passionate investment in its subject. We are not ready to outsource our critical imaginations to the bots.
Len Gutkin is a postdoctoral fellow at Harvard University. Sam Fallon is a visiting assistant professor of English at Wesleyan University.