Richard Jean So’s Redlining Culture: A Data History of Racial Inequality and Postwar Fiction (Columbia University Press) is one such work. Redlining Culture presents an ambitious thesis about racism in publishing. The prevailing story of postwar American literature, according to So, is of increasing diversity, in particular the rise of multiculturalism as a “defining feature of postwar American literary history.” In So’s view, this is strikingly incomplete. So is a professor of English and cultural analytics at McGill University, and his book is one of the more influential recent works in the burgeoning academic field of digital humanities. The story he tells is one that many readers are eager to hear. It is not often that academic works are published with accompanying articles in Literary Hub and The New York Times.
According to So, the underlying feature of postwar American literature was the “inertia of whiteness” — by which he means the predominance of white, male writers like John Updike, Philip Roth, and Saul Bellow in terms of awards, reviews, and prestige. From 1950 to 2000, 97 percent of books published by Random House were by white authors, 98 percent of best sellers were by white authors, and 91 percent of the major American book prizes, like the Pulitzer and the National Book Award, were awarded to white authors, in So’s accounting. Notable figures like Toni Morrison, who won both a Pulitzer (1988) and the Nobel (1993), and who, as an editor at Random House, significantly expanded its roster of African American authors, are not examples of any lasting shift but merely occasional exceptions.
To support these contentions, Redlining Culture presents us with — and this is So’s most novel contribution — a “data history” of postwar American fiction. But just what is a data history? Digital humanities, or DH, is the application of quantitative and computational methods of analysis to the humanities, especially to literature. Although not a new research program per se (early examples include the Index Thomisticus, a concordance of the works of St. Thomas Aquinas, which began in the 1940s under the auspices of IBM), DH has become one of the few areas of growth in the humanities in recent years. In presenting us with a data history of postwar American literature, So aims to empirically reveal ostensibly overlooked literary trends.
He does not succeed, in large part because of how narrowly he chooses his data, but also because of the faux-rigorous technical methods employed — methods that verge on fictional at times — and because, whenever So’s data generate results that don’t match his conclusions, he simply ignores them. The book is important, then, for what it suggests about the field it emerges from. What happens to the digital humanities when it’s not very sophisticated about determining what to count? What happens when the methods and models employed obscure the history being examined — especially when those methods and models are used incorrectly in the first place? Redlining Culture is an object lesson in the importance of respecting both the digital and the humanities part of the digital humanities. DH should aim as much at humanistic erudition and skepticism as it does at technical manipulation. Otherwise it’s a one-sided marriage with unhealthy long-term prospects, the fruits of which will always be deformed.
So presents us not so much with a history of American literature from 1950 to 2000 as with a quantitative analysis of Random House’s fiction catalog during those years, as well as a few other surveys of official and conventional high literary culture. But the very argument for why Random House is a good proxy for postwar literature in general — namely that it was “one of the largest and most powerful publishing houses in America” — reinforces the elitism it supposedly exposes. And this is precisely what should make us pause before conflating one publisher with a half century of literature. Random House is an excellent measure of a certain slice of high literary culture and publishing, but it’s for that very reason unrepresentative of other trends and innovations. In fact, we could venture to say Random House is as unrepresentative of publishing as, say, Harvard is of higher education.
Now don’t get us wrong: It’s entirely worth raging against the hidebound exclusivity and elitism that pervade our most prestigious institutions. But one of the largest problems with Redlining Culture is that So doesn’t grapple with the glaring fact that elite institutions like prize committees aren’t representative or innovative. They have a generally mediocre, if not poor, record of recognizing what will last, and their prestige is directly tied up with an aversion to new things. Ignoring this elemental dynamic — asking why the establishment has not changed quickly when establishments, especially of the cultural kind, are precisely among those slowest to change — allows So to give us a narrow and moralistic history of late-20th-century American literature.
In other words, white characters are people, and their representation changes; Black ones are stereotypes, and don’t, or at least not as much. Now in all too many ways, sadly, this portrait of American literature (and hardly only American literature) does ring true, and it is a legacy of discrimination. But it’s hardly simply a racial issue, or a legacy of “literary whiteness.” Religious, cultural, and ethnic minorities of all kinds would say they are stereotyped in the dominant discourse; that’s why it’s always a breakthrough when some mainstream writer actually presents their community in nuanced form. Assuming “literary whiteness” as the crux of the issue here misses how the phenomenon is tied to larger and longer majority-minority conflicts and divisions, and hardly unique to modern American or English literature.
Which brings us to the Jews. By So’s own analysis, the term most strongly associated with Black characters in the 1950s is “Frenchman,” and in the 1970s it’s “gentleman,” but in the ’60s, ’80s, and ’90s, it’s “Jew”! The analysis presented by So raises, of its own accord, something strange: Why is the term “Jew” so common when talking about Black characters? So includes this odd result in a chart accompanying his argument, but he doesn’t mention it at all. According to So, it’s the most commonly associated “similar term” for Black characters for the majority of the decades under examination, but since it doesn’t fit cleanly into the analysis of “whiteness,” he passes it by. (It is also especially striking given his later claim that Jews become white in this period, but one thing at a time.) If DH is going to be a helpful method or toolbox, it’s not going to be because quantitative analyses primarily generate corroborating evidence for one’s views, but because DH helps us to see and document things we hadn’t noticed before. But it won’t help if we ignore such evidence when it presents itself.
Speaking of Jews: In order to show the dearth of Black writers with the most cultural power or regard, So lists the “top 10 authors in the top 1 percent most reviewed titles,” a list headed by Philip Roth (over all, the list is 30 percent Jewish). The only Black member of this select group is Toni Morrison, with Alice Walker clocking in next, down at No. 47. Jews are, well, overrepresented. So tries to explain away this fact by imposing his overarching white/Black binary on it: “Jewishness articulates a specific type of ‘whiteness.’” Well, sure, some do say that, but others would very strongly disagree! (Unintentionally, So confirms the comedian and critic David Baddiel’s recent book, which touches on precisely this issue, namely that in many ways Jews Don’t Count.) But either way, why are Jews not “minority writers”? After all, Roth famously appeared in 1962 at a symposium that included Ralph Ellison and Pietro di Donato and was devoted to a topic that seemed urgent at the time, “A Study in Artistic Conscience: Conflict of Loyalties in Minority Writers of Fiction.” (The symposium, in fact, became notorious in Roth’s retelling, and served as one excuse for his own conflicted relationship with American Jewry.)
Either way, So ignores the real texture of debates around minorities in the period. One of the most memorable anecdotes in this regard comes from Saul Bellow, who had hoped to study English literature in graduate school but was told that, as a Jew, he would never have “the right feeling for Anglo-Saxon traditions, for English words.” This points to an essential missing piece of So’s narrative, namely the (declining) centrality of religion and ethnicity to literature. Depicting as part of the dominant ethnos authors whose careers consisted of raging fables about how they were on the outside of American society peering in, desperate and angry, misses one of the central tensions of postwar literature: It ignores the way a certain kind of Protestantism helped define who counted and who didn’t, who was Waspy enough and who wasn’t. Redlining Culture anachronistically chops up postwar literature into the academic categorizations of today rather than examining the terms and transformations of the period itself.
In his second chapter, So takes to task what he calls a “multiculturalism of the 1 percent” in order to show, among other things, how unequal the book-review practices of the literary world are in multiple respects (race, gender, ethnicity, etc.). He begins by gathering the most reviewed novels in English by Americans, as collected by the Book Review Index from 1965 to 2000 — the “top 1 percent of the most-reviewed books” — and finds that the list is 90 percent white authors. That sounds quite lopsided, until you realize that no figures have been provided for who is being published in the first place. If the novels being reviewed are a function of what is being published (and how could they not be?), this is fundamental. It is the baseline against which you need to measure things. But that simple point is passed over. Apparently in its stead, So provides a racial breakdown of the authors Random House published in a similar, but not identical, period — that number is 97 percent white. Since which books are reviewed is a function of which books are published, the books reviewed were actually much more diverse than the books published, or at least than Random House’s offerings. This is, of course, the opposite of the point the chapter wants to make, which is about bias among reviewers. And what is actually the case? Who knows? We’re never provided the demography of what was being published over all in the first place.
OK, now let’s get technical. (Not too technical, we promise.) The heart of the second chapter is an attempt to show that less attention was given to minority writers as opposed to white ones from 1965 to 2000. In order to demonstrate this claim, So introduces a set of intimidating terms, namely eigenvector centrality (EC) and the Gini coefficient. EC is a measure of how central or influential somebody or something is within a network; the Gini coefficient is a measure of inequality in a system, generally applied to income or wealth figures. In network analysis, networks are composed of nodes, and the connections between them are called edges. The eigenvector centrality of a node is a value assigned to each node that attempts to rank the node’s relative “importance.”
As his first step, then, So takes the authors of the top 1 percent most-reviewed books (that is, by simply counting which books received the most reviews), and connects the authors, who are the nodes, to each other by linking them if they have been reviewed in the same publication. For example: Philip Roth, node, and Alice Walker, node, both reviewed in The New York Times Book Review — connection! Using this graph, So computes the EC scores (remember, the value of how central something is within a network according to this metric) for this network, and in so doing obtains a ranking of the authors. The higher the EC score, the more importance was granted the author in question. Philip Roth is No. 1 out of 1,003 authors (he would no doubt be quite proud), with a score of .0697; Toni Morrison is No. 2 and is the only Black woman to crack the top 10, with an EC score of .0692. (We’re using these clunky numbers for a reason, we promise.) From a purely mathematical perspective, nothing about this so far is alarming. What So does next, however, certainly is.
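The construction So describes can be sketched in miniature. The following is a toy illustration only, not So's data or code: the venues and the tiny review lists are invented, and eigenvector centrality is computed by plain power iteration on the adjacency matrix.

```python
# Toy review data (invented for illustration; not So's data set).
# Authors are nodes; two authors are linked if reviewed in the same venue.
reviews = {
    "The New York Times Book Review": ["Philip Roth", "Alice Walker"],
    "The New Republic": ["Philip Roth", "Toni Morrison"],
}

# Build the undirected adjacency matrix: link co-reviewed authors.
authors = sorted({a for names in reviews.values() for a in names})
index = {a: i for i, a in enumerate(authors)}
n = len(authors)
adj = [[0] * n for _ in range(n)]
for names in reviews.values():
    for a in names:
        for b in names:
            if a != b:
                adj[index[a]][index[b]] = 1

# Eigenvector centrality by power iteration: repeatedly multiply the
# score vector by (A + I) and renormalize. The identity shift leaves the
# dominant eigenvector unchanged but prevents oscillation on bipartite
# graphs; the limiting vector's entries rank the nodes by "importance."
scores = [1.0] * n
for _ in range(100):
    nxt = [scores[i] + sum(adj[i][j] * scores[j] for j in range(n))
           for i in range(n)]
    norm = sum(v * v for v in nxt) ** 0.5
    scores = [v / norm for v in nxt]

ranking = sorted(zip(authors, scores), key=lambda t: -t[1])
```

In this three-author toy network, the author reviewed in both venues sits at the center of the graph and receives the highest score, which is all the EC ranking captures: centrality within the co-review network, not any direct measure of quality or attention.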
Next, So splits the EC values (like Roth’s and Morrison’s scores) into those corresponding to the white authors and those corresponding to the Black authors, and attempts to measure the relative inequality among each of those two groups. (Remember, all the authors are in the top 1 percent of most-reviewed writers in the first place.) But in order to “make them comparable,” So transforms said values using the MinMax Scaler function (don’t worry about the name) so that they all lie in the range of 0 to 1. Then he computes the Gini coefficient, a measure of inequality, and claims that inequality among white authors is 0.256 and for Black authors is 0.329. (In other words, there’s more inequality for Black authors.) Strikingly, in claiming the distribution of EC scores of Black authors is markedly different from that of white authors, So doesn’t even present the sample mean or the sample standard deviation of these scores. Nor does he plot a histogram (a bar plot that depicts the distribution of data). In any principled analysis comparing two distributions, such basic computations should invariably come well before the use of the Gini coefficient.
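The elementary computations So skips are one-liners. A sketch with invented score values (these are not So's numbers) of the summary statistics one would report before reaching for any inequality index:

```python
import statistics

# Hypothetical EC scores for two groups (illustrative values only, not
# So's data). Before comparing distributions with a Gini coefficient,
# the basic step is to report each group's size, mean, and spread.
group_a = [0.0697, 0.0512, 0.0344, 0.0201, 0.0150]
group_b = [0.0692, 0.0308, 0.0119, 0.0087]

for name, vals in [("group A", group_a), ("group B", group_b)]:
    mean = statistics.mean(vals)
    sd = statistics.stdev(vals)
    print(f"{name}: n={len(vals)} mean={mean:.4f} sd={sd:.4f}")
```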
The principal problem here is that So’s application of the MinMax Scaler function is conceptually incorrect and imposes significant bias on the data. In other words, the use of this tool is not only gratuitous but also distorting. One can, for example, compare the inequality of a list of salaries expressed in dollars to that of a list of salaries expressed in pounds without using any such tool. Furthermore, the transformation that So employs makes the data absurd. Here is an example of how this can happen: Suppose you were considering the salaries of 11 employees of a company. Suppose their salaries were 90K, 91K, 92K, … , 99K, and 100K. In this case, the Gini coefficient of those 11 numbers is about 0.02. This is a small degree of inequality, which makes sense, since the salaries are fairly similar. If one erroneously first performs the MinMax Scaler transformation, as in So’s methodology, then those 11 numbers would be transformed to 0.0, 0.1, 0.2, 0.3, … , 0.9, and 1.0. Got it? The 90K becomes 0 in this scale, since it’s the lowest value in a scale where everything is between 0 and 1. There would be an employee making nothing! Using this new scale, with 0 as one of the salaries, the Gini coefficient of the 11 numbers is about 0.36. A bigger Gini coefficient means more inequality — and 0.36 is a lot bigger than 0.02. Using the MinMax function would lead one mistakenly to conclude that the inequality in salaries was much more substantial than it truly is.
To further grasp the absurdity, consider a second company, whose employees have salaries of 10K, 11K, 12K, … , 19K and 20K. Now the top earner makes twice what the bottom earner makes. Not surprisingly, the Gini coefficient for this company, which is 0.12, is much higher than the 0.02 for the first company. However, using So’s methodology, these salaries would be transformed to identical figures, as above, and thus they would be considered identical in their level of inequality.
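The two-company salary example can be checked directly. The sketch below uses the standard pairwise-difference definition of the Gini coefficient (which may differ in detail from So's implementation) and a minimal MinMax scaler:

```python
def gini(xs):
    # Standard definition: mean absolute difference over all pairs,
    # divided by twice the mean.
    n, mu = len(xs), sum(xs) / len(xs)
    diff = sum(abs(a - b) for a in xs for b in xs)
    return diff / (2 * n * n * mu)

def minmax(xs):
    # MinMax scaling maps the smallest value to 0 and the largest to 1.
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

company_1 = [90 + i for i in range(11)]   # salaries 90K ... 100K
company_2 = [10 + i for i in range(11)]   # salaries 10K ... 20K

print(round(gini(company_1), 2))          # ~0.02: salaries are similar
print(round(gini(minmax(company_1)), 2))  # ~0.36: scaling invents a zero earner
print(round(gini(company_2), 2))          # ~0.12: genuinely more unequal
print(round(gini(minmax(company_2)), 2))  # ~0.36 again: same scaled list
```

Both companies collapse to the identical scaled list, 0.0 through 1.0, so the transformation simultaneously manufactures a phantom employee earning nothing and erases the real difference in inequality between the two firms.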
This error is magnified in smaller samples, and that seems to be what happened here in comparing the inequality between white authors (of whom there are more) and Black ones (of whom there are fewer). The inequality being examined is, to a degree, a result of the technique employed.
We contacted Richard Jean So in order to examine the data. So told us that he had bought and curated the Book Review Index data set with a colleague, and that they had agreed not to publicize the data until the latter’s book was finished. When we asked for an unlabeled abstract graph in order to compute the eigenvector centrality scores (a measure of the importance of the nodes within the graph) and confirm that we get the same numbers, So ignored our request. Put simply, if he had sent us what we’d asked for, we could have done a kind of statistical audit of his findings. Indeed, we had hoped to show that the inequality he claimed was present was indeed there, but to do so in a statistically rigorous fashion. Without So’s sharing even the anonymized data, we are left without any evidence to support his conclusion.
Without access to the precise network that was used for the analysis, one cannot make a direct comparison. Nevertheless, we think the following experiment is telling. We created a network that had many of the same features as the authorship network So describes. Recall So had found that, by breaking the data into Black versus white authors and (incorrectly) applying the MinMax Scaler function, he ended up with higher levels of inequality for Black authors. As far as we can tell, his data drew from 100 Black authors and 903 white authors. Our hypothesis was that his method had the tendency to make the sample size of 100 yield a higher Gini coefficient merely due to its smaller size. And that is what we found: In 91.12 percent of the 10,000 trials we ran, the Gini coefficient was higher for samples of 100 than for samples of 903. This is hardly surprising; as we have already observed, smaller samples exhibit more bias when one first applies the MinMax Scaler transformation. Thus, in comparing two statistics drawn from these two different populations, care must be taken so as to make an appropriate comparison. In this case, such care seems not to have been taken. This is, bluntly, Stats 101.
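An experiment of this kind can be approximated in a few lines, with the caveat that we do not have So's network or the real distribution of his EC scores. The sketch below simply draws 1,003 tightly clustered positive "scores" (an assumed, illustrative distribution), splits them at random into groups of 100 and 903, MinMax-scales each group separately, and compares the resulting Gini coefficients; it uses 1,000 trials rather than 10,000 for speed, and an equivalent sorted-list formula for the Gini coefficient.

```python
import random

def gini(xs):
    # Gini coefficient via the sorted-list formula, O(n log n):
    # G = 2 * sum(i * x_(i)) / (n * sum(x)) - (n + 1) / n.
    n, xs = len(xs), sorted(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return 2 * cum / (n * sum(xs)) - (n + 1) / n

def minmax(xs):
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

random.seed(0)
trials, small_higher = 1000, 0
for _ in range(trials):
    # 1,003 clustered scores (an assumed distribution, chosen only to
    # mimic EC values that sit close together), split into groups of
    # 100 and 903 to match the article's author counts.
    pool = [random.gauss(1.0, 0.01) for _ in range(1003)]
    small, large = pool[:100], pool[100:]
    # Scale each group separately with MinMax, as in the method under
    # critique, then compare the two Gini coefficients.
    if gini(minmax(small)) > gini(minmax(large)):
        small_higher += 1

print(f"smaller group showed the higher Gini in "
      f"{small_higher / trials:.0%} of trials")
```

The mechanism is simple: the Gini coefficient is unchanged by rescaling but not by subtraction, so subtracting the minimum multiplies it by mean/(mean − min). Smaller samples have less extreme minima, hence a larger inflation factor, so under these assumptions the smaller group shows the higher "inequality" in most trials, purely as an artifact of the scaling.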
The plain-language takeaway from all this: Although evidence of inequality may be in the data, So’s analysis is so misleading that one cannot make a conclusion one way or the other. We had hoped that, had he shared the data with us and such inequality was present, we could have demonstrated it in a principled fashion.
If what one wants to measure is the amount of attention paid to authors by race, it is not at all clear that the abstruse techniques So deploys illuminate more than would simply adding up the number of reviews of certain authors. At least the simplicity of that measure would have the benefit of clearly bringing to the fore the benefits and drawbacks of such calculations.
The model So presents implies that coverage or attention or inclusion is a zero-sum game. “In a system where recognition is finite, there can be no other way. Every time the gatekeepers decide to push someone up, they must, however invisibly, push someone else down.” Sure, often. Yet there are plenty of examples of writers who receive attention partially due to the fact that they are part of a coterie or trend: from the “angry young men” of the ’50s to the Beats to the New York School poets. The rise of a particular figure may simultaneously “push down” another figure but also bring up a few others. In fact, if we’re going to start pilfering from economics, we could look at the way firms and stores sometimes cluster together to suit consumer preferences — an observation stemming from the foundational work of Harold Hotelling almost a century ago. There is a reason the Gini coefficient is generally employed for objects that come in clearly defined units, like income, while it is wickedly mocked by many economists, and statisticians universally urge caution when using it to compare two populations. Which technical tools one chooses can help shape the evidence and data one has to work with.
Good statistical models offer probabilities, nuances, and qualifications, not the hard and fast certainties that most people, including many in DH, associate with mathematics. Redlining Culture, to its credit, gathers an enormous amount of new data. But the most important part of DH research often depends on how you parse and define items in the first place. That work requires a broad historical sense and the interpretive capaciousness associated with the “humanities” half of the digital humanities.
To abandon that sense is interpretively irresponsible. Literature is not demography, nor is it politics, even if it is quite often political. Progress, at least when it comes to cultural production, becomes lasting not when one is trying to join the reigning establishment, or lamenting how exclusionary it is (it so often is!), but, to quote that anti-Semite Ezra Pound, when one seeks, in the first place, to make it new.