To the Editor:
In my original article, I demonstrated, using simple data analysis, that FIRE’s college free speech rankings are arbitrary and misleading (“Harvard Last in Free Speech? Don’t Trust FIRE’s Rankings,” The Chronicle Review, February 16). In their response, FIRE attempts to defend those rankings using the same flawed approach they used to construct them in the first place.
Even though it is about universities, this is no mere academic debate. These rankings have been weaponized to attack universities for political gain, so it is important to set the record straight. In their response, FIRE includes the trite point that “sunlight is the best disinfectant.” Who could disagree with this cliché? Unfortunately, FIRE hasn’t provided any sunlight with these rankings; rather, they have created meaningless numbers that shouldn’t be taken seriously by anybody who cares about free speech. And FIRE certainly hasn’t provided any disinfectant; rather, their misleading rankings have become a tool for opponents of free speech who are attacking the independence of higher education.
FIRE notes what they consider to be a “number of mistakes” in my critique of their rankings. None of these changes the substance of my critique, and correcting them, where correction is warranted, wouldn’t make their rankings valid.
In my original article, I accepted FIRE’s survey results as valid, but FIRE notes that I said they weighted their survey data to be nationally representative when, as they point out in their reply, the weighting attempts to be demographically representative of each individual campus. This is a useful clarification, because their posted methodology doesn’t specify the target population of the weighting, and weighting to an individual campus is preferable. Nevertheless, a reader should remain very skeptical about the ability of a survey built on an opt-in panel, small samples (as few as 38 respondents per college), and an unproven weighting scheme to obtain an unbiased sample at any particular college. There is no way to know whether the survey, even after weighting, produces samples that are representative of opinion on the issues it asks about. Frankly, I doubt the surveys are accurate. But, given that the survey is the only part of their approach that isn’t just arbitrary adjustment of the data, it is the most defensible basis for ranking colleges on free speech, and FIRE probably should have stopped there rather than layering a series of arbitrary adjustments onto the scores.
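A back-of-the-envelope calculation, a sketch under my own illustrative assumptions rather than anything taken from FIRE’s methodology, shows just how little precision a sample of 38 provides:

```python
import math

# Rough 95% margin of error for a simple random sample of n = 38,
# assuming a proportion near 0.5 (the worst case) and ignoring any
# design effect from opt-in recruitment and weighting, which would
# only make the uncertainty larger.
n = 38
p = 0.5
moe = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"95% margin of error: +/- {moe:.1%}")  # roughly +/- 16 percentage points
```

Weighting an opt-in panel typically widens, rather than narrows, that uncertainty.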
In their response, FIRE says they do not understand why I used standard deviations to discuss the penalties and bonuses they apply to the raw survey scores. Describing outcomes in terms of standard deviations is a common technique for interpreting effect sizes when the numbers themselves have no natural meaning. Because FIRE creates these penalties and bonuses out of thin air, standard deviations are simply a way to understand the size of the adjustments FIRE applies. Had I not discussed them in terms of standard deviations, FIRE’s adjustments would remain just as baseless.
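To make the convention concrete, here is a minimal sketch, using made-up numbers rather than FIRE’s actual scores, of what it means to express a penalty in standard-deviation units:

```python
import statistics

# Hypothetical raw survey scores for a handful of colleges (illustrative only).
raw_scores = [62.1, 58.4, 65.0, 55.2, 60.7, 57.9]
sd = statistics.stdev(raw_scores)

# Expressing an arbitrary penalty in standard-deviation units shows how large
# it is relative to the spread of the underlying data.
penalty = 5.0
print(f"A {penalty}-point penalty is {penalty / sd:.2f} standard deviations of the raw scores.")
```

The standardization doesn’t make the penalty any more or less defensible; it simply puts its magnitude on a scale readers can interpret.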
FIRE also notes that I incorrectly reported the penalties they apply for their “spotlight ratings”: they penalize 5 points for a yellow rating (rather than 10) and 10 points for a red rating (rather than 20). This correction is especially revealing because it makes no substantive difference to my critique. The problem with FIRE’s method remains regardless of the numeric value of these penalties: FIRE could have removed 5 points, 10 points, 20 points, or points based on the average daily rainfall in Bogotá in August. All of these numbers would be similarly arbitrary and equally indefensible adjustments to the data.
Now, let’s consider the substance.
FIRE admits the obvious point that their use of media reports to tally targeted speakers and scholars “could result in more frequent detection of controversies at high profile colleges,” thereby biasing their analysis in a way that penalizes high-profile colleges such as Harvard. But they then defend this practice by claiming that “statistically speaking, the more controversies with documented outcomes we have, the more reliable our assessment of the speech climate on a given campus becomes.” Statistically speaking, this just isn’t true. For it to be true, one must assume FIRE is not collecting a biased sample of controversies. Yet, as I explained in my article, it is likely that the media fails to report instances of speakers being supported on campus. You can count all the media reports you want, but if you are more likely to count instances where speakers are canceled, rather than supported, your assessment of the speech climate will be biased. Consider this simple illustration: If you wanted to determine whether a basketball player was a good free-throw shooter, you could watch games and record what portion of the player’s free throws were made. The more games you watched, the better idea you would have about their free-throw shooting ability. But if you recorded only the free throws that the player missed, it wouldn’t matter how many games you watched: your impression of the player’s free-throw ability would be wrong. If FIRE disproportionately records colleges canceling speakers, their impression of free speech will be wrong.
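The free-throw logic can be made concrete with a small simulation, a sketch under assumed numbers rather than FIRE’s data, showing that collecting more observations does nothing to fix a selected sample:

```python
import random

random.seed(42)
true_cancel_rate = 0.01   # assume 1% of controversial speakers are actually canceled
n_events = 100_000        # watch as many "games" as you like

events = [random.random() < true_cancel_rate for _ in range(n_events)]

# Unbiased tally: count cancellations out of all speaking events.
unbiased_estimate = sum(events) / len(events)

# Biased tally: in this illustration the media reports only the cancellations,
# so the observed record contains nothing but failures.
reported = [e for e in events if e]
biased_estimate = sum(reported) / len(reported)

print(f"Unbiased estimate of cancel rate: {unbiased_estimate:.3f}")   # ~0.010
print(f"Estimate from reported events only: {biased_estimate:.3f}")  # 1.000, no matter the sample size
```

However many events are observed, the selected sample points to the wrong conclusion.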
FIRE repeats this same mistake when they defend their count of speakers who have been sanctioned, saying that there “is no denying that Harvard has a consistent pattern of responding poorly” to events where free speech was threatened. Of course, you can’t claim there is a “consistent pattern” at all: if you see only half the data, you cannot declare anything a consistent pattern.
Moreover, as I explained in my original article, when making comparisons across colleges we obviously have to consider the proportion of total speech sanctioned at a college, not just a raw count of sanctions. Considering a proportion, the obviously right approach, leads to very different conclusions about campus speech climate: in the years covered by FIRE’s database, Harvard has hosted thousands of speakers, many of whom were potentially controversial. But nobody said anything about them, so they don’t end up in FIRE’s database.
Think about this another way: in a given year, if one college has 1,000 speakers (this is a realistic number at Harvard) and 10 of those speakers are canceled, that would mean that the college cancels 1 percent of speakers. If a second college has 10 speakers and 6 of them are canceled, that college cancels 60 percent of speakers. But FIRE would tell you that the first college has a worse free speech culture than the second college. I applaud FIRE’s efforts to shine a light on these events because one speaker canceled is too many, but creating a ranking based on these simple tallies is, essentially, meaningless.
Finally, I want to take direct issue with one last claim in FIRE’s response. I pointed out that FIRE’s system of penalties hurts high-profile universities because controversial speakers are most likely to come to these universities and that, arguably, this is a sign of the promotion of open inquiry, not its suppression, as FIRE claims. I demonstrated this by reporting that the correlation between the penalties dished out by FIRE and the endowment size of a college is strongly negative, so that high-profile colleges are penalized more than low-profile colleges. In their response, FIRE claims that this relationship is “not clear,” and to make this point, they list colleges that they believe do not fit the pattern of more high-profile universities being included in their database. But a relationship isn’t established by cherry-picking anecdotal examples; it is established by systematically examining the data. The correlation between endowment size and the arbitrary bonuses and penalties dished out by FIRE is strongly negative: higher-profile colleges receive more penalties from FIRE (the Pearson correlation is -.51, yielding a t-value of -6.06). Like it or not, that’s what the data shows.
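For readers who want to check the arithmetic, the t-value follows from the correlation in the standard way; the sample size in the sketch below is an illustrative assumption, chosen only because it is consistent with the figures reported above:

```python
import math

# Standard relationship between a Pearson correlation r and its t-statistic
# when computed from n paired observations:
#   t = r * sqrt(n - 2) / sqrt(1 - r**2)
def t_from_r(r: float, n: int) -> float:
    return r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)

r = -0.51
# Working backwards, a t-value of about -6.06 at r = -0.51 implies roughly
# 105-110 colleges with both penalty and endowment data (an inference from
# the reported numbers, not a figure taken from FIRE).
print(round(t_from_r(r, 107), 2))  # approximately -6.08
```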
One can take a very cynical view of FIRE’s approach to these meaningless rankings. It might be that FIRE was well-meaning and just clumsy in the changes they made to the scores. But when FIRE self-publishes articles with headlines like “Harvard gets worst score ever in FIRE’s College Free Speech Rankings,” you have to wonder whether they see the publicity advantage of putting Harvard at the bottom of the rankings. Given that the rankings are based on adjustments to the data that are nearly impossible to defend, it is natural to question their motives. If a person really wanted to be cynical, they could ask whether FIRE dropped colleges like Liberty and Hillsdale from their rankings precisely because the fact that these speech-suppressing colleges, on average, outperform other colleges on FIRE’s survey exposes the absurdity of the entire enterprise.
But FIRE claims to have good intentions in creating these rankings, and I believe in the fundamental importance of protecting free speech, so I welcome FIRE’s efforts to shine light on the challenges of free speech on college campuses. And, as I stated in my original article, I am certainly not trying to give Harvard a pass on speech. Rather, because free speech is so central to the mission of universities, and indeed to a free society more generally, let’s make sure we get this right.
Ryan D. Enos
Director of the Center for American Political Studies
Professor, Department of Government
Harvard University
Cambridge