Censorship is widespread in academe and has grown worse in recent decades. Indeed, the expressive environment in higher ed seems less free than in society writ large, even though most other workplaces offer essentially no formal protections for freedom of expression, conscience, or research.
Almost everyone is opposed to censorship in the abstract, but when confronted with ideas they personally find offensive — arguments about the relationship between genes and inequality, for instance — people support censorship more often than their generic views would suggest.
Many presume censorship is mostly driven by right-wing agitators, such as Fox News, or by lefty “kids these days” who don’t properly understand or value academic freedom. However, as we (and our co-authors) demonstrate in a new study published in the Proceedings of the National Academy of Sciences, censorship is more typically driven by scientists themselves.
Consider the results of a national survey of faculty members at four-year colleges conducted last year by the Foundation for Individual Rights and Expression: Sixteen percent of the faculty had been disciplined or threatened with discipline for their teaching, research, talks, or nonacademic publications. Depending on the issue being discussed, between 6 percent and 36 percent of faculty members supported soft punishment (condemnation, investigations) for peers who make controversial claims, with higher support among younger, more left-leaning, and female academics. Thirty-four percent of professors had been pressured by peers to avoid controversial research; 25 percent reported being “very” or “extremely” likely to self-censor in academic publications; and 91 percent reported being at least somewhat likely to self-censor in academic publications, meetings, presentations, or on social media.
The motives behind censorship are commonly misunderstood. Scientists sometimes censor one another because of power struggles or other unsavory reasons. Most of the time, however, benign motives are at play.
Many academics self-censor to protect themselves — not just because they’re concerned about preserving their jobs, but also out of a desire to be liked, accepted, and included within their disciplines and institutions, or because they don’t wish to create problems for their advisees. Other times, scholars attempt to suppress findings because they view them as incorrect, misleading, or potentially dangerous. Sometimes scientists try to quash public dissent on contentious issues for fear that it will undermine public trust or scientific authority, as happened at various points during the Covid-19 pandemic.
Moral motives have long influenced scientific decision-making. What’s new is that journals are now explicitly endorsing moral concerns as legitimate reasons to suppress science. Following the publication (and retraction) of an article reporting that the mentees of male mentors, on average, had more scholarly success than did the mentees of female mentors, Nature Communications released an editorial promising increased attention to potential harms. A subsequent Nature editorial stated that authors, reviewers, and editors must consider the potentially harmful implications of research, and a Nature Human Behaviour editorial declared the publication might reject or retract articles that have the potential to undermine the dignity of particular groups of people. In effect, editors are granting themselves vast leeway to censor high-quality research that offends their own moral sensibilities, or those of their most sensitive readers.
It is reasonable to consider potential harms before disseminating scientific findings that pose a clear and present danger, such as scholarship that increases risks of nuclear war, pandemics, or other existential catastrophes.
However, the suppression of science and ideas, even with the best of intentions, often has significant adverse consequences. Censorship can durably limit our understanding of important phenomena and slow scientific progress, causing people to struggle, suffer, and die needlessly. It can lead to misinformation cascades or render entire fields of research null (because the people who would declare that the emperor has no clothes are locked out of the conversation), wasting enormous amounts of resources and effort that could be better directed elsewhere. And although scientists sometimes quell dissent to preserve their perceived authority, if that suppression becomes public knowledge, it tends to drastically undermine public trust in science.
To balance the risks of disseminating potentially dangerous information against the costs of censorship more effectively, we need to measure purported harms empirically and openly, rather than continue to rely largely on the often arbitrary intuitions and authority of small, unrepresentative journal editorial boards.
We should also improve accountability in peer review by making the process more visible. Reviews and editorial-decision letters could be posted in online repositories available to all scholars (with reviewer and editor names redacted if appropriate). Professional societies could likewise make available the submissions, reviews, and acceptance or rejection decisions for their conferences (again with identities redacted where appropriate). This would allow scholars to discern the double standards in decision-making that often mask censorship of unpopular views. With this increased openness, editors and reviewers would very likely become more consistent and careful in their decisions. Studies show that people behave in less biased ways when others can easily observe disparities or when they might have to explain their decisions.
Scholars could increase accountability for peer reviewers and editors by conducting more audits of journals in their fields. Scholars have long submitted nearly identical papers to journals, changing only things that should not matter (like the author’s name or institutional affiliation), or reversing the direction of the findings (all else the same), to test whether acceptance decisions and reviewer comments vary systematically based on who the authors are or what they find. To date, studies like these have provided important insights into how censorship works and against whom it is deployed. If scholars were more consistent and systematic in auditing journals, however, it would become easier to compare journals against one another and to highlight publications that are especially biased, or especially objective, in their decision-making.
Scholars could also conduct large-scale surveys of scientists who have submitted to various journals to evaluate perceived procedural fairness. Some journals (e.g., Proceedings of the National Academy of Sciences) already survey submitters on relevant questions, but to our knowledge none currently shares this information publicly. If journals were pushed to collect and publish these data more consistently, scholars could better understand their colleagues’ perceptions of bias at various journals, and scientists could more effectively target their work to publications where it would get a fair hearing.
Collectively, measures like these would create new forms of competition among scientific journals. New metrics could be created based on these data to tie the reputations of journals, editors, and peer reviewers to the openness and fairness of their publication practices. Scholars who are doing important and groundbreaking work would most likely seek out journals that are credibly objective, while biased journals would probably see their reputations slide.
Just as institutional and cultural factors have made censorship and self-censorship more pronounced, measures such as these would make censorship easier to detect and more reputationally costly.