You may have heard of the study published in 2020 concluding that Black newborns have higher survival rates when Black doctors attend to them. It got a huge amount of coverage in the popular press. It was even cited by Supreme Court Justice Ketanji Brown Jackson in her dissent last year from the court’s ruling against racial preferences in college admissions. The research, Jackson claimed, shows the benefits of diversity. “It saves lives,” she wrote.
The same journal just published a re-analysis of the data. It turns out that the “effect is substantially weakened, and often becomes statistically insignificant,” once you take into account that Black doctors are less likely to see the higher-risk population of newborns with low birth weight.
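To make the case-mix point concrete, here is a toy illustration, with numbers I invented purely for exposition (they are not from the study or the re-analysis): if two groups of doctors have identical mortality rates within each birth-weight stratum, a difference in how many high-risk, low-birth-weight newborns each group sees will still produce a gap in their overall rates.

```python
# Toy illustration with invented numbers (not the study's data):
# identical mortality WITHIN each risk stratum, different case mix.

# Hypothetical mortality rate per stratum, same for both doctor groups
mortality = {"low_birth_weight": 0.10, "normal_birth_weight": 0.01}

# Hypothetical share of low-birth-weight newborns each group attends
lbw_share = {"doctors_A": 0.05, "doctors_B": 0.20}

for group, share in lbw_share.items():
    overall = (share * mortality["low_birth_weight"]
               + (1 - share) * mortality["normal_birth_weight"])
    print(f"{group}: overall mortality {overall:.3%}")

# doctors_A: 1.450%, doctors_B: 2.800% -- an apparent "effect" that
# disappears once you compare within birth-weight strata.
```

The aggregate gap here is entirely an artifact of who sees the riskier patients, which is the kind of confound the re-analysis adjusted for.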
I wasn’t surprised when I saw the re-analysis because I didn’t believe the original finding. It’s not that I thought: “Oh, that has to be bullshit!” It was more: “Meh — I reserve judgment.”
I have a similar attitude about other politically relevant findings I see, including reports that:
- Minority children do worse in school because of the implicit biases of their teachers.
- Microaggressions have negative effects.
- Trigger warnings have positive effects.
- Conservatives are stupider, more biased, more fearful, or in some other way psychologically inferior to liberals.
Meh. Meh. Meh. Meh. Maybe the findings are true, but I wouldn’t bet on them.
To see the problem, compare these findings to those without political import. Suppose I read a paper in Nature Human Behaviour claiming that there is a relationship between the presence of specific bacterial populations in someone’s gut and their performance on memory and attention tasks. Now, one should always be skeptical about any single scientific finding. Due to the nature of statistical inference, experiments sometimes yield positive findings when there are no real effects. There is outright fraud (rare, but it happens), poor experiment design, selective reporting of results, misuse of statistics, and so on — all sorts of ways that scientists, eager to get published, intentionally or unintentionally overinflate their results.
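The false-positive worry, at least, is easy to see for yourself. Here is a minimal simulation (my sketch in Python, not anything from the essay or the journal): run thousands of experiments in which there is no real effect at all, and count how often an ordinary t-test still comes back “significant” at the conventional p < 0.05 threshold.

```python
# Minimal sketch: how often does a standard t-test declare
# "significance" when the null hypothesis is true by construction?
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 10_000  # simulated studies, each with no true effect
n_per_group = 30        # participants per group (arbitrary choice)

false_positives = 0
for _ in range(n_experiments):
    # Both groups are drawn from the SAME distribution: any "effect"
    # a test detects here is pure noise.
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(0.0, 1.0, n_per_group)
    _, p = stats.ttest_ind(control, treatment)
    if p < 0.05:
        false_positives += 1

print(f"'Significant' findings with no real effect: "
      f"{false_positives / n_experiments:.1%}")  # roughly 5%
```

Roughly 1 in 20 of these no-effect experiments clears the bar, which is exactly what the threshold guarantees; layer selective reporting and publication pressure on top, and the published record tilts further.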
Still, Nature Human Behaviour is a good journal. To get published there, you must go through a robust review process with expert scholars who critically examine your methods and findings. So, yes, this publication would raise my confidence that bacteria in the gut do influence memory and attention.
Now, imagine I read a paper in the same journal claiming that racially diverse teams do better at solving scientific problems. All the same concerns about fraud, poor statistics, and so on apply. But now there’s something else. This sort of finding fits the ideology of most people who review papers for Nature Human Behaviour. (A 2012 Perspectives on Psychological Science study found that only 6 percent of social and personality psychologists identified as conservative.) It’s the sort of finding that improves the journal’s prestige. It’s a result that would end up reported in The New York Times and The Guardian; it would be cited in briefs to the Supreme Court that support progressive policies.
These are all additional reasons — above and beyond the paper’s scientific quality and the possibility that the finding is true — that make it more likely to be published. So, while you shouldn’t dismiss the finding entirely, you should take it less seriously.
Conversely, if Nature Human Behaviour published a paper reporting that racially diverse teams do worse at solving scientific problems, you should think: Wow, this is a conclusion that the reviewers and editors likely wouldn’t like, so the evidence for it must be strong, the researchers must have carefully ruled out other explanations, and so on. One should take it more seriously.
It’s like what someone once said about Ginger Rogers and Fred Astaire: They’re both going through all the same moves, but Ginger Rogers is doing them backward and in high heels. A published finding that clashes with the political prejudices of reviewers and editors is a Ginger Rogers finding. It had to be twice as good.
It shouldn’t be controversial that the source matters when judging whether something is true. In a recent presidential debate, Donald Trump said crime went way up in the United States under Joe Biden and Vice President Kamala Harris. Even if you have no idea about the facts, you should be skeptical, because Trump has an incentive to say it: his political interest. Suppose that Harris said the same thing: “Although our administration has much to be proud of, I admit that there has been a sharp rise in crime.” Now, you should be less skeptical, because saying that goes against her interests.
As another example, I’m a big fan of the Foundation for Individual Rights and Expression, or FIRE. I respect their work in defending the free-speech rights of academics. But suppose they started their own journal and published articles suggesting that censorship of professors’ views has all sorts of bad effects. You should think: Meh. That’s just what a free-speech organization would want to publish.
People get this. In an unpublished study by the behavioral scientist Cory J. Clark and colleagues, participants were asked how they perceived the political slant of different organizations and professions, such as journalists, scientists, the Supreme Court, and psychologists. The organizations and professions perceived as most slanted were judged least trustworthy and least worthy of deference. This was true even when participants were sympathetic toward the slant:
> Even left-leaning participants were less trusting and less willing to support and defer to left-leaning institutions that appeared more politicized, and even right-leaning participants were less trusting and less willing to support and defer to right-leaning institutions that appeared more politicized.
I don’t want to overstate the extent of bias in journals. The vast majority of published findings have nothing to do with politics, and so the considerations that we’ve been talking about don’t apply. Also, I can think of many journal articles with findings that oppose a progressive worldview — Ginger Rogers findings do get out there. After all, the same journal (Proceedings of the National Academy of Sciences) that published the original newborn article also published the contrary re-analysis of the data.
Then again, sometimes an unpopular finding makes it through the system, and the system strikes back. A few years ago, a Nature Communications paper found that female scientists benefit more career-wise from collaborating with male (as opposed to female) mentors. This was not a message people wanted to hear; there was outrage on social media, and the authors were pressured into retracting the paper.
This is just an anecdote, though, and it would be fair to dismiss it as an exceptional case. A better reason to believe there’s a political bias in what gets published is that scientists, reviewers, and journals explicitly say there is.
The aforementioned 2012 Perspectives on Psychological Science study surveyed about 800 social psychologists and found that conservatives (again, about 6 percent of the sample) fear the negative consequences of revealing their political beliefs to their colleagues. They are right to do so. The same study found that many of their non-conservative colleagues, particularly the more liberal ones, tended to agree that if they encountered a grant or paper with “a politically conservative perspective,” it would negatively influence their decision to award the grant or accept the paper for publication.
A 2024 study found that most professors — including those on the left, though it was more common on the right — self-censor their publications about controversial matters in psychology. This is because they are concerned about negative social and career consequences. While they don’t have much to worry about from most of their colleagues (most academics “viewed harm concerns as illegitimate reasons to retract papers or fire scholars” and “had great contempt for peers who petition to retract papers on moral grounds”), they aren’t being completely paranoid. There is a minority of professors who believe that a proper response to certain views can include “ostracism, public labeling with pejorative terms, talk disinvitations, refusing to publish work regardless of its merits, not hiring or promoting even if typical standards are met, terminations, social-media shaming, and removal from leadership positions.”
Finally, certain major journals explicitly state that policy implications partially determine what gets published. This is how editors and reviewers are told to do their jobs. Consider these guidelines from Nature Communications, developed in the wake of the female-mentor article:
> As part of our investigation, we also reviewed our editorial practices and policies and, in the past few weeks, have developed additional internal guidelines and updated information for authors on how we approach this type of paper. As part of these guidelines, we recognize that it is essential to ensure that such studies are considered from multiple perspectives including from groups concerned by the findings. We believe that this will help us ensure that the review process takes into account the dimension of potential harm, and that claims are moderated by a consideration of limitations when conclusions have potential policy implications.
And here is a more recent editorial from Nature Human Behaviour outlining their new procedures for reviewers and editors. As they put it, “Science has for too long been complicit in perpetuating structural inequalities and discrimination in society. With this guidance, we take a step towards countering this.”
This is a controversial issue, and some academics will side with the journals. You might believe, say, that a paper concluding that female scholars do better with male mentors should be harder to publish because these findings, even if they are true, will cause “potential harm” or will make the journal “complicit in perpetuating structural inequalities and discrimination in society.”
My point here isn’t to argue the issue. I’m saying that if this is your view, congratulations — your side has won. Journals have the ideological slant you want them to have. This may have all sorts of benefits, but one of the costs is that you can’t always trust the more progressive-friendly findings these journals report.
This is particularly a problem for progressives, for two reasons. First, people tend to believe findings that fit their worldview. Just as parents are credulous when someone tells them how smart their child is, progressives are inclined to take progressive-friendly findings too seriously.
Second, people tend to notice when systems, organizations, and people are biased against them and ignore biases in their favor. Many progressives don’t know about the editorial positions of journals like Nature Human Behaviour, so they assume these journals just tell it like it is. Non-progressives are more aware of the partisan bias of the individuals who publish in and review for these journals and tend to be suspicious (perhaps too suspicious) of politically relevant findings from a community they believe doesn’t like them or their politics.
How should progressives solve this problem?
One solution is to reform journals so there isn’t political bias in what gets accepted and rejected. This would help everyone — on the left and the right — have more trust in the journals. But I’m not sure it’s possible. Many of the people who run the journals disagree with this solution. Also, regardless of what the journals decide, much of the decision-making power is in the hands of individual editors and reviewers, and, as we see in the excerpts above, some think that articles going against their political position shouldn’t be published.
A more humble solution is to become more educated about how personal and institutional biases shape the credibility of certain claims. Just as any intelligent observer should be skeptical when politicians say terrible things about their enemies and when parents say wonderful things about their children, consumers of scientific information should be skeptical when journals produce findings that are in lockstep with the political views of their editors, reviewers, and readers. The problem with this solution is that it assumes we all share the goal of getting things right, and that isn’t always the case.
Some people do care about how scientific findings bear on issues of political and social relevance. It matters to them whether implicit stereotypes lead to discrimination against members of certain groups, whether diverse organizations are more or less efficient, whether there is racial bias in police shootings, whether trigger warnings help or hurt, and so on. Some of these people, including scientists in the field, are interested in such findings for what they tell us about the mind. Others see them as relevant to dealing with certain real-world problems. Someone concerned about racism in the workplace, for instance, might be genuinely interested in whether diversity training works, and so they turn to the many studies that ask exactly this question.
But many people have a different attitude. They are interested in social-science research published in journals only insofar as it supports their positions, persuades others, or can be used to dunk on their foes. This is understandable. In most of life, it’s important to get the facts right, but when it comes to the political/moral domain, one might have other priorities. Here’s a story I tell in my book Psych: The Story of the Human Mind.
I was at a dinner once when Donald Trump was president, and we were all complaining bitterly about him. Someone mentioned the latest ridiculous thing he did, and we were all laughing, and then a young man, no fan of Trump, politely pointed out that this event didn’t really happen the way we thought it did. It was a misreporting by a partisan source; Trump was blameless. People pushed back, but the man knew his stuff, and gradually most of the room became convinced. There was an awkward silence, and then someone said, “Well, it’s just the sort of thing that Trump would do,” and we all nodded, and the conversation moved on.
Was the young man’s contribution a rational one? It depends on his goals. What was he most hoping to accomplish — to know and speak the truth, or to be liked? If your goal is truth, then … being biased to defend the positions of your group because of loyalty and affiliation is plainly irrational. Truth-seeking individuals should ignore political affiliation when learning about the world. When forming opinions on gun control, evolution, vaccination rates, and so on, they should seek out the most accurate sources possible. …
But we are social animals. While one of the goals that our brains have evolved to strive for is truth — to see things as they are, to remember them as they really happened, to make the most reasonable inferences based on the limited information we have — it’s not the only one. We also want to be liked and accepted, and one way to do this is by sharing others’ prejudices and animosities.
If your goal is getting things right, then the advice to be skeptical of findings that flatter your views and to pay special attention to Ginger Rogers results is just the thing. But what if your goal is to feel good about yourself? To persuade people to join your cause? To mock your ideological opponents? To make yourself popular within a group of like-minded individuals? Being skeptical about findings that support your view is great if you want to pursue the truth, but it is an awful strategy if you want to satisfy these other goals.
For whatever it’s worth, I believe that pursuing these other goals at the expense of truth, though understandable for individuals, makes the world worse. We should be primarily interested in accuracy, if only because we’re more likely to solve the many problems that plague us if we get our facts right. And so I wish the incentives for truth-seeking were higher. I hope for a culture where we cheer on those who work hard to get things right — like the young man in my story — even if the truth clashes with the narratives we’re most fond of.
But maybe this is a naive hope.
This essay is adapted from the author’s Substack, Small Potatoes.