I’m starting to think that some of the strange behavior that has been gripping college students in the United States has begun seeping north into Canada, where I teach. For the first time the other day, I came across the suggestion — made by a graduate student — that a philosophical research talk should be a “safe space.” The concern was not that department members were abusive, merely that we were sometimes insufficiently “supportive” of the speaker. Apparently we’re supposed to find nicer ways of telling people how wrong they are.
The dominant impulse among the professoriate, when confronted with demands of this nature, has been to reject them through appeal to that old stalwart, freedom of speech. It is, after all, not the university’s business to go around telling its members what they can and cannot say, or to regulate etiquette. And yet while all of this is true, the defense remains deficient in several respects.
As people who are familiar with how philosophy works will know, it is one of several disciplines that have an adversarial culture. This manifests itself most clearly in the Q&A after a research talk. Basically, after people present their views, the audience tries to tear them apart. Every question is a variation on “Here’s why I think you’re wrong. …” The environment is not supportive; in fact, it is the opposite of supportive. Furthermore, because this is the disciplinary culture, philosophers tend not to preface their comments with ingratiating verbiage like, “First let me thank you for the rich and thought-provoking discussion.” Philosophers go straight to the “Here’s why I think you’re wrong” part.
When being high-minded, we call this the “Socratic elenchus.” As the name suggests, it has been around for a very long while. And philosophy is not alone in this — economics and law also have highly adversarial cultures. Philosophy isn’t even the most antagonistic. In philosophy, for instance, the disciplinary culture does not tolerate interruption — speakers are given time to make their case, after which we tell them why they’re wrong. Economics, along with many business schools, has an “interrupting” culture, in which speakers are given about two minutes to say something, after which they get interrupted and told why they’re wrong, why their methods are flawed, or why their research question is uninteresting.
So what’s with all the unsupportive behavior? And why, despite the protestations of some students, is such a culture worth defending?
First, it is important to distinguish between “being adversarial” and “being a jerk.” Consider the contrast between philosophy and surgery, a discipline that I happen to know well because my wife is an academic surgeon.
Surgeons are notorious jerks, a tendency that is clearly encouraged by the disciplinary culture. They are also extremely confrontational, sometimes (to me) shockingly so. They lose their temper, swear, and yell at each other a great deal.
At the same time, the disciplinary culture, with respect to research talks, is weirdly (to me) nonadversarial. When a surgeon gives a talk, the questions will almost always be softballs (e.g., “Can you elaborate a bit more on this?”). Then, as soon as people are out in the hallway, everyone will trash the talk (e.g., “Oh my God, what an awful study that was,” or, “Can you believe they’re doing that to patients?”). And yet they never say it to the speaker. I don’t know how many times I’ve heard surgeons complaining about awful research and terrible talks, and I’ll say, “Did you tell them that?” and the response is always, “Oh no, of course not.”
This example is illuminating, because it shows that the adversarialism of exchanges in philosophy or economics is not merely a consequence of the fact that so many philosophers or economists are jerks. As the example of surgery shows, it is perfectly possible to have a discipline full of jerks who nevertheless sustain a nonadversarial discourse around academic research. (I should mention here that many of the complaints about adversarialism come from people who think that the underrepresentation of women in certain disciplines is a consequence of those norms. I happen to disagree — law is also highly adversarial, but that doesn’t seem to deter many women — but that’s a subject for a different essay.)
When I ask academic surgeons why they never pose challenging questions at research talks, the answer is usually the same — they don’t think it matters because “it’ll never get published,” or “the referees will catch the problem.” In particular, when academic surgeons make methodological errors, or do their stats all wrong (which they often do), everyone knows that it will get picked up by referees, and so no one feels any obligation to make things uncomfortable for the speaker.
In other words, the practice of medicine, as well as scientific work more generally, is subject to much stricter methodological constraints than many other disciplines, particularly those in the humanities. As a result, audiences at medical talks do not consider it their job to impose quality control on academic research.
One can see the importance of this distinction by considering how different disciplines seek to control the flaws in our thinking, of which there are many. We routinely assess the validity of arguments merely by looking at whether we believe the conclusion (belief bias). We often treat archetypal instances of a type as being more probable, even when they are less so (representativeness bias). And, worst of all, when testing a theory, we tend to look only for evidence that supports the hypothesis, ignoring everything else (confirmation bias).
This last flaw, confirmation bias, is perhaps the most pernicious. It has led to numerous absurdities, ranging from moxibustion and leeching to the belief in chemtrails and invisible deities. It stems from the fact that we human beings are terrible at “thinking the negative.” We consistently ignore the most powerful way of testing a theory, which is to figure out what evidence would disconfirm the hypothesis, and then actively seek it out, in order to establish that it is not there.
The power and ubiquity of this bias are illustrated by Peter Wason’s famous “2, 4, 6” test, which everyone fails. Asked to guess the rule that could generate this sequence of three numbers, people immediately formulate a narrow hypothesis, such as “three even numbers in sequence,” and then look for positive support for this claim. They never consider formulating a hypothesis that they think will be wrong, then looking for evidence of its falsity. As a result, almost no one ever figures out the correct rule, which is actually “any three numbers in ascending order” (such as 23, 67, 291).
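To make the structure of the task concrete, here is a minimal sketch in Python; the function names and the particular narrow hypothesis are my own illustrative choices, not part of Wason’s protocol. It shows why purely confirmatory testing never exposes the error: every triple the narrow hypothesis generates also satisfies the true rule, so only a deliberately disconfirming test reveals that the guess is too narrow.

```python
# A rough sketch of Wason's "2, 4, 6" task (illustrative names and values).

def true_rule(a, b, c):
    # The rule participants are asked to discover:
    # any three numbers in strictly ascending order.
    return a < b < c

def narrow_hypothesis(a, b, c):
    # A typical first guess: even numbers increasing by two.
    return a % 2 == 0 and b - a == 2 and c - b == 2

# Confirmation-style testing: propose triples that the narrow hypothesis
# predicts will fit the rule. Every one of them passes, so the (wrong)
# hypothesis is never challenged.
confirming = [(2, 4, 6), (10, 12, 14), (100, 102, 104)]
print(all(true_rule(*t) for t in confirming))      # True

# Disconfirmation-style testing: propose triples that the narrow hypothesis
# says should NOT fit. They fit the true rule anyway, which shows the
# hypothesis is too narrow -- the step almost no one takes.
disconfirming = [(1, 3, 9), (23, 67, 291)]
print(all(true_rule(*t) for t in disconfirming))   # True -> hypothesis falsified
```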
Studies conducted by Keith Stanovich and others have shown that, contrary to the self-image of many academics, intelligence offers no protection against cognitive bias. This is why many conspiracy theorists are of above-average intelligence. Once an error has been pointed out, high-IQ individuals may be better at figuring out the correct answer, but when it comes to susceptibility to bias, everyone is about equally bad. (I once had the opportunity to try out the “2, 4, 6” test on the author of a widely used textbook on critical reasoning. He failed, just like the rest of us.)
There are various features of the scientific method — from the use of controls and the concern about replication, to the table of error types used in statistical hypothesis testing — that serve, each in its own way, to control confirmation bias. But in other disciplines, particularly those that are not in the business of collecting empirical evidence, it is less clear where the constraints come from. As a philosopher, having read the literature on confirmation bias, and having come to understand just how profound and pervasive it is, I find it difficult to avoid the suspicion that our field is just all confirmation bias, all the time.
Indeed, after I first failed the “2, 4, 6” test, I stopped to consider how much time and effort I have put into thinking about what it would take to prove my own views false, or cause me to change my mind, and then how much time I have actually spent investigating whether these conditions obtain. The answer was “very little.” I invest a tremendous amount of effort in the positive task of working out my view and marshalling the arguments to support it, but expend almost no effort in thinking about what might prove it wrong.
Part of the reason, however, that I don’t have to work very hard at thinking of ways that my view might be wrong is that I have colleagues who enjoy nothing better. When it comes to assessing my work, they have no trouble at all “thinking the negative.” So if there are obvious blind spots in my reasoning, I can be quite confident that they will be pointed out to me, in one of those unsupportive, adversarial Q&A sessions. The fact that the profession encourages, and even venerates, those who are able to ask the “killer question” functions very much as the scientific method does in the physical sciences.
Some disciplines are insufficiently adversarial. The vast quantities of gobbledygook produced under the heading of “capital-T Theory,” I would conjecture, are enabled by the fact that literature departments have an excessively nonadversarial culture. People spend so much time pretending that what the speaker just said made sense — in one of those “rich, thought-provoking” discussions — that they start to think they actually understood it. Being supportive, or adhering to conventional norms of politeness, diminishes the quality of the academic work being done.
Thus it is inadequate to think of the academy as merely an institutional space in which people are free to pursue their own lines of inquiry. When it functions well, the university houses multiple systems of distributed cognition, each with an internal division of labor that makes the whole smarter than any of its parts. In some cases, intellectual collaboration takes an explicitly cooperative form, as when a research team works together in a lab. In other cases, the collaboration takes a more agonistic form, as when people vie for the opportunity to draw attention to one another’s errors.
Just as it is possible to be a jerk without being adversarial, it is also possible to be adversarial without being a jerk. The commitment to adversarialism arises from our professional role; it should not be allowed to become personal. Furthermore, the traditional academic virtues of careful listening, charitable interpretation, and collegial interaction retain their overarching value. The fact remains, however, that not all criticism can be constructive. Some ideas and arguments are genuinely devoid of merit, and we do their purveyors no favors by pretending otherwise.
The great sage of Pittsburgh, Wilfrid Sellars, once defined philosophy as the study of “how things in the broadest possible sense of the term hang together in the broadest possible sense of the term.” We’re doing very abstract work, and we’re often trying to regiment previously unstructured domains of inquiry. So what makes us different from cranks, conspiracy theorists, or people who claim to see Jesus in their toast? Or what stops us from just making stuff up and believing it? I really think that the only thing keeping us tethered to the world is the disciplinary culture, and the fact that we have to defend ourselves, in a room full of people who have spent decades listening to arguments and identifying the flaws in other people’s reasoning.
Joseph Heath is a professor of philosophy at the University of Toronto. He is the author, most recently, of Enlightenment 2.0: Restoring Sanity to Our Politics, Our Economy, and Our Lives (Harper, 2014).