In recent years, I’ve noticed a lot of think pieces in which people talk about their academic fields hitting an impasse. A recent example is this Liam Kofi Bright post on analytic philosophy:
Analytic philosophy is a degenerating research programme. It’s been quite a long time since there was anything like a shared project … People are not confident it can solve its own problems, not confident that it can be modified so as to do better on that first score, and not confident its problems are worth solving in the first place … The architectonic programmes of latter-20th-century analytic philosophy seem to have failed without any clear ideas for replacing them coming forward … What I think is gone, and is not coming back, is any hope that from all this will emerge a well-validated and rational-consensus-generating theory of grand topics of interest.
Another example is Sabine Hossenfelder’s 2018 book Lost in Math: How Beauty Leads Physics Astray, which is about how high-energy particle physics has made itself irrelevant by pursuing theories that look nice instead of those that try to explain reality. The book followed on the heels of a number of pieces noting how the Large Hadron Collider has failed to find evidence of physics beyond the Standard Model (though we still have to wait and see if the Muon g-2 anomaly turns out to be something new). And of course there are many books and articles about the apparent dead end in string theory.
Meanwhile, Tyler Cowen thinks economics hasn’t done a lot to enlighten the world recently. I personally disagree with him, given all the great and very accessible empirical work that has come out in recent years, but Cowen might be thinking mainly about economic theory — back in 2012, he told me that he thought there had been no interesting new theoretical work since the ’90s.
There are seemingly endless warnings of disciplinary dead ends. In 2013, Keith Devlin, director of the Stanford Mathematics Outreach Project, fretted that “mathematics as we know it may die within a generation.” Some worry that the field of psychology will be rendered irrelevant by neuroscience. In general, if you pick an academic field X and Google “the end of X,” you’ll find an essay from the last decade wondering if it’s over — or declaring outright that it is.
Is this normal? Maybe academics just always tend to think their fields are in crisis until the next big discovery comes along. After all, some people thought physics was over in the late 19th century, just before relativity and quantum mechanics upended it. Maybe the recent hand-wringing is just more of the same?
Perhaps. But an opposing notion is the “end of science” hypothesis — the idea that most of the big ideas really have been found, and now we’re scraping the bottom of the barrel for the universe’s last few remaining secrets. This is the uncomfortable possibility raised by papers like the 2020 American Economic Review study “Are Ideas Getting Harder to Find?” by Stanford’s Nicholas Bloom and co-authors.
Between these two theories, I offer a third hypothesis: The way that we do academic research — or at least, the way we’ve done it since World War II — is not suited to the way discovery actually works.
Think about how a modern university works. Following the German research model, each tenured or tenure-track professor has two jobs — research and teaching. We don’t often think about why research and teaching are combined (that’s a bigger conversation), but they are.
And this means that the number of research-doing professors is determined not by the need for actual research, but by the demand for undergraduate education. As the American population has grown and more people have enrolled in college, universities have hired more professors to instruct them. Furthermore, this demand is department-specific — if more undergrads major in economics, the university will probably hire more economics professors.
This means that the U.S. academic system — and the systems of other countries that roughly copy what we do — is filled with professors who have been essentially hired to be teachers, but who prove their suitability for the job by doing research. They have to publish or perish, whether or not there’s anything interesting to publish. And journals, knowing that shutting researchers out of jobs would be detrimental to the academic system, will oblige young researchers by publishing enough papers so that tenure-track faculty around the country can get tenure.
The effect of this is a lot of pretty incremental research. Yes, throwing more researchers into the fray will increase the overall rate of scientific discovery, and will usually produce a strongly positive social return on investment (at least for now). But from the perspective of any individual researcher, it must seem like less and less is getting done, because the number of new discoveries per researcher and per paper shrinks.
This is just a fancy way of saying that research has diminishing returns. We should still be funding it, of course. But this dynamic makes it seem like fields are at an impasse because people are comparing the novelty of each new paper to the novelty of the average paper from a previous age, when there were far fewer undergrads and thus far less demand for publish-or-perish research.
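To see concretely how this dynamic works, here’s a minimal sketch in Python (my own toy illustration, not a model from the Bloom et al. paper): if total discoveries scale sublinearly with the number of researchers, then hiring more researchers keeps raising total output while output per researcher steadily falls.

```python
# Toy model of diminishing returns (an assumption for illustration,
# not the Bloom et al. specification): total discoveries D scale
# sublinearly with the number of researchers R, i.e. D = A * R**alpha
# with alpha < 1.

A, ALPHA = 10.0, 0.5  # hypothetical constants, chosen purely for illustration

for R in [100, 1_000, 10_000, 100_000]:
    D = A * R**ALPHA
    print(f"researchers={R:>7,}  total discoveries={D:7,.0f}  per researcher={D/R:.4f}")
```

In this toy run, a thousandfold increase in researchers produces only about a thirtyfold increase in total discoveries, so each individual researcher sees roughly a thirtieth as much progress per capita as before.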
On top of this, we have a system that’s set up to continue seeking answers in the same research areas that yielded answers before, while the biggest returns actually come from striking out in new directions and creating new fields.
Here, via Gabriel Camilo Lima, is a chart of the number of papers published with various AI-related keywords in their titles. It is an absolute Cambrian explosion. The field of AI has officially existed since the mid-20th century, but around the year 2000 it became so much larger as to be totally unrecognizable. This explosion of research effort and interest was accompanied by some startling breakthroughs and immediate engineering applications.
Think about what was necessary to produce this burst of discovery. Developments in electrical engineering and computer science laid the groundwork, Moore’s Law held true, and the internet made it possible to gather data sets big enough to really make AI shine.
But as Bloom and his “Are Ideas Getting Harder to Find?” co-authors show, Moore’s Law has been getting steadily harder to sustain, requiring ever greater infusions of money and labor to keep boosting computers’ abilities. In other words, the explosively fruitful field of AI has budded off of other fields that are themselves becoming increasingly arduous to advance.
A better metaphor, perhaps, is mining for ore. Each site of invention is like a vein of ore. The part of the vein nearest to the surface is the easiest to mine out, and it gets progressively harder as you dig deeper to extract more and more. But then, sometimes, you hit a new vein — and then it’s back to rapid, easy extraction in a new direction.
The problem is that the modern university incentive system clashes with this research “ore mining” model.
First, take hiring. New researchers are hired by older researchers. Older researchers will tend to hire new researchers who study topics they recognize and respect, asking familiar research questions and using familiar methodologies. This isn’t just my armchair theory of researcher psychology — it’s the finding of a 2019 paper by MIT’s Pierre Azoulay and co-authors called “Does Science Advance One Funeral at a Time?” Their abstract:
We examine how the premature death of eminent life scientists alters the vitality of their fields. While the flow of articles by collaborators into affected fields decreases after the death of a star scientist, the flow of articles by non-collaborators increases markedly. This surge in contributions from outsiders draws upon a different scientific corpus and is disproportionately likely to be highly cited. While outsiders appear reluctant to challenge leadership within a field when the star is alive, the loss of a luminary provides an opportunity for fields to evolve in new directions that advance the frontier of knowledge.
In other words, when a respected old researcher dies, people stop looking in the familiar directions that researcher favored and start looking in new ones.
Next, think about publication. Peer review requires that research that tries to strike out in a new direction conform to the expectations and desires of established researchers. In other words, to publish new kinds of stuff, you have to get the blessing of people who do old kinds of stuff. José Luis Ricón has tracked dozens of examples of “peer rejection” in science, where the old guard at least temporarily holds back novel research efforts.
Finally, consider grants. Granting agencies generally favor research directions with proven track records, which naturally biases them toward research from established fields and established methodologies. Of course, the reality might not be so bad; Alexey Guzey reports that what usually happens is that principal investigators know how to tell granting agencies what they want to hear, and then allocate the money to grad students and postdocs to do different, more-useful research. But this end-around strategy only confirms the overarching problem of money being tied up in the labs of older, established researchers.
In other words, the simplest explanation for why it feels like a bunch of old scientific fields are stagnating is that it’s the destiny of fields to stagnate after a while — and our academic system reinforces this stagnation.
Perhaps this isn’t such a bad thing for humanity’s overall rate of discovery and invention. Maybe new fields are so fertile that big grants and the blessing of established researchers aren’t necessary. Maybe a few maverick pioneers working independently can open up new veins of research without such support, and then everyone else can follow.
But if the superstar researchers who disproportionately drive science forward are caught up in the perverse academic incentive system, spending their genius solving hard problems in old fields instead of creating new fields, then our system might really be hindering progress.
And in either case, an awful lot of people are still toiling away on the last tiny scraps of ore from old, exhausted veins — making a slightly better DSGE model, determining the mass of the Higgs boson slightly more precisely, or whatever. That leads to a lot of disillusionment among people who thought their careers would involve discovering the secrets of the universe.
I’m not yet sure what the solution is, but for now I want to stress one piece of general advice: Encourage not just novel research, but research in novel directions. Ask questions no one has asked before. Use methodologies no one has tried before. Create new fields that don’t have an established hierarchy of prestigious journals and elder luminaries. Find new veins of ore to mine.
A version of this essay previously appeared in the author’s Substack newsletter.