Everybody agrees that scientific research is indispensable to the nation’s health, prosperity, and security. In the many discussions of the value of research, however, one rarely hears any mention of how much publication of the results is best. Indeed, for all the laments one hears in these hard times about research suffering from financing problems, we shouldn’t forget that the last few decades have seen astounding growth in the sheer output of research findings and conclusions. Just consider the raw increase in the number of journals. Using Ulrich’s Periodicals Directory, Michael Mabe shows that the number of “refereed academic/scholarly” publications grows at a rate of 3.26 percent per year (i.e., doubles about every 20 years). The main cause: the growth in the number of researchers.
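As a quick back-of-the-envelope check on that parenthetical (simple compound-growth arithmetic, not part of Mabe’s analysis), a steady 3.26 percent annual growth rate implies a doubling time of

\[
T_{\text{double}} = \frac{\ln 2}{\ln(1.0326)} \approx \frac{0.693}{0.0321} \approx 21.6 \ \text{years},
\]

which is indeed roughly the “about every 20 years” cited above.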
Many people regard this upsurge as a sign of health. They emphasize the remarkable discoveries and breakthroughs of scientific research over the years; they note that in the Times Higher Education’s ranking of research universities around the world, campuses in the United States fill six of the top 10 spots. More published output means more discovery, more knowledge, an ever-improving enterprise.
If only that were true.
While brilliant and progressive research continues apace here and there, the amount of redundant, inconsequential, and outright poor research has swelled in recent decades, filling countless pages in journals and monographs. Consider this tally from Science two decades ago: Only 45 percent of the articles published in the 4,500 top scientific journals were cited within the first five years after publication. In recent years, the figure seems to have dropped further. In a 2009 article in Online Information Review, Péter Jacsó found that 40.6 percent of the articles published in the top science and social-science journals (the figures do not include the humanities) were cited in the period 2002 to 2006.
As a result, instead of contributing to knowledge in various disciplines, the increasing number of low-cited publications only adds to the bulk of words and numbers to be reviewed. Even if read, many articles that are not cited by anyone would seem to contain little useful information. The avalanche of ignored research has a profoundly damaging effect on the enterprise as a whole. Not only does the uncited work itself require years of field, library, or laboratory research; it also requires colleagues to read it and provide feedback, as well as reviewers to evaluate it formally for publication. Then, once it is published, it joins the multitudes of other, related publications that researchers must read and evaluate for relevance to their own work. Reviewer time and energy requirements multiply by the year. The impact strikes at the heart of academe.
Among the primary effects:
Too much publication raises the refereeing load on leading practitioners—often beyond their capacity to cope. Recognized figures are besieged by journal and press editors who need authoritative judgments to take to their editorial boards. Foundations and government agencies need more and more people to serve on panels to review grant applications whose cumulative page counts keep rising. Departments need distinguished figures in a field to evaluate candidates for promotion whose research files have likewise swelled.
The productivity climate raises the demand on younger researchers. Once one graduate student in the sciences publishes three first-author papers before filing a dissertation, the bar rises for all the other graduate students.
The pace of publication accelerates, encouraging projects that don’t require extensive, time-consuming inquiry and evidence gathering. For example, instead of efficiently combining multiple results into one paper, professors often put all their students’ names on multiple papers, each of which contains part of the findings of just one of the students. One famous physicist has some 450 articles using such a strategy.
In addition, libraries struggle to pay the notoriously high subscription costs as more and more journals are launched, especially the many new “international” journals created to serve the rapidly increasing number of English-language articles produced by academics in China, India, and Eastern Europe. The financial strain has reached a critical point. From 1978 to 2001, libraries at the University of California at Los Angeles, for example, saw their subscription costs alone climb by 1,300 percent.
The amount of material one must read to conduct a reasonable review of a topic keeps growing. Younger scholars can’t afford to ignore any of it; they never know whether a reviewer or an interviewer wrote one of the pieces they skipped, and so they waste precious months reviewing a pool of articles that may lead nowhere.
Finally, the output of hard copy, not only print journals but also electronic articles that are downloaded and printed, requires enormous amounts of paper, energy, and space to produce, transport, handle, and store—an environmentally irresponsible practice.
Let us go on.
Experts asked to evaluate manuscripts, results, and promotion files give them less-careful scrutiny or pass the burden along to other, less-competent peers. We all know busy professors who ask Ph.D. students to do their reviewing for them. Questionable work finds its way more easily through the review process and enters into the domain of knowledge. Because of the accelerated pace, the impression spreads that anything more than a few years old is obsolete. Older literature isn’t properly appreciated, or is needlessly rehashed in a newer, publishable version. Aspiring researchers are turned into publish-or-perish entrepreneurs, often becoming more or less cynical about the higher ideals of the pursuit of knowledge. They fashion pathways to speedier publication, cutting corners on methodology and turning to politicking and fawning strategies for acceptance.
Such outcomes run squarely against the goals of scientific inquiry. The surest guarantee of integrity, peer review, falls under a debilitating crush of findings, for peer review can handle only so much material without breaking down. More isn’t better. At some point, quality gives way to quantity.
Academic publication has passed that point in most, if not all, disciplines—in some fields by a long shot. For example, Physica A publishes some 3,000 pages each year. Why? Senior physics professors have well-financed labs with five to 10 Ph.D.-student researchers. Since the latter increasingly need more publications to compete for academic jobs, the number of published pages keeps climbing. While publication rates are going up throughout academe, with unfortunate consequences, the productivity mandate hits especially hard in the sciences.
Only if the system of rewards is changed will the avalanche stop. We need policy makers and grant makers to focus not on funding the current volume of publication, but on finding ways to increase high-quality work and curtail the publication of low-quality work. If forward-looking university administrators initiated changes in hiring and promotion criteria and ordered their libraries to stop paying for low-cited journals, they would perform a national service. We need to get rid of administrators who reward faculty members for printed pages and downloads alone, deans and provosts “who can’t read but can count,” as the saying goes. Most of all, we need to understand that there is such a thing as overpublication, and that pushing thousands of researchers to issue mediocre, forgettable arguments and findings is a terrible misuse of human, as well as fiscal, capital.
Several fixes come to mind:
First, limit the number of papers a job or promotion candidate can submit to the best three, four, or five. That would encourage more comprehensive and focused publishing.
Second, make more use of citation and journal “impact factors,” from Thomson ISI. The scores measure the citation visibility of established journals and of the researchers who publish in them. By that index, Nature and Science score about 30. Most major disciplinary journals, though, score 1 to 2, the vast majority of journals score below 1, and some are hardly visible at all. If we added those scores to a researcher’s publication record, the publications on a CV might look considerably different from a mere list.
Third, change the length of papers published in print: Limit manuscripts to five to six journal-length pages, as Nature and Science do, and put a longer version up on a journal’s Web site. The two versions would work as a package. That approach could be enhanced if university and other research libraries formed buying consortia, which would pressure journal publishers to pursue this third route more quickly and aggressively. Some are already beginning to do so, but a nationally coordinated effort is needed.
There may well be other solutions, but what we surely need is a change in the academic culture that has given rise to the oversupply of journals. The fact is that one article with a high citation rating should count for more than 10 articles with negligible ratings. Moving to the model that Nature and Science use would have far-reaching and enormously beneficial effects.
Our suggestions would change evaluation practices in committee rooms, editorial offices, and library purchasing meetings. Hiring committees would favor candidates with high citation scores, not bulky publication records. Libraries would drop journals that don’t register impact. Journals would change their practices so that the materials they publish made meaningful contributions and had the needed, detailed backup available online. Finally, researchers themselves would devote more attention to fewer and better papers, and journals might become more discriminating.
Best of all, our suggested changes would allow academe to revert to its proper focus on quality research and rededicate itself to the sober pursuit of knowledge. And they would end the dispiriting paper chase that turns fledgling inquirers into careerists and established figures into overburdened grouches.