Can It Really Be True That Half of Academic Papers Are Never Read?
By Arthur G. Jago
June 1, 2018
A recent Chronicle opinion essay arguing that the tenure process can be quite unfair included this line: “At least one study found that the average academic article is read by about 10 people, and half of these articles are never read at all.” In a commentary that I was otherwise in complete agreement with, I found that particular statement quite unbelievable. First, the magnitude of the assertions was simply astonishing. Second, I was perplexed by how someone could design a study to empirically determine that some published articles were never read. Such a study was beyond my imagination; the pseudo-logical fallacy of “proving the negative” came to mind.
I contacted the author and was provided her source, an article in Smithsonian, the magazine. This article actually qualified (somewhat) the implausible claim by asserting that 50 percent of papers are never read by anyone “other than their authors, referees and journal editors.” I guess it is some consolation to know that humans do indeed write, review, and select most manuscripts for publication, although we do know that computer-written gibberish occasionally makes it into print, into citation indices, and into researchers’ h-indices.
A link in the Smithsonian article points to Indiana University as its source for the statistics, but this proved inaccurate. The Smithsonian author redirected me to the actual source, a 2007 article by Lokman Meho in Physics World, the magazine of the London-based Institute of Physics. When I asked Meho for his source of the cited statistics, he told me that “this statement was added to my paper by the editor of the journal at the time and I unfortunately did not ask from where he got this information before the paper was published.” The Meho article has been formally cited over 300 times.
In turn, I contacted the editor of Physics World from 2007. He told me that “it was indeed” something that he had inserted during editing, from material provided to him in a communications course taken at Imperial College London in 2001. I contacted the instructor of that course, now retired, who told me he could not provide me with a specific reference to what is now “ancient history” but that “everything in those notes had a source, but whether I cross-checked them all before banging the notes out, I doubt.”
The Physics World editor suggested that the Imperial College course material may have been based on a 1991 article in Science. However, I discovered that the Science article was not about unread research but about uncited research. Not being read is a sufficient condition for not being cited; not being cited, however, is not a sufficient condition for not being read. In other words, an uncited article may still have been read. As a striking illustration of the difference, Nature recently identified a 2010 online paper that has never been cited but has been viewed 1,500 times and downloaded 500 times. (The paradox, of course, is that this uncited paper is no longer uncited, by virtue of being cited for its uncitedness.)
Frustrated, I ended my search for the bibliographic equivalent of “patient zero.” The original source of the fantastical claim that the average academic article has “about 10 readers” may never be known for sure.
In the bigger picture, it is certainly true that much of published research has limited readership. As a young scholar — and well before electronic journal access — I was quite amazed to learn that one of the five most prestigious academic journals in my field (business management) had a worldwide circulation, including all libraries, of a mere 800 copies. Indeed, our audiences are often quite small, and some large percentage of articles undoubtedly have very little impact.
However, the fact that an assertion is intuitively appealing or reinforces existing beliefs does not justify misstatements of fact or the distortion or embellishment of what can be documented. In their communications with me, all of the participants in this tale — good people, to be sure — recognized an absence of sound justification in their actions.
Even when a primary source is accurate, a reference to it may still be quite problematic when an author relies upon a flawed secondary source but cites, instead, the primary source. Using statistical modeling of recurring identical misprints in bibliographic entries, two UCLA engineers estimate that “only about 20 percent of citers read the original” article that they claim as a source in their own reference lists. Stated otherwise, 80 percent of citers are not readers, and they propagate, in their own articles, the flaws they encounter in secondary sources.
This object lesson in the perils of relying on secondary sources reminds us all that our readers place a trust in us each time we put words to paper. We have a duty, on behalf of all authors, to do our best to fulfill that trust when we produce those words. A single mistake — a bibliographic patient zero — may be quite small and entirely unintentional. However, it can infect the literature like a self-duplicating virus and become amplified with time.
In a 2009 essay, the Pulitzer Prize winner John McPhee noted that “any error is everlasting” and quoted Sara Lippincott, a New Yorker fact-checker, that once an error gets into print it “will live on and on in libraries carefully catalogued, scrupulously indexed … silicon-chipped, deceiving researcher after researcher down through the ages, all of whom will make new errors on the strength of the original errors, and so on and on into an exponential explosion of errata.” Lesson learned.
Arthur G. Jago is a professor emeritus of management at the University of Missouri at Columbia. He has published articles in, among other journals, the very prestigious but not widely read Organizational Behavior and Human Decision Processes.