Eric A. Fong’s manuscript had been conditionally accepted. The editor said Fong needed to ensure it conformed with the journal’s style and to shorten it to meet the word limit. That was easy enough. But the third condition gave Fong pause.
He’d cited only one source from the journal he’d submitted the article to. The editor wrote in an email that this was “unacceptable,” and told him to “please add at least five more.”
Adding citations to articles in the same journal, as the editor had requested, would inflate the journal’s impact factor, a metric often treated as a proxy for a journal’s importance. It’s a phenomenon some scholars call “coercive citation,” but Fong, then an assistant professor of management at the University of Alabama at Huntsville, had never heard that term.
Still, he felt what he was being asked to do was wrong. And yet publishing this paper would be an important part of his case for tenure. Conflicted, Fong printed out the email and headed to Allen Wilhite’s office. Wilhite, Fong’s mentor and an economics professor, was stunned. Most of their colleagues were, too. A few, though, said they had received a similar request from an editor.
Coercive citation has drawn increased attention in recent years. Last month two researchers at the Dutch publishing giant Elsevier published a study, titled “When Peer Reviewers Go Rogue,” that examined the citation patterns of nearly 55,000 reviewers for its journals. They found that 433 of those reviewers — less than 1 percent — consistently had their own work cited in papers they reviewed.
The study was spurred by a 2017 incident in which Artemi Cerdà, a soil scientist at the University of Valencia, in Spain, resigned from the editorial board of the journal Geoderma amid accusations he’d asked authors to cite his own work or that of journals he was affiliated with. Cerdà denied the allegations, but Elsevier, Geoderma’s publisher, concluded he was guilty.
Coercive citation is rare, the study suggests, but when it does occur, it’s egregious. Analysis of Elsevier’s reviewer network found one scholar who had requested in 120 separate reviews that the authors add “multiple irrelevant citations” to their papers. Only four of the authors refused to do so.
Faced with his own coercion dilemma, Fong, who’s now an associate professor, wound up adding the superfluous citations to his paper — and he did the same when a reviewer on another paper asked for more citations. He felt he couldn’t refuse. “I would not be here today if I didn’t succumb to the pressure of the editors. Without those publications, my record probably would not have been deemed tenurable,” Fong says. “I’m not saying that makes my decision right, but that’s the pressure that I was under.”
Wilhite understood his protégé’s decision. “It’s easy to say to somebody, ‘That’s just wrong, don’t do it,’” Wilhite says. “But if that has an impact on your future, it’s tough to stand up to that.”
Fong, though, would later stand up to the practice, albeit in a different way. He and Wilhite have written a number of papers on citation manipulation. Their first, a 2012 survey in Science, reached the opposite conclusion of the Elsevier study — the phenomenon, Fong and Wilhite wrote, was “uncomfortably common” among their respondents. Those who said they’d been targets of such behavior were more likely to be, as Fong was, untenured and the sole authors of the papers in question.
In 2017 the duo expanded their initial survey to more disciplines and to other dubious practices, such as adding honorary authors to manuscripts and grant applications. This year Fong and Wilhite have worked on two more papers on the subject — one examining the influence of academic networks on journal impact factors and another arguing that citation manipulation constitutes criminal activity. They plan to publish more data that indicate editors are more likely to reject papers whose authors do not add requested citations.
“The instant I earned tenure, we took off with this,” Fong says. “We kind of made hay out of fighting it.”
A Smoking Gun
A bundle of terms have been used to describe the constellation of bad-faith actions plaguing citation culture: coercive citation, citation manipulation, citation stacking, citation pushing, citation padding.
Not to mention “citation cartels,” which are pretty much what they sound like: groups of editors or journals banding together, usually agreeing to cite one another’s work for a mutual impact-factor bump. The term was coined in a 1999 article entitled “Scientific Communication — A Vanity Fair?”
The genius of citation cartels is that they spread unethical practices across multiple journals and editors, making them nearly impossible to detect without access to an extensive database of citations, which must then be mapped to reveal suspicious connections.
That’s where Phil Davis comes in. A former science librarian at Cornell University, Davis believes he was the first to identify and write about a particular citation cartel, in 2012. He now makes a living working with academics who sense something fishy in their publishing networks.
“Usually, when I get drawn into looking at this is when some editor says, ‘I can’t believe that this competing journal is doing so well all of a sudden. It doesn’t make sense,’” Davis says. “Or how someone whose papers have always been considered to be mediocre are all of a sudden superstar papers.”
Once that skepticism registers, Davis does what he calls “detective work” to unearth patterns.
Journal impact factors are calculated based on the past two years of activity. So Davis mines the data to ascertain whether, for instance, a journal has cited itself to a seemingly excessive degree within that period — though that alone doesn’t prove wrongdoing.
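The two-year calculation Davis mines for anomalies can be sketched roughly as follows. This is a simplified illustration with hypothetical numbers, not Clarivate's exact method, which has additional rules about which items count as "citable":

```python
def impact_factor(citations_by_year, items_by_year, year):
    """Rough two-year impact factor for `year`: citations received
    in `year` to items published in the two preceding years, divided
    by the number of citable items published in those years."""
    prior = (year - 1, year - 2)
    # (citing_year, cited_year) -> citation count
    cites = sum(citations_by_year.get((year, y), 0) for y in prior)
    items = sum(items_by_year.get(y, 0) for y in prior)
    return cites / items if items else 0.0

# Hypothetical journal: 300 citations in 2019 to its 2017-18 papers,
# 150 citable items published across those two years.
score = impact_factor({(2019, 2018): 180, (2019, 2017): 120},
                      {2018: 80, 2017: 70}, 2019)
# → 2.0
```

Because only a two-year window counts, even a modest burst of coordinated citations within that window can move the number noticeably, which is what makes the pattern detectable in the data.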
“If you don’t know what’s going on behind the scenes, between reviewers and authors, or editors and authors, all you can detect is a pattern — a strange, bizarre pattern,” Davis says. “You really have to find the smoking gun, and the smoking gun is found in the communication between the editor and the author.”
‘Kind of Like Taxes’
Davis thinks there are broader questions to be asked about how and why citation manipulation happens. For one thing, he doubts that many offenders start with the intention of masterminding a citation cartel. “No one grows up as a crook. Usually, they start by doing a small crooked behavior that doesn’t get detected. And they would say, ‘Oh, hey, that worked,’ and then they do it again,” Davis says. “It’s kind of like taxes. Many people do a little bit of manipulation. Those people who attempt to do a lot of manipulation get discovered sooner or later.”
It’s easy to understand the appeal. In fact, Davis sometimes wonders why citation manipulation doesn’t happen more often.
Particularly insidious, Davis says, are the ways citations and impact factors have become so closely tied to an academic’s success. Accumulating lots of citations, or publishing in a journal that tops impact-factor rankings in Journal Citation Reports, an annual publication, can make all the difference in a scholar’s promotion-and-tenure dossier.
Authors may not even realize they’re being coerced since reviewer feedback is often provided anonymously. And authors may be especially compelled to comply with the requested citations when those requests come with more substantive suggestions for revision.
Consequences of Gaming the System
Davis, Fong, and Wilhite are foot soldiers in the movement against citation manipulation, but none of them professes to be a ruling authority on the matter. That responsibility — of deciding if and how to discipline bad actors — falls to Marie E. McVeigh.
McVeigh is head of editorial integrity at Clarivate Analytics, which publishes Journal Citation Reports. She has worked in citation science since 1984, and was involved in the first delisting of a journal for self-citation, in 2004.
In the aftermath of that incident, Clarivate cracked down on journal self-citation, and the numbers, predictably, went down. But a new wave of manipulative practices emerged in its place. In June, Clarivate issued an update to a 2018 editorial expression of concern about manipulative practices. The firm spotted a “pattern of highly concentrated referencing” in five of its journals. Three authors, Clarivate found, had published more than 50 papers in that group of journals, and all of those authors sat on at least one of the journals’ editorial boards.
But there were more “abnormal characteristics” in the journals’ citation patterns, according to the expression of concern. References were clustered in introductory and discussion sections of papers, acting as “generalized, nonspecific demonstrations that the subject being discussed in the paper is being actively researched.” What’s more, Clarivate sometimes found as many as 30 sources attached to a single reference.
The consequences? Of the five journals, three were delisted from Journal Citation Reports — and thus denied an impact-factor ranking — for one year. A fourth journal’s ranking was so distorted by the manipulation that it, too, had to be withheld, despite no wrongdoing by its own editors.
Searching for Solutions
When McVeigh and her team investigate potential manipulation, she says, they’re met with a range of reactions, from “helpful engagement” to “absolute denial of motive” — which, McVeigh says, is understandable. “I think one of the grievous difficulties that we face is an editor or publisher or author will assume that the action that we take is specifically punitive to them,” she says. “This is not a punitive action. This is merely an action to maintain the quality of our data.”
So how else can that quality be maintained?
Jeroen Baas, a co-author of the Elsevier study released last month, suggests a middleman between author and editor in the publication process — someone who “could get into these kind of delicate questions.”
Wilhite and Fong are skeptical, though. Adding another dimension to the already-slow process of publishing academic work, Wilhite says, isn’t the solution. Why not instead publish a “recommended reading” list at the end of papers, one that could include related pieces in the same journal but wouldn’t count toward the journal’s impact factor — or, for that matter, remove journal self-citations altogether?
After all, Fong and Wilhite say, the problem with citation manipulation boils down to one thing: impact factors. Has academe succumbed to the allure of one number to rule them all? Maybe in some cases. All the more reason, the pair think, to reinvent it.
“Impact factor is what we make it,” Fong says. “All it is is a formula. So why not make a formula that’s less manipulable?”
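One version of the fix Wilhite and Fong gesture at — simply not counting journal self-citations — would be a small change to the formula. As a hedged sketch with hypothetical numbers:

```python
def impact_factor(total_cites, self_cites, citable_items, exclude_self=False):
    """Simplified two-year impact factor: incoming citations divided by
    citable items. With exclude_self=True, citations a journal gives
    itself no longer count, removing the payoff of coerced self-citation."""
    cites = total_cites - self_cites if exclude_self else total_cites
    return cites / citable_items if citable_items else 0.0

# Hypothetical journal: 200 incoming citations, 60 of them
# self-citations, 100 citable items.
impact_factor(200, 60, 100)                      # → 2.0
impact_factor(200, 60, 100, exclude_self=True)   # → 1.4
```

A formula like the second variant would still be gameable through cartels across journals, which is presumably why the pair also propose the "recommended reading" list that sits outside the count entirely.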
These days Fong finds himself playing the role that Wilhite played for him years ago for younger faculty members. “I’ve actually had assistant professors bring reviews to me, not because they were coerced, but just because I’m now a senior faculty and they want to get my advice,” Fong says. “I read the reviews, and lo and behold, there’s coercion in it” — coercion, he says, that his younger colleagues, eager like he once was to earn tenure, don’t even register as a problem.
Megan Zahneis, a senior reporter for The Chronicle, writes about faculty and the academic workplace. Follow her on Twitter @meganzahneis, or email her at megan.zahneis@chronicle.com.