More than 150 researchers and 75 scientific groups issued a declaration on Thursday against the widespread use of journal “impact factors,” blaming the practice for dangerous distortions in financing and hiring in science.
The impact factor “has a number of well-documented deficiencies as a tool for research assessment,” the scientists said in the letter, which had been in preparation since a conference led by publishers and grant-making agencies last year in San Francisco.
Those deficiencies include publishers’ ability to manipulate the calculations, and the way the metric encourages universities to make hiring and promotion decisions, and grant agencies to distribute awards, without an in-depth understanding of the scientific work involved.
“There is certainly a need for fair and objective methods to evaluate science and scientists, no doubt about that,” said Stefano Bertuzzi, executive director of the American Society for Cell Biology, which organized the campaign. “But that need does not change the fact that the journal impact factor does not measure what it’s supposed to measure when it is applied to evaluations of scientists’ work.”
For all those who signed the letter, however, its effect may be blunted by those who did not, among them some of the world’s most prominent publishers and representatives of leading research universities. The holdouts include the Nature Publishing Group and Elsevier, two of the dominant scientific publishers, and the Association of American Universities, which represents top-ranked research institutions.
The editor in chief of Nature, Philip Campbell, said he and other editors of the company’s journals have regularly published editorials critical of excesses in the use of journal impact factors, especially in rating researchers.
“But the draft statement contained many specific elements, some of which were too sweeping for me or my colleagues to sign up to,” Mr. Campbell said. Among the letter’s 18 recommendations was a call for journals to “greatly reduce emphasis on the journal impact factor as a promotional tool.”
A spokesman for the AAU, Barry Toiv, said he had no comment on the matter.
Years of Criticism
The impact factor is a number, calculated annually for each scientific journal, that reflects how often the journal’s recent articles are cited; in its standard form, it is the average number of citations received in a given year by the articles the journal published in the previous two years. Some journals have been accused of inflating their ratings through practices that include requiring authors to cite articles that have appeared previously in the journal.
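For readers who want the arithmetic, a rough sketch of the conventional two-year calculation follows; the exact citation window and the definition of a “citable item” vary by provider and are assumptions here:

\[
\mathrm{IF}_{Y} = \frac{C_{Y}(Y-1) + C_{Y}(Y-2)}{N_{Y-1} + N_{Y-2}}
\]

where \(C_{Y}(y)\) is the number of citations received in year \(Y\) by items the journal published in year \(y\), and \(N_{y}\) is the number of citable items the journal published in year \(y\). By this formula, a journal that published 100 citable articles across 2011 and 2012, and whose articles drew 250 citations in 2013, would have a 2013 impact factor of 2.5.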
The measure was first developed more than 50 years ago as a way to help librarians decide which subscriptions to maintain. The simple statistic made sense for that purpose, Mr. Bertuzzi said, but not for its now-common use by universities and grant-making agencies in important hiring and financing decisions.
Although Nature declined to sign the letter, another top-ranked journal, Science, backed the effort. In an editorial timed to the release of the so-called San Francisco Declaration, the editor in chief of Science, Bruce Alberts, said problems attributable to the overreliance on impact factors include scientists’ avoiding riskier research that’s less certain to command a wide audience.
A focus on impact factors also “wastes the time of scientists by overloading highly cited journals such as Science with inappropriate submissions from researchers who are desperate to gain points from their evaluators,” Mr. Alberts wrote.
The impact factor has nevertheless withstood years of criticism, and Mr. Bertuzzi acknowledged that there are no simple solutions, given the financial pressures on universities, publishers, and grant-making agencies.
Still, there may be some new signs that the criticism is having an effect. The National Cancer Institute, a division of the National Institutes of Health, plans this year to join the private Howard Hughes Medical Institute and a few universities in pressing grant applicants to broaden their descriptions of career accomplishments beyond the common list of journal publications.
Under the cancer institute’s plan, scientists will be “asked to describe your five leading contributions to science as a way of helping a reviewer to evaluate your contributions rather than depending on where your name is positioned in a paper with 15 authors or 500 authors,” the director of the cancer institute, Harold E. Varmus, told the annual meeting of the American Association for Cancer Research last month.
Dr. Varmus said on Thursday that he fully backs the San Francisco Declaration. Agencies, he said, “need to change the culture of science, especially with respect to the way that scientists evaluate each other, moving away from simplistic metrics.”