Even scientists are only human, and their biases mar the system used by the National Institutes of Health to evaluate grant proposals, according to an analysis by a researcher at the University of Texas’ M.D. Anderson Cancer Center.
The article, by Valen E. Johnson, a professor of biostatistics, appeared July 29 in the Proceedings of the National Academy of Sciences. It comes as the NIH plans changes in the way it does peer review (The Chronicle, June 9).
But the new analysis offers recommendations that the proposed changes may not fulfill.
Mr. Johnson evaluated data from reviews of nearly 19,000 proposals that the NIH received in 2005. The reviews were performed by 14,000 individuals. Following standard procedure, each grant proposal was read by a few people on a committee. Then, after discussion among the full group, known as a study section, it received scores from all of the members. The scores were used to determine which proposals received funds.
Mr. Johnson found that the biases of those few assigned readers can sway the votes of the other members. If the readers tended to give more favorable scores than the other reviewers did, then the proposals they read were more likely to receive grants, he reported. Correcting for those individual tendencies, he calculated, would change one-fifth to one-fourth of the decisions about which proposals receive financing.
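The kind of correction described here can be illustrated with a deliberately simple sketch: estimate each reviewer's leniency as his or her average deviation from the panel consensus, then subtract it from that reviewer's raw scores. The data and the additive adjustment below are hypothetical inventions for illustration; they are not Mr. Johnson's actual statistical model from the PNAS article.

```python
# Hypothetical panel scores: proposal -> {reviewer: raw score, lower = better}.
scores = {
    "P1": {"A": 1.2, "B": 1.5, "C": 2.0},
    "P2": {"A": 1.8, "B": 2.1, "C": 2.6},
    "P3": {"A": 2.5, "B": 2.4, "C": 3.0},
}

# Panel consensus for each proposal: the mean of its raw scores.
consensus = {p: sum(revs.values()) / len(revs) for p, revs in scores.items()}

# Each reviewer's leniency: average deviation from consensus across
# every proposal he or she scored (negative = more generous than the panel).
deviations = {}
for p, revs in scores.items():
    for reviewer, s in revs.items():
        deviations.setdefault(reviewer, []).append(s - consensus[p])
leniency = {rev: sum(d) / len(d) for rev, d in deviations.items()}

# Adjusted score: raw score minus the reviewer's estimated leniency.
adjusted = {
    p: {rev: s - leniency[rev] for rev, s in revs.items()}
    for p, revs in scores.items()
}
```

In this toy data, reviewer C scores consistently harsher than the panel and reviewer A consistently more generous, so the adjustment pulls C's scores down toward the consensus and pushes A's up; near a funding cutoff, that kind of shift is exactly what could flip a borderline decision.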
The influence of bias was most severe, Mr. Johnson said, for proposals that received scores near the line separating those that received funds from those that did not.
He suggested that for proposals near that border, reviewers should take into account the dollar amount requested, and reject the more expensive ones. "If they did that," he said, "they could fund more proposals than they do now." Such a step would also encourage researchers to request less money, he argued, thereby further increasing the number of grants the NIH could provide.
Antonio Scarpa, director of the NIH’s Center for Scientific Review, which runs the review process for 70 percent of the agency’s grant applications, said Mr. Johnson’s analysis provided good ideas but “is not a silver bullet.” He objected to the idea of considering the amount of money requested, because some studies, like clinical trials, are intrinsically expensive.
Mr. Scarpa said one of the proposed changes, to have each study section rank the various applications, could take care of some of Mr. Johnson’s concerns, since direct comparisons between borderline applications could help the stronger ones prevail.