The peer-review system is often described as the “gold standard” for determining scientific merit. A study published on Thursday offers some empirical support for that belief.
The study shows that success rates of scientific projects, as measured by citations and patents, strongly correlate with the scores those projects were given under the peer-review process at the National Institutes of Health.
The analysis, published in Science, covered more than 130,000 research projects financed by the NIH from 1980 to 2008. It found that a one-standard-deviation worsening in a project’s NIH peer-review score is associated with 15 percent fewer citations, 7 percent fewer publications, 19 percent fewer “high impact” publications, and 14 percent fewer associated patents.
The findings should serve as a warning against relying on less-costly alternatives to convening panels of scientists who personally assess the merit of research proposals, one of the study’s authors said.
“There are insights in peer review that we can’t capture with quantitative information,” said the co-author, Danielle Li, an assistant professor of business administration at Harvard University. She worked on the project with Leila Agha, an assistant professor of markets, public policy, and law at Boston University.
Sally J. Rockey, who oversees grant-review activities at the NIH, called the findings gratifying. “Those data demonstrate that the peer-review process works,” said Ms. Rockey, deputy director for extramural research at the agency.
The NIH’s own assessments of the question studied by Ms. Li and Ms. Agha have not been able to quantify similar benefits of peer review, Ms. Rockey said. That, however, is most likely because NIH assessments have studied only relatively recent groups of grants, when cutbacks in the agency’s budget relative to inflation have greatly reduced the variability in project quality, she said.