This past spring, the Computing Research Association did what no other scholarly organization managed to do: It persuaded the National Research Council to use a customized methodology when it evaluated the field’s doctoral programs. But now that the rankings are out, computer scientists are crying foul, claiming that even the revised method is rife with errors.
In March, the NRC was nearing the completion of its mammoth attempt to assess more than 5,000 university programs in 62 different disciplines. After hearing the pleas of computer scientists, the council agreed to change the plan for measuring their research productivity, a crucial element in the rankings. Instead of simply counting journal articles, as it did for most fields, the NRC also counted presentations at major computer-science conferences. In computer science, such conference presentations have an unusually high status. “The field moves so quickly that waiting for peer-reviewed journal publications often isn’t the best idea,” says Andrew Bernat, the association’s executive director.
Charlotte V. Kuh, the staff director of the NRC project, says computer science was the only discipline to make a major push for a special methodology, but the scientists’ arguments were persuasive. “Their recommendation seemed sensible, although it involved a special effort at the last moment,” she says.
Yet when the NRC’s report was released last week, the association issued a statement decrying what it saw as widespread mistakes. Some computer-science departments suspect that the NRC somehow did not count all of the conferences they had agreed to count, but there is no easy way to audit the process.
One spot check, at the University of Utah, suggests that at least that institution has cause for concern. Martin Berzins, the director of Utah’s School of Computing, says his records show that his faculty members (as of 2006, when the NRC’s surveys were completed) had a total of more than 950 journal articles and conference presentations from 2000 to 2006, the period covered by the NRC’s study. But the NRC report says the department’s 37 faculty members averaged 1.64 publications per year over that period, which works out to only about 425. Where did the other 525 publications go?
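A back-of-the-envelope check of that arithmetic, sketched below in Python, uses only the figures cited above; the NRC’s actual calculation may aggregate the underlying data differently.

```python
# Back-of-the-envelope check using the figures cited above; the NRC's
# actual methodology may aggregate the data differently.
faculty = 37                       # faculty count in the NRC report for Utah
pubs_per_faculty_per_year = 1.64   # average reported by the NRC
years = 7                          # 2000 through 2006, inclusive

implied_total = faculty * pubs_per_faculty_per_year * years
print(round(implied_total))        # ~425 publications implied by the NRC's figures

utah_tally = 950                   # Utah's own count of articles and presentations
print(utah_tally - round(implied_total))  # ~525 publications unaccounted for
```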
“This is a data-based report,” says Henry M. Levy, the chairman of the computer-science and engineering program at the University of Washington, who also believes that his department’s data were badly miscounted. “For it to have any validity, the underlying data need to be accurate.”
Questioning Faculty Counts
Mr. Levy is concerned not only about apparently missing conference presentations but also about a major error in the faculty count used in the NRC report. Mr. Levy’s department is listed as having an “allocated faculty” of 62.52 in 2006. (In the NRC’s report, the term “allocated faculty” refers roughly to a program’s full-time-equivalent faculty. If a multidisciplinary professor spends half her time in a history program and half her time in a sociology program, then she is counted as 0.5 in each program’s allocated-faculty total.) But his department’s true number was much smaller than 62.52, Mr. Levy says: probably between 40 and 45.
And that matters because the faculty total is used as the denominator in several of the NRC’s measures, including publication and citation rates. If too many people are listed in that count, the program’s research-productivity numbers look much weaker than they actually are.
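To see how much the denominator matters, consider a hypothetical illustration: the publication total below is invented for the example, and only the two faculty figures come from the NRC report and Mr. Levy’s estimate.

```python
# Hypothetical illustration of how an inflated faculty count deflates a
# per-faculty rate; the publication total is invented for this example.
publications = 1000          # hypothetical program-wide total over the study period

reported_faculty = 62.52     # "allocated faculty" used in the NRC report
estimated_faculty = 42.5     # midpoint of Mr. Levy's 40-to-45 estimate

print(publications / reported_faculty)   # ~16.0 publications per faculty member
print(publications / estimated_faculty)  # ~23.5 publications per faculty member
# Same research output, but the reported rate looks roughly a third lower.
```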
Where did that error arise? When university officials filled out the surveys in 2006, Mr. Levy says, they erroneously included many nonfaculty members (such as scientists at nearby Microsoft) who had occasionally served on dissertation committees for the program.
Mr. Levy is not entirely surprised by the error, because the NRC’s survey questions about faculty counts (which can be found beginning on Page 166 of its project report) were quite complex. Certain faculty members were supposed to be included, for example, if they had served on a dissertation committee or on the graduate-admissions or curriculum committee during the previous five years. But emeritus faculty members were to be included only if they had headed a dissertation committee. And the guidelines only grew more complicated from there.
“When I went back and looked at the faculty questions, I had to read them several times to understand,” Mr. Levy says. “I almost had to graph it out.”
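The rules Mr. Levy describes can be loosely sketched as a decision procedure. The snippet below paraphrases only the conditions mentioned in this article; the NRC’s actual survey instructions contain further wrinkles it does not capture.

```python
# Loose sketch of the inclusion rules as paraphrased above; the NRC's actual
# survey instructions contain further conditions not captured here.
def include_in_allocated_faculty(is_emeritus: bool,
                                 chaired_dissertation_committee: bool,
                                 served_on_dissertation_committee: bool,
                                 served_on_admissions_or_curriculum_committee: bool) -> bool:
    if is_emeritus:
        # Emeritus faculty count only if they headed a dissertation committee.
        return chaired_dissertation_committee
    # Other faculty count if, during the previous five years, they served on a
    # dissertation committee or on the graduate-admissions or curriculum committee.
    return (served_on_dissertation_committee
            or served_on_admissions_or_curriculum_committee)
```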
Assessing Awards
Mr. Levy adds that there appears to be another major error in his program’s ranking—this one apparently the fault of the NRC rather than his university. One element of the NRC report concerns major scholarly awards and honors held by faculty members. That element happened to be weighted heavily in the computer-science field. The NRC conducted that analysis by gathering data from scholarly societies, not by asking doctoral programs directly.
The NRC report says Mr. Levy’s department has 0.09 awards per faculty member, but Mr. Levy says the correct figure, based on the NRC’s official list of awards, should be at least 10 times higher. (And that is without correcting the erroneous faculty denominator.)
“I just know off the top of my head how many Sloan fellows we have, how many members of the National Academies,” Mr. Levy says.
Mr. Levy says he does not want to place blame on anyone at his university or at the NRC. But he does want to see the numbers corrected. “If this report is going to be a once-in-15-years event,” he says, “then the importance of accuracy is very high.”
Evaluating Errors
The NRC, for its part, says the University of Washington had ample opportunities to correct those data errors, especially the inflated faculty roster. In a statement to Washington’s provost, posted on the NRC’s Web site, Ralph Cicerone, president of the National Academy of Sciences, and Charles M. Vest, president of the National Academy of Engineering, write that “it was unfortunate that faculty lists for several programs at the University of Washington were not submitted correctly to the NRC. Other universities had corrected similar mistakes in their submissions during the data-validation process.”
The NRC has also announced a general process for evaluating possible data errors in its doctoral report. But except in a limited number of cases, the council says, it will probably not recalculate any of the program rankings.
The policy, which is described at the bottom of the project’s Frequently Asked Questions page, invites universities to submit information about apparent errors before November 1. Those university statements will be compiled in a searchable table on the NRC’s Web site. But the NRC will consider correcting its master spreadsheet and recalculating rankings only in cases where it becomes clear that the data errors were the fault of the project’s staff. By contrast, in cases where the errors were generated by university officials who submitted data about their programs, the spreadsheet and rankings will not be updated.
(When any changes to the spreadsheet occur, The Chronicle will update its own tables.)
Mr. Bernat, of the Computing Research Association, says he wishes programs had had a final chance to correct their data before the report was released.
“The numbers are just flawed,” he says. “I know they tried. I know the staff took these questions very seriously. But something went wrong somewhere.”