News

Computer Scientists Cry Foul Over Data Problems in NRC Rankings

By David Glenn October 6, 2010

This past spring, the Computing Research Association did what no other scholarly organization managed to do: It persuaded the National Research Council to use a customized methodology when it evaluated the field’s doctoral programs. But now that the rankings are out, computer scientists are crying foul, claiming that even the revised method is rife with errors.

In March, the NRC was nearing the completion of its mammoth attempt to assess more than 5,000 university programs in 62 different disciplines. After hearing the pleas of computer scientists, the council agreed to change the plan for measuring their research productivity, a crucial element in the rankings. Instead of simply counting journal articles, as it did for most fields, the NRC also counted presentations at major computer-science conferences. In computer science, such conference presentations have an unusually high status. “The field moves so quickly that waiting for peer-reviewed journal publications often isn’t the best idea,” says Andrew Bernat, the association’s executive director.


Charlotte V. Kuh, the staff director of the NRC project, says computer science was the only discipline to make a major push for a special methodology, but the scientists’ arguments were persuasive. “Their recommendation seemed sensible, although it involved a special effort at the last moment,” she says.

Yet when the NRC’s report was released last week, the association issued a statement decrying what it saw as widespread mistakes. Some computer-science departments suspect that the NRC somehow did not count all of the conferences they had agreed to count, but there is no easy way to audit the process.

One spot check, at the University of Utah, suggests that at least that institution has cause for concern. Martin Berzins, the director of Utah’s School of Computing, says his records show that his faculty members (as of 2006, when the NRC’s surveys were completed) had a total of more than 950 journal articles and conference presentations from 2000 to 2006, the period covered by the NRC’s study. But the NRC report says the department’s 37 faculty members averaged 1.64 publications apiece each year over that period, which works out to only about 425. Where did the other 525 publications go?
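
The gap is simple arithmetic. A minimal sketch, using only the figures quoted above (37 faculty members, 1.64 publications per year, and the seven years from 2000 through 2006), shows where the roughly 525 missing publications come from.

```python
# Back-of-the-envelope check of the Utah figures quoted above.
# All numbers come from the article, not from NRC source data.
faculty = 37                        # faculty members in Utah's School of Computing
pubs_per_faculty_per_year = 1.64    # average reported in the NRC study
years = 7                           # 2000 through 2006, inclusive

implied_total = faculty * pubs_per_faculty_per_year * years
print(round(implied_total))         # ~425 publications implied by the NRC's average

local_count = 950                   # Utah's own records: "more than 950"
print(local_count - round(implied_total))  # ~525 publications unaccounted for
```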

“This is a data-based report,” says Henry M. Levy, the chairman of the computer-science and engineering program at the University of Washington, who also believes that his department’s data were badly miscounted. “For it to have any validity, the underlying data need to be accurate.”

Questioning Faculty Counts

Mr. Levy is concerned not only about apparently missing conference presentations but also about a major error in the faculty count used in the NRC report. Mr. Levy’s department is listed as having an “allocated faculty” of 62.52 in 2006. (In the NRC’s report, the term “allocated faculty” refers roughly to a program’s full-time-equivalent faculty. If a multidisciplinary professor spends half her time in a history program and half her time in a sociology program, then she is counted as 0.5 in each program’s allocated-faculty total.) But his department’s true number was much smaller than 62.52, Mr. Levy says: probably between 40 and 45.

And that matters because the faculty total is used as the denominator in several of the NRC’s measures, including publication and citation rates. If too many people are listed in that count, the program’s research-productivity numbers look much weaker than they actually are.
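
To see why the denominator matters so much, consider a minimal sketch with hypothetical numbers; only the 62.52 allocated-faculty figure and Mr. Levy’s 40-to-45 estimate come from the article.

```python
# Hypothetical illustration of how an inflated faculty denominator
# depresses per-faculty productivity measures. The publication total
# below is invented for the example; only the faculty figures are
# taken from the article.
publications = 800                  # hypothetical total publications for the period

true_faculty = 42.0                 # within the 40-45 range Mr. Levy cites
reported_faculty = 62.52            # allocated-faculty figure in the NRC report

print(round(publications / true_faculty, 1))      # 19.0 publications per faculty member
print(round(publications / reported_faculty, 1))  # 12.8 -- same output, weaker-looking rate
```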

Where did that error arise? When university officials filled out the surveys in 2006, Mr. Levy says, they erroneously included many nonfaculty members (such as scientists at nearby Microsoft) who had occasionally served on dissertation committees for the program.

Mr. Levy is not entirely surprised by the error, because the NRC’s survey questions about faculty counts (which can be found beginning on Page 166 of its project report) were quite complex. Certain faculty members were supposed to be included, for example, if they had served on a dissertation committee or on the graduate-admissions or curriculum committee during the previous five years. But emeritus faculty members were to be included only if they had headed a dissertation committee. And the guidelines only grew more complicated from there.

“When I went back and looked at the faculty questions, I had to read them several times to understand,” Mr. Levy says. “I almost had to graph it out.”

Assessing Awards

Mr. Levy adds that there appears to be another major error in his program’s ranking—this one apparently the fault of the NRC rather than his university. One element of the NRC report concerns major scholarly awards and honors held by faculty members. That element happened to be weighted heavily in the computer-science field. The NRC conducted that analysis by gathering data from scholarly societies, not by asking doctoral programs directly.

The NRC report says Mr. Levy’s department has 0.09 awards per faculty member, but Mr. Levy says the correct figure, based on the NRC’s official list of awards, should be at least 10 times higher. (And that is without correcting the erroneous faculty denominator.)

“I just know off the top of my head how many Sloan fellows we have, how many members of the National Academies,” Mr. Levy says.

Mr. Levy says he does not want to place blame on anyone at his university or at the NRC. But he does want to see the numbers corrected. “If this report is going to be a once-in-15-years event,” he says, “then the importance of accuracy is very high.”

Evaluating Errors

The NRC, for its part, says the University of Washington had ample opportunities to correct those data errors, especially the inflated faculty roster. In a statement addressed to Washington’s provost on the NRC’s Web site, Ralph Cicerone, president of the National Academy of Sciences, and Charles M. Vest, president of the National Academy of Engineering, write that “it was unfortunate that faculty lists for several programs at the University of Washington were not submitted correctly to the NRC. Other universities had corrected similar mistakes in their submissions during the data-validation process.”

The NRC has also announced a general process for evaluating possible data errors in its doctoral report. But except in a limited number of cases, the council says, it will probably not recalculate any of the program rankings.

The policy, which is described at the bottom of the project’s Frequently Asked Questions page, invites universities to submit information about apparent errors before November 1. Those university statements will be compiled in a searchable table on the NRC’s Web site. But the NRC will consider correcting its master spreadsheet and recalculating rankings only in cases where it becomes clear that the data errors were the fault of the project’s staff. By contrast, in cases where the errors were generated by university officials who submitted data about their programs, the spreadsheet and rankings will not be updated.

(When any changes to the spreadsheet occur, The Chronicle will update its own tables.)

Mr. Bernat, of the Computing Research Association, says he wishes programs had had a final chance to correct their data before the report was released.

“The numbers are just flawed,” he says. “I know they tried. I know the staff took these questions very seriously. But something went wrong somewhere.”

About the Author
David Glenn
David Glenn joined The Chronicle of Higher Education in 2002. His work explored how faculty members are trained, encouraged, and evaluated as teachers; how college courses and curricula are developed; and the institutional incentives that sometimes discourage faculty members from investing their energy in teaching.