Now it can be told. The American doctoral program with the longest median time-to-degree is the music program at Washington University in St. Louis: 16.3 years.
That’s just one of a quarter million data points that appear in the National Research Council’s new report on doctoral education in the United States, which was finally unveiled Tuesday afternoon after years of delay. (The Chronicle has published an interactive tool that allows readers to compare doctoral programs across 21 variables.)
The NRC’s new ranking system will draw the most immediate attention. It is far more complex than the method the agency used in its 1982 and 1995 doctoral-education reports. Whereas Cornell University’s philosophy program was once simply ranked as the eighth strongest in the country, it must now be content to know that it has an “R-ranking” between 2 and 19 and an “S-ranking” between 16 and 34. (The first is derived indirectly from programs’ reputations, and the second is derived more directly from programs’ characteristics. For a detailed explanation, see our Frequently Asked Questions page.)
Why did the project adopt those complex ranges? Because the old system of simple ordinal ranks offered a “spurious precision,” said Jeremiah P. Ostriker, chairman of the project committee, in a conference call with reporters on Monday.
“There are many different sources of uncertainty in the data,” said Mr. Ostriker, who is a professor of astrophysics at Princeton University. “We put them together as well as we could.... That means that we can’t say that this is the 10th-best program by such-and-such criteria. Instead, we can say that it’s between fifth and 20th, where that range includes a 90-percent confidence level. It’s a little unsatisfactory, but at least it’s honest.”
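The committee's actual procedure is laid out in the report's methodology guide; as a rough, purely illustrative sketch of the general idea, the short Python simulation below (using made-up scores and uncertainties, not NRC data) shows how repeatedly re-scoring programs under their estimated uncertainty turns a single ordinal rank into a rank range at a 90-percent level.

# Purely illustrative: a toy bootstrap, not the NRC committee's actual procedure.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: composite scores for 30 programs, each with an
# estimated uncertainty (standard error) on its score.
scores = rng.normal(50, 10, size=30)
errors = rng.uniform(2, 6, size=30)

def rank_range(scores, errors, draws=5000, level=0.90):
    # Re-score every program `draws` times under its uncertainty, re-rank each
    # time (1 = best), and keep the middle `level` share of the simulated ranks.
    n = len(scores)
    ranks = np.empty((draws, n), dtype=int)
    for i in range(draws):
        perturbed = scores + rng.normal(0.0, errors)
        order = np.argsort(-perturbed)          # program indices from best to worst
        ranks[i, order] = np.arange(1, n + 1)   # rank held by each program this draw
    lo = np.percentile(ranks, 100 * (1 - level) / 2, axis=0)
    hi = np.percentile(ranks, 100 * (1 + level) / 2, axis=0)
    return lo.astype(int), hi.astype(int)

low, high = rank_range(scores, errors)
print(f"Program 0: ranked between {low[0]} and {high[0]} in 90 percent of draws")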
Over the long run, scholars may focus less on those baroque rankings and more on the report’s underlying data. The NRC report contains some of the most thorough measures ever collected of doctoral-student completion rates, time-to-degree, faculty diversity, and student-support activities.
Evidence of Age
The bad news is that many of those data points have surely gone stale, because the NRC conducted its surveys back in 2006-7. In some departments, so many faculty members have come and gone since 2006 that the research-productivity numbers may no longer be reliable. In other departments, student services and financial aid have changed for better or worse since 2006. So all of the figures should be approached with some caution. The (tentative) good news is that many graduate-school deans hope to continue to collect and analyze such data, even if the NRC itself never conducts another national study.
“There’s going to be a short-term response and a long-term response to this report,” said Debra W. Stewart, president of the Council of Graduate Schools, in an interview on Monday. “The long-term response will be the important one. I think that the framework of this report will help support an ethos of continuous improvement.”
Donna Heiland, vice president of the Teagle Foundation, who has written about the challenges of doctoral assessments, hopes that scholars will not spend too much time picking at the data’s inevitable flaws.
“With projects like this,” she said in an interview last week, “the first thing that happens is that everyone looks at the data and complains that it’s stale or that it isn’t right. But I’ve become converted to the idea that data are just not going to be perfect. The data are not going to be correct down to every single detail. But if you can use this report to open up conversations about student funding or other elements of your program, it’s accomplished its purpose.”
Others are not so sure. Many of the data in the report depend crucially on the correctness of the underlying counts of each program’s faculty members. (Critics of the previous NRC reports said they unreasonably favored large programs, so in this report, several variables are scored on a “per-full-time-faculty-member” basis.) On Monday, the University of Washington’s College of Engineering published a note of protest, saying that the NRC had used incorrect, severely inflated faculty counts when assessing Washington’s engineering programs. Because those denominators are wrong, the statement says, each program’s faculty-publication rates and citation rates look much weaker than they actually are.
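The arithmetic is straightforward in a hypothetical case: 200 publications credited to a program’s actual roster of 40 faculty members works out to 5 per person, but divide the same 200 publications by an erroneously doubled roster of 80 and the rate falls to 2.5, making the program look half as productive as it really is.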
Then there is the inevitable issue: Will universities use the report to think about culling weak programs? Should they?
Some officials say the report shouldn’t be used to guide the ax, while others say that the data may indirectly point to winners and losers. “Context is everything,” said Joan F. Lorden, provost and vice chancellor for academic affairs at the University of North Carolina at Charlotte and a member of the project committee. “Maybe a low-ranked program is one that you want to invest in.”
“The conversation will be much more complicated than just producing a cut list from the NRC rankings,” said Richard Wheeler, a vice provost of the University of Illinois at Urbana-Champaign and another member of the NRC project committee, during Monday’s conference call. “Any time a program is looked at in a really stringent review, an enormous amount of information is brought forward. At our universities, an enormous amount is known about these programs that couldn’t possibly be captured in a report like this.”
Potential for Program Cuts
Be that as it may, Ms. Heiland of the Teagle Foundation believes the report could affect the survival of some programs: economic pressures will soon push many institutions to restructure, and that may sometimes mean shedding doctoral programs or merging them with those of nearby institutions.
“Universities tend to think of themselves as complete universes,” said Ms. Heiland. “But I think we need to rethink campuses and to think of them not as self-contained entities but as hubs of learning, open to the world, networked.” The NRC report, she believes, holds the seeds of much of this rethinking. “I’d love to see these data, with all of their flaws and limitations, spark some kind of creative discussion that responds to the national need to educate more scientists, to educate more humanists.”
Some scholars, of course, believe that the entire project of ranking academic programs is folly.
“Rankings have been perceived as synonymous with quality,” said Bruce E. Keith, an associate dean at the United States Military Academy, in an interview on Monday. But projects like the NRC’s, he said, tend to measure quality overwhelmingly in terms of research prestige while paying too little attention to how students are shaped by the programs. Where do their graduates work five years after they have completed their degrees? How many of their dissertations are later published as books? How many of them receive major grants from the National Science Foundation or the National Institutes of Health? (The new report does include a measure of whether graduates of the programs immediately get academic jobs or postdoctoral fellowships, but there are no long-term measures of students’ careers.)
Mr. Keith wishes the new NRC project had focused more explicitly on how programs affect students—an idea that was endorsed in the research council’s 1995 report. One passage in that report said, “The primary questions to be answered are, ‘Do differences in scholarly quality of program faculty or other ratings result in measurable differences in careers of research and scholarship among program graduates? Are these differences attributable to program factors, or are other factors at work?’”
The quarter million data points in the new NRC report will probably shed light on many mysteries, Mr. Keith said. But those fundamental questions about programs’ effects on students still wait to be answered by some new study over the horizon.