Washington, D.C. -- A sweeping assessment of the nation’s doctoral programs was released last week by the National Research Council, ranking the best, the worst, and the merely adequate offerings in 41 fields.
The long-awaited study, four years in the making, updates and expands an assessment of graduate education last issued by the N.R.C. in 1982. The new version examines the quality and effectiveness of 3,634 doctoral programs at 274 universities.
As expected, programs at the nation’s top research institutions -- such as Harvard, Princeton, and Stanford Universities, and the University of California at Berkeley -- ranked at the top in many fields. But doctoral programs at some up-and-coming institutions, like the University of California at San Diego, made strong showings in many disciplines.
No other study of graduate education has the scope of the N.R.C. report, which is titled Research-Doctorate Programs in the United States: Continuity and Change. Academics may be mildly curious about how their campuses fare in the college rankings of U.S. News & World Report, but most of them view such ratings with skepticism. They have more respect for the findings of the council’s report, because it is the collective judgment of their peers and because they know that much more is at stake.
For the next decade, for example, graduate students will use the N.R.C. study to decide where to pursue a Ph.D. Administrators will use it to consider which doctoral programs on their campuses deserve more money and which, perhaps, deserve to be overhauled or abandoned. State and federal policy makers will use it to determine whether taxpayers are getting their money’s worth.
Holding up the 740-page study at a news conference here, Charlotte V. Kuh of the N.R.C. said, “This book is not casual, light reading, but a reference document, really, for at least the next decade.” Ms. Kuh is executive director of the N.R.C.'s Office of Scientific and Engineering Personnel.
The book says it provides “a fresh look at doctoral programs as they appeared in academic year 1992-1993.” It examines programs in five broad areas: arts and humanities; engineering; physical sciences and mathematics; social and behavioral sciences; and biological sciences. Doctoral programs in professional fields, such as business and medicine, are excluded.
The N.R.C. study offers “objective” statistics on 19 characteristics of the programs, such as the size of their faculties, the proportion of their professors with research support, and the percentage of their Ph.D.'s that were awarded to female and minority students.
When academics page through the study, most look first for the “reputational” ratings. These show how doctoral programs ranked in a national survey of faculty members. The survey, conducted in the spring of 1993, asked nearly 8,000 graduate-faculty members to rate doctoral programs on two key issues: the “scholarly quality” of their faculty and their effectiveness in educating research scholars and scientists.
Programs that ranked high in 1982 for the quality of their faculties tended to remain at the top in the new report. But there were some significant shifts. For instance, Duke University’s English program leaped from 28th in 1982 to fifth in the new study, tied with Stanford’s.
Most of the changes in the top 10 of each field were not so startling, however. In anthropology, for example, the Universities of Chicago and Michigan shared the top spot for the quality of their faculties, while Michigan ranked slightly above Chicago in terms of its effectiveness in educating future scholars. Both programs had ranked in the top five in the N.R.C.'s 1982 study.
More intriguing was the rise and fall of anthropology programs below the top 10. New York University’s program, which ranked 36th in 1982 for the quality of its faculty, moved up to 13th. Washington University’s now ranks 17th, up from 42nd. The news was far less rosy for Northwestern University, where the program fell to 34th, from 11th in 1982. And Brandeis University’s dropped to 59th, from 31st in 1982.
Authors of the N.R.C. report cautioned that the rankings of quality and effectiveness were only two of the measures on which doctoral programs were assessed. They urged people to avoid turning the report’s findings into a “tournament.”
“I’m quite concerned about the way this will be used,” said Marvin L. Goldberger, dean of natural sciences at the University of California at San Diego and a co-chairman of the 16-member committee of academics that put the study together.
“Certainly there will be an attempt on the part of institutions to paint themselves in a very favorable light,” he said. “They have constituencies to impress. They have to do pushups before their boards of trustees.”
Mr. Goldberger said he expected to see hundreds of campus press releases in the next few weeks -- including those from his own university system -- putting different spins on the data.
In fact, not quite three hours after the N.R.C. released its report last week, the first such press release was faxed to The Chronicle. It was an 11-page packet from Berkeley, announcing that the campus had “the largest number and the highest percentage of top-ranked doctoral programs of any university in the nation.” The statement noted that 36 of Berkeley’s 91 doctoral programs were included in the N.R.C.'s assessment, and 35 of them ranked in the top 10 of their fields. Chang-Lin Tien, Berkeley’s chancellor, said the rankings proved that “state and federal investments in this institution have definitely paid off.”
Mr. Goldberger said the N.R.C. committee had “studiously avoided” any attempt to rank institutions, because they vary so greatly; it sought only to rank individual programs. “A sheer numbers count of highly rated programs is not a good criterion to reflect the quality of an institution,” he said.
Some critics question the fairness of the rankings. They say academics may harbor grudges against particular programs and give lower ratings to small programs they are unfamiliar with.
The report acknowledged the limitations of the ratings. A program’s reputational standing, it said, doesn’t tell “whether it offers a nurturing environment for students, or if the job placement experiences of its graduates are satisfactory.”
Still, authors of the report defended the rankings as fair. They said the survey sample was large enough to minimize the biases of participants. “Whether reputations are fair or not is really hard to say,” said Brendan Maher, a professor of psychology at Harvard University and the committee’s other co-chairman. But he urged people to look at all of the measures in the report.
At a news conference last week, Mr. Maher was asked, “Is there some mechanism by which an aggrieved institution might appeal a ranking?”
He replied, “Only in the way that someone without a parachute would want to repeal the law of gravity.”
At a time of austere budgets in higher education, some wonder whether the rankings will be used to kill doctoral programs. Debra W. Stewart, dean of the graduate school at North Carolina State University and a member of the N.R.C. committee that conducted the study, said a program might have a low rating and still be needed. “It might be the only doctoral program in that state in that field. It might be providing community college professors.”
Nonetheless, as an administrator, Ms. Stewart said, she viewed the N.R.C. study as a “very helpful decision-making tool” that institutions could use to “evaluate investments they’ve made over the last 10 years.”
The report found faculty members largely confident in the quality of graduate training. In 11 per cent of the programs in the study, participants rated the quality of the faculties as “distinguished.” The survey found the faculties to be “strong” in 32 per cent of the programs, “good” in 19 per cent, and “adequate” in 19 per cent. In 16 per cent the faculty quality was deemed “marginal,” and in only 3 per cent was it called “not sufficient for doctoral education.”
Scholars in the social and behavioral sciences tended to be the toughest critics of their graduate programs. They rated faculty quality as “marginal” or “not sufficient” in 23 per cent of the programs. Economists were the “hardest graders” by far, judging the faculties as “marginal” or “not sufficient” in 44 per cent of the Ph.D. programs in the field. By comparison, those in the arts and humanities deemed the faculties “marginal” or “not sufficient” in only 13 per cent of the programs.
Of the 3,634 programs in the study, 1,916 had been assessed in the N.R.C.'s 1982 report. The programs are divided into four “quarters,” based on their quality ratings. Three-fourths of the programs that had been ranked in the top and bottom quarters in 1982 remained in those categories in the new study. More shifts took place among programs that had been ranked in the second and third quarters. For instance, 112 programs that in 1982 were in the third quarter moved up to the second quarter, while 103 programs moved in the other direction.
Aside from the ratings, the report presents a broad array of information on each doctoral program. Included are statistics on the percentage of Ph.D.'s who reported having research assistantships as their primary means of support, the percentage who had teaching assistantships, and the time it took students to earn their doctorates. In the arts and humanities, data are included on the number of awards and honors earned by faculty in each program between 1986 and 1992. In the sciences, statistics on the publishing records of a program’s faculty are included.
The $1.2-million study was financed by grants from the Ford, Andrew W. Mellon, Alfred P. Sloan, and William and Flora Hewlett Foundations, and the National Academy of Sciences.
Tables in the report are available on the N.R.C.'s World-Wide Web home page at http://www.nas.edu. The council is planning to release a CD-ROM, with more detailed data, by the end of the year.
Copies of Research-Doctorate Programs in the United States: Continuity and Change are available from the National Academy Press, 2101 Constitution Avenue, N.W., Washington, 20418; (202) 334-3313 or (800) 624-6242. The cost is $59.95, prepaid, plus shipping charges of $4 for the first copy and 50 cents for each additional copy.