Medical school deans swung hard at U.S. News & World Report’s annual medical school rankings in a panel discussion here on Thursday, painting the magazine’s methodology as a one-size-fits-all measure marred by inadequate metrics, low survey-response rates, conflicts of interest, and a “reputational” survey that may rest on perceptions decades out of date.
Yet prospective students and parents have relied more and more on the rankings in recent years, and if a college’s ranking drops, heads roll, the deans said during the panel discussion, held at the Mount Sinai School of Medicine.
The Duke University School of Medicine, ranked fifth by U.S. News & World Report, tracked where students who turned down Duke’s offers chose to study medicine instead, and the pattern followed the magazine’s rankings almost perfectly, said Nancy C. Andrews, dean of the medical school.
Brian Kelly, editor of U.S. News & World Report, defended the rankings as a journalistic, public-interest enterprise that seeks to provide consumers with hard data about their best choices. The publication’s statistical methodology has evolved over the years, Mr. Kelly said, and the rankings are just one tool students can use to find a suitable college.
The magazine has published rankings of colleges since 1983, as well as rankings of various professional schools like law and medicine. Over the years, he said, it has closed several gaps in its methodology, despite limitations caused by a lack of comparable data across different types of institutions. “We have to have the same data set for Harvard and Stanford and the University of Texas at El Paso,” Mr. Kelly said.
But for the deans, the improvements aren’t enough.
“I would feel really bad for a student who made his or her decision about where to go to medical school based only on your rankings, although they can take them into account,” Jules L. Dienstag, dean for medical education at Harvard University, told Mr. Kelly. “These rankings are very, very nongranular, and they don’t try to help the student find the best place for him or her. The rankings are useless for that.”
The panel, sponsored by U.S. News & World Report, discussed for hours the merits and drawbacks of the medical-school rankings. The event marked the first time the publication had engaged with college deans about the rankings in a public setting. U.S. News & World Report had held a similar, private discussion with law-school deans after accusations emerged that law schools had been gaming the rankings for years.
Static Impressions
During Thursday’s panel, Robert J. Alpern, dean of the Yale School of Medicine, said one problem with the rankings is that they don’t change much from year to year. Dr. Alpern criticized in particular one component of the rankings, a reputational survey in which officials rate the quality of peer institutions. He said he doesn’t have a clue about the efficacy of medical-education programs on other campuses, leaving him to rate those programs on perceptions that may be 20 or 30 years old.
“It’s frustrating when, as a dean, you make improvements in your school that make it a better experience for the students, and it has no effect on your reputation score. That ends up being the largest percentage on your ranking,” Dr. Alpern said.
But U.S. News & World Report has always operated on the assumption that the aggregate results from hundreds of surveys will present a clear picture of an institution’s standing among its peers, said Robert J. Morse, the magazine’s director of data research. “We’re not expecting the respondents to have an accreditation-level knowledge of the institutions,” he said. Mr. Morse, who has developed the methodology of the annual report over the years, acknowledged the rankings would benefit from more data, such as student surveys.
“It’s not a surprise that people don’t like being ranked,” Mr. Morse said. “They don’t think it’s reflective of the complexity of the institution.”
The survey also asks deans to rank their own schools, which presents a conflict of interest, said Robert N. Golden, dean of the University of Wisconsin’s School of Medicine and Public Health. When the National Institutes of Health reviews grants, he said, “if a grant comes up from your institution, you have to leave the room.”
He added that he had just received a survey questionnaire from U.S. News & World Report “as I was getting on the airplane today—how do you think I ranked my school?”
Dr. Golden also said only a very small group of prospective students have trouble choosing among elite private institutions such as Harvard and Columbia Universities. The rankings do the greatest disservice to students deciding between strong public and private institutions that may not be ranked highly, he said. He suggested replacing the single ranked list, based on a collection of factors, with categorical ratings.
Allen M. Spiegel, dean of the Albert Einstein College of Medicine at Yeshiva University, said he’s concerned about the magazine’s survey-response rates from primary-care colleges and residency directors, which are 17 percent and 19 percent, respectively.
Questioned Measures
Joseph P. Grande, associate dean of academic affairs at Mayo Medical School, said the rankings take a one-size-fits-all approach. Different medical schools are better suited to serving different needs, he said, and the rankings don’t address the diverse goals of the students who read them.
“I think deans across the country wish the rankings would just go away,” said Lee Goldman, dean of the faculties of Health Sciences and Medicine at Columbia University. Dr. Goldman said the faculty-student ratio, one of the measures used to rank medical schools, is meaningless. “If you look at medical schools, most of us probably have more than one faculty per student,” he said. “There’s no evidence that three faculty per student is better than two faculty per student.”
The deans also criticized the weight the rankings give to “input” metrics like average scores on the Medical College Admission Test, or the number of NIH grants a school lands. “What you can measure most easily are input measures,” said R. Michael Tanner, vice president of academic affairs and chief academic officer of the Association of Public and Land-Grant Universities. “But that doesn’t tell you that you took a student who needed an opportunity to be moved along, and actually moved him or her along.”
Most of the deans agreed there are many metrics more useful than MCAT scores and NIH grants. Admitting more racially and ethnically diverse classes, for example, has become a critical part of producing physicians, because patients tend to respond better to care from doctors with whom they have something in common. Achieving diversity, which helps supply physicians to underserved areas, will continue to be an important mission for colleges, so diversity should be part of a ranking, too, some of the deans said.
Grade-point averages were another sticking point in the discussion. Ranking schools based on students’ GPAs doesn’t take into account the different circumstances of students and colleges, the deans said.
In the end, the panelists agreed that the discussion should be the first chapter in a more open dialogue about the rankings. The deans said they were impressed that the publication had the courage to discuss the rankings publicly.
Mr. Kelly said the publication tells students all the time, “Don’t pay too much attention to these rankings—we view it as one tool. We’re able to sleep reasonably well at night because we think people are using this responsibly.”