This month high-school seniors have been frantically submitting their college applications ahead of the January deadlines. Students aspire to acceptance at a reputable college, but how do they determine which one is best for them? Many turn for guidance to U.S. News & World Report and other resources that rank institutions.
Unfortunately, those “one size fits all” rankings, which carry weight with students and institutions alike, are often poorly designed and untrustworthy.
In November, George Washington University disclosed that it had been inflating class-rank data for the past decade, which resulted in its own inflated ranking in U.S. News. It was the third institution last year to admit to providing inaccurate and inflated data. The other two, Claremont McKenna College and my own employer, Emory University, reported inflated SAT scores. And there are most likely many more instances of data falsification.
I’m not absolving anyone of blame, but there is an inherent conflict of interest in asking those who are most invested in the rankings to self-report data.
Furthermore, the formula used in the rankings is poor. U.S. News calculates “student selectivity”—how picky the college is—based in large part on how many students were in the top 10 percent of their high-school classes. However, the National Association for College Admission Counseling reported that most small private and competitive high schools no longer report class rank, and some public high schools have also stopped reporting it to their students and to colleges. Yet U.S. News still includes it as a category.
Not only are the rankings themselves suspect; U.S. News’s criteria also give colleges a disincentive to evolve. For example, they discourage colleges from selecting a diverse student body. An institution that begins accepting more African-American students or students from low-income families—two groups that have among the lowest SAT scores, according to the College Board—might see its ranking drop because the average SAT score of its freshmen has gone down.
The rankings also discourage colleges from keeping pace with the digital revolution and doing things more efficiently. For example, in its law-school rankings, U.S. News rewards higher numbers of library volumes and titles, even though the move toward digital formats should make that measure obsolete. Meanwhile, dollars spent per student are rewarded as well, so if colleges perform more cost-effectively, perhaps by using newer technologies like online learning, they are penalized.
Other ranking systems aren’t any better. Forbes, which also rates colleges annually on value and quality of teaching, includes in its scoring system student evaluations from Rate My Professors (notorious for its “hotness” category). Those evaluations are anonymous and unverified, so a student unhappy with a grade, or simply with the professor, can write whatever she likes.
In some systems, colleges can pay to be included. The QS (Quacquarelli Symonds) World University Rankings now has a “star system.” The QS star system is able to use publicly available data for some institutions, like Harvard. But beginning in 2011, the vast majority of other colleges included in the QS star system paid $30,400 for an initial audit and a three-year license for participation. A New York Times article last month highlighted how many of those paying colleges received high star marks in the QS ratings, yet aren’t rated highly in other ratings systems.
Defenders will say these rankings provide a place for prospective students to compare data from various institutions, and may prompt them to consider ones they were not aware of. Although the rankings do highlight information on institutions, including class size and graduation rates, they miss important measures such as student learning and the university experience. A recent survey conducted by Gallup for Inside Higher Ed reported that only 14 percent of admissions directors believed that these rankings helped students find a college that is a good fit.
Students might be better off turning to reports like the National Survey of Student Engagement, which annually collects information from more than 500 institutions about student participation in programs and activities geared toward learning and personal development. At Emory, for instance, we started a program called Living-Learning Communities, which gives upperclassmen incentives to live on campus and participate in residential learning. But you would never learn about that from the ranking formulas.
Competition and colorful magazines are alluring, but we should expect the scores to be meaningful and accurate. Emory, for its part, has created a data-advisory committee to ensure a consistent and accurate method of reporting all institutional data. Other colleges should put in place similar checks of internal data validity or commission external audits.
Meanwhile, ranking organizations should develop more-meaningful measures around diversity of students, job placement, acceptance into professional schools, faculty membership in national academies, and student engagement. Instead of being assigned a numerical rank, institutions should be grouped by tiers and categories of programs. The last thing students want is to be seen as a number. Colleges shouldn’t want that, either.