Many college officials are asking hard questions about the methodology and effect of the ‘U.S. News’ rankings. One complaint: The survey overwhelmingly favors private institutions.
Baylor University’s vision for the future is ambitious. In the institution’s 10-year strategic plan, Baylor 2012, officials outline elaborate goals to raise student retention and graduation rates, attract top-notch faculty, and improve facilities on the campus. Baylor is clear about how it will calibrate its success: On the third page of the 42-page document, Baylor states that its overarching goal is to enter the top tier of institutions, as determined by U.S. News & World Report’s college rankings.
Baylor is now ranked 81st among national universities, a marked improvement from 1991, when it was in the unranked third tier of institutions, but still a far cry from the top-50 status it hopes to reach by 2012. In addition to mapping out a strategic plan with many factors that closely mimic those measured by U.S. News, the university has hired a strategic-planning director to make sure every department remains on track to achieve that goal, and has already spent $200-million on related improvements.
Like college officials elsewhere, those at Baylor have figured out how to improve their status in the rankings, and have spent plenty of time and money to meet that goal. In some ways, U.S. News has become the tail that wags the dog. The magazine’s annual college guide does not merely compile data on what colleges are doing. It has changed the way many college officials determine their institutional priorities.
A Chronicle analysis of U.S. News data from the past 24 years reveals that the rankings game does not provide a level playing field for all contestants. The magazine’s criteria seem to overwhelmingly favor private institutions. While five of the top-25 national universities in 1989 were public, only three made the cut on the most recent list. Conversely, every college that has managed to significantly improve its rank during that time is private. (The rankings of liberal-arts colleges, which have been evaluated in a separate category since 1983, have remained largely static.)
U.S. News’s editor, Brian Kelly, defends the rankings, saying the statistics that are compiled are accepted measures of success. Ultimately, he says, the rankings are “a journalistic device.” There has been much discussion of how much weight should be given to each measure, but the methodology “is what it is because we say it is,” he says. “It’s our best judgment of what is important.”
That judgment, though, is coming under unprecedented attack. This spring 24 presidents of liberal-arts colleges, including those of Drew University and Lafayette College, signed a letter excoriating the magazine for providing misleading data that “degrade the educational worth ... of the college search process.”
In the letter, the presidents also said that they would no longer fill out the magazine’s reputation survey, a central part of the rankings that asks academic leaders to evaluate hundreds of colleges. They also pledged to refrain from endorsing the rankings in any of their college publications or online materials. This month the man behind the letter, Lloyd Thacker, founder of the Education Conservancy, a nonprofit advocacy group that opposes commercial influences on higher education, sent copies to more than 600 other college presidents.
Another potential threat to the U.S. News rankings comes from the federal government. Last fall the secretary of education’s Commission on the Future of Higher Education recommended the creation of a database that would provide much of the same data U.S. News uses, but would let users come up with personal rankings based on their own criteria. The commission’s chairman, Charles Miller, said the database could take the rankings out of the “hand of one publisher.”
But for now, the rankings remain as influential as ever. Colleges, and not just consumers, have helped perpetuate the prestige of the annual list. Many higher-education officials say U.S. News provides a crucial tool that allows applicants to sort through a barrage of information — something that higher education has failed to deliver to consumers.
“We hear things like ‘We here at XYZ are a commuter school, you can’t compare us to Yale,’” says Lee E. Mortimer, academic director of institutional research at the University of Cincinnati. “But guess what? They’re doing it. And so as much as some people may think it’s crap, a whole lot of others don’t.”
Gamesmanship?
For something that many academics say has no intellectual value, the U.S. News rankings have been among the most studied and analyzed sets of data in higher education. Reams of reports have been written about the rankings, how the methodology works, and how it can be manipulated.
Colleges are reluctant to admit that they “game” the figures, but most of the methods are so well known that many officials assume that most of their competitors engage in them.
There are well-documented examples of institutions that have solicited nominal donations from alumni to boost their percentage of giving, encouraged applications from students they had no intention of accepting, or creatively interpreted how they should report the required data to U.S. News. Albion College took one-time gifts from graduating seniors and spread them over five years in order to boost its alumni-giving rate, The Wall Street Journal reported in March. Still, no matter how much any one college does on the margins, there is little evidence that such efforts affect its standing.
“I think there’s some gaming, but I don’t think it’s effective,” says Thomas Hayes, vice president at SimpsonScarborough, a higher-education-marketing firm. “You have to make a very strategic effort over time to move up in the rankings.”
In other words, you have to act like Baylor. One of the first steps the university took, after appointing Van Gray, associate vice president for strategic planning and improvement, to oversee the efforts of all departments, was to tie money for new programs to the standards set forth in its strategic plan. Any official who wanted money beyond his or her budget for a new project had to fill out a form stating how that project would further the goals of Baylor 2012.
At Baylor, as at many other institutions, the admissions office plays a crucial role in improving the rankings because 15 percent of U.S. News’s formula is determined by measures of student selectivity, including scores on standardized entrance exams and the institution’s acceptance rate. To improve those numbers, Baylor increased its total scholarship offerings from $38-million in 2001 to $86-million in 2005 and created an honors college. Since 2002 applications have increased (from 7,431 to 26,421) and the acceptance rate has dropped from 81 percent to 42 percent. Over the last five years, the average SAT score of enrolling first-year students has risen 30 points, to 1219.
“We looked very deliberately at what kind of class we wanted because that’s an issue that’s somewhat controllable,” says Mr. Gray. “I believe we have attracted much higher-performing students as the direct result of this 10-year plan.”
While Baylor says the changes it is making are within the overall mission of the institution, colleges that are ranked lower and want to rise may need to change their very nature.
Take, for example, Chapman University.
Chapman, in the heart of Orange County, Calif., has long been known as a college that gave a second chance to underachieving high-school students who showed promise. When James L. Doti became president, in 1991, he says, Chapman essentially had no admissions criteria, other than the best judgment of the staff.
Students were “using Chapman like a community college,” he says. Only 42 percent of students graduated within five years. The university had one endowed chair. There was almost no money for merit-based financial aid.
So Mr. Doti dropped the athletics program from Division II to Division III, thereby eliminating all athletics scholarships.
“We took that $2-million a year in athletic aid and added it to the financial-aid budget,” he says. The institution increased its tuition one year by 25 percent, so parents and students would perceive that the college had as good a program as “the colleges we wanted to compete with.”
Mr. Doti decided to set a minimum SAT score required for admission. “It was 740, which is nothing great, but for Chapman, at least it was something,” he says. “The next year, it was 760. That lops off a lot of people at the bottom. Every year we went up another 10 or 20 points.” The university began a scholars program with grants for high-achieving students.
Almost all the changes were designed expressly to help the college rise in the U.S. News rankings. “I can quibble with the methodology, but what else is out there?” says Mr. Doti. “We probably use it more than anything else to give us objective data to see if we are making progress on our strategic goals.”
In less than 20 years, Chapman has come to top the “selectivity rank” among master’s-level institutions in the West, according to U.S. News. The minimum SAT score is now 1050. It has 45 endowed chairs. The endowment has grown from $20-million to $250-million. When U.S. News expanded the universe of colleges it ranks in 1993 by adding regional institutions, Chapman was in the second quartile of all such institutions in the West, and its academic reputation was ranked 90th among its 112 peers. It now ranks 11th over all among master’s-level institutions in the West, and its academic reputation is tied for 14th highest in that group.
Crunching the Numbers
The methodology has always been the key to the U.S. News rankings. It is built on 16 separate criteria, and the magazine says it results in a comprehensive evaluation of the overall quality of a college.
But higher-education researchers have often pointed out that U.S. News is inordinately focused on “input measures,” such as student selectivity, faculty-student ratio, and average retention of freshmen, and financial measures, like financial resources per student, alumni-giving rate, and faculty salaries. At the same time, it does not emphasize “outcome” measures, such as whether a student comes out prepared to enter the work force.
The U.S. News rankings started out in 1983 as purely a survey of college presidents, asking them to identify the best universities and liberal-arts colleges in the country. Editors at the magazine first incorporated the quantitative statistical categories in 1989.
For the next decade, the editors would play with the relative weights given to each category. The value of the student-retention measure went from 15 percent to 25 percent in 1996, then back down to 20 percent the next year. In 1997 the magazine put in the “value added” category, which calculates an “expected” graduation rate, then gives an institution extra points if it exceeds that expectation.
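For readers who want to see how a weighted formula of this kind works in practice, the sketch below (in Python) combines a handful of normalized category scores into a single composite, including a crude “value added” term that compares an actual graduation rate with a predicted one. The category names, weights, indicator values, and the prediction model are illustrative assumptions drawn loosely from figures cited in this article, not U.S. News’s actual methodology.

```python
# Illustrative sketch of a weighted composite score, loosely modeled on the
# categories this article describes. The weights, the 0-to-1 indicator values,
# and the "expected graduation rate" model are hypothetical, not the actual
# U.S. News formula.

# Hypothetical category weights (they sum to 1.0).
WEIGHTS = {
    "peer_assessment": 0.25,        # reputation survey (article: 25 percent)
    "retention": 0.20,              # freshman retention/graduation (article: about 20 percent)
    "selectivity": 0.15,            # test scores, acceptance rate (article: 15 percent)
    "faculty_resources": 0.20,      # class size, salaries (hypothetical weight)
    "financial_resources": 0.10,    # spending per student (hypothetical weight)
    "alumni_giving": 0.05,          # share of alumni who donate (hypothetical weight)
    "graduation_value_added": 0.05, # actual minus "expected" graduation rate
}


def expected_grad_rate(avg_sat: float, spending_per_student: float) -> float:
    """Toy linear model predicting a graduation rate; coefficients are invented."""
    return min(1.0, 0.02 + 0.0006 * avg_sat + 0.000002 * spending_per_student)


def composite_score(indicators: dict[str, float]) -> float:
    """Combine indicators (each already scaled to 0-1) into one weighted score."""
    return sum(WEIGHTS[name] * value for name, value in indicators.items())


# Example: one hypothetical college.
college = {
    "peer_assessment": 0.62,
    "retention": 0.80,
    "selectivity": 0.55,
    "faculty_resources": 0.70,
    "financial_resources": 0.48,
    "alumni_giving": 0.30,
}

# "Value added": reward a college that graduates more students than predicted.
actual_grad_rate = 0.74
predicted = expected_grad_rate(avg_sat=1219, spending_per_student=28_000)
college["graduation_value_added"] = max(0.0, min(1.0, 0.5 + (actual_grad_rate - predicted)))

print(f"Composite score: {composite_score(college):.3f}")
```

Because each category is first scaled to a common range, changing a single weight can reorder institutions even when the underlying data do not change, which helps explain why the magazine’s periodic adjustments draw so much attention.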
All in all, the values are just a guessing game. In 1997 U.S. News hired a consultant, the National Opinion Research Center, to evaluate its methodology. “The principal weakness of the current approach is that the weights used to combine the various measures into an overall rating lack any defensible empirical or theoretical basis,” the analysis concluded.
Mr. Kelly, U.S. News’s editor, says the “Sturm und Drang” caused annually by the rankings “always surprises me. This happens all the time in industry. I get surveys all the time, asking me the reputation of this, or the usefulness of that.
“We didn’t ask for this job. We didn’t ask to be the arbiter of higher education. The job has fallen to us.”
The magazine is careful to say every year that one year’s figures should not be compared with previous years’ because editors “change the methodology every year.” Of course, every college does make those comparisons, a point Mr. Kelly acknowledges.
“We still see this as a journalistic product,” he says. “Getting the information right is our job. How it is used is not our concern.”
Adjusting the Formula
A number of other changes in the survey may be on the table soon.
Mr. Kelly says U.S. News is always listening to ideas from college presidents and an advisory committee of college officials established by the magazine about how the methodology can be improved. Among other concerns, he is worried about how to measure SAT scores, when about 735 colleges and universities — with more probably to come — have made the test an optional part of the application process.
In 2004, Sarah Lawrence College became one of the first colleges to tell its students that scores on the standardized test would not be considered at all in the application process. The president, Michele Tolela Myers, in a commentary in The Washington Post, accused U.S. News of threatening to “make up” a number for Sarah Lawrence in the SAT category. Mr. Kelly says the magazine is talking with Sarah Lawrence about how to measure the category in a way that would be satisfactory to both parties.
In addition, Mr. Kelly says he is concerned that fewer high schools are releasing class ranks, another factor in the methodology. While that criterion may have to be eliminated eventually, more than 90 percent of institutions that responded for the 2007 rankings had class ranks for at least half of their admitted students.
But critics say there are other glaring problems with the methodology. The peer-reputation score, which is determined by the survey completed by presidents, provosts, and admissions deans, still represents 25 percent of the total score, the highest value awarded to any category.
While there has been much discussion about the letter in which 24 college presidents pledged to boycott the reputation survey, the response rate to that survey has been falling for a number of years. In 2003, 67 percent of those who received the survey filled it out. For 2006, only 57 percent of college officials did so. (For 2007, the rate rose slightly, to 58 percent.)
Those who fill out the survey, on average, give a score to 56 percent of the institutions on the list, says Robert J. Morse, director of data research for U.S. News, and the father of the university-rankings effort. The lowest response rate is from those at regional comprehensive, master’s, and bachelor’s institutions. Because college officials rate only institutions in their category, and only the colleges they are most familiar with, the lesser-known institutions are being rated by a much smaller peer group.
“There aren’t as many people” rating them, says Mr. Morse. He declined to say how many college officials might be rating, say, a fourth-tier, master’s-level institution in the Midwest. “We still have a statistically significant response,” he says. “We wouldn’t publish a peer score if it was only based on just a few respondents.”
Mr. Kelly believes the peer ranking is so important that if college leaders refuse to provide it, the magazine will find others who will. “We could survey department chairmen, conceivably high-school guidance counselors, and other people we think have the required knowledge,” he says.
Another blind spot in the methodology is the measure of faculty salaries. U.S. News instituted a cost-of-living adjustment to that factor beginning in 1996, but has never updated it, even though the cost of living has risen faster in some areas than in others.
“We don’t have any plans to update it,” says Mr. Morse. “It is serving the analytical premise of taking the cost-of-living differences into account. It may not be up to the minute in terms of 2007, but it is adjusting for the relative cost of living.”
He says editors have studied how to update the calculations without causing “a lot of volatility in the rankings.” After all, the methodology may be flawed, but now that it exists, U.S. News is stuck with it. If it swings wildly from year to year, there will be even more questions.
“If you look at the rankings from about ’89 to now, there is a fair amount of consistency,” says Mr. Kelly, the editor. “We are trying to establish benchmarks that give people a reason to rely on the integrity of the survey. It wouldn’t make too much sense if it changed too much every year.”
Despite all the criticism of U.S. News, its rankings have had some positive influence. They were instrumental in spurring colleges to create the Common Data Set, a uniform set of data definitions that institutions now use when reporting information to publishers of college guides.
Some higher-education officials say the magazine’s rankings provide colleges with a reliable benchmark of their performance.
“It shows that we’re moving in the right direction,” says Mr. Mortimer, of the University of Cincinnati, which reached 139th place in the most recent rankings, a sharp ascent from 172nd five years ago. “There’s satisfaction in knowing our retention rates are increasing, students are engaged, and the application pool is stronger. That’s all good stuff, whether you agree with how they rank it or not.”
Winners and Losers
A funny thing happens when a college improves its standing in U.S. News — everyone at that institution takes credit for it and pats themselves on the back. Yet when a college’s ranking falls, the rankings’ methodology is to blame.
But who really cares what the rankings say?
Trustees do, college officials say. Because many trustees have backgrounds in corporate America, they are used to external benchmarks and public-reporting requirements. Comparing colleges with each other makes sense to them, just as companies are compared by numbers such as their stock prices and profit margins.
Sometimes the rewards are overt: The Arizona Board of Regents approved a contract this year that will give Michael M. Crow, president of Arizona State University, a $10,000 bonus if the institution’s U.S. News rank rises. Chronicle surveys of college presidents and trustees within the last two years show that trustees were twice as likely as presidents to say that improvements in their institution’s rankings were “extremely important” in defining the success of the college’s chief executive.
The University of Pennsylvania, one of the fastest rising research universities in the U.S. News rankings, is one institution that seems to have cracked the code. In 1994, Penn ranked 50th in the faculty-resources category. By 2002 it ranked first in that category, a position it has held ever since. Partly as a result, Penn’s overall rank rose from 16th in 1994 to as high as fourth, most recently in 2006. This year it ranks seventh.
The university’s strategy “includes the recruitment and retention of an ever-more-eminent faculty, reduced class size, increased financial aid, greater opportunities for undergraduates to engage in research, and optimizing opportunities for interdisciplinary work,” says a spokesman, Ron Ozio. “We will continue to pursue that strategy regardless of where we stand in any external rankings.”
A former Penn official said the institution was constantly evaluating its programs and how they were doing in the rankings.
“The effect of different policies on the rankings was constantly being taken into account,” says Colin S. Diver, dean of the law school at Penn from 1989 to 1999, and now president of Reed College. “My sense was, trustees who really cared about the rankings were particularly influential.”
Reed began refusing to participate in the rankings in 1995. Mr. Diver, who has been president there since 2002, said, “I feel fortunate that I live in the part of the educational world that can thumb its nose at U.S. News.”
Penn was helped by having a lot of cash to pay for the improvements that drove up its rank. Money, however, does not always help. Just ask the University of Connecticut. The university hit the jackpot in 1995 when the state made a strategic $1-billion investment in it. Seven years later, UConn received an even bigger windfall: Connecticut’s General Assembly approved an additional $1.3-billion for the university.
Both of the cash infusions were supposed to make the university more competitive in the crowded New England higher-education market, according to M. Dolan Evanovich, Connecticut’s vice provost for enrollment management. Mr. Evanovich says new apartments and suites for upperclassmen, combined with efforts to enhance students’ learning experiences in the classroom, have paid off.
From 1996 to 2006, average SAT scores of entering freshmen rose to 1195 from 1113. In 2007 the university received 21,080 applications for 3,200 spots, up from 13,800 five years earlier. Since 1995 the number of out-of-state students enrolling has doubled to 30 percent, and both graduation and retention rates have consistently improved.
Over the last four years, however, UConn has fallen in the U.S. News rankings, to 67th from 64th place. One possible reason: Its “peer assessment” ranking has remained virtually unchanged during that time, inching up only one-tenth of a point.
“Moving that figure is so difficult,” says Mr. Evanovich. “But I think our national reputation is improving dramatically and that positive momentum will eventually show up in our reputation ranking.”
Is There Another Way?
There is no shortage of alternative systems for ranking colleges, but none has been able to displace the U.S. News rankings.
Many higher-education officials say the National Survey of Student Engagement, or Nessie, run by the Indiana University Center for Postsecondary Research, gives them much more useful information about the impact they are making on their students’ lives and intellectual growth. The U.S. News criteria, critics say, often put too much emphasis on the type of student a college can enroll instead of the progress students make after arriving on campus.
But the likelihood of the Nessie results ever reaching consumers is slim. Part of the reason is that they are confidential, and Nessie’s founder has repeatedly said he has no intention of making the results publicly available, though colleges are free to publicize their own scores.
In addition, most of the institutions ranked at the top by U.S. News decline to participate in the Nessie process. Researchers have suggested that after investing heavily to make themselves look more attractive in the U.S. News rankings, they don’t want to risk being outshone by lesser-known competitors.
A number of other college guides exist, including those produced by Fiske and the Princeton Review, but none seek to attach specific numerical ranks to colleges. “The ranking is pretty silly, but it speaks to a powerful American inclination,” says John V. Lombardi, chancellor of the University of Massachusetts at Amherst. “People like any rankings that are related to prestige.” He has started a center that compares the amount of research money attracted by 82 large universities in the United States, but he does not use the data to rank the institutions.
The U.S. News rankings clearly are not going to go away, but can they become more evenhanded in measuring university performance?
Jeffrey S. Lehman, former president of Cornell University, has thought a lot about the rankings and suggested changes over a group dinner where he happened to be seated next to Mortimer B. Zuckerman, the owner of U.S. News. As they are now constituted, he believes the rankings hurt a college because they force it to act in one extreme way or another, and rarely in the best interest of everyone there.
The rankings, for example, weigh average class size, but not average hours of student contact with tenured professors, he says. They measure spending on student aid, but not a university’s tuition levels. They take into account the percentage of alumni who make gifts, but not the average size of their gifts.
“I wish that U.S. News would revise its methodology to focus on trade-offs,” Mr. Lehman says. “The rankings could help to promote educational quality, rather than distorting institutional behavior, and could provide prospective students with a much more realistic view of the different choices that different universities are making.”
Jean Evangelauf, Eugene McCormack, and Scott Smallwood contributed to this article.
HOW THE RANKINGS HAVE CHANGED

1983: U.S. News publishes its first rankings of undergraduate programs. They are based solely on a survey of college presidents. The magazine ranks the top 14 research universities and top 12 liberal-arts colleges in the nation.

1987: The rankings begin annual publication. They are expanded to the top 25 research universities and liberal-arts colleges, and the magazine begins a separate ranking of institutions by region.

1991: The rankings for the first time include comparative data, such as graduation rates and acceptance rates.

1994: U.S. News begins publishing an annual guidebook with rankings of graduate programs.

1996: Numerical rankings are expanded to the top 50 universities and top 40 liberal-arts colleges.

1997: A consultant hired by the magazine says there is no scientific basis for the weights attached to the different categories, such as making “peer assessment” 25 percent of the overall score.

2000: The California Institute of Technology moves from ninth to first in the rankings of universities. After an outcry, the magazine changes its methodology. The following year, Caltech slips to fourth place.

2001: Rankings of liberal-arts colleges are expanded to the top 50.

2004: U.S. News drops “yield” (the percentage of admitted students who enroll) as a factor after critics say it was the reason many colleges were starting early-decision programs. Rankings are expanded to the top 125 universities and top 110 liberal-arts colleges.
THE FALL OF PUBLIC UNIVERSITIES

Public universities struggle to improve their standing in the U.S. News & World Report rankings. In 1991, 26 research universities didn’t crack the top 25 but were in the lower half of “quartile one,” which at the time extended to 51st place. By 2007 five of those 26, all public, had fallen so far that none was ranked higher than 60th:

Rutgers U. at New Brunswick (public): 60
U. of Georgia (public): 60
Purdue U. (public): 64
U. of Connecticut (public): 67
Virginia Tech (public): 77

Meanwhile, three of the 26, all private, have reached the top 20:

Emory U. (private): 18
Vanderbilt U. (private): 18
U. of Notre Dame (private): 20
http://chronicle.com Section: Special Report Volume 53, Issue 38, Page A11