I followed with sadness the developments that resulted in the end of Barat College, beginning with its merger in 2001 with DePaul University and followed three years later by DePaul’s announcement that it would close the college. While the subsequent sale of Barat’s educational assets to a for-profit group was an unusual course of action, the story of the small, tuition-driven institution’s closing is familiar. We have read similar accounts about other colleges, including Ambassador University, Bradford College, Marymount College (in Tarrytown, N.Y.), and Trinity College of Vermont. All were small and in tight financial positions, and all ended up closing their doors.
Those institutions shared another distinction: All had been ranked in the top half — second tier or above — of their respective categories of regional colleges by U.S. News & World Report within a year of announcing that they would cease operations. Ambassador boasted in 1996 that it was “rated by U.S. News as one of the top liberal-arts colleges in the West,” before issuing notice in January 1997 that it would close at the end of the year. Out of more than 60 institutions, Bradford was ranked 22nd in its regional liberal-arts category in 2000, shortly before its demise. Trinity was ranked 21st that same year, and stopped operating in 2001. Barat was ranked in the second tier of its regional category when it merged, and Marymount was ranked 19th in 2005, the year of its closure.
The U.S. News rankings are ostensibly intended to help students and their parents select an institution to attend; the editors state “the rankings can be a powerful tool in your quest for college.” But how valuable is that assistance if it gives a relatively high ranking to a college that closes before the end of a student’s freshman year? The closures, following relatively solid rankings, cast doubt on the validity of the rankings for all small, tuition-driven institutions.
Two questions naturally emerge: What current U.S. News indicators are, in fact, predictors of financial instability rather than of academic quality? What indicators are missing from the ranking system that might prevent that kind of error?
There are a number of assumptions — largely untested — about academic quality inherent in the rankings. The first is built into the category called “student selectivity.” Not only is the argument that selectivity is associated with the quality of teaching dubious, but at least one of its subfactors, “acceptance rate,” encourages what can be considered inefficient or even irresponsible enrollment practices.
U.S. News defines acceptance rate as “the ratio of the number of students admitted to the number of applicants.” While many educators may believe that a high number of rejected applicants is a sign of selectivity and therefore good, for those of us responsible for the financial stewardship of a tuition-driven institution, it is neither an efficient use of resources nor an indication that the college delivers knowledge particularly well. It is hard to imagine another industry that would regard as a positive practice having the sales force woo prospective customers and process their paperwork, only to find such prospects unable to buy the company’s product.
At my institution, where efficiency is king, admissions personnel are armed with models of statistical prediction and focus their efforts on recruiting students who will matriculate. We spend more time communicating with “hot” prospective students — bringing them to the campus, following up, and helping them complete the application process — than we do with less-likely prospects because we know that our efforts have a better chance of paying off. That drives our acceptance rate up, which harms us in the rankings, where we consistently appear in the third tier of our category. But awarding inflated rankings to colleges that do otherwise is inappropriate and potentially irresponsible.
Weighted at one-fifth of the overall institutional score, the “faculty resources” category is also fraught with peril, particularly for the struggling college. “Faculty compensation,” the largest subfactor in the category, seems, at first blush, to make sense as an indicator of quality if we accept the logic that higher pay equals higher quality (logic that many religiously affiliated or small, mission-driven institutions do not embrace). But, for several reasons, that argument can be especially flawed at financially troubled colleges.
First, imagine an institution that has imposed a hiring freeze or let recent faculty hires go in response to a downturn in enrollment. It may have relatively fewer junior faculty members and, as a result, show higher average faculty salaries. The college that employs such a strategy may be hurt in the rankings on the “percent of full-time faculty” score. But that variable accounts for only 1 percent of a college’s overall ranking and is a small price to pay relative to the 7 percent accounted for by “faculty compensation.”
Another problem with the faculty-compensation score is that, because it rewards colleges for overpaying for benefits such as health insurance, it is a further example of U.S. News’s proclivity to interpret financial mismanagement as a sign of academic quality.
The second most heavily weighted subfactor in the “faculty resources” category, the percentage of undergraduate classes with fewer than 20 students, also can make a college appear to be on firm footing yet mask financial problems. U.S. News would probably argue that this measure reflects a college’s commitment to spending money to maintain relatively small classes. But it could also mean that the institution’s enrollment figures are down, in which case that alleged measure of quality is actually an indicator of financial weakness.
Consider two small, fictitious private institutions, Alpha University and Beta College. Each has 100 full-time faculty members, with a teaching load of 12 credits per semester at $500 per credit. Alpha has an average class size of 19 students; Beta, 20 students. Beta will gain gross revenues of $600,000 more each semester than Alpha. That money — for library resources, educational supplies, student-support services, faculty development, and other efforts aimed at enhancing quality — is not available to benefit the students at Alpha. Yet U.S. News would argue that, on the basis of the magazine’s ranking criteria, the educational experience at Alpha is appreciably superior to that at Beta.
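For readers who want to check that figure, the arithmetic can be spelled out in a few lines; the following Python sketch simply restates the hypothetical numbers from the Alpha/Beta example above.

```python
# Back-of-the-envelope arithmetic behind the Alpha/Beta comparison.
# All figures are the hypothetical ones given in the example above.

faculty_count = 100          # full-time faculty members at each institution
credits_per_semester = 12    # teaching load per faculty member, per semester
tuition_per_credit = 500     # dollars of tuition per credit hour

alpha_class_size = 19
beta_class_size = 20

# Beta enrolls one more tuition-paying student in every class section taught.
extra_students_per_class = beta_class_size - alpha_class_size
revenue_gap = (faculty_count * credits_per_semester
               * extra_students_per_class * tuition_per_credit)

print(f"Beta's additional gross revenue per semester: ${revenue_gap:,}")
# -> Beta's additional gross revenue per semester: $600,000
```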
Does it follow, then, that Gamma University, which had a significant and unanticipated shortfall of first-year students, resulting in 90 percent of its classes’ having fewer than 20 students enrolled, is poised to provide the best experience of all? What any responsible college president or provost would recognize as a sign of serious problems — average class size plummeting — is interpreted as an indicator of high quality by U.S. News.
To be fair, U.S. News does have a “financial resources” category, which takes into account average spending per student over the previous two years on instruction, research, support, and related areas. But that category asks us once again to assume that spending more results in superior performance. The bigger assumption is that the institution is financially stable and is operating with a balanced or surplus budget.
Consider the alternative, though, using our hypothetical Gamma University. If its administrators fail to adjust their expenditures in response to the enrollment downturn — perhaps dipping into the endowment instead of cutting expenses — spending per student will increase in proportion to the decline in the number of students. U.S. News researchers, who have no other measures of financial health to work with, would interpret those decisions as indicators of high quality rather than what they are: red flags signaling financial mismanagement.
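To make the arithmetic of that effect concrete, here is a small sketch with invented figures for the hypothetical Gamma University: if spending is held flat while enrollment falls, the per-student figure U.S. News records rises automatically, with no corresponding improvement in what students actually receive.

```python
# Hypothetical illustration of the Gamma University scenario: spending held flat
# (perhaps by drawing on the endowment) while enrollment drops.

annual_spending = 30_000_000     # assumed spending on instruction, support, etc., left unadjusted
enrollment_before = 1_500        # assumed headcount before the shortfall
enrollment_after = 1_200         # assumed headcount after a 20 percent decline

per_student_before = annual_spending / enrollment_before   # $20,000 per student
per_student_after = annual_spending / enrollment_after     # $25,000 per student

print(f"Before the shortfall: ${per_student_before:,.0f} per student")
print(f"After the shortfall:  ${per_student_after:,.0f} per student")
# The 25 percent jump in per-student spending reflects fewer students, not better stewardship.
```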
Because the various ranking factors interact, of course, no single factor can put one institution above another. But the way U.S. News weights those factors is wrongheaded and can give rise to mismanaged colleges’ being overrated. So what should U.S. News do to avoid the kinds of ranking-inflation errors that give high marks to some colleges on the verge of collapse? How can it provide a more accurate picture of what the future holds for those institutions, at least in the short term? It should:
Eliminate the acceptance-rate variable. It is antiquated, fails to say anything about the quality of instruction, and promotes the inefficient use of financial and human resources. To determine a college’s desirability, the rankings could instead focus on the decisions of accepted applicants. For example, the National Student Clearinghouse tracks the enrollment decisions of students accepted at multiple institutions. Each college could have a score based on the number or percentage of students who select that institution over others to which they have been accepted. U.S. News might also use the institutional discount rate (using merit-based aid only, not need-based aid, so as not to encourage institutions to discriminate against low-income students). That would indicate how much of the institution’s own money is spent to attract students and would include only those students who are qualified for admission — making it a more valid indicator than acceptance rate.
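As a rough illustration, the merit-only discount rate suggested above might be computed as follows; the conventional tuition discount rate divides institutional grant aid by gross tuition revenue, and this sketch simply restricts the numerator to merit-based aid. The dollar amounts are invented.

```python
# A minimal sketch of the merit-only discount rate proposed above.
# Both dollar figures are hypothetical.

gross_tuition_revenue = 40_000_000   # assumed gross tuition-and-fee revenue before aid
merit_aid_awarded = 6_000_000        # assumed institutional merit-based (non-need-based) aid

merit_discount_rate = merit_aid_awarded / gross_tuition_revenue
print(f"Merit-only discount rate: {merit_discount_rate:.1%}")
# -> Merit-only discount rate: 15.0%
```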
Balance faculty compensation with the percentage of full-time faculty members. That way the cost of sacrificing the number of full-time instructors for the compensation of full-time instructors would be higher. The proportion of full-time professors teaching classes is a better indicator of both educational quality and institutional health than the amount that faculty members are paid or, worse, how much the institution spends on its health-insurance plan.
Expand the financial-resources section of the survey. For private colleges, at least, a number of benchmarks could be used to get a better handle on an institution’s financial viability, indicators that could have prevented institutions like Ambassador and Bradford from being ranked as highly as they were. Those indicators include the ratios of expendable net assets to total expenses (known as primary reserve), expendable net assets to debt (viability), and debt service to total expenses (debt burden).
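For concreteness, here is a brief sketch, with invented figures, of how those three ratios are calculated from numbers found on a private college’s audited financial statements.

```python
# A sketch of the three viability ratios named above, using their standard definitions.
# All inputs are hypothetical.

expendable_net_assets = 25_000_000   # assumed reserves the college could actually spend
total_expenses = 50_000_000          # assumed total annual operating expenses
long_term_debt = 20_000_000          # assumed outstanding long-term debt
annual_debt_service = 2_000_000      # assumed principal and interest paid this year

primary_reserve = expendable_net_assets / total_expenses   # how long the college could operate on reserves
viability = expendable_net_assets / long_term_debt         # ability to retire debt from reserves
debt_burden = annual_debt_service / total_expenses         # share of spending consumed by debt payments

print(f"Primary reserve ratio: {primary_reserve:.2f}")   # 0.50 -> roughly half a year of expenses in reserve
print(f"Viability ratio:       {viability:.2f}")         # 1.25 -> reserves exceed outstanding debt
print(f"Debt burden ratio:     {debt_burden:.1%}")        # 4.0% of spending goes to debt payments
```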
Such measures provide a quick comparative snapshot of whether a college is effectively and responsibly managing its resources in ways that secure its long-term financial health. Their inclusion in a ranking score would go a long way toward assuring prospective students that a “top tier” institution will still be around at graduation time.
Andrew P. Manion is provost at Aurora University, in Illinois.
Source: The Chronicle Review, Volume 53, Issue 38, Page B16. http://chronicle.com