This month, the U.S. Department of Education will publish the annual financial-responsibility scores of thousands of private colleges. The scores are one of the few publicly available, broad-based indicators of individual institutions’ financial health. Or are they?
According to three major higher-education associations, several colleges, and private accountants, the scores are often inaccurate and misleading because the department misapplies its own rules when making its calculations.
The critics also contend that aspects of the 14-year-old formula used to calculate the scores are flawed and outdated.
For more than a year, groups including the National Association of College and University Business Officers and the National Association of Independent Colleges and Universities have pressed the department to re-examine how it calculates the scores. They are derived from the audited financial statements that colleges are required to submit annually to the department. The scores are important because they help determine whether and how freely colleges can participate in federal student-aid programs.
The business officers’ group has documented five areas where, it contends, the department is miscalculating the scores. And the private-college association says the department’s inconsistent application of formulas among its 10 regional offices compounds the unreliability of the scores as a measure of colleges’ financial health.
“There could be schools on the list that shouldn’t be on the list, and there could be schools that should be on the list that aren’t,” says Sarah A. Flanagan, vice president for government relations and policy at the private-college group, known as Naicu.
The scores, which run from 3.0 to minus 1.0, were devised to identify colleges in financial trouble. Over the past two years, several colleges with low scores have been acquired by other parties, suggesting that the list has become a tool for private investors seeking financially ailing colleges that could be ripe for a takeover.
But Naicu contends that the Education Department’s misapplication of its own rules has given a false impression of the number of colleges on the brink. The data for the 2009 fiscal year showed that 149 private degree-granting institutions received composite scores below 1.5, the cutoff for passing the test. “There’s just not 150 schools that are at the risk of closure, or even close to that,” Ms. Flanagan says.
She and other critics say that, for 2009 in particular—a year of significant losses for investors—the department’s treatment of endowment declines (it counts a decline in endowment value as if it were an expenditure) improperly put many more colleges on the “failed” list than should have been there.
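To see why that matters, consider a hypothetical illustration (the figures here are invented, and the department’s actual composite formula weighs several ratios): a college that spends $100-million in a year while its endowment loses $20-million in value would, under that treatment, be scored as though it had $120-million in expenses, dragging down any ratio that compares its reserves or income with its spending.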
The department maintains that it applies the regulations consistently with its interpretation of them. In a letter to Nacubo, the business officers’ group, it provided a point-by-point rebuttal to the arguments raised by that organization.
Serious Consequences
Although the higher-education groups’ disagreements with the department over the scores focus largely on arcane principles best understood by accountants—whether to classify a college’s line of credit as short-term or long-term debt, whether endowment losses should count as “total unrestricted expenses”—the ramifications are much bigger.
Colleges with scores below 1.5 are subject to tighter monitoring for their federal student-aid funds. Those with scores below 1.0 are required to post costly letters of credit to remain eligible for financial-aid programs. Colleges that consistently fail the test can be denied the right to issue federal aid to their students.
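Read together, the thresholds amount to a simple set of cutoffs. The sketch below merely restates them in code for clarity; the function name and tier labels are illustrative rather than the department’s own terminology, and actual determinations involve further review.

def consequence_tier(composite_score):
    # Illustrative only: restates the thresholds described in this article.
    # Scores run roughly from -1.0 to 3.0; the department's real process
    # involves additional factors beyond the raw number.
    if composite_score >= 1.5:
        return "passes the financial-responsibility test"
    elif composite_score >= 1.0:
        return "subject to tighter monitoring of federal student-aid funds"
    else:
        return "must post a letter of credit to remain eligible for aid programs"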
For some institutions, the publication of the scores becomes a public-relations concern as well.
When Guilford College, in North Carolina, showed up on the list for 2009 with a score of 1.4, “we were on the front page” of the local newspaper, says its president, Kent J. Chabotar. A former college finance officer, Mr. Chabotar is part of a group organized by Naicu, Nacubo, and the Council of Independent Colleges that is studying the financial-responsibility score and the department’s application of it.
There is value in the scores if they’re accurate, he says, but as currently applied, they are a source of “misplaced public scrutiny.”
The department has produced the scores since 1998, but the higher-education groups say it was only after the scores were made public, two years ago, that the critics began to discover what they say are widespread problems. In 2009, The Chronicle obtained scores for all colleges under a Freedom of Information Act request and published a comprehensive list of those with failing scores. Last year the department decided to release the scores annually for all institutions on its own Web site. (The release of the latest round of scores, covering the 2010 fiscal year, is expected this month, but the date has not yet been set.)
Before the publication of the scores, colleges often didn’t even know how the calculation had turned out unless they failed, Ms. Flanagan says. Nor do colleges necessarily know how the department crunches the numbers, she adds.
Problem Areas
In conducting its own analysis, Nacubo says it has identified five areas where it believes the department is misapplying its own formula in ways that are “contrary to the letter and spirit” of the 1997 rules that established the scores.
In addition to the questions over how endowment losses are treated, most of the disagreements involve whether colleges are improperly penalized for things like the way they’ve structured their debt, or how they account for such liabilities as the long-term cost of pensions.
Dale C. Larson, chief financial officer at the Dallas Theological Seminary, says he has no dispute with counting an institution’s annual cost of providing those pensions. But he says it is wrong to treat the entire unfunded liability of a pension as a single year’s expense—as the Education Department did for his institution in 2009. That resulted in a score of 1.0. “I never should have been in the failed category,” he says.
In its rebuttal to Nacubo, the department says it is following the law in accounting for pensions.
Nacubo has also taken issue with the department’s hard line on counting pledged donations from trustees. If the trustee is also doing business with the college, the department may consider the pledge as a transaction from a “related party” and not count the entire pledge as an asset. Nacubo says the department is applying a standard for “related-party transactions” that is appropriate in the for-profit sector, but not for nonprofit institutions.
The Association of Private Sector Colleges and Universities says its members have raised no issues about how the department calculates scores for their institutions.
Ms. Flanagan, of Naicu, argues that the rules themselves need to be updated. For example, she says, while laws enacted in the past few years in most states allow nonprofit organizations greater flexibility in how they spend their endowments, the department’s formula doesn’t reflect that new leeway in its calculation of colleges’ assets.
Department officials have said they are willing to consider changes in the formula but haven’t made it a top priority. Ms. Flanagan says the groups are frustrated by the inaction but understand the situation. “Right now, they’ve got a lot to worry about that we also want them to worry about, like the student-aid programs,” she says. “Our hope is that if we come up with an alternate solution,” department officials will consider it.
Meanwhile, endowments at many colleges are gradually recovering from their 2009 lows, and the groups expect fewer institutions to find themselves on the hot seat when the scores for 2010 become public. This year, says Ms. Flanagan, “we are guessing the list will be smaller.”