Government

More Metrics, More Problems: Breaking Down Obama’s College-Ratings Plan

December 19, 2014

President Obama’s controversial college-ratings plan has spawned 16 months of intense debate, much of it centering on one key question: What measures would factor into the ratings?

On Friday the Education Department released a "framework" outlining what the system might eventually look like. The new information is unlikely to quell the controversy: It’s heavy on possibilities and light on details.

But one document the department released does, at least, outline some metrics that could factor into the ratings. Broadly, those metrics concern issues of access, affordability, and outcomes. Some of them are familiar measures; others are murkier.

Let’s break down those metrics. Here’s a road map of measures the department is thinking about, and how they present certain problems for the ratings plan.

11 Measures That Could Make the Cut …

Percentage of students receiving Pell Grants. As the department notes, the share of a college’s students who receive federal Pell Grants is "the most common measure of access and the most readily available." That’s hard to argue with. "Pell recipient" and "low income" are used pretty interchangeably in higher-education research.

No measure is perfect, and the Pell percentage does have limitations. It masks some real variation in the financial strength of recipients, whose families might have annual incomes of $0 or $50,000. It draws a hard line between otherwise similar students whose families fall just above or just below the eligibility cutoff. And selective colleges have pointed out that it doesn’t credit them for enrolling low-income international students, who aren’t eligible for federal financial aid.

Expected Family Contribution gap. The percentage of students receiving Pell Grants is a common piece of data and readily available. The "EFC Gap" is anything but. The EFC is the amount the government suggests a family can pay for college, based on information the family reports on the Free Application for Federal Student Aid, or Fafsa. But basing a metric on an EFC gap is new.

It’s unclear what, exactly, the department would measure. Its own materials say: "We are exploring defining EFC gap as the average difference between some focal EFC level and each student’s individual EFC (with negative values treated as zero)."

That sounds complicated. In any event, the EFC itself is widely seen as an unrealistic measure of what families can pay.
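Read literally, though, the definition lends itself to a straightforward calculation. Here is a minimal sketch of one possible reading, in Python, assuming a purely hypothetical focal EFC of $10,000 (the framework names no specific level) and invented per-student EFC values:

```python
def efc_gap(student_efcs, focal_efc=10_000):
    """One reading of the department's description of the 'EFC gap.'

    For each student, take the focal EFC minus the student's EFC,
    treat negative values (students whose EFC exceeds the focal level)
    as zero, then average across all students. The focal level of
    $10,000 is an assumption, not something in the framework.
    """
    gaps = [max(focal_efc - efc, 0) for efc in student_efcs]
    return sum(gaps) / len(gaps) if gaps else 0.0

# Three hypothetical students: EFCs of $0, $4,000, and $25,000
print(efc_gap([0, 4_000, 25_000]))  # (10000 + 6000 + 0) / 3 = 5333.33
```

On that reading, a larger gap would suggest a student body with less ability to pay, relative to whatever focal level the department settled on.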

Family income quintiles. Of course, if the department wants to shed light on the socioeconomic diversity of a college’s students, it could also look at income itself. That’s another option the ratings might use, and it might make more immediate sense to prospective students and families than would measures of EFC or Pell.

Income data would probably come from what families report on the Fafsa. That means the incomes of students who don’t apply for aid would be unknown. At some colleges, that’s a small population. At others, it would be substantial.

First-generation college status. Many colleges that strive to achieve student-body diversity already keep tabs on the share of students who are the first in their families to pursue a degree. But the colleges don’t all define "first generation" in the same way: Some institutions count students whose parents haven’t graduated from college, while others count only students whose parents never enrolled.

It sounds as if the ratings would gather this information, too, from the Fafsa, which asks about the educational attainment of each parent. Once again, pulling data from the Fafsa means it’s available only for aid applicants.

Average net price. Most students don’t pay sticker price, so in recent years the government has been emphasizing net price, or what students pay after grants and scholarships. Colleges’ average net prices are now reported in the Integrated Postsecondary Education Data System, and are available to consumers on the Education Department’s College Navigator.

But average net price applies only to a slice of a college’s students: first-time, full-time beginners who receive grants or scholarships from the institution or from federal, state, or local governments. For public colleges, it counts only those paying in-state tuition.
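In rough terms, the published figure is simple arithmetic for each student in that restricted cohort, averaged across the group. A toy sketch, with invented numbers:

```python
# Net price, loosely: published cost of attendance minus grant and
# scholarship aid (loans don't count). All figures below are invented.
cost_of_attendance = 42_000

# Grant and scholarship aid for four hypothetical first-time,
# full-time students who received at least some grant aid.
grant_aid = [28_500, 15_000, 35_000, 10_000]

net_prices = [cost_of_attendance - aid for aid in grant_aid]
average_net_price = sum(net_prices) / len(net_prices)
print(average_net_price)  # (13500 + 27000 + 7000 + 32000) / 4 = 19875.0
```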

Net price by quintile. Confusingly, the net-price-by-income data, which are also already reported, consider a different universe of students than the overall average net price does. They count first-time, full-time students who receive federal financial aid (including those whose only aid is a federal loan). At some colleges, that group covers a great deal of the incoming class. At others, it does not.

The measure is meant to give students an idea of what someone like them might pay, but it can be misleading. Because some colleges have small numbers of federal-aid recipients in some of the income bands, the data can be skewed by a few outliers. That has led an economist at Wellesley College to encourage the government to adjust the figure to measure median, rather than average, net price.
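To see why a handful of aid recipients can skew the published figure, consider a hypothetical income band with just five students, one of whom pays far more than the rest. (The net prices below are invented for illustration.)

```python
import statistics

# Invented net prices for five federal-aid recipients in one income band.
net_prices = [3_500, 4_200, 4_800, 5_100, 38_000]

print(statistics.mean(net_prices))    # 11120.0 -- pulled up by one outlier
print(statistics.median(net_prices))  # 4800 -- closer to what a typical student paid
```

That is the gist of the argument for reporting a median rather than an average.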

Also, elite colleges measure income in two different ways. Some have reported their data using one measure, while others have used another, creating an apples-to-oranges comparison with their peers.

Completion rates. Here’s one of the most hotly contested ingredients that could feed into the ratings: How many students earn degrees or certificates within three years (for community colleges) or six (for baccalaureate institutions)? Federal graduation rates count only first-time, full-time students, leaving out the part-time and transfer students who make up well over half the enrollments at many colleges. A more comprehensive measure is in the works, but it won’t be ready until 2017.

Many educators worry that a focus on this measure will encourage colleges to turn away students who are likely to struggle, thereby hurting low-income and minority students disproportionately. "While it is important that colleges are affordable and provide access to disadvantaged students," the administration’s invitation for comments says, "it is essential that they contribute positively to the outcomes of their students."

Transfer rates. Recognizing that many community-college students transfer to four-year institutions without first earning degrees or certificates, the plan’s architects are "exploring the viability" of measuring transfer rates and giving colleges credit for them. That would help offset the hit those colleges would take in the completion category, where transfers aren’t counted.

As higher-education costs soar, more students start out at community colleges with no intention of getting degrees there. "This type of transfer is intrinsic to the community-college mission and should be rewarded as such," the guidelines acknowledge.

Labor-market success. Measuring labor-market outcomes is a tricky business. Focus on salaries, and you’ll reward colleges that graduate bankers and accountants while dinging those whose students become teachers and social workers. The draft plan acknowledges that and says it’s considering setting a minimum threshold for graduate earnings. There are a few ways to do that: looking at the percentage of former students who earn above 200 percent of the federal poverty line, for example, or using a multiple of the full-time minimum wage earned over one year.
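To make those two options concrete, here is a small sketch. The specific figures, roughly the 2014 federal poverty guideline for a one-person household and the federal minimum wage, are used only for illustration; the framework commits to neither number, and the sample earnings are invented.

```python
POVERTY_GUIDELINE = 11_670   # approx. 2014 HHS guideline, one-person household
MIN_WAGE = 7.25              # federal minimum wage, per hour

threshold_poverty = 2.0 * POVERTY_GUIDELINE   # 200% of the poverty line: 23,340
threshold_min_wage = MIN_WAGE * 40 * 52       # full-time minimum wage for a year: 15,080

def share_above(earnings, threshold):
    """Share of former students earning at or above a threshold."""
    return sum(e >= threshold for e in earnings) / len(earnings)

# Invented earnings for six former students a few years after college.
earnings = [16_000, 21_000, 24_500, 31_000, 45_000, 12_000]

print(share_above(earnings, threshold_poverty))   # 0.5
print(share_above(earnings, threshold_min_wage))  # 0.8333...
```

Where the threshold is set would obviously change how many former students clear it, which is part of why the department has not settled on one.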

The department says it will also take into account that the salaries graduates earn straight out of college don’t necessarily reflect what they’ll be making a few decades later. And it’s trying to figure out how to account for how other factors—like regional differences or a student’s major—can influence earnings data.

Graduate-school attendance. Students who go on to graduate school can drag down short-term earnings and job-placement numbers, the draft plan notes, but colleges shouldn’t be penalized for that. So the department is considering measuring the number of students who attend graduate school within, say, 10 years of entering college. But the data are hard to come by: the only available indicator is federal borrowing, and only about half of the students enrolled in graduate school last year took out federal loans.

Until the department can figure out a better way to measure rates of graduate-school attendance, this will be a work in progress.

Loan-performance outcomes. The government already holds colleges accountable—to a point—for their "cohort default rates," or the share of borrowers who default on their loans within a certain time period. But for the ratings, the department says it’s considering other loan-performance measures that might provide "additional or superior information." Possibilities include deferment, forbearance, and repayment rates.

One concern about those metrics: Arguably, they measure something that’s already captured in measures of college costs and postcollege earnings. Another concern: Using the measure might reward colleges for enrolling more students from affluent families.

… and One That Won’t

Average loan debt. The government includes a measure of typical loan debt on its College Scorecard, but it’s not considering that metric for the ratings. In a call with reporters on Friday, Education Department officials explained why.

Basically, measuring debt might not be fair. An institution that attracts many students from higher-income families might look pretty good by that measure because its students wouldn’t need to borrow as much, said Ted Mitchell, the under secretary of education. A college that tends to draw students from low-income families might also perform well because those students would be eligible for Pell Grants, which could decrease their debt levels.

But colleges that enroll more students from middle-income families could be penalized, he said.

In any case, the government doesn’t have a good way to include private-loan debt in this measure. Using a metric that includes only federal loans could create an incentive for colleges to steer students to private ones. That’s the opposite of what the government wants: Its own loans generally have better terms and borrower protections.

Eric Kelderman contributed to this article.