As the Obama administration moves ahead with its plan to rate colleges on access, affordability, and student outcomes, questions remain about the reliability of the data it has to work with. But while some say that’s reason enough to put the plan on hold, others argue that an imperfect accountability system is better than none at all.
Rather than fighting the ratings, colleges should help the federal government come up with accurate measures of how students are progressing and how much their educations are worth, a former education adviser to the president said during a weekend seminar here hosted by the Education Writers Association.
“It’s a difficult task, but that doesn’t negate the need for it to happen,” said Zakiya Smith, who is now a strategy director for the Lumina Foundation. Until last fall, Ms. Smith was a senior adviser for education at the White House’s Domestic Policy Council, where she helped develop President Obama’s higher-education policy. During a discussion here on Saturday, she said she was “cautiously optimistic” about the proposed rating system.
Under the system, which the president unveiled in August as part of a broader college-accountability strategy, students attending higher-rated institutions could obtain larger Pell Grants and cheaper loans.
The goal is to help students select an affordable college from which they can graduate and get a good job. One potential danger in pushing colleges to improve completion rates, though, is that they’ll cut off access to disadvantaged students who might pull their numbers down.
Matthew Reed, vice president for academic affairs at Holyoke Community College, argued that any system that rates colleges by the salaries their graduates earn will penalize those that graduate elementary-school teachers, artists, and others who make valuable contributions to society but don’t bring home fat paychecks.
“If we’re going to be judged on salaries, the easiest way to game the system is to drop programs in music and art history,” he said. “I think that would be a tragedy.” Rating programs, rather than colleges, would make the system more complex, but more fair, he said.
Many students served by his open-admission college in Massachusetts have learning disabilities and other challenges, Mr. Reed said. “The easiest way to goose your graduation rate is to try a little less hard to reach those disadvantaged students.”
Worries About Data
Then there’s the issue of how the federal government measures graduation rates. The current measure is widely considered flawed because it leaves out large numbers of students, including those who transfer between colleges and those who attend part time.
Terry W. Hartle, senior vice president for government and public affairs at the American Council on Education, said he would have preferred to see the administration send out a series of formulas for peer review before it made a rating system final. That’s unlikely to happen because the administration has vowed to have a draft of the rating system ready by next fall. Mr. Hartle would also have limited it to four-year colleges, where issues of affordability are more pressing.
“The more suspect the information, the more controversy the rating will attract,” he said.
As Mr. Hartle and Mr. Reed recited a litany of complaints about the data the government uses to measure completion and the dangers of lumping colleges with different missions together, Ms. Smith was bristling.
She said she was surprised at the extent of opposition to a rating system that doesn’t exist yet.
“Politicians tend to hunker down and figure it out themselves,” she said. “It’s a good thing that the administration is seeking input from the higher-education community.”
Colleges all have ways of measuring and touting their successes, she said. They should share those measures with the federal government so it can come up with the best possible system for helping students make good college choices.
Old Battles Rejoined
For years, college leaders and lobbying groups have decried the lack of reliable data on student retention and completion, and for years they’ve also battled over proposals to create a “unit record” database that would track the progress of each student over his or her whole college career. Critics, including lobbyists for private colleges, have argued that such a database would raise troubling privacy concerns.
As far back as 2005, Mr. Hartle said that the federal government’s demands for greater accountability weren’t going to go away, and that colleges should work with the government to improve the reporting system rather than simply complaining that it’s flawed.
“Those responses,” he said at the time, “don’t work forever.”
As Saturday’s discussion made clear, those arguments are still taking place.
Another challenge of having the Obama administration come up with a rating system, Mr. Hartle said, is that a new administration could change it. Say, for instance, an Obama plan factored community service into a college’s score, but a subsequent Republican administration thought military service should be included instead. “One of the challenges is having a rating system that means something over time,” Mr. Hartle said.
Mr. Reed said that comparing the outcomes of community colleges in sparsely populated rural areas with those of colleges in competitive higher-education markets doesn’t make sense.
“Students here have a wealth of choices, but that means we get fewer honors graduates than we would if we were in North Dakota,” he said. “The contexts are so utterly different that to put everyone on the same grid is going to lead to some serious distortions.”