In 2010, as the U.S. Department of Education was preparing new regulations to bear down on career-focused programs, the for-profit colleges’ main trade association set out to create something it hoped would take their place: a measure based on factors like graduation and loan-default rates that would also account for the number of low-income students an institution enrolled.
The association “wanted a defensible, realistic alternative” to the department’s gainful-employment regulations, one that would also reflect research showing that lower-income students have a tougher time completing college, recalls William C. Clohan, a former under secretary of education who was hired as a consultant to develop and promote that alternative. He came up with a few proposals for “risk-adjusted” metrics for colleges, he says, but the folks on Capitol Hill just weren’t very interested.
Today, however, with the Obama administration developing a system that would rate all colleges, it’s no longer just for-profit institutions arguing for a metric based on students’ income and other demographic information.
“You have to take into consideration the nature of the student body,” says Peter McPherson, president of the Association of Public and Land-Grant Universities, which has been vocal in advocating for such an approach. “It’s really not reasonable,” he says, to expect the same outcomes from open-admissions institutions as from highly selective institutions with a particular student profile. Other groups and experts, including the National Association of Student Financial Aid Administrators and several education researchers who contributed to a 2012 project called Context for Success, have endorsed the idea, too.
But the approach raises questions about stereotyping and the dangers of low expectations, so much so that David A. Bergeron, of the Center for American Progress, says the idea makes him “queasy.” Others, like the Institute for College Access & Success, oppose it outright, calling it a dangerous double standard.
Using a risk-adjusted model for an accountability tool, says the institute’s research director, Debbie Cochrane, is “implying that low-income students deserve less.”
The Obama administration has provided few details about the shape of its proposed college-rating system, but officials have said it would recognize institutions that enroll a higher proportion of low-income students, and eventually—subject to Congress’s approval—reward such institutions with higher levels of federal student aid.
Already in Use
Judging colleges on the basis of the makeup of their student bodies has become common in some contexts. The Higher Education Research Institute, for example, which conducts the annual Freshman Survey, has been developing risk-adjusted graduation rates for colleges for about 15 years, and now offers an “Expected Graduation Rate Calculator” on its website. (The calculator uses criteria like students’ ethnicity and academic profile, not income level. But as HERI’s director, Sylvia Hurtado, notes, students’ academic preparedness is often linked to their level of income.)
The “social mobility” rating in the annual college guide produced by Washington Monthly includes predicted versus actual graduation rates, with the former calculated on the basis of the standardized-test scores of incoming students and the proportion of the class that receives Pell Grants. U.S. News & World Report factors in actual-versus-predicted graduation rates in forming its rankings.
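The comparison underlying those ratings is simple in outline: estimate the graduation rate a college’s incoming class would predict, then see whether the actual rate lands above or below it. The sketch below illustrates that logic only; the coefficients, data, and function name are made up for the example and do not reproduce the Washington Monthly or U.S. News models.

```python
# Illustrative only: a toy predicted-vs-actual graduation-rate comparison.
# Coefficients and institutions below are hypothetical, not any ranking's real model.

def predicted_grad_rate(median_sat: float, pell_share: float) -> float:
    """Toy linear predictor: higher entering test scores raise the expected
    graduation rate; a larger share of Pell Grant recipients lowers it."""
    baseline = 0.30                            # hypothetical intercept
    rate = baseline + 0.0004 * (median_sat - 800) - 0.25 * pell_share
    return min(max(rate, 0.0), 1.0)            # clamp to a valid proportion

# Made-up example institutions: (name, median SAT, Pell share, actual grad rate)
colleges = [
    ("Open Access State",  950, 0.55, 0.48),
    ("Selective Private", 1350, 0.15, 0.90),
]

for name, sat, pell, actual in colleges:
    expected = predicted_grad_rate(sat, pell)
    gap = actual - expected                    # positive gap = beating expectations
    print(f"{name}: expected {expected:.2f}, actual {actual:.2f}, gap {gap:+.2f}")
```

In a scheme like this, an open-admissions campus with a modest graduation rate can still score well if it outperforms what its student profile would predict, which is the appeal such "risk adjustment" holds for its advocates.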
But it’s one thing to apply the risk-adjusted concept to a college guide, or for a college to use tools like the HERI calculator for self-assessment. It’s quite another to apply the approach as part of a ratings system devised by the U.S. government.
Indeed, even a federal advisory panel on student aid, whose July 2013 “Measure Twice” report shows that colleges with a high percentage of Pell Grant students have lower graduation rates, argued against using risk-adjusted measures in awarding federal grants to students, because it would be hard to develop a fair and reliable model. In that report, the Advisory Committee on Student Financial Assistance says that if the government’s goal is to improve graduation rates, a better approach would be to identify the practices that work best at peer institutions and then give other colleges incentives to adopt them.
Complex Issues
The difficulty in developing a formula for a federal ratings system with consequences has not escaped policy analysts. The American Association of State Colleges and Universities has not yet taken any formal position on the ratings-system idea, but at a forum in April, its director of federal relations and policy analysis, Barmak Nassirian, said his group wants assurances that any risk-adjusted approach would be based on social science, “not numerology.”
Mr. Bergeron, of the Center for American Progress and formerly of the Education Department, says he worries about “the problem of false precision” when using an elaborate risk-adjusted formula.
Even advocates of the approach acknowledge the challenges. At the forum, R. Michael Tanner, chief academic officer of the Association of Public and Land-Grant Universities, which sponsored the event, noted that many “ethical issues” come into play in decisions about what to weigh and by how much. Factors like income levels, high-school grade-point averages, and the level of education of the students’ parents are just a few that might be considered, he said. “It’s going to be hard for a really simple predictor to do its job.”
There are conflicting concerns. On the one hand, “you are risking making big analytical and policy-making mistakes” if you look at factors like graduation rates without taking student demographics into account, said Nate Johnson, one of the Context for Success contributors. But as a practical matter, he said, the sheer size and diversity of the higher-education sector add to the already complex dynamics.
“I’m not sure the federal government is in the right position to create a model that is conceptually and politically workable,” said Mr. Johnson, an education-policy analyst who now runs a consulting company called Postsecondary Analytics.
Even something as simple as using the proportion of students with Pell Grants as a proxy for disadvantaged or at-risk students can be fraught, he noted. Such a measure could exclude at-risk students who don’t qualify for such grants because they aren’t carrying enough of a credit load or have already exhausted their Pell eligibility.
Altering expectations for institutions on the basis of the incomes of their students also invites broader questions, said Mr. Johnson. Does it tell us that poverty is simply something beyond higher education’s control, he asked, or “that we systematically send our poor students to less-effective institutions or serve them less effectively?”
Mr. McPherson, the APLU president, says one way to answer those concerns is by designing a rating system in which the bar is continually moving. “You can’t have permanently low standards,” he says. “The expectations should rise.” That would make it fair to institutions, he says, and minimize the queasiness factor.