The Education Department’s “framework” for its college-ratings plan is surprisingly tentative, filled with verbs like “exploring” and “considering.” It can be seen as a smart move: Kick the ratings can down the road, telegraph what might be coming, get more stakeholder involvement, and so on. But it can also be seen as an OMG moment: After so much effort, so many meetings, and so much chatter, there remain far too many questions unanswered and far too many ratings criteria ill-defined.
The department is considering three categories of variables: access, affordability, and student success. The department has a long history of measuring access, so it is no surprise that the access variables are the best developed. The calculations are mostly straightforward (e.g., the percentage of students receiving Pell Grants has been reported for a long time, and first-generation college status can be calculated directly from the FAFSA), and the “considering” and “exploring” verbs appear only once (in a discussion about family contribution).
The Education Department has less experience measuring affordability. Central to that calculation is average net price, which the department is “considering” using and which many consider flawed. I worked on the Money magazine ranking, and we developed a different measure of net price that we believe more accurately reflects differences across institutions.
It is in the next category of measures, student success, that the tentativeness of the administration’s efforts is most evident. We have been round and round for years about how flawed official federal graduation rates are. And as the department notes, broader measures are supposed to be made public in 2017. In the meantime, according to the framework, the department is “exploring” using the National Student Loan Data System (NSLDS) to create broader measures. Certainly in the year and a half since this ratings project was announced, the department could and should have investigated graduation rates based on NSLDS information more thoroughly, checking their reliability against the Integrated Postsecondary Education Data System and survey data.
Education Department officials note that they want to use transfer rates because a two-year college is “a step toward completion of a bachelor’s degree.” However, as the National Student Clearinghouse has shown, patterns of student transfer are not a simple two- to four-year “progression.” Students swirl about, transferring within levels (e.g., from one two-year school to another), and a surprising number transfer from four-year to two-year schools. But far more important, if we want transfer rates to reflect a step toward completion of a bachelor’s degree, then we should measure the percentage of transfer students who actually complete that degree. Those completion rates are low: according to the Beginning Postsecondary Students Longitudinal Survey, only 21 percent of two-year college students who intended to earn bachelor’s degrees had earned them five years later. Bachelor’s attainment is what should be measured.
The department is at best vague about how it will measure employment outcomes. The department says it is “exploring a variety of employment outcome measures that can provide critical employment information that appreciate those differences in ways that are sensitive to educational, career, work force, and other variables.” This statement may reflect political realities, but it doesn’t help us understand how the department will actually move forward.
One more note: the department is ignoring the most basic fact about graduates’ earnings, namely that employment outcomes vary far more across programs than across institutions. As my College Measures work has documented, what students study is much more closely related to employment outcomes than where they study. And the vague goals of “short-term substantial employment rates” and “long-term median earnings” do not inspire confidence.
Left unsaid in the framework is what the department will do with all these measures, no matter how it eventually defines them.
Again, tentativeness rules. Since the department is eschewing rankings, it apparently will clump colleges into high-, middle-, and low-performing categories, with most institutions landing somewhere in the middle. Will these ratings be issued by individual category (access, affordability, student success) or as a single score? How will individual variables be combined within each category? Will the percentage of Pell Grant recipients and the percentage of first-generation college students, for example, be weighted equally in the access score?
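To see why the weighting question matters, consider a purely hypothetical equal-weight access score (the framework specifies no formula; the variables and the one-half weights here are my illustration, not the department’s):

\[
\text{Access}_i \;=\; \tfrac{1}{2}\,\text{PellShare}_i \;+\; \tfrac{1}{2}\,\text{FirstGenShare}_i
\]

Shift those weights to, say, 0.7 and 0.3 and the ordering of colleges can change, so the weights are themselves a policy decision the framework never addresses.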
Will scores be adjusted for different student populations? Adjustments are needed when comparing the success of graduates from flagships or Ivy League institutions against those from regional colleges, but this requires the construction of risk-adjusted models. And choosing the variables to include in those models is as fraught as choosing the variables to include in the ratings system.
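For concreteness, a standard risk adjustment (again my sketch, not anything the department has specified) predicts each institution’s graduation rate \(G_i\) from its student-body characteristics and scores the institution on its residual:

\[
\widehat{G}_i \;=\; \beta_0 + \beta_1\,\text{PellShare}_i + \beta_2\,\text{FirstGenShare}_i + \cdots, \qquad \text{Adjusted}_i \;=\; G_i - \widehat{G}_i
\]

Every covariate added to or dropped from the right-hand side changes which colleges appear to over- or under-perform, which is precisely why the choice is fraught.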
After 16 or so months of meetings and discussions, the department will no doubt require even more meetings and discussions. But the administration can’t avoid a fundamental problem. When Money magazine or U.S. News & World Report or Washington Monthly release their rankings, the variables included and the weights used are driven by editorial decisions. Readers are free to buy the magazines and read the rankings or not. In contrast, the federal Department of Education, as a government agency, has a much louder megaphone than even the largest of these private sources. Its ratings will be subject to far more scrutiny than private rankings.
Deciding which variables to choose, how to measure them, how to report them, and what to do with them is difficult for anybody engaged in rating or ranking colleges. But the challenges are magnified for a government agency. This may account for why, after so many months and so much effort, the framework remains tentative and incomplete.
Mark S. Schneider is a vice president at the American Institutes for Research.