[Updated, 2/6/2014, 7:11 p.m.]
Pity the poor data geeks at the National Center for Education Statistics. The Education Department has less than four months to roll out its controversial college-rating system, and it isn’t going to be easy, as the data experts at Thursday’s technical symposium made clear.
Pretty much everyone presenting at the daylong event agreed that existing federal data on student outcomes are flawed, that a unit-record student-tracking system would solve almost every problem (if only Congress would allow it), and that it would be almost impossible to design a ratings system that would please everyone.
But they disagreed on pretty much everything else, including which metrics the department should use, how it should weight them, and how it should group institutions for peer comparisons. Should the groups be based on mission, as several panelists argued? Geography? Price? Do graduates’ earnings really matter, or are less easily measured factors, such as learning outcomes, paramount? And should allowances be made for colleges that serve “high-risk” populations?
Then there were the broader questions about the purpose of the system: Is it meant to be an information tool for consumers, panelists wanted to know, or an accountability measure? Can it really serve both purposes? And will it even influence behavior, either consumers’ or colleges’?
With a draft system promised by the spring, the department has only a few months left to figure it all out. Ultimately, its employees will have to make do with the data they have now, and decide which outcomes matter most. In the end, it will come down to value judgments and a difficult-to-answer question: What is the purpose of higher education?
Following is a sampling of the panelists’ remarks at Thursday’s symposium, as recorded in a live Chronicle blog, and the conversation those comments sparked online.