Although the conventional view in the United States is that the quality of elementary and secondary schools is seriously deficient while that of higher education is exemplary, the evidence suggests that such problems exist among colleges as well as schools. As the report from the Secretary of Education's Commission on the Future of Higher Education, better known as the Spellings Commission, noted several years ago, "There are … disturbing signs that many students who do earn degrees have not actually mastered the reading, writing, and thinking skills we expect of college graduates. Over the past decade, literacy among college graduates has actually declined."
On a purely anecdotal level, the proportion of college graduates I have interviewed for employment in the past decade whom I'd consider fully qualified, that is, able to think and write at a competent level, is astoundingly low, and it includes many graduates of so-called elite liberal-arts colleges. Such trends are especially troubling given the increased importance of higher education to the nation. In our knowledge-driven global economy, high-quality higher education is an important driver of economic competitiveness. Improving higher education is something we all have a stake in.
The challenges that I've described have many causes, including reduced levels of government support for colleges. But a major contributing factor is that the customers of higher education, students, parents, and employers, have few true measures of quality on which to rely. Is a Harvard education really better than one from a typical flagship state university, or does Harvard just benefit from being able to enroll better students? Without measures of value added in higher education, that's difficult, if not impossible, to determine. And without the ability to measure an institution's educational quality, customers can't make the best choices.
Developing such measures of outcome or of value added is itself difficult, but an existing intermediate measure of quality could provide customers with significantly better information. The National Survey of Student Engagement, begun with support from the Pew Charitable Trusts, is designed to obtain, on an annual basis, information from more than 1,300 colleges about student participation in programs and activities that those institutions offer for learning and personal development. The latest version was just released, and the results provide an estimate of how undergraduates spend their time, what they gain from attending college, and their views about the quality of teaching that they've received. Even though the survey doesn't measure education outcomes, it measures the activities and practices that are associated with those outcomes. Indeed, it states, "Survey items on the National Survey of Student Engagement represent empirically confirmed 'good practices' in undergraduate education. That is, they reflect behaviors by students and institutions that are associated with desired outcomes of college."
Yet what is remarkable about the survey is that participating institutions generally do not release their results, so parents and students cannot compare one college's performance with another's. The administrators of the survey have agreements with participating institutions that prevent the reporting of results for individual colleges. Thus, while colleges can see how they rank relative to all others involved in the survey, the public cannot. Some colleges do post their results on their Web sites, and 450 release data for a USA Today site, but that is not the same as aggregating all the results in one place.
Requiring all colleges to make such information public would pressure them to improve their undergraduate teaching. It would empower prospective students and their parents with solid information about colleges' educational quality and help them make better choices. To make that happen, the federal government should simply require that any institution receiving federal support—Pell Grants, student loans, National Science Foundation grants, and so on—make its results public on the Web site of the National Survey of Student Engagement in an open, interactive way.
To be sure, many colleges will complain that requiring such information to be made public will lead to all sorts of problems. They will claim that colleges won't participate. But if they want federal funds, they will probably participate. They will say that they already use the information internally as a benchmark to measure themselves against other institutions, so that making it public is not necessary. But that would be like the airline industry's saying that it doesn't need to publish on-time departure and arrival data, and that as long as carriers know how they compare with their competitors, they will improve. After all, what institution in any industry wants information made public about its performance?
Making the survey data public would certainly make life more challenging for faculty members and administrators at low-performing institutions or at those whose relative scores are going down. But competition and accountability drive improvement in performance, whether in the airline industry or in higher education.
Indeed, a growing number of organizations in our economy now have to live with customer-performance measures. It's time higher education did the same. Students, parents, employers, and society as a whole will be better off for it.