For the past three months, The Chronicle's reporters have been writing a series of articles collectively titled Measuring Stick, describing the consequences of a higher-education system that refuses to consistently measure how much students learn. From maddening credit-transfer policies and barely regulated for-profit colleges to a widespread neglect of teaching, the articles show that without information about learning, many of the most intractable problems facing higher education today will go unsolved.
Failing to fill the learning-information deficit will have many consequences:
- The currency of exchange in higher education will continue to suffer from abrupt and unpredictable devaluation. Students trying to assemble course credits from multiple institutions into a single degree—that is, most students—frequently have their credits discounted for no good reason. That occurs not only when students transfer between the two- and four-year sectors, or when the institutions involved have divergent educational philosophies. A student trying to transfer credits from an introductory technical-math course at Bronx Community College to other colleges within the City University of New York system, for example, would be flatly denied by five institutions and given only elective credit by three others. John Jay College of Criminal Justice, by contrast, would award the student credit for an introductory modern-math course acceptable for transfer by every CUNY campus, including Bronx Community College—except that BCC would translate that course into trigonometry and college algebra, not technical math.

Students who emerge from this bureaucratic labyrinth should be awarded credit in Kafka studies for their trouble.

Credit devaluation, which wastes enormous amounts of time, money, and credentialed learning every year, is rooted in mistrust. Because colleges don't know what students in other colleges learned, they're reluctant to give foreign courses their imprimatur.

- Taxpayers have few defenses against those who would exploit the federal financial-aid system for profit. Last year the U.S. Department of Education rightly criticized the Higher Learning Commission of the North Central Association of Colleges and Schools for accrediting American InterContinental University, despite AIU's "egregious" policy of awarding nine credits for five-week courses. But the department's follow-up proposal to solidify the traditional, time-based definition of credits as signifying one hour spent within the classroom and two without was also criticized, and for good reason. Nearly a third of all college students took online courses last year. Why would anyone define credits in terms of seat time when, increasingly, there are no seats and no fixed learning time? Because they have no other basis for doing so.

Lacking objective information about student learning, the crumbling quality-control triad of accreditors, states, and the federal government is faced with an unwelcome choice: Reinforce a time-based measuring stick that was already flawed when it was developed, in the late 19th century, or allow unscrupulous operators to write checks to themselves, all to be paid by the U.S. Treasury.

- Upward mobility in higher education will remain limited to institutions that happen to be located in the cities favored by Richard Florida's "creative class." If your campus is in Greenwich Village or Foggy Bottom, the sky's the limit. If all you have to offer is unusually good teaching, you're out of luck. How can you prove it? How would anyone know? So aspiring colleges are forced to compete for students by means of marketing campaigns, recreation centers, and other expensive things that continually drive up tuition until there are no students left to pay full freight and subsidize all the rest. And then the whole rickety system comes crashing down. It's not a question of whether this will happen to many mid-tier institutions—it's when.

- The public definition of institutional quality is left to think-tank entrepreneurs and journalists with agendas to push and magazines to sell. Those who are terrified by the notion of Congress's using such information to create an accountability system for higher education should consider that, in fact, we've had such a system in this country since 1983. It's run by U.S. News & World Report.

- Most important, without information about learning, there is less learning. Faculty cultures and incentive regimes that systematically devalue teaching in favor of research are allowed to persist because there is no basis for fixing them and no irrefutable evidence of how much students are being shortchanged.
Reasonable higher-education leaders acknowledge all of those points. Yet the prevailing attitude toward information about learning still ranges from infinite caution to outright hostility. Assessing student learning is difficult, particularly learning at the elevated levels to which colleges ought to aspire. Still, possible instruments of assessment are seen either as gross violations of institutional autonomy or as so crude and imperfect that they require further refinement and study, lasting approximately forever. "The perfect is the enemy of the good" has become a rhetorical strategy to be deployed, rather than a problem to be avoided, when outsiders ask uncomfortable questions about teaching and learning.
American universities grant 50,000 research doctorates per year. Even if we consider only full-time staff in Ph.D. programs, there are upward of 170,000 people working in colleges today who have been rigorously trained to find meaning in chaos. They explore the furthest theoretical reaches of time and space; ponder the nature of justice, beauty, and truth; develop new ways of understanding the human condition; and contribute countless innovations that make the world a more vibrant, humane place to be. Are we to understand that it is beyond their intellectual means to produce a reasonably accurate estimate of how much chemistry majors learn at Institution A compared with Institution B? That a student's relative capacity to think analytically and write clearly is a mystery that no mortal can hope to reveal?
Nonsense. Comparable learning information doesn't exist because many groups have a strong interest in its not existing: institutions that thrive on centuries-old reputations, despite their present-day failure to challenge students in the classroom; companies looking to exploit the federal financial-aid system; faculty who hate teaching and love research; colleges that profit from forcing students to take the same course twice.
Institutional autonomy is important, and so is the academic freedom that allows faculty to shape the content and character of their courses. But there are reasonable limits to most things, including these. When the autonomy of CUNY math departments produces a Mad Hatter credit-transfer system, it's time to draw the line.
There are, of course, many people in higher education with enlightened motives and views. Public institutions are beginning to publish results from the Collegiate Learning Assessment and other assessments of critical-thinking skills. Seventy-one presidents, many from liberal-arts colleges that specialize in teaching, have formed the Presidents' Alliance for Excellence in Student Learning and Accountability. The better accreditors are using their limited leverage to prod institutions toward more assessment and transparency.
But the question remains: Will those efforts come fast enough or go far enough?
The "gainful employment" regulations that the Department of Education is working to impose on for-profit colleges are nothing less than a wholesale repudiation of traditional higher-education quality control. All of the institutions in question are accredited to do business. Yet the federal government still doesn't trust that their students are learning enough for what they're paying. So the department has chosen to define learning in purely economic terms, comparing students' postgraduate earnings with their debt.
That makes sense for vocational programs. But how long will it be before politicians who see higher education as nothing more than a way to train future workers simply cross out the "for profit" limitation on the gainful-employment measures?
College rankings, meanwhile, are proliferating as private companies compete to sate the growing appetite for comparative information among prospective students at home and abroad. As much as colleges complain that their unique essence can't be distilled into a single number, students choosing a college (or, increasingly, a course) can choose only one. Yet, rather than produce alternative rankings that reflect the core values of higher learning, many people in higher education seem to believe that the rankings genie can be put back in the bottle through a campaign of frequent, uncoordinated complaining, accompanied by the hope that U.S. News, which doesn't even publish an actual newsmagazine anymore, will somehow see the error of its ways.
Meanwhile, a few of those 170,000 smart people are actually interested in how much students learn in college, and are using new psychometric instruments to find out. When their results become public, the myth that everyone with a college degree actually learned something will be definitively punctured, and along with it any justification for keeping information on learning hidden.
The real debate shouldn't be about whether we need a measuring stick for higher education. We need a debate about who gets to design the stick, who owns it, and who decides how it will be used. If higher education has the courage to take responsibility for honestly assessing student learning and for publishing the results, the measuring stick will be a tool. If it doesn't, the stick could easily become a weapon. The time for making that choice is drawing to a close.