Time-based units were never intended to be a measure of student learning. In the early 1900s, Andrew Carnegie, troubled that professors made too little to save for retirement, created a free pension system, administered by the Carnegie Foundation for the Advancement of Teaching. To participate in the program, colleges had to adopt a standard unit for admissions, one based on a high-school system that measured time spent on a subject.
But colleges didn’t stop there. Carnegie’s pension system spurred them to convert their course offerings into time-based units as well, in order to determine the faculty workload thresholds that qualified professors for the free pensions. And so the credit hour, which has become the fundamental building block of courses and degree programs in higher education, was born. Unfortunately, it has also become the primary proxy for learning.
The Carnegie Foundation did not intend for this to happen. It made that quite clear in its 1906 annual report, when it specified that in the counting of units, “the fundamental criterion was the amount of time spent on a subject, not the results attained.” Just last month, over 100 years later, Carnegie not only reiterated this point, but announced an effort to rethink its unit completely: “The Carnegie Foundation now believes it is time to consider how a revised unit, based on competency rather than time, could improve teaching and learning in high schools, colleges, and universities.”
Theoretically, colleges supplement the credit-hour count with an objective measure of how much students have learned: grades. But it is hard to reconcile that measure with the research suggesting that nearly two-thirds of provosts and chief academic officers think grade inflation is a serious problem. In 1961, 15 percent of undergraduate course grades were A’s; today more than 40 percent are A’s.
Even more worrisome, the National Center for Education Statistics has found that nearly 70 percent of college graduates could not correctly perform basic tasks like comparing opposing editorials. And Richard Arum and Josipa Roksa’s recent work, Academically Adrift, found startlingly low gains in students’ critical thinking, complex reasoning, and communication skills. Given these findings, grades have clearly lost much of their power to meaningfully differentiate student learning.
It is not surprising, then, that employers are not particularly impressed with recent college graduates. Employers want what higher education says it provides: critical thinking, communication, teamwork, writing, and adaptive learning skills. Yet only about one-quarter of employers say that colleges and universities are doing a good job of preparing students for the challenges of today’s global economy.
Perhaps the strongest evidence of the credit hour’s inadequacy can be found in the policies and choices of colleges themselves. If credit hours truly reflected a standardized unit of learning, they would be transferable across institutions. Yet colleges routinely reject credits earned at other colleges, underscoring their belief that credit hours are not a reliable measure of how much students have learned. With nearly 60 percent of students in the United States attending two or more colleges, the nontransfer of credits has huge implications. If higher education doesn’t trust its own credits, why should anyone else?
They probably shouldn’t. While the stated mission of higher education may be about learning, the research findings on poor learning outcomes and rampant grade inflation, along with the difficulty of credit transfer, tell a different story. Is learning a priority at institutions with auditorium-size classrooms full of disengaged undergraduates being taught by poorly paid, overextended teaching assistants or adjuncts? Is learning (and teaching) a priority when publishing or raising grant dollars is the primary driver behind tenure?
Without broader agreement about learning outcomes, credits and the value of degrees will remain opaque. Measuring time is easy; measuring learning is hard. But that doesn’t mean it shouldn’t be done. Those in higher education must roll up their sleeves and commit to the hard work of figuring out together what they expect students to know and how best to meaningfully assess what students have learned.
This is already happening in some places, where colleges have been experimenting with the Degree Qualifications Profile and the Tuning process, both developed by the Lumina Foundation. The DQP is a framework for what students should know and be able to do with a degree, regardless of discipline. It highlights five key areas (broad, integrative knowledge; applied learning; intellectual skills; specialized knowledge; and civic learning) that should be part of any degree program. Tuning is a faculty-driven process for articulating discipline-level learning outcomes. It is often less an exercise in creating minimum outcomes than in articulating what is already in practice, allowing groups of experts to fine-“tune” their expectations and make them clear to students, other institutions, and employers.
While promising, these attempts to forge agreement on defining and assessing what students are learning remain limited in scope. But federal policy can help catalyze such efforts by using financial aid, a huge incentive for institutions, to pay for learning. Today the multibillion-dollar federal financial-aid system runs on the credit hour. And it gets only what it pays for: time.
How will the federal government determine what students should be learning and how we should be measuring their success? It won’t—and it shouldn’t. What it can do is establish, with input from the field, broad guidelines to support a rich diversity of meaningful learning expectations. These guidelines should include clear, externally validated learning expectations and assessments. They should not be the same across the board, but everyone should at least know what students are expected to—and actually do—learn.
As higher education becomes increasingly necessary and expensive, measuring time rather than learning is a luxury that students, taxpayers, and the nation can no longer afford. While Carnegie’s free money for pensions dried up long ago, the federal government is spending hundreds of billions of taxpayer dollars to pay for time-based credits and degrees of dubious value.
Paying for what students learn and can do, rather than how or where they spend their time, would go a long way toward providing students and the nation with desperately needed, high-quality degrees and credentials.