Back in 2000, I was a state budget officer in Indiana, specializing in education. One day, word came that some new outfit in California was planning to release a "higher-education report card." We would be judged and compared with other states on measures like "affordability," "participation," and — particularly puzzling — "learning." Naturally we were suspicious. Who did those people think they were?
When the report arrived, our suspicions were confirmed. The grades were all middling, mostly "C minuses" and the like. Therefore the methodology was clearly flawed.
We had not one but two Big Ten universities, after all, plus Notre Dame! True, it had been 13 long years since the last NCAA basketball championship, and the coach kept making headlines for the wrong reasons. But over all, higher education was a point of Hoosier pride. Fortunately, everyone knew such reports come and go. This too would pass.
We were wrong, of course — about the need to improve our higher-education system and about the longevity of the report card. Eight years later, the National Center for Public Policy and Higher Education is about to release the fifth version of its "Measuring Up" report, issued every two years to grade all 50 states. Much has changed since then, but "Measuring Up" hasn't. And that's been the key to its success.
The report card began with a premise that still amounts to heresy in some quarters: The American higher-education system isn't as great as people would like to believe. In 2000, the top tiers of students and institutions were doing well, as they always had — and still do. But for everyone else, things were dicey. As James B. Hunt Jr., then the governor of North Carolina and then, as now, chairman of the national center, said at the time, "Despite the accomplishments of American higher education, its benefits are unevenly and often unfairly distributed." A lot of students were graduating from high school without the course work they needed to begin college — or weren't graduating at all. For the typical family, college costs made up a large and growing share of income. And despite some progress in getting students into college, many were never finishing with a degree. Thus, the rash of C grades and worse.
The most important "Measuring Up" grade, moreover, was the one that states didn't receive. For "Learning," every state received an identical I, for "Incomplete." Students and taxpayers were spending hundreds of billions of dollars every year to educate the next generation of citizens and scholars. Yet one could find little evidence of how much those students were learning. No state could produce reliable, comparable data to indicate whether college learning was good, bad, or somewhere in between.
The initial rollout of "Measuring Up" was a success, leaving its authors with the unenviable challenge of planning an encore. In the think-tank world, where I now work, the only thing harder than grabbing the public's attention is keeping it. It would have been easy enough for the national center to "improve" its methodology every two years, creating sudden shifts in the relative position of states. We all know the drill: The winners crow, the losers complain, and every reporter gets a new story to tell.
That didn't happen. Instead, the "Measuring Up" criteria were kept intact, so states could mark their progress over time. The national center also insisted on repeating itself in talking about the things that matter most.
For example, in 2000, Peter T. Ewell, vice president of the National Center for Higher Education Management Systems and a key project adviser, wrote an essay about the 50 "Incompletes" that states had received for learning. Two years later, when the states still hadn't reported enough data to merit grades, he wrote another one. In 2004, Ewell was at it again, noting that five states — Illinois, Kentucky, Nevada, Oklahoma, and South Carolina — had earned a "Plus" grade after participating in a pilot project aimed at combining data about adult literacy and professional licensure with results from tests aimed at measuring problem solving, writing, and quantitative skills. In 2006 — you guessed it — Ewell reported on the "slow but steady evolution" toward gathering real data about what college graduates know and can do. I'm looking forward to reading his 2008 essay on learning. I hope the news is good, but I know that if it isn't, he'll be sure to say so.
In the long run, the only true measure of "Measuring Up" will be progress on the outcomes it assesses. The National Center for Public Policy and Higher Education graded states on affordability, completion, and other measures because it wanted those things to improve. Have they?
In some cases, they have not. The average level of tuition and fees at public four-year universities has increased by 50 percent since 2000, after adjusting for inflation. At the same time, real median household income has fallen. The national center's president, Patrick M. Callan, was blunt about it in 2006: "College affordability has declined dramatically." More students are taking college-prep courses, but high-school graduation rates are flat. College-going has nudged upward in recent years, but the overall percentage of the adult work force with higher-education credentials has been stagnant, even as our international competitors improve on that measure by leaps and bounds. I strongly suspect that in the 2008 report card, most states will get yet another I for learning.
But that doesn't mean "Measuring Up" hasn't mattered. Far from it. Over the last eight years, slowly but surely, the "Measuring Up" agenda has become the national higher-education agenda.
Since the first report cards were released, huge philanthropies like the Bill & Melinda Gates Foundation have taken up the cause of high-school preparation, working to boost graduation rates, increase academic rigor, and ensure that all students can go to college and succeed there if they choose. Politicians of all stripes are calling, with increasing urgency, for more college graduates. The U.S. secretary of education has made a concerted push to increase higher-education transparency and accountability. And when Congress convened this year to reauthorize the Higher Education Act, affordability was foremost on everyone's mind.
Real progress has also been made on the contentious issue of measuring learning. New measures like the Collegiate Learning Assessment have sprung up to fill the void of information about cognitive outcomes. Many two-year and four-year institutions have adopted well-crafted surveys of student engagement and teaching practices. Accreditors are gradually pushing institutions to demonstrate evidence of student achievement, and the major national associations of public universities have created a Voluntary System of Accountability whereby hundreds of institutions will soon report actual, concrete student results. College learning questions are gradually shifting from "if" and "why" to "how" and "when."
Pat Callan, Jim Hunt, and their colleagues don't claim credit for all of that, of course, and for good reason: They weren't working alone. The task of improving a long-established, overly comfortable higher-education system is neither a sprint nor a solo endeavor.
But people don't get enough credit for prescience and persistence in this world, so let me be clear: In retrospect, "Measuring Up" was right about everything. Right about the problems our higher-education system faces, and right about the best way to fix them — not through denunciation and politicization, but by a steady insistence on truth-telling, and by standing ready to help when education and policy leaders decide to act.
More students are enrolled in American colleges and universities today than ever before. Too many have arrived unready, and too few are on track to earn a degree. Meanwhile, a new generation of leaders is coming to Washington and the nation's state capitols. Sooner or later they'll turn their eyes to higher education. Fortunately for them — and for everyone — the new "Measuring Up" reports will be ready to show them just how much work still needs to be done.
Kevin Carey is research and policy manager of Education Sector, an independent think tank in Washington.
http://chronicle.com — Section: Commentary, Volume 55, Issue 15, Page A88