Some of the hallmarks of No Child Left Behind are creeping into higher education.
The 2002 law was intended to hold elementary and secondary schools accountable for improving the academic achievement of all students. It has come to be reviled by many teachers for what they see as a narrowing of the curriculum to the material covered on standardized tests, and for punishing schools for their students’ performance.
Professors often invoke the law in objecting to calls for increased oversight—which they fear will come from the federal government or accreditors—as a cautionary tale of accountability run amok. But it is in the states, some of which are requiring colleges to demonstrate what their students are learning, that the real action is taking place.
The efforts under way in a dozen states, most still in their early stages, seek to answer mounting concerns about academic rigor in college. Two states are already using student surveys and standardized tests to document learning—and attaching financial rewards to the results.
Missouri, for example, awards a small share of its support to colleges on the basis of how well their students score on standardized tests. Pennsylvania includes data from the deep-learning-scales portion of the National Survey of Student Engagement in its formula for performance-based appropriations. South Carolina is developing its own metrics of educational quality that will be tied to state support.
Other states are bolstering their oversight of what happens in class. Professors who teach classes of 300 or more at Iowa’s three public universities must report to administrators on how they assess learning; the information is sent to the state’s Board of Regents.
The effort that may prove to be the most consequential, a nine-state consortium led by Massachusetts, is seeking to avoid the types of flaws that critics have seen in the 2002 law. Professors and state and college officials are adapting an existing faculty-developed tool to assess students’ work, which they hope can be rated on a common scale and compared across disciplines, colleges, and state systems.
“We are a publicly supported set of institutions,” says Richard M. Freeland, commissioner of higher education in Massachusetts. “We need to be accountable to the state for our outcomes.”
Money for Scores
States’ growing interest in student learning reflects several trends. The national effort to produce more college graduates, often referred to as the completion agenda, has raised worries that the pressure to push more students through the education pipeline will cause academic quality to diminish.
Meanwhile, the formulas by which many states support their colleges have grown increasingly sophisticated, with money being awarded on the basis of more and more data points.
Several college provosts in Missouri said they suggested including student learning in the state’s new performance-based formula. An externally validated standardized test, many of them argued, would indicate rigor more objectively than an internal measure, like grade-point average, which can be inflated.
“We wanted to make sure there was quality assurance along with performance funding,” says Douglas N. Dunham, provost of Northwest Missouri State University. Hypothetically, he says, “there could be a temptation to lower standards in order to get other categories elevated.”
Missouri’s colleges can choose the criteria on which they want their students’ learning to be judged. Six of the institutions, including the University of Missouri, use the results of professional licensure tests, in such fields as accounting, nursing, and teaching, to gauge how well their students fare in their majors.
The University of Central Missouri, Northwest Missouri State, and Missouri Western State University have chosen to be evaluated on the basis of their general-education offerings. They opted for the same standardized test: the Educational Testing Service’s Proficiency Profile, an assessment of mathematics, reading, and writing.
The tests are not a result of the new system of performance-based appropriations. Northwest Missouri State has used the Proficiency Profile, or one of its precursors, since 1994 as an externally validated measure of learning. All of its students must take the test early in their junior year.
Many professors there had long wondered why the university bothered with the expense and logistics of administering the test, says Joel D. Benson, a professor of history and president of the Faculty Senate. “The only thing that we were sort of in opposition to,” he says, “was why are we going through all this if it isn’t going to mean anything?”
This year it will mean $186,000 in new money from the state, because more than 60 percent of Northwest Missouri State’s students scored above the median on the test.
The university tries to raise the stakes for students to encourage them to take the test seriously. Those who fare poorly on the two-hour test must take it again. Students who try harder, the thinking goes, provide institutions with better data. Otherwise, says Mr. Dunham, “you’ll get students coming in and filling out boxes.”
Other institutions attach tougher consequences—at least on paper. The University of Central Missouri uses the Proficiency Profile as an exit exam. To graduate, students must score at least 425 out of a possible 500, slightly below the lowest level of proficiency.
About 98 percent of Central Missouri’s students pass the test, says Carole E. Nimmer, director of testing services, and they are allowed to take it as many as four times. No one has been denied graduation because of the test.
The importance that standardized tests have assumed troubles some professors, says Mr. Benson. Faculty members do not know what is on the Proficiency Profile, he notes, and so they don’t tailor their class material to it; hence no one expresses much concern about teaching to the test.
Mr. Benson points to a larger context as well: Missouri’s support for higher education has yet to recover from recent cuts. Therefore additional money, even if it is linked to performance-based appropriations, is better than nothing.
“You have to recognize there are political realities,” he says. “It may be a standardized test, but it’s a standardized test that supports general education.”
Other Missouri institutions, like Truman State University, use a combination of internally developed rubrics and standardized tests to evaluate their students’ success. Troy D. Paino, president of Truman State, sees value in the assessments, which he says have sparked a broader conversation on campus, particularly about critical thinking.
But he worries about the embrace of performance-based appropriations, which can quickly reward and punish on the basis of data that often reflect slow-moving trends.
“People want to find some kind of silver bullet that’s going to improve the quality of education, and they think performance funding is going to turn things around,” he says. “It’s not going to work that way.”
The Power of Numbers
Missouri is one of the nine states in the consortium being led by Massachusetts. The others are Connecticut, Indiana, Kentucky, Minnesota, Oregon, Rhode Island, and Utah.
Officials in Missouri hope the group’s efforts will produce a tool to measure learning that surpasses the current array of standardized tests.
The group’s starting point is the Value rubrics of the Association of American Colleges and Universities. They emerged as a faculty-developed response to the concerns about quality raised by the higher-education commission created in 2005 by the secretary of education, Margaret Spellings. More than 1,000 institutions use the rubrics, says the association, which is also participating in the consortium.
The consortium is adapting three of the association’s rubrics: in critical thinking, quantitative reasoning, and writing. Those skills are divided into component parts, each of which can be judged by faculty on a scale of one to four. The hope is that each category within those rubrics, which reflect qualitative judgment calls, can be converted to a quantitative measure like a point system, which could be generalized and compared across departments, institutions, and states.
The virtue of the rubrics, say participants in the effort, is that they reflect common standards but are based on the work that students actually produce. Standardized tests may offer uniformity, but they are not tied to the curriculum.
“If we can make this work, we will have accomplished something rather large in higher education,” says Patricia H. Crosson, senior adviser for academic policy at the Massachusetts Department of Higher Education and a professor emerita of higher education at the University of Massachusetts at Amherst.
The consortium’s efforts could accomplish several goals at once, says Carol Geary Schneider, president of the association. Higher education would have a clear, understandable way to describe learning that would be more nuanced and meaningful than standardized tests, while also giving faculty members tools that truly help improve teaching.
“We’re trying to wean higher education from the simple and often deceptive number,” she says.
But numbers, even those built on faculty-vetted qualitative judgments like the rubrics, have a way of acquiring their own force. Once a number starts to be treated as a definitive measure of truth, it is tempting to tie it to other numbers—like a dollar value.
Mr. Freeland, the Massachusetts higher-education commissioner, envisions the rubrics’ numbers one day feeding his state’s performance-based formula, though that day is far off.
“It is just so rational,” he says, “to link performance to at least a significant component of your budget.”