“Because each course in General Studies has been approved to meet specific learning outcomes associated with the General Studies curriculum, the course student learning outcomes listed on the syllabus must include learning outcomes that align with the identified General Studies learning outcomes and include assignments that will serve as embedded assessments for these learning outcomes. Courses must also include learning outcomes that align with the contribution the course makes to other program learning outcomes (e.g., if the course is a required course in the major). Instructors may include additional course learning outcomes that align with individual instructor learning goals.”
— Components required for General Studies courses, General Studies Syllabus Guidelines, Center for University Teaching, Learning, and Assessment, University of West Florida.
That statement might easily be mistaken for a drinking game centered on repeating the words “learning outcomes,” but it actually exemplifies a significant shift in the way students are assessed at universities across the country. It also represents an important challenge to the academic freedom of professors to conduct their courses as they deem appropriate.
Assessing student performance has always been at the heart of the academic enterprise. Historically, this kind of professionally grounded assessment has been within the purview and control of the individual professor. Under this familiar system, a professor uses her expertise to administer the usual assortment of tests, papers, quizzes, presentations, journals, portfolios, etc., to gauge the progress of students through a course or program.
Recently, however, a new form of assessment has emerged that seeks to standardize the process by creating rubrics, comparability matrices, uniform competencies, and standardized student-learning outcomes. Proponents of the assessment movement contend that more-precise measures of the impact of a program or course on student learning are needed to allow comparisons among programs and courses. There are, they argue, too many individual idiosyncrasies and too much variability among instructors, courses, and grading standards for anyone to know what students are actually learning.
The move toward standardization has been promoted by edumetricians in fields like educational psychology, who have longed to convert the messy art of teaching and learning into a more exact science, and by eager administrators who argue that, in an age of tight resources, better data are needed to make important decisions about spending and priorities.
These new forms of assessment are backed by a “competency cabal” consisting of regional accrediting bodies and groups like the RAND Corporation, the American Enterprise Institute, and the Association of American Colleges and Universities, each spinning out its own reports, sometimes financed by market-oriented reform groups such as the Lumina and Gates foundations, with sideline support from Arne Duncan, the U.S. Department of Education, and various private edu-companies.
This increasing standardization will sound familiar to teachers in elementary and secondary education who have seen this same cry for standards, competencies, and learning outcomes erode their autonomy and power over the curriculum.
Indeed, such standardization is largely what Common Core is about. Its “value-added measures” attempt to determine the precise impact of an individual teacher on the learning of an individual student.
Those measures can, the argument goes, be used to punish teachers deemed ineffective and to reward those deemed effective. They were influential in Vergara v. California, the recent California Superior Court case that essentially found the state’s teacher-tenure protections unconstitutional.
But despite the emphasis on these new assessment models, as John W. Powell, Diane Ravitch, Moshe Adler, and others have pointed out, there is no reliable research showing that value-added measures of learning outcomes actually improve education.
Despite the exuberant emphasis on measurement and “data-driven decision making” as means to solve problems in education, there is little understanding of the impact of incessant measurement and no real assessment of the impact of assessment on the education of students.
So, if there is scant evidence that centralizing the curriculum and standardizing assessment produce better students or improve education (in fact, the Finnish model of elementary and secondary education suggests the opposite), what is the push toward assessment, competencies, and learning outcomes really about? The answer to this question lies not in pedagogy but in politics or, more precisely, in the politics of pedagogy. Standardizing and centralizing assessment is a politically driven attempt to reconfigure the very notion of public education and relatively autonomous public institutions by creating quasi-marketplaces in which universities are expected to compete for funds.
Such an approach appeals to the managerial and market-oriented centralizers who contend that if only the college could be run through a top-down chain of command, without those autonomous professors thwarting the managers’ efforts, then education would hum along with the efficiency of a well-run business. It also appeals to those who want to make the university a servant of business interests as directed by the state.
The cycle of market-based higher-education reform that has become prominent in the United States, Britain, and elsewhere, from which these current forms of assessment have emerged, represents a one-two punch to higher education as a relatively independent social institution.
First comes a rollback effect that withdraws state support from higher education, supposedly to reduce tax burdens and generate “sustainability.” This forces a cost-cutting frenzy that downsizes the full-time faculty and adds more adjuncts. Next come efforts to compensate for the resulting loss of quality by erecting an elaborate monitoring and accountability system to oversee and correct for the disruptions caused by the first round of cuts.
This leads to more administrators to oversee the “quality-control systems” and sets off a new spiral that drives up costs, requiring a new round of faculty reductions, increased oversight and monitoring, and a further power grab. The cycle continues until you end up with virtual, low-faculty models such as Capella University’s FlexPath or Southern New Hampshire University’s College for America, where the faculty has been largely eliminated and replaced with platforms and standardized competencies, somewhat like online corporate training programs.
This does not mean that assessment is unnecessary. But who will control that assessment, and what purpose will it serve? Will it be driven by centralized systems overseen by administrators as directed by the state, or will it be determined by the professionals who create and disseminate knowledge? Will assessment be used to guide and gauge the development of individual students, or will it be used as part of a punish-and-reward system linked with performance-based funding? The problem is not assessment itself. The problem is assessment’s being taken out of the hands of the individual professor, standardized, and linked to larger auditing regimes.
As with high-stakes testing in elementary and secondary education, linking assessment to government oversight magnifies the importance of those assessments and reifies knowledge into something that one must merely memorize and report back on a test.
In moving to such a system of assessment, universities risk not only churning out more mind-numbing directives like the one cited here but also adopting the false authoritarian dream that rationalization, centralization, and standardization will always make things better, when the opposite is far more likely.