One of the problems on college campuses today is that students study for exams and that faculty encourage them to do so.
I expect that many faculty members will be appalled by this assertion and regard it as a form of academic heresy. If anything, they would argue, students don’t study enough for exams; if they did, the educational system would produce better results. But this simple and familiar phrase—"study for exams"—which is widely regarded as a sign of responsible academic practice, actually encourages student behaviors and dispositions that work against the larger purpose of human intellectual development and learning. Rather than telling students to study for exams, we should be telling them to study for learning and understanding.
If there is one student attitude that almost all faculty bemoan, it is instrumentalism. This is the view that you go to college to get a degree to get a job to make money to be happy. Similarly, you take this course to meet this requirement, and you do the coursework and read the material to pass the course to graduate to get the degree. Everything is a means to an end. Nothing is an end in itself. There is no higher purpose.
When we tell students to study for the exam or, more to the point, to study so that they can do well on the exam, we powerfully reinforce that way of thinking. While faculty consistently complain about instrumentalism, our behavior and the entire system encourage and facilitate it.
On the one hand, we tell students to value learning for learning’s sake; on the other, we tell them they’d better know this or that, or they’d better take notes, or they’d better read the book, because it will be on the next exam; if they don’t do these things, they will pay a price in academic failure. This communicates to students that intellectual inquiry, academic exploration, and the acquisition of knowledge make up a purely instrumental activity, one designed to ensure success on the next assessment.
Given all this, it is hardly surprising that students constantly ask us if this or that will be on the exam, or whether they really need to know this reading for the next test, or (the single most pressing question at every first class meeting of the term) “Is the final cumulative?”
This dysfunctional system reaches its zenith with the cumulative “final” exam. We even go so far as to commemorate this sacred academic ritual by setting aside a specially designated “exam week” at the end of each term. This collective exercise in sadism encourages students to cram everything that they think they need to “know” (temporarily for the exam) into their brains, deprive themselves of sleep and leisure activities, complete (or more likely finally start) term papers, and memorize mounds of information. While this traditional exercise might prepare students for the inevitable bouts of unpleasantness they will face as working adults, its value as a learning process is dubious.
According to those who study the science of human learning, learning occurs only when there is both retention and transfer. Retention is the ability to actually remember what was presumably “learned” more than two weeks beyond the end of the term. Transfer is the ability to use and apply that knowledge for subsequent understanding and analysis. Based on this definition, there is not much learning taking place in college courses.
One reason is that learning is equated with studying for exams and, for many students, studying for exams means “cramming.” A growing body of research consistently reports that cramming (short-term memorization) does not contribute to retention or transfer. It may, however, yield positive short-term results as measured by exam scores. So, as long as we have relatively high-stakes exams determining a large part of the final grade in a course, students will cram for exams, and there will be very little learning.
An indication of this widespread nonlearning is the perennial befuddlement of faculty members who can’t seem to understand why students don’t know this or that, even though it was “covered” in a prior or prerequisite course. The reason they don’t know it is that they never learned it. Covering content is not the same as learning it.
Instead, the way we structure the assessment of our students should rest on two essential approaches: formative assessment and authentic assessment. Used jointly, they can move us toward a healthier learning environment, one that avoids high-stakes examinations and intermittent cramming.
Formative assessments allow students both to develop their abilities and to assess their progress. In this sense, they combine teaching and learning activities with assessment. Sometimes called classroom-assessment techniques, they do not require formal grading; rather, they give students an opportunity, after completing an exercise or assignment, to see what they did well and where they need to improve.
Authentic assessments involve giving students opportunities to demonstrate their abilities in a real-world context. Ideally, student performance is assessed not on the ability to memorize or recite terms and definitions but on the ability to use the repertoire of disciplinary tools, be they theories, concepts, or principles, to analyze and solve a realistic problem that they might face as practitioners in the field.
Such an approach to assessment lends itself to treating the “open book” as a toolbox from which students can draw. Professional or disciplinary judgment rests on the ability to select the right tool and apply it effectively. If there is any preparation, it consists of reviewing the formative assessments that preceded the graded evaluation.
This all makes educational sense, and some enlightened colleges, while not necessarily adopting these assessment approaches, have already come to the realization that final exams do not advance student learning. Professors at Harvard, for example, may now choose whether to give final exams, and increasing numbers of professors are using alternative techniques.
But that is hardly enough. The education system is desperate for a new model, and higher education is the best place to start because postsecondary faculty have more flexibility to experiment with alternative pedagogical techniques than primary and secondary teachers do. We can use that flexibility to make a difference in the way students study, learn, and understand.
Yes, our mantra of “studying for exams” has created and nourished a monster—but it’s not too late to kill it.