In 1963, a doctoral student in sociology at Columbia University conducted the first major survey of cheating in higher education in the United States. William J. Bowers sent surveys to roughly 100 institutions, asking students whether they had engaged in any of 13 academically dishonest behaviors in their college courses. (Read Part 1 and Part 2 of this series.)
Bowers tallied the results in three different ways, but the most straightforward figure he presented in his published study, Student Dishonesty and Its Control in College, was that 75 percent of the surveyed students admitted to cheating at least once in their college careers.
If all you know about cheating rates in higher education comes from reports in the popular press—or from the grumpy professor in your department who likes to complain about the declining moral values of his students—that date and number bear repeating: in 1963, 75 percent of college students admitted to cheating at least once in their college careers.
The numbers have not changed much since then. The lead researcher on this topic in the United States today, Donald L. McCabe, working with a number of co-authors, recently published Cheating in College: Why Students Do It and What Educators Can Do About It, a book that includes the results of close to 10 years of Web surveys on cheating in higher education. Roughly following Bowers’s methods, McCabe and his fellow researchers conducted surveys from 2002 until 2010, asking students to self-report whether they had engaged in certain cheating behaviors.
The results, which include responses from approximately 150,000 students at institutions of all types, show that between 60 and 70 percent of respondents admitted cheating. That’s lower than the numbers reported by Bowers in 1963. However, McCabe and his co-authors argue that lower rates of reporting and different data-collection techniques—the use of the Web, in their case, versus paper surveys in Bowers’s case—should make us wary of assuming that cheating rates have really decreased.
Whether we should trust the numbers themselves or heed McCabe’s cautionary interpretation of them, we certainly do not have reliable evidence that rates of cheating have increased in recent years, despite grumpy colleagues or scare stories in the popular press. We might consider that a positive development.
On the negative side, students have been cheating in higher education at extremely high rates for the past 50 years. If you are standing in front of a class of 20 students, the percentages reported by both Bowers and McCabe would suggest that 12 to 15 of those students have cheated at least once in a college course. We should all find that number disturbingly high, and we should all want to work together to reduce cheating.
Most research on how to respond to cheating in higher education has focused on strategies proposed by student-affairs and academic-integrity offices or by campus committees on the subject. While I believe we absolutely need the creative ideas of those groups, the persistence of high cheating rates over the past 50 years suggests that we need to do more. As I have been arguing in Parts 1 and 2 of this series, the time has come for faculty members to take a more active role in considering how the design of their courses, their assessments, and their daily classroom practices may help to reduce or induce cheating.
In last month’s column, I suggested that one factor in course design that may affect cheating rates is the frequency and nature of the exams and assignments. When students have just two or three opportunities to earn their grade in a course (e.g., two midterms and a final exam), each of those assessments carries intense pressure and may induce students to try to succeed by any means necessary, including cheating. Students in such courses also have little or no opportunity to see whether they know the material well enough to succeed on those high-stakes exams. The resulting fear and anxiety may also lead to higher rates of cheating.
By contrast, offering students multiple opportunities to earn their grade, while keeping the penalties for cheating consistent and substantial across all assessments, reduces the potential gain from cheating on any given task and increases its relative risk. A varied mix of tests also gives students a clearer sense of their knowledge of the course material and their ability to succeed, which can help them make better study decisions and prepare more effectively for the higher-stakes assessments.
But the real payoff for frequent, low-stakes assessments—and perhaps the more substantive reason that they can help us reduce cheating—is that they increase learning. The best defense we have against cheating, I would argue, is student learning. If students have an interest in learning the course material, and are provided with all of the tools they need, cheating becomes a superfluous option.
In this series I have been focusing on the second of those two elements: how to provide students with the tools they need to learn. The past few decades of research in cognitive psychology have given us excellent evidence that frequent, low-stakes assessments help students learn the course material much more effectively than infrequent, high-stakes assessments—and even much more effectively than the typical study behaviors of highlighting texts, reviewing notes, or rereading the books.
Indeed, we typically think of quizzes and exams as measuring student learning, and of study behaviors as producing it. But an extremely robust body of research demonstrates that taking frequent quizzes and exams actually produces learning, and does so far more effectively than studying or reviewing notes.
In one fascinating experiment, conducted by Henry L. Roediger III and Jeffrey D. Karpicke, and reported in Science magazine, the participants were divided into four groups and asked to study 40 English-Swahili word pairs. All of the groups had four study sessions, with each session followed by a test of their recall of the word pairs. A week later, the participants came back to the lab to be tested on their long-term recall.
In one of the groups, the subjects were given all 40 word pairs to memorize in their four study sessions, and then took a test on all of the word pairs after each session. In another group, the subjects took the same four tests on all 40 word pairs, but any pair that they correctly identified on one test was dropped from the list of word pairs they were given to memorize in their subsequent study periods. The results? The two groups performed almost exactly the same both on the tests and, most important, on the “final exam” they were given a week later when they were brought back to the laboratory to test their long-term retention.
In other words, the key to their learning—and especially their long-term retention—seemed to be the repeated testing, not the studying. As Karpicke and Roediger put it, the experiment “shows the powerful effect of testing on learning”: Repeated testing “enhanced long-term retention, whereas repeated studying produced essentially no benefit.” (To read the original Science article, you must click on an abstract and then register on the Web site. But you can read a fuller description of the study here.)
Karpicke and Roediger’s experiment provides a concise and elegant demonstration of the “testing effect,” but that phenomenon has been demonstrated amply in a variety of other contexts, including in actual courses at a major research university. Other researchers have pushed it beyond the memorization of simple word pairs, up to and including the kind of complex information and concepts that we use in our courses.
The explanation for the testing effect involves delving more deeply into the research on human memory, some features of which I have explored in a previous series of columns, and so will not repeat here.
But the key takeaway: The testing effect is not limited to “testing.” What seems to make the difference is providing the opportunity for students to retrieve and rehearse—remember and articulate—information that they are trying to learn. The more such opportunities you provide to your students, the more you help them seal up their long-term recall of the material.
Retrieval and rehearsal certainly can come in the form of tests and quizzes, but they can also come in a wide variety of other forms. A simple and easy approach: End a class period with a simplified version of Thomas A. Angelo and K. Patricia Cross’s famous “minute paper.” With five minutes left in class, ask students to close their notebooks, take out a half sheet of paper, and write down the most important concept (or two, or three) that they learned in class that day. That simple exercise, repeated frequently throughout the semester, would give students a substantive boost in their efforts to remember key concepts.
So we know from research in cognitive theory that frequent assessments help improve long-term retention of course material. My own research on academic dishonesty suggests that frequent assessment, coupled with a firm and consistent academic-integrity policy, will help reduce cheating. That happy convergence, I hope, can point the way toward new lines of research on academic integrity: ones that consider how course design and teaching practices can help reduce cheating and increase learning.
We will always have some students who cheat, for reasons that are beyond all of our control. But I also suspect that we have more control over this problem than we have acknowledged thus far, and that future research on cheating could help us further identify specific features of a learning environment—frequency and nature of assessment, classroom practice, teacher behaviors—that seem to induce or reduce cheating.
Such research may provide teaching faculty members with new avenues to help lessen the problem of cheating in higher education—and, more important, may also help improve our students’ learning in the process.