Douglas Mulford worried when his lab course moved to remote instruction this past spring. Mulford, a senior lecturer of chemistry at Emory University, had worked out a system for giving in-person exams in large classes. But with his 440 students taking their final online, he feared, it would be much easier for them to cheat.
So Mulford set out to protect his test. He looked into lockdown browsers, which limit what students can do on their computers during a test, but concluded they were pointless: Most of his students had a smartphone, too, he figured, and could simply consult it instead. He thought about using a proctoring service, but wasn’t convinced it could handle this volume of tests on such short notice. So he settled on what he calls “Zoom proctoring,” having students take their final in a Zoom room, with videos turned on, while a TA watched them and recorded the session.
Mulford also appealed to his students’ ethics, talking with them about the university’s honor code. The first question on his exam asked them to affirm it.
His approach “failed spectacularly,” he says. After being tipped off that students had cheated, he looked at their activity in the learning-management system and was able to see that close to 20 percent of them had opened course materials during the closed-book test. And that, he realized, didn’t account for anyone who broke the rules some other way.
Mulford takes academic dishonesty seriously, but he can also see why it happened. “I was trying to fit the traditional model of testing into an online environment,” he says, “and it just didn’t work.” Under the testing conditions he set, he adds, “the temptation became so high; the barrier to cheating became so low.”
It was a deflating experience, and one that has been dragging on for months as 80-some student cases have been winding their way through Emory’s academic-integrity process. Mulford figures he’s spent about 50 hours dealing with this one episode of cheating. That’s time he could have used to help students or improve his course.
Mulford’s experience trying to anticipate, discourage, and react to his students’ efforts to cheat has been a common story since classes shifted online in March. Cheating has always aroused strong and often opposing reactions among professors. But as pandemic teaching stretches into its eighth month, and many professors continue adapting to online teaching, they’re more divided than ever.
On one side are professors who consider themselves pedagogically progressive. They’ve adopted the perspective that many prominent teaching experts have been encouraging: Trust your students, and find creative ways to assess their learning. Yes, some students will cheat. That’s unavoidable, and policing them shouldn’t be the North Star of anyone’s teaching. Especially not during a crisis that has put students under tremendous pressure.
To professors on the other side, who tend to be more traditional, that advice falls flat. In some corners of a college, especially large-enrollment courses in quantitative disciplines with highly structured, sequential curricula, exams are seen as essential to learning. Cheating undermines their value. And no one seems to have figured out how to stop it.
Nothing instructors can do will eradicate cheating. There will always be some students who plagiarize papers, collaborate on homework, or copy someone’s test. Scholars who study cheating agree that the goal, really, is to create conditions under which most students won’t be too tempted.
Professors can create assignments that are harder to cheat on. They can take pressure off their students. And they can communicate with them about academic integrity, explaining why doing their own work is so important.
To academics, the reasons are obvious. One of the main things a college education is meant to impart is the ability to solve complex, novel problems. Most good teaching practices focus on building this skill.
But as Mulford’s experience illustrates, the temptation can still win out. To students, classes can sometimes seem like a means to an end. A lifetime of schooling has conditioned them to see their task as finding an answer that someone else has already figured out, with a good grade being the ultimate goal.
It’s also worth considering why students cheat in the first place. Among the strongest risk factors, experts say: stress and disconnection. That, unfortunately, describes many students’ current experience of college. So when the virus pushed courses online, some instructors expected cheating to spike — especially on exams.
Now that they’re taking tests at home, students can consult their books or notes. They can collaborate, or Google a forgotten fact or formula. They can find previous versions of the test — with the answers — online. They can submit test questions — no matter how clever or new — to tutoring services like Chegg and get an answer from an “expert” in minutes. This, to many professors, is especially egregious. It’s not cutting corners; it’s paying someone else to run the race.
While there aren’t any hard data showing that cheating has increased since the pivot online, says David Rettinger, a professor of psychology and director of academic integrity programs at the University of Mary Washington, it may well have. Either way, he says, higher ed was probably “naïve” about exam cheating before. Most in-person tests, Rettinger says, are not proctored especially well. It’s simply much easier to tell that students have copied from a website than from a classmate’s paper.
Ask the staff of your college’s teaching center, and you’ll probably hear something like this: You can’t police your way out of cheating, so here’s what we recommend: Move away from that high-stakes, traditional exam. Consider other ways you can assess what students are learning. You might even find out that those other assessments — a project, a paper — lead students to learn more.
If you really require a traditional test, the teaching-center crowd will add, there are a bunch of things you can do to mitigate cheating — and many of them have the benefit of enhancing learning, too. Ask questions that require students to apply what they’ve learned and show their work. Maybe ask them in another question or a short video to reflect on how they got to that answer. Consider oral exams. Or make use of students’ ability to look at resources they normally couldn’t and make your test open note, open book, even open classmate. That’s closer to what they’ll face in the workplace: College graduates will rarely have to solve problems in an hour during which they have no access to the internet or other people.
Trust your students, the pedagogical progressives advise, and they’ll usually live up to it. But that has not been Ajay Shenoy’s experience. In March, Shenoy, an assistant professor of economics at the University of California at Santa Cruz, relaxed the expectations for his winter-quarter final, making it open note and giving students more time.
That hadn’t been Shenoy’s first impulse. Initially, he thought he might make it harder to cheat by letting students view just one question at a time, and randomizing the order of questions. The test would be timed, and everyone would take it at once.
Then his students started to go home, and home was all over the world. Between time zones and air travel, there was no way he could expect them to all find the same two hours for an exam. Besides, he realized, his students were, understandably, incredibly stressed.
Still, Shenoy required students to do their own work. He even asked them to let him know if they heard about anyone cheating.
After the exam, a couple of students came forward. One had heard about classmates putting test questions on Chegg. Another was pretty sure his housemates had cheated off their fraternity brothers. Alarmed, Shenoy decided to investigate. In his research, Shenoy uses natural-language processing to detect signs of political corruption. So to understand the scope of the cheating, he wrote a simple computer program to compare students’ exam responses. He uncovered an amount of cheating he calls “stunning.”
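The article doesn’t describe how Shenoy’s program worked, but the general idea — comparing every pair of answer texts and flagging suspiciously similar pairs for human review — can be sketched in a few lines. The snippet below is a hypothetical illustration, not his actual code; the student names, answers, and similarity cutoff are invented for the example.

```python
# Hypothetical sketch: flag pairs of near-identical free-response answers.
# Nothing here is taken from Shenoy's actual program; it only illustrates
# the general approach of pairwise text comparison.
from difflib import SequenceMatcher
from itertools import combinations

# Invented example data: one free-response answer per student.
answers = {
    "student_a": "Demand falls because the tax raises the effective price.",
    "student_b": "Demand falls because the tax raises the effective price!",
    "student_c": "A binding price ceiling creates a shortage in this market.",
}

SIMILARITY_THRESHOLD = 0.9  # arbitrary cutoff for "worth a human look"

def similarity(text_a: str, text_b: str) -> float:
    """Return a 0-1 ratio of how similar two answer strings are."""
    return SequenceMatcher(None, text_a.lower(), text_b.lower()).ratio()

# Compare every pair of students and print the pairs that look too alike.
for (name_a, ans_a), (name_b, ans_b) in combinations(answers.items(), 2):
    score = similarity(ans_a, ans_b)
    if score >= SIMILARITY_THRESHOLD:
        print(f"Review manually: {name_a} / {name_b} ({score:.2f} similar)")
```

A similarity score is only a signal, of course; as Shenoy did, an instructor would still need to review flagged pairs and gather other documentation before alleging misconduct.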
It also bothered Shenoy that it seemed to be common knowledge among his students that a number of their classmates were cheating.
“This is the issue when people say you should just trust students more,” Shenoy says. “Even if 99 percent of the students don’t want to cheat, if that 1 percent is cheating — and if everyone else knows about it — it’s a prisoner’s dilemma, right?” Students who are honest know they are at a disadvantage, he says, if they don’t think the professor is going to enforce the rules.
So Shenoy enforced the rules. He investigated 20 cases in his class of 312, and filed academic-misconduct reports for 18. (Those weren’t the only students who cheated, Shenoy says. Through documentation he got from Chegg, he knows many more students turned to the site. But he had time to pursue only students who had submitted questions to it.)
In-person exam cheating, Shenoy had thought, was ineffective, and probably didn’t boost students’ grades all that much — certainly no more than, well, studying more.
But when he compared the grades of students who had cheated with those of their classmates who didn’t, he found that the cheaters scored about 10 points higher on the exam. “I guess it’s possible that the smarter students were also the ones who chose to cheat,” Shenoy says. “But usually, in my experience, it’s the other way around.”
Who’s hurt when students cheat? It’s their loss, some professors will argue. It’s the cheaters who’ve squandered their tuition payment, time, and opportunity to learn the material. Besides, their actions will probably catch up to them eventually. That’s not how Shenoy views it, though.
If cheating leads to a higher grade, says the economist, then cheating is rational. “This was actually quite valuable to the student,” Shenoy says. “At the expense of the other students.”
So Shenoy felt a responsibility. “Part of my reason for putting so much time into pursuing this,” he says, “was just out of a sense of justice for the other students.”
Are exams really worth this much trouble? Jay Phelan thinks so. Phelan, a faculty member in the life-sciences core curriculum at the University of California at Los Angeles, is so pro-exam that he once wrote, with his wife, an education researcher, an opinion piece arguing that teaching to the test is good, so long as it’s a good test.
“Our point was this,” Phelan says, “that when you’re designing your course you figure out: What do you want them to know? How will you figure out whether or not they know it? And then you design your curriculum around that.”
Typically, Phelan says, his tests are half multiple choice — he has 300 students — and half short answer. Those short-answer questions ask students to apply what they’ve learned in the course. But they also require them to demonstrate mastery of the facts by recalling what they’ve learned, and their score depends on doing both things.
Those verbs — recall and apply — carry a certain significance in teaching circles. They appear on Bloom’s Taxonomy, an influential educational paradigm that divides learning objectives into different levels that move up a pyramid. Instructors spend a lot of time talking about how to get students to do work at the top levels, the highest of which involves creating something original. But the base — recalling facts — matters too, Phelan says. Everything else rests on it.
The shift to remote instruction, Phelan says, put a strain on that model. If students can look facts up, giving them points for remembering them no longer makes sense. But Phelan hesitated to give students a different kind of assessment, which he thought would be subjective, or even a test that was exclusively about application. There is information he simply thinks they ought to know.
“You have to actually have a pretty significant amount of understanding of real facts about biology,” Phelan says, “in order to play with them and use them and integrate them with other things that you know.”
This view is common among instructors who teach introductory courses in STEM. Students in these courses will be expected to draw on what they’ve learned in them in subsequent courses. And on the MCAT. And in their working lives. You want your doctor to have mastered basic science, right?
Phelan admits he sometimes feels “defensive” making a case he knows can sound old-fashioned. Still, he believes that “if you’re going to do something interesting and novel, applying something to a new situation, you first have to understand what the concept is.”
Yes, students can find things on the internet now. They’ll continue to use Google after they graduate. “That’s a useful skill,” Phelan says, “but being able to find something and being able to draw upon it from your own brain and then make use of it to think about other things are not the same thing.”
For any professor who agrees with Phelan, the prospect of giving an exam online presented two basic choices: Give up an important piece of assessment, or find a way to watch your students take their tests.
The young woman cries as she recounts why her professor gave her a zero on an exam. She’d initially gotten a B — a good grade on such a hard test, one she’d worked hard to earn, she tells the camera — but because she talked during the exam, the proctoring software flagged her for cheating. She had simply been reading the questions aloud to herself.
This emotional video was posted on TikTok, then widely shared on Twitter, where many instructors added comments expressing anger and dismay. Here was a powerful example of something progressive professors have long argued: Using third-party proctoring services harms students.
The prospect of falsely accusing a student — which speaks to the biases inherent in both human and artificial-intelligence assessments of test-taking behavior — is just one reason, those professors say. They also have concerns about how students’ data will be used. They balk at peering into their private lives, especially during a pandemic some are riding out in difficult home situations. They wonder what happens to students who don’t have the particular devices some proctoring services require, or sufficient bandwidth.
Then there are the pedagogical concerns. Being watched is stressful, and stress makes it harder to perform well on a test. Being watched could be especially stressful for students who are already marginalized, so the practice could exacerbate inequities. And remote proctoring can undermine the relationship between students and the instructor — which is itself an ingredient in learning.
Professors who use proctoring services generally view them as an important layer of test security, something that, in fact, helps give students confidence they’ll be evaluated fairly. Some online degree programs have used them as a matter of course — though in that case students have agreed to this monitoring and presumably have the necessary equipment to participate.
Martha Oakley thinks this debate has been framed all wrong. The real question, says Oakley, a chemistry professor at Indiana University at Bloomington who recently became associate vice provost in the office of the vice provost for undergraduate education, is how instructors can assess students most equitably. Remote proctoring, Oakley says, could actually help here, by giving professors more flexibility in the kinds of tests they can use. Professors might, for instance, feel comfortable giving more frequent, lower-stakes assessments instead of a few big tests if cheating were less of a concern.
This summer, as she prepared for her administrative role, Oakley organized a committee of information-technology experts, instructors who teach courses with hundreds of students, and professors with expertise in equity to make recommendations for assessing students during the pandemic. They did not all see eye to eye.
The group’s recommendations capture that tension by acknowledging the concerns of the progressive and traditionalist camps. “Where possible,” the committee wrote, “instructors should use forms of assessment that do not require proctoring. In many cases, these forms of assessment are more accurate than traditional exams.” The university’s teaching center, the recommendations document notes, is happy to help. Still, the rest of the document suggests a first- and second-choice proctoring service, and gives a set of best-practice recommendations for using them.
The recommendations are meant to speak to what instructors teaching large-enrollment classes are up against right now. “I don’t know any faculty who want to be Big Brother in that way,” Oakley says. “It’s just all that we’ve got.”
Lisa Eytel is well-versed in the alternatives. After classes went online, she did everything she could think of to help her students. When she administered her first exam after moving online last spring, Eytel, a clinical assistant professor of chemistry at Boise State University, gave students from 7 a.m. to 10 p.m. to complete a portion of it on paper, and distributed copies to anyone who didn’t have a printer. Students had a full 24 hours for the other part, a set of multiple-choice questions they answered online. Eytel let them use their books and notes — but made it clear they weren’t allowed to consult the internet, or each other.
They could, however, consult her. Eytel made herself available by email — and also spent nine hours signed into Zoom so that students could drop in and get help whenever they needed it. “They could not ask me how to solve the exact question,” Eytel says, “but we would go over similar questions.”
Students took her up on the offer, and she thought the exam had gone well.
Then, Eytel heard from colleagues in her department that they had caught students cheating on exams. And Eytel’s own questions, one reported, were also up on Chegg.
“That was super-disheartening,” Eytel says — in part because she had caught students in the course using Chegg for their homework earlier in the semester and talked a lot about academic integrity after it happened.
After the exam cheating, Eytel lost a lot of sleep. She scheduled a one-on-one Zoom appointment with each student who had been caught. One student’s parents yelled at her over Zoom. There were “a lot of tears,” she says. “From both students and myself.”
Eytel felt especially bad, she says, because “I had made myself available in so many different ways. It was: If you can’t get on Zoom, you can call me. Or you can email me. I had tried so many different ways to make it accessible for students, and make asking questions, and asking for help from me, feel more OK.”
Some of the same students who had come to her for help, Eytel discovered, had also turned to Chegg.
That confirmed to Eytel that some students felt desperate. She redoubled her efforts to help. Eytel, who is in just her second year of solo teaching, essentially remade her lecture course, which is mostly in person this fall. This time around, she has built “soft” and “hard” deadlines into the class, giving each assignment two due dates. Meeting the first ensures feedback from Eytel within 48 hours. But students can turn work in at any time before the second deadline, set right before they will be tested on the material, without losing credit. That reduces the pressure to cut corners in order to meet a deadline, and provides an incentive to do the work even if it’s late.
As for tests, Eytel rebranded them “knowledge checks.” She reduced their weight in students’ grades. The goal: Lower students’ stress, and put their focus on learning.
Eytel also talked about cheating from the very beginning of the course. She had students read a passage from the university’s code of conduct. She had them describe what academic integrity means to them and why using sites like Chegg is a violation of it. The exercise revealed that not all of her students see using the site as a problem — which gave her an opportunity to talk things out with students before anything had gone wrong. So far, Eytel says, the new approach to her lecture course appears to be successful.
That hasn’t been true in the lab course she’s also teaching this semester. While she did take smaller steps to add flexibility, relieve pressure, and discuss academic integrity, she has caught students in that course using Chegg.
But she hasn’t given up hope. And she’s still trying to find ways to reduce cheating on the exam in her lab course. She plans to use a larger pool of questions, and prevent students from going back to a question once they answer it. And she’ll keep talking with them about academic integrity.
Eventually, Eytel plans to remake her lab course along the lines of what she did with the lecture. The best approach to reducing cheating, she believes, is something both simple and hard to realize, especially during a crisis: better teaching.