Last month The Review published an opinion piece in which a Columbia University undergraduate detailed how his classmates were using ChatGPT to write their essays — and doing so in ways that couldn’t be detected. The widely read piece exposed just how quickly the artificial-intelligence tool, which was released a little over six months ago, had made longstanding academic-integrity policies obsolete.
Although the appearance of ChatGPT has brought conversations about AI into the mainstream, some academics have been thinking about the effects of AI on higher education for years. Recently The Chronicle held a virtual forum with three experts about the ethical issues surrounding ChatGPT and other AI technologies. The forum, which I hosted, included Tricia Bertram Gallant, director of the academic-integrity office
at the University of California at San Diego; Sarah Elaine Eaton, an associate professor in the Werklund School of Education at the University of Calgary, in Canada; and Thomas Lancaster, a senior teaching fellow at Imperial College London. This transcript of our conversation has been edited for length and clarity.
Ian Wilhelm: How will AI change how we assess students?
Thomas Lancaster: When students are in college, we need to make sure they have the foundational skills to rely upon later in their academic careers and later in their lives, because they can’t replace themselves with machines all of the time. If you are training a doctor, you need to know that they understand certain things about the human body and don’t have to go and look it up every time. We can start thinking about a more authentic style of assessment where students work on more open-ended tasks, like the ones they would do in the workplace, and these tasks might use artificial intelligence. There is going to be a skill to working out which AI system is appropriate. There’s going to be a skill to keeping up to date with changes in the technology and knowing where it gets things wrong. We can’t assume that just because everything has always been assessed in one way, it’s going to be the right way for the next 10 years and beyond.
Tricia Bertram Gallant: Even though generative AI is a new thing, it doesn’t change why students cheat. They’ve always cheated for the same reasons: They don’t find the work meaningful, and they don’t think they can achieve it to their satisfaction. So we need to design assessments that students find meaning in. That might mean the death of the five-paragraph essay. It might mean the death of language teaching because we’re going to have universal translators. It might mean different skills, like AI literacy, that we haven’t been teaching. We should think about the human skills that AI doesn’t have, like empathy, critical thinking, and analysis, and recenter our curriculum around them. Right now we assume students are developing those transferable lifelong skills through higher ed, but we don’t necessarily focus on them as our learning objectives. We rely too heavily on the end product to assess what students know. We have to figure out how we can assess process. How do we observe students’ knowledge and skills as they’re being executed rather than depending on a research paper as proof? Oral exams might be one way to do it.
Wilhelm: How can we assess writing composition without writing assignments when AI tools are so available?
Sarah Elaine Eaton: We would never deny students access to the internet. We would never deny them access to dictionaries or spell-check. So why would we deny them access to this new tool? It could just be that writing evolves and our understanding of how to do it evolves. A lot of people don’t know what’s under the hood of a car. And most of us don’t really need to know. We need to know how to drive the car. We need to know how to produce language for a purpose. But we may not necessarily need to know all of the mechanics. If there are tools that help us do it, and if our purpose is to communicate, then that can be the thing that we focus on.
Bertram Gallant: In math, even though we have calculators, kids still get taught basic arithmetic because learning how math works is important for developing higher-order math skills. So what are the basic skills in writing that need to be taught, even if we are going to be writing with ChatGPT in the future? And then once students are taught those, can they cognitively offload them to artificial intelligence so that they can build on them with higher-order skills? I hear a lot of writing instructors saying that we have to keep teaching writing because writing is how we think. But that’s how we think because we were taught to think through writing. What if there’s a different way of thinking? What if there’s a way of thinking through artificial intelligence or with artificial intelligence that doesn’t require writing?
Wilhelm: Sarah, you’ve written that we might be nearing an era of post-plagiarism. What did you mean by that?
Eaton: We’re not going to throw away definitions of plagiarism. They exist in policy, and they exist for good reasons. But what would it look like to transcend plagiarism because of artificial intelligence? What is the next thing? Post-plagiarism is a world in which AI/human hybrid writing is the norm, where the end product may be neither fully written by a human nor fully generated by an AI, but rather something where we might use AI to give us prompts, write drafts, or inspire us. There’s no evidence I can see that there’s going to be any threat to human imagination or creativity. Our ability to continue to inspire and imagine and create will remain boundless. I don’t see AI as any kind of a threat in that way.
Wilhelm: What guidelines should faculty members include in syllabi related to ChatGPT?
Bertram Gallant: Are they allowed to use it? And if so, how, when, where, and why? My big thing for faculty is always being very clear: How does this connect to the learning objectives for the class? Why are you allowing this or not allowing this? And if you are allowing AI, we recommend having students acknowledge their use of it, how they used it, and how it helped their learning. Avoid being resolute. I’ve had faculty say in their syllabi, “If I find out you used ChatGPT, I will give you an F in this class.” After they find the student used it, they come to me and say they have to report the student but they don’t want to give them an F in the class. Well, then don’t say that in your syllabus. Try not to paint yourself into a corner.
Lancaster: It is about being very clear on the front page of the syllabus about what is acceptable and what’s not acceptable, and also giving the student the opportunity to say exactly what external support they had for an assignment, not just generative AI. They might want to acknowledge that they’ve used Grammarly, or that they’ve had a third party proofread their work. It’s not going to be possible to ban AI. We know it is being integrated into Microsoft Office, for instance. So a student will just be able to click “rewrite this section of text,” “turn this into a PowerPoint presentation,” or “email a reply.” This is going to be a standard technology that’s used all the time and will become almost indistinguishable from the ways we used to work. Discussions about detection come up all the time. There are lots of competing systems out there. They aren’t completely reliable. We don’t know what calculation they’re doing behind the scenes to determine whether something is written by AI or not. So you can end up accusing a lot of people without firm evidence.
Wilhelm: How do you make sure students are being transparent about the tools that they’re using?
Eaton: If we expect students to act with integrity, then we as educators have to act with integrity and model that behavior. One of the ways we can do that is through transparency with our assessment. We’ve had faculty members tell us they want to use text-detection tools in their classes. We don’t support that institutionally. We tell them that if they are going to do it, it’s best practice to put a written statement in the syllabus so students know what kind of technologies will be used in the assessments, and to talk to students about the limitations of those detection tools. It’s not about trying to use technology in order to catch students. Nobody wins in an academic-integrity arms race. Deceptive assessment, in which tools and technologies are used without students’ knowledge ahead of time, is not modeling integrity.
Wilhelm: A lot of people have compared the appearance of ChatGPT to the calculator or to the word processor. What historical moment is this similar to?
Bertram Gallant: In education, at least, I think this is unparalleled because other disruptors have been discipline-focused tools like the calculator. The one that resonates the most with me is the Industrial Revolution, with the automation of agriculture and other production. But even that occurred much more slowly.
Lancaster: The thing I can compare it to, which is not the most pleasant one, is Covid, when education changed significantly and we had to adapt incredibly quickly and put alternative assessments in place. We had to learn to teach online. The big difference here is that we have a little bit more time to think, and we can make changes. And if they’re wrong, we can change them again. We just don’t want to make changes that are completely detrimental to students, because we don’t need to, unlike when Covid hit and we were acting out of desperation.