Eva Vázquez for The Chronicle

Professors Ask: Are We Just Grading Robots?

Some are riding the AI wave. Others feel like they’re drowning.

The Changing Classroom
By Beth McMurtrie June 13, 2024

Jeff Wilson is a professor of religious studies at the University of Waterloo. Since ChatGPT appeared on the scene, he has warned his students against using artificial intelligence to do their work. Even so, he says, he saw a “massive” uptick in its use over the past academic year, estimating that about 25 percent of his students at the Canadian institution used generative AI in their assignments.

Some relied on AI to write responses to 150-word prompts. Others used it to complete an experiential-learning assignment, in which they were supposed to do mindfulness meditation, say, and then write about the experience. When he asked why, some students said they knew it was a mistake to do so, but they were pressed for time. A few didn’t know they had used generative AI because it’s embedded in so many other tools, like Grammarly. Others flat-out denied using AI, knowing, Wilson surmises, that it was unlikely they’d be investigated further.

The explosion in AI use, the endless hours spent figuring out whether — as he put it — there was a person on the other side of that paper, and the concern that students who cheat could end up getting the same grades as those who did the work sent Wilson reeling.

“I’ve been teaching at this university for 17 years and suddenly this comes along to devalue everything I’ve done to become a caring, competent instructor, and the students are creating make-work for me,” he says, describing the shift as “devastating.” “I’m grading fake papers instead of playing with my own kids.”

The tension surrounding generative AI in education shows no signs of going away. If anything, faculty members are sorting themselves into two camps. Some, like Wilson, are despairing over its interference with authentic learning, and deeply worried they will have to scuttle the meaningful assignments and assessments they’ve developed over the years because they have become too easily cheatable. Others agree that AI abuse is a problem but focus instead on how AI could enhance learning. Or they have found ways — in the short term, at least — to minimize its abuse while maintaining the integrity of their assignments. (Some argue there’s a third camp: the professors who so far are ignoring AI’s existence.)

Members of both groups, however, agree that administrators need to provide more and better support for faculty members, who remain largely on their own as they try to adapt to this rapidly changing landscape. Professors complain of receiving generic AI guidance that encourages them to experiment with the tools in their teaching without providing tested examples of how to do so. Others say that unless students confess, it’s often pointless to try to bring forward an academic-integrity case, even with evidence, because of the difficulty of proving AI use.

The lack of guidance and training is of particular concern, experts say, because AI will soon be everywhere. AI tools can now listen to and summarize a lecture, as well as read and summarize long academic articles. “Now we have to start thinking about more than just assessments in AI. We have to think about learning itself,” says Marc Watkins, a lecturer in the department of writing and rhetoric at the University of Mississippi.

Watkins and his colleagues were experimenting with ChatGPT in its earliest days, and he has become a savvy consumer not just of generative AI but of how companies are marketing the various tools to students, often through influencers on TikTok. Most professors and administrators, Watkins believes, don’t fully understand this marketplace, something he has been focusing on in a series of articles on Substack. The landscape will only get more complicated, he notes, with developments such as OpenAI’s ChatGPT Edu, which aims to expand generative AI’s use on college campuses.

Watkins says he tries to neither despair about AI nor become a cheerleader but instead “adopt a stance of curious skepticism.” He gives an example from his first-year writing course, where he tested out a reading assistant in the spring of 2023. He liked the idea that the tool could help students with hidden disabilities or those who struggle with English as a second language. “I thought at the time this would be great,” he recalls.

And it did, in fact, help some of those students, he says. “But what scared me was that everyone else in the class said, ‘This is amazing. I no longer have to read ever again for anything I have to do because it turns any sort of digital document into a CliffsNotes on demand.’ I didn’t even think about that before I deployed it.”

He began the next class talking to students about why offloading their reading was a bad idea. He finally got through to them, he says, when he asked how they would feel if he used the AI reading assistant to grade their essays. “They said, ‘You can’t do that; you’ll be fired,’” he recalls. “My response is, ‘Fired for what?’ There’s no ethical consideration yet for any of these tools. We have to create this in real time.”

Some instructors find they have to push back against colleagues when urging more caution around AI.

Yolanda Gonzalez, an associate professor of English at McLennan Community College, in Waco, Tex., has heard instructional designers on her campus argue that faculty members should worry less about cheating with AI because they can’t yet prove when students use it. Her response: It’s her job to ensure students develop basic writing skills, and the noticeable uptick in AI use is impeding those efforts.

“For our developmental students, those we know lack the basic skills needed to get through college, we are doing them a disservice if we’re not distinguishing between what is acceptable and what is not,” Gonzalez says.

She recalls attending a webinar in which a panel of academics was asked what to do about AI use by students. “Their answer was: ‘Well, now that we have all these tools that cut out some of these menial tasks that students have been accustomed to doing, students have to do more. They have to be required to do more of this higher-order, critical-thinking work that they can’t possibly do with AI,’” she says. “And that was, to me, a terrifying thought.”

Those “menial tasks,” Gonzalez says, might include creating an outline or developing a thesis statement. It might mean working through a rough draft and reviewing it and taking it apart and rebuilding it again. “It’s difficult work. If we’re not having students do these processes, how are they going to develop those critical-thinking skills that are necessary to do that higher-order work?”

The difference in attitudes among faculty members probably depends in part on their responsibilities in the classroom. Teaching a large general-education or introductory course comes with a different set of goals and challenges than teaching an upper-level seminar. A STEM professor may encourage students to use AI to polish their writing if that helps them better articulate scientific concepts. But a humanities professor might balk, since clear and coherent writing is central to mastering the discipline. Differing approaches may also depend on rank: Tenured professors who teach fewer classes can explore and experiment with AI tools in a way that a busy adjunct cannot.

Steven Greene, a political-science professor at North Carolina State University, explicitly encouraged AI use in his senior seminar this past spring. But his focus there was to have his students, who already understood the content, use generative AI (he prefers one called Claude) to improve the writing in their papers. He sees that as akin to asking a classmate to read over a final draft.

Even so, Greene wishes his students had used AI more effectively, writing better prompts that would have drawn out more sophisticated feedback. “There was definitely less truly bad writing in the final seminar papers I graded,” he notes. “But over all, it struck me that most students massively failed to fully take advantage of AI to improve their papers.”

Greene, a tenured professor, teaches two courses per semester, which he says gives him the opportunity to experiment. He also doesn’t worry about assigning take-home exams. “My prompt literally said, ‘You can use the AI to help your writing, but in large part because of that, I am looking to see not just your knowledge of political parties, but your knowledge of … spring 2024 Steve Greene’s political-parties class.’”

He has continued to use take-home finals in an introductory class. He figures that if about three out of 20 students used AI, “that’s a cost I’m willing to pay at this point.” Still, he is careful, he says, to design questions that he thinks AI would do a mediocre job of answering, at best. He asked students to describe, for example, what important concept for understanding how government and politics work is widely misunderstood by the American public. They also had to cite research, explain how democracy would work better if people understood the concept, and consider what they might have gotten wrong about the argument.

Like other professors who use AI, Greene is bothered by how little discussion is taking place among the faculty and administration. “I get where that’s coming from, because there is such a strong tradition and culture of faculty autonomy,” he says. “And it’s like: ‘We’re not going to tell you how to teach your class. We’re not going to tell you how to use this tool.’ But that’s not enough. A lot of people are like: ‘I don’t get it. I need help. I want to understand how I need to evolve and adapt in response to this tool.’”

Another explanation for this disconnect may be that administrators are more upbeat about AI’s influence on teaching than professors are. A Chronicle survey on how generative AI is changing the classroom, underwritten by Amazon Web Services and to be released this month, found a noticeable difference in attitudes between the two groups.

While 78 percent of administrators said AI tools would have a positive impact on teaching in the next five years, only 46 percent of faculty members felt that way. Faculty members who believed that AI posed a threat argued that its use would undermine academic integrity, students’ critical thinking and writing skills, and creativity. Administrators who felt positively about AI focused on the need to prepare students for an AI-infused workplace, and said that it could spur new ways of thinking about problems and enhance learning through tools such as AI tutors.

Another report, released on Wednesday by Tyton Partners, found that instructors surveyed in the spring of 2024 were less likely to have tried or regularly used generative AI tools compared with administrators or students. “Time for Class 2024: Unlocking Access to Effective Digital Teaching and Learning” found that 59 percent of students are regular users, compared with 36 percent of instructors and 40 percent of administrators. Only 23 percent of students said they had never used generative AI, compared with 36 percent of instructors.

Daniel Szpiro, a professor of practice in Cornell University’s business college, has seen differences on his own campus. A university committee organized to examine AI’s impact on pedagogy encourages faculty members to experiment with it. Far less attention has been paid to how to maintain academic integrity right now, he says. What, for example, is he supposed to do about the exponential use of AI he saw this past semester on the discussion boards in his introductory-accounting course, other than stop using them?

“Nobody likes talking about short-term operational thinking. It’s just not an exciting thing to do,” he says. “People would rather talk about the big picture and how the world is going to change than the nuts and bolts of how to operate every day.”

Finding solutions to AI challenges is particularly difficult because the technology is changing rapidly. What might have been true of AI’s capabilities or how to AI-proof an assignment a semester ago might not be true today. Watkins, the University of Mississippi writing instructor, designs and runs faculty-development institutes on AI. He estimates that every time he creates a new one he has to revamp about 80 percent of the content. His university is ahead of the curve in offering such training, he says, but on many campuses professional-development money is usually reserved for, say, sending professors to conferences. What they need instead is more immediate and flexible training.

But these new challenges AI presents are not just about technology. Professors say that when students admit to misusing the tech, they often apologize and say that they got behind on their work or panicked or felt overwhelmed. Rare is the student who enrolls in a class planning to cheat his way through it all.

A true defense against AI abuse, Watkins says, has to include thoughtful conversations with students about the importance of authentic learning, and why taking the easy way out is harmful. Watkins designed an entire course to focus on the ethics of generative AI. “I don’t know how that translates to an overwhelmed faculty member who might have 15 minutes of one class session to talk about this,” he says.

Many academics feel that there’s a crisis brewing among instructors desperate for support.

“The tone I’m starting to see from people is beyond frustration. ‘Anguish’ is the word I’m using mostly,” says Derek Nelson, a history professor at Everett Community College, who is fostering discussion on his campus in Washington State through a new daily newsletter. Even AI-savvy professors, he has noticed, share this feeling of being untethered from truth when they read students’ writing. “These intense feelings come and go. You feel like you’ve got it all figured out and got a plan. Then you read something and it throws you off again.”

Looking to the next semester, professors continue to diverge on how they plan to deal with AI.

After successfully experimenting with ChatGPT to help provide feedback on students’ mock NSF research proposals, Justin Shaffer, a teaching professor and associate dean of undergraduate students at the Colorado School of Mines, plans to continue incorporating AI into his teaching. Because he often didn’t know much about each student’s chosen topic, he found that he could supplement his high-level comments with ChatGPT’s specific recommendations. Nearly all of his students, he says, found it helpful and accurate. “I still feel a little guilty about using AI,” he says. “But it worked.”

He intends to advise his students along similar lines. Use AI for brainstorming, or as a tutor, or to improve your writing, he will tell them, but don’t have it do the work for you. He notes that some textbook publishers, like Macmillan, are already embedding AI tutors into their learning platforms, so why shouldn’t students take advantage of the tools at their disposal?

“I do feel a little worried about the future of our students and their training and skills moving forward. But I can’t individually do anything to shut that down or change it. It’s more providing a good example for them. It’s about being a moral compass and showing them that good direction,” he says. “If they choose to abuse it, I kind of believe in karma. ... The world will catch up to them and check them in ways that will damage their career trajectories.”

For Wilson, the religious-studies professor, such equanimity has been elusive. He can’t bring himself to accept AI usage in his classes because, to him, that means sending the message to students that it’s OK to cheat. And why wouldn’t you want to cheat, he asks, if using AI gets you a grade similar to those who are doing the work?

So this fall, he plans to scrap many of his writing assignments, including the experiential-learning one that was once so meaningful to many of his students. “Because of those people at the bottom of the scale making it impossible for me to do my work,” he says of AI users, “all those people at the upper end of the scale will never have that good experience.” Some of those better students might even have chosen to become religious-studies majors.

Instead, he will assign less writing and less deep reading, because students’ work in that area is now difficult to assess. He will rely more on lectures and in-class, handwritten exams.

“It’s going to force everybody to the lowest common denominator,” he says. But he refuses “to waste a whole bunch of time just grading robots.”

A version of this article appeared in the July 5, 2024, issue.
About the Author
Beth McMurtrie
Beth McMurtrie is a senior writer for The Chronicle of Higher Education, where she focuses on the future of learning and technology’s influence on teaching. In addition to her reported stories, she is a co-author of the weekly Teaching newsletter about what works in and around the classroom. Email her at beth.mcmurtrie@chronicle.com and follow her on LinkedIn.