While we’ve been busy worrying about what ChatGPT could mean for students, we haven’t devoted nearly as much attention to what it could mean for academics themselves. And it could mean a lot. Critically, academics disagree on exactly how AI can and should be used. And with the rapidly improving technology at our doorstep, we have little time to deliberate.
Already some researchers are using the technology. Even within the small sample of my own colleagues, I’ve learned that it is being used for daily tasks such as translating code from one programming language to another, potentially saving hours spent searching web forums for a solution; generating plain-language summaries of published research, or identifying key arguments on a particular topic; and creating bullet points to pull into a presentation or lecture.
Even this limited use is complicated. Different audiences — journal editors, grant panels, conference attendees, students — will have different expectations about originality for particular tasks. For example, while peer reviewers might accept translated statistical code, students might balk at AI-generated lecture slides.
But it’s in the realm of academic writing and research where ethical debates about transparency and fairness really come into play.
Recently, several leading academic journals and publishers updated their submission guidelines to explicitly ban researchers from listing ChatGPT as a co-author, or using text copied from a ChatGPT response. Some professors have criticized these bans as shortsightedly resistant to an inevitable technological change. We shouldn’t be surprised at the disagreement. This is a new ethical space that only roughly follows the outlines of our existing agreements on plagiarism, authorship criteria, and fraud. Precisely where to draw red lines is not clear.
For example, the editors of Science have decided that authors should not use text generated by ChatGPT in a submitted manuscript. Fair enough. But can authors use ChatGPT to generate an early outline for a manuscript? Though not an exact copy-paste of text, is that not a copy-paste of AI-generated ideas? Academic research desperately needs a broader set of principles to inform future debates over rules and norms.
What feels most different about ChatGPT compared with other assistive technologies is the possible reduction of intellectual labor. For most professors, writing — even bad first drafts or outlines — requires our labor (and sometimes strain) to develop an original thought. If the goal is to write a paper that introduces boundary-breaking new ideas, AI tools might reduce some of the intellectual effort needed to make that happen.
Of course most papers are not breaking new ground. That’s because academe also features peculiar incentives that could strongly influence how researchers decide if and how to use AI assistance. Most obvious is the pressure to produce writing — and lots of it. This includes journal articles, books, and conference papers, but also proposals for grants and fellowships (which, in turn, lead to more academic writing). For many of those on the tenure track, the number of published works matters, even where “quality over quantity” is emphasized. While we might aspire to high-minded pursuit of new knowledge, in this pressurized environment, sometimes we settle for what’s good enough to satisfy peer reviewers, editors, or grant panels.
Some will see that as a smart use of time, not evidence of intellectual laziness. After all, if we can eliminate the struggle of staring at a blank page and blinking cursor, won’t that leave us much more time for the more creative and exciting parts of academic research? Yes, possibly. But there is critical room for inequality here, especially in departments and fields that value frequent publication. Researchers who adopt AI assistance may raise the bar, leaving behind those who choose not to use it, or who cannot. Notably, our current debates have been sparked by a free version of ChatGPT; pricing structures are likely to be forthcoming.
We can only monitor whether AI technologies are exacerbating existing inequalities in research (or creating new ones) if we know how they are being used. To do this, we can borrow from existing academic models around authorship, like author-contribution statements. One function of these statements is to shine a light on the often-unequal distribution of labor required to produce an academic journal article. Another is to ensure that authors with relatively large contributions are recognized fairly for those inputs.
That question of fairness is an especially difficult one. Each discipline and audience will need time to decide if and why red lines should be drawn, being careful to not stifle innovation while also examining questions of quality, rigor, and equity. Still, we should urgently adopt a principle of transparency for the use of ChatGPT and similar AI technologies.
Our academic systems rely on trust. As a peer reviewer for grants and journal articles, I’ve never used a plagiarism checker or directly questioned the accuracy of an author-contribution statement. Compare this to my students’ essays, which are automatically passed through plagiarism-checking software upon submission. Academics enjoy an environment where we might challenge claims and critique the novelty of ideas, but we rarely question the originality of each other’s written work.
For this system of trust to hold in academe, we must firmly and rapidly commit to transparency around the use of AI. Only then can we hope to have informed and reasoned discussions about what norms and rules should govern academic writing in the future.