
PhD students are let down by sector’s silence on ChatGPT misuse

Failure to explain how AI-aided academic writing is a form of plagiarism leaves graduate students horribly compromised, says E.M. Wolkovich

Published on August 26, 2025. Last updated August 26, 2025.
[Illustration: a man stepping into a digital bear trap. Source: Moor Studio/iStock]

Recently I walked out into the bright summer sun from the darkness of the graduate studies building at my university in a daze of mild horror. I realised almost three years have gone by since ChatGPT’s arrival, and we have squandered this time as instructors and academic leaders.

At least, I have. I have failed to discuss it in any organised way with the trainees in my lab, and I have failed to develop my own clear opinions and guidelines on it. In doing so, I have failed those trainees for some years now.

What precipitated all this was chairing my first PhD defence where the thesis was written in part with generative AI. How did I know this? Because it was flagged by the external examiner’s report. And thus I found myself reading a paragraph in the thesis’ preface that I would otherwise have missed, one that described how this “original work” was “improved” in its clarity, grammar and structure by several proprietary generative AI models.

As chair for a PhD defence at my university, I am tasked with making sure the external examiner’s concerns are addressed and my university’s exam policies are followed. I found myself reading and rereading the policies on generative AI in doctoral theses, where students are told they can “normally assume that the use of AI tools for editorial (ie, grammar, flow) and transcription purposes are broadly acceptable” but that the use of generative AI “to get started with drafting needs approval from the supervisor and supervisory committee”. Put simply, literal outputs cannot be included in the thesis unless they are cited as output of generative AI, otherwise it would constitute plagiarism. And it would be plagiarism if you did it at any stage – not just the final draft, but also when sharing drafts with your supervisory committee.


Treating unquoted ChatGPT text as plagiarism made sense to me. Yet it was also something I had never seen mentioned until now. As academics, we are trained to guard against plagiarism. We are afraid of students using someone else’s words, but this time it is not someone else’s voice; it is something else’s: an amalgamation of all our voices – our own much included, through our academic writing – combined with an algorithm searching for a way to match the training dataset.

A colleague’s comments from earlier in the spring echoed in my head: “It sounds like me.” I suspect it does – ask it a few questions in an area of plant biology this colleague has revolutionised and the line between a summary and a straight-up rip-off of unquoted text provided by ChatGPT is so thin I am not sure it exists.


However, students could have been plagiarising from ChatGPT for years in their theses, dissertations and other academic writing, and we’ve been mute. I had never heard a colleague describe unquoted ChatGPT text as plagiarism, but it is, and I don’t think any of us ever told our students. At least, no one I know has.

I know because I have been asking around, and the replies have astounded me. Some deem it acceptable to paraphrase ChatGPT, albeit skilfully (“you should always change it just enough so you don’t have to use quotation marks”). In discussions on AI use, others complained that prohibiting ChatGPT is arbitrary and that we really should not consider generative AI as plagiarism. There is also a repeated undertone of mild horror from some colleagues that I have not figured out that this is the pathway to reducing the burden of teaching students to write. A colleague whispered to me that it was “a relief to not have to edit my students’ writing”, as though she were explaining a new drug everyone but me was taking.

Is everyone on the new drug of work-free writing and editing via the magic of generative AI? Maybe, but I think they’re missing out on the fun of science. As the statistician and political scientist Andrew Gelman wrote, “It is by writing that I explore and organise my thoughts. Writing logical natural-language text is not so different from deductive mathematical reasoning: the point is to state my assumptions, work out their implications, and confront the places where these implications do not comport with reality or with my current understanding of the world.” If we don’t train our students in this, what are we doing?

My sense is that we have fretted over undergraduates using generative AI while offering endless seminars and grant opportunities on how to “integrate AI” into our training, teaching and beyond. We never told the future educators and researchers we’re training why they might not want to use generative AI for their writing, what they will lose, and that it is academic misconduct. At least, it is to me now.


After chairing the defence and discussing it with colleagues and trainees in my lab, I have decided on guidelines for how those in my lab use generative AI in their writing. I like writing and I like teaching writing, no matter how painful it is sometimes. I want to focus the time I have – for now, at least – on training others to write natural language, so I don’t want trainees in my lab using generative AI for their text. I am applying this rule more widely: I will only chair defences where the student states they did not use generative AI at all in the writing of their thesis.

I realise my choice could be limiting for those who are not native speakers of English, and I want to leave room not to further disadvantage them. I also think we need a broader conversation about how we want generative AI to help level the English-language playing field in academia. Personally, I would rather have non-native speakers write a finished first draft in their native language and then use generative AI to translate it. They can review the translation, tweak it to make sure it carries their meaning, then submit the full workflow to a journal, which publishes both the original and translated versions. This is a conversation I want to have.

A conversation I don’t want to have is one where a graduate student tells me they don’t understand why they would have to tell their supervisory committee that all the chapter drafts they were reading included ChatGPT text. One student to whom I pointed this out looked at me like I was crazy – why would they need to disclose this?

One reason is that their university has decided it could be academic misconduct. But no one – until me at the student’s PhD defence – told the student. And that’s the unacceptable position on which we all seem to have agreed.


Elizabeth M. Wolkovich is associate professor in forest and conservation sciences at the University of British Columbia.

