On his final album, Leonard Cohen’s gravelly voice croaks one last warning:
“As for the fall, it began long ago. Can’t stop the rain. Can’t stop the snow.”
Cohen had metaphysical matters on his mind, but his words could just as easily capture the unstoppable spread of GenAI through higher education and the deeper roots of these changes. They might also speak to the wishful thinking of those trying to prevent students from using AI.
A recent survey found that 88 per cent of undergraduates have used GenAI for an assessment, up from 53 per cent in 2024. In one study, researchers from the University of Reading submitted AI-generated coursework across a range of university modules: 94 per cent went undetected and earned grades roughly half a classification higher than the average student’s.
Fears of cheating, skill loss and poorer-quality AI work have driven a backlash. In a 2024 paper, University of Glasgow researchers claimed that ChatGPT produces bullshit in the sense popularised by the philosopher Harry Frankfurt. Because chatbots merely predict the next word, based on patterns in their training data, such tools are indifferent to truth, the researchers argued. AI merely imitates understanding, like a student bluffing through a seminar.
The authors expressed hope that this framing would deter student use, and it swept academia. Yet in an article recently published in Ethics and Information Technology, I show that it rests on a misunderstanding of both the philosophy and the technology.
Frankfurt’s project was moral in nature. The health of our social institutions, he argued, depends on respect for truth. Bullshit threatens that respect because the bullshitter pursues some ulterior motive without regard for truth or falsehood. Remove that motive and the concept collapses and loses its moral bite. A person rehearsing the script for a play or singing lyrics is also indifferent to truth but is hardly bullshitting.
Nor are chatbots. They have no truth-threatening motives either. Yes, they have biases – both cultural assumptions and political influence from their owners, though the scope of that influence varies widely (with Grok’s infamous Nazi outbursts at the extreme end). In any case, bias need not involve indifference to truth.
Moreover, it is not true that all AI outputs are bullshit by the very nature of the tool. Although GenAI lacks understanding, it can still track truth indirectly. A compass, for instance, merely tracks the Earth’s magnetic field and thereby helps us locate geographic north. In a phrase such as “Shakespeare wrote Romeo and”, the most probable next word is also the word that completes a true sentence. The accuracy of such indirect truth-tracking depends on the quality of the training data and fine-tuning processes, but it is misleading to say the model is indifferent to truth.
More importantly, the bullshit framing is unhelpful educationally. It is grist to the mill of those we might call the campus prohibitionists and diversionists. The former would ban AI in assessments and rely on detection tools, declarations and interviews to catch offenders. But this is a losing battle. Tech companies are embedding GenAI into everyday software and prioritising the student demographic, while students are mastering AI camouflage – taking its flawless grammar and, as one student put it, making it “a bit shitter.” Only the least AI-literate, unaware of red flags such as the notorious em dash, will be snared in this theatrical net.
Diversionists insist on assessments that make AI use practically inviable, either by designing tasks that AI cannot yet perform effectively or by returning to in-person, supervised exams. But the first approach is likely to become rapidly obsolete as GenAI evolves, while the second is likely to be pedagogically regressive. Concerns over narrow or inauthentic assessment, shallow rote learning and test-taking anxiety have not gone away.
Critical inclusionists take a different path – for which there is clear consensus in the pedagogical literature. Driven by the pragmatic conviction that it is better to shape AI use than chase it underground, they integrate it into teaching and wider critical thinking practices, alongside a range of mitigation strategies. They acknowledge the epistemic risks – bias, hallucination, logical error – but try to minimise them through training in how GenAI works and how to interrogate its outputs.
They also see AI’s educational value. Students can use it to produce extensive tailored reading lists, build personalised study plans and engage in role plays and simulations. ChatGPT can act as a judgement-free, 24-hour Socratic tutor or debate partner.
A more radical branch of inclusionism asks a deeper question: why do so many students prefer not to produce their own work after investing so much to study? From this standpoint, it quickly becomes clear that we have cultivated a fertile breeding ground for unhealthy AI use. Students face a barrage of high-stakes, one-shot assessments that are standardised, impersonal and often uninspiring. The students themselves are anonymised, numbered and disconnected from teaching staff on large courses with dense bureaucracy. Any remaining sense of personal obligation is miraculous under these conditions.
As James Warren notes, “The scandal that should be grabbing the headlines is that for a generation we have been training our undergraduates to be nothing more than AI bots themselves.” We need not abolish grading, that traditional extrinsic motivator, but evidence in psychology suggests that intrinsic motivation is the key driver of learning. We should increase our focus on cultivating it – through active, collaborative, real-world learning that makes students want to create rather than opt to outsource their thinking.
We need memorable AI-critical concepts, but concepts that target specific types of problematic output or use, not blanket moral judgements. For instance, we might call seemingly confident but baseless explanations botsplaining, and label the sycophantic responses that threaten critical thinking botlicking. Both terms invite critical engagement rather than fear.
Indiscriminate pejorative language and moral panic shut down dialogue about the underlying causes of students’ bot dependence. But it has deep, tangled roots. The fall began long ago.
The author is a lecturer in political theory at the University of Liverpool.