ChatGPT is not bullshit – but it is far from perfect either

We need memorable concepts targeting specific types of problematic AI output or use, not blanket moral judgements, says Andrew J. Routledge

Published on November 24, 2025
Last updated November 25, 2025
[Image: a cowpat. Source: Faraonvideo/Getty Images]

On his final album, Leonard Cohen’s gravelly voice croaks one last warning:

“As for the fall, it began long ago. Can’t stop the rain. Can’t stop the snow.”

Cohen had metaphysical matters on his mind, but his words could just as easily capture the unstoppable spread of GenAI through higher education and the deeper roots of these changes. They might also speak to the wishful thinking of those trying to prevent students from using AI.

A 2025 survey found that 88 per cent of undergraduates have used GenAI for an assessment, up from 53 per cent in 2024. In one study, researchers from the University of Reading submitted AI-generated coursework across a range of university modules: 94 per cent of it went undetected and earned grades roughly half a classification higher than the average student’s.

Fears of cheating, skill loss and poorer-quality AI work have driven a backlash. In a 2024 paper, University of Glasgow researchers claimed that ChatGPT produces bullshit in the sense popularised by the philosopher Harry Frankfurt. Because chatbots merely predict the next word, based on patterns in their training data, such tools are indifferent to truth, the researchers argued. AI merely imitates understanding, like a student bluffing through a seminar.


The authors expressed hope that this framing would deter student use, and it swept academia. Yet in a paper recently published in Ethics and Information Technology, I show that it rests on a misunderstanding of both the philosophy and the technology.

Frankfurt’s project was moral in nature. The health of our social institutions, he argued, depends on respect for truth. Bullshit threatens that respect because the bullshitter pursues some ulterior motive without regard for truth or falsehood. Remove that motive and the concept collapses and loses its moral bite. A person rehearsing the script for a play or singing lyrics is also indifferent to truth but is hardly bullshitting.

Nor are chatbots. They have no truth-threatening motives either. Yes, they have biases – both cultural assumptions and political influence from their owners, though the scope of the latter varies widely (with Grok’s infamous Nazi outbursts at the extreme end). In any case, bias need not involve indifference to truth.

Moreover, it is not true that all AI outputs are bullshit by the very nature of the tool. Although GenAI lacks understanding, it can still track truth indirectly. A compass, for instance, merely tracks the Earth’s magnetic field and thereby helps us locate geographic north. In a phrase such as “Shakespeare wrote Romeo and”, the most probable next word is also the word that completes a true sentence. The accuracy of such indirect truth-tracking depends on the quality of the training data and fine-tuning processes, but it is misleading to say the model is indifferent to truth.
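
The point can be made concrete. The minimal sketch below assumes the small open-source GPT-2 model and the Hugging Face transformers library as stand-ins for ChatGPT’s far larger, proprietary models; it simply asks the model which tokens are most probable after the article’s example phrase.

```python
# A minimal sketch of next-token prediction, using the open GPT-2 model
# as an illustrative stand-in for ChatGPT's much larger models.
# Requires: pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Shakespeare wrote Romeo and"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Turn the final position's logits into a probability distribution over
# the vocabulary, i.e. the model's guesses for the next word.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([idx.item()])!r}: {p.item():.3f}")
# In practice the top candidate is " Juliet": the statistically likeliest
# continuation is also the one that completes a true sentence.
```

Nothing in this procedure consults the truth directly; any accuracy is inherited from the patterns in the training data, which is exactly the sense in which the tracking is indirect.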

More importantly, the bullshit framing is unhelpful educationally. It is grist to the mill of those we might call the campus prohibitionists and diversionists. The former would ban AI in assessments and rely on detection tools, declarations and interviews to catch offenders. But this is a losing battle. Tech companies are embedding GenAI into everyday software and prioritising the student demographic, while students are mastering AI camouflage – taking its flawless grammar and, as one student put it, making it “a bit shitter.” Only the least AI-literate, unaware of red flags such as the notorious em dash, will be snared in this theatrical net.

Diversionists insist on assessments that make AI use practically unviable, either by designing tasks that AI cannot yet perform effectively or by returning to in-person, supervised exams. But the first approach is likely to become rapidly obsolete as GenAI evolves, while the second is likely to be pedagogically regressive. Concerns over narrow or inauthentic assessment, shallow rote learning and test-taking anxiety have not gone away.

Critical inclusionists take a different path – for which there is clear consensus in the pedagogical literature. Driven by the pragmatic conviction that it is better to shape AI use than chase it underground, they integrate it into teaching and wider critical thinking practices, alongside a range of mitigation strategies. They acknowledge the epistemic risks – bias, hallucination, logical error – but try to minimise them through training in how GenAI works and how to interrogate its outputs.

They also see AI’s educational value. Students can use it to produce extensive tailored reading lists, build personalised study plans and engage in role plays and simulations. ChatGPT can act as a judgement-free, 24-hour Socratic tutor or debate partner.

A more radical branch of inclusionism asks a deeper question: why do so many students prefer not to produce their own work after investing so much to study? From this standpoint, it quickly becomes clear that we have cultivated a fertile breeding ground for unhealthy AI use. Students face a barrage of high-stakes, one-shot assessments that are standardised, impersonal and often uninspiring. The students themselves are anonymised, numbered and disconnected from teaching staff on large courses with dense bureaucracy. Any remaining sense of personal obligation is miraculous under these conditions.

As James Warren notes, “The scandal that should be grabbing the headlines is that for a generation we have been training our undergraduates to be nothing more than AI bots themselves.” We need not abolish grading, that traditional extrinsic motivator, but evidence in psychology suggests that intrinsic motivation is the key driver of learning. We should increase our focus on cultivating it – through active, collaborative, real-world learning that makes students want to create rather than opt to outsource their thinking.

We need memorable AI-critical concepts, but concepts that target specific types of problematic output or use, not blanket moral judgements. For instance, we might call a seemingly confident but baseless explanation “botsplaining”. Or we might highlight the threat that sycophantic responses pose to critical thinking by calling them “botlicking”. Both terms invite critical engagement rather than fear.

Indiscriminate pejorative language and moral panic shut down dialogue about the underlying causes of students’ bot dependence. Yet that dependence has deep, tangled roots. The fall began long ago.

Andrew J. Routledge is lecturer in political theory at the University of Liverpool.


Reader's comments (14)

What, I must ask, is a "memorable concept"? Do you mean memorizable, as in Leonard Cohen lyrics? In what specific ways are "blanket moral judgements" opposed or contrary to "memorable concepts"? Are they not intertwined? This does not make sense to me... Slogans? Word plays without meaning? Dare I ask: which AI program wrote this?
I was wondering how long it would take before I was accused of being AI. First post! I talk about memorability in part as an acknowledgement that the "bullshit" critique by Hicks et al. has obviously been quite memorable and engaging and, in that sense, useful, as it has initiated a valuable debate about the nature of AI systems. But it's too indiscriminate: it claims that all AI outputs are bullshit, which is misleading, as I explain in the piece. You quote part of my sentence, but in the wider sentence I call for AI-critical concepts that target specific types of problematic output or use – like outputs that are culturally biased, or outputs that are excessively agreeable and flattering. We need these concepts to properly train students in thinking critically about different outputs as they use AI, which they are doing regardless of how we feel about the fact.
"Do you mean memorizable as in Leonard Cohen lyrics?" well this is ambiguous as a question because it depends on ones view of how easily the late Mr Cohen's lyrics are able to be committed to memory, or how easily learned so as to be remembered. In choosing this example are you indicating there is a certain difficult or a certain ease? Personally, I would have a very strong reluctance to memorize them.
It's quite correct that the heart of the problem is not that we've taught bots to think, but that we've trained students to act like bots. But the solution isn't yet more angst over "authentic assessment". If you authentically want to get from A to B in a hurry, then you drive. But we still encourage people to run and cycle, because it keeps their bodies healthy. We even have races to motivate them, races where we would endeavour to prevent the use of assistance that it would be crazy not to use in "authentic" situations. I was once a fairly serious dancer. In days when we weren't training I'd go to the gym. Going to the gym is "inauthentic" in terms of my dance ability, but I was far more able to learn the lifts and tricks, and keep up the long practice sessions, having been to the gym. Just so, university is a gym for the mind. I teach people to think. What is an authentic assessment of "thinking"? The purpose is not to prepare by practicing a real-life situation, but to undertake a series of mentally challenging tasks that increase the general mental fitness of your faculties, putting you in a better place when you need to do something specific and "authentic".
The defense that "AI" systems have no motive to be "bullshitters" is flawed: the systems themselves are merely programs. The grievous error is in the data that the programs operate on. That data could contain biases or outright falsehoods, and the programs will simply copy and glue together chunks of text from different polluted sources. Not being able to guarantee that the water out of a tap is as fresh as that from a spring is no excuse to connect your water supply to the waste treatment works. "AI" is to knowledge what pulp fiction is to literature. You can fool some of the people some of the time. However, sooner or later...
"A compass, for instance, merely tracks the Earth’s magnetic field and thereby helps us locate geographic north." Well it points towards the Magnetic North pole discovers by Sir James Ross' in 1818 not the Geographic North Pole, these are very different and the closer one gets to the Geograpic Pole the more erratic the compass becomes actually pointing south below the Geographic North Pole. The Magnetic Pole also moves around influenced by movements inside the earth's core which generates the magnetic field. I think this, unwittingly, serves as a very good analogy for the subject of the piece itself. Or am I just Bullshitting to use that very vulgar term?
I would never accuse one so learned as yourself of "bs", though Andrew says the compass "helps us locate geographic north", which is correct in that it points in that direction, and the divergence between the poles (magnetic and geographic) only becomes really obvious once one actually gets closer to them.
All these AI pieces just seem to merge into each other in my view.
"Bots" cannot think., Students cannot "think like bots." Students are human beings. "Bots" are not. Dare any one try to teach students as human beings, AND cease mixed metaphors and contradictions? No! The point clearly is NOT AI (or what I call AU or artificial unintelligence) YES or NO, but how to use AI responsibly. AI is hardly new. Author: stop the defensiveness, please!
Dare I suggest that "Graff" is actually an AI bot? The predictable, robotic interventions. The pedantic pomposity. The "NO"s and "NOT"s and excessive use of exclamation marks and question marks. All the characteristics of an AI bot, in my opinion. Yes!
Graff is a very real, 76-year-old, internationally known scholar across the humanities, social sciences, and education. And you? How, specifically, am I "pedantic" or "pompous"? How, specifically, is my use of punctuation for clear and obviously successful communication "excessive"? Quotation marks – double marks in most of the world – are used for quoting words, phrases, sentences, and more. May I refer you to a dictionary or grammar guide? Or THE's standards for commenting, which you violate? Yes, question marks follow questions!
And totally lacking in any sense of humor? An obvious giveaway.
World famous for my humor and my ability to laugh at, and correct, myself publicly. And you?
'If you want to get somewhere in a hurry, you drive.' I live in a city where cycling or the metro is quicker. This is the sort of one-track thinking that identifies AI as the latest unavoidable instalment of progress. You wouldn't outsource your exercise to a machine, and you can't outsource your thinking to AI. Graduateness, like fitness, is about being able to do it yourself.
