As another marking season draws to a close, tales abound of how much AI-written slop academics have had to wade through. Yet I suspect that many academics quietly enlisted a measure of automated assistance as they worked through their daunting stacks of exam scripts.
Of course, few will confess that upfront, given the bans that most universities maintain on the use of AI in marking student submissions. The primary and secondary education sector, by contrast, appears to be cautiously embracing it. Just last month, England’s Department for Education issued guidance explicitly permitting the use of generative AI in marking, and the Joint Council for Qualifications, which represents the largest awarding bodies, now permits the use of AI in essay marking provided certain conditions are met. Yet in the tertiary sector, some universities go so far as to frame AI-assisted marking not only as contrary to regulations but as inherently unethical.
The University of Leicester is a good example: it insists on guaranteeing that assessment is carried out by human markers. Some universities also emphasise that assessment is a relational process, allowing the educator to “get to know” a student.
While these rhetorical commitments to human-centred education are laudable, they are also deeply hypocritical given the decades-long disputes across UK higher education over fair pay and working conditions for academic staff.
Take my own institution, SOAS University of London. In 2010, protests broke out after tutors found that the pay offered failed to cover the actual hours required for marking and feedback. A similar protest erupted in 2014 after an internal survey of staff on fractional contracts – including part-time tutors and graduate teaching assistants (TAs) – revealed that over half of their working hours were in essence unpaid. A later survey showed that 40 per cent of fractional teaching work still went unremunerated.
Disputes over pay for marking resurfaced at SOAS in 2017, 2020, 2022 and 2023. In the current academic year, TAs have once again boycotted marking, citing unacceptable payment tariffs and working conditions. According to an April all-staff email from Dillon Maxwell, SOAS UCU’s anti-casualisation officer, the average reading rate for anglophone adults is 238 words per minute. “Based on this, it takes around 13 minutes to read a 3000-word essay one time, and for texts with difficult prose (which some students’ scripts will have) one could reasonably expect this to be even higher,” he wrote. “Moreover, for non-native English speakers – which a large number of tutors are – it would likely take longer.”
According to the training mandated for all SOAS TAs, feedback for assessments should be approximately 200 words. Hence, “assuming the average typing rate of 40 words per minute, a typical tutor would spend around five minutes writing feedback for a student if they never paused to think,” Maxwell went on.
“The relevant marking tariff proposed by [SOAS] is 28.8 minutes, [which] leaves less than 11 minutes to evaluate a student’s script, think about appropriate and useful feedback, and link it to the relevant marking criteria in a clear and concise way. All this is before considering the fact that tutors, like everyone else, must eat, drink, use the bathroom, and rest their minds in order to focus and work effectively.”
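Maxwell’s time budget can be verified with a quick back-of-envelope calculation, using the figures he quotes (a sketch, not an official tariff model):

```python
# Quick check of the per-script time budget quoted above.
essay_words = 3000
reading_wpm = 238        # cited average anglophone adult reading rate
feedback_words = 200
typing_wpm = 40
tariff_minutes = 28.8    # SOAS's proposed marking tariff per script

reading_min = essay_words / reading_wpm   # about 12.6, "around 13" once rounded
typing_min = feedback_words / typing_wpm  # 5.0 minutes
thinking_min = tariff_minutes - round(reading_min) - typing_min

print(f"Time left to evaluate and write feedback: {thinking_min:.1f} min")
```

With reading rounded to 13 minutes, the residue comes out at 10.8 minutes per script, matching the “less than 11 minutes” figure above.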
After reading Maxwell’s email, I ran a simple experiment. I marked a 3,000-word student exam, consisting of three answers of 1,000 words each. It took me 37 minutes to read and assess the three responses and to write 200 words of feedback for each. I then gave the same anonymised exam to GPT-o3 – OpenAI’s latest model, known for its “advanced reasoning capabilities”. In just 8.5 seconds, it produced nuanced, well-structured and on-point commentaries – about 850 words each – for all three answers.
A standard marking load for TAs at SOAS is around 300 exams. At my pace, it would take more than 23 full working days (marking non-stop eight hours a day) to complete that load. Given the 28-minute marking allocation per exam, I would in effect work five and a half of those days unpaid. Using GPT-o3, I could theoretically process all 300 exams in under 45 minutes.
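The arithmetic behind those totals can be sketched as a quick calculation, using the figures above (the 37-minute human pace and 8.5-second GPT-o3 time are the ones measured in my experiment; the 28-minute tariff is as stated):

```python
# Back-of-envelope check of the marking-load arithmetic above.
exams = 300
human_min_per_exam = 37      # my measured pace per 3,000-word exam
tariff_min_per_exam = 28     # the marking allocation cited per exam
gpt_sec_per_exam = 8.5       # observed GPT-o3 time per script

days_needed = exams * human_min_per_exam / 60 / 8   # eight-hour working days
days_paid = exams * tariff_min_per_exam / 60 / 8
days_unpaid = days_needed - days_paid
gpt_minutes = exams * gpt_sec_per_exam / 60

print(f"Human: {days_needed:.1f} days, of which {days_unpaid:.1f} unpaid")
print(f"GPT-o3: {gpt_minutes:.1f} minutes for all {exams} exams")
```

The human load comes to about 23.1 eight-hour days, of which roughly 5.6 are unpaid, while the model would clear the full stack in 42.5 minutes.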
To be clear, I have not used generative AI to mark exams this year. But it would take uncommon virtue for our underpaid TAs to resist. As Anatole France observed, “The law, in its majestic equality, forbids rich and poor alike to sleep under bridges, to beg in the streets, and to steal their bread.” Universities’ blanket bans on AI for marking fall equally on professors and precarious tutors, yet only the latter must choose between subsistence and compliance.
While marking boycotts highlight how systematic wage theft is embedded in many UK universities’ business models, a £15-a-month ChatGPT Plus subscription protects TAs’ bottom line far more effectively. Cutting marking time from weeks to minutes would turn even today’s paltry tariff into a de facto pay rise. Many TAs at SOAS – and across the UK – have surely reached the same conclusion.
Of course, if TAs pass on their marking to AI, vice-chancellors might simply respond by cutting out the middlemen altogether and outsourcing the marking directly to bots. But that amplified penny-pinching would expose the emptiness of their ethical commitment to the “implicit contract”.
If university managers want human-centred institutions, they must be willing to fund the labour that human-centred education requires.
is a reader in comparative politics and a member of the AI Working Group on Learning and Teaching at SOAS University of London.