As another marking season draws to a close, tales abound of how much AI-written slop academics have had to wade through. Yet I suspect that many academics quietly enlisted a measure of automated assistance as they worked through their daunting stacks of exam scripts.
Of course, few will confess that upfront, given the prohibition most universities maintain on the use of AI in marking student submissions. The primary and secondary education sector, by contrast, appears to be cautiously embracing it. Just last month, England's Department for Education issued guidance explicitly allowing teachers to use generative AI to support marking and feedback, and the Joint Council for Qualifications, which represents the largest awarding bodies, now permits the use of AI in essay marking provided that it is not the sole marker. Yet, in the tertiary education sector, some universities go so far as to frame AI-assisted marking not only as contrary to regulations but as inherently unethical.
The University of Leicester is a good example. It argues that assessment must be carried out by human markers in order to honour an "implicit contract" with students and ensure the integrity of the process. Some universities also emphasise that assessment is a relational process, allowing the educator to 'get to know' a student.
While these rhetorical commitments to human-centred education are laudable, they are also deeply hypocritical given the decades-long disputes across UK higher education over fair pay and working conditions for academic staff.
Take my own institution, SOAS University of London. In 2010, protests broke out after tutors found that the pay offered failed to cover the actual hours required for marking and feedback. A similar protest erupted in 2014 after an internal survey of staff on fractional contracts, including part-time tutors and graduate teaching assistants (TAs), revealed that over half of their working hours were in essence unpaid. A later survey showed that 40 per cent of fractional teaching work still went unremunerated.
Disputes over pay for marking resurfaced at SOAS in 2017, 2020, 2022 and 2023. In the current academic year, TAs have once again boycotted marking, citing unacceptable payment tariffs and working conditions. According to an April all-staff email from Dillon Maxwell, SOAS UCU's anti-casualisation officer, the average reading rate for anglophone adults is 238 words per minute. "Based on this, it takes around 13 minutes to read a 3,000-word essay one time, and for texts with difficult prose (which some students' scripts will have) one could reasonably expect this to be even higher," he wrote. "Moreover, for non-native English speakers, which a large number of tutors are, it would likely take longer."
According to the training mandated for all SOAS TAs, feedback for assessments should be approximately 200 words. Hence, "assuming the average typing rate of 40 words per minute, a typical tutor would spend around five minutes writing feedback for a student if they never paused to think," Maxwell went on.
"The relevant marking tariff proposed by [SOAS] is 28.8 minutes, [which] leaves less than 11 minutes to evaluate a student's script, think about appropriate and useful feedback, and link it to the relevant marking criteria in a clear and concise way. All this is before considering the fact that tutors, like everyone else, must eat, drink, use the bathroom, and rest their minds in order to focus and work effectively."
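Maxwell's arithmetic is easy to check. Here is a minimal sketch of it in Python; the figures are his, the variable names and rounding are mine:

```python
# Reproducing the time budget described in Maxwell's email (illustrative only).
READING_RATE_WPM = 238    # average anglophone adult reading rate
TYPING_RATE_WPM = 40      # assumed average typing rate
ESSAY_WORDS = 3_000       # length of a typical script
FEEDBACK_WORDS = 200      # feedback length mandated by TA training
TARIFF_MINUTES = 28.8     # marking tariff proposed by SOAS

reading_minutes = round(ESSAY_WORDS / READING_RATE_WPM)   # ~12.6, "around 13" in the email
writing_minutes = FEEDBACK_WORDS / TYPING_RATE_WPM        # 5 minutes, with no pauses to think
thinking_minutes = TARIFF_MINUTES - reading_minutes - writing_minutes

print(f"Left to evaluate the script and formulate feedback: {thinking_minutes:.1f} minutes")
# -> 10.8 minutes, before breaks of any kind
```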
After reading Maxwell's email, I ran a simple experiment. I marked a 3,000-word student exam, consisting of three answers of 1,000 words each. It took me 37 minutes to read and assess the three responses and to write 200 words of feedback for each. I then gave the same anonymised exam to GPT-o3, OpenAI's latest model, known for its "advanced reasoning capabilities". In just 8.5 seconds, it produced nuanced, well-structured and on-point commentaries of about 850 words each for all three answers.
A standard marking load for TAs at SOAS is around 300 exams. At my pace, it would take more than 23 full working days (marking non-stop eight hours a day) to complete that load. Given the 28.8-minute marking allocation per exam, I would in effect work more than five of those days unpaid. Using GPT-o3, I could theoretically process all 300 exams in under 45 minutes.
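For the sceptical, the same back-of-the-envelope calculation in Python; these are my own timings and the tariff quoted above, nothing official:

```python
# Scaling my timings to a full marking load (illustrative only).
EXAMS = 300                  # standard TA marking load at SOAS
MY_MINUTES_PER_EXAM = 37     # my timed pace for one 3,000-word exam
TARIFF_MINUTES = 28.8        # paid time per exam under the proposed tariff
GPT_SECONDS_PER_EXAM = 8.5   # GPT-o3's turnaround in my experiment
WORKDAY_MINUTES = 8 * 60

total_days = EXAMS * MY_MINUTES_PER_EXAM / WORKDAY_MINUTES                       # about 23.1 days
unpaid_days = EXAMS * (MY_MINUTES_PER_EXAM - TARIFF_MINUTES) / WORKDAY_MINUTES   # about 5.1 days
gpt_minutes = EXAMS * GPT_SECONDS_PER_EXAM / 60                                  # about 42.5 minutes

print(f"{total_days:.1f} working days at my pace, {unpaid_days:.1f} of them effectively unpaid; "
      f"about {gpt_minutes:.1f} minutes with GPT-o3")
```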
To be clear, I have not used generative AI to mark exams this year. But it would take uncommon virtue for our underpaid TAs to resist. As Anatole France observed, "The law, in its majestic equality, forbids rich and poor alike to sleep under bridges, to beg in the streets, and to steal their bread." Universities' blanket bans on AI for marking fall equally on professors and precarious tutors, yet only the latter must choose between subsistence and compliance.
While marking boycotts highlight how systematic wage theft is embedded in many UK universities' business models, a £15-a-month ChatGPT Plus subscription protects TAs' bottom line far more effectively. Cutting marking time from weeks to minutes would turn even today's paltry tariff into a de facto pay rise. Many TAs at SOAS, and across the UK, have surely reached the same conclusion.
Of course, if TAs pass on their marking to AI, vice-chancellors might simply respond by deciding to cut out the middlemen altogether and directly outsourcing the marking to bots. But that amplified penny-pinching would expose the emptiness of their ethical commitment to the "implicit contract".
If university managers want human-centred institutions, they must be willing to fund the labour that human-centred education requires.
is a reader in comparative politics and a member of the AI Working Group on Learning and Teaching at SOAS University of London.