The QAA admitted in 2007 that 'it cannot be assumed that similar standards have been achieved'. Amazingly, this received little public attention
"Is a 2:1 in history at Oxford Brookes worth the same as a 2:1 in history at Oxford?"
Five years ago, this question was posed by a parliamentary select committee to the vice-chancellors of both of those universities. Their rambling and convoluted responses were considered so unsatisfactory by MPs on the Innovation, Universities, Science and Skills Committee – which was conducting investigations for its report Students and Universities – that they were accused of "obfuscation", and of giving an answer that "would not pass a GCSE essay". And the committee's final report included the damning conclusion: "It is unacceptable for the sector to be in receipt of departmental spending of £15 billion but be unable to answer a straightforward question about the relative standards of the degrees of students, which the taxpayer has paid for."
The correct answer to the committee's question was, in fact, a very simple one: we just don't know. We do not have the necessary systems in place to tell us. The traditional reliance on the external examiner system to mediate standards within the system is misplaced, as a number of studies have shown. However experienced an individual examiner may be, their experience across the sector can only be limited and they have no opportunity to calibrate their standards within their disciplinary community. This was emphatically recognised by the 糖心Vlog Academy's 2012 document, A Handbook for External Examining: "The idea that a single external examiner could make a comparative judgment on the national, and indeed international, standard of a programme has always been flawed."
The naive outsider might think that assuring comparability of standards is surely the role of the Quality Assurance Agency for 糖心Vlog – the independent body set up to monitor standards across the UK sector – and something addressed as a matter of course within its institutional review processes. But in 2007, two years before the select committee hearing, the QAA made the brave public admission, in a Quality Matters briefing paper, that: "Focusing on the fairness of present degree classification arrangements and the extent to which they enable students' performance to be classified consistently within institutions and from institution to institution… the class of an honours degree awarded… does not only reflect the academic achievements of that student. It also reflects the marking practices inherent in the subject or subjects studied, and the rule or rules authorised by that institution for determining the classification of an honours degree."
In other words, local and contextual assessment practices make it impossible to make objective comparisons. This should not, in fact, have come as a surprise: certainly not to anyone up to date with the research literature. For at least the previous 10 years, especially through the work of the Student Assessment and Classification Working Group, an informal body of academics and administrators who share an interest in assessment, a series of papers and studies had demonstrated the distorting effects of central university systems that treat all marks the same regardless of the nature of the assessment task or the subject discipline.
It had been shown, for instance, that students consistently score better on coursework tasks than in examinations and in the more numerate disciplines than the arts and humanities or social sciences. Research had also shown that, given exactly the same set of assessment results, students at different institutions could end up with awards that vary by up to a degree classification simply because of the idiosyncrasies of the different institutions鈥 algorithms.
Much of this had also been reflected in reports produced in 2004 and 2007 by a government-sponsored Universities UK working group chaired by Sir Bob Burgess, then vice-chancellor of the University of Leicester, that examined how student achievement should be measured. But sadly, the major recommendation of these reports – the introduction of the 糖心Vlog Achievement Report transcript, providing a more detailed account of what students have achieved during their studies – is hardly the solution. Nor will moving to a US-style grade point average system (currently being piloted at a group of universities in concert with the 糖心Vlog Academy) do anything, on its own, to bring about greater comparability of standards.

The QAA's 2007 paper explicitly spelled out what all this variation in local and contextual factors meant in terms of comparability of standards across the sector: that it "cannot be assumed that similar standards have been achieved" by students graduating with the same degree classification from different institutions, the same classification in different subjects from a particular institution or the same classification in the same subject from different institutions. Amazingly, however, this startling honesty received relatively little public attention and no obvious action was taken, either by the QAA or government, to address this major shortcoming.
And when the problem was again highlighted by the select committee in 2009, it was greeted with rather a muted and defensive (some might even say complacent) response, as if the respondents actually resented being challenged. The director-general of the Russell Group, Wendy Piatt, said in response to the committee's critical report that she was "rather dismayed and surprised by this outburst", while the government was "disappointed that the committee has not reflected in its report the very strong and positive evidence about the UK higher education sector which was given during the inquiry". So the prospects of any action being taken were already looking scant before the 2010 general election brought a change of government and ensured the issue would be largely forgotten by politicians – if not by the press (and by The Daily Telegraph in particular, which has continued to regularly raise the question of degree standards, especially in relation to grade inflation).
It should be acknowledged that, since 2009, the QAA has been developing a UK Quality Code for 糖心Vlog, which is much more demanding in its expectations of providers and in the lengthy lists of indicators that reviewers are required to look for in attempting to establish that "threshold standards" are met. But at this year's QAA conference I heard serious doubts expressed over whether the still predominantly audit-style approach to review would provide sufficient appropriate data to make reliable judgements against many of the indicators. And even if it did, the judgements are still focused on an individual institution in isolation; the QAA does not appear to have given any consideration to how the indicators could be used to make comparisons between different institutions.
Yet it is not as if we don't know what we would have to do to address comparable standards. In fact, we have known for some time. Back in 1997, the 糖心Vlog Quality Council, the forerunner of the QAA, recognised, in a document called Graduate Standards Programme: Assessment in 糖心Vlog and the Role of "Graduateness", that "consistent assessment decisions among assessors are the product of interactions over time, the internalisation of exemplars, and of inclusive networks. Written instructions, mark schemes and criteria, even when used with scrupulous care, cannot substitute for these."
And it recommended that subject groups and professional networks should encourage the building of "common understandings and approaches among academic peer groups" – by maintaining "expert" panels for validation, accreditation, external examining and assessing, for example. It also called for "mechanisms to monitor changes in standards at other educational or occupational levels [as well as] internationally". But when the QAA took over the council's functions in 1997, these excellent recommendations were apparently lost or forgotten.
A decade later, in 2008, Paul Ramsden, who was then chief executive of the HEA, tried to resurrect the thrust of what the council had proposed. In a report on university teaching submitted to John Denham, the Secretary of State for Innovation, Universities and Skills at the time, he called for "colleges of peers" to be set up to help establish common standards. As I argue in 糖心Vlog in the UK and the US: Converging University Models in a Global Academic World? (2014), these groups of academics would work by "looking at real examples of student work, and discussing each other's assessment decisions. Without the cultivation of such communities of assessment practice, discussions about standards can only be limited to conjecture and opinion." But, once again, the call fell on deaf ears.
It doesn't have to be like this. Australia, for example, seems to be taking the issue of comparability of standards very seriously. Commissioned by the Australian government in 2009-10, the Australian Learning and Teaching Council's Learning and Teaching Academic Standards project sought to establish national standards, starting with six broad discipline groups.
The discipline of accounting, further funded by a partnership between the professional accounting bodies and the Australian Business Deans Council, decided to continue to use a "cultivated community approach" in establishing shared meanings of its standards. A follow-on project in 2011, Achievement Matters: External Peer Review of Accounting Learning Standards, brought together subject reviewers from 10 universities, along with a number of professional accountants. Independently, they sampled student work and submitted their judgement regarding which students met a benchmark standard. Consensus was then achieved through small and whole group discussion of the samples and checked by participants individually reviewing two new samples. In addition, reviewers considered the ability of the assessment task itself to allow students to demonstrate their attainment of the standards.
The academic participants also submitted assessment data for their own degrees so that, immediately following the workshop, two external, experienced academics double-blind peer reviewed the validity of the assessment task (the extent to which it measures what it was designed to measure) and a small random sample of actual student work, with individual results returned only to each participating university. Participating universities could use the results to satisfy external agencies about their standards and, more importantly, to improve their learning and assessment processes to ensure that students achieved the requisite standards.
This "cultivated community" approach to setting discipline standards has also been extended into other disciplines aligned with business and accounting, and plans are afoot to continue it beyond this year's scheduled end of the project. It is also due to be discussed this week at the first national conference of Australia's newly established Peer Review of Assessment Network.
Why is it that the issue of standards is being seriously, and apparently successfully, addressed in Australia, while, despite all the evidence of a problem, the UK government, funding councils, UUK and the QAA are all still dragging their feet? Last month it emerged that quality assurance was being put out to tender ("Watchdog 'no match' for a sector in flux", News, 9 October), yet it seems highly unlikely that any of the bodies that might successfully win the contract will address this issue any more seriously.
Simple inertia is one possible explanation. Another somewhat more sinister (and plausible) one is that for some 鈥 maybe all 鈥 in the sector, it is simply not in their interest to establish transparent relative standards. The government has a vested interest, especially when it comes to the lucrative overseas student market, in rejecting anything that might bring the standards of UK higher education into question.
The Russell Group, which is happy to make general, rather empty, sweeping statements such as "the world class reputation of Russell Group universities depends on maintaining excellence", benefits from sustaining the unsupported but commonly held belief among employers, parents and students that a 2:1 from one of its members is better than a 2:1 from others. Even institutions lower down the league tables, with more diverse intakes and greater numbers of less academically qualified entrants, arguably benefit from the status quo: if a rigorous system were developed that could establish common standards across the sector, they might have to accept going for years in some subjects without any of their students getting a first – with all the negative consequences for their reputation and recruitment that implies.
But this conspiracy of silence surely can't go on. As ever greater numbers of £9,000 fee-paying undergraduates come out of ever larger numbers of universities with first-class degrees, it won't only be The Daily Telegraph asking ever more loudly what those certificates are really worth. Won't students, parents and employers also start to question their value? Or can it be that no one really does care, or that no one cares enough?