Those charged with planning the next research excellence framework could have been forgiven a weary sigh on hearing that there is to be yet another review of the exercise.
The news, announced in November's spending review, came despite reviews published last year (two of which I was involved in), which concluded that the REF worked, was value for money and that metrics are not yet sophisticated enough to serve as an alternative form of research assessment.
But clearly this evidence has not been persuasive, with the government now intent on examining, in its own words, "how to simplify and strengthen funding on the basis of excellence". The review, to be chaired by Lord Stern of Brentford, president of the British Academy, follows up on the suggestion made in November that the government is keen to see "greater use of metrics" in order to "challenge the cost and bureaucracy" of the REF.
There are two defining characteristics of the REF. One is that it is the means by which funding is distributed on one side of a dual-support system. A recent review, which the chancellor committed in the spending review to implementing, concluded that dual support "should be preserved", so some kind of REF will still be needed.
The second defining characteristic is that it is performance-related. The first question a review should therefore answer is whether this is appropriate. If not, then a simple and cheap formulaic system could be based on the volume of research grant funding generated by universities. Some might argue that this would still be performance-related because research grants are awarded competitively, but the point of the REF is that it rewards the outputs and outcomes (or impacts) of research, as opposed to the inputs of research funding.
Any assessment of performance comes at a cost, raising the question of what cost is appropriate. We know that the transaction costs for the 2014 REF were 2.4 per cent of the total funding that will be allocated on the basis of its results. That is significantly cheaper than research council transaction costs. Although the analysis on which this figure is based is somewhat dated, a review could update it, as well as set clear evidence-based expectations as to what are appropriate transaction costs for a REF-like assessment.
If a review concludes that the system needs to be performance-related and that the dual-support system needs to be preserved, then the use of metrics reappears as a solution. This is despite the conclusion of James Wilsdon's report, The Metric Tide, commissioned by Lord Willetts, the former universities and science minister, and published in the summer, that "individual metrics give significantly different outcomes from the REF peer review process, and therefore cannot provide a like-for-like replacement for REF peer review".
Although I am sympathetic to much of what is said in The Metric Tide, it is perhaps too simplistic to conclude that, when it comes to research outputs, "it is not currently feasible to assess…quality…using quantitative indicators alone". The issue here is the "like-for-like" comparison. The implicit assumption is that the peer review process of the REF is the benchmark against which alternatives should be compared. But we know that peer review is not perfect: it can be biased against non-established groups, can inhibit innovation and is a subjective process. As we showed last year in a study, it is possible to undertake bibliometric analysis of research outputs for some (mostly science) subjects. However, bibliometrics is not without its own flaws: it is biased towards certain subjects, can be dated and is most relevant for journal articles (not books).
In other words, we are comparing two imperfect systems. Under such circumstances, it is fair to ask about the costs of assessment; to put it another way, the efficiency of the REF as opposed to its effectiveness.
We know from a recent report that the absolute cost of the 2014 REF was £246 million (the 2.4 per cent transaction cost already referred to). The £232 million cost to submitting institutions consisted of about £212 million for the submission process and about £19 million for panellists' time. The biggest single cost (£93 million) was the selection of staff and publications. One way to eliminate this would be to submit all staff, although that would clearly have upward cost implications around the volume of assessment and may generate some unintended behaviour, such as incentivising the movement of staff on to teaching-only contracts.
Examining the details of the costs also suggests that a move to metrics might not save as much as it would seem at first sight. Given the widely accepted inability of metrics to replace impact case studies, a best-case scenario would be that the environment element became wholly metrics-based, which would save £34 million, and that outputs to the science panels would be wholly assessed through bibliometrics, which would save about £15 million (half of the cost of assessing outputs).
But the use of metrics will not be cost free, so let's set aside £4 million for their central generation and management. This results in total savings of £45 million. This is clearly a crude and heuristic analysis, but even if it is out by a factor of two, the saving is still probably not as much as the government anticipates.
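The back-of-envelope calculation above can be checked in a few lines of Python. The figures are those quoted in this article; the comparison against the total 2014 cost is my own illustrative addition:

```python
# Figures from the article, in £ millions.
total_cost_2014 = 246      # absolute cost of the 2014 REF
environment_saving = 34    # environment element assessed wholly by metrics
outputs_saving = 15        # bibliometric assessment of science-panel outputs
metrics_overhead = 4       # central generation and management of metrics

net_saving = environment_saving + outputs_saving - metrics_overhead
print(f"Net saving: £{net_saving} million")  # £45 million

share = net_saving / total_cost_2014 * 100
print(f"Share of total 2014 cost: {share:.0f} per cent")  # about 18 per cent
```

Even in this best-case scenario, the saving amounts to less than a fifth of the exercise's total cost, which is the point of the paragraph above.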
That is not to say that another review is a waste of time (and money). Done right, and focused on the right questions, it could help to fill in the gaps in the existing body of evidence and perhaps challenge some of the myths that have arisen around the supposedly exorbitant costs of the REF's current incarnation.
Jonathan Grant is director of the Policy Institute, professor of public policy and assistant principal for strategy at King's College London.
POSTSCRIPT:
Print headline: Settling the bill