
Pros and cons of the metrics system

Published on May 16, 2013
Last updated May 22, 2015

The main problem with the research excellence framework is the huge amount of time and money that it consumes, distracting academics from scholarly activities and diverting funds to employ squads of administrators to manage it (“For richer, for poorer”, 9 May).

In a recent post, I reported an analysis showing that you could pretty accurately predict the research assessment exercise’s psychology results by taking an H-index (an index that attempts to measure the productivity and impact of academics’ published work) based on departmental addresses.

In the comments on the post, there is further evidence that this is also true for physics. Computing the H-index takes an individual two to three hours, as opposed to the two to three years spent by legions of people working on the REF.
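For readers unfamiliar with the metric, here is a minimal sketch in Python of the standard H-index calculation (the largest h such that at least h papers have h or more citations each). This illustrates the generic definition only, not Bishop’s departmental analysis, and the citation counts shown are invented for illustration.

# Sketch of the standard h-index definition: the largest h such that
# at least h papers each have h or more citations.
def h_index(citations):
    """Return the h-index for a list of per-paper citation counts."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank          # this many papers have at least this many citations
        else:
            break
    return h

# Example: five papers with these citation counts give an h-index of 3,
# because three papers have at least three citations each.
print(h_index([10, 8, 5, 2, 1]))  # -> 3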

At present, we don’t know whether such differences as exist between H-index ratings and RAE panel ratings mean that the latter were better. For both psychology and physics, once you had taken the H-index into account, additional variance in funding levels could be explained by whether there were departmental representatives on the assessment panels.


This suggests that the additional value given by having expert assessment incorporated in such evaluations may just be adding subjective bias, which does not necessarily indicate any malpractice but could reflect the advantage that panel members have of intimate knowledge of how the panels work.
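By way of illustration only, a toy sketch of that kind of incremental-variance check follows: regress funding on the departmental H-index alone, then add a panel-membership indicator and see how much extra variance (R²) it explains. The data and variable names here are entirely invented, not the actual RAE/REF figures.

import numpy as np

def r_squared(X, y):
    """R^2 from an ordinary least-squares fit of y on X (intercept included)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)

rng = np.random.default_rng(0)
dept_h_index = rng.uniform(20, 80, size=50)            # hypothetical departmental H-indices
on_panel = rng.integers(0, 2, size=50).astype(float)   # 1 if the department had a panel member
funding = 2.0 * dept_h_index + 15.0 * on_panel + rng.normal(0, 10.0, size=50)

r2_metric_only = r_squared(dept_h_index.reshape(-1, 1), funding)
r2_with_panel = r_squared(np.column_stack([dept_h_index, on_panel]), funding)
print(f"extra variance explained by panel membership: {r2_with_panel - r2_metric_only:.3f}")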

Most of us don’t like metrics, and I accept that they may not work in the humanities, but I would suggest that if we are not going to use a metric like this, we need to do an analysis to justify the additional cost incurred by any alternative procedure. If the additional cost is very high, then we might decide that it is preferable to use a metric-based system, warts and all, and to divide the money saved between all institutions.


Dorothy Bishop
Via timeshighereducation.co.uk

One of the problems with the use of citations as a measure of research quality is that the method assumes that the higher the number, the greater the quality. Ignoring the possibility of “tit for tat” reciprocity between mates, what if your article is cited, but the citation is immediately preceded by “for a total misunderstanding of even these basics, see Mead…”?

In addition, citations don’t work as a measure of anything where the chance of others quoting your work is low: if I were ploughing a relatively lonely research furrow, I’d prefer to take my chances with a subpanel of the great and the good. The fact that no one else is relying on my work because no one else is interested in it (or has even heard of it) says nothing about whether it is good, bad or indifferent. Like impact, citations therefore have a tendency to skew personal research interests into institutional research agendas, favouring more of the greatly populated same, not those who are pushing boundaries and exploring for its own sake.

David Mead
Professor of public law and UK human rights
University of Essex

