Crystal-clear concoctions: Snowball Metrics has crafted recipes that anyone can use to whip up comparisons and slice and dice data on various facets of research
Vice-chancellors' ever greater focus on rankings and bibliometrics suggests that a good benchmarking exercise might just constitute the most fun they ever have without taking their clothes off.
But their fascination with how their institution measures up against others is not merely idle. According to John Green, a life fellow of Queens' College, Cambridge and the retired chief coordinating officer of Imperial College London, senior managers are all faced with questions like: "Why am I losing income in neuroscience? Is it because there is less money in the system, or because I am losing market share to Cambridge?", "How should I decide whether to invest in photovoltaics or nanoscience?" and "I am looking to collaborate; how do I know who is truly strong in photovoltaics?"
"They need metrics to understand all that," he says.
The problem is that a good benchmarking exercise is not easy to come by. Before they halted the practice because of fears it might be anti-competitive, the "big five" UK universities in terms of research income used to meet to compare their success ratios for funding applications, Green says. "But Oxford never counted a funding application as lost for two years, whereas Imperial did so after six months if they hadn't had an award letter. So we were not comparing apples with apples."
Other problems with existing methods of comparison include the various formats in which compliance bodies require data to be prepared, and the tendency of the figures to be outdated by the time they are published.
Feeling that it was time he "gave something back", Green decided four years ago to do something about it. His aim was to create a set of "bottom-up", universally agreed research-related metrics, complete with standardised "recipes" for how they should be calculated, including the data sources available for doing so. Their eminent usefulness, he hoped, would lead to their catching on around the world in a snowball effect, hence his choice of name: Snowball Metrics.
Green assembled and chaired a steering group of eight UK research-intensive universities, including Imperial, University College London and the universities of Cambridge and Oxford. He enlisted Elsevier, which owns the Scopus citation database, to manage the project on a pro bono basis and to create pilot tools to test whether the recipes the group came up with were "cookable". These tools also allowed the institutions to "slice and dice" data according to a number of "denominators", such as theme, discipline and department, or to normalise for factors such as the number of researchers.
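The idea of "cooking" raw records into a shareable metric, sliced by a chosen denominator and normalised per researcher, can be sketched in a few lines. This is a minimal illustration, not the actual Snowball Metrics recipes: all field names, figures and headcounts below are invented for the example.

```python
# Hypothetical sketch of "cooking" raw award records into a shareable metric,
# sliced by a denominator (theme or department) and normalised per researcher.
# All data is invented for illustration.
from collections import defaultdict

raw_awards = [
    {"department": "Physics", "theme": "photovoltaics", "value": 1_200_000},
    {"department": "Physics", "theme": "nanoscience", "value": 800_000},
    {"department": "Chemistry", "theme": "photovoltaics", "value": 500_000},
]
researchers_per_department = {"Physics": 40, "Chemistry": 25}

def cook_award_volume(records, denominator):
    """Aggregate raw award values by a chosen denominator field."""
    totals = defaultdict(int)
    for record in records:
        totals[record[denominator]] += record["value"]
    return dict(totals)

def normalise_per_researcher(totals_by_department, headcounts):
    """Divide each department's cooked total by its researcher headcount."""
    return {dept: total / headcounts[dept]
            for dept, total in totals_by_department.items()}

by_theme = cook_award_volume(raw_awards, "theme")
by_dept = cook_award_volume(raw_awards, "department")
per_head = normalise_per_researcher(by_dept, researchers_per_department)
```

Only the cooked outputs (`by_theme`, `per_head`) would ever be shared with a benchmarking partner; the raw records stay inside the institution.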
A crucial part of the arrangement is that each institution gets to see only others' cooked, rather than raw, data, not least to avoid any potential for falling foul of competition law. However, for Green, a key feature of the recipes is that they are free and "supplier agnostic": anyone, not just Elsevier, can cook them, using commercial tools or even just spreadsheets. (He admits that such self-cooking presents an opportunity for dishonesty, but feels that only once the metrics are widely adopted will it be appropriate to consider setting up a standards agency to "police" universities' kitchens for potential rats.)
Elsevier has also pledged to build a free Snowball Metrics Exchange, which will allow universities to form "benchmarking clubs". This "I'll show you mine if you show me yours" ethos is another key aspect of how Green sees the recipes being used.
'Recipe' book gives user-friendly tips
The first Snowball Metrics Recipe Book was published in 2012, with 10 recipes relating to areas such as research funding and output. And so great has been the universities' enthusiasm for the unexpected insights that cooking the recipes has provided that a second edition was published at the end of last month, containing a further 14 recipes relating to factors such as collaboration, societal impact, intellectual property and spin-offs.
Crucially, the new edition incorporates six metrics that have also been adopted by a parallel working group of seven universities in the US, including the University of Michigan, Northwestern University and the University of Illinois at Urbana-Champaign. According to Green, the Americans' original fears that differing nomenclatures and data sources would scupper alignment of US and UK recipes have been overcome by some minor tweaks to their wording.
The sense of a global snowball starting to gather momentum is reinforced by interest from a group of Australian and New Zealand universities (including the universities of Queensland and Auckland), the Japanese RU11 Group of research-intensive institutions and the Association of Pacific Rim Universities.
Although the US universities will continue to develop their own recipes before comparing them with the UK ones, a process Green describes as "arduous", he hopes that once they are happy with their formulations, the adoption of the UK-US recipes as global standards will begin apace.
The prospect of global adoption is also boosted by the enthusiasm of funders for a robust way to compare institutional strengths in various disciplines: the "holy grail", according to Green, being to perfect a way to link research inputs with outputs via digital "fingerprinting" of relevant documents and data.
Future role in REF mooted
Of course, all this is also potentially of great relevance for the research excellence framework and, sure enough, Green has made a submission to the Higher Education Funding Council for England's independent review of the role of metrics in research assessment. It says that if Hefce chooses to adopt metrics for the REF, it should take on board what has been achieved with Snowball Metrics and avoid "reinventing the wheel".
Green also suggests Hefce, or the Higher Education Statistics Agency, as the ideal "neutral body" he would like eventually to take ownership of the metrics and develop them further. However, he believes that metrics should only "play a part" in research evaluation alongside peer review, and he would "hate" to see Snowball Metrics "grabbed" for the 2020 REF.
"The great thing about the REF is that, in a rather subtle way, the academics buy into it, partly because of [their role in] the peer review bit," Green says.
"I hope to grow that same trust in Snowball. I don't ever want to get into the space of saying these metrics would be helpful in the REF: the sector has to work that out for itself."