
Paper rejected after plagiarism detector stumped by references

Editors say episode demonstrates need for humans to review work of software

Published on June 26, 2019. Last updated June 26, 2019.
Dalek. Source: Alamy

Warnings that robots are coming to steal our jobs may have understated the threat, following an incident which suggests that automation is going a step further and preventing human discoveries from being published at all.

Jean-François Bonnefon, a research director at Toulouse School of Economics, told peers of his surprise in learning that a paper he submitted to an unnamed journal had been “rejected by a robot”.

According to Dr Bonnefon, “the bot detected ‘a high level of textual overlap with previous literature’. In other words, plagiarism.” On closer inspection, however, the behavioural scientist saw that the parts that had been flagged included little more than “affiliations, standard protocol descriptions [and] references”: namely, names and titles of papers that had been cited by others.

“It would have taken two [minutes] for a human to realise the bot was acting up,” he wrote. “But there is obviously no human in the loop here. We’re letting bots make autonomous decisions to reject scientific papers.”


Reaction to the post by Dr Bonnefon, who is currently a visiting scientist at the Massachusetts Institute of Technology, suggested that his experience was far from unique. “Your field is catching up,” said Sarah Horst, professor of planetary science at Johns Hopkins University. “This happened to me for the first time in 2013.”

Sally Howells, managing editor of the Journal of Physiology and Experimental Physiology, said that her publications and most others used Turnitin’s iThenticate to detect potential plagiarism.


“However, this is the first time that I have seen a ‘desk rejection’ based solely on the score,” she said.

Ms Howells said that most editors would ask the system to exclude references from a plagiarism scan. “The software is incredibly useful, but must always be checked by a human,” she said. “Thankfully there are still a few of them left.”
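The practice editors describe, ignoring the references section before measuring textual overlap, can be illustrated with a toy sketch. This is not how iThenticate works internally; the n-gram comparison, the `strip_references` heading heuristic and all function names here are illustrative assumptions only.

```python
import re


def overlap_score(submission: str, prior_text: str, n: int = 5) -> float:
    """Return the fraction of the submission's word n-grams that also
    appear in the prior text (a crude proxy for 'textual overlap')."""
    def ngrams(text: str) -> set:
        words = re.findall(r"[a-z]+", text.lower())
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    sub = ngrams(submission)
    if not sub:
        return 0.0
    return len(sub & ngrams(prior_text)) / len(sub)


def strip_references(text: str) -> str:
    """Drop everything from a 'References' heading onward, so that
    cited titles and author names are not counted as overlap."""
    return re.split(r"\nReferences\b", text, maxsplit=1)[0]
```

A paper whose only "overlap" lies in its reference list scores high when scanned whole, but zero once the references are stripped, which is exactly the human check the editors argue the bot skipped.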

Kim Barrett, editor-in-chief of The Journal of Physiology and distinguished professor of medicine at the University of California, San Diego, agreed that anti-plagiarism tools “need to be used appropriately, and they should never be the basis for an automatic rejection”.

Mark Patterson, executive director of the online megajournal eLife, said that his platform did not use software to screen for plagiarism but did conduct “a number of quality control checks… in addition to the scrutiny by the editors”.


“Where computational methods are used at other publishers, staff then need to interpret the findings to avoid situations like the one highlighted,” he said. “In the future, of course, these techniques are likely to get much better.”

rachael.pells@timeshighereducation.com

POSTSCRIPT:

Print headline: Confused robot says no to paper



Reader's comments (2)

...and you think this is exclusive to academe? The big consultancy firms have been selling this idea to large corporations for years, mostly for use in purchasing. The accountants like it because it removes any professional judgement, skill and expensive people. Because they control the data, they can prove how much they appear to have saved. Beware: nobody in the commercial world noticed that robots have no sense of value.
It also gets in the way of integrating assessment across a course. Say you have a final year project module and one about testing running concurrently. So the lecturer writing the testing module invites students to use their project in their testing coursework, empowering them to apply their new-learned testing skills to a real piece of work. Great you might think... until they write their project reports and naturally wish to explain how they tested their work. All Turnitin's bells go off at once! Of course this is an opportunity to teach the students about 'self-plagiarism' and the need to reference things that they themselves have written elsewhere - but it's just as well this was spotted in time to give them the necessary guidance BEFORE they handed in their reports :)
