Peer review sucks. That is the conclusion of the American psychologist Adam Mastroianni. He's not the first person to say this, of course. Other academics have made similar arguments. But Mastroianni has struck a chord with his compellingly unabashed argument that peer review should be abandoned.
It helps that he's right – peer review really does suck. It does a terrible job of weeding out bad science, but a surprisingly great job of slowing down and tripping up good science. But I want to focus on what comes next. If we scrap peer-reviewed journals, what on earth do we replace them with? Is it simply the case that peer review is the worst system of publication – except for all the others?
The key issue that any alternative system has to grapple with is discoverability. I'll use one of my own papers as an example. This paper followed the traditional publishing model. I submitted it to a peer-reviewed journal and, after more than a year and two rounds of revisions, it was published. It didn't set the world on fire, but a steady trickle of citations over the years suggests that at least some people working in my field are reading it.
What would I have done with this paper in a world without peer-reviewed journals? I could have followed Mastroianni's example and simply posted it on my own website. Except I didn't have a website. And even if I had, no one would have visited. I could have used social media to promote it, but I don't use social media because, to adapt an old Stewart Lee joke, "the internet is a flood of sewage that comes unbidden into your home. Social media is like you constructed a sluice to let it in".
This is a point made by several commentators. In a world without journals, a paper's visibility will be determined largely by its authors' ability and willingness to generate attention. A paper by a second-year PhD student with zero social media game would almost certainly sink without trace.
So without peer review, how will we avoid being swamped by an ocean of dreck? How will we prevent the devolution of scientific publishing into a YouTube-style attention-economy hellscape?
A good place to start has got to be the existing system of preprint publishing. Preprint repositories, such as the physics arXiv and its counterparts in other fields, are in essence minimally filtered databases of research papers in various states of completion. We could simply abolish journals and ask researchers to upload their papers to these repositories instead; however, the result would be a discoverability nightmare for the reasons we've already covered. Instead, I believe that a truly viable system would need at least three additional features.
First, there must be a way to assess and communicate research quality. The obvious way to do this would be to allow readers to publicly comment on and rate research papers. This is a form of post-publication review, which allows readers to easily see how a paper has been received by other scientists (unlike traditional pre-publication reviews, which typically disappear after a paper is published). This already happens to some extent, but it is likely to play a much larger role in a world without journals.
Incidentally, post-publication review also limits the power of obstructive reviewers. In the current journal system, these reviewers can block a paper from being published at all. But under the post-publication model, they can only leave a negative public review (the merits of which other readers may judge for themselves).
Second, we will need to fall back on a much older conception of the academic journal – not as a venue for finished research products, but as a forum for scientists to talk to each other. These forums could be implemented as separate community-run "channels" on central repositories (distinct from overlay journals, which involve editorial oversight). Each would ideally be quite niche – formed by a community of scientists as a venue for discussing a single topic, or even a single hypothesis. This would help keep the flood of new papers manageable.
Finally, we need a way to break the link between the visibility of research and the ability to grab attention. Quality metrics derived from post-publication review would help: positively reviewed papers would float to the top of their respective forums (and those with rave reviews could be escalated to a more generalist channel – replicating the function of journals such as Science and Nature).
But authors would still have to hustle to get any reviews in the first place (a situation familiar to any Amazon seller or YouTube creator). To solve this problem, every new paper should be sent to random forum members for review. To retain posting privileges, forum members would have to review a small number of submissions every few months. These "reviews" could be as simple as a thumbs up, signalling to other community members that a paper is worth their time. These mandatory reviews would provide crucial visibility to those least able or willing to play the attention game.
I am not claiming that this is a perfect system – there will inevitably be problems I've not thought of. But the question we should ask of any new publishing model is not "does it have flaws?" but rather "are the consequences of those flaws worse than those of the system we already have?". As Mastroianni so persuasively showed, this is a much lower bar than many people realise.
Robert de Vries is senior lecturer in quantitative sociology at the University of Kent.