How Digital Technologies Can Improve Scientific Research: The Case of Peer Review
Academics and researchers disseminate their findings to other scholars through publication. Publishing in academic journals has been the principal means of distributing research for at least 300 years. Since the dawn of the digital age, the internet has been used to improve on a model of scientific publishing and communication that has been in place for centuries.
Visible progress has been made – researchers are no longer bound by the limits of geography or the contents of their local library – but is the potential being truly maximised?
One does not have to be a pessimist to conclude that the answer to the above is no. The majority of scientific papers are still published and shared through relatively traditional journals, while the substantial potential of social media, open access (OA) and decentralised databases remains underexplored. Most researchers have dealt with the frustration of repeatedly bumping into paywalls and obscure publishing policies.
Peer review processes are often inefficient, opaque and poorly incentivised. And, as a byproduct of the ‘publish or perish’ mentality, some researchers may even be moulding their results to fit a ‘promising’ narrative in order to secure publication in one of the ‘high impact’ or big-name journals. This ‘packaging’ of scientific findings and conformity to existing paradigms undermines reliable scientific discourse and stifles true innovation – a clear and urgent problem.
Additionally, the staggeringly profitable scientific publishing market is dominated by a handful of power players. Companies such as Wiley and Elsevier, in particular, have a vice-like grip on the industry. This oligopoly structure means they have the power to define the rules of the game to their advantage, while not properly compensating scientists for their work and limiting access to those not in a major research lab or university.
The problems don’t end there, however, despite bright spots such as EU guidelines mandating that results from publicly funded EU research grants be published in an OA journal. Groups such as the European University Association (EUA) have commented that the state of scientific publishing has ‘worsened’ over the past 20 years and that current business models present a number of ‘concerning’ issues. They point to problems such as a lack of transparency in pricing, calls for open access having no positive effect on pricing, asymmetry in negotiating power and a trend towards vendor lock-in.
“Groups such as the European University Association (EUA) have commented that the state of scientific publishing has ‘worsened’ over the past 20 years and that current business models present a number of ‘concerning’ issues”
Ironically, this is during the same timeframe that the internet rose to prominence – a time when the idealism of digital technology promised us freedom, empowerment and connection. As it turns out, the internet hasn’t, so far, been the democratising revolution that many, perhaps naively, assumed – at least where scientific publishing is concerned.
Peer Review Issues
There are myriad problems with scientific publishing, but perhaps most critical are the issues surrounding peer review. Peer review, or scientific refereeing, is the foundation of the academic process – the litmus test for ideas. In general, the peer review of a manuscript involves recruiting three or more referees to undertake a thorough assessment of a manuscript and provide written feedback for the authors, to which they can respond with additional experimentation, argumentation or revision to the text.
Scientists must be able to trust this system. If they see that something is peer-reviewed, it should be a stamp of trust. But as it stands, peer review systems have issues, and many argue that they’re fundamentally broken. An article in Nature Microbiology contests this claim and calls it a ‘gross overstatement’, but concedes that “strains and stresses clearly need to be addressed”. While the severity of the problems is debatable, whether they exist is not.
Assessing the quality of scientific work is a difficult task, even for trained researchers, and especially for innovative or pioneering studies. One of the key problems with peer review is that not only is it a slow process, but it is also often inconsistent, unreliable, biased and done by poorly incentivised, busy people. Reviewers are generally themselves scientists or researchers who could be spending their time doing original research.
Although some argue they should be, reviewers are not paid, and their names are often not associated with their reviews, so there is little incentive to do the work properly – or at all. Anonymity can also allow some reviewers to be unnecessarily combative, hypercritical and even biased. In its current form, peer review offers few incentives for thorough and impartial efforts.
“One of the key problems with peer review is that not only is it a slow process, but it is also often inconsistent, unreliable, biased and done by poorly incentivised, busy people. Reviewers are generally themselves scientists or researchers who could be spending their time doing original research”
As a consequence, there aren’t always enough qualified reviewers willing to do the work. This creates a tension for researchers who want to get important findings out the door, but who are also hesitant to publish without significant feedback owing to the risk of publishing inaccurate results. Such delays can do real harm: during the Zika crisis, for example, sponsors of research had to persuade publishers to declare that scientists would not be penalised for releasing their findings early.
There is a pressing need for new models of publishing original research. Part of this will have to come about as an internal culture change within scientific communities, but it can be aided by a shift in how publishers operate and the tools available to researchers. One encouraging sign is the ongoing debate surrounding the problems of peer review and that new approaches are being tested.
One of the more radical solutions proposed is post-publication peer review, which, through both formal and informal means, provides new avenues for critical discussion and constructive feedback on work already published. In another example, the United States National Academy of Sciences now, in some cases, publishes the names of referees and their reviews alongside the research.
Nature is trying out an accreditation approach that, with reviewer permission, will allow authors to acknowledge and thank reviewers of their study by name at the end of the paper. OA journal eLife has taken a different angle and altered its manuscript-review process altogether. The new process takes the consensus view on the manuscript after an initial exchange between reviewers, thus minimising excessive revision and additional experiments prior to acceptance.
“One of the more radical solutions proposed is post-publication peer review, which, through both formal and informal means, provides new avenues for critical discussion and constructive feedback on work already published”
Or take interactive publishing platform Apograf, which is building a DLT-powered mechanism to reward researchers for published papers. Apograf also integrates with the ORCID registry, which improves the recognition and discoverability of research outputs. ORCID identifiers are interoperable (they work with many institutions, funders and publishers) and persistent, as they can be used throughout a research career.
Lastly, there is F1000Research, which publishes manuscripts within a few days of submission after a rapid editorial screening, followed by open peer review. Signed referee reviews and author responses are published with each article.
The scientific-publishing landscape has changed significantly in the past 15 years or so, including the advent of digital journals and open access. OA, in particular, has made inroads towards making scholarly research literature freely accessible online. But although there have been many positive developments, the progress so far can best be described as a good first step with a lot of ground left to cover. So, what needs to be done?
Putting research papers on preprint archives as soon as the papers are ready is a worthy move. These archives not only provide a home for results that might otherwise go unpublished, including negative ones, but also invite invaluable reviews and comments from other researchers. There needs to be further trialling of new methodology in peer review and greater acceptance of innovative practices that are working.
There also needs to be greater adoption and support for platforms or tools that are challenging the status quo of scientific publishing. It’s imperative that an open innovation ecosystem is established as an alternative to the control that a few commercial entities maintain over markets for research information, academic-reputation systems and research technology generally.
“There needs to be further trialling of new methodology in peer review and greater acceptance of innovative practices that are working. There also needs to be greater adoption and support for platforms or tools that are challenging the status quo of scientific publishing”
BioMed Central and Digital Science recently published a report entitled What might peer review look like in 2030?, which leaves us with some useful recommendations. Firstly, find and invent new ways of identifying, verifying and inviting peer reviewers, focusing on closely matching expertise with the research being reviewed to increase uptake. Secondly, encourage more diversity in the reviewer pool, including early-career researchers and researchers from different regions. Finally, experiment with different and new models of peer review, particularly those that increase transparency.
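The first recommendation – matching reviewer expertise to the manuscript under review – can be automated in its simplest form. The sketch below is a hypothetical illustration, not a system described in the report: it ranks candidate reviewers by the keyword overlap (Jaccard similarity) between a manuscript and each reviewer’s own publications.

```python
def jaccard(a, b):
    """Overlap between two keyword sets (0 = disjoint, 1 = identical)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_reviewers(manuscript_keywords, reviewer_profiles, top_n=3):
    """Rank candidate reviewers by keyword overlap with a manuscript.

    reviewer_profiles maps a reviewer ID to the set of keywords drawn
    from that reviewer's own publications.
    """
    scored = [
        (reviewer, jaccard(manuscript_keywords, keywords))
        for reviewer, keywords in reviewer_profiles.items()
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_n]
```

Real matching systems use richer signals (citation graphs, full-text embeddings, conflict-of-interest checks), but even this crude overlap score illustrates how expertise matching could scale beyond an editor’s personal network.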
Peer review is, of course, just one piece of the puzzle. But with each improvement, we move closer to a system that allows free, rigorous and open sharing of knowledge and greater institutional and public ownership of that infrastructure.
The internet itself has had a problematic effect on scientific publishing, but there are solutions on the horizon that could soothe these teething pains. Blockchain technology, for one, seems a natural fit for science, particularly in the area of peer review. An immutable log of records could go a long way to solving disputes over who made a new version of a scientific paper, and when. Disputes over authorship are uncommon but the principle is sound.
Equally, scientific transparency is something researchers are keen to promote. On the blockchain, reviewers and authors will be unknown to each other, but their activities and reviews will be open and logged for anyone to examine. Reviewers can be paid in tokens for their time which, when combined with the open nature of their work, should encourage transparency and accuracy. There are other areas in which decentralisation can influence scientific publishing too, and it seems only a matter of time before it does.
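The core idea – an append-only, tamper-evident log of manuscript versions and reviews – can be sketched without any particular blockchain platform. The following is a minimal illustration, not the implementation of any real service: each entry records the hash of the previous entry, so altering any past record breaks every hash after it and the tampering is detectable.

```python
import hashlib
import json
import time

def entry_hash(entry):
    """Deterministic SHA-256 hash of a log entry."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

class ReviewLog:
    """Append-only, hash-chained log of paper versions and reviews."""

    def __init__(self):
        self.entries = []

    def append(self, author_id, kind, content_digest, timestamp=None):
        # Chain each entry to its predecessor's hash.
        prev = entry_hash(self.entries[-1]) if self.entries else "0" * 64
        entry = {
            "prev_hash": prev,
            "author": author_id,        # pseudonymous ID, not a real name
            "kind": kind,               # e.g. "version" or "review"
            "content": content_digest,  # hash of the manuscript or review text
            "time": timestamp if timestamp is not None else time.time(),
        }
        self.entries.append(entry)
        return entry

    def verify(self):
        """Check that every entry's prev_hash matches its predecessor."""
        prev = "0" * 64
        for entry in self.entries:
            if entry["prev_hash"] != prev:
                return False
            prev = entry_hash(entry)
        return True
```

On a public blockchain, the same chaining is enforced by the network rather than a single operator, which is what makes the log trustworthy for settling who submitted which version, and when.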
Illustrations by Kseniya Forbender
To contact the editor responsible for this story:
Margarita Khartanovich at [email protected]