Reader Comments

Post a new comment on this article

Open Peer Review plays a role in detecting fraud

Posted by deevybee on 21 Oct 2023 at 12:44 GMT

The authors have performed a useful service in clarifying the distinction between different kinds of open peer review, and in noting the need for more evidence on impacts. My specific interest is in Open Identities and Open Reports, where I feel their analysis has too narrow a scope. The problem is that they ignore fraud. If we recognise that there is a large and growing problem of fake articles, many of them planted in journals by paper mills that produce fraudulent material at scale, often with the collusion of editors, then the role of open peer review goes beyond the question of "whether and how they might improve publication quality and trustworthiness". Quite simply, Open Reports make it much harder for dodgy editors to pretend that a proper peer review process has taken place when it has not, and Open Identities can provide evidence that a paper was actually reviewed by a real person. Many researchers find themselves asking "How did this get past peer review?" when confronted with a seriously flawed piece of work. In some cases, the answer is that it didn't: collusion between a paper mill and a compromised editor let the paper through with fake review. For an example see Abalkina & Bishop (2022): in this case evidence from open peer review reports was invaluable in helping confirm suspicions that a set of six papers had originated from a paper mill. Where open review was combined with open reviewer identity, the evidence of fraud was particularly clear-cut. There is a whole set of questions about how one detects fake reviews, but what is beyond doubt is that if you can't see the peer review, you can't know what the basis for acceptance of a paper was. To that extent, I would argue we should move ahead with mandating at least Open Reports, without requiring further evidence.

Abalkina, A., & Bishop, D. (2022). Paper mills: A novel form of publishing malpractice affecting psychology. PsyArXiv. https://doi.org/10.31234/...

No competing interests declared.

RE: Open Peer Review plays a role in detecting fraud

tross-hellauer replied to deevybee on 22 Oct 2023 at 13:26 GMT

Dear Dorothy Bishop,

Thank you for these thought-provoking comments. You raise a crucial issue (paper mills and other work published without proper review seem to be on the rise and call for better quality control), and you are right to point out that Open Peer Review (especially Open Reports) could play a key role in demonstrating whether or not sufficient review scrutiny has occurred. Open Reports can bring several benefits to scholarly communication, with the detection of paper mills and predatory journals among the most important. We are thankful for the valuable work by you and others in spotlighting the issue of paper mills, and we are very sympathetic to your points. Indeed, some of us have made similar arguments in the past (e.g., https://www.nature.com/ar...).

We’d also like to point out that we are not calling for a complete stop to the roll-out of Open Peer Review models while we wait for the research on its pros and cons to be done. Especially in the case of Open Reports, two of us (in a scoping review of recent evidence, https://doi.org/10.31222/...) have said:

“We believe that the evidence presented here shows Open Reports to present little harm to the procedural aspects of review processes, including time spent on review and likelihood of accepting review invitations, and could hence be much more widely implemented by publishers. Even though some caution should be adopted, particularly in small research communities, the available evidence points to greater benefits than drawbacks of Open Reports.”

However, this is not to say that the research community should not also be highly vigilant about possible side-effects, and hence in our scoping review we also spotlighted the need for further study of potential negative effects of “publishing of anonymised reviews in, e.g., smaller communities (niche disciplines, or smaller communities publishing in local languages)”. If scholars face repercussions for writing critical reviews, that is a serious concern; even the fear of such repercussions could jeopardise the quality of review.

At the same time, we do see a danger that this argument in favour of Open Reports (publish review reports to show where good review is done, and where it is not) might evaporate given the current speed of development of Large Language Models. Simply put, convincing yet fake reports might soon be easily generated to accompany fake articles.

As you say, “Open Identities can provide evidence that a paper was actually reviewed by a real person”. Hence, adding reviewer names would be a further step which might guard not only against the publication of questionable content, but also against such AI-generated review reports. However, with such large unanswered questions regarding power dynamics, and clear demographic differences in who chooses to sign when given the choice, we do not currently believe we know enough about the downsides of Open Identities to be able to recommend them. Despite clear advantages like acknowledgement and the assurance that a real person did the review, we feel the potential risks of Open Identities, particularly for scholars in vulnerable positions, require more scrutiny before they can be responsibly implemented.

Sincerely,
Tony Ross-Hellauer, Lex Bouter & Serge Horbach

No competing interests declared.