
Knowledge and motivations of training in peer review: An international cross-sectional survey

  • Jessie V. Willis,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Writing – original draft, Writing – review & editing

    jessievwillis@gmail.com

    Affiliations Centre for Journalology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada, Department of Medicine, Faculty of Medicine, University of Ottawa, Ottawa, Canada

  • Janina Ramos,

    Roles Data curation, Formal analysis, Investigation, Methodology, Writing – review & editing

    Affiliations Centre for Journalology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada, Department of Biology, Faculty of Science, University of Ottawa, Ottawa, Canada

  • Kelly D. Cobey,

    Roles Conceptualization, Methodology, Supervision, Writing – review & editing

    Affiliations University of Ottawa Heart Institute, Ottawa, Canada, School of Epidemiology and Public Health, Faculty of Medicine, University of Ottawa, Ottawa, Canada

  • Jeremy Y. Ng,

    Roles Investigation, Methodology, Writing – review & editing

    Affiliation Centre for Journalology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada

  • Hassan Khan,

    Roles Formal analysis, Writing – review & editing

    Affiliations Centre for Journalology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada, School of Epidemiology and Public Health, Faculty of Medicine, University of Ottawa, Ottawa, Canada

  • Marc A. Albert,

    Roles Formal analysis, Writing – review & editing

    Affiliations Telfer School of Management, University of Ottawa, Ottawa, Canada, Blueprint Translational Research Group, Ottawa Hospital Research Institute, Ottawa, Canada

  • Mohsen Alayche,

    Roles Formal analysis, Writing – review & editing

    Affiliations Centre for Journalology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada, Department of Medicine, Faculty of Medicine, University of Ottawa, Ottawa, Canada

  • David Moher

    Roles Conceptualization, Supervision, Writing – review & editing

    Affiliations Centre for Journalology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada, School of Epidemiology and Public Health, Faculty of Medicine, University of Ottawa, Ottawa, Canada

Abstract

Background

Despite having a crucial role in scholarly publishing, peer reviewers are typically not required to have any training. The purpose of this study was to conduct an international survey on the current perceptions and motivations of researchers regarding peer review training.

Methods

A cross-sectional online survey of biomedical researchers was conducted. A total of 2000 corresponding authors from 100 randomly selected medical journals were invited via email. Quantitative items were reported using frequencies and percentages or means and standard errors (SE), as appropriate. A thematic content analysis was conducted for qualitative items, in which two researchers independently assigned codes to the responses for each written-text question and subsequently grouped the codes into themes. A descriptive definition of each category was then created, and the unique themes, as well as the number and frequency of codes within each theme, were reported.

Results

A total of 186 participants completed the survey, of which 14 were excluded. The majority of participants indicated they were men (n = 97 of 170, 57.1%), independent researchers (n = 108 of 172, 62.8%), and primarily affiliated with an academic organization (n = 103 of 170, 60.6%). A total of 144 of 171 participants (84.2%) indicated they had never received formal training in peer review. Most participants (n = 128, 75.7%) agreed, 41 (32.0%) of them strongly, that peer reviewers should receive formal training in peer review prior to acting as a peer reviewer. The most preferred training formats were online courses, online lectures, and online modules. Most respondents (n = 111 of 147, 75.5%) stated that difficulty finding and/or accessing training was a barrier to completing training in peer review.

Conclusion

Although such training is desired, most biomedical researchers have not received formal training in peer review and indicated that training was difficult to access or not available.

Introduction

Peer review is the predominant quality control measure for scientific publishing regardless of country or discipline [1–3]. Peer review refers to the process by which “peers” are selected to assess the validity and quality of submitted manuscripts for publication [4]. Responsibilities of peer reviewers typically include providing constructive feedback to the authors of the manuscript and sometimes recommendations to journal editors [5, 6].

Despite its foothold in scholarly publishing, peer review is not a standardized process and lacks uniform guidelines [7–10]. Different scholarly publishers have different requirements and responsibilities for their peer reviewers, and peer review data are not always made public [11]. Some publishers provide guidelines and training for their peer review process; however, a 2012 study found that only 35% of selected journals provided online instructions for their peer reviewers [12, 13].

It is therefore understandable that many potential peer reviewers feel inadequately trained to peer review. This is especially true for early career researchers: a recent survey showed that 60% of those under 36 years of age felt there was a lack of guidance on how to review papers [14]. Additional studies have shown that training is highly desired by academics [15–17]. In a 2018 survey by Publons, 88% of respondents felt training would have a positive impact on the efficacy of peer review. Despite this, 39% of respondents had never received training and 35.8% had self-trained by reading academic literature. Most respondents believed that training should be provided by scholarly publishers or journals, and 45.5% believed that it should be a practical online course [18].

Unfortunately, the effectiveness of peer review training has been studied only via small-scale studies of non-online methods (e.g., workshops), with limited evidence of any benefit [19–22]. Our group was unable to identify any randomized controlled trials examining how the electronic delivery of peer review guidelines has impacted the knowledge of potential peer reviewers.

In the present study we conducted a large-scale, online survey to provide an up-to-date picture of international biomedical researchers’ views on peer review training. We focused on biomedical researchers because this is our content area, and because the needs and perspectives of researchers regarding peer review may differ by discipline.

Methods

Transparency statement

Ethics approval was obtained from the Ottawa Health Science Network Research Ethics Board (OHSN-REB Protocol Number 20220237-01H). Participants were provided with a consent form prior to entering the survey and consent was presumed if they completed the survey. The study protocol was registered on the Open Science Framework (OSF) prior to data analysis (https://osf.io/wgxc2/) [23]. Text for this manuscript was drawn directly from the registered protocol on OSF. Anonymized study data and all analytical code were shared publicly via the OSF, and study findings were reported in a preprint and an open access publication.

Study design

We conducted a cross-sectional online survey of biomedical researchers. The CHERRIES reporting guidelines were used to inform the reporting of our findings [24].

Participant sampling framework.

We identified a random sample of international biomedical researchers who were actively publishing in peer-reviewed medical journals. We used the Scopus source list to randomly select 100 biomedical journals. The Scopus list was restricted to journals with an All Science Journal Classification (ASJC) code of ‘Medicine’ that were listed as ‘active’ at the time of searching (November 2021). We excluded journals that indicated they only published articles in a language other than English. Using the RAND function in Excel, we then randomly selected 100 journals from this list. Subsequently, we visited each of the randomly selected journal websites and extracted the corresponding authors from the last 20 published research articles. Corresponding author email extraction was completed on December 9, 2021. In instances where the journal was not open access and we did not have access via our academic institution, we replaced the journal with another randomly selected journal. We also replaced any journals with non-functioning links. A total of 26 journals were replaced for not publishing in English (n = 7), not being active after 2020 (n = 8), having a broken link (n = 4), not being open access (n = 3), or listing no corresponding author emails (n = 4). We have used this broad approach to sampling successfully in previous research [25]. This approach enabled us to identify a population of 2000 randomly selected researchers to invite to our survey.
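For readers who want to reproduce a similar sampling pipeline, the sketch below mirrors the selection logic in Python rather than the Excel RAND workflow the authors describe; the file name, column names, and seed are hypothetical illustrations, not study artifacts.

```python
import pandas as pd

# Load the Scopus source list (file and column names are hypothetical).
journals = pd.read_csv("scopus_source_list.csv")

# Apply the eligibility criteria paraphrased from the methods: ASJC
# code 'Medicine', active status, and not restricted to a language
# other than English.
eligible = journals[
    (journals["asjc_field"] == "Medicine")
    & (journals["status"] == "Active")
    & (journals["language"].str.contains("English", na=False))
]

# Randomly select 100 journals (the study used Excel's RAND function;
# DataFrame.sample is the equivalent step here).
sample = eligible.sample(n=100, random_state=2021)

# Extracting 20 corresponding authors per journal then yields the
# population of 2000 invitees.
print(sample["journal_title"].tolist()[:5])
```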

Survey.

The survey was purposefully built for this study and was administered using SurveyMonkey software (https://www.surveymonkey.ca/r/7B2JYR6). This was a closed survey and thus available only to participants invited via our sampling framework. We emailed the identified sample population a recruitment script with a link to the survey. Participation in the survey was voluntary and all data were completely anonymized. The survey was sent on May 23, 2022. We sent all participants reminder emails one week (May 30, 2022) and two weeks (June 6, 2022) after the original invitation. The survey was closed after three weeks (June 13, 2022).

The survey contained 37 questions: questions 1–10 were demographic questions about the participant, 11–15 concerned level of experience with peer review, 16–23 were opinion-based questions about peer review training, 24–33 were for respondents with experience running peer review from a journal perspective, and 34–37 were open-ended questions with comment boxes. Thirty-three of the questions were quantitative and four were qualitative. The survey questions were presented in blocks based on content and question type. The survey used adaptive questioning, in which certain questions appeared based on the participant’s previous responses; a toy illustration of this branching logic is sketched below. The full list of survey questions can be found in S1 File.
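A minimal sketch of adaptive questioning, assuming simple yes/no gates; the actual branching was handled by SurveyMonkey, and the question wording here is paraphrased from the survey structure, not verbatim.

```python
# Toy illustration of adaptive (branching) survey logic: follow-up
# questions appear only when a gating answer makes them relevant.

def run_survey() -> dict:
    answers = {}
    answers["received_training"] = input(
        "Have you ever received formal training in peer review? (y/n) ")
    if answers["received_training"].lower() == "y":
        # Follow-up shown only to respondents who reported training.
        answers["training_format"] = input(
            "What format was the training delivered in? ")
    answers["journal_role"] = input(
        "Do you work or volunteer for a peer-reviewed journal? (y/n) ")
    if answers["journal_role"].lower() == "y":
        # Journal-specific block (questions 24-33 in the survey).
        answers["journal_requires_training"] = input(
            "Does the journal require peer review training? (y/n) ")
    return answers

if __name__ == "__main__":
    print(run_survey())
```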

The survey was created in SurveyMonkey by two authors (JVW, JR). All survey questions were reviewed and piloted by four researchers (HK, JYN, KDC, DM) and two invited researchers outside of the author list. Based on pilot testing, the average time to complete the survey was estimated at 15 minutes. All questions were optional and could be skipped. We offered participants the option to provide their email address to be entered into a draw to win one of three $100 Amazon gift cards. Email addresses were stored separately from the study data.

Data analysis

We used SPSS Statistics and Microsoft Excel for data analysis. We reported the overall response rate based on the number of individuals from the identified sample that completed our survey, as well as the survey completion rate (i.e., the proportion of people who viewed our survey that completed it). We excluded participants from data analysis if they did not complete 80% or more of the survey. We reported quantitative items using frequencies and percentages or means and standard errors (SE), as appropriate. For qualitative items, we conducted a thematic content analysis of responses in Excel. For this, two researchers (JR, MAA) independently assigned codes to the responses for each written-text question. Codes were then discussed and iteratively updated until the two researchers reached consensus on the codes that best reflected the data. Following this, individual codes were independently grouped into themes by the two reviewers and finalized by consensus. We then created a descriptive definition of each category. We reported the number of unique themes and the number and frequency of codes within each theme.
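The quantitative steps are straightforward to express in code. The sketch below assumes a flat per-respondent export with one column per question; the file and column names are hypothetical, and this is an illustration of the reported procedure rather than the authors' actual SPSS/Excel workflow.

```python
import pandas as pd

# Hypothetical flat export: one row per respondent, one column per
# question, NaN where a question was skipped.
responses = pd.read_csv("survey_export.csv")

N_INVITED = 2000
response_rate = len(responses) / N_INVITED  # overall response rate

# Exclude respondents who answered fewer than 80% of the questions.
answered_fraction = responses.notna().mean(axis=1)
included = responses[answered_fraction >= 0.80]

# Frequencies and percentages for a categorical item
# (column name is hypothetical).
counts = included["career_stage"].value_counts()
summary = pd.DataFrame({"n": counts,
                        "%": (counts / counts.sum() * 100).round(1)})
print(f"Response rate: {response_rate:.1%}")
print(summary)
```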

Results

Protocol amendments

Survey roll-out was shortened from four weeks to three weeks due to time constraints. Minor revisions were made to the survey questions, the recruitment and reminder emails, and the consent form.

Participants

Demographics.

A total of 186 of the 2000 invited researchers (9.3%) completed the survey. There were 107 (5.4%) instances in which the email could not be delivered and 32 (1.6%) instances in which the invitee indicated, including via auto-reply, that they could not be reached or could not participate. As these accounted for less than 10% of invited participants, no changes were made to the recruitment strategy. A flowchart detailing these instances can be found in S1 Table.

The average completion rate was 92% and the survey took, on average, 13 minutes to complete. Fourteen responses were excluded because fewer than 80% of questions were answered, leaving a final included total of 172. A total of 97 of 170 respondents (57.1%) identified as men. The survey received responses from 48 different countries, with the greatest representation from the United States (n = 41, 24.0%), the United Kingdom (n = 13, 7.6%), and India (n = 13, 7.6%). The majority of respondents identified as independent researchers, defined as assistant, associate, or full professors (n = 108 of 172, 62.8%), were primarily affiliated with an academic organization (n = 103 of 170, 60.6%), and had published more than 21 peer-reviewed articles (n = 106 of 172, 61.6%). Full demographics are described in Table 1.

Experience with peer review and peer review training.

In total, 144 of 171 participants (84.2%) had never received formal training in peer review. The majority answered that their primary institution did not offer peer review training (n = 108, 63.2%) or that they did not know of any training offered (n = 48, 28.1%). For those who had received peer review training (n = 27), the most common training formats were in-person lectures (n = 12, 44.4%), online lectures (n = 10, 37.0%), or online courses of at least 6 sessions (n = 10, 37.0%). Most of the training received was provided by an academic organization (n = 18, 66.7%). Less than half (40.7%) of these participants indicated the training had been completed over 5 years ago.

For their first time performing peer review, 88 of 166 participants (53.0%) felt either very unprepared (10.8%), unprepared (24.1%), or slightly unprepared (18.1%). Highlighted responses about peer review and peer review training are shown in Table 2. A complete table of responses can be found in S2 Table.

Table 2. Experience with peer review and peer review training.

https://doi.org/10.1371/journal.pone.0287660.t002

Opinion-based questions

General statements on peer review and peer review training.

Participants rated their agreement with statements related to peer review and peer review training on a 7-point scale from strongly disagree to strongly agree. A graph of the responses is depicted in Fig 1.

Fig 1. Participant agreement with statements based on overall experiences with peer review in the last 12 months.

https://doi.org/10.1371/journal.pone.0287660.g001

Notable findings included that 148 respondents (86.5%) either strongly agreed or agreed that peer review is important for ensuring the quality and integrity of scholarly communication. One hundred and sixteen (69.5%) strongly agreed or agreed that their experience as a peer reviewer had been positive. Seventy-six (45.2%) strongly agreed or agreed that there is a lack of knowledge and understanding of how to properly conduct peer review. Ninety-nine (58.9%) strongly agreed or agreed that peer reviewers should receive formal training in peer review prior to acting as a peer reviewer for journals. Eighty-six (50.9%) strongly disagreed or disagreed that there were appropriate incentives in place to motivate them to engage in peer review.

Desired training topics, organizations, and formats.

These questions required participants to rank their responses in order from most to least preferred. Based on average rank position, a score was given to each response (with the maximum score determined by the number of ranked items). A higher score corresponds to a response being ranked more highly, i.e., being more preferred. A worked sketch of this scoring scheme is shown below. A graph of the response scores can be found in Fig 2.
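The paper does not state the exact scoring formula, so the sketch below assumes a SurveyMonkey-style weighted average: an item ranked i-th of N options earns N - i + 1 points, and its score is the mean over respondents. This is an assumption, not the authors' confirmed method, and the items and rankings are illustrative rather than study data.

```python
# Weighted-average ranking score (assumed scheme; the paper does not
# state the exact formula). Each respondent ranks N items; rank 1
# earns N points, rank 2 earns N - 1 points, ..., rank N earns 1.

rankings = [  # one dict per respondent: item -> rank position
    {"study design": 1, "research question": 2, "statistics": 3},
    {"study design": 1, "statistics": 2, "research question": 3},
    {"research question": 1, "study design": 2, "statistics": 3},
]

n_items = 3
scores: dict[str, float] = {}
for item in rankings[0]:
    points = [n_items - r[item] + 1 for r in rankings]
    scores[item] = sum(points) / len(points)

# Higher score = ranked more highly on average = more preferred.
for item, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{item}: {score:.2f}")
```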

Fig 2. Ranking of preferred topics, training formats, funding providers, and creating organizations.

Scores were calculated from average rank placement. A higher score indicates a more preferred, more highly ranked response.

https://doi.org/10.1371/journal.pone.0287660.g002

The topics that participants were most interested in were appraisal of study design and methodology, appraisal of the research question, and appraisal of statistics. The most desired training formats were all online, including online courses (of at least 6 sessions) and online lectures. Academic institutions and scholarly publishers/journals were ranked similarly as the preferred organizations to develop peer review training. Scholarly publishers were additionally ranked as the preferred funders.

Journal-specific questions.

Participants were only able to answer these questions if they indicated they worked or volunteered for a journal that publishes peer-reviewed articles. A total of 80 respondents were included in this section.

In total, 55 of 80 participants (68.8%) indicated that the journal they were affiliated with did not explicitly require peer review training for its reviewers. Eight (10.0%) indicated that training was required and provided internally by the journal, while two (2.5%) indicated that it was required but delivered externally. The most common formats of required training were online courses and/or lectures. Required training length varied from 1–5 hours to more than 20 hours.

Only 10 of 80 participants (12.5%) indicated that the journal assessed the peer review reports of new reviewers; the majority (n = 51, 63.8%) indicated they were unsure or did not know. Twenty-one (26.6%) indicated that reporting guidelines were provided to reviewers as part of the peer review assessment process.

Qualitative questions.

There were a total of 503 comment responses to the four open-ended questions. Suggestions on how to improve peer review training included clearer standards and expectations (n = 36), improved incentives (n = 32), increased feedback and oversight (n = 28), and improved guidance (n = 19). Barriers to engaging in peer review training centred on difficulty finding or accessing training, including a lack of time or the cost of the training offered (n = 111). Desired incentives included recognition (n = 39) and financial incentives (n = 35). Please see Table 3 for a list of themes and definitions. In the final question, which asked for any further comments, respondents typically reiterated topics already raised, such as suggestions on how to improve the process (n = 17), suggestions for the implementation of training (n = 16), and the need for incentives (n = 10). The full thematic content analysis can be found in S2 File.

Discussion

One hundred and eighty-six respondents completed our international survey on peer review training. Among respondents, the vast majority indicated they had never received formal training in peer review. A lack of training could therefore explain the less-than-optimal reporting quality of biomedical research [26] and the inability of reviewers to detect major errors and deficiencies in publications [17].

Limitations of our study included the lower-than-anticipated response rate. However, this is not out of line with other online surveys, particularly those conducted during the COVID-19 pandemic. In addition, most respondents to our study were well-established researchers, which was likely a result of our sampling method. Whether non-responders and early career researchers would respond similarly is therefore unknown.

A majority of respondents either strongly agreed or agreed that peer reviewers should receive formal training. They also indicated a preference to receive such training online, such as through an online course or lecture. This differs from our recently conducted systematic review, which revealed that few online training options exist and that, of those that do, most consist of a brief one-hour lecture [27]. There appears to be a need to resolve this disconnect between what peer reviewers want and what is available.

In addition, most respondents indicated difficulty in finding and accessing training in peer review, including not being able to find the time to justify engaging in training. Furthermore, most participants indicated that their primary institution, or the journal at which they worked, did not require or provide training in peer review. Despite this, respondents indicated that scholarly publishers, journals, or universities are best positioned to provide this training. Given that academic institutions are training the next generation of researchers, it is surprising that a researcher’s training does not include such a fundamental part of research. Universities would likely have the expertise and resources to provide such training, given that this is often where editors and editorial boards reside. As for journals and scholarly publishers, they may be less inclined to require training given the existing challenges of finding peer reviewers. While lowering this barrier is potentially important, it needs to be balanced against the potential risk of an unhelpful review and/or missed flaws.

Even if training were available, there may not be enough incentive for peer reviewers, as many respondents to the survey indicated a lack of time or personal benefit. More than a third of respondents reported that recognition for undergoing training or for peer reviewing would incentivize them to obtain additional training in peer review. Discussion surrounding incentivizing peer review is part of a broader discourse on moving away from the metric of publications towards a more societal perspective that rewards behaviours strengthening research integrity [28]. The Declaration on Research Assessment (DORA), the Coalition for Advancing Research Assessment (CoARA), and others are now advocating for formally including such activities as part of research(er) assessment [29, 30].

Scholarly peer review plays a crucial role in ensuring the quality of research manuscripts before they are published. Training peer reviewers could enhance reviewer expertise, establish guidelines and standards, and improve review quality. In addition, training could cover important topics such as mitigating bias and publication fraud. We implore stakeholders in peer review to focus future efforts on creating an open and accessible training program in peer review.

Supporting information

S1 File. Complete list of survey questions delivered through SurveyMonkey.

https://doi.org/10.1371/journal.pone.0287660.s001

(PDF)

S2 File. Full thematic content analysis data.

https://doi.org/10.1371/journal.pone.0287660.s002

(XLSX)

S1 Table. Flowchart of instances of failure of email delivery to intended survey participants.

https://doi.org/10.1371/journal.pone.0287660.s003

(PDF)

References

  1. Tennant JP, Dugan JM, Graziotin D, Jacques DC, Waldner F, Mietchen D, et al. A multi-disciplinary perspective on emergent and future innovations in peer review. F1000Research. 2017;6:1151. pmid:29188015
  2. Burnham JC. The evolution of editorial peer review. JAMA. 1990;263(10):1323. pmid:2406470
  3. Nicholas D, Watkinson A, Jamali HR, Herman E, Tenopir C, Volentine R, et al. Peer review: still king in the digital age. Learned Publishing. 2015;28(1):15–21.
  4. Rowland F. The peer-review process. Learned Publishing. 2002;15(4):247–58.
  5. Glonti K, Cauchi D, Cobo E, Boutron I, Moher D, Hren D. A scoping review on the roles and tasks of peer reviewers in the manuscript review process in biomedical journals. BMC Medicine. 2019;17(1). pmid:31217033
  6. Glonti K, Boutron I, Moher D, Hren D. Journal editors’ perspectives on the roles and tasks of peer reviewers in biomedical journals: a qualitative study. BMJ Open. 2019;9(11). pmid:31767597
  7. Kelly J, Sadeghieh T, Adeli K. Peer review in scientific publications: benefits, critiques, & a survival guide. EJIFCC. 2014;25(3):227–243.
  8. Smith R. Peer review: a flawed process at the heart of science and journals. Journal of the Royal Society of Medicine. 2006;99(4):178–182. pmid:16574968
  9. Horbach SP, Halffman W. The changing forms and expectations of peer review. Research Integrity and Peer Review. 2018;3(1).
  10. Superchi C, González JA, Solà I, Cobo E, Hren D, Boutron I. Tools used to assess the quality of peer review reports: a methodological systematic review. BMC Medical Research Methodology. 2019;19(1). pmid:30841850
  11. Squazzoni F, Grimaldo F, Marušić A. Publishing: journals could share peer-review data. Nature. 2017;546(7658):352. pmid:28617464
  12. Song E, Ang L, Park J-Y, Jun E-Y, Kim KH, Jun J, et al. A scoping review on biomedical journal peer review guides for reviewers. PLOS ONE. 2021;16(5). pmid:34014958
  13. Hirst A, Altman DG. Are peer reviewers encouraged to use reporting guidelines? A survey of 116 health research journals. PLoS ONE. 2012;7(4). pmid:22558178
  14. Mulligan A, Hall L, Raphael E. Peer review in a changing world: an international study measuring the attitudes of researchers. Journal of the American Society for Information Science and Technology. 2012;64(1):132–61.
  15. Ho RC-M, Mak K-K, Tao R, Lu Y, Day JR, Pan F. Views on the peer review system of biomedical journals: an online survey of academics from high-ranking universities. BMC Medical Research Methodology. 2013;13(1). pmid:23758823
  16. Benos DJ, Bashari E, Chaves JM, Gaggar A, Kapoor N, LaFrance M, et al. The ups and downs of peer review. Adv Physiol Educ. 2007;31(2):145–52. pmid:17562902
  17. Patel J. Why training and specialization is needed for peer review: a case study of peer review for randomized controlled trials. BMC Medicine. 2014;12(1). pmid:25285376
  18. Publons. Publons’ Global State of Peer Review 2018. 2018. Available from: https://publons.com/static/Publons-Global-State-Of-Peer-Review-2018.pdf
  19. Bruce R, Chauvin A, Trinquart L, Ravaud P, Boutron I. Impact of interventions to improve the quality of peer review of biomedical journals: a systematic review and meta-analysis. BMC Medicine. 2016;14(1). pmid:27287500
  20. Schroter S, Black N, Evans S, et al. Effects of training on quality of peer review: randomised controlled trial. BMJ. 2004;328(7441):673. pmid:14996698
  21. Callaham ML, Wears RL, Waeckerle JF. Effect of attendance at a training session on peer reviewer quality and performance. Ann Emerg Med. 1998;32:318–322.
  22. Galipeau J, Moher D, Campbell C, Hendry P, Cameron DW, Palepu A, et al. A systematic review highlights a knowledge gap regarding the effectiveness of health-related training programs in journalology. Journal of Clinical Epidemiology. 2015;68(3):257–65. pmid:25510373
  23. Willis JV, Ramos J, Cobey KD, et al. Knowledge and motivations of training in peer review: a protocol for an international cross-sectional survey. Epub ahead of print September 3, 2022.
  24. Eysenbach G. Improving the quality of web surveys: the Checklist for Reporting Results of Internet E-Surveys (CHERRIES). J Med Internet Res. 2004;6(3):e34. pmid:15471760
  25. Cobey KD, Monfaredi Z, Poole E, Proulx L, Fergusson D, Moher D. Editors-in-chief perceptions of patients as (co)authors on publications and the acceptability of ICMJE authorship criteria: a cross-sectional survey. Res Involv Engagem. 2021;7(1):39. pmid:34127081
  26. Kleinert S, Horton R. How should medical science change? Lancet. 2014;383(9913):197–8. pmid:24411649
  27. Willis JV, Cobey KD, Ramos J. Online training in manuscript peer review: a systematic review. 2022.
  28. Moher D, Bouter L, Kleinert S, Glasziou P, Sham MH, Barbour V, et al. The Hong Kong Principles for assessing researchers: fostering research integrity. PLoS Biol. 2020;18(7):e3000737. pmid:32673304
  29. American Society for Cell Biology. Declaration on Research Assessment (DORA) [Internet]. Available from: http://www.ascb.org/dora/
  30. Coalition for Advancing Research Assessment (CoARA) [Internet]. Available from: https://coara.eu/