
Transparency in conducting and reporting research: A survey of authors, reviewers, and editors across scholarly disciplines

  • Mario Malički ,

    Roles Conceptualization, Data curation, Formal analysis, Visualization, Writing – original draft, Writing – review & editing

    mario.malicki@mefst.hr

    Affiliation Urban Vitality Centre of Expertise, Amsterdam University of Applied Sciences, Amsterdam, The Netherlands

  • IJsbrand Jan Aalbersberg,

    Roles Conceptualization, Funding acquisition, Investigation, Methodology, Project administration, Resources, Supervision, Visualization, Writing – review & editing

    Affiliation Elsevier, Amsterdam, The Netherlands

  • Lex Bouter,

    Roles Conceptualization, Investigation, Methodology, Resources, Supervision, Writing – review & editing

    Affiliations Faculty of Humanities, Department of Philosophy, Vrije Universiteit, Amsterdam, The Netherlands, Department of Epidemiology and Data Science, Amsterdam University Medical Centers, Amsterdam, The Netherlands

  • Adrian Mulligan,

    Roles Conceptualization, Investigation, Methodology, Project administration, Software, Writing – review & editing

    Affiliation Elsevier, Oxford, United Kingdom

  • Gerben ter Riet

    Roles Conceptualization, Investigation, Methodology, Resources, Supervision, Writing – review & editing

    Affiliations Urban Vitality Centre of Expertise, Amsterdam University of Applied Sciences, Amsterdam, The Netherlands, Department of Cardiology, Amsterdam University Medical Centers, Amsterdam, The Netherlands

Correction

17 Mar 2023: The PLOS ONE Staff (2023) Correction: Transparency in conducting and reporting research: A survey of authors, reviewers, and editors across scholarly disciplines. PLOS ONE 18(3): e0283443. https://doi.org/10.1371/journal.pone.0283443 View correction

Abstract

Calls have been made for improving transparency in conducting and reporting research, improving work climates, and preventing detrimental research practices. To assess attitudes and practices regarding these topics, we sent a survey to authors, reviewers, and editors. We received 3,659 (4.9%) responses out of 74,749 delivered emails. We found no significant differences between authors’, reviewers’, and editors’ attitudes towards transparency in conducting and reporting research, or in their perceptions of work climates. Undeserved authorship was perceived by all groups as the most prevalent detrimental research practice, while fabrication, falsification, plagiarism, and not citing prior relevant research were seen as more prevalent by editors than by authors or reviewers. Overall, 20% of respondents admitted sacrificing the quality of their publications for quantity, and 14% reported that funders interfered in their study design or reporting. While survey respondents came from 126 different countries, due to the survey’s low response rate our results may not be generalizable. Nevertheless, the results indicate that greater involvement of all stakeholders is needed to align actual practices with current recommendations.

Introduction

Scholarly publishing has been steadily growing for the last two centuries, with estimates of more than 3 million articles published per year [1]. In recent times, there has been an increasing focus on the detection and prevention of misconduct and detrimental research practices, on enhancing research rigor and transparency, and on cultivating a research climate best suited to foster these goals [2–5]. Specifically, calls have been made to increase transparency in conducting and reporting research by registering projects and data analysis plans before data collection, using reporting guidelines when writing up studies, posting preprints, sharing (raw) data, reproducing or replicating studies, and rewarding, hiring, or promoting researchers based on these practices [6–9]. One such call was made in 2014, when the Transparency and Openness Promotion (TOP) Committee developed eight transparency standards related to: 1) citations, 2) data, 3) analytic methods (code), 4) research materials, 5) data and analysis reporting, 6) preregistration of studies, 7) preregistration of analysis plans, and 8) replication of research [10]. Since then, more than 5,000 journals and 80 organizations have become TOP signatories [11]. However, to the best of our knowledge, attitudes towards all aspects of the TOP guidelines have not been systematically assessed; yet without agreement from all stakeholders regarding these guidelines, it is unlikely that scholarly practices will change.

It was therefore our goal to assess differences in attitudes and perceptions between authors, reviewers, and editors regarding the TOP guidelines, differences in perceptions of their work climates, and differences in their perceived prevalence of responsible and detrimental research practices.

Materials and methods

We reported our study following the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guidelines [12], as well as the Checklist for Reporting Results of Internet E-Surveys (CHERRIES) [13].

Study design and participants

Full methodological details of our study, including its registration and study protocol, are available at our project’s data repository [14]. In short, we sent a survey to 100,000 email addresses, belonging to: 1) randomly selected corresponding authors of papers indexed in Scopus (n = 99,708); or 2) editors whose Instructions to Authors we analysed in our previous study (n = 292) [15].

Setting

The survey was sent on 24 April 2018; reminders were sent on 9 and 24 May, and the survey was closed on 12 June of the same year. The survey invitation and reminders, the full survey questions, and details of their development and testing are available on our project’s data repository [14]. Respondents could skip items and change their answers until submitting their responses by clicking the “close the survey” button. The estimated time to complete the survey was 12 minutes (based on pilot results) and was listed in the survey invitation.

Variables and their measurement

The survey was divided into 4 sections and the questions were presented in the same order to all participants:

  1. attitudes towards transparency in conducting and reporting research (11 questions covering the TOP guidelines with 5-point Likert-type answers ranging from strongly agree to strongly disagree);
  2. perceptions of work climate (13 questions with 5-point Likert-type answers ranging from strongly agree to strongly disagree);
  3. perceived prevalence of responsible and detrimental research practices (14 questions with 5-point Likert-type answers ranging from very prevalent to never occurring);
  4. sociodemographic and knowledge of statistics questions (10 questions with categorical answers).

All questions in the first three sections also included a Don’t know/Not applicable option. In section 4, respondents could select one of 28 prespecified scholarly fields, a multidisciplinary category, or an “other” category in which they could enter their field(s) as free text. All free-text answers, as well as the 29 choices, were recoded to one of the 6 major categories used in our previous study on Instructions to Authors: Arts & Humanities, Health Sciences, Life Sciences, Physical Sciences, Social Sciences, and Multidisciplinary [15] (see the sketch below). Additionally, open-ended questions (free-text format) explored the reasons for respondents’ (dis)agreement with the questions of the first three sections, and a final survey question invited general feedback on the survey. To maintain the focus on quantitative data and keep the report at a reasonable length, the analysis of the open-ended answers is planned for a separate publication.
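As an illustration of this recoding step, a minimal sketch in Stata (the package used for all our analyses, described below) is shown here. The variable names, label values, and example mappings are hypothetical and serve only for illustration; the actual coding is available on our project’s data repository [14].

    * Hypothetical sketch: recode the 29 field choices (and manually coded
    * free-text "other" answers) into the 6 major categories used in [15]
    label define major 1 "Arts & Humanities" 2 "Health Sciences" 3 "Life Sciences" ///
        4 "Physical Sciences" 5 "Social Sciences" 6 "Multidisciplinary"
    generate major_field = .
    replace major_field = 2 if inlist(field, "Medicine", "Nursing", "Dentistry")
    replace major_field = 6 if field == "Multidisciplinary"
    label values major_field major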

Statistical methods and study bias

We conducted all analyses in Stata v.13, with statistical code and outputs published on our project’s data repository [14]. The grouping of respondents into authors (A), reviewers (R), and editors (E) is explained in detail in the Appendix. The main outcomes (answers to the questions in sections 1 to 3 listed above) are presented as absolute numbers and percentages (calculated on the basis of the number of respondents who answered a specific question). For readability, percentages are shown only for respondents who agreed or strongly agreed with statements, or who perceived practices as prevalent or very prevalent. Data for all answer options are available on our project’s data repository [14]. Differences in sociodemographic characteristics and knowledge of statistics between authors, reviewers, and editors were explored using the chi-squared test for categorical variables and the Kruskal–Wallis test for respondents’ age. To explore possible associations between answers to the questions of sections 1 to 3 and explanatory variables (sociodemographic characteristics and knowledge of statistics), we used ordinal regression analyses. We conducted regressions for all 38 questions individually. We also explored treating the questions of each section as separate scales (consisting of 11, 13, and 14 questions, respectively), and constructed 3 summary scores for those scales, which we also explored in ordinal regression analyses. To adjust for multiple comparisons, we considered p≤0.001 statistically significant (based on the Bonferroni correction of dividing 0.05 by 50, the number of conducted regressions rounded up to the nearest ten). For readability, regression outputs are presented graphically in the Appendix, while details of the analyses, odds ratios, and their associated 95% CIs are available on our project’s data repository [14].
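To make the analytic pipeline concrete, the following minimal Stata sketch shows the form these analyses took. The variable names (role, discipline, country, pubcount, age, attitude_q1) are hypothetical placeholders, not the names used in our published code, which is available on our project’s data repository [14].

    * Compare sociodemographic characteristics across the three groups
    tabulate discipline role, chi2      // chi-squared test for a categorical variable
    kwallis age, by(role)               // Kruskal-Wallis test for respondents' age

    * Ordinal regression for one 5-point Likert item, reported as odds ratios
    ologit attitude_q1 i.role i.discipline i.country i.pubcount age, or

    * Bonferroni-adjusted significance threshold: 0.05 / 50 = 0.001
    display 0.05 / 50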

Respondents’ inquiries and deviations from the protocol

Due to miscommunication within the team, instead of adding the collected email addresses of the editors from our previous study on Instructions to Authors [15] to the total of 100,000 planned invites, 58 of the collected email addresses were used for piloting the survey, and 292 were incorporated into the 100,000 invites sent. Additionally, during survey creation, instead of the planned answer options of 0, 1–4, 5–10, and 10+ for the question “How many articles (approximately) did you review in the last 12 months?”, the options offered were 1, 2–5, 6–10, and 10+. In the invitation email, respondents were asked to contact us if they encountered any (technical) difficulties. Details of their questions are listed in the Appendix.

Ethics approval

An ethics waiver for the study was obtained on 6 April 2018 from the Medical Ethics Review Committee of the Academic Medical Center, University of Amsterdam (reference number W18_112). The survey invitation (available on our project’s data repository [14]) included information on the study purpose, the investigators, the estimated time to complete the survey, the planned sharing of anonymised data, and the publication of summary results. No incentives were offered for participation.

Results

Our overall response rate was 4.9% (3,659 responses out of 74,749 delivered emails), comprising 1,389 authors, 1,833 reviewers, and 434 editors. Respondents came from 126 countries, most commonly the USA (16%), India (8%), Italy (7%), and Brazil (5%). Respondents’ median age was 44 (IQR 35 to 55, range 23 to 90). The majority worked for universities or colleges (62%) and were male (66%). While respondents came from all scholarly disciplines, most were from the Physical (33%) and Life Sciences (25%), followed by Multidisciplinary (18%) and the Health Sciences (13%). Similarly, while respondents came from all career stages, most had a publication record of 6 to 25 publications (39%) or more than 50 publications (25%). Finally, most respondents considered themselves to have either basic (44%) or intermediate (45%) knowledge of statistics. Full details on the response rate calculation, the assignment of respondents to the author, reviewer, and editor groups, their self-declared statistical knowledge, and their sociodemographic characteristics are available in the Appendix. Summary results are presented below, while all answers, as well as the percentages of respondents who chose the “Don’t know” or “Not applicable” options, are shown in the Appendix.

Attitudes toward transparency in reporting and conducting research

Respondents’ attitudes towards 11 statements on transparency in reporting and conducting research, which were based on the TOP guidelines, are shown in Fig 1. The lowest support, across all respondents, was for preregistering studies prior to conducting the research (21%), and the highest for appropriately citing all data, analytic methods (program code), and materials used in a study (95%). Regression analyses (Table A7 in S1 Appendix) revealed no significant differences between authors, editors, and reviewers, but significant associations for 3 factors: 1) discipline: Health Sciences researchers had more positive attitudes towards transparency in reporting and conducting research than researchers from other disciplines; 2) number of publications: researchers with fewer than 6 publications had more positive attitudes than other researchers; 3) country: respondents from India had more positive attitudes than respondents from other countries (Fig 2).

Fig 1. Respondents’ attitudes toward transparency in reporting and conducting research.

In the numerical comparison, differences between groups larger than 5% are shown in bold. In the graphical comparison, the highest percentage is shaded darkest. * Questions are sorted by total agreement percentage; the order in which questions were asked in the survey is presented in the Appendix. The number of respondents varies slightly per question; exact numbers are available on our project’s data repository.

https://doi.org/10.1371/journal.pone.0270054.g001

Fig 2. Examples of univariate comparisons confirmed in the regression analyses as significantly associated with attitudes towards transparency in reporting and conducting research, perceptions of work climate, or the perceived prevalence of responsible and detrimental research practices.

In the graphical comparison, the highest percentage is shaded darkest. The number of respondents varies slightly per question; exact numbers are available on our project’s data repository.

https://doi.org/10.1371/journal.pone.0270054.g002

Perceptions of work climate

Respondents’ perceptions of 13 statements about their own work climate are shown in Fig 3. Most respondents stated that they share their research data with other researchers unless legal or ethical reasons prevent them from doing so (79%), and that having access to others’ data benefits (or would benefit) them (79%). Two thirds (66%) considered the quality of the peer review they received to be generally high, as did 64% for the quality of publications in their field. One fifth of respondents (20%) stated that, due to the pressure to publish, they sacrifice the quality of their publications for quantity, and one seventh (14%) stated that funders or sponsors had interfered in their study design or reporting. Regression analyses (Table A8 in S1 Appendix) revealed no significant differences between authors, editors, and reviewers, but significant associations between perceptions of work climate and two factors: 1) country: respondents from India or the USA had more positive perceptions of their work climate than respondents from other countries; 2) age: younger respondents perceived a worse work climate than older respondents (e.g., they perceived more pressure to sacrifice quality for quantity and more often stated that long peer review times had negatively impacted their careers; Fig 2).

Fig 3. Respondents’ perceptions of their work climate.

In the numerical comparison, differences between groups larger than 5% are shown in bold. In the graphical comparison, the highest percentage is shaded darkest. * Questions are sorted by total agreement percentage; the order in which questions were asked in the survey is presented in the Appendix. The number of respondents varies slightly per question; exact numbers are available on our project’s data repository.

https://doi.org/10.1371/journal.pone.0270054.g003

Perceived prevalence of responsible and detrimental research practices

Respondents’ perceived prevalence of responsible and detrimental research practices is shown in Fig 4. Among detrimental practices, undeserved authorship (38%) and prior relevant research not being cited (33%) were most often perceived as (very) prevalent in respondents’ fields. Among responsible practices, 32% perceived self-reporting of limitations, 19% sharing of raw data, and 6% publication of studies with null or negative results to be (very) prevalent. Regression analyses (Table A9 in S1 Appendix) revealed that editors perceived fabrication, falsification, plagiarism, and the omission of references to be more prevalent than authors or reviewers did. Editors also perceived undeclared conflicts of interest and publication of studies with null or negative results to be more prevalent than reviewers did (but not than authors; Table A9 in S1 Appendix).

Fig 4. Respondents’ perceived prevalence of responsible and detrimental research practices.

In the numerical comparison, differences between groups larger than 5% are shown in bold. In the graphical comparison, the highest percentage is shaded darkest. * Questions are sorted by total agreement percentage; the order in which questions were asked in the survey is presented in the Appendix. The number of respondents varies slightly per question; exact numbers are available on our project’s data repository.

https://doi.org/10.1371/journal.pone.0270054.g004

Regression analyses also revealed 3 other factors associated with the perceived prevalence of research practices: 1) country: respondents from India or the USA perceived detrimental research practices to be more prevalent than respondents from other countries; 2) discipline: Health Sciences respondents perceived responsible practices (i.e., use of reporting guidelines, open peer review, publication of studies with null or negative results, and reporting of study limitations) as more prevalent than respondents from other disciplines did, but they also perceived one detrimental practice, ghost writing, to be more prevalent; 3) age: younger respondents perceived undeserved authorship, as well as prior relevant research not being cited, as more prevalent than older respondents did (Fig 2).

Discussion

Our study has shown that authors, reviewers, and editors were not supportive of all TOP recommendations for transparency in conducting and reporting research. For example, while 95% of respondents (strongly) agreed that researchers must appropriately cite study data, methods, and materials, a large majority agreed that authors must follow reporting guidelines (74%), that journals must encourage publication of replication studies (61%), and that authors must share data (60%); yet only half (50%) felt that journals have to verify that study findings are replicable using the authors’ deposited data and methods of analysis, and only 21% that studies must be preregistered. While we found no significant differences in these attitudes between authors, reviewers, and editors, we did observe differences between respondents of different countries, disciplines, and research seniority. Overall, younger respondents, those from the Health Sciences, and those from India had more positive attitudes towards the TOP recommendations. Direct comparisons of our results with other surveys are difficult because, to the best of our knowledge, no survey since the TOP guidelines were published has addressed all their aspects or covered all the disciplines and countries represented in our survey. Furthermore, the year in which a survey is conducted, differences in wording, the use of scales versus single questions to assess attitudes, and differences in the sociodemographic data collected as potential explanatory variables often preclude direct comparisons [16]. Nevertheless, a survey of 1,015 authors of observational studies, also conducted in 2018, showed that 63% of respondents used reporting guidelines, and that their attitudes towards, awareness of, and use of reporting guidelines were influenced by journals’ endorsements [17], which echoed earlier studies [18]. Our previous analysis of journals’ endorsements, however, showed that only 13% of journals across disciplines recommended the use of reporting guidelines, and only 2% required it [15]. Data sharing practices across the sciences have not been systematically explored, but recent estimates indicated that data sharing was mentioned in 15% of biomedical [19] and in only 2% of psychological articles [20]. A 2020 systematic review indicated that most researchers have positive attitudes toward data sharing [21]. This discrepancy between positive attitudes and the lack of actual data sharing is influenced by many factors, including requirements for job selection and promotion, dedicated funding and skills, and the incentives, time, and training required to prepare (anonymised) data for sharing [21]. Regarding preregistration, a 2017 survey of 275 authors of systematic reviews or meta-analyses showed that 37% of participants agreed with making protocols mandatory [22]. While that percentage was higher than in our study, that study’s sample was smaller, its respondents were predominantly from biomedical disciplines, and its questions pertained only to the preregistration of systematic reviews.

It is likely that the editors’ lack of support for all aspects of the TOP recommendations is one of the factors contributing to why these recommendations have not yet been endorsed by a larger number of journals. The roles of editors and journals in endorsing or requiring specific practices, the lack of resources they often face, and the lack of proper intervention studies were discussed extensively in our previous publications [15,23]. Here we would add that uptake of, and attitudes toward, different practices are also likely influenced by who makes recommendations and how they are made. More rigorous methodological steps during recommendation development (similar to those used for reporting guidelines or clinical practice guidelines) and open calls for feedback might have led to higher initial uptake. Additionally, published case reports with cost estimates and practical tips from those involved in recommendation development who then managed to change the practices of their journals, departments, or institutions could perhaps lead to additional endorsements.

Our study has also shown that the work climates of authors, reviewers, and editors still have considerable room for improvement. Approximately two thirds of respondents (66%) found the quality of the peer review they received, as well as the quality of publications in their field (64%), to be generally high; one fifth (20%) stated that, due to the pressure to publish, they sacrifice the quality of their publications for quantity; and 14% stated that funders or sponsors interfere in their study design or reporting. The perceived pressure to publish was associated with respondents’ age, with younger respondents feeling more pressure to sacrifice quality for quantity. Younger respondents also felt that long peer review times had a more negative effect on their careers than did other researchers. Again, direct comparison with previous studies is difficult due to differences in questions, but a recent survey of 7,670 postdoctoral researchers from 93 countries indicated that 56% had a negative outlook on their careers and that only 46% would advise their younger selves to pursue a career in research [24]. Furthermore, as most postdocs in that survey reported being hired for only short periods, long peer review times and the number of publications likely have more impact on their job prospects than on those of (tenured) academics. The overall high satisfaction of researchers with peer review that we found has also been reported in previous studies, with the caveat of known differences in reported satisfaction between those who had their papers rejected and those who had them accepted [25].

Finally, our study showed that the most commonly perceived detrimental research practices were undeserved authorship (as previous research has also shown) [26,27] and prior relevant research not being cited. While there were no significant differences between the perceptions of authors, reviewers, and editors regarding the prevalence of undeserved authorship, editors perceived a higher prevalence of relevant research not being cited, as well as higher prevalences of fabrication, falsification, and plagiarism, than authors or reviewers did. These findings could be explained by the fact that most researchers regularly engage in conversations about authorship during their projects and publications, while fewer than 2% of researchers have admitted to fabricating or falsifying data [28] or plagiarising others’ work [29]; the latter practices are therefore more likely to have been encountered or discussed by editors than by other researchers. Additionally, we observed age differences in perceived detrimental research practices, with younger researchers finding undeserved authorship and the non-citation of relevant research more prevalent than older researchers did. This echoes previous studies, which showed that young researchers often felt they were doing all of the work while others received the credit, and that they had more research experience than many starting faculty members [24].

We also found that Health Sciences respondents perceived the use of reporting guidelines, open peer review, publication of studies with null or negative results, and reporting of study limitations to be more prevalent than respondents from other disciplines did, which is congruent with the higher frequency of recommendations on those topics in Health Sciences journals compared to journals of other disciplines [15]. Finally, we observed that researchers from the USA and India perceived detrimental research practices to be more prevalent than respondents from other countries did. This could be a consequence of the higher visibility of the Office of Research Integrity and its legal foothold in the USA [30], the role of the Society for Scientific Values in India [31], the number of misconduct cases in those countries, and possibly higher levels of competition there [32,33].

While our study assessed the attitudes and perceptions of a large number of respondents across many countries and disciplines, it is not without limitations. First, due to the low response rate, our findings are not necessarily generalizable (we discuss the possible influences of self-selection and non-response biases in detail in the Appendix). Our response rate, however, was similar to that of other recent large online surveys [16,22,34–36], and response rates have consistently been found to be lower for online surveys than for other modes of survey dissemination [37]. A recent exception to this pattern was a 2019 survey of 2,801 researchers from economics, political science, psychology, and sociology regarding open science practices, which had a response rate of 46%. However, each participant in that survey was compensated with either 25 or 40 dollars (randomly) if they were a student, or 75 or 100 dollars (randomly) if they were an author.

Second, while we had respondents from 126 countries, we explored differences in respondents’ attitudes and perceptions for only the 4 countries with the highest numbers of respondents in our survey. Further research is warranted to determine national or institutional differences [16,38]. Third, although our survey was confidential, previous research has suggested that researchers often overestimate the detrimental research practices of their colleagues [28], but they may also underreport such practices in order to protect their own reputation or that of their field, or because they are unwilling to report such practices for official investigation [39]. Fourth, to preserve confidentiality, we did not ask for information on respondents’ departments or universities. This precluded taking into account the potential clustering of some observations, as witnessing the same detrimental practice, or being aware of the same high-profile cases within a department or field, might have led to overestimation of such practices. Fifth, we did not define all terms used in the survey, so some differences between respondents might also stem from their differing interpretations of some terms. For example, previous surveys on falsification yielded higher estimates when the term ‘falsified’ was not used and researchers were instead asked if they had ever altered or modified their data [28]. Sixth, while we explored several sociodemographic characteristics, publication practices, and knowledge of statistics as factors associated with respondents’ attitudes and perceptions, previous research has also shown an influence of respondents’ personality traits, which we did not measure in our study [40].

In conclusion, our study found that the attitudes of authors, reviewers, and editors did not significantly differ regarding the TOP guidelines or their perceptions of their work climates. We did, however, observe differences between editors and authors or reviewers in the perceived prevalences of detrimental practices, which highlights the need to raise awareness of these issues among all stakeholders and to develop projects in which all stakeholders work together to eradicate or minimize such practices. More studies are also needed to demonstrate the impact of policy changes, as well as studies that lower the burden of implementing such policies [41]. Finally, the recognition and rewarding of responsible practices should move from recommendation to actual practice [9].

Supporting information

S1 Appendix. Details on the survey’s response rate, generalizability and bias, variable coding, and additional analyses.

https://doi.org/10.1371/journal.pone.0270054.s001

(DOCX)

Acknowledgments


We would like to thank our pilot and survey respondents for their valuable input and for taking the time to answer our questions. We also apologise to them for the delay in publishing our results, which occurred in part due to MM moving to another institution and due to the COVID-19 pandemic and its effects on our personal and professional lives. MM is currently a postdoc at the Meta-Research Innovation Center at Stanford University, but as most of the study was conducted during his postdoc in Amsterdam, the listed affiliation reflects the institute where most of the work was done. Finally, we would like to thank Ricardo Moreira, research manager at Elsevier, for scripting the survey and managing the fieldwork, and Robert Thibault for comments on a draft of our manuscript.

Presentations at meetings/conferences

Preliminary results of our survey were presented at PUBMET 2018: The 5th Conference on Scholarly Publishing in the Context of Open Science, held on September 20–21, 2018, in Zagreb, Croatia, as well as at the 6th World Conference on Research Integrity, held on June 2–5, 2019, in Hong Kong, China.

References

  1. The STM Report: An overview of scientific and scholarly journal publishing. The Hague, The Netherlands: 2018.
  2. Fanelli D. Why growing retractions are (mostly) a good sign. PLoS Med. 2013;10(12):e1001563. pmid:24311988; PMCID: PMC3848921.
  3. Ioannidis JPA. Why most published research findings are false. PLoS Med. 2005;2(8):696–701. pmid:16060722.
  4. Nuijten MB, Hartgerink CH, van Assen MA, Epskamp S, Wicherts JM. The prevalence of statistical reporting errors in psychology (1985–2013). Behav Res Methods. 2016;48(4):1205–26. pmid:26497820; PMCID: PMC5101263.
  5. Bouter LM, Tijdink J, Axelsen N, Martinson BC, ter Riet G. Ranking major and minor research misbehaviors: results from a survey among participants of four World Conferences on Research Integrity. Research Integrity and Peer Review. 2016;1(1):17. pmid:29451551.
  6. Marusic A, Malicki M, von Elm E. Editorial research and the publication process in biomedicine and health: Report from the Esteve Foundation Discussion Group, December 2012. Biochem Med. 2014;24(2):211–6. pmid:24969914; PMCID: PMC4083572.
  7. Tennant JP, Dugan JM, Graziotin D, Jacques DC, Waldner F, Mietchen D, et al. A multi-disciplinary perspective on emergent and future innovations in peer review. F1000Research. 2017;6:1151. pmid:29188015; PMCID: PMC5686505.
  8. Spellman BA. A Short (Personal) Future History of Revolution 2.0. Perspectives on Psychological Science. 2015;10(6):886–99. pmid:26581743.
  9. Moher D, Bouter L, Kleinert S, Glasziou P, Sham MH, Barbour V, et al. The Hong Kong Principles for assessing researchers: Fostering research integrity. PLOS Biology. 2020;18(7):e3000737. pmid:32673304.
  10. Nosek BA, Alter G, Banks GC, Borsboom D, Bowman SD, Breckler SJ, et al. Promoting an open research culture. Science. 2015;348(6242):1422–5. pmid:26113702; PMCID: PMC4550299.
  11. Center for Open Science. Current Signatories 2017 [cited 14 Dec 2017]. Available from: https://cos.io/our-services/top-guidelines/.
  12. Von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP, et al. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. PLoS Med. 2007;4(10):e296. pmid:17941714.
  13. Eysenbach G. Improving the quality of Web surveys: the Checklist for Reporting Results of Internet E-Surveys (CHERRIES). Journal of Medical Internet Research. 2004;6(3). pmid:15471760.
  14. Malički M, ter Riet G, Bouter LM, Aalbersberg IJJ. Project: Fostering Transparent and Responsible Conduct of Research: What can Journals do? Mendeley Data; 2019. Available from: http://dx.doi.org/10.17632/53cskwwpdn.6.
  15. Malicki M, Aalbersberg IJJ, Bouter L, ter Riet G. Journals’ instructions to authors: A cross-sectional study across scientific disciplines. PLOS ONE. 2019;14(9):e0222157. pmid:31487331; PMCID: PMC6728033.
  16. Baždarić K, Vrkić I, Arh E, Mavrinac M, Gligora Marković M, Bilić-Zulle L, et al. Attitudes and practices of open data, preprinting, and peer-review—A cross sectional study on Croatian scientists. PLOS ONE. 2021;16(6):e0244529. pmid:34153041.
  17. Sharp MK, Bertizzolo L, Rius R, Wager E, Gómez G, Hren D. Using the STROBE statement: survey findings emphasized the role of journals in enforcing reporting guidelines. Journal of Clinical Epidemiology. 2019;116:26–35. pmid:31398440.
  18. Fuller T, Pearson M, Peters J, Anderson R. What Affects Authors’ and Editors’ Use of Reporting Guidelines? Findings from an Online Survey and Qualitative Interviews. PLOS ONE. 2015;10(4):e0121585. pmid:25875918.
  19. Serghiou S, Contopoulos-Ioannidis DG, Boyack KW, Riedel N, Wallach JD, Ioannidis JPA. Assessment of transparency indicators across the biomedical literature: How open is open? PLOS Biology. 2021;19(3):e3001107. pmid:33647013.
  20. Hardwicke TE, Thibault RT, Kosie JE, Wallach JD, Kidwell MC, Ioannidis JPA. Estimating the Prevalence of Transparency and Reproducibility-Related Research Practices in Psychology (2014–2017). Perspectives on Psychological Science. 2021. pmid:33682488.
  21. Zuiderwijk A, Shinde R, Jeng W. What drives and inhibits researchers to share and use open research data? A systematic literature review to analyze factors influencing open research data adoption. PLOS ONE. 2020;15(9):e0239283. pmid:32946521.
  22. Tawfik GM, Giang HTN, Ghozy S, Altibi AM, Kandil H, Le H-H, et al. Protocol registration issues of systematic review and meta-analysis studies: a survey of global researchers. BMC Medical Research Methodology. 2020;20(1):213. pmid:32842968.
  23. Malički M, Jerončić A, Aalbersberg IJ, Bouter L, ter Riet G. Systematic review and meta-analyses of studies analysing instructions to authors from 1987 to 2017. Nature Communications. 2021;12(1):5840. pmid:34611157.
  24. Woolston C. Postdoc survey reveals disenchantment with working life. Nature. 2020;587(7834):505–8. pmid:33208965.
  25. Global State of Peer Review. Publons; 2019. Available from: https://publons.com/community/gspr.
  26. Marusic A, Bosnjak L, Jeroncic A. A systematic review of research on the meaning, ethics and practices of authorship across scholarly disciplines. PLoS ONE. 2011;6(9):e23477. pmid:21931600; PMCID: PMC3169533.
  27. Noruzi A, Takkenberg JJ, Kayapa B, Verhemel A, Gadjradj PS. Honorary authorship in cardiothoracic surgery. The Journal of Thoracic and Cardiovascular Surgery. 2021;161(1):156–62.e1.
  28. Fanelli D. How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-Analysis of Survey Data. PLoS ONE. 2009;4(5). pmid:19478950.
  29. Pupovac V, Fanelli D. Scientists Admitting to Plagiarism: A Meta-analysis of Surveys. Sci Eng Ethics. 2015;21(5):1331–52. pmid:25352123.
  30. Pascal CB. The Office of Research Integrity: Experience and Authorities. Hofstra Law Review. 2006;35:795.
  31. Juyal D, Thawani V, Thaledi S. Rise of academic plagiarism in India: Reasons, solutions and resolution. Lung India. 2015;32(5):542–3. pmid:26628786.
  32. Ison DC. An empirical analysis of differences in plagiarism among world cultures. Journal of Higher Education Policy & Management. 2018;40(4):291–304.
  33. Gaudino M, Robinson NB, Audisio K, Rahouma M, Benedetto U, Kurlansky P, et al. Trends and Characteristics of Retracted Articles in the Biomedical Literature, 1971 to 2020. JAMA Internal Medicine. 2021. pmid:33970185.
  34. Van Noorden R. Some hard numbers on science’s leadership problems. Nature. 2018;557(7705):294–6. pmid:29769686.
  35. Patience GS, Galli F, Patience PA, Boffito DC. Intellectual contributions meriting authorship: Survey results from the top cited authors across all science categories. PLoS ONE. 2019;14(1):e0198117. pmid:30650079.
  36. Rowley J, Johnson F, Sbaffi L, Frass W, Devine E. Academics’ behaviors and attitudes towards open access publishing in scholarly journals. Journal of the Association for Information Science and Technology. 2017;68(5):1201–11.
  37. Daikeler J, Bošnjak M, Lozar Manfreda K. Web Versus Other Survey Modes: An Updated and Extended Meta-Analysis Comparing Response Rates. Journal of Survey Statistics and Methodology. 2020;8(3):513–39.
  38. Huang C-K, Wilson K, Neylon C, Ozaygen A, Montgomery L, Hosking R. Mapping open knowledge institutions: an exploratory analysis of Australian universities. PeerJ. 2021;9:e11391. pmid:34026359.
  39. Gardner W, Lidz CW, Hartwig KC. Authors’ reports about research integrity problems in clinical trials. Contemporary Clinical Trials. 2005;26(2):244–51. pmid:15837444.
  40. Tijdink JK, Bouter LM, Veldkamp CL, van de Ven PM, Wicherts JM, Smulders YM. Personality traits are associated with research misbehavior in Dutch scientists: a cross-sectional study. PLoS ONE. 2016;11(9):e0163251. pmid:27684371.
  41. Digital Science, Fane B, Ayris B, Hahnel M, Hrynaszkiewicz I, Baynes G, et al. The State of Open Data. 2019. https://doi.org/10.6084/m9.figshare.9980783.v2.