Peer Review History
| Original Submission (May 15, 2025) |
|---|
|
Dear Dr. Hujoel,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Sep 12 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Robin Haunschild
Academic Editor
PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf.

2. Please note that your Data Availability Statement is currently missing the repository name and/or the DOI/accession number of each dataset OR a direct link to access each database. If your manuscript is accepted for publication, you will be asked to provide these details on a very short timeline. We therefore suggest that you provide this information now, though we will not hold up the peer review process if you are unable.

3. We note you have included a table to which you do not refer in the text of your manuscript. Please ensure that you refer to Table 3 in your text; if accepted, production will need this reference to link the reader to the Table.

4. If the reviewer comments include a recommendation to cite specific previously published works, please review and evaluate these publications to determine whether they are relevant and should be cited. There is no requirement to cite these works unless the editor has indicated otherwise.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

Reviewer #1: Partly
Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: No
Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data (e.g., participant privacy or use of data from a third party), those must be specified.

Reviewer #1: No
Reviewer #2: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

Reviewer #1: Yes
Reviewer #2: Yes

**********

Reviewer #1: The authors propose an estimator for implausible study outcomes, which is a commendable initiative. The approach goes beyond comparing means and is less subjective. Nevertheless, the manuscript is of a very general nature. The applied setting is intuitive; however, the findings might be supported by a simulation study in which all underlying assumptions are varied (treatment effects, dropout rates, the set of trustworthy trials, etc.).

Major

- Page 4, 1st paragraph: the respective DIVBTA estimators should be described in more detail. The paper could also benefit from benchmarking against alternative estimates in the applied setting.
- Page 4, 2nd paragraph: "The set of randomized controlled trials at the basis of clinical guidelines" and, in particular, the definition of a "set of trustworthy trials" should be expanded. So far the authors remain vague on this definition, although it may have crucial relevance for the application of DIVBTAs. Which criteria qualify a trial to be part of the set: placebo-controlled trials, trials with an active control (head-to-head comparisons), efficacy/safety trials, the sample size of trials?
When is the size of the set sufficient? How should a trial be flagged as untrustworthy in the absence of such a set?
- The first assumption, "First, there is no treatment effect heterogeneity – the treatment effect is the same for every individual": I am not sure whether this is meant as a general assumption or one specific to the estimator. This cannot apply to placebo-controlled studies; if otherwise, please comment.
- Please specify in more detail the second assumption, "Second, the error of the outcome does not depend on the treatment or the outcome". How is error defined? Is that a plausible assumption for all kinds of outcomes? For example, single- or double-bounded outcomes should be considered; please see: https://doi.org/10.3102/1076998610396895. Variances should also change, in particular when a considerable proportion of patients achieve remission.
- Please also be more detailed about randomization. The phrasing "A particularly unlucky randomization may lead to …" does not appear well aligned with common terminology.
- The phrasing "the variability of DIVTBA can be impacted by the noise created by participant dropout" is unclear. In which form do dropouts add noise? In this reviewer's experience, they introduce imbalanced patient characteristics and missingness. In the case of selective dropout, this leads to upwardly/downwardly biased results.
- The whole paragraph at the end of page 5/beginning of page 6 is unclear.
- The section "Interpretation of the DIVBTA variance observed in a set of trustworthy trials" might benefit from a table summarizing the assumptions for DIVBTA, such as expectations regarding the mean/variability in trustworthy vs. untrustworthy trials, and the impact of dropout, interaction, chance, trial size, randomization, and further biases.

Minor

Abstract: Please correct typos: ". to illustrate this approach, we assessed the DIVBTAs for Hemoglobin A1cin a systematic sample". Please revise the use and setting of references with/without leading spaces.
Reviewer #2: I very much enjoyed reading your manuscript and look forward to seeing it published soon. With kind regards, Emma Sydenham (Senior Editor, Cochrane Database of Systematic Reviews)

Major comments:

1) Concerning point 3 above, 'Have the authors made all data underlying the findings of their manuscript fully available', I have answered no, as the list of reviews identified in 'II. Problematic differences in variance between trial arms (DIVBTA): Application' is not given. It would be helpful if the thirty-five Cochrane reviews that were identified and reported HbA1c could be referenced in this section, or listed in a table.

Minor comments:

1) The third sentence of the introduction does not follow. I suggest re-phrasing based on the following comment, and those below. While it is true that unreliable trials have permeated clinical guidelines, I disagree that this has contributed to a growing crisis in research integrity. Rather, unreliable trials have permeated guidelines because research integrity and regulatory compliance criteria were not previously included in meta-analysis and guideline development processes. This is partly due to the information not being requested or made available by publishers in previous decades, and the issue not having been taken up by a working group in evidence based medicine years ago (i.e. in the early 2000s). I don't think there is a growing crisis in research integrity; I think research integrity was never raised as an issue until recently. Research integrity issues were highlighted during the COVID pandemic in particular, in 2020, which was some 30 years after evidence based medicine started taking hold as an academic discipline.
I think the possibility of tackling research integrity issues in evidence based medicine has never been better than at present, because there are now a handful of research integrity checklists, policies, and methods (such as this paper) available for researchers to use which didn't previously exist. Also, looking back ten years, the critics of research integrity were not morally wrong or factually incorrect in their views. (Here I refer to an old debate (https://www.bmj.com/content/350/bmj.h2463) and a more recent commentary (https://www.jclinepi.com/article/S0895-4356(25)00003-4/pdf).) Clinical trials are highly regulated, so it should indeed be safe to assume that they were conducted according to the relevant domestic and international regulations. It should be the case that study reports accurately reflect lawfully collected clinical trial data. One of a number of problems is that new computing technologies have been developed which make it easy to generate fake data, and these technologies have become more accessible over the last ten years. Multiple issues relating to computer-generated data, changes in medical journal publishing, trial registration, international harmonisation of clinical trial conduct, and electronic data collection exist in parallel and are constantly evolving. The more recent acknowledgement that research integrity can be a problem, and the use of solutions, are making evidence based clinical guidelines safer. Here is one such example: https://www.cochrane.org/about-us/news/cochrane-launches-new-feature-identify-retracted-publications So I would encourage you to reconsider using the phrase 'a growing crisis in research integrity'.

2) The following sentence, 'A 2021 Cochrane editorial...', could be expanded. This editorial by Boughton, Wilkinson and Bero was the announcement of Cochrane's Editorial Policy on Managing Potentially Problematic Studies, which was developed over 5 years with broad consultation.
I assume I have been invited to comment on your work because I am listed as a member of the Policy advisory committee. Unfortunately the Policy document does not have its own DOI, because it is included in an online policy manual and website, so the editorial is often referenced instead of the policy implementation guidance web link. The editorial you reference was only one means of disseminating the policy; there are also training materials produced by Cochrane, and there was a popular blog post by Richard Smith, 'Time to assume that health research is fraudulent until proven otherwise?' (https://blogs.bmj.com/bmj/2021/07/05/time-to-assume-that-health-research-is-fraudulent-until-proved-otherwise/), among other resources and conference round table discussions. However, rather than referring to prior calls for action, you could present the work as a contribution to the other new developments which have followed. For example, a research integrity tool with an associated R package was developed by Hunter et al.: https://doi.org/10.1002%2Fjrsm.1738 and there is also an R package for the statistical checks of the REAPPRAISED checklist: https://reappraised.wordpress.com/2023/03/28/the-reappraised-r-package/ Ideally this piece of work will inform new R packages, which are currently being used to automate research integrity checks in the field of Data Science. Are you aware that an R package has already been developed for this analysis? It might be worth referencing, too: https://github.com/harrietlmills/DetectingDifferencesInVariance

3) You have chosen to examine a cohort of Cochrane reviews for your example. The analysis that you have done for these trials is good; I just think there is a slight problem in the way you have explained the rationale for selecting these trials, which should be reconsidered. The search for reviews starts in 2010, which is prior to the development of Cochrane's Editorial Policy for Managing Potentially Problematic Studies.
So the trials that are included in your analysis are not ones which went through a Trustworthiness Screening Tool (such as: Identifying and handling potentially untrustworthy trials – Trustworthiness Screening Tool (TST), developed by the Cochrane Pregnancy and Childbirth Group; Alfirevic Z, Kellie FJ, Weeks J, Stewart F, Jones L, Hampson L, on behalf of the Pregnancy and Childbirth Editorial Board; doi: 10.1002/cesm.12037). The content of your work is fine and should be published; I just think the way you have framed the issue of the reviews being trustworthy because they are Cochrane reviews is not quite right if those reviews were published prior to Cochrane's policy and the reviews didn't incorporate a trustworthiness screening tool or another research integrity assessment tool or strategy.

4) Further to point 3), with regards to the paragraphs 'Trustworthy trials as a source of DIVBTA estimates' (p.4), 'By focusing on a set of trials viewed as trustworthy by an authoritative organization' (p.6), and 'A non-parametric approach to define unusual DIVBTAs...' (p.7), it's worth noting that not all Cochrane authors are aware of the Policy on Managing Potentially Problematic Trials. This is partly due to the fact that the Cochrane Handbook is long, and the Policy is described in a separate online manual covering Cochrane's editorial policies. Even at present (July 2025), not every trial included in a Cochrane review is assessed using a research integrity tool. I have no doubt that will come in the future, but we aren't there yet, and very few trials were statistically checked in the past. Furthermore, teaching about meta-analysis varies in scope, and research integrity may not be included in the curriculum. Research integrity was not commonly taught prior to 2020, when the COVID pandemic brought issues concerning research integrity into public discourse.
5) I encourage you to re-phrase the sentences 'A non-parametric approach to define unusual DIVBTAs is to derive the median DIVBTA for each trustworthy trial' (p.7) and 'The two criteria for defining the set of trustworthy clinical trials were...' (p.10). I understand what you mean; however, I would like to point out that the trials you have selected for analysis have not been through a formal trustworthiness assessment prior to publication of the Cochrane reviews. The only Cochrane editorial group that routinely assessed all studies for trustworthiness prior to inclusion in a review was the Pregnancy and Childbirth Group, which developed and applied its Cochrane PCG-TST tool in the reviews for which it held editorial responsibility. (Note, this has all changed now, as there is a 'new' Cochrane Central Editorial Service.) So while the work presented in this paper is true and valid in terms of the actual analysis, it is not the case that the studies included in the 35 Cochrane reviews were assessed as being trustworthy to start with. No trustworthiness assessment was done, apart from possibly filtering out retracted studies as part of the searching procedures (as per the Mandatory standard C48, Examining Errata; Cochrane Handbook section 4.4.6 and the technical supplement section 3.9). The work you have done and presented in this paper is good and should be published; I'm just not sure you should say that the trials are trustworthy if they have not been through a trustworthiness screening tool or a research integrity checklist (such as PCG-TST, the REAPPRAISED checklist (Grey et al.), RIA (Weibel et al.), TRACT (Mol et al.), or a procedure as described in the RIGID Framework (Mousa et al.)). These research integrity tools hadn't been developed in 2010, which is the start date of your search.

6) In the first paragraph of the Discussion, you could make reference to some other work in this area.
For example, in order to understand the errors identified, the RIGID Framework and the Cochrane Policy recommend contacting trial authors to request clarification of the reasons for possible errors. RIGID Framework: https://www.thelancet.com/pdfs/journals/eclinm/PIIS2589-5370(24)00296-7.pdf Cochrane Policy: https://www.cochranelibrary.com/cdsr/editorial-policies/problematic-studies-implementation-guidance Very sadly, there have been a few cases internationally of researchers taking their own lives after their work was found to have problems, and one of the reasons for liaising with them is to make them aware their work is under review and to give them an opportunity to explain any problems that might be identified. (You can look up the case of Yoshiki Sasai, for example, which was highly publicised. However, there are other cases which have received no publicity, so the actual number of cases is slightly higher than one can find in a web search.)

7) Your example is in diabetes research, but you could also reference places where a similar analysis has been used in other areas of medicine, such as: https://doi.org/10.1097/EDE.0000000000001401 and https://doi.org/10.1002/bimj.202200116 among others.

**********

Do you want your identity to be public for this peer review? If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public.
For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Dr. Adrian Richter
Reviewer #2: Yes: Emma Sydenham (Senior Editor, Cochrane Database of Systematic Reviews)

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. |
| Revision 1 |
|
Dear Dr. Hujoel,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Feb 27 2026 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.
If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Robin Haunschild
Academic Editor
PLOS ONE

Journal Requirements:

If the reviewer comments include a recommendation to cite specific previously published works, please review and evaluate these publications to determine whether they are relevant and should be cited. There is no requirement to cite these works unless the editor has indicated otherwise.

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references.
Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article's retracted status in the References list and also include a citation and full reference for the retraction notice.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

Reviewer #1: (No Response)
Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: I Don't Know
Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data (e.g., participant privacy or use of data from a third party), those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

Reviewer #1: Yes
Reviewer #2: Yes

**********

Reviewer #1: The authors have provided a very thorough and carefully considered revision, which has substantially improved the manuscript. The simulation studies convincingly underscore the relevance of an estimator that addresses implausible variance differences between trial arms and even provide novel insights in relation to previous methodological work, while also pointing to potentially problematic randomized controlled trials. The discussion of the proposed method is well balanced and appropriately acknowledges its limitations. It would therefore be highly desirable for this method to be considered in applied settings, for example in meta-analyses, to support the assessment of the underlying evidence base and its credibility, ideally via an openly available R package rather than a Git repository alone.

However, as the manuscript has undergone substantial revisions, there are remaining issues related to structure and naming conventions that at times make it difficult to follow. In particular, the manuscript appears to pursue four distinct and only loosely connected objectives, each addressed in a separate section: (1) identification of statistically significant DIVBTA outliers, (2) simulation studies, (3) a case study on a landmark fraud case, and (4) a case study on clinical trials included in systematic reviews on diabetes management. As a consequence, the manuscript deviates from the conventional IMRaD structure. For example, methods are introduced in both Sections 1 and 2, and the latter simultaneously presents results.
Moreover, the two sections appear somewhat disconnected, as estimates or methods introduced in Section 1 do not clearly reappear in Section 2. In this context, closer adherence to established reporting guidelines would substantially strengthen the manuscript. In particular, the recommendations for simulation studies proposed by Boulesteix et al. (1) and Morris et al. (2) (the ADEMP framework, endorsed by the STRATOS initiative) would provide a helpful structure. At present, key elements required for transparent reporting of simulation studies are missing or insufficiently described, including:

- Why are results presented only for lnCVRs?
- How was the simulation setup defined with respect to the number of iterations, software packages used, distributional assumptions, and seeds used?
- How were noise levels and signal-to-noise ratios handled?
- Why was n = 20 chosen for small-sample trials, and is n = 250 a realistic or appropriate choice for large trials?
- Is the chosen range of treatment effect variability (0.2% to 1.4%) plausible for small randomized controlled trials?

In addition, clear performance metrics such as sensitivity and specificity are currently missing. Potential users of the proposed method need to understand both the risk of false-positive findings and the probability that an untrustworthy trial remains undetected.

Minor comments

- Inconsistent naming conventions are used, for example: "Large samples (250/trial arm) and HTE" versus "Small samples (n = 20 per group) and HTE".
- The correct reference to Tukey's original work should be included, and a definition of how the fences or spread are derived should be added.

(1) Boulesteix A-L, Groenwold RH, Abrahamowicz M, Binder H, Briel M, Hornung R, Morris TP, Rahnenführer J, Sauerbrei W. Introduction to statistical simulations in health research. BMJ Open. 2020;10(12):e039921. https://doi.org/10.1136/bmjopen-2020-039921.
(2) Morris TP, White IR, Crowther MJ. Using simulation studies to evaluate statistical methods.
Statistics in Medicine. 2019;38(11):2074-102. https://doi.org/10.1002/sim.8086.

Reviewer #2: (No Response)

**********

Do you want your identity to be public for this peer review? If you choose “no”, your identity will remain anonymous but your review may still be made public. If published, this will include your full peer review and any attached files. For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Dr. Adrian Richter
Reviewer #2: Yes: Emma Sydenham

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

To ensure your figures meet our technical requirements, please review our figure guidelines: https://journals.plos.org/plosone/s/figures You may also use PLOS’s free figure tool, NAAS, to help you prepare publication quality figures: https://journals.plos.org/plosone/s/figures#loc-tools-for-figure-preparation. NAAS will assess whether your figures meet our technical requirements by comparing each figure against our figure specifications. |
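Reviewer #1's request for sensitivity and specificity can be made concrete with a minimal simulation sketch. Everything below is an assumption for illustration only: the fabrication model (forcing the two arm SDs to be implausibly equal), the flagging rule, the tolerance `tol=0.05`, and the sample size n = 20 are hypothetical and do not represent the manuscript's actual DiVBTA estimator. The sketch only shows the two metrics the reviewer asks for, computed with a fixed, reported seed as the reviewer recommends.

```python
import numpy as np

rng = np.random.default_rng(42)  # fixed seed, reported for reproducibility

def simulate_trial(problematic, n=20):
    """Return the per-arm sample SDs of one two-arm trial.

    Genuine trials: both arms drawn independently from N(0, 1).
    'Problematic' trials (hypothetical fabrication pattern): arm B is
    rescaled so its SD exactly matches arm A's, mimicking implausibly
    similar variances between arms.
    """
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(0.0, 1.0, n)
    if problematic:
        b = (b - b.mean()) * (a.std(ddof=1) / b.std(ddof=1)) + b.mean()
    return a.std(ddof=1), b.std(ddof=1)

def flagged(sd_a, sd_b, tol=0.05):
    """Hypothetical rule: flag a trial whose arm SDs are 'too similar',
    i.e. whose absolute log SD ratio falls below tol."""
    return abs(np.log(sd_a / sd_b)) < tol

n_sim = 2000
truth, flags = [], []
for problematic in rng.random(n_sim) < 0.5:  # ~half the trials problematic
    sd_a, sd_b = simulate_trial(bool(problematic))
    truth.append(bool(problematic))
    flags.append(flagged(sd_a, sd_b))

truth, flags = np.array(truth), np.array(flags)
sensitivity = flags[truth].mean()      # P(flagged | problematic): detection rate
specificity = (~flags[~truth]).mean()  # P(not flagged | genuine): 1 - false-positive rate
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
```

Under this toy setup the fabricated trials are caught every time (the log SD ratio is numerically zero), while a nontrivial share of genuine small trials is falsely flagged, because with n = 20 the log SD ratio has standard deviation of roughly sqrt(1/(n-1)) ≈ 0.23, so chance alone often lands inside the tolerance. That false-positive risk is exactly the quantity the reviewer argues potential users need to see reported.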
| Revision 2 |
|
Unusual Outcome Variances as a Method to Identify Potentially Problematic Clinical Trials PONE-D-25-25920R2

Dear Dr. Hujoel,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager® and clicking the ‘Update My Information’ link at the top of the page. For questions related to billing, please contact billing support.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards, Robin Haunschild Academic Editor PLOS One

Additional Editor Comments (optional): Reviewers' comments: |
| Formally Accepted |
|
PONE-D-25-25920R2 PLOS One

Dear Dr. Hujoel,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS One. Congratulations! Your manuscript is now being handed over to our production team. At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited
* All relevant supporting information is included in the manuscript submission
* There are no issues that prevent the paper from being properly typeset

You will receive further instructions from the production team, including instructions on how to review your proof when it is ready. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few days to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

You will receive an invoice from PLOS for your publication fee after your manuscript has reached the completed accept phase. If you receive an email requesting payment before acceptance or for any other service, this may be a phishing scheme. Learn how to identify phishing emails and protect your accounts at https://explore.plos.org/phishing. If we can help with anything else, please email us at customercare@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards, PLOS ONE Editorial Office Staff on behalf of Dr. Robin Haunschild Academic Editor PLOS One |
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.