Peer Review History
| Original Submission April 15, 2024 |
|---|
|
PONE-D-24-14920 Instability of Estimation Results Based on Caliper Matching with Propensity Scores PLOS ONE Dear Dr. Maruo, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Please submit your revised manuscript by Oct 04 2024 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols. We look forward to receiving your revised manuscript. Kind regards, Md. Belal Hossain, PhD(c) Academic Editor PLOS ONE Journal Requirements: When submitting your revision, we need you to address these additional requirements. 1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf. 2. Please note that PLOS ONE has specific guidelines on code sharing for submissions in which author-generated code underpins the findings in the manuscript. In these cases, all author-generated code must be made available without restrictions upon publication of the work. Please review our guidelines at https://journals.plos.org/plosone/s/materials-and-software-sharing#loc-sharing-code and ensure that your code is shared in a way that follows best practice and facilitates reproducibility and reuse. 3.
Thank you for stating the following financial disclosure: "JSPS KAKENHI Grant Number JP22K19682" Please state what role the funders took in the study. If the funders had no role, please state: ""The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript."" If this statement is not correct you must amend it as needed. Please include this amended Role of Funder statement in your cover letter; we will change the online submission form on your behalf. 4. Please expand the acronym “JSPS” (as indicated in your financial disclosure) so that it states the name of your funders in full. This information should be included in your cover letter; we will change the online submission form on your behalf. 5. Thank you for stating the following in the Acknowledgments Section of your manuscript: "This work was supported by JSPS KAKENHI Grant Number JP22K19682." We note that you have provided funding information that is not currently declared in your Funding Statement. However, funding information should not appear in the Acknowledgments section or other areas of your manuscript. We will only publish funding information present in the Funding Statement section of the online submission form. Please remove any funding-related text from the manuscript and let us know how you would like to update your Funding Statement. Currently, your Funding Statement reads as follows: "JSPS KAKENHI Grant Number JP22K19682" Please include your amended statements within your cover letter; we will change the online submission form on your behalf. Additional Editor Comments:
Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Yes Reviewer #2: Yes ********** 2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: No Reviewer #2: Yes ********** 3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: No Reviewer #2: Yes ********** 4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes Reviewer #2: Yes ********** 5. Review Comments to the Author Please use the space provided to explain your answers to the questions above. 
You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: Overall, the paper is crisp and the problem nicely stated and elaborated. I only have severe concerns with the chosen data example, which is exactly the kind of study where propensity methods should not be used. At some places it seems that the translation into English failed (e.g. Line 263 "paired data that return the median" what does it mean?). See my specific concerns below. Introduction: The authors should give a better motivation for propensity score caliper matching (as opposed to other methods that are less dependent on a stochastic algorithm such as covariate adjustment for propensity score or inverse probability of received treatment weighting). If there are better methods around, why should we take care of another deficiency of PS matching that does not exist with covariate adjustment or IPTW? Line 47: the instability problem may not be visible, but different researchers may set different seeds which leads to different results despite full prespecification even if they used the same data set. Maybe this could be mentioned here. Line 114: consider rewriting the sentence "As a modified version of W_ATT, a method that excludes more than the 95th percentile of the distribution of propensity scores was also applied (W_ATTt) (Lee et al.)" Does it mean that subjects with propensity scores greater than the 95th percentile were excluded? Line 185: what does it mean '... except when the number of matched pairs was covariates'? Lines 208ff Case study. While I like the exposition and the simulation study, I don't think it is wise to use propensity score matching for a risk factor.
Moreover, in this example the exact causal ordering of the measured variables is not known (it is unclear if the 'pretreatment' condition is fulfilled for all considered confounding variables). The authors should consider showing an example with a proper intervention variable. If the paper is published, other authors will apply PS matching to similar questions and we should generally discourage the use of PS methods with non-interventions, in particular if standard assumptions for causal inference are in doubt. A more appropriate example can be found, e.g., in the paper of Luijken et al (2024, Biometrical Journal) and the accompanying GitHub repository in the folder https://github.com/KLuijken/CI_CovSel/tree/master/rcode/add-ons/simulated_CABG_example. These authors emulated the data of a real observational study with an intervention using R code, which can be found on the GitHub repository. Line 219: "the odds ratio for smoking variables ..." I assume the authors mean the odds ratio of smoking vs. not smoking. But anyway, I strongly propose to change the example. Lines 241ff: I don't quite agree with the conclusion that any of the methods (HtL or LtH) should be recommended. In fact, matching is still arbitrary and not 'wrong' if performed with a different ordering. Hence, as already stated in my comment on line 47, even if the matching algorithm becomes deterministic when using a predefined ordering, it is still one of several possible matchings that are all in line with the caliper requirement. There is no scientific reason to prefer HtL to LtH. Line 246 & 260: When the authors recommend using multiple random order matching and reporting the median, did they investigate the coverage rates of 95% confidence intervals based on the median OR? I would assume that there is some undercoverage if one picks a matching and ignores the variability of matching. Hence I would be more cautious than saying 'there seems to be no major problem' (line 262).
The authors could easily add an evaluation of coverage rates at least for the methods which have (almost) no bias. Or they should tone down their conclusion. The major problem with matching is the number of subjects that cannot be matched, which are lost from the analysis. Hence, in practice one should probably prefer weighting or covariate adjustment in order not to throw away precious information. Figure S4: there are no lines for OR=0.5 and OR=0.75. Please check or add an explanation of overlaying curves. Reviewer #2: This article addresses the implications of the criteria chosen for matching individuals when performing caliper matching with propensity scores in observational studies. The authors expose the current issues regarding reproducibility and instability according to the literature and conduct a large simulation study where they compare different criteria for caliper matching and also compare them to propensity score weighting. The authors also apply these techniques in a case study using publicly available data. This is relevant work, given that propensity scores in general and matching in particular are widely used in research, and it is also well conducted and presented by its authors. However, I have several concerns regarding the manuscript, which are described below: 1. In the Introduction section, the authors mention that cherry-picking is easy to conduct in this type of matching. Is this problem widespread in the literature? Do the authors have access to any bibliography regarding the cherry-picking issue in this type of studies, and/or how effective can it be? If so, I would recommend including it in this section to give a measure of the magnitude of the problem. 2. The greedy nearest neighbor matching technique should be described in more detail in Section 2, apart from introducing the four types of sorting available in the literature. 3.
Page 6, lines 102-103: “multinomial distribution with 10 categories” What were the probabilities for each of the 10 categories in this distribution? Were they uniform or unequal? 4. Page 7, lines 106-107: “represented a case where the propensity in treatment selection was close to random or where there was remarkable propensity” Respectively? 0.65 in the c-statistic is close to random and 0.85 remarkable propensity? Please clarify. 5. I understand that the simulation has been done considering that the outcome variable is binary. Would the results be different if the outcome variable was continuous? I think this point should be at least discussed in the manuscript, as continuous outcomes are relatively common in observational studies. 6. Page 9, lines 153-155: “We added 0.1N (10%) of the data to each simulation, and the absolute difference between the odds ratio of the added merged data and that of the original data and the median of the absolute difference were calculated.” Could the authors clarify in the text what was the purpose of this procedure? 7. I think the graphics in the Results section (and the Supplementary Material) would benefit from redefining the limits of the Y axis. I find that, in many cases, the span of the axis is very wide, which makes the lines too “crushed” and barely distinguishable from each other; this could be alleviated by readjusting the amplitude of the Y axis. I would recommend trying to redo the graphics if the authors are able to do so. 8. Page 11, lines 178-180: “For the weighting based analysis method, only the results of the ATT weight-based method are presented in the text because it is one of the most frequently used methods.” I find that W_ATO and W_ATTt showed good performance according to the graphics presented in the supplementary file. Could this statement be revised? 9.
I think the authors should include more details about the data used and its origins in the Case Study section, even if it is a dataset taken from an R package. Knowing more details would help the reader judge the quality of the observational study conducted and the expectations they should have regarding propensity adjustments. 10. Is there any possibility to assess the actual bias of the estimations from the case study? Are the authors aware of any data on real randomized experiments on this matter? Typos: 11. Page 6, line 99: “p gj” I think it should read as “p_{gj}” 12. Page 14, lines 233-235: the verb tense should be changed in this sentence (i.e. every “was” should be replaced by “is”). ********** 6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: Yes: Georg Heinze Reviewer #2: No ********** [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool.
If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. |
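Reviewer #1's central technical point in this round, that greedy caliper matching depends on the order in which treated subjects are processed, can be illustrated with a minimal sketch. The toy propensity scores and caliper below are hypothetical, not the manuscript's simulation settings:

```python
def greedy_caliper_match(ps_treated, ps_control, caliper):
    """Greedy 1:1 nearest-neighbor caliper matching on the propensity score.

    Treated subjects are processed in the order given; each takes the
    nearest still-unmatched control whose score lies within the caliper.
    Returns (treated_ps, control_ps) pairs for the matched sample.
    """
    available = list(range(len(ps_control)))
    pairs = []
    for p in ps_treated:
        if not available:
            break
        # Nearest remaining control to this treated subject.
        j = min(available, key=lambda k: abs(ps_control[k] - p))
        if abs(ps_control[j] - p) <= caliper:
            pairs.append((p, ps_control[j]))
            available.remove(j)
    return pairs

# Two treated subjects compete for the only control within reach.
ps_treated = [0.40, 0.50]
ps_control = [0.45, 0.60]

low_to_high = greedy_caliper_match(sorted(ps_treated), ps_control, caliper=0.08)
high_to_low = greedy_caliper_match(sorted(ps_treated, reverse=True),
                                   ps_control, caliper=0.08)
print(low_to_high)   # [(0.40, 0.45)]  -- treated 0.40 gets the control
print(high_to_low)   # [(0.50, 0.45)]  -- treated 0.50 gets it instead
```

The same data under two orderings yield different matched samples, and hence different downstream estimates, which is why a fully prespecified analysis still depends on this arbitrary choice unless the ordering is fixed or averaged over.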
| Revision 1 |
|
PONE-D-24-14920R1 Instability of Estimation Results Based on Caliper Matching with Propensity Scores PLOS ONE Dear Dr. Maruo, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Please submit your revised manuscript by Mar 01 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. We look forward to receiving your revised manuscript. Kind regards, Academic Editor PLOS ONE Additional Editor Comments: The authors made significant changes in the revised version. However, two major issues still need to be addressed before publication. • Pooling the estimate: It was proposed to pool the estimate and calculate the SE to reflect the improved uncertainty due to pooling. There should be at least one comparison of how the proposed method works compared to pooling the estimate. Here is an example where the authors used the modified multiple outputation technique to account for within and between variations in the estimate for improving precision (doi: 10.1007/s12561-024-09461-6). • The authors should consider a collapsible effect measure and compare the methods in terms of bias, model-based standard error (SE), empirical SE, and coverage probability. The justification for not doing these was insufficient and unreasonable (doi: 10.1002/sim.8086). Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1.
If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation. Reviewer #2: All comments have been addressed Reviewer #3: (No Response) Reviewer #4: (No Response) ********** 2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #2: Yes Reviewer #3: Yes Reviewer #4: Yes ********** 3. Has the statistical analysis been performed appropriately and rigorously? Reviewer #2: Yes Reviewer #3: Yes Reviewer #4: Yes ********** 4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #2: Yes Reviewer #3: Yes Reviewer #4: Yes ********** 5. Is the manuscript presented in an intelligible fashion and written in standard English? 
PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #2: Yes Reviewer #3: Yes Reviewer #4: Yes ********** 6. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #2: The authors have successfully addressed all the comments posed in the first revision, hence I recommend the manuscript to be accepted for publication. Reviewer #3: Thank you for the opportunity to review this manuscript. This study investigates the effect of matching order in propensity score matching to demonstrate the potential instability of results, particularly in small-to-medium sample sizes. Overall, I believe this is a well conducted study that raises an interesting point about the potential for cherry-picking results in such studies. Below are two specific comments and questions that I had while reviewing this study: 1. line 160: "the expected number of events was one for each group" is this a realistic scenario for any type of analysis? I am struggling to think of an area of study where we can do any serious sort of analysis with a single event, especially in the context of a frequentist statistical framework. In situations like this I would think that the matching order of the propensity scores is pretty far down on the list of major issues. I would think that this is more than a very small information setting, to the point where the issues this paper is discussing almost become moot due to the severity of the violation of other statistical assumptions.
I think maybe a bit more explanation regarding the explicit purpose of modeling such a scenario may help readers better understand the purpose of using such an extreme simulation setting. 2. At several points in the manuscript the idea of conducting multiple analyses as more data are accumulated is discussed. This is an interesting area that is receiving more attention recently in the clinical trial literature (adaptive clinical trials). My concern with this is the potential inflation of Type I error and the cherry-picking of results/p-hacking that can arise from this stream of methodological choices if not conducted carefully and transparently reported (preferably a priori). While this is more of a practical concern not directly relevant to the simulation per se (as the true effects are known), I think it may be worth adding some language cautioning against the potential issues surrounding this approach. Reviewer #4: Introduction section: • The aim could be stated more succinctly at the beginning to ensure the reader understands the problem immediately. • The introduction could benefit from a brief discussion of how the findings will address real-world applications or improve current practices. • The introduction could briefly outline why the chosen design (simulations and case studies) is most suitable for evaluating instability. • Briefly justify the choice of simulation studies as the primary method to explore instability and validate findings through case studies. • Revise for conciseness and grammatical clarity to improve readability. Some sentences are verbose or contain minor grammatical inconsistencies, such as "especially, particularly for a non-large sample" and "no research for variability in propensity score analysis due owing to multiple analyses." Greedy nearest neighbor matching section: • Quantify or provide examples of how variability due to data accumulation affects Methods 1-3 compared to random order matching.
• Including the following considerations in the discussion section would provide a more balanced perspective: 1. The assumption of a unimodal propensity score distribution might not always hold in real-world data. 2. The exclusion of interaction effects or non-linear covariate relationships in the propensity score model could limit applicability to more complex datasets. Simulation design section: • The unimodal distribution assumption for propensity scores may not always reflect real-world scenarios where multimodal distributions or non-normality are observed. A sensitivity analysis incorporating such variations could strengthen the conclusions. • While different matching orders are evaluated, the robustness of caliper matching to outliers or extreme propensity scores could be explored further. For example, alternative caliper widths, kernel-based matching, or machine learning approaches for propensity score estimation might provide additional insights. Case study section: • “The risk of a large change due to data addition was higher for the Rand method.” Quantify “large change” to contextualize this risk. • For the low birth weight data, the authors focus on a binary outcome and a single matching ratio. In contrast, the GBCS data are analyzed with different outcomes, sample sizes, and ratios. While this demonstrates versatility, the rationale for the differing approaches between datasets is not discussed. Clarifying this would improve coherence. • The c-statistic for the propensity score model in the low birth weight data is not reported. Given its importance in propensity score analyses, this omission should be rectified for consistency with the GBCS data discussion. • The impact of adding 10% data on odds ratios is presented but not fully contextualized. For instance: Does the larger change in the Rand method suggest a lack of robustness, or is it expected behavior given its design?
How does this finding translate into recommendations for researchers dealing with incomplete or expanding datasets? Discussion section: • The recommendation to use 1,000 iterations for random order matching (instead of 100) is presented without sufficient justification. A brief discussion of how this choice balances computational load and result precision would strengthen this recommendation. • The discussion of unmeasured confounders is minimal. It would benefit from a brief exploration of how such confounding might exacerbate instability or bias results. ********** 7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #2: No Reviewer #3: No Reviewer #4: No ********** |
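The multiple random-order matching with a median summary discussed in this round can be sketched as follows. This is an illustration on synthetic data with a stand-in estimate (the mean outcome among matched controls), not the authors' exact procedure or outcome model:

```python
import random
import statistics

def greedy_caliper_match(ps_treated, ps_control, caliper):
    """Greedy 1:1 caliper matching; returns indices of the matched controls."""
    available = list(range(len(ps_control)))
    matched = []
    for p in ps_treated:
        if not available:
            break
        j = min(available, key=lambda k: abs(ps_control[k] - p))
        if abs(ps_control[j] - p) <= caliper:
            matched.append(j)
            available.remove(j)
    return matched

random.seed(1)
ps_treated = [random.uniform(0.2, 0.8) for _ in range(30)]
ps_control = [random.uniform(0.1, 0.7) for _ in range(60)]
y_control = [random.gauss(0.0, 1.0) for _ in range(60)]

# One estimate per random processing order of the treated subjects.
estimates = []
for _ in range(100):
    order = random.sample(range(len(ps_treated)), k=len(ps_treated))
    matched = greedy_caliper_match([ps_treated[i] for i in order],
                                   ps_control, caliper=0.02)
    if matched:
        estimates.append(statistics.mean(y_control[j] for j in matched))

pooled = statistics.median(estimates)         # pooled summary across orderings
spread = max(estimates) - min(estimates)      # order-induced variability
```

As the reviewers caution, summarizing by the median alone ignores the between-matching component of variance, so interval estimates built from a single matching (or from the median without pooling rules) may undercover.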
| Revision 2 |
|
Instability of Estimation Results Based on Caliper Matching with Propensity Scores PONE-D-24-14920R2 Dear Dr. Maruo, We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager® and clicking the ‘Update My Information' link at the top of the page. If you have any questions relating to publication charges, please contact our Author Billing department directly at authorbilling@plos.org. If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org. Kind regards, Belal Hossain, PhD Academic Editor PLOS ONE Additional Editor Comments (optional): Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1.
If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation. Reviewer #3: All comments have been addressed Reviewer #4: All comments have been addressed ********** 2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #3: Yes Reviewer #4: Yes ********** 3. Has the statistical analysis been performed appropriately and rigorously? Reviewer #3: Yes Reviewer #4: Yes ********** 4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #3: Yes Reviewer #4: Yes ********** 5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. 
Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #3: Yes Reviewer #4: Yes ********** 6. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #3: The authors have addressed all my comments. Thank you for the opportunity to review this interesting, and insightful piece of work. Reviewer #4: (No Response) ********** 7. PLOS authors have the option to publish the peer review history of their article (what does this mean? ). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy . Reviewer #3: No Reviewer #4: No ********** |
| Formally Accepted |
|
PONE-D-24-14920R2 PLOS ONE Dear Dr. Maruo, I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team. At this stage, our production department will prepare your paper for publication. This includes ensuring the following: * All references, tables, and figures are properly cited * All relevant supporting information is included in the manuscript submission, * There are no issues that prevent the paper from being properly typeset You will receive further instructions from the production team, including instructions on how to review your proof when it is ready. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few days to review your paper and let you know the next and final steps. Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org. If we can help with anything else, please email us at customercare@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access. Kind regards, PLOS ONE Editorial Office Staff on behalf of Dr. Md. Belal Hossain Academic Editor PLOS ONE |
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.