Peer Review History
| Original Submission: March 12, 2024 |
|---|
PONE-D-24-10015
Hit screening with multivariate robust outlier detection
PLOS ONE

Dear Dr. Leong,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Jul 28 2024 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Longxiu Huang, Ph.D.
Academic Editor
PLOS ONE

Journal Requirements:

1. When submitting your revision, we need you to address these additional requirements. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?
Reviewer #1: N/A
Reviewer #2: Yes
Reviewer #3: No

**********

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: See the attached file for the full comments. One major comment is given here. In the dimensional reduction step, it is mentioned that at least $99\%$ of the total variance of $X$ is retained. I am interested in knowing the value of $k$ in the experiments, particularly for the high-dimensional case (p=96).
I'm curious whether the orthogonality of $Y$ is necessary, while the dimensional reduction might be unnecessary for mROUT. Could the authors test mROUT with full dimensions? If such comparisons have already been made, it would be beneficial to discuss the necessity of dimensional reduction in mROUT, although the discussion of its necessity in other methods is already included.

Reviewer #2: This paper describes a multivariate outlier detection method (mROUT) that combines PCA with robust Mahalanobis distance computation. Some of the key steps include estimating the Cauchy-Lorentz location parameter using maximum likelihood (Levenberg–Marquardt DLS method) and estimating the scatter using the Motulsky-Brown robust standard deviation of residuals (rsdr). The authors have demonstrated in their results the importance of incorporating the BH multiplicity adjustment on the p-value to control the false discovery rate (FDR) when outlier detection is conducted via T-squared testing.

The motivation and essential details of the proposed algorithm are clearly described. Although the results demonstrate quite convincingly the strengths of the proposal within the current scope of the experiments, further discussion to elucidate the merits and general limitations of the approach should be encouraged. Perhaps the authors can consider re-parameterising/extending some of the experiments and including relevant findings where appropriate. More detailed suggestions are given in particular in Comments 4-8.

Overall, this work represents an original research contribution to the field of phenotypic screening / biostatistics. Specifically, the authors have developed an FDR-calibrated multivariate outlier detection approach that may facilitate more effective identification of potential targeted treatments. In practical terms, it may lead to the discovery of compounds for the treatment of complex diseases, and to insights into novel pathways or mechanisms.
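The pipeline summarized above (PCA, robust location/scatter estimation, T-squared-type distances, BH multiplicity adjustment) can be illustrated with a minimal sketch. This is not the authors' implementation: the Cauchy-Lorentz maximum-likelihood location and the Motulsky-Brown rsdr scatter are replaced here by simple median/MAD stand-ins, and a chi-square upper tail is assumed for the squared distances.

```python
import numpy as np
from scipy import stats

def bh_adjust(p):
    """Benjamini-Hochberg step-up FDR adjustment of p-values."""
    p = np.asarray(p, dtype=float)
    n = len(p)
    order = np.argsort(p)
    scaled = p[order] * n / np.arange(1, n + 1)
    # enforce monotonicity from the largest p-value downwards
    scaled = np.minimum.accumulate(scaled[::-1])[::-1]
    q = np.empty(n)
    q[order] = np.clip(scaled, 0.0, 1.0)
    return q

def robust_outlier_pvalues(X, var_retained=0.99):
    """Sketch of a PCA + robust-distance outlier screen.

    Stand-ins: median / MAD replace the paper's Cauchy-Lorentz MLE
    location and Motulsky-Brown rsdr scatter estimates.
    """
    Xc = X - X.mean(axis=0)
    # PCA via SVD; keep k components explaining >= var_retained of variance
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s**2 / (s**2).sum()
    k = int(np.searchsorted(np.cumsum(var), var_retained)) + 1
    Y = Xc @ Vt[:k].T                     # orthogonal score columns
    center = np.median(Y, axis=0)         # robust location (stand-in)
    scale = stats.median_abs_deviation(Y, axis=0, scale="normal")
    d2 = (((Y - center) / scale) ** 2).sum(axis=1)  # squared robust distance
    p = stats.chi2.sf(d2, df=k)           # upper-tail p-values (approximation)
    return bh_adjust(p)
```

A sample would then be declared a hit when its adjusted p-value falls below the significance threshold Q.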
The reviewer is intrigued to see some of the phenotypic changes following treatment and the use of a convolutional neural network, with self-supervised contrastive learning in a binary classification task, to represent well images and differentiate between inactive and active controls; then applying outlier detection to features extracted from the penultimate layer of a CNN. The results (e.g. lines 399-406) show tremendous promise. The reviewer hopes this can be scaled up and adopted more widely.

List of Comments

[Comment 1] Introduction (L39-40)

1a) Regarding normally distributed inactives [as observed in multi-dimensional cell painting or morphological profiling]: how often is this a valid assumption?

1b) What methods are available for handling situations where the Gaussian assumption is violated? PCA seems like a reasonable choice when there is no preferential direction (all variables are valued equally) or non-linearity observed in the point distribution. When this is not true, have the authors considered using local neighbourhood embedding [1] or kernel techniques [2] to handle non-Gaussian data for the inactives? Any thoughts regarding the suitability of manifold learning methods?

1c) A statistical comparison of outlier detection efficacy, with and without these embedding, manifold learning or kernel techniques, may be considered as part of future work.

[1] Jiaqi Xue, Bih Zhang, Qianyao Qiang. Local Linear Embedding with Adaptive Neighbors. Pattern Recognition, vol. 136, article 109205, Apr 2023. DOI: https://doi.org/10.1016/j.patcog.2022.109205

[2] P. Rudra, R. Baster, E. Hsieh, D. Ghosh. Compositional data analysis using kernels in mass cytometry data. Bioinformatics Advances, vol. 2, issue 1, 2022.
DOI: https://doi.org/10.1093/bioadv/vbac003

[Comment 2] Multivariate outlier detection (L158-161)

The authors observed that "even after we modified rPCA-rMD methods to also provide a BH FDR-adjusted p-value..., we found that our method had better Type I error control, better FDR control, and/or higher statistical power to declare H1 in (1) compared with other methods." Can you offer some explanation as to why these other techniques produced inferior results despite the p-value FDR adjustment?

[Comment 3] mROUT algorithm, step 2

3a) (L208-209) Was the Levenberg–Marquardt nonlinear regression algorithm used to estimate the Cauchy-Lorentz location parameter? If so, please mention this and include a reference (e.g. [4]). What are its competitive advantages?

[4] H.P. Gavin. The Levenberg-Marquardt algorithm for nonlinear least squares curve-fitting problems, 2024. https://people.duke.edu/~hpgavin/ExperimentalSystems/lm.pdf

3b) Can you provide some information about the optimizer used to maximise the likelihood? In terms of practicality, is there any particular reason for using the Nelder and Mead (1965) method, or for not using quasi-Newton or L-BFGS-B?

3c) Did you encounter any difficulty estimating the Cauchy parameters using real data?

3d) Have you implemented any checks to ensure the estimates are sensible, i.e., to detect anomalies or reject poor estimates? Do you envisage any situation where the maximum likelihood technique might fail *silently* when deployed in a large-scale setting?

[Comment 4] Simulation (L237)

An important question is whether the performance of the proposed method would hold for fewer observations (e.g. N between 20 and 50).

Recommendation 1 - Where possible, the experiments should be repeated for different N, and graphs plotted to observe any changes in the results.

[Comment 5] Application to real data (L402-403)

"Using a significance threshold Q=0.01, mROUT identified 12 gene KOs as hits.
All ACs and LCs were detected and none of the ICs were declared as hits."

5a) This is encouraging. It would be helpful to provide some guidance on the selection of Q, if we want to view this as a parameter tuning exercise.

Recommendation 2 - Visualise in a graph how statistical power (sensitivity) and FDR vary with Q. This will help readers outside a clinical setting to appreciate the Type I and Type II error trade-off (often the F1 score is skewed depending on the application; sometimes higher recall is preferred over precision).

General Questions - not tied to a specific section

[Comment 6] In chemometrics, data can have dimensions in the thousands. Is there evidence that the proposed technique would work equally well for this case (where the number of variables p >> 96)? Would it make sense to use projection pursuit initially to retain the most critical information (by whatever criteria) and reduce the number of dimensions to something more manageable?

The concern is that PCA explains data variance using eigenvectors aligned with directions of maximum variation. This is optimal for linear decomposition. However, the reliability of this procedure depends on the proportion and magnitude of the outliers (relative to the inactives). Second, the Euclidean metric is only meaningful if important differences are reflected by large distances as measured by the L2 norm. Are the authors aware of any situation where nonlinear dimensionality reduction would be of benefit, where manifold learning would be needed to preserve small differences or latent structures of biological significance?

[Comment 7] What are some of the known limitations of the proposed method? For example, for tiny data sets, how would it behave if applied in another problem domain where the number of measurements N is small (e.g. N between 20 and 50, rather than > 200)? This is related to Comment 4.
Can this be explored through Monte Carlo simulation, by randomly drawing, say, {25, 50, 100} samples from the complete set of measurements (which acts as ground truth) to emulate such an effect? The reviewer is thinking of how this would affect the multiplicity-adjusted p-values and hypothesis testing outcomes.

[Comment 8] How would the proposed method behave given perfect data (with no outliers) in high dimensions (let's say the retained PCA components k far exceed 11), and garbage data of a pseudo-random nature with no trends at all? Does the current design raise a flag, to perhaps alert scientists that the data is unexpected or that some error has been introduced during the experiments?

Reviewer #3: A new multivariate robust outlier identification technique is presented in this paper. Despite the appealing idea and promising methodology, the discussions, conclusions, and result interpretation have some significant shortcomings. Additionally, major improvements in writing and presentation are required. Here are the main problems (from my point of view):

1- The literature review and proper citations are missing. It is necessary to update the references. The references are extremely old, with several dating back to before 2010, and there is just one reference from 2021.

2- The existing techniques for detecting outliers in multivariate data are not well reviewed. Numerous recent papers have presented robust techniques for the identification of outliers. A comparative analysis of the new approach offered in this study against existing techniques in recent papers is significantly needed. Here are examples of a few recent papers:

- Vishwakarma et al., 2021. A hybrid feedforward neural network algorithm for detecting outliers in non-stationary multivariate time series. Expert Systems with Applications 184:115545
- Touny et al., 2024. Scalable fuzzy multivariate outliers identification towards big data applications.
Applied Soft Computing 155: 111444
- Vishwakarma et al., 2023. An automated robust algorithm for clustering multivariate data. Journal of Computational and Applied Mathematics 429: 115219
- Hilal et al., 2022. Financial fraud: A review of anomaly detection techniques and recent advances. Expert Systems with Applications 193: 116429

3- The main article must include tables that present some performance measures to compare the new method and the existing techniques.

4- The organization and presentation of the results, along with a lack of clear interpretation, are major challenges. There should be some discussion of the theoretical arguments for the new method's superiority.

5- There is a need to enhance the resolution of the figures.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No
Reviewer #3: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool.
If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
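The small-N subsampling experiment Reviewer #2 proposes in Comment 7 could be emulated with a short Monte Carlo harness. This is a sketch under stated assumptions: `toy_detector` is a hypothetical stand-in (robust z-scores plus a Benjamini-Hochberg adjustment), not the mROUT implementation, and the function and argument names are illustrative.

```python
import numpy as np
from scipy import stats

def _bh(p):
    """Benjamini-Hochberg adjusted p-values (helper)."""
    n = len(p)
    order = np.argsort(p)
    r = np.minimum.accumulate((p[order] * n / np.arange(1, n + 1))[::-1])[::-1]
    q = np.empty(n)
    q[order] = np.clip(r, 0.0, 1.0)
    return q

def toy_detector(Y):
    """Illustrative stand-in for mROUT: robust distances + BH adjustment."""
    scale = stats.median_abs_deviation(Y, axis=0, scale="normal")
    z2 = (((Y - np.median(Y, axis=0)) / scale) ** 2).sum(axis=1)
    return _bh(stats.chi2.sf(z2, df=Y.shape[1]))

def mc_small_n(detector, X, labels, sizes=(25, 50, 100), n_rep=500, q=0.01, seed=0):
    """Draw n rows from the full data (treated as ground truth), rerun the
    detector, and track sensitivity (power) and observed FDR versus n."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels, dtype=bool)
    out = {}
    for n in sizes:
        tp = fp = pos = calls = 0
        for _ in range(n_rep):
            idx = rng.choice(len(X), size=n, replace=False)
            called = detector(X[idx]) < q
            tp += int((called & labels[idx]).sum())
            fp += int((called & ~labels[idx]).sum())
            pos += int(labels[idx].sum())
            calls += int(called.sum())
        out[n] = {"power": tp / max(pos, 1), "fdr": fp / max(calls, 1)}
    return out
```

Plotting `power` and `fdr` against n would then show directly where statistical power begins to degrade as the number of observations shrinks.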
| Revision 1 |
|---|
PONE-D-24-10015R1
Hit screening with multivariate robust outlier detection
PLOS ONE

Dear Dr. Leong,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Sep 26 2024 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Longxiu Huang, Ph.D.
Academic Editor
PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed
Reviewer #2: All comments have been addressed
Reviewer #3: All comments have been addressed

**********

2.
Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Partly

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: N/A
Reviewer #2: Yes
Reviewer #3: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above.
You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Thanks for revising the manuscript. Before it is published, I have three tiny comments:

1. A period is needed at the end of the displayed equations, especially the equations in lines 241 and 249. For the equations in lines 128 and 264, it would be better to add a comma at the end. For equation (1), I also suggest adding a comma or a semicolon at the end of the first line and a period at the end of the second line.

2. When mentioning "Supporting information," consider using "supporting information" or "Supporting Information" consistently. Also, since S1_File.pdf is the code, why not upload the raw code file directly?

3. The images in Figure 2 are still unclear. Although they might meet the PLOS requirements, I suggest using vector format for all images in the manuscript. If you prefer to use the current images, please enlarge the images in Figure 2, as they are hard to read because they are small and some lines are close together.

Reviewer #2: The authors are thanked for responding judiciously to my previous comments. By extending the experiments and incorporating relevant changes in the revised paper, the authors have acted on both my recommendations and strengthened this manuscript considerably. One area of improvement is informing readers of the strengths and limitations of the proposed methodology. In relation to changes in performance as the number of observations (N) is varied, the authors demonstrated that Type I error and FDR are consistently maintained at levels below 0.01; however, statistical power is noticeably reduced when N is roughly below 40 (see Supporting Info S3).
In relation to the significance threshold (Q), the authors discovered that Q may be pushed from 0.01 to 0.05 to increase the true positive rate with negligible impact on the false positive rate, in the context of their experiments (see L425-444 and the ROC curves in Fig 4). These additional insights elevate the quality of this contribution. Such guidance is invaluable as it assists researchers and practitioners in making informed choices during experimental design. The comments/discussions relating to non-Gaussian distributed features, convergence, cautionary remarks and techniques that specifically deal with nonlinearity (L197-205, L254-258 and L516-524, resp.) are appreciated.

The issues identified in the previous review have been satisfactorily addressed. On this basis, this reviewer recommends acceptance of this article for publication in PLOS ONE. A question is left below for the authors to ponder. This question (C-2) is, in my opinion, inconsequential for the publication decision. It is more about reflection and academic curiosity.

Minor comments

C-1 [Author comments, Figure R-1] The vertical axis appears to be mislabelled. Should it be the error of omission "1 - Power"?

C-2 [Author's response to Reviewer 2, Figure R-3] In the top-left plot for 2D simulated data, it is interesting to observe that the Type I error increases with N. This trend is the opposite of FDR(N), which decreases as N increases.

a) This reviewer appreciates that the Type I error is well controlled irrespective of N (it remains below 0.01), but why are there fewer false positives with fewer observations (for smaller N) and more false positives with more observations (for larger N)? If the Type I error were not under control, the inference would be "limit your observations", which seems a little counterintuitive.
b) If we consider a minimum volume ellipsoid (MVE) estimator, one would expect the location and scatter matrix (center and covariance structure of the ellipsoid) to be optimized to cover the inliers in the dataset, with the determinant of the positive definite symmetric matrix minimised in the process, subject to this cover constraint. In view of the finding in the Type I error graph, can we expect the volume to become more compact as an increasing function of N?

Reviewer #3: The authors have adequately addressed my comments raised in a previous round of review and I feel that the revised version is much better and can be accepted.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No
Reviewer #3: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org.
Please note that Supporting Information files do not need this step.
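The Q trade-off discussed in this round (true positive rate versus false positive rate as the significance threshold moves from 0.01 to 0.05) reduces to a simple threshold sweep over BH-adjusted p-values. A minimal sketch, assuming ground-truth outlier labels are available as in the simulation setting; `sweep_q` and its arguments are illustrative names, not from the paper:

```python
import numpy as np

def sweep_q(qvals, labels, grid=None):
    """Tabulate sensitivity (TPR) and observed FDR as the significance
    threshold Q varies, given BH-adjusted p-values `qvals` and boolean
    ground-truth outlier `labels`."""
    grid = np.linspace(0.001, 0.1, 100) if grid is None else np.asarray(grid, dtype=float)
    qvals = np.asarray(qvals, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    rows = []
    for Q in grid:
        called = qvals < Q            # declared hits at this threshold
        tp = int((called & labels).sum())
        fp = int((called & ~labels).sum())
        tpr = tp / max(int(labels.sum()), 1)
        fdr = fp / max(int(called.sum()), 1)
        rows.append((Q, tpr, fdr))
    return np.array(rows)             # columns: Q, sensitivity, FDR
```

Plotting the second and third columns against the first gives the kind of power/FDR-versus-Q graph the reviewer asked for.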
| Revision 2 |
|---|
Hit screening with multivariate robust outlier detection
PONE-D-24-10015R2

Dear Dr. Leong,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging in to Editorial Manager® and clicking the ‘Update My Information' link at the top of the page. If you have any questions relating to publication charges, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Longxiu Huang, Ph.D.
Academic Editor
PLOS ONE

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1.
If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous.
Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Thank you for addressing my comments and providing feedback on the image quality. Everything looks good for publication from my perspective. Best of luck with your future research!

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

**********
| Formally Accepted |
|---|
PONE-D-24-10015R2
PLOS ONE

Dear Dr. Leong,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited
* All relevant supporting information is included in the manuscript submission
* There are no issues that prevent the paper from being properly typeset

If revisions are needed, the production department will contact you directly to resolve them. If no revisions are needed, you will receive an email when the publication date has been set. At this time, we do not offer pre-publication proofs to authors during production of the accepted work. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few weeks to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at customercare@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Longxiu Huang
Academic Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.