Peer Review History

Original Submission
June 24, 2021
Decision Letter - Yanbin Yin, Editor

PONE-D-21-19845

NGS read classification using AI

PLOS ONE

Dear Dr. Dabrowski,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Sep 10 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.
  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Yanbin Yin

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at 

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and 

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: No

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors present a machine learning pipeline to predict pathogens from metagenomic NGS data. The pipeline consists of two parts: coding frame prediction and taxonomic classification. The machine learning models were trained and tested on a test dataset and on real NGS read data. The results, evaluated using ROC curves and accuracy, show that the models perform reasonably well. However, I have several concerns, listed as follows.

1. The authors created a dataset that only contains CDS. I think the frame can be easily predicted by translating DNA reads into proteins in all six reading frames and then checking whether a stop codon appears in the translated sequence: incorrect coding frames will produce stop codons in the translation. I strongly suggest the authors compare their reading frame prediction model to other CDS prediction approaches.

2. For the classification of NGS reads, I suggest the authors also compare their model to other classification software, such as Kraken or RIEMS.

3. The purpose of the model is to predict pathogens from NGS data, and real sequencing data includes sequencing errors. The model should therefore be trained and tested on simulated NGS data; such data can be generated with software such as CAMISIM.
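The six-frame baseline described in comment 1 can be sketched as follows (a hypothetical illustration of the reviewer's suggestion, not code from the manuscript): translate the read in all six frames and keep only those frames whose translation contains no stop codon.

```python
# Stop codons in the standard genetic code.
STOP_CODONS = {"TAA", "TAG", "TGA"}

def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def frames_without_stop(read):
    """Return (strand, offset) pairs for the reading frames with no stop codon.

    Incorrect coding frames usually contain at least one stop codon, so the
    surviving frames are the plausible coding frames for the read.
    """
    candidates = []
    for strand, seq in (("+", read), ("-", revcomp(read))):
        for offset in range(3):
            codons = [seq[i:i + 3] for i in range(offset, len(seq) - 2, 3)]
            if not any(c in STOP_CODONS for c in codons):
                candidates.append((strand, offset))
    return candidates
```

Note that for short reads several frames may be stop-free purely by chance, which is one reason a learned frame classifier can still add value over this simple filter.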

Reviewer #2: The authors aim to predict the taxonomic classification of sequences that cannot be classified by comparison to a reference database, using a neural network. The topic is interesting, and the method shows good prediction performance. However, there are several problems:

1. The authors provide the pipeline in Figure 1. Although the pipeline is relatively clear, more details should be provided to better illustrate the framework and the flow of the neural network model.

2. The authors should pay attention to formatting. For example, the model steps should be described clearly, following the conventions of an algorithm description.

3. To study the impact of parameters, the authors should describe how the dimension of a single feature vector h is selected. More comprehensive results are needed for various settings of this dimension.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Revision 1

First and foremost, we would like to extend our sincere gratitude to the reviewers for their insightful and helpful comments. They have allowed us to modify and extend our work and to hopefully close the identified gaps - primarily in making the description of our approach clearer, better describing the aim of our work, and adding more realistic data and a comparison to other tools. We describe what specific action we took in response to each comment in more detail in the following sections.

## Reviewer 1 ##

Reviewer's comment: The authors created a dataset that only contains CDS. I think the frame can be easily predicted by translating DNA reads into proteins using six reading frames. And then check if the stop codon is included in the protein sequence. The incorrect coding frames will lead to some stop codons in the translated sequence. I strongly suggest the author compare the reading frame prediction model to other CDS prediction approaches.

Author's response: We wholeheartedly agree with the reviewer that it is important to compare the performance of our frame prediction to existing tools in order to show whether rolling our own frame prediction makes sense. We have accordingly performed a basic benchmark against existing tools and added a section ``Benchmarking of frame classification'' to both the ``Methods and Implementation'' (lines 241-271) and the ``Results'' (lines 324-333) sections. In short, we found six applicable tools, two of which were installable and could thus be used in the benchmark. On our test dataset, we outperformed both of them by a significant margin.

Reviewer's comment: For the classification of NGS reads, I suggest the authors also compare their model to other classification software, such as Kraken or RIEMS.

Author's response: While we agree with the reviewer that benchmarking is a very important factor, our proof-of-concept software is not intended as an alternative to taxonomic classification or pathogen detection tools such as Kraken or RIEMS - neither in its current state nor once it is production-ready. Instead, the aim is to create a follow-up tool that allows the analysis of the reads that still remain unclassified after the use of such tools. As such, benchmarking against these tools would not represent the intended use case.

In order to make it clearer that we are explicitly not attempting to create yet another taxonomic profiler that solves the same problem, we have expanded the sentence in lines 70-72 to read: ``[...] after traditional metagenomic data analysis and taxonomic classification has been performed with tools such as Kraken, RIEMS, PathoScope, PAIPline or MetaMeta''.

Reviewer's comment: The purpose of the model is to predict pathogens from NGS data. The sequencing data includes sequencing error. The model should be trained and tested on NGS simulated data. E.g., the simulation data can be generated by software, such as CAMISIM.

Author's response: We thank the reviewer for pointing out this aspect and agree that investigating the stability of our scheme in more realistic settings is indeed interesting. We therefore added supporting information to the draft (``S2 Appendix: Frame classification with realistic NGS reads'', lines 379-410), which examines the influence of sequencing errors on the frame classification, using ART to generate simulated NGS data. The general outcome is that sequencing errors slightly reduce the accuracy of our classification, as expected. Still, the frame classification seems relatively robust, and standard error-reduction practices such as trimming the final bases help to improve it (for the sake of simplicity, we represented this by cutting off the last 50 bases of each read rather than thoroughly evaluating different quality-trimming approaches).
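Dedicated simulators such as ART or CAMISIM model platform-specific error profiles; the core idea, and the tail-trimming mentioned above, can nevertheless be illustrated with a toy sketch (our simplified stand-in, not the actual simulation used in the appendix): draw fixed-length reads from a reference and flip bases at a fixed substitution rate.

```python
import random

def simulate_reads(reference, read_len=150, n_reads=5, error_rate=0.01, seed=7):
    """Draw fixed-length reads from a reference and inject substitution errors."""
    rng = random.Random(seed)
    reads = []
    for _ in range(n_reads):
        start = rng.randrange(len(reference) - read_len + 1)
        read = list(reference[start:start + read_len])
        for i, base in enumerate(read):
            if rng.random() < error_rate:
                # Substitute with one of the three other bases.
                read[i] = rng.choice([b for b in "ACGT" if b != base])
        reads.append("".join(read))
    return reads

def trim_tail(read, n=50):
    """Crude stand-in for quality trimming: drop the last n (error-prone) bases."""
    return read[:-n]
```

Real Illumina error rates rise toward the 3' end of a read, which is why trimming the tail, as in the appendix's simplified setup, tends to recover classification accuracy.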

## Reviewer 2 ##

Reviewer's comment: The authors provide the pipeline in Figure 1. Although the pipeline is relatively clear, more details should be provided to better illustrate their framework and flow about neural network model.

Author's response: We agree with the reviewer that clarity is paramount in such visualizations and are very grateful for this comment. Figure 1 has been accordingly revised and now contains more details of the pipeline. We hope that it illustrates the framework and information flow more clearly now.

Reviewer's comment: Authors should pay attention to the format. For example, the model steps should be described clearly according to the format of algorithm description.

Author's response: We thank the reviewer for pointing out that the description of the algorithm might not be sufficiently clear. Unfortunately, we have not been able to find any format specifications in the PLOS ONE Submission Guidelines regarding the illustration of algorithms. We have thus generally reworked Figure 1 to make the overall operation of the pipeline easier to understand (as per the first comment) and hope that this also satisfies the reviewer's requirements.

Reviewer's comment: To study the impact of parameters, the authors should describe how to select the dimension of a single feature vector h. More comprehensive results are needed when the dimension is set to various values.

Author's response: We fully agree that a more in-depth investigation of the influence of the feature vectors on overall performance would be interesting. However, the dimension of the feature vectors is determined by the pre-trained language model used in the pipeline and is therefore an externally given parameter; in the case of ProtBert it is fixed to 1024. We reworked lines 156-158 to clarify this aspect. Such an evaluation would thus require retraining the whole ProtBert language model, which is very - depending on the available resources, even prohibitively - expensive (note that ProtBert utilized 1024 GPUs or 512 TPUs in the training process). This would also run contrary to the underlying idea of our work, namely investigating the power of an existing language model, which is in part motivated by exactly this often prohibitive cost of re-training.
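To make the fixed dimensionality concrete: a per-read feature vector h is typically obtained by pooling the per-residue hidden states of the language model, whose width (1024 for ProtBert) is set by the pre-trained architecture rather than chosen by the downstream pipeline. A minimal sketch, using random arrays in place of real model output and mean pooling as an illustrative assumption:

```python
import numpy as np

HIDDEN = 1024   # fixed by the pre-trained ProtBert architecture, not tunable here
seq_len = 120   # example peptide length

# Stand-in for the language model's output: one hidden vector per residue.
per_residue = np.random.default_rng(0).normal(size=(seq_len, HIDDEN))

# Mean-pool over the sequence axis to obtain a single fixed-size vector h.
h = per_residue.mean(axis=0)
assert h.shape == (HIDDEN,)
```

Changing HIDDEN would require a language model trained with a different hidden size, which is exactly the retraining cost the response refers to.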

Attachment
Submitted filename: RebuttalLetter.pdf
Decision Letter - Yanbin Yin, Editor

NGS read classification using AI

PONE-D-21-19845R1

Dear Dr. Dabrowski,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Yanbin Yin

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: (No Response)

Reviewer #2: The authors addressed reviewers’ comments well. The revised version is improved in quality. I have no further suggestions to make.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

Formally Accepted
Acceptance Letter - Yanbin Yin, Editor

PONE-D-21-19845R1

NGS read classification using AI

Dear Dr. Dabrowski:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Yanbin Yin

Academic Editor

PLOS ONE

Open letter on the publication of peer review reports

PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.

We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.

Learn more at ASAPbio.