Peer Review History

Original Submission
September 6, 2020
Decision Letter - Christos A. Ouzounis, Editor, William Stafford Noble, Editor

Dear Mr. Sommer,

Thank you very much for submitting your manuscript "Balrog: A universal protein model for prokaryotic gene prediction" for consideration at PLOS Computational Biology.

As with all papers reviewed by the journal, your manuscript was reviewed by members of the editorial board and by several independent reviewers. In light of the reviews (below this email), we would like to invite the resubmission of a significantly revised version that takes into account the reviewers' comments.

We cannot make any decision about publication until we have seen the revised manuscript and your response to the reviewers' comments. Your revised manuscript is also likely to be sent to reviewers for further evaluation.

When you are ready to resubmit, please upload the following:

[1] A letter containing a detailed list of your responses to the review comments and a description of the changes you have made in the manuscript. Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

[2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file).

Important additional instructions are given below your reviewer comments.

Please prepare and submit your revised manuscript within 60 days. If you anticipate any delay, please let us know the expected resubmission date by replying to this email. Please note that revised manuscripts received after the 60-day due date may require evaluation and peer review similar to newly submitted manuscripts.

Thank you again for your submission. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments.

Sincerely,

Christos A. Ouzounis

Associate Editor

PLOS Computational Biology

William Noble

Deputy Editor

PLOS Computational Biology

***********************

Reviewer's Responses to Questions

Comments to the Authors:

Please note here if the review is uploaded as an attachment.

Reviewer #1: I am glad to see a new tool developed using deep learning to predict proteins from prokaryotic genomes, and it sounds like a nice tool that improves accuracy and is easy to use without training on specific taxonomic units, unlike other tools such as Prodigal and Prokka. I would like to test it myself, but I failed to use the web server provided by the authors; I even failed to upload my sequences. I suppose the model file is large, which hinders distribution of the tool. A standalone version would be much more helpful.

The writing is good, but I still have other concerns.

1) As we know, prokaryotic genome sequencing is heavily biased toward certain pathogens, so the data set used for training is not balanced.

2) For prokaryotic genomes, gene counts differ greatly within the same species, that is, between different populations/strains, because of HGT or other factors, resulting in quite different accessory genomes. Why did the authors select proteins for training by first picking genomes and then taking their proteins? It looks like the authors should instead select all high-quality prokaryotic genomes, curate a pangenome, and cluster those proteins for training the model. Another option is that the authors could extract high-quality prokaryotic protein sequences from known databases, e.g. UniProt.

3) I am not sure about the rule of classifying genes as non-hypothetical based only on whether the description contains “hypothetical” or “putative”. This is really coarse.

4) The two figures are too simple to express clearly what was done by the authors.

Reviewer #2: The authors developed a method for gene prediction in prokaryotes, Balrog, which is based on deep convolutional neural networks (CNNs) and was trained on 3290 genomes and tested on 36. To focus the test results on non-trivial cases, no genomes from the same genus as any of the test genomes were allowed in the training set. The method employs recent technological developments in using CNNs for sequence modeling (Bai, Kolter, Koltun, 2018). First, a CNN is trained to predict, for every position of a translated amino acid sequence, whether translation is in the right frame or not. This is the heart of the method. Second, a CNN for predicting translation initiation sites is trained on the 32-nucleotide-long sequences around each start site of the non-hypothetical proteins in the training set. Third, to avoid making contradictory ORF calls (e.g. strongly overlapping ones), the longest weighted path through a directed acyclic graph is computed, in which nodes represent possible ORFs and nodes are connected by edges if the ORFs do not overlap too much.
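To make the third step concrete, the sketch below frames ORF selection as a maximum-weight path problem solved by dynamic programming over candidates sorted by end coordinate. The `Orf` fields, the scoring, and the 60-nt overlap threshold are illustrative assumptions for this summary, not Balrog's actual implementation.

```python
# Hypothetical sketch of maximum-weight ORF selection over a DAG.
# Field names, scoring, and the overlap threshold are assumptions,
# not Balrog's code.
from dataclasses import dataclass

@dataclass
class Orf:
    start: int    # genome coordinate of the first base
    end: int      # genome coordinate of the last base
    score: float  # model-assigned score for being a real gene

MAX_OVERLAP = 60  # tolerated overlap in nucleotides (assumed value)

def select_orfs(orfs: list[Orf]) -> list[Orf]:
    """Return the maximum-weight chain of mutually compatible ORFs."""
    if not orfs:
        return []
    orfs = sorted(orfs, key=lambda o: o.end)
    best = [o.score for o in orfs]  # best path weight ending at node i
    prev = [-1] * len(orfs)        # back-pointers for path reconstruction
    for i, oi in enumerate(orfs):
        for j in range(i):
            # Edge j -> i exists only if the two ORFs overlap by less
            # than MAX_OVERLAP nucleotides.
            if orfs[j].end - oi.start < MAX_OVERLAP and best[j] + oi.score > best[i]:
                best[i] = best[j] + oi.score
                prev[i] = j
    # Trace back from the highest-weight endpoint.
    i = max(range(len(orfs)), key=lambda k: best[k])
    path = []
    while i != -1:
        path.append(orfs[i])
        i = prev[i]
    return path[::-1]
```

Sorting by end coordinate guarantees the graph is acyclic; the quadratic inner loop stands in for whatever predecessor pruning a production implementation would use.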

Balrog achieves sensitivity very similar to that of the gold-standard tools Prodigal and Glimmer3, and it has 11% and 30% fewer likely false predictions than Prodigal and Glimmer3, respectively. Balrog takes 5-10 minutes to process a typical bacterial genome on a GPU, whereas Prodigal takes at most a few seconds on a single CPU core.

The results are a bit disappointing considering the big advances that deep learning has afforded in many bioinformatic applications. However, the study is interesting for two reasons. First, if the slight improvements hold up under an unbiased benchmark, they would represent a worthwhile gain in prediction accuracy. Second, the study demonstrates how to use state-of-the-art deep learning methods for the task of gene prediction.

Major points:

1) It is unclear to what degree the training set is biased by the fact that many gene annotations in the training genomes are also produced by bioinformatic prediction tools. Since Glimmer3 and Prodigal have been the standard tools for gene prediction since 1998 and 2010, respectively, it is likely that most of the 'extra' genes annotated as hypothetical were actually predicted by Glimmer3 or Prodigal. It is therefore not surprising at all that Glimmer3 and Prodigal would find more such 'extra' genes than a tool such as Balrog that uses a very different methodology.

The authors need to construct a benchmark that can correct for such biases or at least estimate them. One option could be to test on genomes that have been annotated using experimental data such as RNA-seq, CAGE-seq or the like.

2) It would be important to get more information on how well this very highly parameterized method can generalize beyond the genus. The benchmark should therefore be repeated with training sequences from which all genomes from the same family/order as any of the test genomes have been excluded.

Minor points:

3) The Methods do not mention what dilation sizes were used in the gene model CNN. Did you use d = 2^i?
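For reference, the standard TCN pattern of Bai, Kolter, and Koltun (2018) dilates layer i by d = 2^i. A minimal PyTorch sketch under that assumption follows; the channel count, kernel size, and depth are placeholders, not Balrog's reported values.

```python
# Minimal sketch of the standard TCN dilation pattern d = 2^i
# (Bai, Kolter, Koltun, 2018). Channel count, kernel size, and layer
# count are illustrative guesses, not Balrog's reported values.
import torch
import torch.nn as nn

class DilatedStack(nn.Module):
    def __init__(self, channels: int = 32, layers: int = 5):
        super().__init__()
        # Kernel size 3 with padding = dilation keeps the sequence
        # length fixed; the receptive field roughly doubles per layer.
        self.convs = nn.ModuleList(
            nn.Conv1d(channels, channels, kernel_size=3,
                      dilation=2 ** i, padding=2 ** i)
            for i in range(layers)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, channels, sequence_length)
        for conv in self.convs:
            x = torch.relu(conv(x))
        return x
```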

4) To train the start site model, negative training examples were taken to be the start codons after the annotated start site of the positive training ORFs. Isn't that quite risky, since start sites are notoriously hard to annotate and might frequently be wrong? Wouldn't it be better to use start codons within the negative ORF training examples, as in the sketch below?
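A small sketch of the two sampling schemes this point contrasts; the start-codon set and coordinate conventions are simplifying assumptions (strand handling and genetic-code variants are omitted).

```python
# Hypothetical sketch contrasting the two negative-sampling schemes
# discussed above. START_CODONS and the coordinate conventions are
# simplifying assumptions; strand handling is omitted.
START_CODONS = {"ATG", "GTG", "TTG"}

def downstream_negatives(genome: str, annotated_start: int, orf_end: int):
    """Manuscript's scheme: in-frame start codons downstream of the
    annotated start of a positive ORF (risky if the annotated start
    is itself wrong)."""
    for pos in range(annotated_start + 3, orf_end, 3):
        if genome[pos:pos + 3] in START_CODONS:
            yield pos

def negatives_from_negative_orfs(genome: str, orf_start: int, orf_end: int):
    """Suggested alternative: start codons taken from within negative
    training ORFs, independent of any start-site annotation."""
    for pos in range(orf_start, orf_end, 3):
        if genome[pos:pos + 3] in START_CODONS:
            yield pos
```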

5) Whereas it is stressed in the abstract that Prodigal and Glimmer3 need to be pretrained on each genome to achieve optimal results, the Methods section does not mention whether such pretraining was employed.

6) Please explain why 'Efficiently training a temporal convolutional network requires sequences of the same length' (line 96).

7) Why is Balrog so slow? I count 20 * 8 * 32 * 8 = 40960 parameters. Since predictions can be done in parallel on the GPU, that should take a few seconds, not minutes, for the few tens of thousands of translated ORFs longer than 60 codons. Could it be that each convolutional filter is not computed just once per input window of length k, as it should be, but 100-k+1 times (where 100 is the length of the sequences used for training)?
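The hypothesis in this point can be illustrated with a toy model: scoring every overlapping length-100 window independently recomputes shared filter outputs roughly 100-fold, whereas a single convolution over the full sequence computes each filter exactly once per position. The network below is a stand-in, not Balrog's.

```python
# Sketch of the efficiency concern above. The toy filter bank and
# input shapes are assumptions, not Balrog's architecture.
import torch
import torch.nn as nn

conv = nn.Conv1d(21, 32, kernel_size=8)  # toy filter bank over amino acids
x = torch.randn(1, 21, 1000)             # encoded translated sequence

# Slow: one forward pass per overlapping window of length 100;
# positions shared between windows are convolved ~100 times each.
windows = x.unfold(dimension=2, size=100, step=1)  # (1, 21, 901, 100)
slow = torch.stack([conv(windows[:, :, i]) for i in range(windows.shape[2])])

# Fast: a single pass over the full sequence; identical filters,
# no redundant work.
fast = conv(x)
```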

8) Line 13: Delete '32 ∗ L hidden units per layer, and'

9) Please comment in the discussion on why you did not use a transformer architecture.

**********

Have all data underlying the figures and results presented in the manuscript been provided?

Large-scale datasets should be made available via a public repository as described in the PLOS Computational Biology data availability policy, and numerical data that underlies graphs or summary statistics should be provided in spreadsheet form as supporting information.

Reviewer #1: Yes

Reviewer #2: Yes

**********

PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

Figure Files:

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org.

Data Requirements:

Please note that, as a condition of publication, PLOS' data policy requires that you make available all data used to draw the conclusions outlined in your manuscript. Data must be deposited in an appropriate repository, included within the body of the manuscript, or uploaded as supporting information. This includes all numerical values that were used to generate graphs, histograms, etc. For an example in PLOS Biology see here: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001908#s5.

Reproducibility:

To enhance the reproducibility of your results, PLOS recommends that you deposit laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions, please see http://journals.plos.org/compbiol/s/submission-guidelines#loc-materials-and-methods

Revision 1

Attachments
Submitted filename: Balrog_review_response_letter_.docx
Decision Letter - Christos A. Ouzounis, Editor, William Stafford Noble, Editor

Dear Mr. Sommer,

We are pleased to inform you that your manuscript 'Balrog: A universal protein model for prokaryotic gene prediction' has been provisionally accepted for publication in PLOS Computational Biology.

Before your manuscript can be formally accepted you will need to complete some formatting changes, which you will receive in a follow up email. A member of our team will be in touch with a set of requests.

Please note that your manuscript will not be scheduled for publication until you have made the required changes, so a swift response is appreciated.

IMPORTANT: The editorial review process is now complete. PLOS will only permit corrections to spelling, formatting or significant scientific errors from this point onwards. Requests for major changes, or any which affect the scientific understanding of your work, will cause delays to the publication date of your manuscript.

Should you, your institution's press office or the journal office choose to press release your paper, you will automatically be opted out of early publication. We ask that you notify us now if you or your institution is planning to press release the article. All press must be co-ordinated with PLOS.

Thank you again for supporting Open Access publishing; we are looking forward to publishing your work in PLOS Computational Biology. 

Best regards,

Christos A. Ouzounis

Associate Editor

PLOS Computational Biology

William Noble

Deputy Editor

PLOS Computational Biology

***********************************************************

Reviewer's Responses to Questions

Comments to the Authors:

Please note here if the review is uploaded as an attachment.

Reviewer #1: No further comment.

Reviewer #2: The authors have addressed all reviewer comments satisfactorily. I particularly appreciate providing open-source C++ code that can run on CPUs.

**********

Have all data underlying the figures and results presented in the manuscript been provided?

Large-scale datasets should be made available via a public repository as described in the PLOS Computational Biology data availability policy, and numerical data that underlies graphs or summary statistics should be provided in spreadsheet form as supporting information.

Reviewer #1: Yes

Reviewer #2: Yes

**********

PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

Formally Accepted
Acceptance Letter - Christos A. Ouzounis, Editor, William Stafford Noble, Editor

PCOMPBIOL-D-20-01618R1

Balrog: A universal protein model for prokaryotic gene prediction

Dear Dr Sommer,

I am pleased to inform you that your manuscript has been formally accepted for publication in PLOS Computational Biology. Your manuscript is now with our production department and you will be notified of the publication date in due course.

The corresponding author will soon be receiving a typeset proof for review, to ensure errors have not been introduced during production. Please review the PDF proof of your manuscript carefully, as this is the last chance to correct any errors. Please note that major changes, or those which affect the scientific understanding of the work, will likely cause delays to the publication date of your manuscript.

Soon after your final files are uploaded, unless you have opted out, the early version of your manuscript will be published online. The date of the early version will be your article's publication date. The final article will be published to the same URL, and all versions of the paper will be accessible to readers.

Thank you again for supporting PLOS Computational Biology and open-access publishing. We are looking forward to publishing your work!

With kind regards,

Alice Ellingham

PLOS Computational Biology | Carlyle House, Carlyle Road, Cambridge CB4 3DN | United Kingdom | ploscompbiol@plos.org | Phone +44 (0) 1223-442824 | ploscompbiol.org | @PLOSCompBiol

Open letter on the publication of peer review reports

PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.

We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.

Learn more at ASAPbio.