Peer Review History

Original Submission
January 4, 2021
Decision Letter - Arne Elofsson, Editor, Martin Weigt, Editor

Dear Dr. Xu,

Thank you very much for submitting your manuscript "Deep Template-based Protein Structure Prediction" for consideration at PLOS Computational Biology.

As with all papers reviewed by the journal, your manuscript was reviewed by members of the editorial board and by several independent reviewers. In light of the reviews (below this email), we would like to invite the resubmission of a significantly-revised version that takes into account the reviewers' comments.

Both reviewers clearly appreciate the reported performance and the clarity of the manuscript. However, the second reviewer raises an important concern about reproducibility and method availability: currently only a link to the RaptorX server is provided, and everything appears to be integrated rather than independently testable. Please address this point carefully.

We cannot make any decision about publication until we have seen the revised manuscript and your response to the reviewers' comments. Your revised manuscript is also likely to be sent to the same reviewers for further evaluation.

When you are ready to resubmit, please upload the following:

[1] A letter containing a detailed list of your responses to the review comments and a description of the changes you have made in the manuscript. Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

[2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file).

Important additional instructions are given below your reviewer comments.

Please prepare and submit your revised manuscript within 60 days. If you anticipate any delay, please let us know the expected resubmission date by replying to this email. Please note that revised manuscripts received after the 60-day due date may require evaluation and peer review similar to newly submitted manuscripts.

Thank you again for your submission. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments.

Sincerely,

Martin Weigt

Guest Editor

PLOS Computational Biology

Arne Elofsson

Deputy Editor

PLOS Computational Biology

***********************


Reviewer's Responses to Questions

Comments to the Authors:

Please note here if the review is uploaded as an attachment.

Reviewer #1: The authors improve template-based modeling over their previous method by using ADMM to align predicted distance potential to that of a template. Overall, a good description of the method and what was done during CASP.

Some general concerns:

1) The authors state that "DeepThreader is the first distance-based threading". But it seems DALI (Distance-matrix ALIgnment method) could also be used to align distance matrices. Have the authors tried using predicted distance matrices with DALI for template search/ranking and threading?

2) For cases where DeepThreader did better than NDThreader, is this because NDThreader got stuck in a local minimum during ADMM alignment? If the best possible alignment (using TMalign) or the alignment from DeepThreader is rescored using NDThreader's scoring function, does it have a better or worse score than NDThreader's own solution?

3) For Table 5, it would be good to see a scatterplot comparing the methods. Averages can be a bit misleading, as a single outlier target can skew the results. It would be good to know on what fraction of the targets RaptorX did better than Zhang-Server/BAKER-SERVER. The statement in the abstract "the best GDT score among all CASP14 servers on the 58 TBM targets" makes it sound like the method got the best GDT score on all 58 targets... though I'm guessing this is only true on average?

Reviewer #2: The authors of "Deep Template-based Protein Structure Prediction" present a highly successful deep learning method for improving the accuracy of template based modelling of protein structures.

The paper is written in a readable, methodical way and is sufficiently easy to follow for an interested reader. The methods presented are both interesting and sufficiently novel.

They would be of great use to the general community, if only they were available. In its current format, the article describes the method, though not in sufficient detail for the reader to replicate it.

= Major concerns:

1. The paper introduces a new architecture - Deep Convolutional Residual Neural Fields (DRNF) - which appears exciting and (judging by the authors' report) performs exquisitely well for the problem approached. However, there is no reference implementation of this architecture available, to say nothing of the actual method that could be used on arbitrary data. Without the ability to replicate the results, the message of the paper becomes much weaker.

2. The authors provide the compositions of the training/validation/test sets, but these are all the data provided. As the method is not available, there is no way to verify the correctness of the results, nor is there a way to test the method independently. Furthermore, there is no way to assess the quality of predictions beyond the metrics provided by the authors.

3. The authors of AlphaFold2 claim that their method largely supersedes TBM methods for protein structure prediction. How do DRNF and NDThreader compare to such a method, especially in terms of practical usefulness for the protein structure prediction and modelling communities?

4. Training/validation/test set separation. On p. 16, l. 273: "A multi-domain protein chain may belong to multiple groups". It is not evident to me that such a partition does not result in information bleed between the data sets, especially in the context of page 17, l. 285-288. Let's assume one protein pair contains domains in superfamilies S1 and S2 and is assigned to (let's say) the training set by virtue of domain S1. Let's also assume that superfamily S2 is not part of the training set (which, in light of the paper, it does not need to be). Can the same protein pair be assigned to the validation/test set by virtue of domain S2? Can proteins containing domains in S2 be part of the validation/test sets, even though examples of these domains are already in the training set?

5. ADMM diversification, cf. p. 22, l. 411-413. How much divergence can one expect with different initialization criteria? It would seem that for the "easier" tasks, inclusion of SAS and/or SS priors should be irrelevant, and both TMalign and DeepAlign should result in identical alignments... Can you provide an estimate of how often such a treatment helps?

= Minor concerns:

p. 1 l. 12/13 NDThreader becomes DNThreader

p. 1. l. 17 unnecessary space in 'co-evolution'

p. 2. l 30 "methods developed for TBM": missing verb

p. 2. l. 40 "good percentage" - would you care to specify?

p. 15. Table 3: for NDThreader, the (TM+GDT)/2 "top 1" score value 0.419 is incongruent with the rest of the table. I assume it is a typo... Additionally, please look into the table headers and their alignment.

p. 22. l. 410. "converges to local optimal" - I think you mean "optimum"

p. 22. l. 411-413. How much divergence can one expect with different initialization criteria? It would seem that for the "easier" tasks, inclusion of SAS and/or SS priors should be irrelevant, and both TMalign and DeepAlign should result in identical alignments...

**********

Have all data underlying the figures and results presented in the manuscript been provided?

Large-scale datasets should be made available via a public repository as described in the PLOS Computational Biology data availability policy, and numerical data that underlies graphs or summary statistics should be provided in spreadsheet form as supporting information.

Reviewer #1: Yes

Reviewer #2: No: The authors did not provide the results they discuss, nor did they provide a way to replicate them. The methods used in the paper remain private, and as such none of the results can be verified or replicated.

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

Figure Files:

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org.

Data Requirements:

Please note that, as a condition of publication, PLOS' data policy requires that you make available all data used to draw the conclusions outlined in your manuscript. Data must be deposited in an appropriate repository, included within the body of the manuscript, or uploaded as supporting information. This includes all numerical values that were used to generate graphs, histograms etc. For an example in PLOS Biology see here: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001908#s5.

Reproducibility:

To enhance the reproducibility of your results, PLOS recommends that you deposit laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions, please see http://journals.plos.org/compbiol/s/submission-guidelines#loc-materials-and-methods

Revision 1

Attachments
Attachment
Submitted filename: NDThreader-ReplyLetter.docx
Decision Letter - Arne Elofsson, Editor, Martin Weigt, Editor

Dear Dr. Xu,

Thank you very much for submitting your manuscript "Deep Template-based Protein Structure Prediction" for consideration at PLOS Computational Biology. As with all papers reviewed by the journal, your manuscript was reviewed by members of the editorial board and by several independent reviewers. The reviewers appreciated the attention to an important topic. Based on the reviews, we are likely to accept this manuscript for publication, providing that you modify the manuscript according to the review recommendations.

Besides a number of very minor comments, Reviewer 2 raises major concerns with respect to the sharing of your code, which does not comply with open source standards. Please change access to the software such that it can be downloaded and used without registration. For your information, the PLOS standards for software sharing are reproduced here:

"

We expect that all researchers submitting to PLOS submissions in which software is the central part of the manuscript will make all relevant software available without restrictions upon publication of the work. Authors must ensure that software remains usable over time regardless of versions or upgrades. If the original software is not able to be shared, authors must provide a reasonable facsimile.

[https://journals.plos.org/ploscompbiol/s/materials-and-software-sharing#loc-sharing-software].

If the Software is a central part of the submission the paper must meet the following requirements:

Based on open source standards

Conform to the Open Source Definition

Deposited in an open software archive (see “Depositing software,” below)

Included in the submission as supporting information

Linked directly from the manuscript file

"

Please prepare and submit your revised manuscript within 30 days. If you anticipate any delay, please let us know the expected resubmission date by replying to this email.

When you are ready to resubmit, please upload the following:

[1] A letter containing a detailed list of your responses to all review comments, and a description of the changes you have made in the manuscript. Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out

[2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file).

Important additional instructions are given below your reviewer comments.

Thank you again for your submission to our journal. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments.

Sincerely,

Martin Weigt

Guest Editor

PLOS Computational Biology

Arne Elofsson

Deputy Editor

PLOS Computational Biology

***********************

A link appears below if there are any accompanying review attachments. If you believe any reviews to be missing, please contact ploscompbiol@plos.org immediately:

[LINK]


Reviewer's Responses to Questions

Comments to the Authors:

Please note here if the review is uploaded as an attachment.

Reviewer #1: The authors have addressed most of my concerns. I would recommend acceptance.

DALI should be cited in the context of "first distance-based threading method", regardless of whether one uses real or predicted distances.

Reviewer #2: Thank you for comprehensively addressing my questions. I maintain my opinion that the work presented in this paper is of high quality, both in terms of novelty and implementation, and constitutes a valuable addition to knowledge in the field.

However, even though the Authors do provide the software presented in the paper, it does not fulfil the Open Source requirement of the Journal. It is impossible to download the software without registering an account (which compromises the implied anonymity of the review, or requires submitting fictitious personal data). Moreover, users with commercial email addresses cannot register - which is a minute detail, but still contravenes the Open Access/Open Source policy of the Journal. The software, even though it is (commendably) provided as source code, comes with a blanket reservation:

"Unless explicitly permitted by the RaptorX team, these standalone programs and datasets cannot be used for commercial purposes or included into a web server." in the README file on the web server where one can download the software. This alone makes the software incompatible with OSI licenses (https://opensource.org/licenses/alphabetical), which is a prerequisite for software papers in PLOS journals. The restrictions on access, redistribution, and usage violate the OSI Open Source Definition (https://opensource.org/docs/osd) in at least points 1, 3, 5, and 6.

Were the authors to release the software with a free license, I would have no reservations.

My remaining concerns are largely cosmetic:

p. 2 "protein data bank" -> "Protein Data Bank"

p. 4 "New Deep-learning Therader" -> "... Threader"

p. 9 The captions of Table 3 are still improperly typeset - which I trust will be fixed by the editorial office, but it still makes the paper less approachable for reviewers.

p. 15 "Run TMalign to find ... " - it feels like there is a subject missing, consider adding "We".

And a general concern:

In many places the authors contrast methods, asserting the supremacy of one over the other. Due to the limited size of the comparison sets, and the often small margins of difference, it would greatly improve the paper if confidence intervals or - even better - p-values were provided with such comparisons.

**********

Have all data underlying the figures and results presented in the manuscript been provided?

Large-scale datasets should be made available via a public repository as described in the PLOS Computational Biology data availability policy, and numerical data that underlies graphs or summary statistics should be provided in spreadsheet form as supporting information.

Reviewer #1: Yes

Reviewer #2: Yes

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

Figure Files:

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org.

Data Requirements:

Please note that, as a condition of publication, PLOS' data policy requires that you make available all data used to draw the conclusions outlined in your manuscript. Data must be deposited in an appropriate repository, included within the body of the manuscript, or uploaded as supporting information. This includes all numerical values that were used to generate graphs, histograms etc. For an example in PLOS Biology see here: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001908#s5.

Reproducibility:

To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols

References:

Review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript.

If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Revision 2

Attachments
Attachment
Submitted filename: NDThreader-ReplyLetter-v2.docx
Decision Letter - Arne Elofsson, Editor, Martin Weigt, Editor

Dear Dr. Xu,

We are pleased to inform you that your manuscript 'Deep Template-based Protein Structure Prediction' has been provisionally accepted for publication in PLOS Computational Biology.

Before your manuscript can be formally accepted you will need to complete some formatting changes, which you will receive in a follow up email. A member of our team will be in touch with a set of requests.

Please note that your manuscript will not be scheduled for publication until you have made the required changes, so a swift response is appreciated.

IMPORTANT: The editorial review process is now complete. PLOS will only permit corrections to spelling, formatting or significant scientific errors from this point onwards. Requests for major changes, or any which affect the scientific understanding of your work, will cause delays to the publication date of your manuscript.

Should you, your institution's press office or the journal office choose to press release your paper, you will automatically be opted out of early publication. We ask that you notify us now if you or your institution is planning to press release the article. All press must be co-ordinated with PLOS.

Thank you again for supporting Open Access publishing; we are looking forward to publishing your work in PLOS Computational Biology. 

Best regards,

Martin Weigt

Guest Editor

PLOS Computational Biology

Arne Elofsson

Deputy Editor

PLOS Computational Biology

***********************************************************

Formally Accepted
Acceptance Letter - Arne Elofsson, Editor, Martin Weigt, Editor

PCOMPBIOL-D-21-00008R2

Deep Template-based Protein Structure Prediction

Dear Dr Xu,

I am pleased to inform you that your manuscript has been formally accepted for publication in PLOS Computational Biology. Your manuscript is now with our production department and you will be notified of the publication date in due course.

The corresponding author will soon be receiving a typeset proof for review, to ensure errors have not been introduced during production. Please review the PDF proof of your manuscript carefully, as this is the last chance to correct any errors. Please note that major changes, or those which affect the scientific understanding of the work, will likely cause delays to the publication date of your manuscript.

Soon after your final files are uploaded, unless you have opted out, the early version of your manuscript will be published online. The date of the early version will be your article's publication date. The final article will be published to the same URL, and all versions of the paper will be accessible to readers.

Thank you again for supporting PLOS Computational Biology and open-access publishing. We are looking forward to publishing your work!

With kind regards,

Katalin Szabo

PLOS Computational Biology | Carlyle House, Carlyle Road, Cambridge CB4 3DN | United Kingdom ploscompbiol@plos.org | Phone +44 (0) 1223-442824 | ploscompbiol.org | @PLOSCompBiol

Open letter on the publication of peer review reports

PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.

We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.

Learn more at ASAPbio.