Peer Review History

Original Submission: November 14, 2023
Decision Letter - Jianhong Zhou, Editor

PONE-D-23-35773
WilsonGenAI: a deep learning approach to classify pathogenic variants in Wilson Disease
PLOS ONE

Dear Dr. BK,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Mar 03 2024 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.
  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Jianhong Zhou

Staff Editor

PLOS ONE

Journal requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please note that PLOS ONE has specific guidelines on code sharing for submissions in which author-generated code underpins the findings in the manuscript. In these cases, all author-generated code must be made available without restrictions upon publication of the work. Please review our guidelines at https://journals.plos.org/plosone/s/materials-and-software-sharing#loc-sharing-code and ensure that your code is shared in a way that follows best practice and facilitates reproducibility and reuse.

3. We note that the grant information you provided in the ‘Funding Information’ and ‘Financial Disclosure’ sections does not match.

When you resubmit, please ensure that you provide the correct grant numbers for the awards you received for your study in the ‘Funding Information’ section.

4. Thank you for stating the following financial disclosure:

“Funding from the Council of Scientific and Industrial Research (CSIR) through the IndiGenApp Grant and OLP2301”

Please state what role the funders took in the study.  If the funders had no role, please state: "The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript."

If this statement is not correct you must amend it as needed.

Please include this amended Role of Funder statement in your cover letter; we will change the online submission form on your behalf.

5. When completing the data availability statement of the submission form, you indicated that you will make your data available on acceptance. We strongly recommend all authors decide on a data sharing plan before acceptance, as the process can be lengthy and hold up publication timelines. Please note that, though access restrictions are acceptable now, your entire data will need to be made freely accessible if your manuscript is accepted for publication. This policy applies to all data except where public deposition would breach compliance with the protocol approved by your research ethics board. If you are unable to adhere to our open data policy, please kindly revise your statement to explain your reasoning and we will seek the editor's input on an exemption. Please be assured that, once you have provided your new statement, the assessment of your exemption will not hold up the peer review process.

6. We note that Figure 1 in your submission contains copyrighted images. All PLOS content is published under the Creative Commons Attribution License (CC BY 4.0), which means that the manuscript, images, and Supporting Information files will be freely available online, and any third party is permitted to access, download, copy, distribute, and use these materials in any way, even commercially, with proper attribution. For more information, see our copyright guidelines: http://journals.plos.org/plosone/s/licenses-and-copyright.

We require you to either (1) present written permission from the copyright holder to publish these figures specifically under the CC BY 4.0 license, or (2) remove the figures from your submission:

1. You may seek permission from the original copyright holder of Figure 1 to publish the content specifically under the CC BY 4.0 license.

We recommend that you contact the original copyright holder with the Content Permission Form (http://journals.plos.org/plosone/s/file?id=7c09/content-permission-form.pdf) and the following text:

“I request permission for the open-access journal PLOS ONE to publish XXX under the Creative Commons Attribution License (CCAL) CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/). Please be aware that this license allows unrestricted use and distribution, even commercially, by third parties. Please reply and provide explicit written permission to publish XXX under a CC BY license and complete the attached form.”

Please upload the completed Content Permission Form or other proof of granted permissions as an "Other" file with your submission.

In the figure caption of the copyrighted figure, please include the following text: “Reprinted from [ref] under a CC BY license, with permission from [name of publisher], original copyright [original copyright year].”

2. If you are unable to obtain permission from the original copyright holder to publish these figures under the CC BY 4.0 license or if the copyright holder’s requirements are incompatible with the CC BY 4.0 license, please either i) remove the figure or ii) supply a replacement figure that complies with the CC BY 4.0 license. Please check copyright information on all replacement figures and update the figure caption with source information. If applicable, please specify in the figure caption text when a figure is similar but not identical to the original image and is therefore for illustrative purposes only.

7. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This paper presents genetic variant classification using machine learning techniques, specifically TabNet and XGBoost, to classify ATP7B gene variants associated with Wilson's Disease. The study's strength lies in its robust training and validation on a high-confidence dataset and its practical application, as evidenced by successful independent verification and potential utility in clinical and research settings. I have several comments that need to be addressed.

Major:

(1) Why were TabNet and XGBoost chosen as the primary models for this analysis over other deep learning or machine learning models? What specific advantages do they offer for this type of data and problem? Please provide the comparison with other relevant deep learning methods.

(2) The authors mentioned that TabNet uses sequential attention for feature selection, which is instance-wise. How does this impact the generalizability of the model across different datasets or variants? Is there a risk of overfitting to specific features in the training dataset?

(3) The authors note that XGBoost is effective in handling sparse data. However, it is not clear how this capability was specifically advantageous in the current study, given the characteristics of the dataset used.

(4) For the models, the authors have set specific hyperparameters. The manuscript needs more details about how these parameters were chosen.

(5) The authors adjusted the scale_pos_weight in XGBoost for class imbalance. How significant was the class imbalance in the dataset used, and how did this adjustment impact the model's performance, especially in terms of precision and recall?

(6) TabNet stopped training at the 187th epoch out of a possible 1000. Was this due to an early stopping criterion based on validation accuracy? A big epoch size does not necessarily increase the accuracy of the model. How was the risk of overfitting addressed given the excessive epoch size (>100)?

(7) The authors mentioned the top 20 features in feature importance plots for both models. Could the authors provide insights into what these features represent and how they contribute to the pathogenicity classification? How interpretable are these models in terms of understanding the biological significance of these features?

(8) The manuscript needs more details on how specificity, negative predictive value (NPV), and area under the precision-recall curve (AUPRC) were considered.

(9) The test sets' composition (number of benign vs. pathogenic variants) and their source (whether they were balanced or reflective of real-world distributions) are not detailed. How might this affect the models' generalizability to other datasets or real-world scenarios?

(10) The comparison with CADD and other models like RENOVO and MLVar suggests superior performance of your models. However, were these comparisons made under similar conditions (e.g., same datasets, metrics)? How do the models compare in terms of computational efficiency and scalability?

(11) When reclassifying variants of uncertain significance, how did the authors validate the accuracy of these reclassifications? Is there a risk of introducing bias or errors in this process, given the uncertain nature of these variants?

(12) The discussion section of the manuscript needs to be significantly expanded. These are a few points the authors may consider while revising the discussion section. In the discussion, the authors should interpret and explain the findings, placing them in the context of the broader field. Begin by summarizing the main findings of the study, highlighting how they address the research questions or hypotheses stated in the introduction. Then, contextualize these results within the existing literature, discussing how these findings align with or differ from previous research and the potential reasons for these similarities or differences. Discuss the significance and implications of the results, considering both their theoretical and practical applications. Acknowledge the limitations, discussing how they might affect the findings, and suggest areas for future research to address these gaps. This section should bridge the gap between the presented research and the larger scientific community, demonstrating how this work contributes to and advances the field.

Minor:

(1) It would help readers to introduce Wilson's disease in the introduction section.

(2) The relevance of choosing the ATP7B gene needs to be added in the introduction.

(3) "Non-exonic variants and VUS were removed from the analysis and this resulted in a variant dataset of 723 unique variants, ..." Explain why. What is VUS? Expand all the abbreviations at the first use.

(4) lines 106 – 113: The parameters could be presented in a table.

Reviewer #2: Summary:

Vatsyayan et al. applied two ML models (TabNet and XGBoost) to classify ATP7B genetic variants of Wilson disease based on highly engineered features of each variant. Both models show very high classification accuracy, indicating their potential to reduce manual evaluation efforts such as following the guidelines of the American College of Medical Genetics and Genomics and the Association for Molecular Pathology. However, because of the lack of comparison with other variant classification methods, e.g. a disease-agnostic model, it is hard to tell the novelty of WilsonGenAI and whether WilsonGenAI really adds value to Wilson's disease specific variant classification. Due to the large storage requirement of WilsonGenAI, I have not evaluated the software itself. Please see the following comments for major revision:

Major:

1. In the introduction, please review and discuss related work. Lines 174-180 should be part of the introduction.

2. In the results, in addition to CADD, please compare the WilsonGenAI results with more state-of-the-art methods such as Eigen-PC, REVEL, AlphaMissense, etc. Moreover, the argument for not comparing the proposed methods with RENOVO and MLVar is not convincing. Please also include these results in Table S3. Without seeing these baseline results, it is difficult to conclude that TabNet and XGBoost are necessary as Wilson's Disease specific models. The model comparison figure (e.g. a barplot of Table S3) might be the main figure highlighted by this paper.

3. line 68, there are many more pathogenic variants than benign ones. Have the authors considered whether the imbalanced distribution will influence the results?

4. lines 74-76, it seems the three populations used for annotation differ from the population of the WilsonGen dataset. Can the authors discuss more on the potential problems of this inconsistency?

5. Figure S1. Can the authors show both training and validation loss, in order to easily see whether the model is overfitting or not?

6. Figure S2. It seems the important features identified by XGBoost and TabNet are quite different, but their ROC curves are similar. Can the authors discuss more about this?

7. Figure S3. It seems the accuracies are very unstable. Can the authors comment on this problem?

8. Figure S4. It is weird that the total number of variants differs between the methods.

9. Since the proposed models only consider two classes, in practice, for variants with a predicted probability around 0.5, should the user regard them as VUS? Is there a recommended threshold? This is especially important as the authors claim WilsonGenAI could be used for clinical diagnosis.

10. For the VUS of the independent datasets (line 124), are their predicted scores around the margin between the two classes? Is there a trend or correlation between the predicted scores and the 5 ordinal classes?

11. Lines 189-200: was there a specific consideration in choosing S855 and C271X for validation? Ideally, it would be very interesting to see if some VUS with very high predicted pathogenic probability could be validated to lead to low copper concentration.

12. It seems the independent dataset is not available.

Minor:

1. Please define the abbreviations WD and VUS before using them.

2. line 149. Please round the number.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Revision 1

A point-by-point response to reviewers

Reviewer #1: This paper presents genetic variant classification using machine learning techniques, specifically TabNet and XGBoost, to classify ATP7B gene variants associated with Wilson's Disease. The study's strength lies in its robust training and validation on a high-confidence dataset and its practical application, as evidenced by successful independent verification and potential utility in clinical and research settings. I have several comments that need to be addressed.

Major:

(Q1) Why were TabNet and XGBoost chosen as the primary models for this analysis over other deep learning or machine learning models? What specific advantages do they offer for this type of data and problem? Please provide the comparison with other relevant deep learning methods.

Response: We appreciate the inquiry from the reviewer. In our initial exploration of model selection, we conducted a comprehensive analysis using the Weka suite (Witten et al. 2011). The dataset under consideration at that time comprised 725 variants, split into 70% training and 30% testing datasets. The table below illustrates the train and test accuracies achieved by different algorithms:

Model          Train Accuracy   Test Accuracy
RandomForest   97.925           98.611
J48            97.41            98.61
SMO            96.89            97.92
NaiveBayes     83.59            76.39
Consistent with conventional wisdom, tree ensemble models demonstrated superior performance on tabular data. Notably, RandomForest and J48 outperformed other algorithms. Therefore, we opted for the XGBoost classifier, a widely-used gradient-boosted decision tree, known for its efficiency in handling tabular datasets. XGBoost offers advantages such as faster execution, robust performance with missing data, and effective handling of class-imbalanced datasets. Its built-in regularization helps mitigate overfitting, a concern associated with models like RandomForest.

Furthermore, recognizing the specialized nature of tabular data, we incorporated the TabNet deep learning model into our analysis. TabNet, designed specifically for tabulated datasets, leverages a transformer architecture to emulate the learning of decision trees. This design enables TabNet to rapidly discern intricate data patterns. While XGBoost excels in certain scenarios, TabNet has demonstrated superiority over tree methods for specific tabular datasets.

Given the variability in the comparative performance of these models depending on the dataset, we decided to include results from both XGBoost and TabNet for a comprehensive evaluation. This dual-model approach allows for a more nuanced understanding and robust assessment of the predictions made by each model.

(Q2) The authors mentioned that TabNet uses sequential attention for feature selection, which is instance-wise. How does this impact the generalizability of the model across different datasets or variants? Is there a risk of overfitting to specific features in the training dataset?

Response: We appreciate the reviewer's insightful consideration of the generalizability of TabNet across diverse datasets. While TabNet incorporates features like prior scales to mitigate overfitting, predicting the exact extent of model generalization remains challenging, particularly in the absence of ample accurately classified variant datasets. Our training utilized the most extensive dataset of Wilson's Disease variants reported in literature, encompassing nine large datasets with ACMG classifications. The model consistently demonstrated high classification accuracy across both classes, as assessed by Matthews Correlation Coefficient (MCC). This performance instills confidence in its potential to perform well on other real-world datasets. Regrettably, we were unable to conduct additional testing due to the scarcity of available ACMG-classified or functionally validated variant datasets. Despite this limitation, our rigorous training on a diverse and comprehensive dataset enhances our confidence in the model's ability to generalize to different datasets and variants. Future investigations and validations with additional variant datasets would certainly contribute to a more comprehensive understanding of TabNet's generalizability across a broader spectrum of genetic variations.

(Q3) The authors note that XGBoost is effective in handling sparse data. However, it is not clear how this capability was specifically advantageous in the current study, given the characteristics of the dataset used.

Response: We appreciate the reviewer's observation, and would like to clarify the specific advantage of XGBoost's capability to handle sparse data in our study. Real-world datasets often exhibit missing values, posing a challenge for deep learning models. In our dataset, certain features, such as pathogenicity and conservation scores, had missing values due to the inherent characteristics of their respective prediction algorithms. To address this issue with TabNet, we had to perform imputation by substituting missing data with a constant value far outside the scale of all scores. This substitution aimed to avoid introducing unintended bias.

In contrast, XGBoost demonstrated an inherent advantage in handling sparse data. Unlike TabNet, XGBoost required no data substitution for missing values, resulting in a more streamlined preprocessing step. Moreover, XGBoost's Sparsity-aware Split Finding algorithm automatically determines optimal splits for data points with missing values, contributing to improved overall performance. This capability proved advantageous in our study, both in streamlining the preprocessing phase and in enhancing the model's efficiency in handling sparse data patterns.
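The preprocessing difference described above can be sketched as follows. This is an illustrative sketch only, not the authors' pipeline; the sentinel value and function name are assumptions, chosen to sit far outside the 0-1 (or similarly small) ranges of the pathogenicity and conservation scores:

```python
# Sentinel-value imputation for models that cannot handle missing scores
# natively (the TabNet case described above). For XGBoost, missing values
# would instead be left in place and routed by its sparsity-aware splits.
SENTINEL = -999.0  # hypothetical constant, far outside all score scales

def impute_with_sentinel(scores, sentinel=SENTINEL):
    """Replace missing values (None) with a constant far outside the score range."""
    return [sentinel if s is None else s for s in scores]

# Example row where two annotation tools returned no score for a variant:
row = [0.87, None, 0.12, None]
print(impute_with_sentinel(row))  # [0.87, -999.0, 0.12, -999.0]
```

The key design point is that the sentinel is unreachable by any real score, so the model can learn "missingness" as its own signal rather than confusing it with a legitimate low or high value.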

(Q4) For the models, the authors have set specific hyperparameters. The manuscript needs more details about how these parameters were chosen.

Response: We appreciate the reviewer's request for more details on the hyperparameter selection process. In the TabNet model, we explored different values for the mask_type parameter during experimentation. The "entmax" setting demonstrated superior overall prediction accuracy compared to the default "sparsemax," leading us to choose it for model training. The “patience” parameter, governing the number of epochs to await improvement before terminating a training run, was set at 100, with a maximum of 1000 epochs allowed. Various dataset splits were tested, including 70% train and 30% test, as well as 80% train and 20% test, to ensure robust testing.

For the XGBoost model, hyperparameters were carefully selected and evaluated using a 5-fold cross-validation approach. A randomized search on hyperparameters was conducted using RandomizedSearchCV with 5-fold cross-validation. To address class imbalance, the scale_pos_weight parameter was determined by dividing the number of majority class entries by the number of minority class entries. Model performance was assessed using the mean cross_val_score function with a 10-fold cross-validation. Multiple models, with and without the determined hyperparameters (including scale_pos_weight), were tested using accuracy, AUC, and MCC metrics. Additionally, various train/test splits were explored to identify the best-performing model.

These details have been incorporated into the revised manuscript to provide a comprehensive understanding of the hyperparameter selection process for both the TabNet and XGBoost models.
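The two ingredients of the search described above (k-fold splitting and randomized sampling of hyperparameter settings) can be sketched in library-free form. The actual analysis used scikit-learn's RandomizedSearchCV with XGBoost; the helper names below are hypothetical toy stand-ins for illustration only:

```python
import random

def k_fold_indices(n_samples, k=5, seed=0):
    """Yield (train_idx, test_idx) pairs covering n_samples in k disjoint folds."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]  # k roughly equal strided folds
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

def random_search(param_space, n_iter, score_fn, seed=0):
    """Sample n_iter random settings from param_space; keep the best score."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_iter):
        params = {name: rng.choice(values) for name, values in param_space.items()}
        score = score_fn(params)  # in practice: mean CV score of the fitted model
        if best is None or score > best[0]:
            best = (score, params)
    return best
```

In the real search, `score_fn` would train an XGBoost model on each training fold and average the validation scores; randomized search trades exhaustiveness for speed relative to a full grid search.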

(Q5) The authors adjusted the scale_pos_weight in XGBoost for class imbalance. How significant was the class imbalance in the dataset used, and how did this adjustment impact the model's performance, especially in terms of precision and recall?

Response: Our train set had 577 pathogenic/likely pathogenic and 146 benign/likely benign variants. Given this imbalance, we adjusted the scale_pos_weight parameter to the recommended value of 3.95. To evaluate the impact of this adjustment, we employed the Matthews Correlation Coefficient (MCC) metric, which comprehensively considers all components of the confusion matrix, namely true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN). It thus enabled us to determine if the classifier was doing well on both positive and negative classes. This was pertinent due to the potential clinical implications associated with misclassifying benign variants as pathogenic.

Comparing models with and without the adjusted scale_pos_weight, we observed an improvement in performance with the weighted model achieving an MCC of 0.95 compared to 0.90 without the weights. While precision remained consistent at 0.98 for both models, the weighted model exhibited an enhanced recall of 1.00 as opposed to 0.98 without the adjustment. Moreover, the F1-score demonstrated improvement with the weighted model, reaching 0.99 compared to 0.98 without the adjustment.

It is noteworthy that our pursuit of optimal hyperparameter configurations involved experimenting with various combinations. Throughout this process, models consistently performed better when the scale_pos_weight parameter was appropriately adjusted. This underscores the significance of addressing class imbalance, as reflected in the superior performance and robustness of models that incorporated the weighted approach.
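The quantities referenced above follow their standard definitions; a small sketch (not the authors' evaluation script) showing how the weight and the confusion-matrix metrics are derived:

```python
from math import sqrt

def confusion_metrics(tp, tn, fp, fn):
    """Standard precision, recall, F1 and MCC from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    mcc = (tp * tn - fp * fn) / sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
    )
    return {"precision": precision, "recall": recall, "f1": f1, "mcc": mcc}

# Class imbalance in the training set: 577 pathogenic/likely pathogenic
# vs 146 benign/likely benign variants, giving the recommended weight.
scale_pos_weight = 577 / 146
print(round(scale_pos_weight, 2))  # 3.95
```

Unlike accuracy, MCC uses all four cells of the confusion matrix, so it stays informative when one class dominates, which is why it was used to judge the effect of the weighting.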

(Q6) TabNet stopped training at the 187th epoch out of a possible 1000. Was this due to an early stopping criterion based on validation accuracy? A big epoch size does not necessarily increase the accuracy of the model. How was the risk of overfitting addressed given the excessive epoch size (>100)?

Response: Indeed, TabNet implemented an early stopping mechanism based on the validation accuracy metric during training. The early stopping criterion was defined by setting patience at 100, meaning that if the accuracy did not improve for 100 consecutive epochs, the training process would halt. Subsequently, TabNet automatically selected the epoch with the best accuracy score for making predictions on the evaluation set.

To mitigate the risk of overfitting, the model width, representing the number of nodes in a layer, was set to 8. Additionally, the parameter n_steps was configured to 3. These decisions aimed to strike a balance between model complexity and generalization capacity, reducing the likelihood of overfitting.

Furthermore, to validate the robustness of the model, its performance was rigorously assessed on an independent validation set, where it was able to correctly classify all variants across both classes. We thus anticipate the model to be able to generalize well on data beyond the training set.
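The patience mechanism described above can be illustrated with a toy loop (assumed mechanics mirroring the description: patience epochs without improvement halt training, and the best-accuracy epoch is kept; the function name is hypothetical):

```python
def train_with_patience(epoch_accuracies, patience=100, max_epochs=1000):
    """Walk per-epoch validation accuracies; stop once accuracy has not
    improved for `patience` epochs; return (best_epoch, best_accuracy)."""
    best_epoch, best_acc = 0, float("-inf")
    for epoch, acc in enumerate(epoch_accuracies[:max_epochs]):
        if acc > best_acc:
            best_epoch, best_acc = epoch, acc  # new best checkpoint
        elif epoch - best_epoch >= patience:
            break  # early stop: patience window exhausted
    return best_epoch, best_acc

# Accuracy peaks at epoch 2 and never improves; with patience=3 the loop
# halts at epoch 5 and epoch 2's checkpoint would be used for prediction.
print(train_with_patience([0.70, 0.80, 0.90, 0.85, 0.85, 0.85], patience=3))
```

Under this scheme, stopping at epoch 187 with patience 100 implies the best validation accuracy was reached around epoch 87, so the large nominal epoch budget never translates into 1000 actual passes over the data.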

(Q7) The authors mentioned the top 20 features in feature importance plots for both models. Could the authors provide insights into what these features represent and how they contribute to the pathogenicity classification? How interpretable are these models in terms of understanding the biological significance of these features?

Response: Certainly, the feature importance plots for both models shed light on the factors influencing pathogenicity classification, derived from a comprehensive training set of 73 attributes. These attributes include variant positional information, global population prevalence, pathogenicity prediction scores from various tools, and evolutionary conservation scores.

The top features identified by both models highlight critical determinants of pathogenicity. Loss-of-function (LoF) information emerges as a key contributor, emphasizing the significance of genetic lesions that impede normal gene product formation, a hallmark of disease causation. The genomic position of the mutation (Start: nucleotide) could also be important in predicting pathogenic effect. Further, the global prevalence of variants, as indicated by the 1000Genomes allele frequency (1000Genomes AF - ALL), underscores the observation that the number of high-frequency disease-causing variants is usually small, i.e. most pathogenic variants are rare across a population. The remaining features common to both models are pathogenicity scores from seven prediction tools (MetaSVM, MCAP, MutPred, SIFT4G, REVEL, PolyPhen2 HDIV, and MutationTaster), reflecting the amalgamation of diverse computational predictions.

The XGBoost model additionally considers exonic function (Function), which describes the nature of the variant's effect (a stopgain/loss variant, for example, would have a larger effect on the protein than a synonymous variant). Allele frequencies from the GnomAD database, representing a larger population dataset, are also considered, as are conservation scores (Siphy 29way logOdds and MutationAssessor) that indicate how conserved a given site is among mammals, marking a potentially important location and thus a potentially more disruptive effect. The model further incorporates pathogenicity scores from DANN, MetaRNN, and BayesDel.

The TabNet model additionally considers variant prevalence across GnomAD (GnomAD AF - Raw) and the Northeast African subset of the Greater Middle East populations (GME AF - NEA). Pathogenicity and conservation scores, including LRT, integrated_fitCons, PrimateAI, Eigen-PC-raw coding, and LIST-S2, enhance the model's ability to capture nuances in variant significance.

Thus both models take a well-rounded approach, considering different aspects that determine variant pathogenicity, and are therefore able to make reliable predictions. Further, the training dataset labels were determined through ACMG classification, which takes into account all relevant biological evidence, including functional and segregation data. As such, the models capture the patterns among these attributes that lead to the classifications.
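To illustrate how such top-20 feature rankings are typically produced from a trained model, the sketch below trains a gradient-boosted classifier on synthetic data and ranks its feature importances. It uses scikit-learn's GradientBoostingClassifier as a stand-in for the XGBoost/TabNet pipeline described above; the data, labels, and feature names are all hypothetical and are not the WilsonGenAI training set.

```python
# Minimal sketch: ranking the top 20 features by importance from a
# gradient-boosted tree classifier (stand-in for XGBoost/TabNet).
# All data and feature names below are synthetic/hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n_samples, n_features = 200, 73  # 73 attributes, as in the training set described above
X = rng.normal(size=(n_samples, n_features))
# Synthetic labels driven by a few "informative" columns (0, 1, 2)
y = (X[:, 0] + 0.5 * X[:, 1] - X[:, 2] > 0).astype(int)

feature_names = [f"feat_{i}" for i in range(n_features)]  # hypothetical names
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Sort features by importance (descending) and keep the top 20
order = np.argsort(model.feature_importances_)[::-1][:20]
top20 = [(feature_names[i], model.feature_importances_[i]) for i in order]
for name, score in top20[:3]:
    print(f"{name}: {score:.3f}")
```

In practice the same pattern applies to an XGBoost model (`feature_importances_` on `xgboost.XGBClassifier`) or a fitted TabNet model, which also exposes per-feature importances after training.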

The table below is a subset of Supplementary Table 1, and offers greater detail on each of the top 20 important features:

| Feature Name | ACMG 2015 | Description | Dtype |
| --- | --- | --- | --- |
| Function | PVS1, BP7 | Exonic function of the variant (e.g. nonsynonymous SNV, stopgain/loss, frameshift insertion/deletion). | category |
| DANN Scores | PP3, BP4 | Deleterious Annotation of genetic variants using Neural Networks. Score range: 0-1. | float64 |
| Start: Nucleotide | | Genomic location of the nucleotide. | int64 |
| MetaSVM | PP3, BP4 | A radial SVM model to predict pathogenicity, trained on whole-exome variants. Score range: -2 to 3. | float64 |
| SiPhy 29way logOdds Scores | PP3, BP4 | SiPhy score based on 29 mammalian genomes; the larger the score, the more conserved the site. Score range: 0 to 37.9718. | float64 |
| GnomAD AF - ALL | BA1, BS1, BS2, PM2 | Alternate allele frequency in the GnomAD database. | float64 |
| LoF | PVS1, BP7 | Whether a variant is a high-confidence loss-of-function variant. | category |
| MCAP Scores | PP3, BP4 | Pathogenicity classifier for rare missense variants in the human genome. Score range: 0-1. | float64 |
| MutPred Scores | PP3, BP4 | Automates the inference of molecular mechanisms of disease from amino acid substitutions; models changes in structural features and functional sites between wild-type and mutant protein sequences. | float64 |
| SIFT4G Score | PP3, BP4 | Faster implementation of SIFT for a wider range of organisms. Score range: 0-1. | float64 |
| 1000Genomes AF - ALL | BA1, BS1, BS2, PM2 | Allele frequency in the 1000 Genomes database. | float64 |
| MetaRNN Scores | PP3, BP4 | Pathogenicity prediction scores for human nonsynonymous SNVs (nsSNVs) and non-frameshift (NF) indels. | float64 |
| BayesDel with AF Scores | PP3, BP4 | Deleteriousness meta-score for coding and non-coding variants, SNVs, and small insertions/deletions, with integrated MaxAF. Score range: -1.29334 to 0.75731. | float64 |
| REVEL Scores | PP3, BP4 | Predicts the pathogenicity of missense variants on the basis of individual tools: MutPred, FATHMM, VEST, PolyPhen, SIFT, PROVEAN, MutationAssessor, MutationTaster, LRT, GERP, SiPhy, phyloP, and phastCons. Score range: 0-1. | float64 |
| MutationAssessor Scores | PP3, BP4 | Predicts the functional impact of amino-acid substitutions in proteins, such as mutations discovered in cancer or missense polymorphisms, based on evolutionary conservation of the affected amino acid in protein homologs. Score range: -5.545 to 5.975. | float64 |
| Polyphen2 HDIV Scores | PP3, BP4 | PolyPhen-2 prediction based on HumDiv; predicts the possible impact of an amino acid substitution on the structure and function of a human protein. Score range: 0-1. | |

Attachments
Submitted filename: ResponsetoReviewers.docx
Decision Letter - Muhammad Salman Bashir, Editor

WilsonGenAI a deep learning approach to classify pathogenic variants in Wilson Disease

PONE-D-23-35773R1

Dear Dr. BK,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager® and clicking the 'Update My Information' link at the top of the page. If you have any questions relating to publication charges, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Muhammad Salman Bashir, M.S.C

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Formally Accepted
Acceptance Letter - Muhammad Salman Bashir, Editor

PONE-D-23-35773R1

PLOS ONE

Dear Dr. BK,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited

* All relevant supporting information is included in the manuscript submission,

* There are no issues that prevent the paper from being properly typeset

If revisions are needed, the production department will contact you directly to resolve them. If no revisions are needed, you will receive an email when the publication date has been set. At this time, we do not offer pre-publication proofs to authors during production of the accepted work. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few weeks to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Muhammad Salman Bashir

Academic Editor

PLOS ONE

Open letter on the publication of peer review reports

PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.

We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.

Learn more at ASAPbio.