Peer Review History

Original Submission
April 8, 2022
Decision Letter - Bert L. de Groot, Editor, Arne Elofsson, Editor

Dear Mr. Verdier,

Thank you very much for submitting your manuscript "A maximum mean discrepancy approach reveals subtle changes in α-synuclein dynamics" for consideration at PLOS Computational Biology.

As with all papers reviewed by the journal, your manuscript was reviewed by members of the editorial board and by several independent reviewers. Both reviewers raise major concerns that should be carefully addressed. In light of the reviews (below this email), we would like to invite the resubmission of a substantially revised version that takes into account all comments from both reviewers.

We cannot make any decision about publication until we have seen the revised manuscript and your response to the reviewers' comments. Your revised manuscript is also likely to be sent to reviewers for further evaluation.

When you are ready to resubmit, please upload the following:

[1] A letter containing a detailed list of your responses to the review comments and a description of the changes you have made in the manuscript. Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

[2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file).

Important additional instructions are given below your reviewer comments.

Please prepare and submit your revised manuscript within 60 days. If you anticipate any delay, please let us know the expected resubmission date by replying to this email. Please note that revised manuscripts received after the 60-day due date may require evaluation and peer review similar to newly submitted manuscripts.

Thank you again for your submission. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments.

Sincerely,

Bert L. de Groot

Associate Editor

PLOS Computational Biology

Arne Elofsson

Deputy Editor

PLOS Computational Biology

***********************

Reviewer's Responses to Questions

Comments to the Authors:

Please note here if the review is uploaded as an attachment.

Reviewer #1: Verdier et al describe a methodology for comparing sets of biomolecule trajectories as obtained in imaging experiments to identify changes in the associated dynamics. They propose a two-step scheme, involving feature extraction through machine learning (via a graph neural network) and a classical statistics test (maximum mean discrepancy). They apply this approach to detect differences in the dynamics of a presynaptic protein.

The task of extracting information from short and noisy trajectories is relevant and was the focus of a recent competition in which the authors took part but which they do not mention or cite (Muñoz-Gil et al., Nat Commun 2021). Unsupervised methods have also recently been proposed for this task (Muñoz-Gil, JPhysA 54, 504001, 2021).

The method they present in the manuscript builds on the one they developed for the competition and is based on graph neural networks (Verdier et al, JPhysA – ref 24). Graph neural networks have recently been attracting a lot of attention and may well be a good strategy for this kind of problem. The results presented in the manuscript seem promising but are mostly reported at a qualitative level and lack comparison with a gold standard. It is difficult to assess the performance of the method since very little quantitative evaluation is presented.

I have a few major concerns:

1. Features are extracted at the single-trajectory level, but the comparison is done between sets of (short) trajectories. Is it really necessary to use such a complex architecture for this task? Wouldn't other classical approaches at the ensemble level perform similarly (e.g., comparing the distributions of D, of displacements, or of gyration radii; see the sketch after this list)? For example, Fig 6 suggests that the differences are mainly due to a change in mobility (as also stated in the intro) that could be detected with other methods as well.

2. The manuscript lacks a baseline with a classical approach and a comparison with other methods designed to tackle similar problems (e.g., the winners of the competition mentioned above). For example, it would be important to understand what the advantage (if any) is of using a graph neural network over a CNN or RNN for this task. This point should be demonstrated by comparing the performance of different architectures on the same set of data.

3. The manuscript lacks an ablation study, which is necessary to conclude that the method performs well because of its design rather than because of potential artifacts.

4. The architecture requires a more detailed description. It is not clear which features are passed to each node/edge (Only coordinates? Are they normalized? Time? Is any embedding used?), nor how the features are updated (message passing?) and aggregated. Figure 2 seems to suggest that the graph is fully connected, but I could not find any mention of this in the manuscript. Is this the case, or is some sparsification applied?

5. The link to the code directs to a private repository. Reviewers must be able to inspect the code to better understand the method.

6. Does feature extraction provide any interpretability besides a hint about the generative model? A representation of the UMAP features with respect to the anomalous diffusion exponent (instead of D) might be helpful in this sense.

7. Linked to the previous comment: the additional statistical test somewhat subdues the elegance of an end-to-end trainable model. Is it really necessary? In the end, the results are based on a “soft” classification. Equivalently, one might consider performing a regression and/or an outright classification and then simply comparing the proportion of trajectories in each class.

8. Can’t similar results be directly obtained by extracting global features from the graph neural network?

9. Many references are missing the journal, page, and year. Relevant literature on graph neural networks is missing (e.g., arXiv:1806.01261).
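
For concreteness regarding point 1 above, the following is a minimal sketch (in Python, with hypothetical synthetic data; not code from the manuscript) of a classical ensemble-level baseline: summarize each trajectory by a scalar observable such as the radius of gyration and compare the two sets of values with a non-parametric two-sample Kolmogorov-Smirnov test. Any other scalar observable (estimated D, step-length statistics) could be substituted.

    import numpy as np
    from scipy.stats import ks_2samp

    def radius_of_gyration(traj):
        """Radius of gyration of one trajectory of shape (T, d)."""
        centered = traj - traj.mean(axis=0)
        return np.sqrt((centered ** 2).sum(axis=1).mean())

    def compare_sets(set_a, set_b):
        """KS test on the per-trajectory radius-of-gyration distributions."""
        return ks_2samp([radius_of_gyration(t) for t in set_a],
                        [radius_of_gyration(t) for t in set_b])

    # Hypothetical example: short 2D Brownian trajectories with a 30%
    # difference in step size (i.e., in mobility) between the two conditions.
    rng = np.random.default_rng(0)
    set_a = [rng.normal(0, 1.0, (20, 2)).cumsum(axis=0) for _ in range(200)]
    set_b = [rng.normal(0, 1.3, (20, 2)).cumsum(axis=0) for _ in range(200)]
    print(compare_sets(set_a, set_b))  # small p-value: mobility shift detected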

Reviewer #2: In their manuscript the authors introduce a two-step statistical testing scheme combining simulation-based inference, used to train a neural network, with a maximum mean discrepancy statistical test on the vectors of learned features. They characterize sets of simulated random walks and analyze experimental alpha-synuclein traces in synapses of cultured cortical neurons in response to membrane depolarization. The authors also identify the domains of the latent space where the differences between biological conditions are most significant.

Technically the work seems to be interesting and sound. There are, however, serious conceptual flaws in the motivation, (alleged) state of the art, and even the basic logic of their reasoning (for details please see below).

Several crucial (and recurring) statements are very puzzling, e.g. “… to detect changes in biomolecule dynamics within organelles without needing to identify a model of their motion”. I cannot identify any reason whatsoever why it should be required to identify a model of motion in order to detect differences. Yet, this seems to be the main motivation for the work.

Specific remarks:

1) Basically, the authors argue that instead of “describing trajectories using a set of explicitly defined features” (i.e., physics-based observables) it is preferable to use a black box. The entire reasoning seems to be based on the premise that one is required to identify some underlying model of the dynamics in order to identify and quantify changes in the dynamics, which is certainly not true. To spell this out: there is in fact no need for an underlying model if one sets out to quantify the properties of trajectories or to detect and quantify differences between them. There is a very large (and, I claim, open) set of physical observables one may infer directly from trajectory data in a model-free fashion: for example, the “canonical” time-averaged squared displacement (see the sketch after these numbered remarks), position power spectra, van Hove functions and single-trajectory van Hove functionals, occupation measures, asphericity of individual trajectories, spectral and fractal dimensions of trajectories, memory kernels and other memory quantifiers, etc. None of these requires any kind of underlying model, and all of them directly provide physical intuition, in contrast to the proposed new approach.

The motivation for and declared superiority (if any at all) of the proposed black-box approach must be quantified by a fair benchmark comparison (which may be difficult), or the statements claiming superiority must be dropped. Moreover, any such statements must also be formulated precisely and factually. No comparative analysis is performed whatsoever. It is not clear whether the intuitive “canonical” measures above for quantifying the properties of trajectories (and their differences) are truly outperformed by the present approach to a degree that would outweigh the fact that the black-box treatment offers little if any physical insight.

2) The manuscript's engagement with the existing literature on quantifying properties of individual particle traces is essentially non-existent. On the one hand, this is nominally unacceptable. On the other hand, it may explain the authors' belief that the task requires identifying some underlying model: perhaps they are simply not familiar enough with the literature to recognize that this is not true.

3) Consider the following conclusions stated in the manuscript:

“This indicates that the addition of KCl to the medium can affect the physical properties of many if not all α-synuclein molecules in a similar manner, irrespective of their subcellular location.”

“This demonstrates that α-synuclein is highly mobile in living cells …”.

“… the representative α-synuclein trajectories exhibit a greater mobility in the depolarised state. This is likely the result of a weaker binding of α-synuclein at synapses, as reflected in the overall reduction of α-synuclein molecules during KCl application ...”

Any of the above-mentioned canonical analyses would have directly revealed this information. In other words, the authors seem to have found a (very indirect) black-box substitute for inferring physics. I am no biologist, but these conclusions also do not seem to provide any new biological insight.

4) The title and discussion claim “subtle differences in α-synuclein dynamics”. I am no biologist (perhaps this may be required), but I really struggled to identify why the observed differences are “subtle”. Based on the title, I expected that all canonically used methods employed in studies of particle transport (incl. the more “advanced” approaches) would fail to detect the differences, and that this would call for the proposed approach. I would agree that such differences were subtle. But since no true comparison is presented, the word “subtle” does not seem justified.
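
As an illustration of remark 1, here is a minimal sketch (Python, hypothetical synthetic data; not from the manuscript) of one of the canonical model-free observables listed there: the time-averaged mean squared displacement, from which an anomalous diffusion exponent can be read off without assuming any motion model.

    import numpy as np

    def ta_msd(traj, max_lag=None):
        """Time-averaged MSD of one trajectory of shape (T, d), lags 1..max_lag."""
        max_lag = max_lag or len(traj) // 4
        lags = np.arange(1, max_lag + 1)
        msd = np.array([((traj[lag:] - traj[:-lag]) ** 2).sum(axis=1).mean()
                        for lag in lags])
        return lags, msd

    # Hypothetical example: for Brownian motion the TA-MSD grows linearly
    # with the lag, so the log-log slope (the exponent alpha) is close to 1.
    rng = np.random.default_rng(0)
    traj = rng.normal(size=(200, 2)).cumsum(axis=0)
    lags, msd = ta_msd(traj)
    alpha = np.polyfit(np.log(lags), np.log(msd), 1)[0]
    print(f"estimated anomalous exponent: {alpha:.2f}")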

Summarizing remarks:

The results may certainly be interesting for a specialized community interested in the technical aspects of analyzing particle-tracking data. However, the analysis does not seem to be truly required (at least not with the motivation given in the manuscript). The way the results are presented, one may be led to think that this is the kind of analysis to perform if one aims to identify differences between measured trajectories but does *not* want to gain the physical insight that comes “for free” with traditional physics-based analyses. I may be mistaken, but the way the results are presented simply leads to such a conclusion.

As a result, based on this version of the manuscript I tend to recommend against publication and to instead seek a more specialized journal.

**********

Have the authors made all data and (if applicable) computational code underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data and code underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data and code should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data or code (e.g., participant privacy or use of data from a third party), those must be specified.

Reviewer #1: No: The code and the data can't be accessed

Reviewer #2: Yes

**********

PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

Figure Files:

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org.

Data Requirements:

Please note that, as a condition of publication, PLOS' data policy requires that you make available all data used to draw the conclusions outlined in your manuscript. Data must be deposited in an appropriate repository, included within the body of the manuscript, or uploaded as supporting information. This includes all numerical values that were used to generate graphs, histograms, etc. For an example in PLOS Biology see here: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001908#s5.

Reproducibility:

To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols

Revision 1

Attachments
Attachment
Submitted filename: Reviewers alphaSyn.pdf
Decision Letter - Bert L. de Groot, Editor, Arne Elofsson, Editor

Dear Mr. Verdier,

Thank you very much for submitting your manuscript "Simulation-based inference for non-parametric statistical comparison of biomolecule dynamics" for consideration at PLOS Computational Biology. As with all papers reviewed by the journal, your manuscript was reviewed by members of the editorial board and by the original reviewers. The reviewers appreciated the improvements in the revised manuscript but also expressed a few remaining concerns. Based on the reviews, we are likely to accept this manuscript for publication, provided that you modify the manuscript according to the review recommendations.

Please prepare and submit your revised manuscript within 30 days. If you anticipate any delay, please let us know the expected resubmission date by replying to this email.

When you are ready to resubmit, please upload the following:

[1] A letter containing a detailed list of your responses to all review comments, and a description of the changes you have made in the manuscript. Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

[2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file).

Important additional instructions are given below your reviewer comments.

Thank you again for your submission to our journal. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments.

Sincerely,

Bert L. de Groot

Academic Editor

PLOS Computational Biology

Arne Elofsson

Section Editor

PLOS Computational Biology

***********************

A link appears below if there are any accompanying review attachments. If you believe any reviews to be missing, please contact ploscompbiol@plos.org immediately:

Reviewer's Responses to Questions

Comments to the Authors:

Please note here if the review is uploaded as an attachment.

Reviewer #1: The revision has largely improved the manuscript but has produced a major shift in the focus of the article, which now reads as an example of a general recipe for handling relatively large datasets involving heterogeneous entities (in this case, sets of short single-molecule trajectories): extracting meaningful features (using GNNs, other ML approaches, or even analytical indicators), statistically testing them for (typically small) differences, and highlighting the regions of the latent space that might produce these differences.

Due to the recent development of high-throughput methods for live-cell imaging and the inherent biological variability of these experiments, I believe this work is relevant to the quantitative development of the field.

Still, there are a few important points that, in my opinion, need to be further discussed and clarified. Since the authors now focus on the methodology and its generality, I believe it would be beneficial to provide additional insight into the statistical test and the simulation-based inference steps.

Major points:

1) To be fair, one should compare the results of the MMD test to those obtained with another nonparametric multivariate (or univariate, respectively) two-sample test, whereas the Hotelling T-squared test (or the t-test, respectively) is parametric and requires normality and homoscedasticity. (A sketch of such a nonparametric test appears after these major points.)

2) The experiments described in the article involve the comparison of more than two conditions. In this case, a two-sample test can be used as a post-hoc pairwise analysis (with the proper correction for multiple comparisons) after carrying out a (significant) nonparametric omnibus test, whereas the authors apply the MMD test directly in a pairwise fashion. Please discuss this point and propose a procedure for such cases (see the second sketch after these major points).

3) I can agree with the authors that different architectures showing similar performance on an inference or classification task learn equivalent latent representations. However, I believe that the relevant question here is how the choice of learning bias influences the latent representation. For example, I would expect the method to behave differently if one used different (or additional, or fewer) target features for training than the current model probability and alpha, or if one replaced the supervised GNN with an architecture given no target features besides trajectory coordinates and time. What would the machine learn in these cases? How would the latent representation change? Just to give an example, one might consider obtaining the latent representation using autoencoders (as in https://iopscience.iop.org/article/10.1088/1751-8121/ac3786) or even a generative model (e.g., variational autoencoders).

4) I wonder how much the choice of the dataset used for the training contributes to implicitly defining the latent features that are extracted. I guess it would be interesting to investigate the influence of the parameter space used in the training dataset. I believe that some comparison (e.g., using different training sets and the same test set) might help to partly clarify this point.

5) Are the learned features correlated? The results shown in Fig S4B (Gratin2 vs Gratin16) seem to suggest so. Please discuss the possible implications for the statistical test.

6) Are the learned features correlated with the analytical indicators (before the UMAP reduction)? This might provide hints about the interpretability of the features (see the third sketch after these major points, which touches on points 5 and 6 together).
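
Regarding major point 1, a minimal sketch (Python; not the authors' implementation) of a fully nonparametric alternative to Hotelling's T-squared: the kernel MMD statistic with a permutation-based null distribution. The RBF bandwidth `gamma` is a hypothetical fixed choice; in practice one would set it, e.g., by the median heuristic.

    import numpy as np

    def mmd2(X, Y, gamma):
        """Biased squared MMD between samples X (n, d) and Y (m, d), RBF kernel."""
        def k(A, B):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)
        return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

    def mmd_permutation_test(X, Y, gamma=1.0, n_perm=1000, seed=0):
        """p-value obtained by permuting the pooled sample labels."""
        rng = np.random.default_rng(seed)
        pooled = np.vstack([X, Y])
        observed = mmd2(X, Y, gamma)
        n = len(X)
        null = []
        for _ in range(n_perm):
            p = rng.permutation(len(pooled))
            null.append(mmd2(pooled[p[:n]], pooled[p[n:]], gamma))
        return (np.sum(np.array(null) >= observed) + 1) / (n_perm + 1)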
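For major point 2, a minimal sketch of one possible post-hoc procedure (hypothetical; it reuses `mmd_permutation_test` from the sketch above): after a significant omnibus test, run the two-sample test over all condition pairs and control the family-wise error rate with the step-down Holm-Bonferroni correction.

    import numpy as np
    from itertools import combinations

    def pairwise_mmd_holm(features, alpha=0.05):
        """features: dict mapping condition name -> (n_i, d) feature array.
        Returns the pairs whose MMD test survives the Holm correction."""
        pairs = list(combinations(features, 2))
        pvals = [mmd_permutation_test(features[a], features[b])
                 for a, b in pairs]
        order = np.argsort(pvals)
        m = len(pvals)
        significant = {}
        for rank, i in enumerate(order):
            # Holm: compare the (rank+1)-th smallest p-value to alpha/(m-rank)
            if pvals[i] > alpha / (m - rank):
                break  # step-down procedure: stop at the first failure
            significant[pairs[i]] = pvals[i]
        return significant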
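And for points 5 and 6, a minimal sketch of how both kinds of correlation could be inspected, using the rank-based Spearman coefficient so that no linearity is assumed; the `features` and `indicator` arrays are hypothetical placeholders for the learned features (before UMAP reduction) and an analytical indicator such as the estimated anomalous exponent.

    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)
    features = rng.normal(size=(500, 16))   # placeholder learned features
    indicator = features[:, 0] ** 2 + 0.1 * rng.normal(size=500)

    # Point 5: feature-feature rank correlation matrix, shape (16, 16).
    rho_ff, _ = spearmanr(features)

    # Point 6: rank correlation of each feature with the analytical indicator.
    rho_fi = np.array([spearmanr(features[:, j], indicator)[0]
                       for j in range(features.shape[1])])
    print(np.round(rho_fi, 2))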

Minor points:

1) In the reply, the authors stated, “We also added a reference to the method of Muñoz-Gil et al. 2021, which, like ours, relies on learnt representations of individual trajectories”, but I could not find it in the references of the revised manuscript.

2) There is a missing reference on line 816.

3) I find it confusing to use α both for the anomalous diffusion exponent and for the significance level of the statistical test.

Reviewer #2: The authors did a good job of revising their manuscript, but a few points in their response are rather obscure. In point 1 they claim that “Very often, observed biomolecules do not all exhibit the same type of dynamics …”, which I must interpret as saying that a statistical ensemble of paths (and a probability measure on said space of paths) does not exist. A few sentences later they claim that their method can “identify the most closely corresponding random walk model”, which directly (and irrefutably) contradicts the previous statement. I know that probabilistic “hygiene” is not the main priority, but the authors should try to be a bit more coherent in their claims.

Moreover, stationarity is not an assumption that one must make; one can test for it (directly, and rather straightforwardly). Scientists were quantitatively describing observations of glassy behavior long before the advent of machine learning.
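
A minimal sketch (hypothetical; not from the manuscript) of the kind of direct stationarity check alluded to here: compare the step-length distributions of the first and second halves of a recording with a two-sample KS test, with no model assumed.

    import numpy as np
    from scipy.stats import ks_2samp

    def stationarity_check(traj):
        """traj: array of shape (T, d); tests for drift in increment statistics."""
        steps = np.linalg.norm(np.diff(traj, axis=0), axis=1)
        half = len(steps) // 2
        return ks_2samp(steps[:half], steps[half:])

    rng = np.random.default_rng(0)
    traj = rng.normal(size=(400, 2)).cumsum(axis=0)  # stationary increments
    print(stationarity_check(traj))                  # expect a large p-value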

I do agree that “dealing with under-sampled data” may be a strength of the method, and this could potentially be made even more visible.

Something I did not mention in my original report (but which the authors may consider) is to classify trajectories of the “unequal twin” processes using a small number of trajectories (https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.107.260601). This may be a challenge but would (potentially) also objectively demonstrate the power of the new method. But this is optional.

**********

Have the authors made all data and (if applicable) computational code underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data and code underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data and code should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data or code (e.g., participant privacy or use of data from a third party), those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

Figure Files:

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org.

Data Requirements:

Please note that, as a condition of publication, PLOS' data policy requires that you make available all data used to draw the conclusions outlined in your manuscript. Data must be deposited in an appropriate repository, included within the body of the manuscript, or uploaded as supporting information. This includes all numerical values that were used to generate graphs, histograms, etc. For an example in PLOS Biology see here: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001908#s5.

Reproducibility:

To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols

References:

Review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript.

If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Revision 2

Attachments
Attachment
Submitted filename: Reviewers alpha-syn (round 2)-3.pdf
Decision Letter - Bert L. de Groot, Editor, Arne Elofsson, Editor

Dear Mr. Verdier,

We are pleased to inform you that your manuscript 'Simulation-based inference for non-parametric statistical comparison of biomolecule dynamics' has been provisionally accepted for publication in PLOS Computational Biology.

Before your manuscript can be formally accepted, you will need to complete some formatting changes, which you will receive in a follow-up email. A member of our team will be in touch with a set of requests.

Please note that your manuscript will not be scheduled for publication until you have made the required changes, so a swift response is appreciated.

IMPORTANT: The editorial review process is now complete. PLOS will only permit corrections to spelling, formatting or significant scientific errors from this point onwards. Requests for major changes, or any which affect the scientific understanding of your work, will cause delays to the publication date of your manuscript.

Should you, your institution's press office or the journal office choose to press release your paper, you will automatically be opted out of early publication. We ask that you notify us now if you or your institution is planning to press release the article. All press must be co-ordinated with PLOS.

Thank you again for supporting Open Access publishing; we are looking forward to publishing your work in PLOS Computational Biology. 

Best regards,

Bert L. de Groot

Academic Editor

PLOS Computational Biology

Arne Elofsson

Section Editor

PLOS Computational Biology

***********************************************************

Formally Accepted
Acceptance Letter - Bert L. de Groot, Editor, Arne Elofsson, Editor

PCOMPBIOL-D-22-00557R2

Simulation-based inference for non-parametric statistical comparison of biomolecule dynamics

Dear Dr Verdier,

I am pleased to inform you that your manuscript has been formally accepted for publication in PLOS Computational Biology. Your manuscript is now with our production department and you will be notified of the publication date in due course.

The corresponding author will soon be receiving a typeset proof for review, to ensure errors have not been introduced during production. Please review the PDF proof of your manuscript carefully, as this is the last chance to correct any errors. Please note that major changes, or those which affect the scientific understanding of the work, will likely cause delays to the publication date of your manuscript.

Soon after your final files are uploaded, unless you have opted out, the early version of your manuscript will be published online. The date of the early version will be your article's publication date. The final article will be published to the same URL, and all versions of the paper will be accessible to readers.

Thank you again for supporting PLOS Computational Biology and open-access publishing. We are looking forward to publishing your work!

With kind regards,

Bernadett Koltai

PLOS Computational Biology | Carlyle House, Carlyle Road, Cambridge CB4 3DN | United Kingdom ploscompbiol@plos.org | Phone +44 (0) 1223-442824 | ploscompbiol.org | @PLOSCompBiol

Open letter on the publication of peer review reports

PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.

We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.

Learn more at ASAPbio.