Peer Review History

Original Submission
November 23, 2020
Decision Letter - Lyle J. Graham, Editor, Frédéric E. Theunissen, Editor

Dear Mr Pospisil,

Thank you very much for submitting your manuscript "The unbiased estimation of the fraction of variance explained by a model" for consideration at PLOS Computational Biology. As with all papers reviewed by the journal, your manuscript was reviewed by members of the editorial board and by several independent reviewers. The reviewers appreciated the attention to an important topic. Based on the reviews, we are likely to accept this manuscript for publication, providing that you modify the manuscript according to the review recommendations.

Dear Dean and Wyeth,

First, I am very sorry this took so long! Not only was it difficult to find reviewers, but two of the reviewers who originally accepted never returned their reviews. We are going to proceed with the two reviews that you see here. Both reviews raise "major"/"Main" issues, but I believe that you will be able to address these. The comparisons with the other metrics on real data, as suggested by reviewer 2, might not only be illustrative but also be a good selling point, no? I actually believe that some of these other metrics are mathematically equivalent to yours. But you provide a nice statistical foundation and confidence intervals that I don't believe were presented in prior work (although I did not look at all these papers).

Also, it would be VERY useful to have this code in a notebook published on git. Maybe this is already the case, but, like reviewer 2, I did not see that in the manuscript.

Best wishes,

Frederic.

Please prepare and submit your revised manuscript within 30 days. If you anticipate any delay, please let us know the expected resubmission date by replying to this email.

When you are ready to resubmit, please upload the following:

[1] A letter containing a detailed list of your responses to all review comments, and a description of the changes you have made in the manuscript. Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

[2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file).

Important additional instructions are given below your reviewer comments.

Thank you again for your submission to our journal. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments.

Sincerely,

Frédéric E. Theunissen

Associate Editor

PLOS Computational Biology

Lyle Graham

Deputy Editor

PLOS Computational Biology

***********************

A link appears below if there are any accompanying review attachments. If you believe any reviews to be missing, please contact ploscompbiol@plos.org immediately:

[LINK]


Reviewer's Responses to Questions

Comments to the Authors:

Please note here if the review is uploaded as an attachment.

Reviewer #1: This work focuses on obtaining a consistent, unbiased estimator for the "expected response" correlation. The authors argue, correctly, that the model fit should be judged independently of noise effects. Via an approximation, they determine that the numerator and denominator of the correlation calculation both have noise-related biases, which the authors propose to estimate and subtract. The authors assess the various properties of this estimator and compare it on a number of datasets.
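To make the bias-subtraction idea concrete for readers of this review history, here is a minimal sketch in Python. It illustrates the general numerator/denominator correction described above under an additive, equal-variance noise model; the function name and the exact correction terms are ours, and the published estimator should be consulted for the authors' precise formulas.

```python
import numpy as np

def r2_er_sketch(x, y):
    """Sketch of a noise-corrected r^2 between model predictions and the
    unobserved expected responses. x: (m,) predictions, one per stimulus;
    y: (n, m) responses, n repeats of m stimuli. Illustrative only."""
    n, m = y.shape
    ybar = y.mean(axis=0)                    # trial-averaged responses
    sigma2 = y.var(axis=0, ddof=1).mean()    # pooled trial-to-trial variance
    var_x = x.var(ddof=1)
    cov_xy = np.cov(x, ybar, ddof=1)[0, 1]
    # subtract the noise contribution to the squared covariance (numerator)
    num = cov_xy**2 - sigma2 / n * var_x / (m - 1)
    # subtract the noise inflation of the mean-response variance (denominator)
    den = var_x * (ybar.var(ddof=1) - sigma2 / n)
    return num / den
```

Note that this is a ratio of (approximately) unbiased numerator and denominator estimates, which is not itself exactly unbiased, and the denominator can become noisy or even negative in low-SNR data; the manuscript's treatment of the ratio and its confidence intervals goes beyond this sketch.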

The method itself seems sound and well-presented. I have a number of comments that I believe the authors should consider before publication:

Main points:

1) At the very base, this correlation correction is based on having a single-trial prediction of neural activity. For the mean of this distribution to be meaningful, the model prediction must, in some sense, be unimodal (no probabilistic models that might offer two scenarios, i.e., P(neuron not active this trial) vs. P(activity | neuron is active this trial)). Such effects can be caused by missed place fields, for example, and are better explained in terms of population activity. Can a version of this metric still apply to such scenarios? This would be important as the field continues to move in the direction of population activity.

2) Modeling of neural spiking statistics is an evolving area. The authors introduce a variance-normalized spike count which can handle specific forms of over- and under-dispersion; however, I'm curious whether the proposed metric can apply to more general statistical models developed over the past decade. For example, Goris et al. 2014 [R1] show a quadratic increase in overdispersion with rate, and follow-up work has further explored more complex relationships in the mean-variance space [R2] (a minimal simulation of such a model is sketched after the references below). How would the double stochasticity in such models be handled? The noise is not a clear-cut addition to the numerator and denominator, and so cannot simply be subtracted, due to the cross-terms.

3) I find the discussion of SNR on page 12 a little confusing (paragraph starting on line 364). It's not always possible to increase the bin size, as there are limiting factors on the change in the underlying homogeneous spike rate. At some point the rate becomes meaningless, i.e., you get a simple "tuning curve" with no time information in the response. This especially goes against the increased emphasis on neural dynamics at the population level that is becoming prominent.

4) The calcium fluorescence experiments require more detail as to how the model "mean" changes. It has been observed both in real data [R3] and in simulation [R4] that single spikes sometimes do not cause significant changes in the DF/F signal. This is a skewed version of the spike count, so is the idea here that there is a different statistical model for the data that no longer relies on the count of firing events? For example, a linear model on the DF/F changes is possible, but so is a traditional tuning curve model with a conditional probabilistic model over DF/F [R5]. I think the authors mean the former, but it should be clarified mathematically what statistical model of "neural firing" is used.

5) The demonstrations of inconsistency in r^2 and consistency in r_ER^2 don't give much of a window into the nonasymptotic biases. For example, while demonstrating that r^2 is biased even in the limit is interesting, it would be interesting to understand, as a function of m, how fast r_ER^2 converges. Figure 3 seems to indicate a very slow (linear or sublinear?) convergence rate. Is there any way around this, given that typical experiments might not have m > 1000 stimuli?
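As an illustration of the convergence question in point 5, a minimal simulation along the following lines (made-up parameters, simplified Gaussian responses, and a compact restatement of the estimator sketched above) would let one track the estimator's bias and spread as the number of stimuli m grows:

```python
import numpy as np

def r2_er_sketch(x, y):
    # compact restatement of the bias-corrected estimator sketched above
    n, m = y.shape
    ybar, s2 = y.mean(axis=0), y.var(axis=0, ddof=1).mean()
    c = np.cov(x, ybar, ddof=1)[0, 1]
    num = c**2 - s2 / n * x.var(ddof=1) / (m - 1)
    return num / (x.var(ddof=1) * (ybar.var(ddof=1) - s2 / n))

rng = np.random.default_rng(1)
n_rep, sigma = 4, 1.0                    # repeats per stimulus, noise SD
for m in (50, 200, 1000, 4000):          # number of stimuli
    est = []
    for _ in range(500):
        mu = rng.normal(size=m)          # expected responses, unit variance
        x = mu + rng.normal(size=m)      # imperfect model; true r^2 = 0.5
        y = mu + sigma * rng.normal(size=(n_rep, m))
        est.append(r2_er_sketch(x, y))
    print(f"m={m:5d}  mean={np.mean(est):.3f}  SD={np.std(est):.3f}")
```

Under this toy model the mean should sit near the true value of 0.5 for every m, while the SD shrinks roughly as 1/sqrt(m), which is one way to quantify the nonasymptotic behavior the reviewer asks about.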

[R1] Goris, Robbe LT, J. Anthony Movshon, and Eero P. Simoncelli. "Partitioning neuronal variability." Nature neuroscience 17.6 (2014): 858-865.

[R2] Charles, Adam S., et al. "Dethroning the Fano Factor: a flexible, model-based approach to partitioning neural variability." Neural computation 30.4 (2018): 1012-1045.

[R3] Wei, Ziqiang, et al. "A comparison of neuronal population dynamics measured with calcium imaging and electrophysiology." PLoS computational biology 16.9 (2020): e1008198.

[R4] Charles, Adam S., et al. "Neural anatomy and optical microscopy (NAOMi) simulation for evaluating calcium imaging methods." bioRxiv (2019): 726174.

[R5] Ganmor, Elad, et al. "Direct estimation of firing rates from calcium imaging data." arXiv preprint arXiv:1601.00364 (2016).
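As a concrete version of the doubly stochastic model raised in point 2, the following sketch draws spike counts from a gamma-modulated Poisson process in the spirit of Goris et al. 2014 [R1]; parameter names are ours, and the point is that trial-to-trial variance grows quadratically with rate rather than entering as a single additive noise term:

```python
import numpy as np

def modulated_poisson(rate, sigma_g, n_trials, rng):
    """Doubly stochastic spike counts: a trial-varying gain G with mean 1
    and variance sigma_g**2 multiplies the rate before Poisson spiking,
    giving Var = rate + (sigma_g * rate)**2 (quadratic overdispersion)."""
    gain = rng.gamma(shape=1.0 / sigma_g**2, scale=sigma_g**2, size=n_trials)
    return rng.poisson(gain * rate)

rng = np.random.default_rng(0)
for rate in (1.0, 5.0, 25.0):
    counts = modulated_poisson(rate, sigma_g=0.3, n_trials=100_000, rng=rng)
    # empirical variance should track rate + (0.3 * rate)**2, not rate alone
    print(rate, counts.mean(), counts.var(), rate + (0.3 * rate) ** 2)
```

Because the gain multiplies the rate, its contribution to the variance is entangled with the signal, which is the cross-term problem the reviewer describes.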

Reviewer #2: This manuscript describes a new method for measuring the performance of models for neural data, accounting for noise in the test set used to evaluate the model without bias. Using simulation, the authors argue that their new estimator is as accurate as or more accurate than previously published estimators designed for the same purpose. They also show that they can measure confidence intervals reliably, which enables identification of neurons with model performance above or below chance. They demonstrate the use of the estimator on several datasets and use it to show that neuronal signal-to-noise provides a critical limitation on model performance.

This study tackles a challenging but important problem. As the authors demonstrate, many methods have been proposed to solve this problem, but none is the obvious best choice. Several details of their estimator indicate that it could be adopted as a standard. Overall, the study appears to have been executed carefully and thoughtfully, and it should be of interest to the PLoS CB readership. The comparison with multiple previous methods is particularly commendable. At the same time, to be really convincing, the study should address some additional important points to provide compelling evidence that the proposed metric does in fact work as well as or better than existing metrics.

Major concerns

1. (p.4) The assumption that variance is constant, or that the data can be readily scaled for variance to become constant, seems reasonable, but it is important to back up this claim with simulations. One becomes particularly concerned in the case of datasets with very sparse spiking activity and many zero responses, which is typical of neural data. This concern could be addressed with a simulation along the lines of Figs. 2-3 but using more realistic spike-like data mimicking responses to a natural or other complex stimulus.

2. The comparison between methods (Fig. 4) is compelling, but it seems important to perform a similar comparison on a real dataset. Do some of the problematic metrics (CCnorm-pb, r2norm-split-sb) show consistently biased results for the real data? Of course, there is no ground truth here, but it should be possible to show if their estimates are consistently different from r_ER.

3. Can code for measuring r_ER and associated confidence intervals be made available? If it was mentioned somewhere in the manuscript, apologies for missing it. While not absolutely necessary, a simple package with a brief tutorial demonstrating use of this metric would be very helpful to readers and go a long way toward getting researchers to adopt the method.
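To illustrate what such a tutorial snippet might look like, here is a generic usage sketch on sparse Poisson-like data; the function names are hypothetical, the metric is a compact restatement of the estimator sketched earlier in this history, and the percentile bootstrap shown is a generic interval, not necessarily the authors' confidence-interval procedure:

```python
import numpy as np

def r2_er_sketch(x, y):
    # compact restatement of the bias-corrected estimator sketched earlier
    n, m = y.shape
    ybar, s2 = y.mean(axis=0), y.var(axis=0, ddof=1).mean()
    c = np.cov(x, ybar, ddof=1)[0, 1]
    num = c**2 - s2 / n * x.var(ddof=1) / (m - 1)
    return num / (x.var(ddof=1) * (ybar.var(ddof=1) - s2 / n))

def bootstrap_ci(metric, x, y, n_boot=1000, alpha=0.05, seed=0):
    """Generic percentile-bootstrap CI, resampling stimuli with replacement."""
    rng = np.random.default_rng(seed)
    m = y.shape[1]
    stats = [metric(x[idx], y[:, idx])
             for idx in rng.integers(0, m, size=(n_boot, m))]
    return tuple(np.quantile(stats, [alpha / 2, 1 - alpha / 2]))

# sparse Poisson-like test data: low rates, many zero responses
rng = np.random.default_rng(2)
m, n_rep = 300, 5
mu = rng.gamma(shape=1.0, scale=0.5, size=m)   # sparse expected rates
x = mu + 0.3 * rng.normal(size=m)              # an imperfect model prediction
y = rng.poisson(mu, size=(n_rep, m)).astype(float)

print("r2_er  =", r2_er_sketch(x, y))
print("95% CI =", bootstrap_ci(r2_er_sketch, x, y))
```

Because Poisson variance scales with the mean, data generated this way also violate the equal-variance assumption, so a simulation of this kind doubles as a direct probe of concern 1 above.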

Lesser concerns

L67 / L473. The concept of neuronal SNR as a metric for screening responsive vs. non-responsive neurons is not new. It may be that the current study has developed a more rigorous approach to assessing responsiveness with an SNR metric. But many, many studies exclude a subset of neurons based on some SNR metric for the reasons highlighted in this manuscript.

L176. “cc2_norm”: should this be “cc2_norm-split” to match the label in Fig. 4?

L247. Unclear what is different here, exactly.

L269. “typical”: is this the appropriate term here? This unit seems unusually well described by the sinusoidal model.

L468. “variance explained for a linear model” versus “Pearson’s r2”: it’s not clear what the difference is between these two scenarios. Perhaps it could be spelled out in a bit more detail?

The methods are comprehensive in their scope. While quite dense, they appear to have been developed and described rigorously.

**********

Have the authors made all data and (if applicable) computational code underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data and code underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data and code should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data or code (e.g., participant privacy or use of data from a third party), those must be specified.

Reviewer #2: No: I may have missed it, but it doesn't appear that they are sharing the code for their proposed tool.

**********

PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********

Have all data underlying the figures and results presented in the manuscript been provided?

Large-scale datasets should be made available via a public repository as described in the PLOS Computational Biology data availability policy, and numerical data that underlies graphs or summary statistics should be provided in spreadsheet form as supporting information.

Reviewer #1: Yes

Figure Files:

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org.

Data Requirements:

Please note that, as a condition of publication, PLOS' data policy requires that you make available all data used to draw the conclusions outlined in your manuscript. Data must be deposited in an appropriate repository, included within the body of the manuscript, or uploaded as supporting information. This includes all numerical values that were used to generate graphs, histograms, etc. For an example in PLOS Biology see here: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001908#s5.

Reproducibility:

To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols

References:

Review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript.

If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Revision 1

Attachments
Submitted filename: response_to_reviewers.docx
Decision Letter - Lyle J. Graham, Editor, Frédéric E. Theunissen, Editor

Dear Mr Pospisil,

We are pleased to inform you that your manuscript 'The unbiased estimation of the fraction of variance explained by a model' has been provisionally accepted for publication in PLOS Computational Biology.

Before your manuscript can be formally accepted you will need to complete some formatting changes, which you will receive in a follow up email. A member of our team will be in touch with a set of requests.

Please note that your manuscript will not be scheduled for publication until you have made the required changes, so a swift response is appreciated.

IMPORTANT: The editorial review process is now complete. PLOS will only permit corrections to spelling, formatting or significant scientific errors from this point onwards. Requests for major changes, or any which affect the scientific understanding of your work, will cause delays to the publication date of your manuscript.

Should you, your institution's press office or the journal office choose to press release your paper, you will automatically be opted out of early publication. We ask that you notify us now if you or your institution is planning to press release the article. All press must be co-ordinated with PLOS.

Thank you again for supporting Open Access publishing; we are looking forward to publishing your work in PLOS Computational Biology. 

Best regards,

Frédéric E. Theunissen

Associate Editor

PLOS Computational Biology

Lyle Graham

Deputy Editor

PLOS Computational Biology

***********************************************************

Thank you for addressing all of our concerns and sharing the code.

Best wishes,

Frederic T.

Formally Accepted
Acceptance Letter - Lyle J. Graham, Editor, Frédéric E. Theunissen, Editor

PCOMPBIOL-D-20-02116R1

The unbiased estimation of the fraction of variance explained by a model

Dear Dr Pospisil,

I am pleased to inform you that your manuscript has been formally accepted for publication in PLOS Computational Biology. Your manuscript is now with our production department and you will be notified of the publication date in due course.

The corresponding author will soon be receiving a typeset proof for review, to ensure errors have not been introduced during production. Please review the PDF proof of your manuscript carefully, as this is the last chance to correct any errors. Please note that major changes, or those which affect the scientific understanding of the work, will likely cause delays to the publication date of your manuscript.

Soon after your final files are uploaded, unless you have opted out, the early version of your manuscript will be published online. The date of the early version will be your article's publication date. The final article will be published to the same URL, and all versions of the paper will be accessible to readers.

Thank you again for supporting PLOS Computational Biology and open-access publishing. We are looking forward to publishing your work!

With kind regards,

Zita Barta

PLOS Computational Biology | Carlyle House, Carlyle Road, Cambridge CB4 3DN | United Kingdom ploscompbiol@plos.org | Phone +44 (0) 1223-442824 | ploscompbiol.org | @PLOSCompBiol

Open letter on the publication of peer review reports

PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.

We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.

Learn more at ASAPbio.