Peer Review History

Original Submission: May 26, 2021
Decision Letter - Wolfgang Einhäuser, Editor, Ming Bo Cai, Editor

Dear Dr. Lange,

Thank you very much for submitting your manuscript "Task-induced neural covariability as a signature of approximate Bayesian learning and inference" for consideration at PLOS Computational Biology. As with all papers reviewed by the journal, your manuscript was reviewed by members of the editorial board and by several independent reviewers. The reviewers appreciated the attention to an important topic. Based on the reviews, we are likely to accept this manuscript for publication, providing that you modify the manuscript according to the review recommendations.

Please prepare and submit your revised manuscript within 30 days. If you anticipate any delay, please let us know the expected resubmission date by replying to this email.

When you are ready to resubmit, please upload the following:

[1] A letter containing a detailed list of your responses to all review comments, and a description of the changes you have made in the manuscript. Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

[2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file).

Important additional instructions are given below your reviewer comments.

Thank you again for your submission to our journal. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments.

Sincerely,

Ming Bo Cai

Associate Editor

PLOS Computational Biology

Wolfgang Einhäuser

Deputy Editor

PLOS Computational Biology

***********************

A link appears below if there are any accompanying review attachments. If you believe any reviews to be missing, please contact ploscompbiol@plos.org immediately:

[LINK]

Reviewer's Responses to Questions

Comments to the Authors:

Please note here if the review is uploaded as an attachment.

Reviewer #1: REVIEW for “Task-induced neural covariability as a signature of approximate Bayesian learning and inference” by Lange and Haefner.

This paper analytically derives neural signatures of task-specific inference by analyzing noise correlations and choice probabilities for a 2-AFC task in a Bayesian probabilistic coding framework. The analytical results are supplemented by simulations where closed-form solutions were difficult to obtain. The paper also summarizes previous experimental results that could be interpreted as evidence supporting the theory. Furthermore, the authors use a simple principal component analysis (PCA) method to show that the statistical structure of the neural responses could be used to infer internal beliefs about the stimulus in simulated/synthetic data.
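The kind of PCA analysis the reviewer summarizes can be sketched on synthetic data. This is a minimal illustration under assumed parameters, not the authors' actual code: a hypothetical "belief" direction injects shared trial-to-trial variability into an otherwise independent population, and PCA on the trial-by-neuron response matrix recovers that direction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials = 50, 5000

# Hypothetical direction along which task-related feedback adds covariance.
belief_dir = rng.standard_normal(n_neurons)
belief_dir /= np.linalg.norm(belief_dir)

# Responses: shared fluctuations along belief_dir (an internal belief that
# varies trial to trial) plus private per-neuron noise.
belief = rng.standard_normal(n_trials)
responses = (np.outer(belief, belief_dir) * 2.0
             + rng.standard_normal((n_trials, n_neurons)) * 0.5)

# PCA via SVD of the mean-centered response matrix: the top principal
# component should align with the injected belief direction.
responses -= responses.mean(axis=0)
_, _, vt = np.linalg.svd(responses, full_matrices=False)
top_pc = vt[0]

# Absolute cosine similarity; near 1 if the direction is recovered.
alignment = abs(top_pc @ belief_dir)
```

With the variance along `belief_dir` dominating the private noise, the alignment is close to 1, illustrating how low-rank, task-induced covariability can be read out from population recordings.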

Overall, I find this paper to be a valuable contribution to the field of neural coding. It contains various interesting ideas that are likely to have an impact on the interpretation of noise correlations in the context of feedforward vs. feedback processing. The analytical derivation under the Bayesian framework is elegant and thorough. I have some concerns regarding the writing, along with some technical points that need to be clarified further. Although these concerns slightly reduce my enthusiasm, I still consider this paper a useful and timely addition to the literature.

One main concern with the current version is readability. The paper is a bit difficult to understand at this point, partly because many results are presented but there seems to be a lack of focus.

i) The predictions are scattered throughout the paper. The authors may consider summarizing the predictions explicitly somewhere, perhaps in the beginning of the Discussion.

ii) The first three sections of the Results are basically setting the stage without actual results. I found them difficult to follow because too many concepts were introduced/defined. It would be useful if these could be streamlined and compressed.

Another, more technical concern is the use of the term “differential correlation,” which appears many times in the paper. In the original publication by Moreno-Bote et al. (2014), “differential correlation” was defined based on the derivative of the tuning curve at each stimulus value. In the present paper, however, “differential correlation” is computed from the difference between a pair of stimuli, i.e., f(\theta_1) - f(\theta_2) = \delta f. Although some recent studies have made no distinction between the two, this does not seem to be a trivial distinction, because the Fisher information at any stimulus value is related to the derivative of the tuning curve; it is defined in an absolute sense at that particular stimulus, not relative to another stimulus. Fundamentally, then, it is related to local discrimination, not general discrimination.
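For reference, the two notions the reviewer contrasts can be written side by side: the first is the information-limiting covariance structure of Moreno-Bote et al. (2014), defined locally through the tuning-curve derivative; the second is the discrete difference between a fixed stimulus pair used in the manuscript.

```latex
% Moreno-Bote et al. (2014): differential correlations are local,
% defined via the derivative of the tuning curve f at stimulus value s:
\Sigma(s) = \Sigma_0(s) + \epsilon \, f'(s)\, f'(s)^{\top},
\qquad f'(s) = \frac{\partial f(s)}{\partial s}

% Present manuscript: a discrete difference between a stimulus pair,
% which depends on the chosen pair rather than on a single stimulus value:
\Delta f = f(\theta_1) - f(\theta_2)
```

The two coincide only in the limit of nearby stimuli, \theta_2 \to \theta_1, which is the crux of the reviewer's point about local versus general discrimination.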

Relatedly, there is a sense in which noise correlation along the f′ direction (where f′ denotes the derivative of the tuning curve, not \delta f for an arbitrary pair) is special, because a small input noise (or stimulus noise) will naturally produce a component in this direction. Therefore, I find the arguments around Fig. 6c unconvincing. In particular, the statement in lines 449-452 [“This makes the f′ direction largely arbitrary, so the results of Rabinowitz et al. (2015), as well as other recent findings of alignment between f′ and the low-rank modes of covariance (Bondy et al., 2018; Montijn et al., 2019; Rumyantsev et al., 2020), ought to be extremely surprising in a feedforward framework or in models of low-rank but task-independent variability”] is quite problematic in my view due to the impact of shared input noise. Essentially the same statement is repeated in lines 552-554. I do not see why this should be considered “extremely surprising,” as claimed in the paper. In general, statements like this are perhaps too subjective to be useful.

Regarding the generality of the results: the results in the paper are limited to the 2-AFC task. The continuous estimation problem is only briefly discussed in the SI (lines 1259-1273). It would be useful to bring the discussion of the continuous estimation problem into the main text. This would help partly address the question of generality. Also, Fisher information (and differential correlation) arises naturally in the continuous estimation problem.

Specific comments:

* Lines 19-20. It is stated that the predictions of the proposed framework differ from those of the feedforward models. Reading through the paper, I am not exactly sure what these differences are. It would be useful to summarize them explicitly.

* Lines 39-41 state that the results hold under general assumptions. However, it is later shown that, when there is noise, the results hold only in some cases. There appears to be a discrepancy between what is claimed here (as well as in lines 118-120) and the actual results. One of the proofs is based on taking a limit within another limiting regime, which is delicate. It would be helpful to rewrite these sentences to reflect the subtlety of the results.

* Could the authors clearly state the assumptions underlying the “fundamental self-consistency relationship,” i.e., “the average posterior equals the prior”? An average reader might not be familiar with this relationship.
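For orientation, the identity itself is a one-line consequence of marginalization; what the reviewer asks the authors to state are the modeling assumptions under which it constrains neural responses.

```latex
% Averaging the posterior over stimuli drawn from the prior
% recovers the prior, by marginalization:
\mathbb{E}_{p(s)}\!\left[\, p(\pi \mid s) \,\right]
  = \int p(\pi \mid s)\, p(s)\, \mathrm{d}s
  = \int p(\pi, s)\, \mathrm{d}s
  = p(\pi)
```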

* “differential correlation” needs to be explicitly and clearly defined (also see my comments earlier).

* Lines 534-536 state that “some” of the measured “differential” correlations could be understood as near-optimal feedback. I found this description rather vague. What if the feedback is not near-optimal? In that case, would some “differential” correlation still be predicted? If the answer is yes, then the interpretation of “near-optimal feedback” based on “differential” correlations would not be valid.

* Lines 522-524. The connection to recurrent neural networks is not clearly described.

* For clarity of presentation, it would be useful to label the subsections of the Methods for the sake of referencing them in the main text. Currently, it is difficult to locate the proof of a particular statement made in the main text.

* Would it be useful to fit a regression line in Fig. 6a? That might help better convey the point.

* It is difficult to see the connection between Fig. 6b and the claims made in Line 439-443, and why Fig 6b is supporting the authors’ claim. Could the authors please unpack the connection there?

* Lines 334-349 describe a special class of encoding schemes for which the results (Eq. 8 and Eq. 9) still hold when noise is present. Is this class of code both necessary and sufficient? It is obviously sufficient based on what is described there, but it is unclear whether it is also necessary. For the prediction in lines 347-349, I would think linearity would need to be a necessary condition as well. If it is both necessary and sufficient, it would be useful to state that explicitly. If not, it would be useful to tone down the claim in the last sentence.

* Lines 774-775 are difficult to understand. “map” or “mapping”?

* Lines 182-187. These lines are difficult to follow.

* Fig 1d. I found the schematic confusing, in particular the direction of the arrows. Maybe I don’t quite understand what this figure is trying to show.

* Line 110: “two frameworks”. I am not sure it is accurate to call the studies on noise correlations a “framework”; perhaps it would be fairer to say “two branches of research”?

Reviewer #2: The manuscript discusses the effect of probabilistic inference on neural response statistics. In particular, the focus of the paper is the effect of inference over task-related variables on the covariation of neural responses. The paper establishes a normative relationship between distinct neural phenomena, such as top-down feedback, choice probabilities, and information-limiting correlations. Beyond the appeal of a formal treatment of higher-order response statistics, a further aspect that helps the reader appreciate the theory is that alternative computational approaches are discussed jointly. The paper provides important insights into how computation-level considerations are linked to neural population activity.

The paper is very dense but clearly written, and the arguments are well supported by formal analysis. I have only a number of minor clarification suggestions/questions.

Questions:

I have one question that might require some additional writing. In the paragraph starting at line 260, the authors lay out the requirements for their analytical treatment. The assumptions are sound and adequate for the particular setting they analyze, but the paper would benefit from a discussion of how these assumptions affect the generality of the claims. For instance, while a fully learned model is assumed, some qualitative aspects of the theory would still hold even if there is a discrepancy between the internal model and the actual structure of the task. Importantly, there are tools to infer variations in the strategy followed by animals from behavior alone, so one could expect such changes to be captured by the theory presented here. From an experimental point of view, the second assumption is certainly true for the analyzed experiments, but the basic consequences of the theory are more general than stimuli presented close to the psychometric threshold.

In the section ‘Variable beliefs in the presence of noise’, the authors argue that the alignment of df/ds and df/d\pi is critical for the derivations to hold, and then introduce LDCs that can overcome these theoretical hurdles. This fine theoretical insight is proposed as a basis for experimental validation. While this is indeed true, additional discussion of whether this can in fact be measured in electrophysiological data would be welcome.

The authors argue that the self-consistency claim on which the results are based is general across theories of probabilistic computation. The argument for this claim is properly spelled out. It would be instructive for the reader to briefly discuss whether these theories are indistinguishable in the discussed context.

clarifications:

line 110: I suggest spelling out the ‘two frameworks’: the reader may have lost track by this point.

Caption of fig 2: s is not defined in time to understand the figure.

Caption of fig 2: ‘Our derivation assumes that smoothly changing posteriors (a) corresponds to smooth changes in neural responses’ -> one could introduce f(s) here, since f’ turns up later in the caption.

line 149: ‘variability of posterior distributions’ is not sufficiently motivated at this point

Caption of fig 3: ‘Variability in the underlying posterior may appear as correlated variability in spike counts.’ -> I appreciate the argument but I am not sure how the figure motivates this point.

line 255: please explain \pi = \frac{1}{2}

line 274: Changes induced by top-down influences affect the tuning curve. It would be useful to note how this affects the measurement of tuning curves.

**********

Have the authors made all data and (if applicable) computational code underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data and code underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data and code should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data or code —e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No: The authors stated that "No new data were collected in this work. Matlab code for simulation results will be made available on https://github.com/haefnerlab ."

Reviewer #2: Yes

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

Figure Files:

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org.

Data Requirements:

Please note that, as a condition of publication, PLOS' data policy requires that you make available all data used to draw the conclusions outlined in your manuscript. Data must be deposited in an appropriate repository, included within the body of the manuscript, or uploaded as supporting information. This includes all numerical values that were used to generate graphs, histograms etc. For an example in PLOS Biology see here: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001908#s5.

Reproducibility:

To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols

References:

Review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript.

If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Revision 1

Attachments
Attachment
Submitted filename: task_induced_cov_replies.pdf
Decision Letter - Wolfgang Einhäuser, Editor, Ming Bo Cai, Editor

Dear Dr. Lange,

We are pleased to inform you that your manuscript 'Task-induced neural covariability as a signature of approximate Bayesian learning and inference' has been provisionally accepted for publication in PLOS Computational Biology.

Before your manuscript can be formally accepted you will need to complete some formatting changes, which you will receive in a follow up email. A member of our team will be in touch with a set of requests.

Please note that your manuscript will not be scheduled for publication until you have made the required changes, so a swift response is appreciated.

IMPORTANT: The editorial review process is now complete. PLOS will only permit corrections to spelling, formatting or significant scientific errors from this point onwards. Requests for major changes, or any which affect the scientific understanding of your work, will cause delays to the publication date of your manuscript.

Should you, your institution's press office or the journal office choose to press release your paper, you will automatically be opted out of early publication. We ask that you notify us now if you or your institution is planning to press release the article. All press must be co-ordinated with PLOS.

Thank you again for supporting Open Access publishing; we are looking forward to publishing your work in PLOS Computational Biology. 

Best regards,

Ming Bo Cai

Associate Editor

PLOS Computational Biology

Wolfgang Einhäuser

Deputy Editor

PLOS Computational Biology

***********************************************************

We are very happy to accept the paper for publication. Please include a link to the code in the manuscript when you finalize the print-ready paper. This does not need further review.

Reviewer's Responses to Questions

Comments to the Authors:

Please note here if the review is uploaded as an attachment.

Reviewer #1: I thank the authors for their response to my questions. The revised version is substantially improved. I recommend publication of this paper and believe it will be a useful contribution to the field of neural coding.

Reviewer #2: The authors did an excellent job and I believe that the paper is ripe for publication.

**********

Have the authors made all data and (if applicable) computational code underlying the findings in their manuscript fully available?


Reviewer #1: No: The authors promise to release the code upon publication of this paper.

Reviewer #2: Yes

**********


Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

Formally Accepted
Acceptance Letter - Wolfgang Einhäuser, Editor, Ming Bo Cai, Editor

PCOMPBIOL-D-21-00963R1

Task-induced neural covariability as a signature of approximate Bayesian learning and inference

Dear Dr Lange,

I am pleased to inform you that your manuscript has been formally accepted for publication in PLOS Computational Biology. Your manuscript is now with our production department and you will be notified of the publication date in due course.

The corresponding author will soon be receiving a typeset proof for review, to ensure errors have not been introduced during production. Please review the PDF proof of your manuscript carefully, as this is the last chance to correct any errors. Please note that major changes, or those which affect the scientific understanding of the work, will likely cause delays to the publication date of your manuscript.

Soon after your final files are uploaded, unless you have opted out, the early version of your manuscript will be published online. The date of the early version will be your article's publication date. The final article will be published to the same URL, and all versions of the paper will be accessible to readers.

Thank you again for supporting PLOS Computational Biology and open-access publishing. We are looking forward to publishing your work!

With kind regards,

Livia Horvath

PLOS Computational Biology | Carlyle House, Carlyle Road, Cambridge CB4 3DN | United Kingdom ploscompbiol@plos.org | Phone +44 (0) 1223-442824 | ploscompbiol.org | @PLOSCompBiol

Open letter on the publication of peer review reports

PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.

We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.

Learn more at ASAPbio.