Peer Review History

Original Submission: October 3, 2023
Decision Letter - Lyle J. Graham, Editor, Bradley Voytek, Editor

Dear Mr Tlaie Boria,

Thank you very much for submitting your manuscript "What does the mean mean? A simple test for neuroscience" for consideration at PLOS Computational Biology. As with all papers reviewed by the journal, your manuscript was reviewed by members of the editorial board and by several independent reviewers. The reviewers appreciated the attention to an important topic. Based on the reviews, we are likely to accept this manuscript for publication, providing that you modify the manuscript according to the review recommendations.

While normally we would ask for a revised manuscript within 30 days, we understand that this overlaps with a common long holiday period so if you anticipate it taking longer than 30 days, please let us know the expected resubmission date by replying to this email.

When you are ready to resubmit, please upload the following:

[1] A letter containing a detailed list of your responses to all review comments, and a description of the changes you have made in the manuscript. Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

[2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file).

Important additional instructions are given below your reviewer comments.

Thank you again for your submission to our journal. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments.

Sincerely,

Bradley Voytek

Guest Editor

PLOS Computational Biology

Lyle Graham

Section Editor

PLOS Computational Biology

***********************

A link appears below if there are any accompanying review attachments. If you believe any reviews to be missing, please contact ploscompbiol@plos.org immediately:

Reviewer's Responses to Questions

Comments to the Authors:

Please note here if the review is uploaded as an attachment.

Reviewer #1: Tlaie and colleagues investigate novel criteria for defining brain areas as relevant for a task: reliability of the responses and a relationship between single-trial consistency and behavioral responses. The authors use two datasets for their case study, and analyze these datasets rigorously and in depth. Dataset 1 passes the criteria but dataset 2 does not pass these criteria because, the authors claim, the information is dynamically encoded. However, I think this is an example of the shortcomings of the criteria, particularly criterion 1. I discuss the caveats of the approach below, and note some potential discussion points.

The single-trial responses are in most cases positively correlated with the trial-averaged responses to some extent. The close-to-zero correlations observed on many trials are consistent with low firing rates with Poisson variability. Alternatively, one extreme example I can think of where the correlation would be low is if two populations of neurons are driven by a given stimulus on average, but on any given presentation one set of these stimulus-selective neurons is active and the other is somewhat suppressed. Neither of these cases reduces the ability of a downstream area to decode information: a downstream brain area can decode the stimulus by pooling the stimulus-driven neurons’ inputs.
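To make this concrete, here is a minimal simulation of the low-rate Poisson scenario; all numbers are hypothetical and chosen only to illustrate the argument, not taken from either dataset:

import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials = 500, 500

# Hypothetical rates: a shared low baseline plus weak stimulus tuning.
baseline = 0.5                                   # spikes per bin
tuning = rng.gamma(shape=2.0, scale=0.05, size=n_neurons)
rates_a = baseline + tuning
rates_b = baseline + rng.permutation(tuning)     # same statistics, different tuning

trials_a = rng.poisson(rates_a, size=(n_trials, n_neurons))
trials_b = rng.poisson(rates_b, size=(n_trials, n_neurons))

# Single-trial correlations with the trial-averaged template are weak:
# Poisson noise at these low rates swamps the (real) tuning on single trials.
template_a = trials_a.mean(axis=0)
corrs = np.array([np.corrcoef(t, template_a)[0, 1] for t in trials_a])
print(f"mean r = {corrs.mean():.2f}; trials with r < 0.1: {(corrs < 0.1).mean():.0%}")

# Yet a downstream decoder that pools over all neurons separates the stimuli.
w = template_a - trials_b.mean(axis=0)                   # pooling weights
mid = w @ (template_a + trials_b.mean(axis=0)) / 2       # decision threshold
acc = ((trials_a @ w > mid).mean() + (trials_b @ w < mid).mean()) / 2
print(f"pooled-decoder accuracy: {acc:.0%}")             # well above chance

The decoder here is just a difference-of-means readout; its accuracy grows with the number of neurons pooled, despite the near-zero single-trial correlations.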

This, for example, is the function of the “super-neurons” in (Stringer et al 2021): to pool over many neurons, not to choose the “super-coder” neurons. Indeed, the authors found that the behavioral relevance criterion increases as a function of the number of neurons considered. Also, in (Perez-Ortega et al 2021), neuron groups are found using unsupervised clustering (not directly using stimulus-driven properties), and there is no mention of finding “super-coder” neurons; instead, the authors discuss the stability of these groups of neurons across days.

A case where the decoder *would* fail is if the trial-to-trial variability were information-limiting (Moreno-Bote et al 2014). For example, if in response to the left grating all the neurons responded on some trials as if the grating were on the right, then the stimulus could not be decoded from the population on those trials. A neural population can be unreliable without having these detrimental correlations. Indeed, using dataset 2, the authors claim that motor and visual areas are “barely reliable and behaviorally relevant”, yet inactivation of these areas leaves the mouse unable to perform the task (Zatka-Haas et al 2021). Thus, the claim that these measures, particularly reliability, relate to task performance is unproven.
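A toy version of such an information-limiting failure mode (again with purely hypothetical numbers): on a fraction of trials the whole population "flips" to the other stimulus's response, and decoding accuracy saturates no matter how many neurons are recorded.

import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_trials, p_flip = 1000, 500, 0.2   # hypothetical numbers

rates_left = rng.gamma(shape=2.0, scale=0.5, size=n_neurons)
rates_right = rng.permutation(rates_left)

def simulate(own, other):
    # On a fraction p_flip of trials the whole population responds as if
    # the *other* stimulus had been shown (an information-limiting flip).
    flipped = rng.random(n_trials) < p_flip
    lam = np.where(flipped[:, None], other, own)
    return rng.poisson(lam)

left = simulate(rates_left, rates_right)
right = simulate(rates_right, rates_left)

w = left.mean(axis=0) - right.mean(axis=0)
mid = w @ (left.mean(axis=0) + right.mean(axis=0)) / 2
acc = ((left @ w > mid).mean() + (right @ w < mid).mean()) / 2
print(f"decoding accuracy: {acc:.0%}")   # saturates near 1 - p_flip (~80%);
# adding more neurons cannot push it past that ceiling.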

I still think these criteria are very useful, provided some of these caveats are mentioned. For example, it might be useful to discuss how increased numbers of neurons improve the behavioral relevance, suggesting that sampling more neurons per brain area can influence criterion 2. Also, it might be useful to discuss that certain types of unreliability are more detrimental to decoding analyses than others.

Reviewer #2: Review: What does the mean mean? A simple test for neuroscience

In their manuscript, Tlaie and co-authors embark on an exciting crusade to uncover the mean and meaninglessness of the trial-averaged view of population activity. They propose two central measures to capture how informative the averages are to characterize population activity and to predict behavioral outcomes. They start from first principles:

1. If means are informative, they should be well correlated with the population activity on each trial; moreover, single-trial activity should be more correlated with the average activity of the same trial condition than with that of a different one (see the sketch after this list).

2. If the upstream population (or researcher) is using means as a perfect template for the behavioral choice, closeness to the mean should be predictive of the quality of the behavioral response.
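A minimal sketch of how principle 1 could be operationalized (my reading, with hypothetical names and data; not necessarily the authors' exact definitions):

import numpy as np

def reliability_and_specificity(trials_a, trials_b):
    """Per-trial correlation with the own-condition average (reliability) and
    its excess over the correlation with the other condition's average
    (specificity). Leave-one-out averaging keeps a trial out of its own template."""
    r_own, r_other = [], []
    template_b = trials_b.mean(axis=0)
    for i, trial in enumerate(trials_a):
        template_a = np.delete(trials_a, i, axis=0).mean(axis=0)
        r_own.append(np.corrcoef(trial, template_a)[0, 1])
        r_other.append(np.corrcoef(trial, template_b)[0, 1])
    r_own, r_other = np.array(r_own), np.array(r_other)
    return r_own.mean(), (r_own - r_other).mean()

# Usage with hypothetical Poisson responses for two stimulus conditions:
rng = np.random.default_rng(2)
rates_a, rates_b = rng.gamma(2.0, 0.5, 80), rng.gamma(2.0, 0.5, 80)
a = rng.poisson(rates_a, size=(100, 80))
b = rng.poisson(rates_b, size=(100, 80))
rel, spec = reliability_and_specificity(a, b)
print(f"reliability ~ {rel:.2f}, specificity ~ {spec:.2f}")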

The authors derived the measures to capture how well these two principles are realized in the recorded data and evaluated them on two datasets. Interestingly, they found that in one dataset (with more constrained experimental conditions), the means were a rather good description of the population activity. In contrast, in the less constrained conditions, they were uninformative.

I generally liked the paper: the question is relevant and timely, the methods are solid and imaginative, the images are great, and the writing is quite clear.

I have some moderately minor and some really minor comments.

To me, it feels that results are sometimes stated more strongly than what is visible from the images/analysis. Specifically, in the discussion of 5B and C, the authors state in 208-209 that “in Data set 2, the relation between single-trial responses and trial-averaged templates is neither reliable nor specific.” However, in the figure and discussion, they have to argue that the indices/differences are significant but not large enough. While I support the usage of effect sizes, I feel they do not warrant, in this case, such a strong statement.

Further on, figure 7A: here, I am not sure whether the expected specificities for the different data pre-processings lie in the same range. In any case, it would be interesting to read why the reliability of the z-scored firing rate is larger than that of raw firing rates in Data set 2 but smaller in Data set 1, and, if the reliability under some pre-processing is so large, whether it is still reasonable to say that the means are not informative for Data set 2.
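For concreteness, this is the per-neuron z-scoring I assume is meant (my assumption, not code from the paper):

import numpy as np

def zscore_per_neuron(trials):
    """Z-score each neuron across trials. This equalizes the neurons' weight in
    template correlations: whether reliability then rises or falls depends on
    whether the high-rate neurons were the reliable ones, which could well
    differ between the two data sets."""
    mu = trials.mean(axis=0, keepdims=True)
    sd = trials.std(axis=0, keepdims=True)
    return (trials - mu) / np.where(sd > 0, sd, 1.0)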

A similar problem seems to appear in the discussion of 6C, line 280: “… grouping trials by average pupil size did not improve specificity”. However, panel 6C clearly shows an increase in the specificity index, even if a possibly insufficient one. Even more so for 6B and specificity: we expect it to grow because the more correlated neurons are being picked, yet the statement seems to contradict the picture.

The part on the surrogate data model is hard to read: the main text does not name the distribution (a Beta distribution) and does not reference the Methods. It refers to the function as Gaussian (299), but it merely looks bell-shaped; it has a bounded domain, so it cannot be Gaussian. For Fig 8, it is unclear where the squares indicating data come from. In the supplementary figure, the experimental data do not look symmetric, so they would not follow a one-parameter Beta distribution. Did you fit a Beta distribution to the responses? How was the SNR evaluated for the data?
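To make the question concrete, one way such a fit and an SNR estimate could look (the data and the SNR definition here are assumptions on my part, not necessarily the paper's):

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical normalized single-trial responses on [0, 1], right-skewed.
responses = rng.beta(a=2.0, b=5.0, size=1000)

# Two-parameter fit with the domain fixed to [0, 1]. A one-parameter Beta
# (a == b) is symmetric about 0.5 and could not capture this skew.
a_hat, b_hat, _, _ = stats.beta.fit(responses, floc=0, fscale=1)
print(f"fitted a = {a_hat:.2f}, b = {b_hat:.2f}")

# One plausible SNR estimate: mean over standard deviation of the responses.
print(f"SNR ~ {responses.mean() / responses.std():.2f}")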

Some writing comments:

The paper is overall very well written. My suggestions are mainly to improve the “reader experience”.

I was missing references to the particular parts of the vast Methods section. I feel they are necessary because, at least for me, it was challenging to match the section titles with the information I sought. I would suggest adding numbering/lettering to the subsections in Methods and referring to them directly.

For me, some parts feel a bit bloated, making it harder to get to the (simple and straightforward) point. Notably, the Discussion repeats the Results in rather long form. I would suggest shortening it, especially the parts about checking alternative hypotheses.

Also, in the Discussion (366-372), you mention “benchmarking.” This term has a strong meaning in some communities overlapping with the PLOS CB readership (checking performance on a standardized test and comparing it to previously presented solutions). I think this meaning does not fit your case: you illustrate the usage of the metrics, but you have neither a ground truth nor competitor metrics.

Minor:

In Fig 2C, middle: it seems like the bootstrap is still drawn from the correct trial group. If not, then I missed how it is done and why you still have the same value as on the left.

For Fig 4B-C, it feels like the thresholds for B were chosen such that the areas would end up being exactly the same as in C (at least for the stimulus information, even the smallest threshold change will add or remove some areas). I think the match is remarkable independent of the exact thresholding, but the total coincidence may invite over-confidence in the results. For better visibility, I would color the names of the most informative areas in C in the colors of their counterparts in B.

Line 187: I think you might have meant S3A

Line 298 and later in the text: something happened with “<” and “>”.

Line 314: “Data Set 1 should” be distributed.

In Methods, line 617: you mean the “Pearson correlation coefficient”.

In the Behavioral relevance part, it would be good to state what n_X and n_Y are.

Eq 17: k ∈ {0,1} (you need curly braces for the correct notation).

Fig S12: plotting the Omega for raw firing rates (as, e.g., unfilled circles) would make your statement visible.

Fig S13 lacks a caption.

**********

Have the authors made all data and (if applicable) computational code underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data and code underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data and code should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data or code —e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

Figure Files:

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org.

Data Requirements:

Please note that, as a condition of publication, PLOS' data policy requires that you make available all data used to draw the conclusions outlined in your manuscript. Data must be deposited in an appropriate repository, included within the body of the manuscript, or uploaded as supporting information. This includes all numerical values that were used to generate graphs, histograms, etc. For an example in PLOS Biology see here: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001908#s5.

Reproducibility:

To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols

References:

Review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript.

If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Revision 1

Attachments
Attachment
Submitted filename: 2024-01 Response to reviewers PLOS CompBio.pdf
Decision Letter - Lyle J. Graham, Editor, Bradley Voytek, Editor

Dear Mr Tlaie Boria,

We are pleased to inform you that your manuscript 'What does the mean mean? A simple test for neuroscience' has been provisionally accepted for publication in PLOS Computational Biology.

Before your manuscript can be formally accepted you will need to complete some formatting changes, which you will receive in a follow up email. A member of our team will be in touch with a set of requests.

Please note that your manuscript will not be scheduled for publication until you have made the required changes, so a swift response is appreciated.

IMPORTANT: The editorial review process is now complete. PLOS will only permit corrections to spelling, formatting or significant scientific errors from this point onwards. Requests for major changes, or any which affect the scientific understanding of your work, will cause delays to the publication date of your manuscript.

Should you, your institution's press office or the journal office choose to press release your paper, you will automatically be opted out of early publication. We ask that you notify us now if you or your institution is planning to press release the article. All press must be co-ordinated with PLOS.

Thank you again for supporting Open Access publishing; we are looking forward to publishing your work in PLOS Computational Biology. 

Best regards,

Bradley Voytek

Guest Editor

PLOS Computational Biology

Lyle Graham

Section Editor

PLOS Computational Biology

***********************************************************

Reviewer's Responses to Questions

Comments to the Authors:

Please note here if the review is uploaded as an attachment.

Reviewer #1: Thank you to the authors for thoroughly addressing my comments, and clarifying some points which I misunderstood. I think the manuscript is much clearer and is going to be useful for neuroscientists to read.

Reviewer #2: In the revised version of their manuscript “What does the mean mean? A simple test for neuroscience”, Tlaie and co-authors addressed all my concerns. The changes made to the paper sufficiently reflect the responses, and the manuscript is now ready for publication.

**********

Have the authors made all data and (if applicable) computational code underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data and code underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data and code should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data or code —e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

Formally Accepted
Acceptance Letter - Lyle J. Graham, Editor, Bradley Voytek, Editor

PCOMPBIOL-D-23-01578R1

What does the mean mean? A simple test for neuroscience

Dear Dr Tlaie Boria,

I am pleased to inform you that your manuscript has been formally accepted for publication in PLOS Computational Biology. Your manuscript is now with our production department and you will be notified of the publication date in due course.

The corresponding author will soon be receiving a typeset proof for review, to ensure errors have not been introduced during production. Please review the PDF proof of your manuscript carefully, as this is the last chance to correct any errors. Please note that major changes, or those which affect the scientific understanding of the work, will likely cause delays to the publication date of your manuscript.

Soon after your final files are uploaded, unless you have opted out, the early version of your manuscript will be published online. The date of the early version will be your article's publication date. The final article will be published to the same URL, and all versions of the paper will be accessible to readers.

Thank you again for supporting PLOS Computational Biology and open-access publishing. We are looking forward to publishing your work!

With kind regards,

Olena Szabo

PLOS Computational Biology | Carlyle House, Carlyle Road, Cambridge CB4 3DN | United Kingdom ploscompbiol@plos.org | Phone +44 (0) 1223-442824 | ploscompbiol.org | @PLOSCompBiol

Open letter on the publication of peer review reports

PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.

We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.

Learn more at ASAPbio.