Peer Review History

Original Submission
May 2, 2022
Decision Letter - Samuel J. Gershman, Editor, Marieke Karlijn van Vugt, Editor

Dear Appelhoff,

Thank you very much for submitting your manuscript "EEG-representational geometries and psychometric distortions in approximate numerical judgment" for consideration at PLOS Computational Biology.

As with all papers reviewed by the journal, your manuscript was reviewed by members of the editorial board and by several independent reviewers. In light of the reviews (below this email), we would like to invite the resubmission of a significantly-revised version that takes into account the reviewers' comments.

Two reviewers have read your paper, and although they see some merit in it, they also have major reservations. I encourage you to have a serious look at these concerns and try to address them.

We cannot make any decision about publication until we have seen the revised manuscript and your response to the reviewers' comments. Your revised manuscript is also likely to be sent to reviewers for further evaluation.

When you are ready to resubmit, please upload the following:

[1] A letter containing a detailed list of your responses to the review comments and a description of the changes you have made in the manuscript. Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

[2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file).

Important additional instructions are given below your reviewer comments.

Please prepare and submit your revised manuscript within 60 days. If you anticipate any delay, please let us know the expected resubmission date by replying to this email. Please note that revised manuscripts received after the 60-day due date may require evaluation and peer review similar to newly submitted manuscripts.

Thank you again for your submission. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments.

Sincerely,

Marieke Karlijn van Vugt, PhD

Associate Editor

PLOS Computational Biology

Samuel Gershman

Deputy Editor

PLOS Computational Biology

***********************

Reviewer's Responses to Questions

Comments to the Authors:

Reviewer #1: Here the authors examine how sequentially presented numerical information is encoded in a judgment context (i.e., was the average of the stream larger or smaller than a reference value) and in a choice context (i.e., was the blue stream larger on average than the red stream). Although behavioural analyses indicate that in the judgment context people compress numerical information (akin to a concave representation) and that in the choice context they follow the opposite pattern (akin to a convex representation), EEG encoding analyses reveal an "anti-compression" (convex representation) in both contexts.

The paper deals with an interesting research question. Furthermore, the analyses and description of the results are rigorous and clear. I have the following concerns/suggestions:

1) The authors claim that in the judgment context the representation is compressed. But Figure 1c speaks for a rather linear representation, at least at the average level. The model fits (Figure 1d) deviate significantly from linearity. It is not clear whether the psychometric model misfits substantially in some cases, giving rise to a significant k<1. Showing the decision weights per participant and performing some parameter-recovery exercises would help. Additionally, can the authors identify critical trials in which the "compressed" and "anti-compressed" models make opposite predictions? This would help to further corroborate the claim that k<1 in judgment and k>1 in choice.

2) Does the order of presentation influence behaviour, giving rise to primacy/recency patterns? Additionally, the authors state that "as expected, mean choice accuracy was higher in the single-stream task…This suggests that comparing two streams was more difficult than averaging a single stream (of otherwise physically identical inputs)". It is not clear if the authors suggest that judgment is easier than choice. Decisions in the former are made on the basis of 10 samples, whilst in the latter on the basis of 5 samples. Assuming some internal noise, the accuracy difference is not surprising. Please clarify this point and perhaps see if having the same level of internal noise (on a per-sample basis) in both tasks explains away the accuracy difference without necessitating a different value of softmax noise.

3) The psychometric and "neurometric" analyses do not agree. This mismatch could be an important finding, or, more simply, it could be that either of these analytical approaches is inaccurate. Importantly, the authors do not establish that the kappa and bias parameters obtained from the behavioural data correlate with the parameters obtained from the RSA analyses. Is there any association between neurometric and psychometric parameters? This comment also relates to point 1) (can the authors back up the k<1 vs. k>1 conclusion using complementary behavioural analyses?).
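The parameter-recovery exercise suggested in point 1 could be sketched as follows. This is a minimal illustration only, not the authors' actual model: the power-law form v(x) = x^k, the logistic choice rule, and all function names and parameter values here are assumptions.

```python
# Minimal sketch of a parameter-recovery check for a power-law
# psychometric distortion model (k < 1: compression, k > 1: anti-compression).
# All modelling choices here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def simulate_judgment(k, noise, n_trials=2000, n_samples=10, ref=5.0):
    """Simulate single-stream 'larger/smaller than reference' judgments."""
    samples = rng.integers(1, 10, size=(n_trials, n_samples)).astype(float)
    dv = (samples ** k - ref ** k).mean(axis=1)     # distorted decision evidence
    p_larger = 1.0 / (1.0 + np.exp(-dv / noise))    # logistic (softmax) choice rule
    return samples, rng.random(n_trials) < p_larger

def fit_k(samples, choices, ref=5.0,
          k_grid=np.linspace(0.3, 2.0, 35), noise_grid=np.linspace(0.05, 2.0, 30)):
    """Grid-search maximum-likelihood estimate of (k, noise)."""
    best, best_ll = (None, None), -np.inf
    for k in k_grid:
        dv = (samples ** k - ref ** k).mean(axis=1)
        for noise in noise_grid:
            p = np.clip(1.0 / (1.0 + np.exp(-dv / noise)), 1e-9, 1 - 1e-9)
            ll = np.sum(np.where(choices, np.log(p), np.log(1 - p)))
            if ll > best_ll:
                best, best_ll = (k, noise), ll
    return best

# Generate data with known compression (k = 0.7) and try to recover it.
samples, choices = simulate_judgment(k=0.7, noise=0.5)
k_hat, noise_hat = fit_k(samples, choices)
print(k_hat, noise_hat)
```

If k is recovered reliably at realistic trial counts, per-participant fits can be trusted; systematic bias in the recovered k would instead support the worry about model misfit raised above.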

Reviewer #2: In this study the authors analyze human behavioral and neural responses to numbers presented in streams when subjects evaluate their average and are asked to either compare it to a reference (task 1) or to the average of another stream (task 2).

The results report very weak evidence for two different psychometric distortions (compression vs. anti-compression) across the two tasks in the behavioral data, but very strong evidence for the same neurometric distortion (anti-compression) across tasks in the neural data. The two sets of results are presented as opposite/incongruent and are discussed as such.

I think that the results are presented and discussed in a slightly distorted fashion, one that is not fully supported by the data:

1. Behavioral data: the authors claim that the results reported confirm "robust compression in single-stream averaging and robust anti-compression in dual stream comparison". First, the evidence for compression in the single stream is far from robust: it is based on a rather weak statistic (p = 0.02), at least compared to the evidence for anti-compression in the dual task (p < 0.001). Second, I could not find a report of the goodness of fit of the model used to evaluate compression/anti-compression, and from eye inspection of the plots, the single-stream model might fit the data rather poorly. Third, what one should really compare to decide between compression and anti-compression are especially the extreme values. Again, from visual inspection of the data plotted, while for the dual stream it seems clear that the extreme values are overweighted, in the single stream there is extremely little evidence that the extremes are overweighted. Thus, I am far from convinced that these behavioural data differ as clearly across stream types as the authors want to report. The authors should provide more evidence for their claimed difference.

2. Neural data: the results are repeatedly reported in negative statements (e.g., "the neural processing of number samples in single-stream averaging was not characterized by compression"), while the positive results are preceded by minimizing expressions (e.g., "but—if anything—by anti-compression"). However, the picture is not as nuanced as the authors would like to present it: in the ERP analyses, for example, the results clearly and very significantly indicate that the neural processing of numbers for single AND dual streams is equally characterized by anti-compression.

3. Discussion: the discussion is centered on the difference between the behavioural and the neural data. For the reasons reported above (point 1), I am not convinced that, after all, they are so different. I think that the results section and the discussion of this paper should be deeply revised.

**********

Have the authors made all data and (if applicable) computational code underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data and code underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data and code should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data or code —e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: No

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

Figure Files:

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org.

Data Requirements:

Please note that, as a condition of publication, PLOS' data policy requires that you make available all data used to draw the conclusions outlined in your manuscript. Data must be deposited in an appropriate repository, included within the body of the manuscript, or uploaded as supporting information. This includes all numerical values that were used to generate graphs, histograms etc. For an example in PLOS Biology see here: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001908#s5.

Reproducibility:

To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols

Revision 1

Attachments
Attachment
Submitted filename: response_to_reviewers.pdf
Decision Letter - Marieke Karlijn van Vugt, Editor

Dear Appelhoff,

Thank you very much for submitting your manuscript "EEG-representational geometries and psychometric distortions in approximate numerical judgment" for consideration at PLOS Computational Biology.

As with all papers reviewed by the journal, your manuscript was reviewed by members of the editorial board and by several independent reviewers. In light of the reviews (below this email), we would like to invite the resubmission of a significantly-revised version that takes into account the reviewers' comments.

While this revision is much improved, one of the reviewers still has significant concerns about the paper and suggests a few analyses that could be done to clarify ambiguities. I therefore recommend a major revision to address these comments.

We cannot make any decision about publication until we have seen the revised manuscript and your response to the reviewers' comments. Your revised manuscript is also likely to be sent to reviewers for further evaluation.

When you are ready to resubmit, please upload the following:

[1] A letter containing a detailed list of your responses to the review comments and a description of the changes you have made in the manuscript. Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

[2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file).

Important additional instructions are given below your reviewer comments.

Please prepare and submit your revised manuscript within 60 days. If you anticipate any delay, please let us know the expected resubmission date by replying to this email. Please note that revised manuscripts received after the 60-day due date may require evaluation and peer review similar to newly submitted manuscripts.

Thank you again for your submission. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments.

Sincerely,

Marieke Karlijn van Vugt, PhD

Section Editor

PLOS Computational Biology

Samuel Gershman

Section Editor

PLOS Computational Biology

***********************

While this revision is much improved, one of the reviewers still has significant concerns about the paper and suggests a few analyses that could be done to clarify ambiguities. I therefore recommend a major revision to address these comments.

Reviewer's Responses to Questions

Comments to the Authors:

Please note here if the review is uploaded as an attachment.

Reviewer #1: The authors have done a good job in addressing most of my comments.

Point 1): I am fully satisfied by the authors' reply. One point that the authors have not touched upon is whether there could be some diagnostic trials (for instance, low vs. high variance in the comparison task) that could provide additional corroboration for the shape of the distortion.

Point 2): thanks for clarifying the task details. What has not been addressed is the question of serial-position effects. What I am missing is a more accurate characterisation of behaviour in the two tasks. Notably, participants find averaging easier than comparison. What could explain this result aside from assuming that the former has a lower noise level than the latter (which is a re-description of the accuracy difference)? My suggestion is to estimate the "leak" of accumulation and compare across tasks (e.g., Wyart, V., Myers, N. E., & Summerfield, C. (2015). Neural mechanisms of human perceptual choice under focused and divided attention. Journal of Neuroscience, 35(8), 3485-3498).

Assuming that fitting a "noisy, leaky" accumulator reveals processing differences in the two tasks, the question is then how these differences map onto the parameters of the psychometric model, and especially the distortion parameter. That is, if simulated data are generated by two leaky accumulators with linear input representation but different noise/leak parameters, is the psychometric fit going to (mistakenly) reveal differences in the distortion parameters? This is largely a sense-checking step that would increase confidence in the reported behavioural results.

Point 3): I appreciate the additional analyses performed. I recommend reporting the absence of correlations (even briefly), since this strengthens the argument that the authors make about the missing link between neural representations and behaviour. Given that the behavioural and neural analyses are central to this paper, exploring the relationship between the two seems an obvious step (despite the potential lack of statistical power).
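The sense check described in Point 2) above could be sketched roughly as follows: simulate a noisy, leaky accumulator with a strictly linear input representation, then fit a (mis-specified) power-law distortion model and see where the recovered k lands. The accumulator form, all parameter values, and function names here are illustrative assumptions, not the model of Wyart et al. (2015).

```python
# Sketch: does leak/noise alone make a power-law psychometric fit report
# a distortion (k != 1) even when the input representation is linear?
# All modelling choices here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def simulate_leaky(leak, noise, n_trials=3000, n_samples=10, ref=5.0):
    """Noisy, leaky accumulation of linearly represented number samples."""
    x = rng.integers(1, 10, size=(n_trials, n_samples)).astype(float)
    acc = np.zeros(n_trials)
    for t in range(n_samples):
        # each step: decay the running total, then add the next (noisy) sample
        acc = (1.0 - leak) * acc + (x[:, t] - ref) + rng.normal(0.0, noise, n_trials)
    return x, acc > 0  # choice: "stream average larger than reference"

def fit_distortion(x, choices, ref=5.0, k_grid=np.linspace(0.4, 1.8, 29)):
    """ML grid fit of a power-law distortion model that (wrongly) assumes no leak."""
    best_k, best_ll = None, -np.inf
    for k in k_grid:
        dv = (x ** k - ref ** k).mean(axis=1)
        for noise in (0.1, 0.25, 0.5, 1.0, 2.0):
            p = np.clip(1.0 / (1.0 + np.exp(-dv / noise)), 1e-9, 1 - 1e-9)
            ll = np.sum(np.where(choices, np.log(p), np.log(1 - p)))
            if ll > best_ll:
                best_k, best_ll = k, ll
    return best_k

# Data come from a linear (k = 1) but leaky, noisy accumulator;
# a recovered k far from 1 would signal the confound described above.
x, choices = simulate_leaky(leak=0.2, noise=1.0)
k_hat = fit_distortion(x, choices)
print(k_hat)
```

Running this for two parameter regimes (one per task) and comparing the recovered distortion parameters would implement the sense check with simulated ground truth.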

**********

Have the authors made all data and (if applicable) computational code underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data and code underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data and code should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data or code —e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Figure Files:

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org.

Data Requirements:

Please note that, as a condition of publication, PLOS' data policy requires that you make available all data used to draw the conclusions outlined in your manuscript. Data must be deposited in an appropriate repository, included within the body of the manuscript, or uploaded as supporting information. This includes all numerical values that were used to generate graphs, histograms etc. For an example in PLOS Biology see here: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001908#s5.

Reproducibility:

To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols

Revision 2

Attachments
Attachment
Submitted filename: response_to_reviewers2.pdf
Decision Letter - Marieke Karlijn van Vugt, Editor

Dear Appelhoff,

We are pleased to inform you that your manuscript 'EEG-representational geometries and psychometric distortions in approximate numerical judgment' has been provisionally accepted for publication in PLOS Computational Biology.

Before your manuscript can be formally accepted, you will need to complete some formatting changes, which you will receive in a follow-up email. A member of our team will be in touch with a set of requests.

Please note that your manuscript will not be scheduled for publication until you have made the required changes, so a swift response is appreciated.

IMPORTANT: The editorial review process is now complete. PLOS will only permit corrections to spelling, formatting or significant scientific errors from this point onwards. Requests for major changes, or any which affect the scientific understanding of your work, will cause delays to the publication date of your manuscript.

Should you, your institution's press office or the journal office choose to press release your paper, you will automatically be opted out of early publication. We ask that you notify us now if you or your institution is planning to press release the article. All press must be co-ordinated with PLOS.

Thank you again for supporting Open Access publishing; we are looking forward to publishing your work in PLOS Computational Biology. 

Best regards,

Marieke Karlijn van Vugt, PhD

Section Editor

PLOS Computational Biology

Samuel Gershman

Section Editor

PLOS Computational Biology

***********************************************************

Thank you very much for submitting your revised version. Both the reviewer and I think you have fully addressed all concerns, and I am happy to now accept the paper. Congratulations!

Reviewer's Responses to Questions

Comments to the Authors:

Please note here if the review is uploaded as an attachment.

Reviewer #1: The authors have done an excellent job addressing my remaining comments. I recommend acceptance of this paper.

**********

Have the authors made all data and (if applicable) computational code underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data and code underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data and code should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data or code —e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Formally Accepted
Acceptance Letter - Marieke Karlijn van Vugt, Editor

PCOMPBIOL-D-22-00680R2

EEG-representational geometries and psychometric distortions in approximate numerical judgment

Dear Dr Appelhoff,

I am pleased to inform you that your manuscript has been formally accepted for publication in PLOS Computational Biology. Your manuscript is now with our production department and you will be notified of the publication date in due course.

The corresponding author will soon be receiving a typeset proof for review, to ensure errors have not been introduced during production. Please review the PDF proof of your manuscript carefully, as this is the last chance to correct any errors. Please note that major changes, or those which affect the scientific understanding of the work, will likely cause delays to the publication date of your manuscript.

Soon after your final files are uploaded, unless you have opted out, the early version of your manuscript will be published online. The date of the early version will be your article's publication date. The final article will be published to the same URL, and all versions of the paper will be accessible to readers.

Thank you again for supporting PLOS Computational Biology and open-access publishing. We are looking forward to publishing your work!

With kind regards,

Zsofia Freund

PLOS Computational Biology | Carlyle House, Carlyle Road, Cambridge CB4 3DN | United Kingdom ploscompbiol@plos.org | Phone +44 (0) 1223-442824 | ploscompbiol.org | @PLOSCompBiol

Open letter on the publication of peer review reports

PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.

We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.

Learn more at ASAPbio.