Peer Review History

Original Submission
February 27, 2020
Decision Letter - Saad Jbabdi, Editor, Samuel J. Gershman, Editor

Dear Mr. Lerma-Usabiaga,

Thank you very much for submitting your manuscript "A validation framework for neuroimaging software: the case of population receptive fields" for consideration at PLOS Computational Biology.

As with all papers reviewed by the journal, your manuscript was reviewed by members of the editorial board and by several independent reviewers. In light of the reviews (below this email), we would like to invite the resubmission of a significantly-revised version that takes into account the reviewers' comments.

We cannot make any decision about publication until we have seen the revised manuscript and your response to the reviewers' comments. Your revised manuscript is also likely to be sent to reviewers for further evaluation.

When you are ready to resubmit, please upload the following:

[1] A letter containing a detailed list of your responses to the review comments and a description of the changes you have made in the manuscript. Please note, while forming your response, that if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

[2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file).

Important additional instructions are given below your reviewer comments.

Please prepare and submit your revised manuscript within 60 days. If you anticipate any delay, please let us know the expected resubmission date by replying to this email. Please note that revised manuscripts received after the 60-day due date may require evaluation and peer review similar to newly submitted manuscripts.

Thank you again for your submission. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments.

Sincerely,

Saad Jbabdi

Associate Editor

PLOS Computational Biology

Samuel Gershman

Deputy Editor

PLOS Computational Biology

***********************

Reviewer's Responses to Questions

Comments to the Authors:

Please note here if the review is uploaded as an attachment.

Reviewer #1: This study contains a worthwhile effort to validate the estimates of population receptive field (pRF) parameters with a range of common software packages used in the literature. I welcome the authors' intention because such validation is important and desperately needed. One would certainly hope that the authors of previous pRF analysis tools already performed such simulations on their methods in-house, and several authors have in fact published such validations, which would deserve some more discussion. Nevertheless, a more comprehensive effort is certainly very timely, considering the current popularity of these methods. I would be very interested to extend the authors' approach to our analysis tools (although I am not sure I have the technical knowledge to use the Docker approach presented here without further assistance). In general, I very much like the authors' suggestion of using such an approach to guide the most effective experimental design of future studies.

That said, I think there are a number of limitations to the study presented here; addressing them could make this study more comprehensive and increase its impact on the field:

1. As far as I understand, the simulated BOLD time series are based on Kay's analyzePRF method. As such, it doesn't seem very surprising that this method reproduces the ground truth most accurately in the noise-free data (Figure 5). This design seems asymmetric: if the pRF time courses were instead generated using the AFNI method, would this result in more precise estimates for AFNI?
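To make the generative scheme concrete, here is a minimal sketch of how such a synthetic time course is typically built (purely illustrative, not the authors' actual pipeline; the function names and parameter values are my own assumptions): project a Gaussian pRF onto the stimulus aperture at each TR, then convolve the resulting neural prediction with a canonical HRF.

import numpy as np
from scipy.stats import gamma

def gaussian_prf(x0, y0, sigma, X, Y):
    # Circularly symmetric 2D Gaussian pRF on a visual-field grid (deg).
    return np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2 * sigma ** 2))

def double_gamma_hrf(t, p1=6.0, p2=16.0, ratio=1 / 6):
    # Illustrative double-gamma HRF: difference of two gamma densities.
    return gamma.pdf(t, p1) - ratio * gamma.pdf(t, p2)

# Visual-field grid and a toy bar aperture sweeping left to right.
lin = np.linspace(-10, 10, 101)
X, Y = np.meshgrid(lin, lin)
n_tr, tr = 100, 1.0
stim = np.zeros((n_tr, lin.size, lin.size))
for t in range(n_tr):
    bar_x = -10 + 20 * t / (n_tr - 1)    # bar centre (deg) at this TR
    stim[t] = np.abs(X - bar_x) < 1.0    # 2-deg-wide vertical bar

# Example ground truth from the paper: centre (3, 3) deg, size 2 deg.
prf = gaussian_prf(3.0, 3.0, 2.0, X, Y)

# Neural prediction = stimulus/pRF overlap at each TR, then HRF convolution.
neural = stim.reshape(n_tr, -1) @ prf.ravel()
hrf = double_gamma_hrf(np.arange(0, 30, tr))
bold = np.convolve(neural, hrf)[:n_tr]   # noise-free synthetic signal

The point of the sketch is that every choice in it - HRF shape, aperture sampling, normalisation - is inherited from whichever package synthesised the data, which is exactly why this asymmetry matters.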

2. The differences between the methods also deserve more discussion. To my knowledge, the AFNI method is purely based on an extensive grid search whereas in mrVista this varies between studies (although the standard method includes an optimisation step). I assume default parameters were used for each method but it would be useful to compare this, perhaps in the form of a table.

3. It is unclear to me why only one pRF at one location and size was chosen for the assessments. Surely the situation might differ considerably across different ground truth eccentricities or with different ground truth pRF sizes (which in turn would probably have implications for different visual areas)?

4. Why was an oriented 2D Gaussian chosen as the underlying pRF model for the simulations? As far as I can tell, the most commonly used pRF model is a circularly symmetric 2D Gaussian. Unless I misunderstood, it also seems like the analysis of the data you present reflects a circularly symmetric model, and the data analysis appears to be based on a single example pRF that is also symmetric (see point 3).
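For concreteness, the two model forms being contrasted, in standard notation (textbook usage, not quoted from the manuscript): the circularly symmetric Gaussian has a single size parameter,

g(x, y) = \exp\!\left( -\frac{(x - x_0)^2 + (y - y_0)^2}{2\sigma^2} \right)

whereas the oriented (elliptical) variant adds independent widths \sigma_1, \sigma_2 along axes rotated by an angle \theta about the centre (x_0, y_0):

g(x, y) = \exp\!\left( -\frac{u^2}{2\sigma_1^2} - \frac{v^2}{2\sigma_2^2} \right), \quad u = (x - x_0)\cos\theta + (y - y_0)\sin\theta, \quad v = -(x - x_0)\sin\theta + (y - y_0)\cos\theta

The circular model is recovered when \sigma_1 = \sigma_2.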

5. I may have missed this, but are the data synthesised with the difference-of-Gaussians (DoG) pRF model actually used in any of the analyses? If not, then it might be sensible to remove this (you could say that this is possible to synthesise if desired). I am also not clear as to why this DoG model seems to have the same sigma (pRF size) parameters for the positive and negative components. For the DoG models implemented by Zuiderbaan et al (2012, J Vis), Schwarzkopf et al (2014, J Neurosci), or Anderson et al (2017, J Neurosci), the negative component is typically a larger surround and thus would have a larger sigma than the positive, central component (and these models did not use oriented, non-circular pRFs).
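For reference, the standard DoG form in that literature (e.g., Zuiderbaan et al, 2012) gives the suppressive surround its own, larger size parameter:

g_{\mathrm{DoG}}(x, y) = \exp\!\left( -\frac{r^2}{2\sigma_c^2} \right) - \beta \exp\!\left( -\frac{r^2}{2\sigma_s^2} \right), \qquad r^2 = (x - x_0)^2 + (y - y_0)^2, \quad \sigma_s > \sigma_c, \; 0 < \beta < 1

where \sigma_c and \sigma_s are the centre and surround sizes and \beta scales the surround amplitude.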

6. The analysis of sweep duration and stimulus randomisation doesn't appear until the Discussion. I feel this belongs in the Results section. It makes sense to me that the randomised designs have a weaker signal-to-noise ratio, and this matches what Binda et al (2013, J Vis) observed. Then again, we found in Infanti & Schwarzkopf (2020, NeuroImage) that randomisation did not impact signal-to-noise all that much - although this could represent a ceiling effect, because we only estimated polar angles in those experiments.

7. You suggest that polar angle and eccentricity errors between the methods are negligible. This certainly matches the reliability findings from Senden et al (2014, PLoS One), van Dijk et al (2016, NeuroImage), and Benson et al (2018, J Vis). I think these findings should be discussed in the context of these reliability studies. Of course, the aforementioned caveat applies that all the present results only speak to one simulated pRF, with (x, y) position (3, 3) and radius 2 deg. It would be important to quantify this across a plausible range of pRF parameters.

Minor points:

Line 140: "is comprised of" - I'm probably pedantic and also realise that this is a grey area - but I'd change this to "comprises" or "consists of"

Line 145: typo - "noise-free" should be "synthetic"

Line 261: "summer 2019" - while I know what you mean this is ambiguous unless you're a Flat Earther and don't believe in the existence of the southern hemisphere. Change to "mid-2019"?

Line 284: In all the figure captions you state that the radius SD of the simulated pRF was 2 deg. This matches also the ground truth shown in the figures themselves. But here in the text there appears to be a typo as it is reported as 1 deg.

Jargon and figure labels: these could be made more consistent. For example you seem to move between signal percent change, percent signal change, and relative amplitude - sometimes even within the same figure.

Citations for pRF models: You should probably cite Zuiderbaan et al (2012, J Vis) for the DoG pRF models (although this is a simpler and somewhat different model than you use here - see point 5). For the oriented pRF model you should probably cite Silson et al (2018, J Neurosci). As mentioned earlier, I think the most widely used model is the circularly symmetric Gaussian model first presented by Dumoulin & Wandell (2008).

Reference list citation 28: My name is misspelled in that reference as "Sam Schwarzkopf D" when the correct reference should read "Schwarzkopf DS". I've noticed this before and may have something to do with how reference managers parse the author list. Sorry...

Sam Schwarzkopf

University of Auckland

Reviewer #2: This paper presents a general computational framework for evaluating neuroimaging software, with both computational reproducibility and validity.

The authors demonstrate this framework by applying it to population receptive field (pRF) methods for fMRI, and they present initial findings comparing different pRF tools using this framework.

Two highlighted features of the framework are the containerised architecture (with BIDS implementation) which ensures reproducibility, and the generation of synthetic data with known ground truth to evaluate validity.

Within the pRF context, the authors report that this validation framework allowed them to identify and report issues with some pRF tools, and a significant dependence on the HRF model used in different implementations.

Furthermore, it is proposed that the framework can be extended to evaluate other neuroimaging algorithms.

I acknowledge the importance and need for such frameworks and their utility for comparing across different neuroimaging tools within a common validation framework.

The paper is clear and mostly well written. However, I am confused about the emphasis of this paper. Initially, from the title, abstract, and introduction, I was under the impression that this paper was describing a general tool that could be applied to many neuroimaging contexts, but which would be demonstrated in the pRF context. However, through the methods, results, and discussion, the focus seems to shift heavily towards the pRF-specific application.

I am left unclear whether this is a general framework that is demonstrated in a pRF application, or a pRF framework that could be, but has not been, adapted to be more general… I appreciate that this may be a subtle difference, but to many readers it will be an important distinction.

If it is the former, then much more detail is required about how the framework would need to be adapted for other neuroimaging contexts, and how much effort that would be.

If the latter, then I suggest you embrace this as a pRF validation tool and downplay the more general-sounding descriptions in the earlier parts of the paper, particularly the abstract and title. You should also promote the pRF results, which figure prominently in the discussion, to the abstract. In this form, you will need to take the evaluation of the pRF tools to its full conclusion, as you currently only describe initial findings.

Major issues:

Other than the emphasis, which I have already discussed, I have no other major issues with this paper.

Minor issues:

I think that the limitations section could benefit from some discussion of ground-truth versus truth, and generalisation. As this framework is dependent on synthetic data, the risk is that software will be optimised towards the ground-truth of synthetic data, which could reduce its generalisation to “real” data. Perhaps it would have been worthwhile performing evaluation with real data alongside the synthesised data, especially considering that the x-Analyze model already supports this.

**********

Have all data underlying the figures and results presented in the manuscript been provided?

Large-scale datasets should be made available via a public repository as described in the PLOS Computational Biology data availability policy, and numerical data that underlies graphs or summary statistics should be provided in spreadsheet form as supporting information.

Reviewer #1: Yes

Reviewer #2: Yes

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Sam Schwarzkopf

Reviewer #2: No

Figure Files:

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org.

Data Requirements:

Please note that, as a condition of publication, PLOS' data policy requires that you make available all data used to draw the conclusions outlined in your manuscript. Data must be deposited in an appropriate repository, included within the body of the manuscript, or uploaded as supporting information. This includes all numerical values that were used to generate graphs, histograms, etc. For an example in PLOS Biology, see here: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001908#s5.

Reproducibility:

To enhance the reproducibility of your results, PLOS recommends that you deposit laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions, please see http://journals.plos.org/compbiol/s/submission-guidelines#loc-materials-and-methods

Revision 1

Attachments
Submitted filename: Response Letter.docx
Decision Letter - Saad Jbabdi, Editor, Samuel J. Gershman, Editor

Dear Mr. Lerma-Usabiaga,

We are pleased to inform you that your manuscript 'A validation framework for neuroimaging software: the case of population receptive fields' has been provisionally accepted for publication in PLOS Computational Biology.

Before your manuscript can be formally accepted you will need to complete some formatting changes, which you will receive in a follow up email. A member of our team will be in touch with a set of requests.

Please note that your manuscript will not be scheduled for publication until you have made the required changes, so a swift response is appreciated.

IMPORTANT: The editorial review process is now complete. PLOS will only permit corrections to spelling, formatting or significant scientific errors from this point onwards. Requests for major changes, or any which affect the scientific understanding of your work, will cause delays to the publication date of your manuscript.

Should you, your institution's press office or the journal office choose to press release your paper, you will automatically be opted out of early publication. We ask that you notify us now if you or your institution is planning to press release the article. All press must be co-ordinated with PLOS.

Thank you again for supporting Open Access publishing; we are looking forward to publishing your work in PLOS Computational Biology. 

Best regards,

Saad Jbabdi

Associate Editor

PLOS Computational Biology

Samuel Gershman

Deputy Editor

PLOS Computational Biology

***********************************************************

Reviewer's Responses to Questions

Comments to the Authors:

Please note here if the review is uploaded as an attachment.

Reviewer #1: The authors have addressed my previous comments. With regard to previous literature on validation, I was really referring to previous work that used simulated time courses for assessing the validity of pRF model fits, similar to what the authors do here. Such approaches were for example used by Binda et al 2013 or Senden et al 2014. While the motivation behind that work may have more to do with the scientific research questions of these respective studies, it is arguably also a form of software validation. But perhaps discussing this is outside the scope of the present study.

I am sorry if I have missed this, but could you please give a little more detail as to how the slowed-down stimulus sequences worked? Were they slowed down by a longer duration at each bar step, or by having more bar steps at 1 second per step?

Sam Schwarzkopf

University of Auckland

Reviewer #2: I am satisfied that the authors have made a genuine attempt to address my concerns. There is now a clearer distinction between the general applicability of the tool and the specific application to pRF. I still would prefer a clearer title, but I am happy to leave that to the authors' discretion.

**********

Have all data underlying the figures and results presented in the manuscript been provided?

Large-scale datasets should be made available via a public repository as described in the PLOS Computational Biology data availability policy, and numerical data that underlies graphs or summary statistics should be provided in spreadsheet form as supporting information.

Reviewer #1: Yes

Reviewer #2: Yes

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

Formally Accepted
Acceptance Letter - Saad Jbabdi, Editor, Samuel J. Gershman, Editor

PCOMPBIOL-D-20-00318R1

A validation framework for neuroimaging software: the case of population receptive fields

Dear Dr Lerma-Usabiaga,

I am pleased to inform you that your manuscript has been formally accepted for publication in PLOS Computational Biology. Your manuscript is now with our production department and you will be notified of the publication date in due course.

The corresponding author will soon be receiving a typeset proof for review, to ensure errors have not been introduced during production. Please review the PDF proof of your manuscript carefully, as this is the last chance to correct any errors. Please note that major changes, or those which affect the scientific understanding of the work, will likely cause delays to the publication date of your manuscript.

Soon after your final files are uploaded, unless you have opted out, the early version of your manuscript will be published online. The date of the early version will be your article's publication date. The final article will be published to the same URL, and all versions of the paper will be accessible to readers.

Thank you again for supporting PLOS Computational Biology and open-access publishing. We are looking forward to publishing your work!

With kind regards,

Laura Mallard

PLOS Computational Biology | Carlyle House, Carlyle Road, Cambridge CB4 3DN | United Kingdom ploscompbiol@plos.org | Phone +44 (0) 1223-442824 | ploscompbiol.org | @PLOSCompBiol

Open letter on the publication of peer review reports

PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.

We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.

Learn more at ASAPbio.