Peer Review History

Original Submission: March 4, 2022
Decision Letter - Wolfgang Einhäuser, Editor, Xue-Xin Wei, Editor

Dear Li,

Thank you very much for submitting your manuscript "Robust deep learning object recognition models rely on low frequency information in natural images" for consideration at PLOS Computational Biology.

As with all papers reviewed by the journal, your manuscript was reviewed by members of the editorial board and by several independent reviewers. In light of the reviews (below this email), we would like to invite the resubmission of a significantly-revised version that takes into account the reviewers' comments.

We cannot make any decision about publication until we have seen the revised manuscript and your response to the reviewers' comments. Your revised manuscript is also likely to be sent to reviewers for further evaluation.

When you are ready to resubmit, please upload the following:

[1] A letter containing a detailed list of your responses to the review comments and a description of the changes you have made in the manuscript. Please note, while forming your response, that if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

[2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file).

Important additional instructions are given below your reviewer comments.

Please prepare and submit your revised manuscript within 60 days. If you anticipate any delay, please let us know the expected resubmission date by replying to this email. Please note that revised manuscripts received after the 60-day due date may require evaluation and peer review similar to newly submitted manuscripts.

Thank you again for your submission. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments.

Sincerely,

Xue-Xin Wei

Associate Editor

PLOS Computational Biology

Wolfgang Einhäuser

Deputy Editor

PLOS Computational Biology

***********************

Reviewer's Responses to Questions

Comments to the Authors:

Please note that the reviews of reviewers 1 and 2 are available as PDFs.

Reviewer #1: This paper shows that deep neural network models that are robust to adversarial images or data augmentations share one common feature: a preference for low frequency information in natural images. The authors started their analysis from the neurally regularized models and found that the mouse-regularized model has a strong preference for low frequency features. They then validated that other publicly available robust models also share this preference, and further proposed blurring preprocessing as a defense strategy against attacks.

At a high level, I think the main point of this article (robust networks prefer low-frequency features) is indeed supported by the presented evidence. This result will be useful for the community to better understand these robust networks and then design networks that are even more robust. However, I think there are several (possibly critical) issues that need to be addressed to make the whole story coherent. Detailed comments are in the attachment.

Reviewer #2: Uploaded as an attachment

Reviewer #3: Li et al. test an interesting computational hypothesis: namely, that computational models of vision are more robust to adversarial perturbations and image corruptions if they have a bias towards lower spatial frequencies. The authors show via power spectral analysis of adversarial images and a clever 'mixing' experiment that neurally regularized models (i.e., image classification models biased to have a representational geometry similar to mouse/primate visual activity) have a bias towards lower spatial frequencies. In addition, they show that the neurally regularized models are more robust, being less susceptible to 'common corruptions' (CIFAR10-C, ImageNet-C; Hendrycks & Dietterich, 2019) and yielding higher minimal perturbation distances for adversarial images generated under these models. The authors then suggest that a low frequency bias might be useful for robust object recognition in general. They study the spatial frequency bias of robustly trained models (adversarially trained or trained on 'common corruptions') and find that these indeed have a bias towards lower spatial frequencies compared to a baseline model. They also compare robust models to a model that includes a simple blur or PCA preprocessing step and show that some aspects of robustness can indeed be explained by biasing the model towards lower spatial frequencies in the input.
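For readers less familiar with this type of analysis: the radially averaged power spectrum on which such frequency analyses rely reduces the 2D Fourier power of an image to a one-dimensional frequency profile. A minimal NumPy sketch, illustrative only and not the authors' code:

```python
# Minimal sketch of a radially averaged power spectrum, the standard way to
# reduce a 2D image spectrum to a 1D frequency profile. Illustrative only,
# not the authors' code.
import numpy as np

def radial_power_spectrum(img):
    """Mean Fourier power per integer spatial-frequency radius."""
    spectrum = np.fft.fftshift(np.fft.fft2(img))   # centered 2D spectrum
    power = np.abs(spectrum) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.sqrt((y - h // 2) ** 2 + (x - w // 2) ** 2).astype(int)
    radial_sum = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    return radial_sum / np.maximum(counts, 1)      # mean power per radius bin

# e.g., compare the spectrum of a clean image with that of the adversarial
# perturbation (adv_img - clean_img) to quantify a frequency bias
```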

I enjoyed reading the paper and I consider its contribution to be valuable for our understanding of model robustness. I do have several comments though where I think the paper should be improved.

Comments:

1) I really like the idea of the blur model in the second part of the paper. It is the most direct implementation of the computational hypothesis stated in the paper, that a preference for lower spatial frequencies increases the robustness of a model. This model therefore has the highest explanatory value w.r.t. the main claim of the paper. Unfortunately, the blur model is missing in the first part of the paper. It is therefore not clear whether the neural regularization model inherits the lower spatial frequency bias *and* the robustness of the mouse/primate visual system, or whether the neural regularization model is more robust *because* it inherits the lower spatial frequency bias (which is the computational hypothesis of the paper). I would suggest including blur models in the first part of the paper to tease these possibilities apart.
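For concreteness, such a blur model amounts to a fixed Gaussian low-pass filter prepended to an arbitrary classifier. A minimal PyTorch sketch, with sigma and kernel size as illustrative assumptions rather than the authors' settings:

```python
# Minimal sketch of a blur-preprocessing model: a fixed (non-learned)
# Gaussian low-pass filter applied to the input before classification.
# Sigma and kernel size are illustrative assumptions.
import torch.nn as nn
from torchvision.transforms import GaussianBlur

class BlurModel(nn.Module):
    def __init__(self, classifier, sigma=1.5, kernel_size=5):
        super().__init__()
        self.blur = GaussianBlur(kernel_size=kernel_size, sigma=sigma)
        self.classifier = classifier

    def forward(self, x):                 # x: (N, C, H, W) image batch
        return self.classifier(self.blur(x))

# usage with any backbone, e.g. model = BlurModel(some_resnet, sigma=1.5);
# training then proceeds exactly as for the unmodified classifier
```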

2) The evaluation of robustness as a function of spatial frequency preference in the second part is not quite satisfactory. Why average the accuracy on the common corruption dataset over the different corruptions after making insightful comparisons between the different severity levels and categories of corruptions in the first part of the paper? It would be good to see the results of Figure 5b separately for the categories and severities; in particular, there seem to be clearly different predictions. In general, it would be most helpful for the reader to perform the same kind of analysis and show the same kind of plots in both parts of the paper, which seems possible for almost all analyses.

3) It would be helpful to include a wider range of baseline models in the second part of the paper (in particular for the mixing experiment and the adversarial perturbations). While it seems plausible that non-robust models prefer higher spatial frequencies compared to robust models, there is only one baseline model presented, and it is not clear to what extent its spatial frequency preference might be a function of the specific architecture (a WideResNet-28-10). Ideally, there would be a baseline model for each of the architectures of the robust models (Table 2). A compromise would be to include a set of baseline models that come close to representing the architectures in Table 2 but do not require model training (i.e., publicly available pretrained models). At the very least, one could add a ResNet-18 as an additional baseline model.

4) The focus of the paper is on machine learning models. The authors emphasize the importance and value of integrating machine learning and neuroscience which I fully agree with and I commend their approach. However, given that framing, I was missing a treatment (possibly in the discussion section) of the biological system that inspired the computational hypothesis here in a data-driven way. What do we know about whether the biological circuits indeed blur the image? Are psychophysics results and known spatial frequency preferences of mice and primate visual cortices consistent with a blurring hypothesis and the notion that this would increase robustness?

5) Please include more details about the overall procedure (e.g., in an additional procedure subsection in the Methods section) with a level of detail that would allow others to reproduce the analysis. Two examples: first, it is unclear what the training and test sets are for the adversarial evaluation of the neurally regularized models (what is "the full testing dataset"?). Second, "a fixed set of 1000 images were selected from testing set and the incorrect class targets for each image was also fixed" omits quite a few details, including what differs between the procedure in the first and the second part of the paper.

Minor comments:

6) It might be very informative to plot the spectra of the blur and pca filters (i.e., of the preprocessing layers) juxtaposed to the corruption spectra (Fig. 7) to understand which part of the robustness results (for the common corruptions) can be explained by a simple filtering explanation.
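Such a juxtaposition is cheap to produce: the frequency response of the blur preprocessing is just the Fourier transform of its kernel, zero-padded to the image size so the frequency axes match. A short NumPy sketch with illustrative parameters:

```python
# Minimal sketch: frequency response of a Gaussian blur kernel, which could
# be plotted next to the corruption spectra. Size and sigma are illustrative.
import numpy as np

def gaussian_kernel(size, sigma):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

kernel = gaussian_kernel(size=5, sigma=1.5)
# zero-pad to the image size (32x32 for CIFAR10) so the frequency axes
# match those of the corruption spectra
response = np.abs(np.fft.fftshift(np.fft.fft2(kernel, s=(32, 32))))
```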

7) How did the green curve in Figure 5b come about? Please also plot the actual datapoints to which the curve was fit. In addition, it is unclear whether the blur models underlying this curve are all trained with different fixed sigma or whether they correspond to a sweep of sigma on a model trained with a single fixed sigma. See my related point about more detailed description of the procedure.

8) Figures 5b and 6b: given that the performance of these models on clean images is quite different, wouldn't it make more sense to analyze the decrement in performance due to image corruption (relative to clean image performance) instead of the accuracy on the corrupted images?
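One simple version of such a measure, illustrative rather than prescriptive:

```python
# Illustrative decrement measures: absolute and relative accuracy drop on
# corrupted images, which normalize out differences in clean performance.
def accuracy_decrement(clean_acc, corrupt_acc):
    absolute = clean_acc - corrupt_acc
    relative = absolute / clean_acc    # fraction of clean accuracy lost
    return absolute, relative

# e.g. a model at 0.95 clean / 0.76 corrupted loses 0.19 absolute and
# 0.20 relative; a model at 0.85 / 0.68 also loses 0.20 relative
```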

9) The second part of the paper should - like the first part - report standard means and standard deviations for the relevant measures.

10) End of last paragraph on p. 3. Reference should be to Fig 2 c, g (not c, f)

11) "Since the mouse regularized models in Fig. 2 are trained on grayscale CIFAR10, it is not included in this comparison" (p. 4). Are there multiple mouse models or just one?

12) The term "corruption accuracy" is a misnomer and slightly confusing when encountered in Fig. 5b. The legend (in conflict with the y-axis label) states "corruption robustness", which is an ambiguous term as well. Accuracy (on corrupted images) would seem like the most accurate term, and it is also the one used in the first part of the paper.

13) "adv" and "crp" are not explained / spelled out in the figure legends or the text.

14) Colorbars are missing for the power spectra in Fig. 2 and Fig. 4.

15) Even though this is uncommon in machine learning, I would appreciate considerations regarding statistical inference. The standard error of most estimates is a function of the number of evaluation samples and can therefore be made arbitrarily low; reporting test statistics is therefore somewhat meaningless, but this fact could be made explicit in a short paragraph in the Methods section. It could also be made explicit that the kind of implicit statistical inference made in the paper is at the level of individual models (e.g., the specific instance of a single robustly trained model vs. a specific non-robust model) and not at the level of model classes (e.g., adversarially trained models vs. baseline models).
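For accuracy estimates, the dependence on sample size is explicit in the binomial standard error; a short illustration assuming independent evaluation samples:

```python
# The standard error of an accuracy estimate shrinks as 1/sqrt(n), so with
# enough evaluation samples almost any difference becomes "significant".
import math

def accuracy_standard_error(acc, n):
    """Binomial standard error of an accuracy estimated from n samples."""
    return math.sqrt(acc * (1.0 - acc) / n)

# e.g. at 90% accuracy: SE ~ 0.0095 for n=1000, but ~ 0.0030 for n=10000
```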

**********

Have the authors made all data and (if applicable) computational code underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data and code underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data and code should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data or code —e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No: The authors state that the data and code will be made public later.

Reviewer #2: Yes

Reviewer #3: No: The authors state that data and code will be made publicly available in a public repository but I cannot assess to what level of detail and when this will happen.

**********

PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Chengxu Zhuang

Reviewer #2: No

Reviewer #3: No

Figure Files:

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org.

Data Requirements:

Please note that, as a condition of publication, PLOS' data policy requires that you make available all data used to draw the conclusions outlined in your manuscript. Data must be deposited in an appropriate repository, included within the body of the manuscript, or uploaded as supporting information. This includes all numerical values that were used to generate graphs, histograms, etc. For an example in PLOS Biology see here: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001908#s5.

Reproducibility:

To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols

Attachments:
Submitted filename: Review_for_PLOS_Comp__Bio.pdf
Submitted filename: ploscb review.pdf
Revision 1

Attachments:
Submitted filename: rebuttal.pdf
Decision Letter - Wolfgang Einhäuser, Editor, Xue-Xin Wei, Editor

Dear Li,

Thank you very much for submitting your manuscript "Robust deep learning object recognition models rely on low frequency information in natural images" for consideration at PLOS Computational Biology.

As with all papers reviewed by the journal, your manuscript was reviewed by members of the editorial board and by several independent reviewers. In light of the reviews (below this email), we would like to invite the resubmission of a significantly-revised version that takes into account the reviewers' comments.

We cannot make any decision about publication until we have seen the revised manuscript and your response to the reviewers' comments. Your revised manuscript is also likely to be sent to reviewers for further evaluation.

When you are ready to resubmit, please upload the following:

[1] A letter containing a detailed list of your responses to the review comments and a description of the changes you have made in the manuscript. Please note, while forming your response, that if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

[2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file).

Important additional instructions are given below your reviewer comments.

Please prepare and submit your revised manuscript within 60 days. If you anticipate any delay, please let us know the expected resubmission date by replying to this email. Please note that revised manuscripts received after the 60-day due date may require evaluation and peer review similar to newly submitted manuscripts.

Thank you again for your submission. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments.

Sincerely,

Xue-Xin Wei

Academic Editor

PLOS Computational Biology

Wolfgang Einhäuser

Section Editor

PLOS Computational Biology

***********************

Reviewer's Responses to Questions

Comments to the Authors:

Please note here if the review is uploaded as an attachment.

Reviewer #1: I want to first thank the authors for their work addressing my concerns. They have added new experiments and results, answered my questions, and provided more explanations and discussions. With these modifications, I now recommend accepting the paper.

However, the newly added ImageNet ResNet-50 results suggest that robust networks at small and large resolutions differ significantly in why they are robust: the low-frequency preference matters more at small resolutions than at large ones. This resolution-dependent difference may also explain why mouse-regularized networks show a much stronger preference for low frequencies, as the mouse visual system has lower acuity. The results of this work are still important, as they uncover what underlies the robustness of small-resolution robust networks and indicate that robustness in large-resolution networks may require additional mechanisms. Given that large-resolution networks are much more widely used in real-world applications, I encourage the authors to think further about how to explain and improve the robustness of those networks.

No attachment is uploaded.

Reviewer #3: I reviewed the revised version of the manuscript and I appreciate the authors’ responses. However, the authors missed responding to one major and a couple of minor concerns in the rebuttal. I could also not find them addressed in the revised manuscript. I will reiterate them and/or quote from my last review.

Major:

1) I had asked the authors to include more baseline models in their analysis (my third point in the comments to the authors), which was not addressed in the rebuttal. The authors propose a computational principle for robust object recognition: namely, that low spatial frequency preference is one cause of model robustness. Evidence is presented only for robustly trained models, with the exception of a single baseline architecture. Spatial frequency preferences may vary quite substantially between architectures with different spatial integration properties. The correlations between spatial frequency preference and robustness (e.g., Fig 4c and 5b) might therefore become less clear-cut if a wider range of non-robustly trained baseline models is considered. This would restrict the proposed computational mechanism from a general principle to the special case of models trained for robustness. Currently, there is a single baseline model (WideResNet-28-10) while the robust models have a wider variety of architectures (WideResNet-70-16, WideResNet-28-10, WideResNet-34-20, ResNeXt29-32x4d, PreActResNet-18, ResNet-18).

It should be quite straightforward to include baseline models of at least a few more of the other architecture types (WideResNet-70-16, WideResNet-28-10, WideResNet-34-20, ResNeXt29-32x4d, PreActResNet-18, ResNet-18) for the CIFAR10 part and I ask the authors to include these analyses in the revised manuscript.

Minor:

2) “9) The second part of the paper should - like the first part - report standard means and standard deviations for the relevant measures.”

3) “11) "Since the mouse regularized models in Fig. 2 are trained on grayscale CIFAR10, it is not included in this comparison" (p. 4). Are there multiple mouse models or just one?” -> It would be helpful for the reader to very briefly clarify in the manuscript or supplement what those models are (why not just one VGG19 mouse model?). Not every reader might be deeply familiar with the details of the respective publication.

4) “13) "adv" and "crp" are not explained / spelled out in the figure legends or the text.”

**********

Have the authors made all data and (if applicable) computational code underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data and code underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data and code should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data or code —e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #3: Yes

**********

PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #3: No

Figure Files:

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org.

Data Requirements:

Please note that, as a condition of publication, PLOS' data policy requires that you make available all data used to draw the conclusions outlined in your manuscript. Data must be deposited in an appropriate repository, included within the body of the manuscript, or uploaded as supporting information. This includes all numerical values that were used to generate graphs, histograms, etc. For an example in PLOS Biology see here: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001908#s5.

Reproducibility:

To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols

Revision 2

Attachments:
Submitted filename: rebuttal.pdf
Decision Letter - Wolfgang Einhäuser, Editor, Xue-Xin Wei, Editor

Dear Li,

We are pleased to inform you that your manuscript 'Robust deep learning object recognition models rely on low frequency information in natural images' has been provisionally accepted for publication in PLOS Computational Biology.

Before your manuscript can be formally accepted you will need to complete some formatting changes, which you will receive in a follow up email. A member of our team will be in touch with a set of requests.

Please note that your manuscript will not be scheduled for publication until you have made the required changes, so a swift response is appreciated.

IMPORTANT: The editorial review process is now complete. PLOS will only permit corrections to spelling, formatting or significant scientific errors from this point onwards. Requests for major changes, or any which affect the scientific understanding of your work, will cause delays to the publication date of your manuscript.

Should you, your institution's press office or the journal office choose to press release your paper, you will automatically be opted out of early publication. We ask that you notify us now if you or your institution is planning to press release the article. All press must be co-ordinated with PLOS.

Thank you again for supporting Open Access publishing; we are looking forward to publishing your work in PLOS Computational Biology. 

Best regards,

Xue-Xin Wei

Academic Editor

PLOS Computational Biology

Wolfgang Einhäuser

Section Editor

PLOS Computational Biology

***********************************************************

Reviewer's Responses to Questions

Comments to the Authors:

Please note here if the review is uploaded as an attachment.

Reviewer #3: The authors have addressed all my concerns.

Please make sure to properly cite the URL of the CIFAR10 model repository (reference [38]) in the paper.

**********

Have the authors made all data and (if applicable) computational code underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data and code underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data and code should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data or code —e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #3: Yes

**********

PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #3: No

Formally Accepted
Acceptance Letter - Wolfgang Einhäuser, Editor, Xue-Xin Wei, Editor

PCOMPBIOL-D-22-00328R2

Robust deep learning object recognition models rely on low frequency information in natural images

Dear Dr Li,

I am pleased to inform you that your manuscript has been formally accepted for publication in PLOS Computational Biology. Your manuscript is now with our production department and you will be notified of the publication date in due course.

The corresponding author will soon be receiving a typeset proof for review, to ensure errors have not been introduced during production. Please review the PDF proof of your manuscript carefully, as this is the last chance to correct any errors. Please note that major changes, or those which affect the scientific understanding of the work, will likely cause delays to the publication date of your manuscript.

Soon after your final files are uploaded, unless you have opted out, the early version of your manuscript will be published online. The date of the early version will be your article's publication date. The final article will be published to the same URL, and all versions of the paper will be accessible to readers.

Thank you again for supporting PLOS Computational Biology and open-access publishing. We are looking forward to publishing your work!

With kind regards,

Zsofia Freund

PLOS Computational Biology | Carlyle House, Carlyle Road, Cambridge CB4 3DN | United Kingdom ploscompbiol@plos.org | Phone +44 (0) 1223-442824 | ploscompbiol.org | @PLOSCompBiol

Open letter on the publication of peer review reports

PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.

We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.

Learn more at ASAPbio.