Peer Review History

Original Submission: June 18, 2024
Decision Letter - Matthias Hennig, Editor

Dear Dr Gjorgjieva,

Thank you very much for submitting your manuscript "Pre-training artificial neural networks with spontaneous retinal activity improves motion prediction in natural scenes" for consideration at PLOS Computational Biology. As with all papers reviewed by the journal, your manuscript was reviewed by members of the editorial board and by several independent reviewers. The reviewers appreciated the attention to an important topic. Based on the reviews, we are likely to accept this manuscript for publication, providing that you modify the manuscript according to the review recommendations.

Please prepare and submit your revised manuscript within 30 days. If you anticipate any delay, please let us know the expected resubmission date by replying to this email.

When you are ready to resubmit, please upload the following:

[1] A letter containing a detailed list of your responses to all review comments, and a description of the changes you have made in the manuscript. Please note that, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

[2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file).

Important additional instructions are given below your reviewer comments.

Thank you again for your submission to our journal. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments.

Sincerely,

Matthias Helge Hennig, Ph.D.

Academic Editor

PLOS Computational Biology

Daniele Marinazzo

Section Editor

PLOS Computational Biology

***********************

A link appears below if there are any accompanying review attachments. If you believe any reviews to be missing, please contact ploscompbiol@plos.org immediately:

Reviewer's Responses to Questions

Comments to the Authors:

Please note here if the review is uploaded as an attachment.

Reviewer #1: In their paper "Pre-training artificial neural networks with spontaneous retinal activity improves motion prediction in natural scenes", May and Gjorgjieva demonstrate how pretraining artificial neural networks (ANNs) with retinal waves can enhance the learning and processing of dynamic visual stimuli.

Overall I found the results to be novel, interesting, and convincing. Moreover, I found the article to be well written, and the figures to be of high quality. Before I list issues that I believe should be addressed, I'd like to highlight that it is a pleasure to review an article that is submitted in such a polished state.

# Major Issues

(i) There are issues with some of the statistics. There are two interlocking issues here:

- Firstly, some of the statistics are poorly explained. When exploring the variability of model performance (e.g. Fig 2A-D), the authors report the performance achieved by repeated initializations of the same model on the same dataset. However, when I first read these statistics, it was not clear to me that it was only the model that was reinitialized, and I assumed that some kind of cross-validation was being performed, so that the datasets were being reprocessed somehow. Although the authors state that they're using a two-sided paired t-test, they should also state e.g. the sample size (so number of repeats) and any other relevant information in a brief summary. While the statistics aren't complicated, they should still be stated clearly, without needing to refer to the methods. Moreover, there appears to be post hoc analysis in grouping models (i.e. Fig 2A-C) but the exact method is nowhere stated.

- Secondly, and relatedly, ideally there would be some kind of cross-validation when assessing model performance. Ultimately what we care about (and what I assumed the shaded areas of the plot were supposed to indicate) is how well we can expect the model to perform on new data, which we don't really get an understanding of if there's only one train/test split. I realize that cross-validation of dynamical data is challenging, but there are methods. Moreover, since one of the major datasets being considered seems to be easy to synthesize, the authors should be able to create datasets with an appropriate structure. If these analyses were restricted to synthetic data, I would find that adequate.
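To sketch what I have in mind (an illustrative, hypothetical example, not a prescription for the authors' pipeline): an expanding-window split keeps every test frame strictly later in time than all training frames, which sidesteps the usual leakage problem with dynamical data.

```python
def time_series_splits(n_frames, n_splits):
    """Expanding-window cross-validation splits: each test block lies
    strictly after all frames used for training, so no future
    information leaks into the training set."""
    fold = n_frames // (n_splits + 1)
    for k in range(1, n_splits + 1):
        train = list(range(0, k * fold))
        test = list(range(k * fold, (k + 1) * fold))
        yield train, test

# Hypothetical 1000-frame synthetic maze movie, split into 5 folds
for train, test in time_series_splits(1000, 5):
    assert max(train) < min(test)  # training never sees future frames
    print(len(train), len(test))
```

Each fold then yields its own held-out score, so variability could reflect generalization to new data rather than reinitialization alone.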

(ii) I think the authors spend too much time on basic validation of the model, and more space could be given to analyses of LSTM/GRU architectures.

- On one hand, I was somewhat impatient while reading the beginning of the Results section, because I was waiting to see if the authors would actually control for total training time. While I agree that showing that the model can be pretrained effectively on the data is interesting, this seems more or less a "corollary" of the results that demonstrate it reduces overall training time.

- On the other hand, the effect that LSTMs/GRUs have is pushed to the supplement. While I understand that exploring these architectures is not central to the goals of the paper, further developing these analyses could help attract a larger audience.

I realize that what I'm suggesting here is a significant restructuring of the paper, and I'll leave it up to the authors to decide how much they feel this criticism is valid. Nevertheless, to summarize, I think the impact of the article could be enhanced by demonstrating how retinal wave pre-training might be practically relevant to machine learning researchers (i.e. by focusing on the results where performance is enhanced, and more strongly emphasizing how this plays out with more sophisticated RNN architectures).

# Line by line issues

"Strikingly, even when matching the total training time by merely replacing initial training epochs on naturalistic stimuli with exposure to retinal waves, an ANN trained on retinal waves temporarily outperforms one trained solely on natural movies" (Abstract)

This sentence is hard to parse. Try to refine it.

“For the natural scene, we aimed to mimic the visual experience of a mouse navigating through a maze with visual cues on the walls, a commonly employed paradigm in studies investigating the visual system” (p. 3) This reads more like a description of the data you'd like to have, rather than what you actually do. State that you are synthesizing the dataset, and point to the methods where you describe the details.

(p. 4) The numbered list is quite dense. Try to explain each paradigm more clearly, and/or point to the relevant method sections.

“We measured” (May and Gjorgjieva, 2024, p. 5) State the actual objective.

“Statistically significant differences in model performance at this stage were not substantial enough to meaningfully impact frame prediction.” (p. 5) Just re-emphasizing that the actual difference you're evaluating is unclear.

(Fig 2D) Shouldn't early stopping modulate how long each simulation lasts? This is an issue in other places as well.

“This rules out the possibility that the improved performance is solely attributable to the weight distribution achieved during pre-training, highlighting instead the importance of the spatio-temporal features learned from retinal waves during pre-training and encoded in the precise positioning of the network weights.” (p. 5) This could be better explained. Do you mean to rule out that this is merely a better initialization distribution?

“Statistics refer to a two-sided paired t-test (***: p < 0.001, **: p < 0.01, n.s.: p > 0.05).” (p. 6) Post hoc analysis/multiple comparisons control?
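For concreteness, a minimal sketch (with made-up p-values, purely for illustration) of the kind of family-wise error control I mean, here Holm's step-down variant of the Bonferroni correction:

```python
def holm_bonferroni(p_values, alpha=0.05):
    """Holm's step-down procedure: sort p-values ascending and compare
    the i-th smallest to alpha / (m - i); reject hypotheses until the
    first comparison fails, then stop."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # stop at the first non-significant test
    return reject

# Illustrative (made-up) p-values from several pairwise model comparisons
p = [0.001, 0.04, 0.03, 0.20]
print(holm_bonferroni(p))  # only the smallest p-value survives correction
```

Reporting which correction (if any) was applied across the pairwise model comparisons in Fig 2 would resolve my concern.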

“improvement is solely due to the longer training time resulting from pre-training, and second, to determine how the duration of pre-training influences the observed performance enhancement.” (p. 7) Why didn't we just do this in the first place?

“When passing the input frames through the ANN, convolution is iteratively applied to each frame.” (p. 10) Maybe I'm missing something obvious, but this should be better explained.
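To illustrate what I assume is meant (and please correct me if I misread): the same convolution kernel, with shared weights, is applied independently to each frame of the movie, i.e. a "time-distributed" convolution. A toy sketch of that reading:

```python
import numpy as np

def conv2d_valid(frame, kernel):
    """Naive 'valid'-mode 2D convolution (cross-correlation, as in ANNs)."""
    kh, kw = kernel.shape
    h, w = frame.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(frame[i:i + kh, j:j + kw] * kernel)
    return out

# A toy 5-frame movie; the same kernel is applied to every frame in
# turn, producing one feature map per time step.
movie = np.random.rand(5, 8, 8)
kernel = np.ones((3, 3)) / 9.0  # simple box filter
feature_maps = np.stack([conv2d_valid(f, kernel) for f in movie])
print(feature_maps.shape)  # one (6, 6) map per frame
```

If this is the intended meaning, a sentence to that effect in the text would suffice.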

“convolutional and recurrent neural networks, for instance with simple cells providing intuition for the principle of convolutional layers in ANNs [44]. Additionally, studies have shown that the representation of visual information in a convolutional neural network resembles that in the ventral stream of animals [45–49]. Specifically, the processing hierarchy in ANNs, where the layer hierarchy encodes increasingly complex stimulus features, matches that in the animal brain, where complexity increases in downstream brain areas [50–53]. Additionally, spontaneous activity seems to instruct the formation of independent parallel modules in the visual system [54], similar to the modules employed in ANNs. Despite certain properties being incompatible with the animal brain [55], this substantial line of work speaks to some extent of biological plausibility in such architectures.” (p. 13) This feels a bit out of place in the discussion. Reads much more like an introductory paragraph.

“Methods Dataset creation” (p. 13) The methods are clear, but I think they should be referenced more frequently in the text/there's also key details here that should be in the text.

Reviewer #2: The focus of this work is to study the effect of pre-training with retinal waves on using ANNs for next-frame prediction on natural movies. The conclusion is that pre-training with retinal waves can provide an advantage.

There are a few issues related to the presentation and design of the work. I hope these comments are constructive.

1. The authors demonstrated that DirectedProp and NotDirectedProp exhibit similar behaviors. They also noted that pre-training with retinal waves propagating in all directions yields comparable results. This suggests that directional bias is not critical. If these observations hold true, it raises the question of whether retinal waves are necessary at all. Pre-training with other simple patterns might yield similar outcomes, undermining the argument for the significance of retinal waves.

2. The authors briefly explored models pre-trained with retinal waves biased in all directions, not just temporal-to-nasal. If the primary claim emphasizes the importance of pre-training with retinal waves, it is crucial to systematically investigate different features of retinal waves, including those found in biology and those not yet identified. This approach would provide deeper insights into how retinal waves contribute to the tasks at hand.

3. Similarly, exploring various features of retinal waves for pre-training necessitates examining the characteristics of maze movies as well. For instance, the current setting involves a vertical layout of maze movies. Investigating other layouts and artificial movie patterns, such as moving gratings and looming patterns used in experiments, could be valuable.

4. It is good that the authors explored CatCam movies; however, the exploration appears too brief. Given the complexity of features in this dataset, a more systematic examination could be beneficial. In particular, trying to align features from CatCam movies and retinal waves could be valuable.

5. To substantiate the importance of retinal waves over other arbitrary patterns, it is essential to validate the modeling settings and demonstrate the unique significance of retinal waves. The current results do not clearly support this point, as it appears that other arbitrary patterns could yield similar outcomes, diminishing the perceived importance of retinal waves. Is it possible to determine which kinds of non-retinal-wave patterns are not important for the tasks at hand?

6. The study of receptive fields (RFs) lacks clarity. The current results primarily address size and strength with somewhat simplistic characteristics. Recent studies using ANNs have uncovered more detailed RF features of the retina, either with simple stimuli or with natural images/movies. The authors may consider referencing these studies to refine their investigation of RF characteristics.

**********

Have the authors made all data and (if applicable) computational code underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data and code underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data and code should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data or code (e.g., participant privacy or use of data from a third party), those must be specified.

Reviewer #1: Yes

Reviewer #2: None

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Sacha Sokoloski

Reviewer #2: No

Figure Files:

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org.

Data Requirements:

Please note that, as a condition of publication, PLOS' data policy requires that you make available all data used to draw the conclusions outlined in your manuscript. Data must be deposited in an appropriate repository, included within the body of the manuscript, or uploaded as supporting information. This includes all numerical values that were used to generate graphs, histograms, etc. For an example in PLOS Biology see here: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001908#s5.

Reproducibility:

To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols

References:

Review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript.

If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Revision 1

Attachments
Attachment
Submitted filename: Point by point responses.pdf
Decision Letter - Matthias Hennig, Editor

Dear Dr Gjorgjieva,

We are pleased to inform you that your manuscript 'Pre-training artificial neural networks with spontaneous retinal activity improves motion prediction in natural scenes' has been provisionally accepted for publication in PLOS Computational Biology.

Before your manuscript can be formally accepted you will need to complete some formatting changes, which you will receive in a follow up email. A member of our team will be in touch with a set of requests.

Please note that your manuscript will not be scheduled for publication until you have made the required changes, so a swift response is appreciated.

IMPORTANT: The editorial review process is now complete. PLOS will only permit corrections to spelling, formatting or significant scientific errors from this point onwards. Requests for major changes, or any which affect the scientific understanding of your work, will cause delays to the publication date of your manuscript.

Should you, your institution's press office or the journal office choose to press release your paper, you will automatically be opted out of early publication. We ask that you notify us now if you or your institution is planning to press release the article. All press must be co-ordinated with PLOS.

Thank you again for supporting Open Access publishing; we are looking forward to publishing your work in PLOS Computational Biology. 

Best regards,

Matthias Helge Hennig, Ph.D.

Academic Editor

PLOS Computational Biology

Daniele Marinazzo

Section Editor

PLOS Computational Biology

***********************************************************

Reviewer's Responses to Questions

Comments to the Authors:

Please note here if the review is uploaded as an attachment.

Reviewer #1: The authors have made significant improvements to the manuscript, and have adequately addressed my concerns. I have no further comments.

Reviewer #2: The manuscript shows substantial improvement with the addition of results and discussions. I understand the authors' point that some of my questions may be difficult to address completely within the current study.

**********

Have the authors made all data and (if applicable) computational code underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data and code underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data and code should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data or code (e.g., participant privacy or use of data from a third party), those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Sacha Sokoloski

Reviewer #2: No

Formally Accepted
Acceptance Letter - Matthias Hennig, Editor

PCOMPBIOL-D-24-01018R1

Pre-training artificial neural networks with spontaneous retinal activity improves motion prediction in natural scenes

Dear Dr Gjorgjieva,

I am pleased to inform you that your manuscript has been formally accepted for publication in PLOS Computational Biology. Your manuscript is now with our production department and you will be notified of the publication date in due course.

The corresponding author will soon be receiving a typeset proof for review, to ensure errors have not been introduced during production. Please review the PDF proof of your manuscript carefully, as this is the last chance to correct any errors. Please note that major changes, or those which affect the scientific understanding of the work, will likely cause delays to the publication date of your manuscript.

Soon after your final files are uploaded, unless you have opted out, the early version of your manuscript will be published online. The date of the early version will be your article's publication date. The final article will be published to the same URL, and all versions of the paper will be accessible to readers.

Thank you again for supporting PLOS Computational Biology and open-access publishing. We are looking forward to publishing your work!

With kind regards,

Anita Estes

PLOS Computational Biology | Carlyle House, Carlyle Road, Cambridge CB4 3DN | United Kingdom ploscompbiol@plos.org | Phone +44 (0) 1223-442824 | ploscompbiol.org | @PLOSCompBiol

Open letter on the publication of peer review reports

PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.

We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.

Learn more at ASAPbio.