Peer Review History
| Original Submission: July 19, 2019 |
|---|
|
Dear Dr Machens,

Thank you very much for submitting your manuscript 'Learning to represent signals spike by spike' for review by PLOS Computational Biology. Your manuscript has been fully evaluated by the PLOS Computational Biology editorial team and in this case also by independent peer reviewers. The reviewers appreciated the attention to an important problem, but raised some substantial concerns about the manuscript as it currently stands. While your manuscript cannot be accepted in its present form, we are willing to consider a revised version in which the issues raised by the reviewers have been adequately addressed. We cannot, of course, promise publication at that time.

Please note, while forming your response, that if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

Your revisions should address the specific points made by each reviewer. Please return the revised version within the next 60 days. If you anticipate any delay in its return, we ask that you let us know the expected resubmission date by email at ploscompbiol@plos.org. Revised manuscripts received beyond 60 days may require evaluation and peer review similar to that applied to newly submitted manuscripts.

In addition, when you are ready to resubmit, please be prepared to provide the following:

(1) A detailed list of your responses to the review comments and the changes you have made in the manuscript. We require a file of this nature before your manuscript is passed back to the editors.

(2) A copy of your manuscript with the changes highlighted (encouraged). We encourage authors, if possible, to show clearly where changes have been made to their manuscript, e.g. by highlighting text.

(3) A striking still image to accompany your article (optional).
If the image is judged to be suitable by the editors, it may be featured on our website and might be chosen as the issue image for that month. These square, high-quality images should be accompanied by a short caption. Please note as well that there should be no copyright restrictions on the use of the image, so that it can be published under the Open-Access license and be subject only to appropriate attribution.

Before you resubmit your manuscript, please consult our Submission Checklist to ensure your manuscript is formatted correctly for PLOS Computational Biology: http://www.ploscompbiol.org/static/checklist.action. Some key points to remember are:

- Figures uploaded separately as TIFF or EPS files (if you wish, your figures may remain in your main manuscript file in addition).
- Supporting Information uploaded as separate files, titled Dataset, Figure, Table, Text, Protocol, Audio, or Video.
- Funding information in the 'Financial Disclosure' box in the online system.

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org.

To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future.

We are sorry that we cannot be more positive about your manuscript at this stage, but if you have any concerns or questions, please do not hesitate to contact us.

Sincerely,

Samuel J.
Gershman
Deputy Editor
PLOS Computational Biology

A link appears below if there are any accompanying review attachments. If you believe any reviews to be missing, please contact ploscompbiol@plos.org immediately: [LINK]

Reviewer's Responses to Questions

Comments to the Authors: Please note here if the review is uploaded as an attachment.

Reviewer #1: 'Learning to represent signals spike by spike' is a normative study on learning rules built to represent multiple signals simultaneously in a spiking neural network. This work starts where previous efforts from the same authors (particularly Boerlin et al. (2013)) had left off. In that previous study, it was shown that a carefully crafted arrangement of synaptic weights allows a network of spiking neurons to represent an arbitrary number of continuous time-dependent signals. These results relied on a precise arrangement of synaptic weights, and the authors had to assume that such an arrangement was given a priori. In the present study, they ask whether there exist spike-timing-dependent synaptic learning rules that let the network self-organize to this rather convenient state. Following a normative approach based on a greedy optimization of decoding error, they show that there is a learning rule which can maximize encoding precision and at the same time shows a voltage and spike-timing dependence that matches, qualitatively, some standard in vitro experiments. The authors report considerable achievements of their learning rule in spiking neural networks (I commend the efforts to establish a Dalean network that self-organizes into a precise input representation). This work comes at an opportune moment, as part of the field of computational neuroscience shows a growing interest in learning rules that ensure that a particular function is conserved. Learning rules have been shown to have a plethora of shapes and properties, and the recent introduction of inhibitory learning rules is only making things worse.
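For context, the spike-by-spike greedy coding scheme summarized above (from Boerlin et al., 2013) can be sketched in a few lines of code. This is an illustrative sketch under simplifying assumptions, not the manuscript's actual implementation: the decoder D is fixed at random here (in the manuscript it is precisely the target of learning), the input is a toy sinusoid, and all parameter values are arbitrary choices for demonstration.

```python
import numpy as np

# Illustrative sketch of greedy spike-by-spike coding (after Boerlin et al., 2013).
# All parameter values below are assumptions chosen for demonstration only.
rng = np.random.default_rng(0)
N, M = 20, 2              # neurons, signal dimensions
dt, lam = 1e-3, 10.0      # time step (s), readout decay rate (1/s)

D = rng.normal(size=(M, N)) / np.sqrt(N)   # decoder weights, fixed at random here
T = np.sum(D**2, axis=0) / 2.0             # thresholds T_i = ||D_i||^2 / 2
r = np.zeros(N)                            # filtered spike trains (readout state)
steps = 5000
errs = np.empty(steps)

for k in range(steps):
    t = k * dt
    x = np.array([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])  # toy target signal
    V = D.T @ (x - D @ r)     # voltages = projected decoding error (= F x - Omega r)
    i = int(np.argmax(V - T))
    if V[i] > T[i]:           # greedy rule: fire only if the spike reduces the error
        r[i] += 1.0
    r *= 1.0 - lam * dt       # leaky decay of the readout
    errs[k] = np.sum((x - D @ r) ** 2)
```

The firing condition V_i > T_i is equivalent to requiring that adding D_i to the readout strictly reduces the squared decoding error ||x - D r||^2, and V = D^T(x - D r) expands to F x - Omega r with F = D^T and Omega = D^T D; those structured feedforward and recurrent weights are exactly what the manuscript asks local plasticity to learn.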
There was much focus on rate-based learning rules (FORCE learning and its variants), which have recently been shown to work with spiking neurons. In the same vein, there is much recent research on learning weight matrices that are the transpose of a known weight matrix in biological implementations of deep learning. All these problems are connected to the present work. Yet the present study is original and distinct from other studies in the sense that it applies to the predictive coding framework and promises an energy-efficient encoding of information. That being said, I think the paper needs to be revisited carefully in order to unify the narrative, the results presented, and the supplementary material. I expand on my point of view below, but overall I recommend further consideration of this MS for PLoS CB.

1. Abstract. The abstract seems to confuse premises with results. Statements like 'here we show that many single-neuron quantities including voltages... acquire a precise functional meaning' summarize the premise of the work rather than the result. A premise in the sense that these are the assumptions from which the main results are derived, but in this case also because these are the results of a previous paper from the same group. Similarly for the concluding sentence of the abstract. Going a little further, the question of finding THE level at which THE functional meaning emerges is not a key question in neuroscience. There are multiple levels of description, and therefore functionality has multiple levels of description. Multiple levels of description, but also multiple types of systems with membrane potentials (the spike-based predictive coding framework does not apply to the non-spiking retina despite the shared coding and energy constraints). These statements seem even more out of place when we consider that, in my view, the work presented in the MS does not address this question.
The work is about whether biological-looking learning rules can give rise to the nice benefits of the predictive coding framework. In effect, it would be nice if the abstract were more to the point. The introduction is good, so just a condensed version of the intro would do. Similar issue with the end of the discussion.

2. Intro. I thought I would mention a few related works that I think are germane to the present study:
- Membrane potential as prediction error: Urbanczik and Senn, Neuron (2014).
- Learning the transpose of weights: Burbank (2015) uses an STDP setup to do so.
- Further learning of the transpose of the weights in rate models: Akrout et al. (2019); Lansdell, Prakash and Kording (2019).

3. Decoder weights. In many places the decoder weights are said to be unknown, but then they are the target of the recurrent weights, and later the target of the feedforward weights. How can the weights be targets without being known? Similarly, the decoder network is sometimes an explicit network element, but at other times it is just a virtual presence introduced for the sake of argument. This was particularly confusing in the supplementary material: D is assumed unknown but should follow S.23, which is in effect F. Then F is assumed unknown, but derived to be D. I am left with the impression that there is a circular argument in the learning of F with a D that is not fixed a priori. In my view, the circular argument is present in sections like 8.2 of the supplementary material. The same goes for section 6 (6.4 has F mimic D, but 6.5 chooses D with F).

4. Supplementary material. I could not fully follow the supplementary material. There was too much back and forth between different formalisms and different sets of assumptions. Current-based learning, then voltage-based learning, then L1/L2 costs, then a summary of some of it; there has to be a more streamlined version.
There are a few, perhaps interesting, theoretical results that are not part of the results as far as I can see, particularly parts of section 5.

5. Figure 2 does not give enough credit to the learning of recurrent connections. As we can see in Fig. 2A, the error goes down dramatically at the end of the recurrence learning, so the signal reproduction is near perfect before the FF weights are learned. Since it is not clear in Figure 3 whether the recurrent weights have been adequately learned, this brings me to the question alluded to earlier: can you prove that FF learning is essential? In which cases?

6. Figure 4 shows inverted I-E connections with respect to E-E, but the text mentions the learning rules are the same. Also, on which side is t_post > t_pre in the learning windows?

7. x is assumed to be white in many places, but it is simulated as a non-white signal (section 14.1 and Eq. S.1).

8. Having filtered x and non-filtered x denoted by the same variables in the main results section is disturbing; please fix.

9. Please explain why noise is included in the simulations. I presume it is required to some extent.

10. Please verify the transposes in Eqs. S.10 and S.12.

11. L* is not defined in the first equation of Section 4 of the supplementary material. The wording just before that equation is a bit confusing, as it reads as if a loss function is defined by the equation. I find this equation and the one in 5.1 confusing. The goal is not to determine L*, but to determine the D or o that achieves L*. Why not use argmin?

12. What is a population spike (section S7)?

13. Main results section, pp. 14-15: the term 'decoding weights' has been used instead of 'FF weights'.

14. Note that FF weights from the thalamus are fixed after the critical period.

15. I was a bit frazzled by the overly simple descriptions.

Reviewer #2: Comments are uploaded as an attachment.

**********

Have all data underlying the figures and results presented in the manuscript been provided?
Large-scale datasets should be made available via a public repository as described in the PLOS Computational Biology data availability policy, and numerical data that underlies graphs or summary statistics should be provided in spreadsheet form as supporting information.

Reviewer #1: Yes

Reviewer #2: None

**********

PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Richard Naud

Reviewer #2: No
|
| Revision 1 |
|
Dear Dr Machens,

We are pleased to inform you that your manuscript 'Learning to represent signals spike by spike' has been provisionally accepted for publication in PLOS Computational Biology.

Before your manuscript can be formally accepted you will need to complete some formatting changes, which you will receive in a follow-up email. A member of our team will be in touch within two working days with a set of requests. Please note that your manuscript will not be scheduled for publication until you have made the required changes, so a swift response is appreciated.

IMPORTANT: The editorial review process is now complete. PLOS will only permit corrections to spelling, formatting or significant scientific errors from this point onwards. Requests for major changes, or any which affect the scientific understanding of your work, will cause delays to the publication date of your manuscript.

Should you, your institution's press office or the journal office choose to press release your paper, you will automatically be opted out of early publication. We ask that you notify us now if you or your institution is planning to press release the article. All press must be co-ordinated with PLOS.

Thank you again for supporting Open Access publishing; we are looking forward to publishing your work in PLOS Computational Biology.

Best regards,

Samuel J. Gershman
Deputy Editor
PLOS Computational Biology

*********************************************************** |
| Formally Accepted |
|
PCOMPBIOL-D-19-01208R1
Learning to represent signals spike by spike

Dear Dr Machens,

I am pleased to inform you that your manuscript has been formally accepted for publication in PLOS Computational Biology. Your manuscript is now with our production department and you will be notified of the publication date in due course.

The corresponding author will soon be receiving a typeset proof for review, to ensure errors have not been introduced during production. Please review the PDF proof of your manuscript carefully, as this is the last chance to correct any errors. Please note that major changes, or those which affect the scientific understanding of the work, will likely cause delays to the publication date of your manuscript.

Soon after your final files are uploaded, unless you have opted out, the early version of your manuscript will be published online. The date of the early version will be your article's publication date. The final article will be published to the same URL, and all versions of the paper will be accessible to readers.

Thank you again for supporting PLOS Computational Biology and open-access publishing. We are looking forward to publishing your work!

With kind regards,

Laura Mallard
PLOS Computational Biology | Carlyle House, Carlyle Road, Cambridge CB4 3DN | United Kingdom
ploscompbiol@plos.org | Phone +44 (0) 1223-442824 | ploscompbiol.org | @PLOSCompBiol |
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.