Peer Review History

Original Submission: February 20, 2025
Decision Letter - Hugues Berry, Editor, Yuanning Li, Editor

PCOMPBIOL-D-25-00336

When predict can also explain: few-shot prediction to select better neural latents

PLOS Computational Biology

Dear Dr. Dabholkar,

Thank you for submitting your manuscript to PLOS Computational Biology. After careful consideration, we feel that it has merit but does not fully meet PLOS Computational Biology's publication criteria as it currently stands. As you will see in the attached comments, one of the reviewers has raised serious concerns about unclear method descriptions, overuse of toy examples, lack of real-data validation and statistical rigor, and formatting issues. We feel substantial revisions are necessary to address these concerns. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript within 60 days, by Jun 16 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at ploscompbiol@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pcompbiol/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

* A rebuttal letter that responds to each point raised by the editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'. This file does not need to include responses to formatting updates and technical items listed in the 'Journal Requirements' section below.

* A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

* An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, competing interests statement, or data availability statement, please make these updates within the submission form at the time of resubmission. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

We look forward to receiving your revised manuscript.

Kind regards,

Yuanning Li

Academic Editor

PLOS Computational Biology

Hugues Berry

Section Editor

PLOS Computational Biology

Journal Requirements:

1) Please ensure that the CRediT author contributions listed for every co-author are completed accurately and in full.

At this stage, the following authors require contributions: Kabir Vinay Dabholkar and Omri Barak. Please ensure that the full contributions of each author are acknowledged in the "Add/Edit/Remove Authors" section of our submission form.

The list of CRediT author contributions may be found here: https://journals.plos.org/ploscompbiol/s/authorship#loc-author-contributions

2) We ask that a manuscript source file is provided at Revision. Please upload your manuscript file as a .doc, .docx, .rtf or .tex. If you are providing a .tex file, please upload it under the item type 'LaTeX Source File' and leave your .pdf version as the item type 'Manuscript'.

3) Your manuscript is missing the following sections: Results, and Methods.  Please ensure all required sections are present and in the correct order. Make sure section heading levels are clearly indicated in the manuscript text, and limit sub-sections to 3 heading levels. An outline of the required sections can be consulted in our submission guidelines here:

https://journals.plos.org/ploscompbiol/s/submission-guidelines#loc-parts-of-a-submission 

4) Please upload all main figures as separate Figure files in .tif or .eps format. For more information about how to convert and format your figure files please see our guidelines: 

https://journals.plos.org/ploscompbiol/s/figures

5) Please upload a copy of Figures 4C and 4D, which you refer to in your text on page 8. Or, if the subfigures are no longer to be included as part of the submission, please remove all references to them within the text.

6) Please ensure that all Figure files have corresponding citations and legends within the manuscript. Currently, Figure 5 in your submission file inventory does not have an in-text citation. Please include the in-text citation of the figure.

7) We notice that your supplementary Figures, Tables, and information are included in the manuscript file. Please remove them and upload them with the file type 'Supporting Information'. Please ensure that each Supporting Information file has a legend listed in the manuscript after the reference list. Please cite and label the supplementary tables and figures as "S1 Table", "S2 Table", "S1 Figure", "S2 Figure", and so forth.

8)  Thank you for stating "Code for the HMM simulations is available at https://github.com/KabirDabholkar/hmm_analysis." This link reaches a 404 error page. Please amend this to a working link.

9) Please amend your detailed Financial Disclosure statement. This is published with the article. It must therefore be completed in full sentences and contain the exact wording you wish to be published.

1) State the initials, alongside each funding source, of each author to receive each grant. For example: "This work was supported by the National Institutes of Health (####### to AM; ###### to CJ) and the National Science Foundation (###### to AM)."

2) State what role the funders took in the study. If the funders had no role in your study, please state: "The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript."

10) Please ensure that the funders and grant numbers match between the Financial Disclosure field and the Funding Information tab in your submission form. Note that the funders must be provided in the same order in both places as well. Currently, the order of the funders is different in both places.

11) Please provide a completed 'Competing Interests' statement, including any COIs declared by your co-authors. If you have no competing interests to declare, please state "The authors have declared that no competing interests exist". 

Reviewers' comments:

Reviewer's Responses to Questions

Reviewer #1: This manuscript introduces a novel and interesting few-shot prediction metric to enhance the alignment between inferred latent variables and the true latents. Such a tool holds significant potential for modern neuroscience research, particularly in the context of large-scale population recordings. Nevertheless, the paper lacks sufficient results to support its conclusions, and the presentation of the work is somewhat disorganized.

1. When the authors challenge the common assumption about the effectiveness of co-smoothing, they should provide concrete examples from real neural data instead of using a simplified HMM as a toy example.

2. line 95-96, "It is common to assume that being able to predict held-out parts of X will guarantee that the inferred latent aligns with the true one"

I don't think this is a common assumption. Although co-smoothing is commonly used for benchmarking LVMs, its purpose is to demonstrate that the inferred latents recover the true latent signals in some way (e.g., up to rotation, permutation, linear combination, etc.).

Actually, this is consistent with your statement in line 101: "we hypothesize that good prediction guarantees that the true latents are contained within the inferred ones".

3. Over-parameterization usually achieves better prediction scores. By regularizing the model to have parsimonious parameters, we may achieve good alignment between the inferred latents and the true latents. This can be verified in Fig 1E: a small number of latents yields smaller D_{T->S} values. I think the authors should also comment on their few-shot approach vs. the model-selection approach (using AIC/BIC or other regularizers described in lines 18-20). One uses limited data, while the other uses limited model parameters.

4. The organization of the manuscript needs major revision. The manuscript devotes too much space to discussing the HMM toy example and fails to validate its applicability to real LVMs on neural data, whose latents are continuous values instead of discrete states. Most people will agree that a high prediction score does not necessarily yield good alignment with the true latents, so there is no need to spend a full page verifying this with an HMM. Simply using LFADS or STNDT results to show the diversity of inferred latent dynamics would suffice. Alternatively, you could use simulated data to provide ground-truth latents.

5. In the discussion of 'why does few-shot work', you could keep only the more insightful comments. The current version is too lengthy and obscure. You could also move this material to the supplementary materials. I think the manuscript should include more examples of neural data to validate the performance of the few-shot prediction score.

6. The few-shot approach is very similar to bagging in ensemble learning, which aims to reduce the variance of the model estimate. In bagging, random samples of the training set are selected with replacement. After generating several data samples, weak models (g' in your work) are trained independently on each. Perhaps you could comment on your method by connecting it with bagging.
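To make the analogy concrete, here is a minimal sketch of the bagging procedure described above. The function names and the linear least-squares weak learner are purely illustrative, not taken from the manuscript:

```python
import numpy as np

def bagging_predict(X_train, y_train, X_test, fit_model, n_models=10, seed=None):
    """Train several weak models on bootstrap resamples of the training set
    and average their predictions, reducing the variance of the estimate."""
    rng = np.random.default_rng(seed)
    n = len(X_train)
    preds = []
    for _ in range(n_models):
        # Draw a bootstrap resample: n training indices sampled with replacement.
        idx = rng.integers(0, n, size=n)
        model = fit_model(X_train[idx], y_train[idx])
        preds.append(model(X_test))
    # The bagged prediction is the average over the independently trained models.
    return np.mean(preds, axis=0)

def fit_least_squares(X, y):
    """An illustrative weak learner: ordinary least-squares regression."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return lambda X_new: X_new @ w
```

In the manuscript's setting, each bootstrap-trained weak model would play the role of a few-shot readout g', and the reduced-variance average corresponds to the ensemble estimate.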

7. The few-shot prediction approach, which is the main contribution of the manuscript, is not clearly described.

a. Line 152, you used N^{k-out}. What are the typical values for N^{in}, N^{out}, and N^{k-out}?

b. The caption of Fig 3B says that 'f and g are frozen'. If so, how does Q^k help with the model training? Are you only using few-shot prediction for model evaluation?

c. If few-shot prediction is simply used for evaluation, the metric only compares the inferred models. What if all models fail to align with the true latents?

8. The format of the manuscript needs major changes. It looks like a rushed conversion from a conference submission. The authors should follow the PLOS journal format and organize their manuscript according to journal standards. For example, the supporting information has subsections with an A-Z prefix, and different sections read like randomly compiled content without inherent connections.

9. The computational cost of the few-shot prediction metric should be presented or discussed.

10. What if the data has no trial structure? For example, LFADS uses single trials to infer the latents. Or maybe you have a different definition of trials?

11. Only mc_maze_20 is used for real-world experiments. More realistic evidence is needed, and perhaps some neuroscientific insights about how few-shot decoding can better recover latent dynamics.

12. There is no statistical presentation of the performance comparison, only descriptive words and figures. Hypothesis testing is needed to justify your assertions about data distributions (uncorrelated, negatively correlated, etc.).

13. The authors realized that using only HMMs for synthetic data is not enough, but only discussed this in the appendices, where they provided additional results with LGSSMs. Why not integrate these with the main text in the first place?

Minor:

- Fig 2 is a bit confusing.

- The good student is on the right but is mentioned first in the figure caption.

- What is "edge width"? Do you mean edge value or edge weight?

- There shouldn't be "invisible edges" in a graph. Maybe use lower alpha value or other colors.

- Line 201, typo, "the good student"

- L223-L225: the paragraph is just one sentence, which could be OK in some scenarios, but here it suggests the manuscript is not well organized.

- Fig 5 is not mentioned in your main text; instead you referred to Figure 19, which duplicates the content of Fig 5.

- In your writing, you should consistently use only one of "Fig", "Fig.", and "Figure".

- The real-world dataset needs a demonstration to address a non-expert audience:

- What is the experiment setting? Why should we model it with LVMs?

- How to interpret the latent variables underlying the dataset? And what does "extraneous dynamics" mean in this scenario?

- Without background information, a non-expert reader might not understand your experiments or your results, such as the "trajectories" in Figs 5 and 19, and may thus doubt the whole work.

- Some important notions (smoothing, co-smoothing, few-shot co-smoothing, cross-decoding) have similar and non-intuitive names; consider listing a table of terms in the main text or appendix and using abbreviations in the main text.

- Some typos ("a the" in line 89, "LGSMM" in line 478, missing section index in line 184).

Reviewer #2: This manuscript presents a new metric for evaluating latent variable models used in neuroscience, the few-shot co-smoothing score. When combined with standard co-smoothing, the new metric can identify models that fit the observed data well but that have unnecessarily complicated latent states (where the unneeded complexity is hidden by the observation model). In other words, the metric can help to identify models that have more vs less parsimonious representations of the latent dynamics. It is extremely common in modern neuroscience to fit latent state models and interpret the inferred latents in scientific terms -- so it is a huge problem that inferred latent states can be unnecessarily complicated, even when a model is a good fit to data. Hence the current manuscript is a timely and valuable contribution to the literature. It is also well-written, convincing, and thorough.

My only substantive comment is that, as the authors demonstrate, the few-shot relationship to ground truth is very sensitive to the choice of the number of few-shot neurons k, and the appropriate choice depends on the model class. So I think it is important to discuss the choice of k in the main text rather than only discussing it in the appendix.

I include some minor additional comments below.

Minor comments:

- Equation 1: I think it would be clearer to use "Z hat" here, because f is estimating the true unknown latent state

- Page 5 line 118: It would be good to justify your use of xi rather than z hat, e.g. "we use its posterior probability mass function as the relevant intermediate representation because it reflects a richer representation of the knowledge about the latent state than a single discrete state estimate" or "... because it captures the degree of belief in a given latent state rather than just the most likely discrete state" or "... because the true latent state z is unknown, and xi completely summarizes the current knowledge of it"

- Figure 4: I believe the color still represents M in this figure -- if so, please include M in the legend (like in 1D,E). Same comment for all similar figures.

- Equations 9, 10: It wasn't obvious to me right away that xi and mu were the (posterior) probabilities of the latent states at time 1, 2. It would be good to say it explicitly.

- Equation 11: missing Bhat_1(xi) subscript

- Page 8 line 197: "we see that" -- you aren't showing the bias/variance properties here, so you should instead refer the reader to the appendix.

- Page 9 line 211: Acronym SOTA used without definition ("state of the art" is used on page 2 line 29)

- Page 9 line 236: Missing reference "as in Section we"

- Page 12 line 286: Typo "arguement"

- Page 12 line 294: Wording "may be thus can evaluated"

**********

Have the authors made all data and (if applicable) computational code underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data and code underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data and code should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data or code —e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: Emily P Stephen

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

Figure resubmission:

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. If there are other versions of figure files still present in your submission file inventory at resubmission, please replace them with the PACE-processed versions.

Reproducibility:

To enhance the reproducibility of your results, we recommend that authors of applicable studies deposit laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols

Revision 1

Attachments
Attachment
Submitted filename: few shot rebuttal.docx
Decision Letter - Hugues Berry, Editor, Yuanning Li, Editor

PCOMPBIOL-D-25-00336R1

When predict can also explain: few-shot prediction to select better neural latents

PLOS Computational Biology

Dear Dr. Dabholkar,

Thank you for submitting your manuscript to PLOS Computational Biology. After careful consideration, we feel that it has merit but does not fully meet PLOS Computational Biology's publication criteria as it currently stands. One of the reviewers raised additional concerns and suggestions that should be addressed. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript within 60 days, by Nov 18 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at ploscompbiol@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pcompbiol/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

* A rebuttal letter that responds to each point raised by the editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'. This file does not need to include responses to formatting updates and technical items listed in the 'Journal Requirements' section below.

* A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

* An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, competing interests statement, or data availability statement, please make these updates within the submission form at the time of resubmission. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

We look forward to receiving your revised manuscript.

Kind regards,

Yuanning Li

Academic Editor

PLOS Computational Biology

Hugues Berry

Section Editor

PLOS Computational Biology

Journal Requirements:

1) We note that your Manuscript files are duplicated on your submission. Please remove any unnecessary or old files from your revision, and make sure that only those relevant to the current version of the manuscript are included.

2) Your manuscript is missing the following section: Results.  Please ensure all required sections are present and in the correct order. Make sure section heading levels are clearly indicated in the manuscript text, and limit sub-sections to 3 heading levels. An outline of the required sections can be consulted in our submission guidelines here:

https://journals.plos.org/ploscompbiol/s/submission-guidelines#loc-parts-of-a-submission 

3) We notice that your supplementary information (Appendices) is included in the manuscript file. Please remove them and upload them with the file type 'Supporting Information'. Please ensure that each Supporting Information file has a legend listed in the manuscript after the references list.

4) Please amend your detailed Financial Disclosure statement. This is published with the article. It must therefore be completed in full sentences and contain the exact wording you wish to be published.

1) State what role the funders took in the study. If the funders had no role in your study, please state: "The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript."

5) Please ensure that the funders and grant numbers match between the Financial Disclosure field and the Funding Information tab in your submission form. Note that the funders must be provided in the same order in both places as well. Currently, the order of the grants is different in both places.

Note: If the reviewer comments include a recommendation to cite specific previously published works, please review and evaluate these publications to determine whether they are relevant and should be cited. There is no requirement to cite these works unless the editor has indicated otherwise.

Reviewers' comments:

Reviewer's Responses to Questions

Reviewer #2: The revisions address my issues with the first submission, and I approve of the additional changes.

Minor comments:

- In the section "Why does few-shot work?", you present the linear regression case first without saying that's what you're doing. That is, on p7 line 142, you introduce three models (LR, HMM, prototype). The next paragraph would be clearer if you started it with "For the linear regression case..." or similar.

- In Figure 5, it took me a second to notice that in Panel A higher is better (likelihood), while in Panels B and C lower is better (error). So I didn't get right away that all three were showing the same trend with respect to extraneous noise. It would be helpful just to state it explicitly in the text and/or caption.

- On p8 lines 156-158, you compare the methods in terms of their bias/variance decompositions. I assume you are referring to the analysis in the Methods sections "Theoretical analysis of...". If so, please refer the reader to these Methods sections (again).

Reviewer #3: This manuscript highlights the problem of extraneous/spurious dynamics in latent variable models and introduces two evaluation approaches for identifying it: few-shot co-smoothing and cross-decoding. This problem is important and hampers interpretation and scientific conclusions drawn from these models, so suitable evaluation frameworks are a timely and valuable contribution. After the first revision, the manuscript is much clearer and more well-motivated. However, I do still have a few questions/comments about the work:

1. Given that 1) few-shot co-smoothing is still used in conjunction with standard co-smoothing, and 2) cycle consistency and cross-decoding also indicate presence of spurious dynamics (but not co-smoothing quality), what does few-shot co-smoothing exactly offer that the combination of co-smoothing and e.g., cycle consistency does not? Can few-shot co-smoothing be sufficient alone, without also evaluating standard co-smoothing? Why is it important to specifically have a prediction-based metric for parsimony of latents?

2. It is stated in the text that cycle consistency relies on the models having “perfect co-smoothing” or having “rate predictions [that are] perfect proxies of the true dynamics,” while cross-decoding does not. However, it is also stated in the text that cross-decoding also relies on the assumption that “high co-smoothing models contain the teacher latent.” How is this assumption different from that of cycle consistency?

3. Though linear-exponential-Poisson readouts are most common, some LVMs do not have this emissions model (e.g., linear-softplus, MLP). I assume this makes (linear) cross-decoding not directly applicable, and I wonder if few-shot co-smoothing scores are comparable across readout models. I assume the few-shot generalization behavior would vary, especially for a higher-parameter-count, neural-network-based readout as in ODIN (Versteeg et al. 2023), so I would appreciate some empirical exploration and/or discussion of this limitation (if I am correct in assuming that it is a limitation).

4. Though it is maybe obvious, I think the two-bit flip flop is potentially a good example to briefly discuss the consequence of spurious dynamics on accurate interpretation of the system. Are the red and green stars in Fig 2 unstable and stable fixed points? Do the spurious dynamics in the “bad” model lead to incorrect fixed point topology (as computed in Maheswaranathan et al. 2019, for example)?

5. Though maybe only directly applicable to models with strictly linear emissions models, I would appreciate some discussion of Procrustes-style metrics from Alex Williams and others, which also penalize spurious dynamics without needing two separate metrics.

6. I appreciate the qualitative difference in smoothness and separation of latents in Fig 7. Can you perform any quantitative evaluations that support this point? For example, can conditions be more accurately classified from initial conditions of the “good” model?

7. Minor typo: disucssion (line 214)

**********

Have the authors made all data and (if applicable) computational code underlying the findings in their manuscript fully available?


Reviewer #2: Yes

Reviewer #3: Yes

**********


Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #2: Yes: Emily P Stephen

Reviewer #3: No


Figure resubmission:

While revising your submission, we strongly recommend that you use PLOS’s NAAS tool (https://ngplosjournals.pagemajik.ai/artanalysis) to test your figure files. NAAS can convert your figure files to the TIFF file type and meet basic requirements (such as print size, resolution), or provide you with a report on issues that do not meet our requirements and that NAAS cannot fix.

After uploading your figures to PLOS’s NAAS tool - https://ngplosjournals.pagemajik.ai/artanalysis, NAAS will process the files provided and display the results in the "Uploaded Files" section of the page as the processing is complete. If the uploaded figures meet our requirements (or NAAS is able to fix the files to meet our requirements), the figure will be marked as "fixed" above. If NAAS is unable to fix the files, a red "failed" label will appear above. When NAAS has confirmed that the figure files meet our requirements, please download the file via the download option, and include these NAAS processed figure files when submitting your revised manuscript.


Revision 2

Attachments
Attachment
Submitted filename: Few shot rebuttal.pdf
Decision Letter - Hugues Berry, Editor, Yuanning Li, Editor

Dear Dr. Dabholkar,

We are pleased to inform you that your manuscript 'When predict can also explain: few-shot prediction to select better neural latents' has been provisionally accepted for publication in PLOS Computational Biology.

Before your manuscript can be formally accepted you will need to complete some formatting changes, which you will receive in a follow up email. A member of our team will be in touch with a set of requests. Please also consider addressing the final comments from the reviewer regarding more discussion in the text on the limitations of the proposed methods.

Please note that your manuscript will not be scheduled for publication until you have made the required changes, so a swift response is appreciated.

IMPORTANT: The editorial review process is now complete. PLOS will only permit corrections to spelling, formatting or significant scientific errors from this point onwards. Requests for major changes, or any which affect the scientific understanding of your work, will cause delays to the publication date of your manuscript.

Should you, your institution's press office or the journal office choose to press release your paper, you will automatically be opted out of early publication. We ask that you notify us now if you or your institution is planning to press release the article. All press must be co-ordinated with PLOS.

Thank you again for supporting Open Access publishing; we are looking forward to publishing your work in PLOS Computational Biology. 

Best regards,

Yuanning Li

Academic Editor

PLOS Computational Biology

Hugues Berry

Section Editor

PLOS Computational Biology

***********************************************************

Reviewer's Responses to Questions

Comments to the Authors:

Please note here if the review is uploaded as an attachment.

Reviewer #3: The authors have largely addressed all of my comments and misunderstandings. I only have one remaining minor comment, which is that I would appreciate a bit more discussion in the text on the limitations of the methods proposed here for comparing models with different readout/emission models. I think the current manuscript convincingly shows that the proposed metrics are effective for model selection among models of the same architecture, but it remains unclear how to compare raw few-shot co-smoothing values across architectures, especially when they have different readout models. Such cross-architecture comparison is essential if the metric is to be used widely for benchmarking.

There are more exceptions to the conventional linear-exp-Poisson readout than just ODIN [1]. Old-school methods like GPFA [2] and some of its extensions like (m)DLAG [3] still (unfortunately) use a linear-Gaussian emission model, and others like SLDS often use a linear-softplus readout (for example, in the SLDS NLB baseline). There are also methods incorporating spike history [4] and binomial/negative-binomial count models [5]. Most critically, I think there is growing interest in modelling neural dynamics on nonlinear manifolds, most prominently exemplified by CEBRA [6] (not really an LVM, of course) but also, e.g., LVMs with tuning-curve-based readout models [7,8,9].

None of these methods is state-of-the-art in current benchmarking settings, so I do not consider this a serious limitation of few-shot co-smoothing, and you need not cite every single one of these works, but I think it would be better to acknowledge this potential limitation more clearly.
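To make the reviewer's point concrete, here is a minimal sketch (with arbitrary, made-up dimensions and weights, not taken from the manuscript) of how the emission model alone changes the predictions produced from identical latents and an identical linear map, which is why raw likelihood-based scores are not directly comparable across readout families:

```python
import numpy as np

def softplus(x):
    # numerically stable log(1 + exp(x))
    return np.logaddexp(0.0, x)

rng = np.random.default_rng(0)
z = rng.normal(size=(100, 3))        # latent trajectory: 100 time bins, 3 dims
W = 0.3 * rng.normal(size=(3, 10))   # linear readout weights to 10 neurons
eta = z @ W - 1.0                    # shared linear pre-activation

# The same latents and linear map under three emission models:
rate_exp = np.exp(eta)               # linear-exp-Poisson (the conventional readout)
rate_sp = softplus(eta)              # linear-softplus (e.g. the SLDS NLB baseline)
mean_gauss = eta                     # linear-Gaussian (e.g. GPFA)

# Both link functions yield valid (strictly positive) Poisson rates,
# but the Gaussian mean can go negative, so a Gaussian log-likelihood
# lives on a different scale than a Poisson one.
assert (rate_exp > 0).all() and (rate_sp > 0).all()
assert (mean_gauss < 0).any()
```

The sketch only illustrates the link-function mismatch; it does not implement few-shot co-smoothing itself.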

[1] Versteeg, C., Sedler, A. R., McCart, J. D., & Pandarinath, C. (2023). Expressive dynamics models with nonlinear injective readouts enable reliable recovery of latent features from neural activity. arXiv preprint arXiv:2309.

[2] Yu, B. M., Cunningham, J. P., Santhanam, G., Ryu, S., Shenoy, K. V., & Sahani, M. (2008). Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity. Advances in neural information processing systems, 21.

[3] Gokcen, E., Jasper, A., Xu, A., Kohn, A., Machens, C. K., & Yu, B. M. (2023). Uncovering motifs of concurrent signaling across multiple neuronal populations. Advances in Neural Information Processing Systems, 36, 34711-34722.

[4] Zhao, Y., & Park, I. M. (2017). Variational latent gaussian process for recovering single-trial dynamics from population spike trains. Neural computation, 29(5), 1293-1316.

[5] Keeley, S., Zoltowski, D., Yu, Y., Smith, S., & Pillow, J. (2020). Efficient non-conjugate Gaussian process factor models for spike count data using polynomial approximations. In International conference on machine learning (pp. 5177-5186). PMLR.

[6] Schneider, S., Lee, J. H., & Mathis, M. W. (2023). Learnable latent embeddings for joint behavioural and neural analysis. Nature, 617(7960), 360-368.

[7] Wu, A., Roy, N. A., Keeley, S., & Pillow, J. W. (2017). Gaussian process based nonlinear latent structure discovery in multivariate spike train data. Advances in neural information processing systems, 30.

[8] Jensen, K., Kao, T. C., Tripodi, M., & Hennequin, G. (2020). Manifold GPLVMs for discovering non-Euclidean latent structure in neural data. Advances in Neural Information Processing Systems, 33, 22580-22592.

[9] Genkin, M., Shenoy, K. V., Chandrasekaran, C., & Engel, T. A. (2025). The dynamics and geometry of choice in the premotor cortex. Nature, 1-9.

**********

Have the authors made all data and (if applicable) computational code underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data and code underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data and code should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data or code (e.g., participant privacy or use of data from a third party), those must be specified.

Reviewer #3: Yes

**********

PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #3: No

Formally Accepted
Acceptance Letter - Hugues Berry, Editor, Yuanning Li, Editor

PCOMPBIOL-D-25-00336R2

When predict can also explain: few-shot prediction to select better neural latents

Dear Dr Dabholkar,

I am pleased to inform you that your manuscript has been formally accepted for publication in PLOS Computational Biology. Your manuscript is now with our production department and you will be notified of the publication date in due course.

The corresponding author will soon be receiving a typeset proof for review, to ensure errors have not been introduced during production. Please review the PDF proof of your manuscript carefully, as this is the last chance to correct any errors. Please note that major changes, or those which affect the scientific understanding of the work, will likely cause delays to the publication date of your manuscript.

Soon after your final files are uploaded, unless you have opted out, the early version of your manuscript will be published online. The date of the early version will be your article's publication date. The final article will be published to the same URL, and all versions of the paper will be accessible to readers.

For Research, Software, and Methods articles, you will receive an invoice from PLOS for your publication fee after your manuscript has reached the completed accept phase. If you receive an email requesting payment before acceptance or for any other service, this may be a phishing scheme. Learn how to identify phishing emails and protect your accounts at https://explore.plos.org/phishing.

Thank you again for supporting PLOS Computational Biology and open-access publishing. We are looking forward to publishing your work!

With kind regards,

Anita Estes

PLOS Computational Biology | Carlyle House, Carlyle Road, Cambridge CB4 3DN | United Kingdom ploscompbiol@plos.org | Phone +44 (0) 1223-442824 | ploscompbiol.org | @PLOSCompBiol

Open letter on the publication of peer review reports

PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.

We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.

Learn more at ASAPbio.