Reader Comments
Miscellaneous comments and questions
Posted by NicholasReich on 02 Nov 2022 at 14:52 GMT
Thanks for this interesting paper! We read it during our research group’s lab meeting this week and have a few questions and comments.
1. The text and equations of the paper say that the models are fit to data on COVID-19 deaths, but many of the figures show "COVID-19 cases" on the axes. Which outcome variable was used?
2. It was not fully clear to us from the text whether a new set of models was fit to the data each week. Does the "best model" change from week to week as the models are refit? Figure 3 suggests so, but this is not stated explicitly in the text. Additionally, it would be interesting to see representative ensemble weights. It appears that the weights were mostly tilted towards the "best model" (understandably, given the weighting algorithm).
3. Were "finalized" death (or case) data used, or was any attempt made to use the data as they were available in real time?
4. The authors mention that they follow the EPIFORGE guidelines, but do not provide the checklist that accompanies those guidelines. This is related to the comment immediately above, since EPIFORGE 2020 Checklist item #4 is "Identify whether the forecast was performed prospectively, in real time, and/or retrospectively".
5. What was the reasoning behind using a maximum of only two sub-epidemics?
Thanks,
Nick Reich