Peer Review History

Original Submission - September 3, 2019
Decision Letter - Oliver Gruebner, Editor

PONE-D-19-24776

Can syndromic surveillance help forecast winter hospital bed pressures in England? - Using routine daily syndromic surveillance data to forecast the winter peak in demand for hospital beds.

PLOS ONE

Dear Dr Morbey,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

I would be grateful if you could clarify the points both reviewers have raised, with particular focus on the methodological section. Many thanks!

We would appreciate receiving your revised manuscript by Dec 13 2019 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). This letter should be uploaded as a separate file and labeled 'Response to Reviewers'.
  • A marked-up copy of your manuscript that highlights changes made to the original version. This file should be uploaded as a separate file and labeled 'Revised Manuscript with Track Changes'.
  • An unmarked version of your revised paper without tracked changes. This file should be uploaded as a separate file and labeled 'Manuscript'.

Please note, while forming your response, that if your article is accepted you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

We look forward to receiving your revised manuscript.

Kind regards,

Oliver Gruebner

Academic Editor

PLOS ONE

Journal Requirements:

1. When submitting your revision, we need you to address these additional requirements. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at http://www.journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and http://www.journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please note that PLOS ONE has specific guidelines on software sharing (http://journals.plos.org/plosone/s/materials-and-software-sharing#loc-sharing-software) for manuscripts whose main purpose is the description of a new software or software package. In this case, new software must conform to the Open Source Definition (https://opensource.org/docs/osd) and be deposited in an open software archive. Please see http://journals.plos.org/plosone/s/materials-and-software-sharing#loc-depositing-software for more information on depositing your software.

Please also amend your Data availability statement to outline how other researchers may access the data used in this study, for instance by providing a direct link/URL or contact details, including an email address or phone number, for the relevant authority where the data are kept. Please also ensure that the specific dataset is identified, or that the Methods section contains enough detail for another researcher to reproduce the dataset.

3. Please remove the 'Draft not for onward circulation' watermark from the background of the manuscript pages (page 1 onwards).

4. Please amend either the title on the online submission form (via Edit Submission) or the title in the manuscript so that they are identical.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: I Don't Know

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: In this study, the authors have investigated whether the range of syndromic surveillance data available in England may help to predict the winter demand for hospital beds. This bed demand is clearly influenced by the amount of respiratory disease in the community, so these syndromic data may indeed be particularly valuable for healthcare planning, and this is potentially a very valuable study. However, I'm afraid that the methods section doesn't provide enough detail about the model design and evaluation for me to thoroughly assess the results and the authors' conclusions.

Major comments

==============

1. It's unclear exactly what each model comprised. Regarding the peak timing models, the authors state:

Models included a variable for linear trend, and Fourier variables to model seasonal variation (15). In addition to variables for weekends and public holidays, 25 December was also included as an additional binomial variable to account for differing presenting behaviour at Christmas compared to other public holidays (16). Each syndromic indicator was modelled separately, stratified by age group.

If I have interpreted this correctly, for each of the syndromic indicators there were 7 regression models, one for each of the following age groups: under 1 year, 1 to 4, 5 to 14, 15 to 44, 45 to 64, 65 to 74 and 75+ years. Is this correct? If so, presumably these 7 models (for a single indicator) did not necessarily share the same coefficients for the linear term, the seasonal terms, and the weekend, public holiday, and Christmas Day binomial variables? What were the coefficients for each model, and which seasonal terms (sines and cosines) were included?

2. What criteria were used to fit each model? Am I correct in thinking that the authors minimised the mean absolute error, as they used in a later section to assess model forecasts?

3. Regarding the peak timing models, why wasn't a null model (i.e., without a syndromic indicator) included for comparison? This would be very helpful for interpreting the adjusted R-squared values listed in Table 2.

4. The same models were then used to "assess the utility of syndromic data for forecasting" (and this time a null model was included). In this case, for each year Y in turn, the models were fit to data from all years except Y. From Figure 2 I gather that the daily bed demand on day D was predicted by each model using the syndromic indicator data up to, and including, day D. Is this correct? If so, these results should be called "nowcasts" instead of "forecasts". If not, and the models were used to predict daily bed demand for future days D+1 onward, Figure 2 should identify the forecasting date(s) and include confidence intervals.

5. In evaluating the improvements in forecast performance, the authors report the mean absolute error for each model and each age group in Table 3, averaged over the five winters. Relative to the null model, some of the syndromic models showed substantial improvements for children aged under 5 and adults aged 75+. But this was not evident for those aged 5-74. Did the authors also look at, e.g., histograms of the absolute errors for each model? It would be very interesting to know whether some of the models did yield better predictions than the null model for those aged 5-74 for the most part, but also yielded a handful of predictions with large errors.

6. It would also be very useful to see the mean absolute errors presented separately for each year, since the study period (2013-2018) included two "atypical influenza seasons" (2014/15 and 2017/18). This should hopefully provide more detailed evidence to support the authors' claim that "syndromic data can improve real-time forecasts of the intensity of peaks in emergency hospital admissions and excess activity outside of the peak period associated with seasonal respiratory disease".

In a similar vein, how did the adjusted R-squared for each model (as listed in Table) vary from one year to the next? This could be nicely presented in a reasonably simple plot, and would complement the results in Table 2.

Minor comments

==============

1. In the methods section, I find the sub-section titles "Peak timing" and "Peak intensity" somewhat confusing, when they could seemingly be titled "Model fitting" and "Model nowcasts/forecasts". This is especially true since the peak demand for hospital beds was always the 29th or 30th of December.

Reviewer #2: Thank you for an excellent manuscript. I have read it with great interest. The aims and methods of the study are clearly stated, results are presented in a clear way, figures illustrate the most important points, discussion offers relevant insights into the applicability of the results and future directions for research, and conclusion corresponds to the stated aims. I would only have very minor suggestions and questions for the authors to consider:

1. The conclusion, as formulated in the first paragraph of the Discussion, outlines very well the biggest advantage of adding syndromic indicators (forecasting the intensity of peak activity in atypical seasons), and where it adds less value (predicting the timing of the highest seasonal peak). Perhaps the conclusion in the Abstract could be specified similarly.

2. The authors discuss why they used all emergency admissions rather than respiratory admissions as the outcome (264-266). I wonder if the reasoning for this choice could be mentioned already in the Introduction. Could using all admissions contribute to the regular peak on 29-30 December? Potentially, the peak in respiratory-disease-related admissions might be more variable.

3. Moving averages are described in Methods (line 110) and Figure 1 suggests that they are averaged over 7 days. If that is correct, this could be mentioned in the Methods.

4. The authors discuss briefly the lag of temporal association (from line 224). Have you also modelled the emergency admissions considering some time lag (the indicator value from x days ago)? As a reader, I would be interested to see whether you tested such models as well, and if not, why.

5. Lines 377 and 379 could be removed.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.

Revision 1

Response to reviewers

Thank you for the helpful comments from the editor and reviewers. We have addressed each point raised below; our responses are shown in red.

Editor comments

Please remove the 'Draft not for onward circulation' watermark from the background of the manuscript pages (page 1 onwards).

We have removed the watermark from the manuscript.

Please amend either the title on the online submission form (via Edit Submission) or the title in the manuscript so that they are identical.

We have amended the title in the online submission form so that it is the same as in the manuscript.

Reviewers' comments:

Reviewer #1:

In this study, the authors have investigated whether the range of syndromic surveillance data available in England may help to predict the winter demand for hospital beds. This bed demand is clearly influenced by the amount of respiratory disease in the community, so these syndromic data may indeed be particularly valuable for healthcare planning and this is potentially a very valuable study. However, I'm afraid that the methods section doesn't provide enough detail about the model design and evaluation for me to thoroughly assess the results and the authors' conclusions.

We thank the reviewer for their constructive comments throughout their comprehensive review of our paper. The major comments seem to ask for clarification rather than request amendments to the manuscript; however, where appropriate, we have provided additional material to give further clarification for the reader.

Major comments

==============

1. It's unclear exactly what each model comprised. Regarding the peak timing models, the authors state:

Models included a variable for linear trend, and Fourier variables to model seasonal variation (15). In addition to variables for weekends and public holidays, 25 December was also included as an additional binomial variable to account for differing presenting behaviour at Christmas compared to other public holidays (16). Each syndromic indicator was modelled separately, stratified by age group.

If I have interpreted this correctly, for each of the syndromic indicators there were 7 regression models, one for each of the following age groups: under 1 year, 1 to 4, 5 to 14, 15 to 44, 45 to 64, 65 to 74 and 75+ years.

Is this correct? If so, presumably these 7 models (for a single indicator) did not necessarily share the same coefficients for the linear term, the seasonal terms, and the weekend, public holiday, and Christmas Day binomial variables? What were the coefficients for each model, and which seasonal terms (sines and cosines) were included?

The reviewer is correct in their interpretation that there were separate regression models for each syndromic indicator, each with separate coefficients for the different terms (seasonal, weekend, etc.). We do not feel that it is practicable to present the coefficients for each model separately because there are too many separate models. Furthermore, in addition to separate models for each indicator and age group, we also stratified by four English regions and five seasons (we have not presented separate results by region because we felt this was an unnecessary complication that did not add to the paper), while the stratification by season was necessary for the k-fold validation of forecasts. Consequently, publishing the model coefficients would require an additional 1540 (= 11 * 7 * 4 * 5) tables!

Two seasonal terms were used in our models; the following Stata expressions show how they were created using the day-of-year (doy) function:

cos(2*_pi*doy(date)/365), sin(2*_pi*doy(date)/365)

We have amended the text to clarify that a single pair of Fourier terms was used.
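For readers who do not use Stata, the same pair of annual Fourier terms can be sketched in Python (an illustrative translation of the Stata expressions above, not the authors' actual code):

```python
import math

def fourier_terms(doy, period=365.0):
    """Single pair of annual Fourier (seasonal) terms for a given day of
    the year, mirroring the Stata expressions cos(2*_pi*doy(date)/365)
    and sin(2*_pi*doy(date)/365)."""
    angle = 2 * math.pi * doy / period
    return math.cos(angle), math.sin(angle)

# Day 365 completes a full annual cycle: cos returns to 1 and sin to 0.
cos_term, sin_term = fourier_terms(365)
```

A single sine-cosine pair captures one smooth annual cycle; adding further harmonics would allow sharper seasonal shapes at the cost of extra coefficients.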

2. What criteria were used to fit each model? Am I correct in thinking that the authors minimised the mean absolute error, as they used in a later section to assess model forecasts?

Yes, this is correct: we sought to minimise the mean absolute error in our forecast model fitting.
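As a minimal illustration of the criterion, the mean absolute error over a set of daily forecasts is simply:

```python
def mean_absolute_error(actual, forecast):
    """Average of the absolute daily forecast errors: the criterion the
    authors say they minimised when fitting the forecast models."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

# Daily errors of 0, 2 and 1 admissions average to an MAE of 1.0.
mae = mean_absolute_error([100, 102, 98], [100, 100, 97])
```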

3. Regarding the peak timing models, why wasn't a null model (i.e., without a syndromic indicator) included for comparison? This would be very helpful for interpreting the adjusted R-squared values listed in Table 2.

Thank you for this suggestion, we have added a row for a null model to table 2.

4. The same models were then used to "assess the utility of syndromic data for forecasting" (and this time a null model was included). In this case, for each year Y in turn, the models were fit to data from all years except Y. From Figure 2 I gather that the daily bed demand on day D was predicted by each model using the syndromic indicator data up to, and including, day D. Is this correct? If so, these results should be called "nowcasts" instead of "forecasts". If not, and the models were used to predict daily bed demand for future days D+1 onward, Figure 2 should identify the forecasting date(s) and include confidence intervals.

The reviewer is correct in saying that daily bed demand on day D was predicted using syndromic data available up to day D; note, however, that this does not include syndromic data for day D itself, only up to the day before. Our aim was to replicate the conditions in which forecasts could be applied within our surveillance service, i.e. a daily service where each day we are looking at the syndromic data collected during the previous day. We did not include confidence intervals for Figure 2 because it is not in the format where the leftmost point represents the date of the forecast with a single fan-shaped confidence interval showing increasing uncertainty over time. Instead, Figure 2 shows a series of daily forecasts, each predicting just one day ahead.
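The scheme described can be sketched as a rolling series of one-day-ahead predictions (a hypothetical illustration; the naive predictor below stands in for the authors' regression models):

```python
def one_day_ahead_forecasts(series, predict):
    """Forecast each day d from the history up to and including day d-1,
    mimicking a daily surveillance service that only sees yesterday's data."""
    return [predict(series[:d]) for d in range(1, len(series))]

# Hypothetical naive predictor: carry yesterday's value forward.
carry_forward = lambda history: history[-1]
forecasts = one_day_ahead_forecasts([3, 5, 7, 6], carry_forward)  # [3, 5, 7]
```

Because each prediction looks only one day ahead, a fan-shaped interval of growing uncertainty would not apply; any interval would be per-day instead.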

5. In evaluating the improvements in forecast performance, the authors report the mean absolute error for each model and each age group in Table 3, averaged over the five winters. Relative to the null model, some of the syndromic models showed substantial improvements for children aged under 5 and adults aged 75+. But this was not evident for those aged 5-74. Did the authors also look at, e.g., histograms of the absolute errors for each model? It would be very interesting to know whether some of the models did yield better predictions than the null model for those aged 5-74 for the most part, but also yielded a handful of predictions with large errors.

We have added a line to the text to reflect that we did examine the histograms for the different indicators across the age bands and did not find any evidence of outliers having undue influence. Although there was a wider variance and longer tails for the 5-64 years age bands, this was also the case for the null model.

6. It would also be very useful to see the mean absolute errors presented separately for each year, since the study period (2013-2018) included two "atypical influenza seasons" (2014/15 and 2017/18). This should hopefully provide more detailed evidence to support the authors claim that "syndromic data can improve real-time forecasts of the intensity of peaks in emergency hospital admissions and excess activity outside of the peak period associated with seasonal respiratory disease".

We agree with the reviewer that showing mean absolute errors for each season could be useful, so we have added this as a supplementary table (S1) and provided some additional text in the results. In most cases the absolute errors are highest for 2017/18, which agrees with our commentary that the models under-estimated this season.

In a similar vein, how did the adjusted R-squared for each model (as listed in Table) vary from one year to the next? This could be nicely presented in a reasonably simple plot, and would complement the results in Table 2.

We decided not to include a table or plot of model fit by season because we felt this would require considerable extra explanation to ensure the results were not misinterpreted. The reason these results could be misinterpreted is the k-fold validation: e.g. the models we created for forecasting the 2017/18 season included data from all five seasons except 2017/18; we did not create forecast models using just the data for 2017/18. Therefore, if we were to present a version of Table 2 stratified by season, we would have to explain that each model is based on the exclusion of one year rather than the inclusion of one year. Consequently, if the 2017/18 season's data were causing a poor fit, this would result in all of the models having a poorer fit except for the one labelled "2017/18 excluded."
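The leave-one-season-out arrangement described above can be sketched as follows (season labels are for illustration only):

```python
def leave_one_season_out(seasons):
    """Yield (training, test) splits in which each season is forecast by a
    model fitted to data from every other season."""
    for test in seasons:
        train = [s for s in seasons if s != test]
        yield train, test

seasons = ["2013/14", "2014/15", "2015/16", "2016/17", "2017/18"]
splits = list(leave_one_season_out(seasons))
# The 2017/18 forecast model is trained on the other four seasons only.
```

This makes the interpretation issue concrete: an unusual season degrades the fit of every model except the one from which it was excluded.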

Minor comments

==============

1. In the methods section, I find the sub-section titles "Peak timing" and "Peak intensity" somewhat confusing, when they could seemingly be titled "Model fitting" and "Model nowcasts/forecasts". This is especially true since the peak demand for hospital beds was always the 29th or 30th of December.

We agree with the reviewer that these sub-headings are unhelpful, particularly for the methods section, and we welcome the suggestion to change them to "Model fitting" and "Model forecasts". For consistency, we have also changed the sub-headings in the results.

Reviewer #2:

Thank you for an excellent manuscript. I have read it with great interest. The aims and methods of the study are clearly stated, results are presented in a clear way, figures illustrate the most important points, discussion offers relevant insights into the applicability of the results and future directions for research, and conclusion corresponds to the stated aims. I would only have very minor suggestions and questions for the authors to consider:

We thank the reviewer for their very positive and supportive review.

1. The conclusion, as formulated in the first paragraph of the Discussion, outlines very well the biggest advantage of adding syndromic indicators (forecasting the intensity of peak activity in atypical seasons), and where it adds less value (predicting the timing of the highest seasonal peak). Perhaps the conclusion in the Abstract could be specified similarly.

Thank you for this suggestion, we have changed the abstract conclusion accordingly.

2. The authors discuss why they used all emergency admissions rather than respiratory admissions as outcome (264-266). I wonder if the reasoning for this choice could be mentioned already in the Introduction. Could using all admissions contribute to the regular peak in 29-30 December? Potentially, respiratory-diseases related admissions' peak might be more variable.

Prior to our study, we spent considerable time discussing with relevant experts the advantages and disadvantages of modelling 'all emergency admissions' vs 'respiratory admissions'. Although we believed that using just respiratory emergency admissions would give us stronger associations and better model fit, our stakeholders informed us that a forecast for total emergency admissions would be more valuable for healthcare service planners dealing with and managing pressures. Also, the greatest pressures felt by hospitals come when total emergency admissions peak, whatever the case mix. Therefore, we decided that the most useful research question was whether syndromic data were useful for forecasting all emergency admissions. We agree with the reviewer that this should be mentioned earlier in the paper and have added a sentence to the methods section under data collection.

3. Moving averages are described in Methods (line 110) and Figure 1 suggests that they are averaged over 7 days. If that is correct, this could be mentioned in the Methods.

We have added clarification to the methods that it is a 7-day moving average that is used.

4. The authors discuss briefly the lag of temporal association (from line 224). Have you also modelled the emergency admissions considering some time lag (the indicator value from x days ago)? As a reader, I would be interested to see whether you tested such models as well, and if not, why.

We did explore lagged associations in the preparatory work for this study. However, our primary concern was to create as accurate a forecast as possible rather than to model the lagged associations between the time series, and we found that the most accurate forecasts included the most recent data available. We have added a comment to the limitations section noting the importance of considering time lags in any future studies.

5. Lines 377 and 379 could be removed.

We have removed the unnecessary heading for Figure Legends.

[end]

Attachments
Attachment
Submitted filename: Response to reviewersV03.docx
Decision Letter - Oliver Gruebner, Editor

Can syndromic surveillance help forecast winter hospital bed pressures in England?

PONE-D-19-24776R1

Dear Dr. Morbey,

We are pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it complies with all outstanding technical requirements.

Please see the comments of reviewer #1 under point 6 below. Thank you!

Within one week, you will receive an e-mail containing information on the amendments required prior to publication. When all required modifications have been addressed, you will receive a formal acceptance letter and your manuscript will proceed to our production department and be scheduled for publication.

Shortly after the formal acceptance letter is sent, an invoice for payment will follow. To ensure an efficient production and billing process, please log into Editorial Manager at https://www.editorialmanager.com/pone/, click the "Update My Information" link at the top of the page, and update your user information. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to enable them to help maximize its impact. If they will be preparing press materials for this manuscript, you must inform our press team as soon as possible and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

With kind regards,

Oliver Gruebner

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: (No Response)

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: In this study, the authors have investigated whether the range of syndromic surveillance data available in England may help to predict the winter demand for hospital beds. This bed demand is clearly influenced by the amount of respiratory disease in the community, so these syndromic data may indeed be particularly valuable for healthcare planning, and this is potentially a very valuable study.

I thank the authors for their responses to my original comments, which have thoroughly addressed my concerns about the level of detail in the methods section. I have only one minor suggestion (see note 3, below) about this revised version.

1. Given the multiple levels of model stratification, I agree with the authors that the sheer number of model parameters is simply too large to include as additional tables. Thank you for clarifying that each model included two seasonal terms, rather than an arbitrary number of seasonal terms.

2. Thank you for including null model results in Table 2, which I find helpful for putting the results obtained from the other models into context.

3. My confusion about forecasting versus nowcasting stems from the use of the phrase "[using data] up to day D", because I find this wording ambiguous about whether day D itself is included. I think it would be helpful to include a remark to this effect in the "Model forecasts" part of the Methods section.

4. I thank the authors for noting in the text that there was no evidence of outliers affecting the mean absolute errors.

Reviewer #2: (No Response)

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

Formally Accepted
Acceptance Letter - Oliver Gruebner, Editor

PONE-D-19-24776R1

Can syndromic surveillance help forecast winter hospital bed pressures in England?

Dear Dr. Morbey:

I am pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please notify them about your upcoming paper at this point, to enable them to help maximize its impact. If they will be preparing press materials for this manuscript, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

For any other questions or concerns, please email plosone@plos.org.

Thank you for submitting your work to PLOS ONE.

With kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Oliver Gruebner

Academic Editor

PLOS ONE

Open letter on the publication of peer review reports

PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.

We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.

Learn more at ASAPbio.