
Fig 1.

Overview of region-level influenza surveillance data in the US.

(A) Map of the 10 U.S. Department of Health and Human Services (HHS) regions. Influenza forecasts are made at this geographic scale. (B) Publicly available wILI data from the CDC website for the national level. The y-axis shows the estimated percentage of doctor’s office visits in which a patient presents with influenza-like illness for each week from September 2010 through July 2018. The dashed vertical line separates the data used by the models presented here into the training (retrospective) and testing (prospective) phases of analysis. (C) Publicly available wILI data for the national level and each of the 10 HHS regions. Darker colors indicate higher wILI.


Fig 2.

Training phase performance of the five pre-specified multi-model ensembles.

The five ensembles tested were Equal Weights (EW), Constant Weights (CW), Target-Type Weights (TTW), Target Weights (TW), and Target-Region Weights (TRW). The ensembles are sorted from simplest (left) to most complex (right), with the number of estimated weights (see Methods) for each shown at the top. Each point represents the average forecast score for a particular season, with the overall average across all seasons shown by an X.
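The weighted ensembles above all combine component forecasts as a convex mixture of their predictive distributions; they differ only in how many distinct weight vectors are estimated (one overall for CW, one per target type for TTW, and so on). A minimal sketch of that mixture step, using hypothetical component distributions and illustrative weights (the paper estimates weights from cross-validated forecast performance, not the made-up values shown here):

```python
import numpy as np

# Hypothetical predictive distributions over 3 outcome bins for one target;
# each row is one component model's forecast (rows sum to 1).
component_probs = np.array([
    [0.10, 0.60, 0.30],   # model A
    [0.20, 0.50, 0.30],   # model B
    [0.05, 0.70, 0.25],   # model C
])

# Illustrative weights. A Target-Type Weights (TTW) ensemble would
# estimate one such vector per target type (seasonal vs. short-term).
weights = np.array([0.5, 0.3, 0.2])

# Ensemble forecast = weighted mixture of component distributions.
ensemble_probs = weights @ component_probs
print(ensemble_probs.round(3))  # still a valid distribution (sums to 1)
```

Because the weights are non-negative and sum to one, the ensemble output is itself a proper probability distribution, which is what allows it to be scored the same way as any single component.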


Fig 3.

Component model weights for the FluSight Network Target-Type Weights (FSNetwork-TTW) ensemble model in the 2017/2018 season.

Weights were estimated using cross-validated forecast performance from the 2010/2011 through 2016/2017 seasons.


Fig 4.

Overall test and training phase performance scores for selected models.

Displayed scores are averaged across targets, regions, and weeks, and plotted separately for selected models. Models shown include the FSNetwork-TTW model, the top-performing model from each team during the training phase and, for the last two training seasons and the test season, the unweighted average of all FluSight models received by CDC. Model ranks within each row are indicated by the color of each cell (darker colors indicate higher rank and more accurate forecasts), and the forecast score (rounded to two decimal places) is printed in each cell. Note that a component’s standalone accuracy does not necessarily correlate with its contribution to the overall ensemble accuracy. See the discussion in the Ensemble Components subsection of the Methods.


Fig 5.

Average forecast scores and ranks by target and region for 2017/2018.

Models shown include the FSNetwork-TTW model, the top-performing model from each team during the training phase, and the unweighted average of all FluSight models received by CDC. Color indicates model rank in the 2017/2018 season (darker colors indicate higher rank and more accurate forecasts), and the forecast score (rounded to two decimal places) is printed in each cell. Regions are sorted with the most predictable region overall (i.e., highest forecast scores) at the top.


Fig 6.

Forecast score for the FSNetwork-TTW model in 2017/2018 by week relative to peak.

Scores for the two peak targets in each region were aligned to summarize performance relative to the peak week. On the x-axis, zero indicates the peak week and positive values represent weeks after the peak week. The black line indicates the overall geometric average across all regions. The grey band represents the geometric average across all regions and all seasons prior to 2017/2018.
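The geometric average used for the black line and grey band can be sketched as follows, with hypothetical per-region scores standing in for the real data (forecast scores here are probabilities assigned to the eventually observed outcome, so averaging on the log scale is the natural choice):

```python
import math

# Hypothetical per-region forecast scores at one week relative to peak.
scores = [0.42, 0.35, 0.58, 0.47]

# Geometric average across regions: exponentiate the mean log score.
geo_avg = math.exp(sum(math.log(s) for s in scores) / len(scores))
```

A geometric average of probabilistic forecast scores is equivalent to the arithmetic average of log scores, so a single near-zero score pulls it down sharply; this makes it a stricter summary than the arithmetic mean.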
