Abstract
Disease monitoring and surveillance play a crucial role in control and eradication programs, as it is important to track implemented strategies in order to reduce and/or eliminate a specific disease. The objectives of this study were to assess the performance of different statistical monitoring methods for endemic disease control program scenarios, and to explore what impact variation (noise) in the data had on the performance of these monitoring methods. We simulated 16 different scenarios of changes in weekly sero-prevalence. The changes included different combinations of increases, decreases and constant sero-prevalence levels (referred to as events). Two state-space models were used to model the time series, and different statistical monitoring methods were tested: univariate process control algorithms (the Shewhart Control Chart, Tabular Cumulative Sums and the V-Mask) and monitoring of the trend component (based on 99% confidence intervals and the trend sign). Performance was evaluated based on the number of iterations in which an alarm was raised for a given week after the changes were introduced. Results revealed that the Shewhart Control Chart was better at detecting increases over decreases in sero-prevalence, whereas the opposite was observed for the Tabular Cumulative Sums. The trend-based methods detected the first event well, but performance was poorer when adapting to several consecutive events. The V-Mask method seemed to perform most consistently, and the impact of noise in the baseline was greater for the Shewhart Control Chart and Tabular Cumulative Sums than for the V-Mask and trend-based methods. The performance of the different statistical monitoring methods varied when monitoring increases and decreases in disease sero-prevalence. Combining two or more methods might improve the potential scope of surveillance systems, allowing them to fulfill different objectives due to their complementary advantages.
Citation: Lopes Antunes AC, Jensen D, Halasa T, Toft N (2017) A simulation study to evaluate the performance of five statistical monitoring methods when applied to different time-series components in the context of control programs for endemic diseases. PLoS ONE 12(3): e0173099. https://doi.org/10.1371/journal.pone.0173099
Editor: Hiroshi Nishiura, Hokkaido University Graduate School of Medicine, JAPAN
Received: September 8, 2016; Accepted: February 15, 2017; Published: March 6, 2017
Copyright: © 2017 Lopes Antunes et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: The data used in this study are publicly available at the following link: https://figshare.com/s/8760d1be0d738e57292b (DOI: 10.6084/m9.figshare.4272260).
Funding: The authors would like to thank the Pig Research Centre – SEGES for providing part of the data used in this study, and the Danish Food and Agriculture Administration for funding the project.
Competing interests: The authors have declared that no competing interests exist.
Introduction
Surveillance and monitoring systems are critical for the timely and effective detection of changes in disease status. Over the last decade, several studies have applied different statistical monitoring methods for detecting outbreaks of (re-)emerging diseases in the context of syndromic surveillance in both human and veterinary medicine [1–3]. Different types of models (such as linear models, logistic regression and time-series models) have been implemented in the context of syndromic surveillance in order to evaluate the performance and implementation of these methods [4].
However, it may not be possible to make generalizations about the performance of these methods when used for monitoring endemic diseases and control programs. In this case, the availability of control measures (such as vaccination or health-management programs) results in lower incidence rates for endemic diseases than for (re)-emerging diseases. The dynamics of disease spread and immunity within a population from previous exposure also contribute to a lower incidence, resulting in slow and gradual changes in incidence and prevalence for endemic diseases [5]. It is important to follow-up on implemented control strategies in order to reduce and/or eliminate a specific disease [6]. Unexpected changes (such as an increase in disease prevalence or a failure to achieve a target value of disease prevalence within a certain period of time) indicate that the implemented strategies should be revised. When a control program fails to achieve certain goals, it can have a devastating impact on herds with susceptible animals.
In previous work, we assessed the performance of univariate process control algorithms (UPCA) in monitoring changes in the burden of endemic diseases based on sentinel surveillance [7]. However, these methods were not tested in the context of voluntary disease control and monitoring programs. In such cases, the frequency of testing depends on the monetary value of the animal and not just on the impact of the disease [6]. Programs for monitoring endemic diseases include the Danish Porcine Reproductive and Respiratory Syndrome Virus (PRRSV) monitoring program. Despite disease-control efforts, PRRSV has contributed to economic losses since its first diagnosis in 1992 [8]. Monitoring of PRRSV is primarily based on serological testing within the Specific Pathogen Free System (SPF System) [9]. The frequency of testing depends upon the health status of the herd within this system. As a consequence, the number of samples is not constant and it is necessary to use methods with a more dynamic structure, allowing the parameters to change over time, thus taking into account the variation in sample size. Previous studies have also discussed the influence of variation in the number of samples (i.e. the noise present in data) on the performance of different monitoring methods [7,10].
State-space models have a flexible structure, allowing parameters to be updated for each time step [11]. In addition, they can be decomposed, and changes in the components (such as trends and seasonal patterns) can be monitored for inference [12]. While state-space models have been used to monitor influenza in humans [13–15] as well as for herd-management decisions [16–19], it has not yet been determined how useful these techniques are for monitoring endemic diseases.
The objectives of this study were to assess the performance of different statistical monitoring methods for endemic disease control programs, and to explore what impact variation (noise) in the data had on the performance of these statistical monitoring methods. The simulation study was motivated by the Danish PRRSV monitoring program.
Two state-space models were chosen for this study based on their ability to monitor changes in different time-series components [11]. Five different statistical monitoring methods were evaluated for each model: three UPCA used in process-control monitoring [20], and two methods for monitoring changes based on the trend component of the time series.
Materials and methods
All methods described in this section were implemented using R version 3.1.1 [21].
Data
Laboratory submission data stored in the National Veterinary Institute–Technical University of Denmark (DTU Vet) information management system and in the Laboratory for Swine Diseases–SEGES Pig Research Centre (VSP-SEGES) were used to determine the weekly PRRS sero-prevalence in Danish swine herds between January 2007 and December 2014 (418 weeks in total). The weekly PRRS sero-prevalence was calculated using the same method described in a previous study [7]. A total of 51,639 laboratory submissions from 5,095 Danish swine herds were included. The average between-herd PRRS sero-prevalence was 0.24 (minimum = 0, maximum = 0.38) and the median number of herds tested for PRRS was 122 (minimum = 8, maximum = 191) per week.
Simulation study
A baseline scenario for sero-prevalence was defined based on the method described by Lopes Antunes et al. [7], where the number of positive herds per week was derived from a binomial distribution with probability (p) and sample size (n) equal to the number of Danish herds tested for PRRS in a given week. The data is publicly available at the following link: https://figshare.com/s/8760d1be0d738e57292b (DOI: 10.6084/m9.figshare.4272260). The weekly sero-prevalence was calculated as the simulated number of sero-positive herds divided by the total number of herds tested per week.
There was a constant initial sero-prevalence of 0.24 for the first 104 weeks of all simulated scenarios, corresponding to the average PRRS sero-prevalence observed in Danish herds in the diagnostic laboratory data from 2007 to 2014 (Fig 1). In Scenario A, this period was followed by an increase in the weekly sero-prevalence (Event 1), a constant level, and then a decrease (Event 2). Scenario B consisted of a decrease in the sero-prevalence (Event 1) followed by a constant level, then an increase during the subsequent weeks (Event 2). Each scenario was simulated with changes in the weekly sero-prevalence, including gradual increases to 0.33 and 0.38 (for Scenario A) and gradual decreases to 0.15 and 0.10 (for Scenario B) over 52 and 104 weeks. Different combinations and durations of events (increases/decreases in sero-prevalence) were tested for each scenario, resulting in a total of 16 simulated scenarios (Table 1). Event 1 of each scenario was started at a random time between weeks 104 and 156, and Event 2 was started after a random interval of between 52 and 104 weeks following the end of Event 1.
The between-herd weekly sero-prevalence was simulated using a binomial distribution based on the Danish herds tested for PRRSV during the corresponding week. An initial sero-prevalence of 0.24 was maintained for at least 104 weeks. This was followed by either an increase to 0.38 or a decrease to 0.10 over 52 weeks in two different events. The different statistical monitoring methods were evaluated for each event.
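As an illustration, a minimal R sketch of one simulated iteration of Scenario A is given below. This is an illustrative reconstruction rather than the original simulation code; the vector n_week of herds tested per week is a hypothetical placeholder for the numbers taken from the laboratory data.

```r
# Minimal sketch of one simulated iteration (Scenario A): constant sero-prevalence,
# a gradual increase to 0.33 over 52 weeks (Event 1), a plateau, and a gradual
# decrease back to 0.24 over 52 weeks (Event 2).
set.seed(1)
n_weeks <- 418
n_week  <- sample(8:191, n_weeks, replace = TRUE)   # herds tested per week (placeholder values)

start1  <- sample(104:156, 1)                       # Event 1 starts at a random week
gap     <- sample(52:104, 1)                        # constant period between the two events
start2  <- start1 + 52 + gap                        # Event 2 start

p_true  <- rep(0.24, n_weeks)
p_true[start1:(start1 + 51)]       <- seq(0.24, 0.33, length.out = 52)  # Event 1
p_true[(start1 + 52):(start2 - 1)] <- 0.33                              # constant level
p_true[start2:(start2 + 51)]       <- seq(0.33, 0.24, length.out = 52)  # Event 2
p_true[(start2 + 52):n_weeks]      <- 0.24

# Observed weekly sero-prevalence: binomially simulated positives / herds tested.
y_obs <- rbinom(n_weeks, size = n_week, prob = p_true) / n_week
```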
Modeling
A Dynamic Linear Model (DLM) and a Dynamic Generalized Linear Model (DGLM), both with a linear growth component as described previously [11], were used to model the simulated data.
The general objective of state-space models is to estimate an underlying parameter vector (θ) from observed data, combined with any prior information available at time 0 (D0), i.e. before an observation is made. The estimated parameter vector is updated each time there is a new observation (e.g. of the PRRS sero-prevalence). Specifically, the distribution of θt conditional on Dt (θt|Dt) is estimated for each time step t. These models can be used to estimate a one-step forecast of the mean, allowing for a comparison between observed and forecasted values.
Briefly, the DLM is represented by a set of two equations, defined as the observation equation (Eq 1) and the system equation (Eq 2):

Y_t = F'θ_t + v_t,   v_t ~ N(0, V_t)    (1)

θ_t = Gθ_{t-1} + w_t,   w_t ~ N(0, W_t)    (2)

where Y_t is the observed sero-prevalence for week t, and V_t and W_t are referred to as the observational variance and system variance, respectively. In our study, the observational variance was adjusted for the number of submissions in a given week (see Eq 5 below). The transposed design matrix (F') had the following structure:

F' = (1  0)    (3)

Eq 2 describes the evolution of θ from time t-1 to t. The system matrix (G) for a local linear trend model is given as:

G = ( 1  1
      0  1 )    (4)
The linear trend component enabled us to include a time-varying slope (or local linear trend), allowing the system to adapt to a potential positive or negative trend for each t. Assuming that the PRRS sero-prevalence was not auto-correlated over time, the observational variance was defined as:

V_t = Y_t(1 - Y_t) / n_t    (5)

where n_t was the number of herds tested for PRRS that week.
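A minimal R sketch of one filtering step for this local linear trend DLM is shown below. It is an illustration rather than the original implementation: the observational variance is assumed to take the binomial form of Eq 5, and the system variance is handled with a discount factor as described in the following sections.

```r
# One Kalman filtering step of the local linear trend DLM (illustrative sketch).
# m, C  : posterior mean vector and variance matrix of theta at week t-1
# y, n  : observed sero-prevalence and number of herds tested at week t
# delta : discount factor used in place of an explicit system variance W_t
dlm_step <- function(m, C, y, n, delta = 0.99) {
  FF <- matrix(c(1, 0), nrow = 2)             # design vector, F' = (1, 0)
  G  <- matrix(c(1, 0, 1, 1), nrow = 2)       # system matrix of the linear growth model
  a  <- G %*% m                               # prior mean of theta_t
  R  <- (G %*% C %*% t(G)) / delta            # prior variance, inflated via the discount factor
  f  <- as.numeric(t(FF) %*% a)               # one-step forecast of the sero-prevalence
  V  <- y * (1 - y) / n                       # observational variance (assumed binomial form, Eq 5)
  Q  <- as.numeric(t(FF) %*% R %*% FF) + V    # forecast variance
  e  <- y - f                                 # forecast error
  A  <- (R %*% FF) / Q                        # Kalman gain
  list(m = a + A * e,                         # updated posterior mean
       C = R - (A %*% t(A)) * Q,              # updated posterior variance
       f = f, Q = Q, u = e / sqrt(Q))         # forecast, forecast variance, normalized error
}
```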
Unlike the DLM, the DGLM was based on a binomial distribution. The observation equation (Eq 6) for the DGLM was defined as:

n_t·Y_t | θ_t ~ Binomial(n_t, π_t),   logit(π_t) = F'θ_t    (6)

where n_t·Y_t is the number of sero-positive herds in week t.
For both the DLM and DGLM, the variance-covariance matrix (Wt) describes the evolution of the variance and covariance of each parameter at each time step. Rather than estimating Wt explicitly, the system variance was modeled using a discount factor (δ), as previously described [17,22].
State-space model initialization and discount factors
Reference analysis was used to estimate the initial parameter distribution (θ0 | D0) ~ N[m0, C0], as described by West and Harrison [11].
The discount factors (δ) were defined using the method described by Kristensen [23], and were selected in order to optimize the performance of the model forecasts (i.e. minimizing the sum of squared normalized forecast errors). The DLM and the DGLM models were run for 418 weeks with a constant simulated sero-prevalence of 0.24, using different δ-values ranging from 0.1 up to 1 in increments of 0.01. The δ-value that minimized the sum of the squared normalized forecast errors was chosen for the analysis. For both models, the forecast errors e_t were normalized with respect to the forecast variance Q_t, such that u_t = e_t/√Q_t.
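A sketch of this grid search is shown below, re-using the dlm_step() helper above; the initial values m0 and C0 are arbitrary placeholders here, whereas the study obtained them by reference analysis.

```r
# Grid search for the discount factor delta minimizing the sum of squared
# normalized forecast errors over a constant-prevalence series (sketch).
choose_delta <- function(y_const, n_week, deltas = seq(0.10, 1.00, by = 0.01),
                         m0 = c(0.24, 0), C0 = diag(c(0.01, 0.001))) {
  score <- sapply(deltas, function(d) {
    m <- matrix(m0, nrow = 2); C <- C0; ss <- 0
    for (t in seq_along(y_const)) {
      step <- dlm_step(m, C, y_const[t], n_week[t], delta = d)
      m  <- step$m
      C  <- step$C
      ss <- ss + step$u^2                     # accumulate squared normalized forecast errors
    }
    ss
  })
  deltas[which.min(score)]                    # delta giving the best one-step forecasts
}
```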
Monitoring methods
Univariate process control algorithms (UPCA).
Three monitoring methods were used to generate alarms: the Shewhart Control Chart, Tabular Cumulative Sums, and V-Mask [20]. These methods are useful when only small changes are expected in the data [20].
The Shewhart Control Chart and Tabular Cumulative Sums were applied to the normalized forecast errors, whereas the V-Mask was applied to simple cumulative sums of the normalized forecast errors. The first 104 weeks of data were used as a “burn-in” period for the models and the alarms were generated from the third year onwards (>108 weeks) when the simulated events started.
The fixed upper and lower control limits (UCL and LCL) required for the Shewhart Control Chart to generate alarms in a given week were calculated based on the following equations [20]:

UCL_t = μ_t + Lσ_t    (7)

LCL_t = μ_t - Lσ_t    (8)

where μ_t is the center line (μ_t = 0), L is the selected number of standard deviations and σ_t is the standard deviation of the normalized forecast errors from t > 104.
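A minimal sketch of the Shewhart rule applied to the normalized forecast errors, excluding the burn-in weeks, could be:

```r
# Shewhart Control Chart on normalized forecast errors (illustrative sketch).
# u: normalized forecast errors; L: number of standard deviations; burn_in: weeks excluded.
shewhart_alarms <- function(u, L, burn_in = 104) {
  u_mon <- u[-seq_len(burn_in)]                 # monitor only the weeks after the burn-in period
  mu    <- 0                                    # centre line
  sigma <- sd(u_mon)                            # standard deviation of the monitored errors
  ucl   <- mu + L * sigma                       # upper control limit (Eq 7)
  lcl   <- mu - L * sigma                       # lower control limit (Eq 8)
  which(u_mon > ucl | u_mon < lcl) + burn_in    # weeks (original index) in which an alarm is raised
}
```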
The Tabular Cumulative Sums for week t were calculated as described by Montgomery [20]. This method accumulates deviations from T0 (the target value) that are above the target with one statistic, C+, and below the target with another statistic, C−. The C+ and C− for a given week (t) were calculated as:

C+_t = max[0, u_t - (T0 + K) + C+_{t-1}]    (9)

C−_t = max[0, (T0 - K) - u_t + C−_{t-1}]    (10)

where u_t is the normalized forecast error for week t, T0 = 0 and K is the reference value expressed as K = (1·σ_t)/2. Alarms were raised if C+_t or C−_t exceeded a threshold H (expressed in terms of the standard deviation) in a given week t. The starting values C+_0 and C−_0 were defined as zero.
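The corresponding recursion can be sketched in R as follows (resetting the statistics after an alarm is our assumption, by analogy with the V-Mask reset described below):

```r
# Tabular Cumulative Sums on normalized forecast errors (illustrative sketch).
# K: reference value; H: decision threshold; T0: target value (here zero).
tabular_cusum_alarms <- function(u, K, H, T0 = 0) {
  c_plus <- c_minus <- 0
  alarms <- integer(0)
  for (t in seq_along(u)) {
    c_plus  <- max(0, u[t] - (T0 + K) + c_plus)   # Eq 9: deviations above the target
    c_minus <- max(0, (T0 - K) - u[t] + c_minus)  # Eq 10: deviations below the target
    if (c_plus > H || c_minus > H) {
      alarms  <- c(alarms, t)                     # alarm in week t
      c_plus  <- 0                                # reset after an alarm (assumption)
      c_minus <- 0
    }
  }
  alarms
}
```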
The V-Mask was applied to successive values of the cumulative sum of normalized forecast errors, which was calculated as follows [20]:

cumulative sum_t = u_t + cumulative sum_{t-1} = Σ_{i=1}^{t} u_i    (11)
The V-Mask is defined by the lead distance d and the angle Ψ, which were chosen to make it equivalent to the Tabular Cumulative Sums, as described by Montgomery [20] (Fig 2). The point O of the V-Mask was placed directly on each value of the cumulative sum_t, with the line OP parallel to the horizontal axis. The V-Mask was applied to each new point on the cumulative sum chart, and its arms extended backwards towards the origin. If all the cumulative sums at previous time steps lay within the two arms of the V-Mask, the process was considered to be 'in control'; if any of the cumulative sums lay outside the arms, the process was considered 'out of control' and an alarm was given. The value of the cumulative sum_t was reset to zero each time an alarm was given.
The point O is positioned on the cumulative sum for each time t, and the line OP defines the lead distance d of the V-mask (a) as expressed using horizontal plotting time steps and it is applied to the cumulative sum (b).
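The decision rule can be written compactly by noting that a point j steps behind the current one lies outside the arms when its distance from the current cumulative sum exceeds the arm half-width at that horizontal position. The sketch below uses the parameterization Ψ = tan−1(K) and d = H/K given in the Calibration subsection; it is an illustration rather than the original implementation.

```r
# V-Mask on the cumulative sum of normalized forecast errors (illustrative sketch).
# With tan(Psi) = K and d = H/K, a point j steps back is outside the arms when
# |cusum_t - cusum_{t-j}| > H + j * K.
vmask_alarms <- function(u, K, H) {
  alarms  <- integer(0)
  cusum   <- 0
  history <- numeric(0)                           # cumulative sums since the last alarm
  for (t in seq_along(u)) {
    cusum   <- cusum + u[t]                       # Eq 11
    j       <- rev(seq_along(history))            # steps back to each earlier point
    outside <- abs(cusum - history) > H + j * K   # arm half-width grows by K per step back
    if (any(outside)) {
      alarms  <- c(alarms, t)                     # process 'out of control' in week t
      cusum   <- 0                                # reset the cumulative sum, as in the study
      history <- numeric(0)                       # restart the chart (assumption)
    } else {
      history <- c(history, cusum)
    }
  }
  alarms
}
```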
Calibration.
In order to calibrate the process control algorithms, the DLM and DGLM were applied to 418 weeks of simulated data with a constant sero-prevalence of 0.24. The process control algorithms were calibrated for a false alarm rate of 1% when applied to the weekly normalized forecast errors (excluding the first 104 weeks, which represented the “burn-in” period of both models). The Shewhart Control Chart was calibrated with L ranging from 1 to 4 standard deviations of the normalized forecast errors, and μt was defined as zero. For the Tabular Cumulative Sums, values of H ranging from 1 to 4 standard deviations of the normalized forecast errors were tested. This process was simulated 2,000 times for each parameter of the algorithm during calibration, and the median value of the false alarm rate was used as the summary statistic for evaluation.
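A sketch of such a calibration loop for the Shewhart Control Chart is shown below; simulate_errors() is a hypothetical helper that returns the normalized forecast errors of one 418-week constant-prevalence run.

```r
# Calibration sketch: median false alarm rate per candidate L over repeated
# constant-prevalence simulations (simulate_errors() is a hypothetical helper).
calibrate_L <- function(L_grid = seq(1, 4, by = 0.1), n_iter = 2000, burn_in = 104) {
  sapply(L_grid, function(L) {
    far <- replicate(n_iter, {
      u <- simulate_errors()                           # normalized forecast errors, one run
      length(shewhart_alarms(u, L, burn_in)) /
        (length(u) - burn_in)                          # false alarm rate for this iteration
    })
    median(far)                                        # median false alarm rate across iterations
  })
}
# The L whose median false alarm rate is closest to 0.01 would then be selected.
```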
Montgomery [20] suggested using Ψ = tan−1(K) and d = H/K in order for the V-Mask to be comparable to the Tabular Cumulative Sums. For this reason, these values were adopted for the implementation of the V-Mask in this study.
Monitoring the time-series trend.
For both the DLM and DGLM, the trend was extracted from the θ vector for each time step t. The variance of the trend parameter was calculated from the variance-covariance matrix of the posterior distribution, as previously described [11]. This variance was used to calculate 99% confidence intervals (CI) (Fig 3). Alarms were generated based on the trend when significant differences above or below zero were found according to the 99% CI. In addition, a second method was used to generate alarms whenever the trend component changed sign from positive to negative or vice versa (Trend Sign).
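Both rules can be sketched as below, assuming the posterior mean and variance of the trend component have been collected into the hypothetical vectors trend_m and trend_var:

```r
# Trend-based alarms (illustrative sketch): 99% CI rule and Trend Sign rule.
# trend_m, trend_var: posterior mean and variance of the trend component per week.
trend_alarms <- function(trend_m, trend_var, burn_in = 104) {
  z     <- qnorm(0.995)                               # two-sided 99% confidence interval
  lower <- trend_m - z * sqrt(trend_var)
  upper <- trend_m + z * sqrt(trend_var)
  ci_alarm   <- which(lower > 0 | upper < 0)          # trend significantly above or below zero
  sign_alarm <- which(diff(sign(trend_m)) != 0) + 1   # weeks where the trend changes sign
  list(ci   = ci_alarm[ci_alarm > burn_in],
       sign = sign_alarm[sign_alarm > burn_in])
}
```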
The rugs indicate where the trend component was significantly above (red) or below (blue) zero.
Performance assessment
The performance was assessed using the method proposed by Lopes Antunes et al. [7]. The cumulative sensitivity (CumSe) for week i after the start of an event was calculated as:

CumSe_i = (Σ_{j=0}^{i} x_j) / N_iter    (12)

where x_j is the number of iterations in which an alarm was given j weeks after the event started, and N_iter is the total number of iterations. A change in the sero-prevalence was considered to have been detected by week i if an alarm was generated in any week j ≤ i after the event started (i ≥ 0).
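A sketch of this calculation is given below, assuming first_alarm_week holds, for each iteration, the number of weeks between the start of the event and the first alarm (NA when no alarm was raised):

```r
# Cumulative sensitivity (Eq 12) from per-iteration first-alarm times (sketch).
# first_alarm_week: weeks from event start to first alarm, NA if never detected.
cum_se <- function(first_alarm_week, max_week = 104) {
  n_iter <- length(first_alarm_week)
  sapply(0:max_week, function(i) sum(first_alarm_week <= i, na.rm = TRUE) / n_iter)
}
# Example: the number of weeks needed to reach CumSe = 50%
# which(cum_se(first_alarm_week) >= 0.5)[1] - 1
```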
Convergence
A total of 10,000 iterations were simulated, with an initially constant sero-prevalence of 0.24 followed by a steady decrease to 0.15 over a period of 52 weeks. The decrease was started at a random time between weeks 104 and 156. The number of iterations required to reach a stable detection time was determined visually using a plot of the variance in the time to generate an alarm after the event started, evaluated in steps of 100 iterations for each of the five statistical monitoring methods based on both types of models. Stable variance was observed after 2,000 iterations; therefore, all simulated scenarios were run using this number of iterations.
Assessing the impact of noise in the data on the performance of detection methods
In order to assess the impact of noise in the data, the simulation study was repeated with n fixed at 600 herds tested per week. This value corresponds to a five-fold increase in the average number of Danish swine herds tested for PRRSV per week between 2007 and 2014, and it reduced the variation in the baseline (Fig 4).
The weekly sero-prevalence was simulated using a binomial distribution based on the Danish herds tested for PRRSV during the corresponding week (grey line), and with five times the average number of Danish herds (n = 600) tested for PRRSV (blue line). The red straight lines indicate the actual values of the simulated sero-prevalence.
Results
Parameters used for calibration
The selected values used to define a 1% false alarm rate for the UPCA based on the DLM corresponded to L = 2.6 for the Shewhart Control Chart, H = 6 and K = 6 for the Tabular Cumulative Sums, and a distance of 2 units for the V-Mask. For the DGLM, the values corresponded to L = 2.5, H = 16, K = 5, and a distance of 3.2 units. These parameters were recalibrated to maintain a 1% false alarm rate when the baseline was simulated with the number of herds tested per week increased to 600. With this constant number of herds tested, the DLM used L = 2.3 for the Shewhart Control Chart, H = 1.8 and K = 1 for the Tabular Cumulative Sums, and a distance of 1.8 units for the V-Mask. For the DGLM, these parameters were defined as L = 2.2, H = 11, K = 6 and a distance of 1.07 units.
A discount factor δ = 0.99 was used to define the system variance for the DLM and the DGLM.
Statistical monitoring methods based on the DLM
The number of weeks needed to detect the event in 50% of all simulated iterations (CumSe = 50%) is given in Table 2 for each event. For Event 1 of Scenario A based on the DLM, a CumSe = 50% was achieved most rapidly by the Trend Sign, followed by the V-Mask. For Event 2, the fastest CumSe = 50% was achieved using the V-Mask and the Shewhart Control Chart. Using the Trend Sign to monitor the changes, we noted an increase in the number of weeks needed to achieve CumSe = 50% when comparing Event 1 and Event 2. As an example: for Event 1, 37 weeks were required to detect an increase in sero-prevalence from 0.24 to 0.38 over a period of 104 weeks based on the 99% CI, and 2 weeks were required for the same increase and time period based on the Trend Sign. The same CumSe was achieved 74 and 59 weeks after the start of Event 2 for the 99% CI and the Trend Sign, respectively. Furthermore, the Tabular Cumulative Sums detected changes in Event 1 of Scenario A more quickly than in Event 2, with the exception of scenarios in which changes occurred over 104 weeks. The main differences found when comparing scenarios A and B (Table 2) were: the Tabular Cumulative Sums achieved a CumSe = 50% more quickly for Event 2 of Scenario B than of Scenario A; the Shewhart Control Chart achieved CumSe = 50% faster during Event 1 of Scenario B, but this value could not be achieved for Event 2 (expressed as NA in Table 2); and the V-Mask quickly detected changes in Event 2 of Scenario B. Moreover, the 99% CI and the Trend Sign had similar results in both scenarios.
Table 3 shows the CumSe52 (the CumSe achieved 52 weeks after the event started) for the different statistical monitoring methods based on the DLM, indicating the likelihood of each method detecting the simulated events in the baseline. For Scenario A, a higher CumSe52 was achieved by the trend-based methods (99% CI and Trend Sign) and the V-Mask for Event 1. For Event 2, the Shewhart Control Chart and the V-Mask had higher CumSe52, and the trend-based methods performed worst (CumSe52 ≤ 0.3). When comparing scenarios A and B, the major differences were seen for the Shewhart Control Chart, corresponding to a better performance (higher CumSe52) for Event 1 and a poorer performance for Event 2 of Scenario B. The other statistical monitoring methods presented similar results in both scenarios.
Comparing the results from both models
Results revealed that the statistical monitoring methods required more time to achieve CumSe = 50% when applied to DGLM (Table 4) compared to DLM (Table 2), with the exception of monitoring the Trend Sign in Event 1 (Scenario A) and the V-Mask in Event 1 (Scenario B). In these cases, CumSe = 50% was achieved at least twice as quickly for the DLM.
The trend-based methods produced identical results based on the DGLM (Table 5) and the DLM (Table 3). In general, these methods achieved the highest CumSe52 based on the DLM for all simulated scenarios.
Impact of noise on the different detection methods
Reducing the noise in the data (by increasing the sample size to 600 herds tested per week) resulted in higher CumSe for the statistical monitoring methods (Fig 5). The time required to achieve a CumSe = 1 was reduced by a factor of ≥2 for the Shewhart Control Chart and the Tabular Cumulative Sums. Similar results were found for the remaining 15 simulated scenarios (including Scenario B). For Event 1 of Scenario A, with an increase in sero-prevalence from 0.24 to 0.33 over 52 weeks based on the DLM, the time required to achieve CumSe = 50% was reduced by 117 weeks for the Shewhart Control Chart. The Tabular Cumulative Sums achieved a similar CumSe 8 weeks earlier based on the DLM than based on the DGLM. Noise in the baseline had a similarly small impact on the V-Mask and both trend-based methods, with only small differences (up to 2 weeks) in the time required to achieve CumSe = 50%.
The results are shown for Scenario A, corresponding to an increase in sero-prevalence from 0.24 to 0.33 over 52 weeks (Event 1), followed by a decrease from 0.33 to 0.24 over 52 weeks (Event 2). The CumSe of the Shewhart Control Chart (purple), Tabular Cumulative Sums (green), V-Mask (orange), 99% CI (grey) and Trend Sign (black) are shown based on the actual number of herds tested for PRRSV (straight lines) and on a fixed number (n = 600) of herds tested per week (dashed lines). The horizontal and vertical blue lines represent a CumSe = 50% and the CumSe achieved 52 weeks after the start of the event, respectively.
Discussion
We investigated the performance of different methods for detecting changes in endemic disease (sero-)prevalence. The study included: 1) univariate process control methods applied to residuals, and 2) monitoring changes in the trend component of the time series based on CI and absolute values. The Shewhart Control Chart detected increases in sero-prevalence better than decreases for both scenarios, whereas the opposite was observed for the Tabular Cumulative Sums. The trend-based methods were effective when detecting Event 1, but their performance was inferior when adapting to several consecutive events. The V-Mask seemed to be the method with the most consistent performance. Additionally, the impact of noise in the baseline was more profound for the Shewhart Control Chart and Tabular Cumulative Sums, and lower for the V-Mask and the trend-based methods.
Study design
This study was conducted based on sero-prevalence data from the Danish PRRS monitoring program. The different simulated scenarios were chosen to represent potential changes in sero-prevalence in the context of disease control programs, and were based on Danish pig production, where almost 40% of herds must follow rules concerning biosecurity, health control and transportation [9].
The approach used to simulate the sero-prevalence was based on a binomial distribution defined by n and p. Both parameters have an effect on the variance of the binomial distribution: for a constant n, higher values of p (up to 0.5) result in greater variance in the data obtained in each trial, and lower values of p reduce the variance [24]. Event 1 of Scenario A and Event 2 of Scenario B represented an increase in sero-prevalence (p), resulting in greater variance of the data, which might have affected the detection rates presented in this study. However, higher values of n for the same value of p also have an impact on the variance of the simulated data; noise in the simulated time-series was therefore reduced by defining n as five times the average number of herds tested.
A predefined false alarm rate of 1% was used for standardization, and to enable comparison between the different statistical monitoring methods. The value of 1% was chosen as a compromise between false alarms and maintaining confidence in the system.
Results of the performance evaluation
Event 1 was started after 104 weeks in order to guarantee that the “burn-in” period of the model was sufficient for representative inferences to be made. From a practical point of view, false alarms can be generated and true alarms can be masked during this period, thus reducing the sensitivity of the system for monitoring changes.
As anticipated, larger changes in sero-prevalence were indicated earlier. These results are consistent with the expected performance of control charts [20].
The simulations showed that the Shewhart Control Chart was faster than the Tabular Cumulative Sums at detecting decreases in sero-prevalence. Conversely, the Tabular Cumulative Sums was faster at detecting increases. According to Montgomery [20], the Tabular Cumulative Sums is the recommended method for detecting gradual changes. However, the same author also mentioned that the Shewhart Control Chart might detect decreases earlier than the Tabular Cumulative Sums, as verified in this study. In addition, the variance in the simulated time-series was higher (due to a higher p) during Event 2 of Scenario B, which might explain the superior performance of the Tabular Cumulative Sums. Furthermore, the results for the trend component showed that both models needed time to adapt to Event 2 of both scenarios. It is possible that the models were forced to adapt to three consecutive stages of the sero-prevalence (“constant-event-constant”) prior to Event 2. This occurred because the system variance (modeled using a discount factor) was optimized for a constant level, resulting in slower adaptation of the model trend during Event 2. As a consequence, the normalized forecast errors were higher and the Tabular Cumulative Sums generated alarms earlier, so that CumSe = 50% was achieved more quickly. The same argument can also be used to explain why the V-Mask attained CumSe = 50% faster in Event 2 of Scenario B.
The V-Mask showed the most consistent results among the univariate methods in relation to the number of weeks required to achieve a CumSe = 50%. This can be explained by the greater flexibility of the V-Mask method compared to other univariate process control methods based on pre-defined control limits.
Regarding the trend-based methods, the Trend Sign was quicker at detecting changes than the 99% CI. However, it is possible that the instantaneous detection of Event 1 for both scenarios based on the Trend Sign might occur due to the variation (above and below zero) of the trend component. In this case, changes in the sign (from positive to negative and vice versa) might occur by chance.
Impact of noise in the baseline
Decreasing the noise in the time-series resulted in higher CumSe for the Shewhart Control Chart and Tabular Cumulative Sums, whereas no important changes were found for the V-Mask or the trend-based methods. This shows the impact of variation in the time series and the importance of choosing the correct monitoring method. When the Shewhart Control Chart and Tabular Cumulative Sums were used, alarms were generated according to the intensity of noise in the data, regardless of whether they were applied to forecast errors or directly to the data. The superior performance of the Shewhart Control Chart may be due to the upper and lower control limits being defined based on data with less variation. Despite recalibrating to a 1% false alarm rate, the applied control limits were based on lower standard deviations, which contributed to alarms being generated earlier. One possible explanation for the superior performance of the Tabular Cumulative Sums is that the noise in the simulated data was greater during the increase in sero-prevalence, thus increasing the chances of alarms being generated. The impact of noise in the data on the Tabular Cumulative Sums has also been reported previously [1,7].
Decomposing the time-series also offers a way to monitor the underlying trend usually masked by random noise in the data. Monitoring the trend component based on CI or target values provides a more stable pattern compared to monitoring the forecast errors.
Perspectives
Choosing the correct methods for the prediction and determination of anomalies is critical for their effective detection [25]. Over the last decade, research has focused on the detection of (re-)emerging disease outbreaks [1–3]. Nevertheless, it is also important to follow up on implemented strategies in order to reduce and/or eliminate specific endemic diseases [6], and control and eradication programs play an important role within this context [26].
In this study, we showed that no single method is robust across all scenarios. Similar conclusions were drawn in previous studies on syndromic surveillance for (re-)emerging diseases [1,2,27,28], where the authors concluded that no single method was suitable for all outbreak signals. A surveillance system should be able to detect a variety of outbreaks with different characteristics [29,30]. This is important when the outbreak signature is unknown. The same challenges extend to monitoring changes in (sero-)prevalence in the context of endemic diseases and eradication programs.
The efficiency with which changes in prevalence were monitored varied among the different methods. Choosing one specific monitoring method is therefore challenging, and the objectives of the monitoring program and the performance of the statistical monitoring methods in different time patterns should be taken into account [31]. Furthermore, it is important to consider the objectives of the control program, the nature of the disease, political and economic factors, and the infrastructure of the country in which it will be implemented [32].
In this study, state-space models were used to monitor endemic disease and control programs using two distinct monitoring approaches for the time-series components. The principles can also be applied to general modeling, and to the monitoring and surveillance of (re-)emerging diseases in human and veterinary sciences. The need to monitor declining changes in the context of veterinary syndromic surveillance has previously been discussed [33], where the importance of monitoring decreases in the number of submissions (such as a decrease in the compliance of farms with passive disease surveillance) and the need for detection and action in the context of active surveillance were highlighted.
Conclusions
Surveillance and monitoring systems are critical for the timely and effective control of infectious diseases. The different statistical monitoring methods used in this study performed differently in monitoring changes in disease sero-prevalence. In this context, choosing a single method is challenging, and the objectives of the monitoring program as well as the performance of the statistical monitoring methods in different time patterns should be taken into account. Furthermore, noise in the simulated baseline had an impact on the Shewhart Control Chart and the Tabular Cumulative Sums, whereas no substantial changes were found for the trend-based methods. Using the V-Mask or monitoring the trend component provided a consistent approach to monitoring changes in disease sero-prevalence.
Acknowledgments
The authors would like to thank the Pig Research Centre–SEGES for providing part of the data used in this study, and the Danish Food and Agriculture Administration for funding the project.
Author Contributions
- Conceptualization: ACLA DJ TH NT.
- Data curation: ACLA.
- Formal analysis: ACLA DJ NT.
- Funding acquisition: TH NT.
- Investigation: ACLA.
- Methodology: ACLA DJ NT.
- Project administration: NT.
- Resources: ACLA DJ.
- Software: ACLA DJ NT.
- Supervision: NT.
- Validation: ACLA.
- Visualization: ACLA.
- Writing – original draft: ACLA.
- Writing – review & editing: DJ TH NT.
References
- 1. Dórea FC, McEwen BJ, McNab WB, Sanchez J, Revie CW. Syndromic Surveillance Using Veterinary Laboratory Data: Algorithm Combination and Customization of Alerts. Pappalardo F, editor. PLoS One. Public Library of Science; 2013;8: e82183. pmid:24349216
- 2. Jackson ML, Baer A, Painter I, Duchin J. A simulation study comparing aberration detection algorithms for syndromic surveillance. BMC Med Inform Decis Mak. 2007;7: 6. pmid:17331250
- 3. Buckeridge DL, Switzer P, Owens D, Siegrist D, Pavlin J, Musen M. An evaluation model for syndromic surveillance: assessing the performance of a temporal algorithm. MMWR Morb Mortal Wkly Rep. 2005;54 Suppl: 109–115.
- 4. Rodríguez-Prieto V, Vicente-Rubiano M, Sánchez-Matamoros A, Rubio-Guerri C, Melero M, Martínez-López B, et al. Systematic review of surveillance systems and methods for early detection of exotic, new and re-emerging diseases in animal populations. Epidemiol Infect. 2014; 1–25.
- 5. Carslake D, Grant W, Green LE, Cave J, Greaves J, Keeling M, et al. Endemic cattle diseases: comparative epidemiology and governance. Philos Trans R Soc Lond B Biol Sci. 2011;366: 1975–86. pmid:21624918
- 6. Doherr MG, Audigé L. Monitoring and surveillance for rare health-related events: a review from the veterinary perspective. Philos Trans R Soc Lond B Biol Sci. 2001;356: 1097–1106. pmid:11516387
- 7. Lopes Antunes AC, Dórea F, Halasa T, Toft N. Monitoring endemic livestock diseases using laboratory diagnostic data: A simulation study to evaluate the performance of univariate process monitoring control algorithms. Prev Vet Med. Elsevier B.V.; 2016;127: 15–20. pmid:27094135
- 8. Kvisgaard L, Hjulsager C, Rathkjen P, Breum S, Trebbien R, Larsen LE. PRRSV outbreak with high mortality in northern part of Denmark. EuroPRRS2011-“Understanding and combating PRRS in Europe.” Novi Sad, Serbia; 2011. p. 60.
- 9. SPF-DANMARK [Internet]. 2015 [cited 12 May 2015]. Available: https://www.spf.dk/
- 10. Dórea FC, McEwen BJ, Mcnab WB, Revie CW, Sanchez J. Syndromic surveillance using veterinary laboratory data: data pre-processing and algorithm performance evaluation. J R Soc Interface. 2013;10: 20130114. pmid:23576782
- 11. West M, Harrison J. Bayesian Forecasting and Dynamic Models. 2nd Ed. New York, USA: Springer; 1997.
- 12. West M. Time Series Decomposition. Biometrika. 1997;84: 489–494.
- 13. Cao P-H, Wang X, Fang S-S, Cheng X-W, Chan K-P, Wang X-L, et al. Forecasting influenza epidemics from multi-stream surveillance data in a subtropical city of China. PLoS One. Public Library of Science; 2014;9: e92945. pmid:24676091
- 14. Smith AF, West M. Monitoring renal transplants: an application of the multiprocess Kalman filter. Biometrics. 1983;39: 867–878. pmid:6367844
- 15. Cowling BJ, Wong IOL, Ho L-M, Riley S, Leung GM. Methods for monitoring influenza surveillance data. Int J Epidemiol. 2006;35: 1314–1321. pmid:16926216
- 16. Thysen I. Monitoring Bulk Tank Somatic Cell Counts by a Multi-Process Kalman Filter. Acta Agric Scand. 1993; 58–64.
- 17. Madsen TN, Kristensen AR. A model for monitoring the condition of young pigs by their drinking behaviour. Comput Electron Agric. 2005;48: 138–154.
- 18. Ostersen T, Cornou C, Kristensen AR. Detecting oestrus by monitoring sows’ visits to a boar. Comput Electron Agric. Elsevier B.V.; 2010;74: 51–58.
- 19. Jensen DB, Cornou C, Toft N, Kristensen AR. A multi-dimensional dynamic linear model for monitoring slaughter pig production. 7th European Conference on Precision Livestock Farming. Milan; 2015. pp. 503–512.
- 20. Montgomery D. Introduction to Statistical Quality Control. 6th Ed. Arizona State University: John Wiley & Sons, Inc.; 2009.
- 21. R Core Team. R: A Language and Environment for Statistical Computing. Vienna, Austria; 2014.
- 22. Cornou C, Vinther J, Kristensen AR. Automatic detection of oestrus and health disorders using data from electronic sow feeders. Livest Sci. Elsevier B.V.; 2008;118: 262–271.
- 23. Kristensen AR, Jørgensen E, Toft N. “Advanced” topics from statistics. Herd Management Science II: Advanced Topics. Copenhagen: University of Copenhagen, Faculty of Life Sciences, Department of Large Animal Sciences; 2010. pp. 331–348.
- 24. Reddy TA. Applied Data Analysis and Modeling for Energy Engineers and Scientists. Springer Science & Business Media; 2011.
- 25. Elbert Y, Burkom HS. Development and evaluation of a data-adaptive alerting algorithm for univariate temporal biosurveillance data. Stat Med. 2009;28: 3226–3248. pmid:19725023
- 26. Salman MD. Surveillance and Monitoring Systems for Animal Health Programs and Disease Surveys. Animal Disease Surveillance and Survey Systems: Methods and Applications. 1st ed. Iowa: Blackwell Publishing; 2003. pp. 3–13.
- 27. Yahav I, Shmueli G. Algorithm Combination for Improved Performance in Biosurveillance Systems. In: Zeng D, Gotham I, Komatsu K, Lynch C, Thurmond M, Madigan D, et al., editors. Intelligence and Security Informatics: Biosurveillance. Springer; 2007. pp. 91–102.
- 28. Dupuy C, Morignat E, Dorea F, Ducrot C, Calavas D, Gay E. Pilot simulation study using meat inspection data for syndromic surveillance: use of whole carcass condemnation of adult cattle to assess the performance of several algorithms for outbreak detection. Epidemiol Infect. Cambridge University Press; 2015;143: 2559–2569. pmid:25566974
- 29. Reis BY, Pagano M, Mandl KD. Using temporal context to improve biosurveillance. Proc Natl Acad Sci U S A. 2003;100: 1961–5. pmid:12574522
- 30. Lombardo JS, Burkom H, Pavlin J. ESSENCE II and the framework for evaluating syndromic surveillance systems. MMWR Morb Mortal Wkly Rep. 2004;53 Suppl: 159–165.
- 31. Wagner MM, Moore AW, Aryel RM. Handbook of Biosurveillance. Elsevier; 2006. pp. 217–234.
- 32. Christensen J. Application of Surveillance and Monitoring Systems in Disease Control Programs. In: Salman M, editor. Animal Disease Surveillance and Survey Systems: Methods and Applications. 1st ed. Blackwell Publishing; 2003. pp. 15–34.
- 33. Dórea FC, Lindberg A, McEwen BJ, Revie CW, Sanchez J. Syndromic surveillance using laboratory test requests: A practical guide informed by experience with two systems. Prev Vet Med. 2014;116: 313–24. pmid:24767815