
The potential of resilience indicators to anticipate infectious disease outbreaks, a systematic review and guide

Abstract

To reduce the consequences of infectious disease outbreaks, the timely implementation of public health measures is crucial. Currently used early-warning systems are highly context-dependent and require a long phase of model building. A proposed solution to anticipate the onset or termination of an outbreak is the use of so-called resilience indicators. These indicators are based on the generic theory of critical slowing down and require only incidence time series. Here we assess the potential for this approach to contribute to outbreak anticipation. We systematically reviewed studies that used resilience indicators to predict outbreaks or terminations of epidemics. We identified 37 studies meeting the inclusion criteria: 21 using simulated data and 16 real-world data. 36 out of 37 studies detected significant signs of critical slowing down before a critical transition (i.e., the onset or end of an outbreak), with a highly variable sensitivity (i.e., the proportion of true positive outbreak warnings) ranging from 0.03 to 1 and a lead time ranging from 10 days to 68 months. Challenges include low resolution and limited length of time series, a too rapid increase in cases, and strong seasonal patterns which may hamper the sensitivity of resilience indicators. Alternative types of data, such as Google searches or social media data, have the potential to improve predictions in some cases. Resilience indicators may be useful when the risk of disease outbreaks is changing gradually. This may happen, for instance, when pathogens become increasingly adapted to an environment or evolve gradually to escape immunity. High-resolution monitoring is needed to reach sufficient sensitivity. If those conditions are met, resilience indicators could help improve the current practice of prediction, facilitating timely outbreak response. We provide a step-by-step guide on the use of resilience indicators in infectious disease epidemiology, and guidance on the relevant situations to use this approach.

Introduction

Infectious disease outbreaks are a leading cause of mortality worldwide, especially in low-income countries and for children [1], with substantial economic and psychological repercussions. Prevention measures such as vaccination and non-pharmaceutical interventions can reduce the consequences of epidemics, and even eliminate some diseases [2]. Measures are most effective if executed before cases start increasing exponentially. However, as outbreaks are hard to anticipate, control efforts often start too late.

Early warning systems have been developed to predict when and where outbreaks will start [3]. These typically depend on the statistical association between the risk of an outbreak and predictive variables. The development of such methods requires having access to various data sources, testing associations, and building statistical models [4]. Diverse factors can be used as predictors, such as climate, geographical settings, population, or socioeconomic data. The use of early warning systems to anticipate outbreaks and predict their consequences has been shown to be effective in some cases, for instance, in the anticipation of malaria as well as influenza outbreaks [5, 6]. Other early-warning systems, such as Google Flu Trends, yielded more modest and variable performance and showed rather low associations between the predictors and the risk of an outbreak [7].

Early-warning systems are highly context-dependent, and no standard protocol to build and evaluate them has been proposed [8]. They require consistent parametrization and model fitting. Moreover, complex interactions between the variables, as well as confounding effects, are hard to capture. Developing such models is a long and tedious process that requires repeated cycles of evaluation and adaptation. Further, previously effective early-warning systems might become outdated due to changing conditions and have to be updated [9]. As such, early-warning systems require regular rounds of re-evaluation. A generic, model-free approach would be valuable to improve and complement outbreak anticipation. The use of resilience indicators could be such a generic approach, and has been shown to be effective in detecting critical transitions in other complex systems [10].

The start of an outbreak can be defined as a critical transition, a phenomenon observed in many complex systems. Complex systems are defined as systems involving many components interacting with one another, leading to non-linear behaviors that are hard to predict. Examples of complex systems are financial markets, ecosystems, the climate and, indeed, infectious diseases in populations. In complex systems, a critical transition occurs when a small change in an underlying condition brings the system across a critical threshold beyond which change becomes self-propelling, driving the system towards a new state. Many complex systems may undergo critical transitions. For instance, financial markets may collapse [11], vegetated ecosystems may shift to a barren state [12], and coral reefs may be overgrown by macroalgae [13]. Being able to anticipate such shifts could help prevent their consequences.

Mathematically, it can be shown that the dynamics of a system slow down close to a critical transition. This phenomenon is known as critical slowing down [10]. It implies that, approaching a critical transition, systems are expected to lose their resilience, i.e., the ability to maintain their normal stabilizing dynamics (e.g., a disease-free state) when subjected to disturbances [14]. In such situations, they recover more slowly from external perturbations. It is usually not possible to directly measure the recovery rate of a system. Therefore, statistical indicators of critical slowing down (e.g., variance, autocorrelation) are computed from representative time series to estimate how close the system may be to undergoing a critical transition [15]. We will refer to these metrics as resilience indicators. Some more background is provided in Box 1.

Box 1. Critical slowing down to anticipate sharp changes

When conditions change, some complex systems can approach a critical transition, a threshold at which they lose their stability. Before the threshold is reached, they lose their resilience, which is reflected in the intrinsic properties of the system. In particular, recovery from perturbations becomes slower, a phenomenon called critical slowing down.

As the slower recovery of a system from external perturbations can often not be measured directly, statistical metrics, referred to as resilience indicators, are used as a proxy. This loss of resilience can be observed in the time series of the system. Since most systems are constantly affected by external perturbations, the increasing time to return to equilibrium is visible in the autocorrelation structure of the time series [15], which displays significant trends as the system approaches the transition. A rolling window is used to measure these trends: indicators are calculated repeatedly in overlapping subsets of the data to reveal their evolution over time [17], as sketched below.
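As an illustration, a minimal sketch of such a rolling-window computation is given below (Python with pandas; the window length and function names are illustrative choices, not those of any included study).

```python
import pandas as pd

def rolling_indicators(incidence, window=100):
    """Rolling-window variance and lag-1 autocorrelation of an incidence time series.

    incidence : sequence of case counts, one value per reporting interval
    window    : number of observations per window (illustrative default)
    """
    x = pd.Series(incidence, dtype=float)
    variance = x.rolling(window).var()
    # lag-1 autocorrelation within each window
    lag1_ac = x.rolling(window).apply(lambda w: w.autocorr(lag=1), raw=False)
    return pd.DataFrame({"variance": variance, "lag1_autocorrelation": lag1_ac})
```

It is the trend in these indicator series over time, rather than their absolute values, that signals an approaching transition.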

Similarly, indicators of complexity can be used to anticipate a critical transition. These indicators measure the complexity of a system, defined as its level of disorder. Like resilience indicators, complexity indicators are expected to display trends prior to a critical transition, as the complexity of a system is expected to change as it approaches a sharp transition. However, complexity measures as indicators of an upcoming critical transition have yielded contrasting results in previous studies in other fields [18, 19].

In general, the critical transition in models of infectious diseases is mathematically a transcritical bifurcation. This means that below the critical threshold R = 1, the system, represented by the number of cases, is stabilized at a disease-free state, where only a few cases are observed. As the critical threshold R = 1 is approached, the system’s recovery time increases: when perturbed, for instance, by the introduction of infected individuals, the number of cases will take longer to vanish (Fig 1B). Once the threshold R = 1 is crossed, the disease-free equilibrium becomes unstable (Fig 1C): any perturbation, i.e., the introduction of an infected individual, can result in a major outbreak.
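As a sketch of why recovery slows near the threshold, consider the classical SIR model with transmission rate β and recovery rate γ, with S and I the susceptible and infected fractions (a textbook argument, not specific to any included study):

```latex
\frac{dS}{dt} = -\beta S I, \qquad
\frac{dI}{dt} = \beta S I - \gamma I, \qquad
R_0 = \frac{\beta}{\gamma}.
```

Near the disease-free state (S ≈ 1, I ≈ 0), a small number of infections decays approximately as dI/dt ≈ (β − γ)I = γ(R0 − 1)I. The recovery rate γ|R0 − 1| therefore shrinks to zero as R0 approaches 1, and the characteristic return time 1/(γ|R0 − 1|) diverges; the sign of the growth rate flips at the transcritical bifurcation R0 = 1.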

Fig 1. Illustration of resilience indicators based on simulated data using an SIR model.

In the model, the transmission rate increases linearly over time, resulting in a critical transition when R crosses one. (A–C) Potential landscapes, showing the energy of the system for different states; the ball represents the state of the system. (A) R is relatively far from the threshold: the system recovers easily from an external perturbation. (B) R is close to the critical threshold: the potential to recover from external perturbations is low, and the system undergoes critical slowing down. (C) The threshold is crossed: the system stabilizes at a state in which the disease is endemic. (D) Incidence time series generated using an SIR model. The system is undergoing a critical transition: R increases linearly over time until it crosses one (shaded area). (E, F) Associated resilience indicators calculated from the simulated time series (daily resolution) using a rolling window. A significant increase in autocorrelation and variance is observed prior to the outbreak.

https://doi.org/10.1371/journal.pgph.0002253.g001
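A minimal sketch of the kind of simulation underlying Fig 1 is given below, assuming a discrete-time stochastic SIR model in which the transmission rate rises linearly so that R crosses one partway through the run; all parameter values are illustrative and not those used for the original figure.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def simulate_emergence(n_days=2000, pop=100_000, gamma=0.1,
                       beta_start=0.05, beta_end=0.15, importation=0.5):
    """Discrete-time stochastic SIR in which beta rises linearly, so that
    R = beta/gamma crosses one halfway through the simulation. A small Poisson
    importation of cases keeps the system perturbed while below the threshold."""
    S, I = pop - 1, 1
    betas = np.linspace(beta_start, beta_end, n_days)
    incidence = np.zeros(n_days, dtype=int)
    for t, beta in enumerate(betas):
        p_inf = 1.0 - np.exp(-beta * I / pop)            # per-susceptible daily infection probability
        new_cases = min(rng.binomial(S, p_inf) + rng.poisson(importation), S)
        recoveries = rng.binomial(I, 1.0 - np.exp(-gamma))
        S -= new_cases
        I += new_cases - recoveries
        incidence[t] = new_cases
    return incidence
```

Feeding the resulting incidence series into the rolling-window computation sketched in Box 1 should reproduce the qualitative behaviour of Fig 1E and 1F: rising variance and lag-1 autocorrelation as R approaches one.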

Pathogen transmission is a complex dynamic process too, as it involves many individuals interacting with one another. When an epidemic starts, the system undergoes a critical transition from a disease-free state to disease emergence. This happens when the effective reproduction number R, i.e., the number of secondary cases arising from an average infected individual in a population, exceeds one. This can be due to a gradual change in conditions, such as a decrease in vaccination rates or improving climatic conditions for the pathogen. Critical slowing down is expected in epidemiological systems prior to R crossing one [16]. Therefore, resilience indicators could theoretically be used to anticipate epidemiological critical transitions based only on incidence time series, supporting timely decision-making. However, the method raises challenges regarding the required data quality, data processing, and interpretation.

This review summarizes the latest findings on the application of resilience indicators to anticipate disease outbreaks based on simulated and real-world data. We address the types of disease, data types, and types of transition suitable to be anticipated using resilience indicators. We review the sensitivity of resilience indicators in public health contexts and discuss their limitations.

Material and methods

We performed a comprehensive literature review to evaluate the state of the art of resilience indicators applied to anticipating critical transitions in infectious disease epidemiology. Targeted studies were peer-reviewed research publications using resilience indicators as early warning signals to anticipate infectious disease transitions. The review protocol was not registered.

Search strategy

This review focuses on resilience indicators based on the theory of critical slowing down. Two high-impact papers published in Nature and Science, cited 2,431 and 1,191 times respectively, are the main references regarding the theory of critical slowing down [10, 20]. We assumed that any study using this theory would cite one of these papers. We therefore carried out a forward citation search, intersected with a thematic search to avoid retrieving too many irrelevant results. The search was performed on September 1st, 2022, using Scopus.

Among the studies citing one of these two papers, a thematic search was performed to retrieve only studies aiming at anticipating critical transitions related to infectious disease outbreaks. The keywords used for the thematic search were outbreak, epidemic, disease, infecti*, ill*, epidemiolog*, pest, virus, pandemic, bacteria, pathogen, parasite. To ascertain that the keywords were relevant, we also checked whether adding the names of the top 20 infectious diseases according to WHO to the search keywords would yield new results; it did not.

A specific search in the main databases was also used to prevent missing key studies, in particular studies that did not cite one of the two key papers mentioned above. Scopus, Web of Science and PubMed were used for the database search. This search combined all keywords of the thematic search described above with the term "early-warning signals" using "AND". The search was purposely kept specific to avoid retrieving too many irrelevant results.

Selection

The selection was then performed (Fig 2). The focus was on pathogens affecting humans or animals; plant and crop pathogens were excluded. Only indicators based on the theory of critical slowing down were considered; other methods to anticipate outbreaks were excluded. Only primary publications were considered. The first round of selection was based on the title and abstract and retrieved 60 publications. The second round of selection was based on the full text, using the selection criteria (Table 1), and retained a final 37 publications (Fig 2, S3 Table). Both rounds of selection were performed by two reviewers independently (CD, RR). At the end of each round, disagreements were discussed by both reviewers until a consensus was reached.

Fig 2. PRISMA flowchart.

PRISMA flowchart of the literature search process.

https://doi.org/10.1371/journal.pgph.0002253.g002

Classification

We classified the included studies based on the following criteria (S2 Table):

  • The type of disease studied: generic disease, seasonal disease, vector-borne disease, or COVID-19.
  • Identified best performing indicator: the indicator yielding the best performance to anticipate disease transition in the study.
  • The type of data used: simulated using mechanistic models or real-world data.
  • The type of transition anticipated: onset of an outbreak or termination/elimination.

The following data were extracted from the included studies: authors, year of publication, research question, performance of the indicators (a quantification of how often the indicators could anticipate an upcoming transition, as reported in the study) and the method used to estimate it, false positive rate, and lead time (how long in advance the transition could be anticipated). Finally, when applicable, we extracted the performance of the two most popular indicators, variance and autocorrelation. The data were extracted by two reviewers independently (CD, IAL, EHN). Disagreements were discussed at the end of the information retrieval process until an agreement was reached. The studies’ methodological quality and potential biases were also extracted and discussed in the narrative synthesis.

Results

Among the retrieved studies, 37 met the inclusion criteria. Included studies were published between 2013 and 2022. There has been increasing interest in resilience indicators to anticipate disease outbreaks, and an increasing number of studies have been published on the topic, especially since 2020 when COVID-19 data became publicly available (Fig 3A). Many of the studies (n = 15, 41%) did not focus on a specific disease and used generic models of infectious diseases to investigate critical slowing down. Among the studies investigating specific diseases (n = 22, 59%), 12 different diseases were studied, the main one being COVID-19 (n = 9, 24%). A total of 20 indicators were investigated, the most popular being variance, autocorrelation, mean, and coefficient of variation, which were reported to be among the best-performing indicators in 51% (n = 19), 32% (n = 12), 22% (n = 8) and 22% (n = 8) of the studies, respectively. Most of the time, resilience indicators were calculated in simulated data only, to anticipate artificial critical transitions (n = 20, 54%). However, the performance of resilience indicators was also investigated on real-world data (n = 17, 46%) (Fig 3B). The onset of outbreaks was most often examined (n = 32, 86%); the termination of outbreaks was investigated in fewer studies (n = 10, 27%) (Fig 3C). When quantified, the performance was typically calculated using the area under the ROC curve (AUC). We will refer to the AUC as (prediction) performance hereafter, unless specified otherwise.
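As an illustration of how such an AUC can be obtained, the sketch below scores each simulated time series by the trend in a rolling-window indicator and measures how well that score separates emerging from non-emerging (null) simulations; it reflects the general logic of the simulation studies rather than the exact procedure of any single paper, and the indicator and window are illustrative choices.

```python
import numpy as np
import pandas as pd
from scipy.stats import kendalltau
from sklearn.metrics import roc_auc_score

def trend_score(series, window=100):
    """Kendall's tau between time and the rolling variance: one simple trend statistic."""
    var = pd.Series(series, dtype=float).rolling(window).var().dropna()
    return kendalltau(np.arange(len(var)), var)[0]

def indicator_auc(emerging_runs, null_runs, window=100):
    """AUC of the trend statistic used as a classifier of emerging vs. non-emerging runs."""
    scores = [trend_score(r, window) for r in emerging_runs] + \
             [trend_score(r, window) for r in null_runs]
    labels = [1] * len(emerging_runs) + [0] * len(null_runs)
    return roc_auc_score(labels, scores)
```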

Fig 3. Overview of the 37 papers included in this review.

(A) Number of included papers per year. The number of studies on resilience indicators to anticipate epidemics has shown an increasing trend in the last few years. Since 2020, more studies have been published as data on the COVID-19 pandemic became publicly available. (B) Included papers classified according to the type of study into three categories: case studies, simulation studies, and simulation studies supported by case studies. (C) Included papers classified according to the type of transition into three categories: the onset of an outbreak, disease elimination, and both.

https://doi.org/10.1371/journal.pgph.0002253.g003

Indicators of resilience and complexity

A large variety of indicators can be used to monitor resilience. In the included studies, 20 different indicators were investigated in total (S1 Table). In n = 24 studies (65%), the reported best-performing indicators were autocorrelation, variance, mean, or coefficient of variation. Other well-performing indicators were the logarithmic distance and composite indicators. The best-performing indicator may vary by disease system (S1 Table). For example, wavelet reddening provided the best performance with periodic data [21], whereas the coefficient of variation outperformed other indicators in anticipating immune-waning-induced re-emergence of a disease [22]. Here, we describe the use of variance and autocorrelation as well as alternative indicators such as combinations of indicators, dynamical network markers, and deep learning algorithms.

Variance was reported to be one of the best indicators in 19 studies (51%), yielding a prediction performance between 0.5 and 1. However, it is not robust to all types of transition and stochasticity. Supporting Dakos et al.’s findings [23], O’Regan et al. found that variance displays a different trend depending on the type of data, the type of transition and the type of stochasticity [24]. O’Regan et al. showed that specific types of noise could alter the trend in variance: a decrease or no trend at all was sometimes observed, making variance an unreliable indicator in those cases [24].

Autocorrelation, coefficient of variation, and power spectrum are more robust to the type of stochasticity than variance: an increase is expected prior to a critical transition. Autocorrelation was reported to be the best-performing indicator in n = 12 (32%) studies, is robust to data imperfections (section “Data imperfections”), and yielded a performance ranging from 0.2 to 1.
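For reference, the lag-1 autocorrelation within a rolling window of n observations x_1, …, x_n is commonly estimated as

```latex
\hat{\rho}_1 = \frac{\sum_{t=1}^{n-1} (x_t - \bar{x})\,(x_{t+1} - \bar{x})}{\sum_{t=1}^{n} (x_t - \bar{x})^2},
```

which moves towards 1 as successive observations become increasingly similar, i.e., as recovery from perturbations slows.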

Combinations of indicators have also been studied to anticipate disease emergence. Brett et al. used a supervised learning algorithm to establish an optimal weighted combination of indicators, mainly including skewness, kurtosis, and coefficient of variation [25]. The performance of this combination of indicators was investigated in simulated as well as real-world data. The authors reported a prediction performance between 0.7 and 0.85 in anticipating the re-emergence of several diseases, such as mumps and pertussis. The lead time, namely how long in advance an upcoming outbreak is detected, was between 6 months and 4 years. Similarly, O’Brien et al. could anticipate 2 of the 3 COVID-19 waves in the UK with a lead time ranging from 0 to 48 days using a composite of variance, autocorrelation and skewness [26].
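A simple equal-weight composite, given here only as an illustration of the idea (the published composites cited above learn their weights from training data rather than weighting equally), sums the z-scored indicator series:

```python
import pandas as pd

def composite_indicator(indicators: pd.DataFrame) -> pd.Series:
    """Equal-weight composite: z-score each indicator column and sum across columns.

    indicators : DataFrame whose columns are indicator time series
                 (e.g., rolling variance, autocorrelation, skewness).
    """
    z = (indicators - indicators.mean()) / indicators.std()
    return z.sum(axis=1)
```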

When case reports are disaggregated by location, dynamical network markers (DNM) can be used to anticipate disease (re-)emergence. These indicators were investigated in 5 studies [27–31]. Locations were integrated into a weighted network structure using information on transport between regions, traffic conditions and population. The correlation of the number of cases between locations was used to calculate the landscape network entropy index [27, 30, 31] or the minimum spanning tree [28, 29]. The sensitivity of this method ranged from 0.74 to 1, with a lead time between 3 days and 2 months [27–31].

Apart from resilience indicators, some studies investigated the performance of indicators of complexity to anticipate critical transitions [32–34]. Complexity indicators measure the system’s level of disorder. In the included studies, six indicators of complexity were investigated: Fisher information [32], Kolmogorov complexity and Shannon entropy [34], mutual information, joint counts, and Geary’s C coefficient [33] (S1 Table). In accordance with previous studies, complexity indicators had a lower performance than resilience indicators [18, 19], and failed to identify a transition in one study [32].

Lastly, Bury et al. compared the performance of resilience indicators such as variance and autocorrelation to a deep learning algorithm [35]. They found that resilience indicators slightly outperformed their deep learning algorithm in predicting the onset of an outbreak in simulated data (performance of 0.54 for the deep learning algorithm, and 0.55–0.57 for resilience indicators), a result consistent with other included studies [27–29].

Simulated data

In total, 25 studies (68%) used simulated data to test whether epidemiological systems display signs of critical slowing down, including 20 (54%) relying on simulated data only, without an accompanying case study. The data were simulated using compartmental SIR-type models. In such models, the population is divided into categories such as susceptible (S), infected (I), or recovered (R) based on their epidemiological status, and individuals transition from one compartment to another. Such models can be kept purposefully generic or be parametrized for a specific disease. Generic models were investigated in 15 studies, as a proof of principle for resilience indicators applied to epidemiological systems and to investigate additional complexities (further discussed in the section “Dealing with complexities”) [16, 21, 24, 33–45]. O’Regan et al. were the first to demonstrate that critical slowing down arises when an epidemic threshold is being approached [38]. These findings were confirmed in more complex epidemiological systems including vaccination [42], seasonality [21], age structure [34], mosquito-borne transmission [46], or social behavior [37]. Ten simulation studies used compartmental models parametrized for a specific disease, including COVID-19 [47], measles [22, 48–51], pertussis [50], and smallpox [49]. Various mechanisms of (re-)emergence were studied, such as annual seasonal outbreaks [48] or re-emergence because of decreasing vaccine uptake [22, 50]. In all studies, signs of critical slowing down were displayed before a critical transition, and they could signal an upcoming outbreak with a highly variable performance, between 0.03 and 1. These studies were used to investigate additional complexities arising in epidemiological systems (further discussed in the section “Dealing with complexities”), or to support a case study and confirm findings from real-world data [22, 25, 48, 52].

Real-world data

In total, 17 studies used real-world data to study the performance of resilience indicators. Nine diseases were studied: measles [22, 48], mumps [25], pertussis [25, 53], lymphatic filariasis (a parasitic worm disease) [54], plague [25], dengue [25], malaria [55], influenza [27, 29], and COVID-19 [26, 28, 30–32, 52, 56, 57]. We distinguish three different categories of diseases studied: (i) seasonal diseases with R fluctuating around one, (ii) vector-borne diseases, and (iii) COVID-19.

Seasonal diseases.

Despite the particularly complex dynamical patterns of seasonal diseases, signs of critical slowing down were detected in six case studies on measles [22, 48], mumps [25], pertussis [25, 53] and influenza outbreaks [27, 29]. In these studies, case reports were used to (i) anticipate long-term re-emergence because of a decline in vaccination or an increase in the infection probability [22, 25], (ii) discriminate between locations where epidemics would take place and locations where they would not [25, 53], and (iii) anticipate annual emergence because of seasonal variations [27, 29, 48]. First, Brett et al. used a combination of indicators to anticipate the long-term re-emergence of mumps and pertussis up to several years in advance [25]. Specifically, a combination of resilience indicators could have anticipated the 2004 national mumps outbreak in England with a lead time of four years [25]. Second, they were able to discriminate between localities where an outbreak would occur and localities with low levels of transmission based on local case reports. The authors anticipated pertussis outbreaks in nearly all 37 states that experienced one; however, 30 to 50% of the 12 states that did not experience an outbreak raised a false alarm. Third, Chen et al. and Yang et al. were able to anticipate annual influenza outbreaks in several areas of Japan using case reports per location and a weighted network of the locations to compute dynamical network markers [27, 29]. They yielded a performance of 0.898 with a lead time between 3 and 9 weeks.

Vector-borne diseases.

The anticipation of vector-borne disease transitions using resilience indicators was shown in three studies, investigating dengue in Puerto Rico [25], plague in Madagascar [25], malaria re-emergence in Kenya [55], and lymphatic filariasis elimination [54]. In these studies, re-emergence was a slow process, due respectively to the sequential introduction of serotypes, a change of transmission route, or a decline in treatment efficacy, and elimination was due to mass drug administration. Brett et al. showed that the outbreaks of DENV-2 and DENV-3 in Puerto Rico could have been anticipated with respective lead times of 18 months and 6 months using a combination of indicators [25]. Further, they illustrated the potential anticipation of the 2017 plague outbreak in Madagascar 30 days before its onset using reports of suspected cases [25]. A recent study suggests that these suspected case reports poorly represented the true extent and temporal evolution of the outbreak [58]. While it is not clear how this affected the results in [25], it highlights the importance of ensuring that data used with resilience indicators are a good representation of the underlying disease dynamics, to avoid misleading results. Harris et al. showed that the re-emergence of malaria in Kenya could have been anticipated 6 to 24 months prior to the critical transition using resilience indicators calculated over hospital case counts [55]. Lastly, signs of critical slowing down were displayed prior to the elimination of lymphatic filariasis, as demonstrated by Michael and Madon [54]: autocorrelation decreased and served to anticipate the elimination of the disease. Although vector-borne diseases display complex dynamics due to vector-host interactions, their re-emergence and elimination can thus be anticipated using resilience indicators calculated from case reports or hospital counts.

COVID-19.

Three studies attempted to anticipate the first wave of COVID-19, despite the sparse data, and yielded contrasting results [26, 32, 56]. Ma et al. used the Fisher information as a critical slowing down indicator on incidence time series from March 2019 in various countries [32], but failed to detect critical slowing down. However, Fisher information is generally considered an indicator of complexity, and complexity measures as indicators of an upcoming critical transition yielded contrasting results in previous studies, possibly explaining why they failed to anticipate the COVID-19 outbreak [18, 19]. Similarly, O’Brien et al. showed an especially high false-negative rate (0.62 ± 0.02) for the first wave due to the short time series and high variability of the data, consistent with previous results [25, 26, 32]. Only one study, by Kaur et al. [56], succeeded in anticipating the emergence of COVID-19, in 7 out of the 9 countries studied, but with no mention of the lead time or false-positive rate.

During the subsequent COVID-19 waves, consistent testing became the norm in most Western European countries, creating a context of high-quality monitoring ideal for the use of resilience indicators, although results were still contrasting. Additionally, information on the geographic location of cases was available. Six studies investigated the use of resilience indicators to anticipate the waves of COVID-19 [26, 28, 30, 31, 52, 57], including three using dynamical network markers [28, 30, 31]. Overall, signs of critical slowing down were detected with a lead time ranging from 0 days to 2 months, and a performance ranging from 0.04 to 1. Dynamical network markers yielded the highest performance, ranging from 0.825 to 1, but they require location data and the implementation of a location network structure. Additionally, Dablander et al. found that fast successions of elimination and re-emergence hampered the performance of resilience indicators, as indicators were sometimes still picking up signals of disease elimination before a new wave of COVID-19 [52]. They detected signs of critical slowing down in only 16 out of the 27 countries studied, with some countries raising an alarm for only one of the 10 waves studied.

Dealing with complexities

Eleven publications discussed the prerequisites for resilience indicators to accurately anticipate critical transitions in infectious diseases [16, 21, 22, 24, 40–43, 50, 52, 57]. These are discussed in detail in the following sections.

Data types.

Most studies discussed up to now have used incidence time series to calculate resilience indicators. These data can be obtained from case reports or hospital case counts. Other data types were also explored and compared: prevalence and rate of incidence, as well as alternative sources of data such as Google Trends and Twitter data. By reproducing different types of data using mechanistic models combined with an observation process, O’Dea et al. [41], Brett et al. [40], and Southall et al. [42] showed that prevalence and incidence data portray similar trends in resilience indicators prior to disease emergence. For instance, a prediction performance of around 1 using variance was observed prior to disease emergence in prevalence as well as incidence time series [42]. Similar trends in the variance, as well as a similar prediction performance, were observed in rate-of-incidence data prior to disease emergence. However, variance can display different trends depending on the data type, making it an unreliable indicator [42]. Additionally, alternative sources of data giving an indirect measure of transmission were investigated: social media data and Google Trends data [22]. Pananos et al. looked at the evolution of the number of pro-vaccine and anti-vaccine tweets and generated time series to anticipate measles re-emergence [22]. The indicators showed a significant trend several years before the re-emergence of measles due to rising anti-vaccine sentiment.

Resolution of the data.

The number of data points and the temporal resolution of the time series strongly affect the prediction performance. In the case studies, the amount of available data ranged from 10 to 30 years of monthly case reports, i.e., around 120 to 360 data points. O’Dea et al. used simulated datasets to investigate the relationship between data quantity and prediction performance [43]. They showed that the observation period should be much greater than the oscillation period of any seasonal pattern. For instance, for an annual seasonal disease, several years of observation should be available. Moreover, the resolution of the data affects the prediction performance of autocorrelation: equidistant data are necessary for a good estimation of autocorrelation, and the collection interval should be smaller than the infectious period [43].

Data imperfections.

Epidemiological data are subject to imperfect observation: misreporting and underreporting, data aggregation, and reporting delays make it difficult to capture cases accurately. Brett et al. examined the impact of overdispersion, underreporting and aggregation into periodic reports on the prediction performance using simulated time series [40]. Mean and variance were the indicators least impacted by underreporting and aggregation. Strikingly, their predictive powers were unaffected as long as the data were not highly overdispersed (i.e., did not display excessive variability) and the aggregation period was shorter than the infectious period. Other usually top-performing indicators, such as autocorrelation, performed well for aggregated data but were affected by overdispersion and low reporting probability [40]. Additionally, when the reporting rate increases together with a varying transmission probability, indicators can struggle to distinguish an increase in transmission probability, which leads to an outbreak, from an increase in reporting rate. O’Dea showed that using multiple time series can help confirm that the signal in resilience indicators is the result of an upcoming outbreak and not just a change in the reporting probability, and that the second factorial moment is an indicator insensitive to variation in the reporting probability [41].

Seasonality.

Another common characteristic of infectious diseases reflected in epidemiological data is seasonality. Miller et al. simulated time series of infectious diseases subject to seasonal patterns by varying the transmission rate periodically with different amplitudes. They found that seasonality does not strongly affect the performance of the indicators: for time series with the highest amplitude of seasonal transmission, the performance decreased by 0.02 to 0.07 compared to a sensitivity of 0.85 for non-seasonal simulations. Seasonal detrending did not significantly improve the performance, especially in datasets with little seasonal fluctuation [21]. Dessavre et al. found that detrending can help improve the accuracy of prediction for some indicators, for instance in the case of disease elimination in multiple subpopulations, an argument supported by O’Dea et al. [16, 41].

Speed of change of R.

The theory of critical slowing down and the use of resilience indicators to anticipate critical transitions apply exclusively to critical transitions caused by a slow change in an underlying condition. This assumption applies to the anticipation of epidemic transitions as well, as shown by Dablander et al. [52]. The authors showed that, overall, the performance of resilience indicators decreases as the speed of change of R increases, meaning that resilience indicators fail to anticipate epidemics that emerge too fast. Using simulated data of several waves of COVID-19, they found that the performance of variance dropped from around 0.99 to 0.6 as the speed of change increased. Proverbio et al. proposed a method to verify the assumption of a slow increase in R [57]. Using a Bayesian approach, they computed R over time and measured its speed of change. They considered the assumption to hold if R reaches one over a period much longer than the serial interval of the disease. Additionally, in the case of multi-wave diseases such as COVID-19, the stabilization time between two waves should be long enough for the epidemic to stabilize in a non-endemic state. Dablander et al. showed that when the time between two waves is too short, resilience indicators fail to anticipate the new outbreak and might pick up signals from the elimination of the previous wave instead [52].
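One way to operationalise such a check, assuming estimates of R over time are already available from an external method (the Bayesian estimation itself is not reproduced here), is to extrapolate the recent trend in R and compare the projected time to reach one with the serial interval; the threshold factor below is an illustrative choice, not a value taken from the cited study.

```python
import numpy as np

def slow_change_assumption_holds(r_estimates, dt_days, serial_interval_days, factor=10):
    """Check whether R is approaching one slowly, in the spirit of the test described above.

    r_estimates          : recent estimates of R (assumed currently below one)
    dt_days              : spacing of the estimates, in days
    serial_interval_days : serial interval of the disease, in days
    factor               : how much longer counts as 'much longer' (illustrative)
    """
    t = np.arange(len(r_estimates)) * dt_days
    slope, intercept = np.polyfit(t, r_estimates, 1)      # linear trend in R
    if slope <= 0:
        return True                                        # R is not increasing at all
    days_to_threshold = (1.0 - (intercept + slope * t[-1])) / slope
    return days_to_threshold > factor * serial_interval_days
```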

Discussion–Guidelines on how to use resilience indicators in epidemiology

The advantage of resilience indicators lies in the fact that they constitute a data-driven, generic method applicable to a wide range of epidemiological systems without the need for frequent recalibration. Simulation studies supported by real-world case studies showed that critical slowing down can indeed be detected prior to disease outbreaks or eliminations, using good-quality incidence time series. The 37 studies we reviewed suggest that resilience indicators have the potential to anticipate outbreaks but yield a highly variable sensitivity. Although the AUC was almost always used to quantify the performance, the false positive rate was poorly documented (reported in only one study). As false positives can result in the implementation of unnecessary interventions or even a premature halt of disease elimination strategies, it is important to get a better understanding of how the specificity of resilience indicators is affected by complexities in the data and the disease system. Similarly, lead time was not always quantified (reported in only ten studies), even though it is a key aspect of disease anticipation. Additionally, our information retrieval is likely subject to publication bias, which may result in an overestimation of the performance of these tools.

To bridge the gap between theory and practice, it is necessary to get a better understanding of the factors affecting the performance of resilience indicators, as well as of the types of diseases and monitoring systems best suited to their use. Here, we present a step-by-step approach to assess whether a disease and its monitoring system are suitable for the use of resilience indicators, and we suggest how such an early warning system may be set up for the system at hand (Fig 4).

Fig 4. Decision tree.

Step-by-step approach to use resilience indicators in epidemiology.

https://doi.org/10.1371/journal.pgph.0002253.g004

Prior considerations

Although resilience indicators can help anticipate critical transitions, this may only be expected to work in specific contexts. First, we cannot expect signals of critical slowing down prior to a transition in all situations. At least two requirements must be fulfilled to use resilience indicators: suitable data should be available, and external conditions should change slowly [15, 52]. We can distinguish several reasons for a new outbreak. A common mechanism is the emergence of a new, unknown pathogen, for example due to spillover from wild animals. In this case, no suitable data will be available to observe critical slowing down. Another possibility is a pathogen that remains close to endemicity, with R fluctuating around 1, and is subject to seasonal variations leading to sudden outbreaks. Under those circumstances, the seasonal change in conditions is likely too fast to detect critical slowing down. By contrast, when the risk for a pathogen to cause an outbreak rises gradually due to changing conditions, the outbreak might be anticipated using critical slowing down. Examples include changes that bring R gradually closer to 1, such as a decline in vaccine uptake, a mutation of the pathogen inducing immune escape, or a change in the immunity profile of a population due to waning immunity. Statistical tests have been proposed to check the assumption of slow change [57].

Second, the type of transition can affect the trend in some of the indicators. Disease outbreaks as well as disease elimination can be anticipated using resilience indicators: prior to both transitions, critical slowing down is displayed in the system, as shown in simulation studies as well as case studies. However, depending on the type of data, variance might not increase before the elimination of a disease [38]. Thus, autocorrelation should be the first choice, as it displays consistent trends that are insensitive to the data type.

Furthermore, enough data points should be available, with a resolution sufficient to capture slowing down and thus anticipate disease critical transitions. The collection interval should be smaller than the infectious period [40], and the number of data points should be reasonable, meaning that the duration of observation is at least as long as the period of any oscillation in the data [43]. For instance, if the disease is a seasonal disease returning every winter, at least a year of observations should be available. As a comparison, the included case studies based their analyses on around a decade of monthly case reports. Data should be equidistant for a good estimation of autocorrelation. Additionally, case reports disaggregated by location can help improve the prediction performance using dynamical network markers. However, only five studies were published using these types of indicators, so further investigation is required.

Few countries have surveillance systems able to achieve such high-quality data, especially since sufficient data prior to the start of an outbreak are necessary. In general, we can distinguish active surveillance, sentinel surveillance, and passive surveillance. Active surveillance, namely the active seeking of cases of a given disease in a population, allows the prevalence of a disease to be estimated with high accuracy but is extremely costly and not achievable in the long term. Sentinel surveillance, i.e., the monitoring of disease prevalence in a population via a network of general practitioners, can help achieve consistent, good-quality data, assuming that cases are reported with a sufficient temporal resolution and that the system is sufficiently sensitive [40]. COVID-19 and influenza are examples of diseases that are monitored via sentinel surveillance. Influenza is monitored through sentinel surveillance systems within the GISRS initiative of the WHO, and cases are reported with a weekly resolution in the FluID database [59]. Similarly, COVID-19 is nowadays consistently monitored via sentinel surveillance systems and sometimes reported in publicly available databases [60–63]. However, the quality of sentinel surveillance systems depends on health-seeking behavior and access to healthcare facilities in the population. Similarly, the quality of data provided by passive surveillance systems is context-dependent. Most included case studies relied on data obtained from passive surveillance systems in areas with sufficient access to healthcare facilities and for diseases with sufficient symptomatic cases. However, such systems can underestimate the prevalence in periods of low transmission [64, 65] and hamper the prediction performance of resilience indicators if the reporting rate increases together with the prevalence of the disease [41]. Low-income countries where diseases of poverty such as cholera or Ebola, as well as neglected tropical diseases, circulate often lack a constant surveillance system in place prior to outbreaks; surveillance there is mostly reactive, making the data unsuitable for resilience indicators [66–69].

Alternatively, substitute types of more accessible time series representing the state of an epidemic indirectly could be considered. Critical slowing down in Google Trends and social media data was investigated, and significant trends were displayed prior to a measles outbreak [22]. Other types of alternative data could also be envisioned, such as excess mortality data [70], news feeds [71] or wastewater surveillance data [72]. Wastewater surveillance, in which biomarkers related to a specific disease are quantified in untreated sewage, provides real-time data and allows the state of an epidemic to be monitored with less effort than counting new cases. However, further investigation would be required to make sure that critical slowing down is also displayed in this type of data.

Data processing

Once the disease transition has been judged suitable for resilience indicators, the data should be pre-processed prior to the analysis. Detrending of the data is usually necessary to avoid spurious trends in the indicators due to slow changes in the mean [17]. This is essential, especially for seasonal data. Seasonality affects the spread of a number of diseases, creating periodic fluctuations in the data. These fluctuations affect both variance and autocorrelation, introducing misleading results. When studying a disease subject to periodicity, the number of data points should be much higher than the period: in other words, if the disease has waves every winter, one should have data over several years. This helps assess whether the trend in the indicators is truly due to long-term re-emergence and not to seasonal fluctuations.
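A minimal detrending sketch is given below, assuming Gaussian-kernel smoothing for the slow trend and subtraction of the mean monthly cycle for seasonality; the bandwidth and the seasonal handling are illustrative choices that should be tuned to the dataset at hand.

```python
import numpy as np
import pandas as pd
from scipy.ndimage import gaussian_filter1d

def detrend(incidence, dates, bandwidth=30, remove_seasonal=True):
    """Remove the slow trend (Gaussian smoother) and, optionally, the mean seasonal cycle.

    incidence : case counts
    dates     : matching dates (anything pandas can parse into a DatetimeIndex)
    bandwidth : standard deviation of the Gaussian kernel, in time steps (illustrative)
    """
    x = pd.Series(np.asarray(incidence, dtype=float), index=pd.DatetimeIndex(dates))
    residual = x - gaussian_filter1d(x.to_numpy(), sigma=bandwidth)   # slow trend removed
    if remove_seasonal:
        seasonal = residual.groupby(residual.index.month).transform("mean")
        residual = residual - seasonal                                 # mean monthly cycle removed
    return residual
```

The residual series, rather than the raw counts, is then used as input for the rolling-window indicators.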

Several types of data can represent the state of the system. Incidence time series represent the count of new cases, while prevalence time series count the number of infected individuals at different time points. The rate of incidence is the rate at which newly infected cases occur in a population; it can be estimated from incidence time series using a rolling window approach [42]. Critical slowing down is displayed prior to a transition in all these types of data. However, when using variance as an indicator of resilience, the type of data can affect the trend prior to a transition. Moreover, although the rate of incidence requires additional computations to be obtained, it displayed a more significant trend prior to disease elimination in one study [42]. If dynamical network markers are to be used, it is necessary to build a location network using information on transport between regions, traffic conditions and population.

As monitoring is never perfect, epidemiological data are subject to imperfections. The data are aggregated into weekly or monthly case reports, and underreporting is often observed as a result of asymptomatic cases as well as poor access to health facilities. Moreover, various types of stochasticity are inherent to the data. Again, these characteristics can affect the trend in variance, and when combined, imperfections can be detrimental to the performance of resilience indicators. The imperfections likely to be encountered in the data should therefore be clearly stated in order to select relevant indicators. Variance and mean perform poorly when data are highly overdispersed, i.e., show great variability. Similarly, autocorrelation performs poorly when the reporting rate is highly overdispersed or when the aggregation period is too long. If the reporting rate is expected to change, the second factorial moment could be used, as it is insensitive to variations in the reporting rate [41].

Computing

After pre-processing the data, the resilience indicators can be computed using packages, for instance, in R or Matlab [73, 74].

The indicators should be picked carefully based on the prior reflection presented above. Variance was the top-performing indicator in a majority of studies and was least impacted by underreporting and aggregation. However, depending on the type of data and transition, the trend in variance can be inverted. Autocorrelation was among the best-performing indicators in a majority of studies, and its trend is not affected by the type of transition. However, the performance of autocorrelation suffers in the case of low reporting probability and highly overdispersed data, and equally spaced data are necessary to calculate it. A variety of indicators can be used in specific situations (S1 Table). Combinations of indicators yielded the best performance [25]. However, the best combinations were determined using an optimization algorithm trained on a large dataset of simulated time series, and their performance remains to be demonstrated in other contexts.

The size of the rolling window should be picked carefully to observe a trend at a consistent scale. A common default is to take 50% of the length of the dataset as the window size [17]. However, if several transitions occur, a smaller rolling window should be used so that a trend can be observed before each transition. In addition, enough data points should be present in the window to accurately estimate the autocorrelation; however, a window that is too large will reduce the absolute increase [57]. It is good practice to check the effect of the window size and detrending in a sensitivity analysis [17].

When a trend is observed, its significance needs to be assessed. Due to the sliding window approach, standard statistical tests are not applicable as the observations are not independent. A proposed approach to assess the significance of the trend is to produce surrogate datasets to compare the trend estimates [17]. Several methods to produce consistent surrogate datasets have been proposed and implemented in the resilience indicators packages [73, 74]. The choice of the threshold should be calibrated based on previous data, as a poorly calibrated threshold can induce misleading results [52].
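The sketch below illustrates one such surrogate approach: fit an AR(1) process to the detrended residuals, generate surrogate series with the same autoregressive structure, and compare the observed Kendall trend in the indicator with its surrogate distribution. It follows the general recipe of [17] rather than the exact implementation of the cited packages, and the window and number of surrogates are illustrative.

```python
import numpy as np
import pandas as pd
from scipy.stats import kendalltau

def trend_tau(series, window=100):
    """Kendall's tau between time and the rolling lag-1 autocorrelation."""
    x = pd.Series(series, dtype=float)
    ac = x.rolling(window).apply(lambda w: w.autocorr(lag=1), raw=False).dropna()
    return kendalltau(np.arange(len(ac)), ac)[0]

def surrogate_p_value(residuals, n_surrogates=200, window=100, seed=0):
    """Fraction of AR(1) surrogates whose indicator trend is at least as strong as observed."""
    rng = np.random.default_rng(seed)
    x = np.asarray(residuals, dtype=float)
    phi = pd.Series(x).autocorr(lag=1)                     # AR(1) coefficient of the residuals
    sigma = x.std() * np.sqrt(max(1.0 - phi**2, 1e-12))    # matching innovation scale
    observed = trend_tau(x, window)
    exceed = 0
    for _ in range(n_surrogates):
        noise = rng.normal(0.0, sigma, size=len(x))
        surrogate = np.empty_like(x)
        surrogate[0] = x[0]
        for t in range(1, len(x)):
            surrogate[t] = phi * surrogate[t - 1] + noise[t]
        if trend_tau(surrogate, window) >= observed:
            exceed += 1
    return exceed / n_surrogates
```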

Conclusion and future directions

To conclude, resilience indicators have the potential to help public health organizations anticipate infectious disease transitions, as they constitute a generic, data-driven method. Real-time calculation of resilience indicators could be put into practice to monitor the risk of an upcoming outbreak, provided sufficient, good-quality case reports are available. However, further investigations are required to strike the right balance between false negative and false positive rates, and lead time. This will differ by setting, disease system, and data availability and quality. To overcome the data and model limitations, a combination with other early-warning systems, as well as other sources of data, might help improve early detection. The potential of such combined approaches remains to be explored. Moving forward, a close collaboration between experts in resilience indicators and public health practitioners is needed to bridge the gap between theory and practice, and determine how and when resilience indicators could contribute to more timely outbreak response.

Supporting information

S1 Table. Summary of the indicators and their usage.

Original references refer to primary studies, not included in this review, that studied the use of the indicators. The mathematical derivation of the indicators is given in [53].

https://doi.org/10.1371/journal.pgph.0002253.s001

(XLSX)

S2 Table. Summary of the included studies.

Summary of the included studies and their classification.

https://doi.org/10.1371/journal.pgph.0002253.s002

(XLSX)

S3 Table. Summary of the publications retrieved during database search, and the inclusion/exclusion decisions.

https://doi.org/10.1371/journal.pgph.0002253.s003

(XLSX)

References

  1. 1. Becker K, Hu Y, Biller-Andorno N. Infectious diseases—A global challenge. International Journal of Medical Microbiology. Elsevier GmbH; 2006. pp. 179–185. https://doi.org/10.1016/j.ijmm.2005.12.015 pmid:16446113
  2. 2. Pinheiro P, Mathers CD, Krämer A. The Global Burden of infectious diseases. Modern Infectious Disease Epidemiology: Concepts, Methods, Mathematical Models, and Public Health. 2009.
  3. 3. Morin CW, Semenza JC, Trtanj JM, Glass GE, Boyer C, Ebi KL. Unexplored Opportunities: Use of Climate- and Weather-Driven Early Warning Systems to Reduce the Burden of Infectious Diseases. pmid:30350265
  4. 4. Racloz V, Ramsey R, Tong S, Hu W. Surveillance of Dengue Fever Virus: A Review of Epidemiological Models and Early Warning Systems. Anyamba A, editor. PLoS Negl Trop Dis. 2012;6: e1648. pmid:22629476
  5. 5. Thomson MC, Connor SJ. The development of Malaria Early Warning Systems for Africa. Trends Parasitol. 2001;17: 438–445. pmid:11530356
  6. 6. Vega T, Lozano JE, Meerhoff T, Snacken R, Mott J, Ortiz de Lejarazu R, et al. Influenza surveillance in Europe: establishing epidemic thresholds by the Moving Epidemic Method. Influenza Other Respir Viruses. 2013;7: 546–558. pmid:22897919
  7. 7. Santillana M, Zhang DW, Althouse BM, Ayers JW. What Can Digital Disease Detection Learn from (an External Revision to) Google Flu Trends? Am J Prev Med. 2014;47: 341–347. pmid:24997572
  8. 8. Chaves LF, Pascual M. Comparing Models for Early Warning Systems of Neglected Tropical Diseases. Utzinger J, editor. PLoS Negl Trop Dis. 2007;1: e33. pmid:17989780
  9. 9. Liang S, Yang C, Zhong B, Guo J, Li H, Carlton EJ, et al. Surveillance systems for neglected tropical diseases: Global lessons from China’s evolving schistosomiasis reporting systems, 1949–2014. Emerging Themes in Epidemiology. BioMed Central Ltd.; 2014. pmid:26265928
  10. 10. Scheffer M, Carpenter SR, Lenton TM, Bascompte J, Brock W, Dakos V, et al. Anticipating critical transitions. 2012;338: 344–348. Science (1979). pmid:23087241
  11. 11. Diks C, Hommes C, Wang J. Critical slowing down as an early warning signal for financial crises? Empir Econ. 2019;57: 1201–1228.
  12. 12. Dakos V, Kéfi S, Rietkerk M, van Nes EH, Scheffer M. Slowing down in spatially patterned ecosystems at the brink of collapse. American Naturalist. 2011;177. pmid:21597246
  13. 13. van de Leemput IA, Hughes TP, van Nes EH, Scheffer M. Multiple feedbacks and the prevalence of alternate stable states on coral reefs. Coral Reefs. 2016;35: 857–865.
  14. 14. Holling CS. Resilience and Stability of Ecological Systems. Annu Rev Ecol Syst. 1973;4: 1–23.
  15. 15. Dakos V, Carpenter SR, van Nes EH, Scheffer M. Resilience indicators: Prospects and limitations for early warnings of regime shifts. Philosophical Transactions of the Royal Society B: Biological Sciences. 2015;370: 1–10.
  16. 16. Gama Dessavre A, Southall E, Tildesley MJ, Dyson L. The problem of detrending when analysing potential indicators of disease elimination. J Theor Biol. 2019;481: 183–193. pmid:30980869
  17. 17. Dakos V, Carpenter SR, Brock WA, Ellison AM, Guttal V, Ives AR, et al. Methods for detecting early warnings of critical transitions in time series illustrated using simulated ecological data. PLoS One. 2012;7. pmid:22815897
  18. 18. Rector JL, Gijzel SMW, van de Leemput IA, van Meulen FB, Olde Rikkert MGM, Melis RJF. Dynamical indicators of resilience from physiological time series in geriatric inpatients: Lessons learned. Exp Gerontol. 2021;149: 111341. pmid:33838217
  19. 19. Dakos V, Soler-Toscano F. Measuring complexity to infer changes in the dynamics of ecological systems under stress. Ecological Complexity. 2017;32: 144–155.
  20. 20. Scheffer M, Bascompte J, Brock WA, Brovkin V, Carpenter SR, Dakos V, et al. Early-warning signals for critical transitions. Nature. 2009;461: 53–59. pmid:19727193
  21. 21. Miller PB O’Dea EB, Rohani P, Drake JM. Forecasting infectious disease emergence subject to seasonal forcing. Theor Biol Med Model. 2017;14. pmid:28874167
  22. Pananos AD, Bury TM, Wang C, Schonfeld J, Mohanty SP, Nyhan B, et al. Critical dynamics in population vaccinating behavior. Proc Natl Acad Sci U S A. 2017;114: 13762–13767. pmid:29229821
  23. Dakos V, Van Nes EH, D’Odorico P, Scheffer M. Robustness of variance and autocorrelation as indicators of critical slowing down. Ecology. 2012;93: 264–271. pmid:22624308
  24. O’Regan SM, Burton DL. How Stochasticity Influences Leading Indicators of Critical Transitions. Bull Math Biol. 2018;80: 1630–1654. pmid:29713924
  25. Brett TS, Rohani P. Dynamical footprints enable detection of disease emergence. PLoS Biol. 2020;18. pmid:32433658
  26. O’Brien DA, Clements CF. Early warning signal reliability varies with COVID-19 waves. Biol Lett. 2021;17: 20210487. pmid:34875183
  27. Chen P, Chen E, Chen L, Zhou XJ, Liu R. Detecting early-warning signals of influenza outbreak based on dynamic network marker. J Cell Mol Med. 2019;23: 395–404. pmid:30338927
  28. Dong M, Zhang X, Yang K, Liu R, Chen P. Forecasting the COVID-19 transmission in Italy based on the minimum spanning tree of dynamic region network. PeerJ. 2021;9: 1–17. pmid:34249495
  29. Yang K, Xie J, Xie R, Pan Y, Liu R, Chen P. Real-Time Forecast of Influenza Outbreak Using Dynamic Network Marker Based on Minimum Spanning Tree. Biomed Res Int. 2020;2020. pmid:33062696
  30. Liu R, Zhong J, Hong R, Chen E, Aihara K, Chen P, et al. Predicting local COVID-19 outbreaks and infectious disease epidemics based on landscape network entropy. Sci Bull (Beijing). 2021;66: 2265–2270. pmid:36654453
  31. Li M. A Novel Method to Detect the Early Warning Signal of COVID-19 Transmission. 2022.
  32. Ma Z. Predicting the Outbreak Risks and Inflection Points of COVID-19 Pandemic with Classic Ecological Theories. Advanced Science. 2020;7. pmid:33042733
  33. Phillips B, Anand M, Bauch CT. Spatial early warning signals of social and epidemiological tipping points in a coupled behaviour-disease network. Sci Rep. 2020;10. pmid:32376908
  34. Brett T, Ajelli M, Liu QH, Krauland MG, Grefenstette JJ, Van Panhuis WG, et al. Detecting critical slowing down in high-dimensional epidemiological systems. PLoS Comput Biol. 2020;16: 1–19. pmid:32150536
  35. Bury TM, Sujith RI, Pavithran I, Scheffer M, Lenton TM, Anand M, et al. Deep learning for early warning signals of tipping points. Proc Natl Acad Sci U S A. 2021;118. pmid:34544867
  36. Ullon W, Forgoston E. Controlling epidemic extinction using early warning signals. Int J Dyn Control. 2022. pmid:35910509
  37. Jentsch PC, Anand M, Bauch CT. Spatial correlation as an early warning signal of regime shifts in a multiplex disease-behaviour network. J Theor Biol. 2018;448: 17–25. pmid:29614264
  38. O’Regan SM, Drake JM. Theory of early warning signals of disease emergence and leading indicators of elimination. Theor Ecol. 2013;6: 333–357. pmid:32218877
  39. Brett TS, Drake JM, Rohani P. Anticipating the emergence of infectious diseases. J R Soc Interface. 2017;14. pmid:28679666
  40. Brett TS, O’Dea EB, Marty É, Miller PB, Park AW, Drake JM, et al. Anticipating epidemic transitions with imperfect data. PLoS Comput Biol. 2018;14. pmid:29883444
  41. O’Dea EB, Drake JM. Disentangling reporting and disease transmission. Theor Ecol. 2019;12: 89–98. pmid:34552670
  42. Southall E, Tildesley MJ, Dyson L. Prospects for detecting early warning signals in discrete event sequence data: Application to epidemiological incidence data. PLoS Comput Biol. 2020;16. pmid:32960900
  43. O’Dea EB, Park AW, Drake JM. Estimating the distance to an epidemic threshold. J R Soc Interface. 2018;15. pmid:29950512
  44. Drake JM, Brett TS, Chen S, Epureanu BI, Ferrari MJ, Marty É, et al. The statistics of epidemic transitions. PLoS Comput Biol. 2019;15. pmid:31067217
  45. Kuehn C. A mathematical framework for critical transitions: Normal forms, variance and applications. J Nonlinear Sci. 2013;23: 457–510.
  46. O’Regan SM, Lillie JW, Drake JM. Leading indicators of mosquito-borne disease elimination. Theor Ecol. 2016;9: 269–286. pmid:27512522
  47. Nazarimehr F, Pham VT, Kapitaniak T. Prediction of bifurcations by varying critical parameters of COVID-19. Nonlinear Dyn. 2020;101: 1681–1692. pmid:32836801
  48. Kuehn C, Zschaler G, Gross T. Early warning signs for saddle-escape transitions in complex networks. Sci Rep. 2015;5. pmid:26294271
  49. Drake JM, Hay SI. Monitoring the path to the elimination of infectious diseases. Tropical Medicine and Infectious Disease. 2017. pmid:30270879
  50. O’Regan SM, O’Dea EB, Rohani P, Drake JM. Transient indicators of tipping points in infectious diseases. J R Soc Interface. 2020;17. pmid:32933375
  51. Tredennick AT, O’Dea EB, Ferrari MJ, Park AW, Rohani P, Drake JM. Anticipating infectious disease re-emergence and elimination: a test of early warning signals using empirically based models. J R Soc Interface. 2022;19. pmid:35919978
  52. Dablander F, Heesterbeek H, Borsboom D, Drake JM. Overlapping timescales obscure early warning signals of the second COVID-19 wave. Proceedings of the Royal Society B: Biological Sciences. 2022;289. pmid:35135355
  53. Southall E, Brett TS, Tildesley MJ, Dyson L. Early warning signals of infectious disease transitions: A review. J R Soc Interface. 2021;18. pmid:34583561
  54. Michael E, Madon S. Socio-ecological dynamics and challenges to the governance of Neglected Tropical Disease control. Infect Dis Poverty. 2017;6. pmid:28166826
  55. Harris MJ, Hay SI, Drake JM. Early warning signals of malaria resurgence in Kericho, Kenya. Biol Lett. 2020;16. pmid:32183637
  56. Kaur T, Sarkar S, Chowdhury S, Sinha SK, Jolly MK, Dutta PS. Anticipating the Novel Coronavirus Disease (COVID-19) Pandemic. Front Public Health. 2020;8. pmid:33014985
  57. Proverbio D, Kemp F, Magni S, Gonçalves J. Performance of early warning signals for disease re-emergence: A case study on COVID-19 data. Althouse B, editor. PLoS Comput Biol. 2022;18: e1009958. pmid:35353809
  58. Ten Bosch Q, Andrianaivoarimanana V, Ramasindrazana B, Mikaty G, Rakotonanahary RJL, Nikolay B, et al. Analytical framework to evaluate and optimize the use of imperfect diagnostics to inform outbreak response: Application to the 2017 plague epidemic in Madagascar. PLoS Biol. 2022;20: e3001736. pmid:35969599
  59. FluID. Available from: https://www.who.int/teams/global-influenza-programme/surveillance-and-monitoring/fluid. Accessed 21 June 2023.
  60. Cohen JM, Mosnier A, Valette M, Bensoussan JL, Van Der Werf S. Médecin généraliste et veille sanitaire: l’exemple de la grippe en France [General practitioners and health surveillance: the example of influenza in France]. Med Mal Infect. 2005;35: 252–256. pmid:15878816
  61. Rakotoarisoa A, Randrianasolo L, Tempia S, Guillebaud J, Razanajatovo N, Randriamampionona L, et al. Evaluation of the influenza sentinel surveillance system in Madagascar. Bull World Health Organ. 2017;95: 375–381.
  62. Nuvey FS, Edu-Quansah EP, Kuma GK, Eleeza J, Kenu E, et al. Evaluation of the sentinel surveillance system for influenza-like illnesses in the Greater Accra region, Ghana, 2018. PLoS One. 2019. pmid:30870489
  63. Babakazo P, Kabamba-Tshilobo J, Wemakoy EO, Lubula L, Manya LK, Kebela Ilunga B, et al. Evaluation of the influenza sentinel surveillance system in the Democratic Republic of Congo, 2012–2015. pmid:31823763
  64. Yang X, Liu D, Wei K, Liu X, Meng L, Yu D, et al. Comparing the similarity and difference of three influenza surveillance systems in China. Sci Rep. 2018;8. pmid:29434230
  65. Olotu A, Fegan G, Williams TN, Sasi P, Ogada E, Bauni E, et al. Defining Clinical Malaria: The Specificity and Incidence of Endpoints from Active and Passive Surveillance of Children in Rural Kenya. Snounou G, editor. PLoS One. 2010;5: e15569. pmid:21179571
  66. Worsley-Tonks KE, Bender JB, Deem SL, Ferguson AW, Fèvre EM, Martins DJ, et al. Strengthening global health security by improving disease surveillance in remote rural areas of low-income and middle-income countries. Lancet Glob Health. 2022;10.
  67. De Macedo Couto R, Santana GO, Ranzani OT, Waldman EA. One Health and surveillance of zoonotic tuberculosis in selected low-income, middle-income and high-income countries: A systematic review. PLoS Negl Trop Dis. 2022;16. pmid:35666731
  68. Goutard FL, Binot A, Duboz R, Rasamoelina-Andriamanivo H, Pedrono M, Holl D, et al. How to reach the poor? Surveillance in low-income countries, lessons from experiences in Cambodia and Madagascar. Prev Vet Med. 2015;120: 12–26. pmid:25842000
  69. Woolhouse MEJ, Rambaut A, Kellam P. Lessons from Ebola: Improving infectious disease surveillance to inform outbreak management. Sci Transl Med. 2015;7. pmid:26424572
  70. Mazick A, on behalf of the participants of a workshop on mortality monitoring in Europe. Monitoring excess mortality for public health action: potential for a future European network. Euro Surveill. 2007;12: 3107.
  71. Keller M, Blench M, Tolentino H, Freifeld CC, Mandl KD, Mawudeku A, et al. Use of Unstructured Event-Based Reports for Global Infectious Disease Surveillance. Emerg Infect Dis. 2009;15: 689. pmid:19402953
  72. Mao K, Zhang K, Du W, Ali W, Feng X, Zhang H. The potential of wastewater-based epidemiology as surveillance and early warning of infectious disease outbreaks. Current Opinion in Environmental Science and Health. 2020: 1–7. pmid:32395676
  73. Dakos V. Earlywarnings in R. 2015.
  74. van Nes EH. Grind for MATLAB. 2017.