
Epidemic management and control through risk-dependent individual contact interventions


Testing, contact tracing, and isolation (TTI) is an epidemic management and control approach that is difficult to implement at scale because it relies on manual tracing of contacts. Exposure notification apps have been developed to digitally scale up TTI by harnessing contact data obtained from mobile devices; however, exposure notification apps provide users only with limited binary information when they have been directly exposed to a known infection source. Here we demonstrate a scalable improvement to TTI and exposure notification apps that uses data assimilation (DA) on a contact network. Network DA exploits diverse sources of health data together with the proximity data from mobile devices that exposure notification apps rely upon. It provides users with continuously assessed individual risks of exposure and infection, which can form the basis for targeting individual contact interventions. Simulations of the early COVID-19 epidemic in New York City are used to establish proof-of-concept. In the simulations, network DA identifies up to a factor 2 more infections than contact tracing when both harness the same contact data and diagnostic test data. This remains true even when only a relatively small fraction of the population uses network DA. When a sufficiently large fraction of the population (≳ 75%) uses network DA and complies with individual contact interventions, targeting contact interventions with network DA reduces deaths by up to a factor 4 relative to TTI. Network DA can be implemented by expanding the computational backend of existing exposure notification apps, thus greatly enhancing their capabilities. Implemented at scale, it has the potential to precisely and effectively control future epidemics while minimizing economic disruption.

Author summary

During the ongoing COVID-19 pandemic, exposure notification apps have been developed to scale up manual contact tracing. The apps use proximity data from mobile devices to automate notifying direct contacts of an infection source. The information they provide is limited because users receive only rare and binary alerts. Here we present network data assimilation (DA) as a new digital approach to epidemic management and control. Network DA uses the same data as exposure notification apps but uses it more effectively to provide frequently updated individual risk assessments to users.

Network DA is based on automated learning about individuals’ risk of exposure and infection from crowd-sourced health data and proximity data. The data are aggregated with models of disease transmission to produce statistical assessments of users’ risks. In an extensive simulation study of the COVID-19 epidemic in New York City (NYC), we show that network DA with diagnostic testing achieves epidemic control with fewer than half the deaths that occurred during NYC’s lockdown, while isolating a far smaller fraction of the population (typically only 5–10% of the population at any given time).

Implemented at scale, then, network DA has the potential to effectively control epidemics while minimizing economic and social disruption.


Until a majority of the global population has acquired immunity to continuously evolving virus variants through vaccination or infection, the ongoing COVID-19 pandemic and future epidemics will need to be fought with non-pharmaceutical interventions (NPIs) [1, 2]. These include social distancing, mask usage, and restrictions on mass gatherings. But NPIs such as lockdowns come at catastrophic costs to individuals, economies, and societies, with disproportionate burdens carried by disadvantaged groups [3, 4]. Even if imposed only intermittently and regionally, lockdowns are an inefficient means of epidemic management and control: they isolate much of the population, although even at extreme epidemic peaks, only a small fraction is infectious [5, 6]. If individuals who are at high risk of being infectious could be identified before they infect others, control measures could be made more efficient by targeting them to this high-risk group.

Testing and contact tracing have been discussed and partly implemented as strategies to identify individuals who are at high risk of being infectious [7–9]: testing determines who is infectious, contact tracing identifies those who may have been exposed through contact with an infectious individual, and this high-risk group is then isolated. However, controlling the COVID-19 epidemic by testing, contact tracing, and isolation (TTI) has been complicated by frequent asymptomatic and presymptomatic transmission, which supports silent spread, and by a short serial interval, the period between the onset of any symptoms in infector and infectee [7, 10–12]. Even in ideal scenarios, contact tracing needs to identify upward of 75% of infections to achieve epidemic control [8, 11]. Quickly diagnosing such a large fraction of infections and manually identifying exposed individuals requires testing and a contact-tracing workforce at a scale that has been challenging to realize in most countries [13, 14].

To scale up the contact tracing component of TTI without a massive expansion of the workforce, exposure notification apps have been developed. They rely on proximity data from smartphones or other mobile devices to identify close contacts between users [15, 16]. If an individual user is identified as being infectious, prior close contacts are notified and can then self-isolate. The exposure notification is deterministic (a user is only notified when potentially exposed), and it only uses nearest-neighbor information on the network of close contacts among users. Exposure notification apps have not seen widespread use, in part perhaps because of early implementation difficulties and privacy concerns but also because they do not provide users with information except in the rare case when they receive an exposure notification [17]. Nonetheless, where they have been used, these apps have helped prevent the spread of infections [18].

Here we present a new and much more effective way of exploiting the same information on which exposure notification apps rely. Unlike these apps, however, this method provides users with continuously updated assessments of their individual risks. The core idea is to learn about individual risks of exposure and infectiousness by propagating crowdsourced information about infection risks over a dynamic contact network assembled from proximity data from mobile devices. Instead of the deterministic assessments of exposure notification apps, our approach exploits data from diverse sources probabilistically. Various types of information, including their uncertainties, can be harnessed. For example:

  • Diagnostic tests, including sensitive but slow molecular tests, less sensitive but rapid antigen tests, or pooled diagnostic tests [19].
  • Serological tests, which indicate a reduced probability of susceptibility when antibodies specific to SARS-CoV-2 (or the causative agent of another targeted disease) are detected.
  • Self-reported clinical symptoms, elevated body temperature readings, or other wearable sensor data, which can indicate an elevated probability of infectiousness and virus transmission [20, 21].

Quantification of individual risks is achieved by assimilating data into a model of virus transmission and disease progression defined on a dynamic contact network assembled from proximity data. For decision making, periodically updated individual risks of having been exposed or of being infectious take the place of the deterministic assessments in exposure notification apps. The probabilistic network approach propagates data farther along the contact network than contact tracing, consistent with models of disease progression and rates of virus transmission. It harnesses more information than contact tracing, both by being able to include diverse data sources with their uncertainties and by exploiting information inherent in the network structure itself: an individual with many contacts generally is at greater risk of having been exposed than an individual with fewer contacts [22, 23], and such contact rates are available from the proximity data from mobile devices.

The network and the information it contains are dynamically updated in periodic data assimilation (DA) cycles. These cycles resemble the daily DA cycles that weather forecasting centers use operationally [24]. The quantitative information that is provided by the risk assessment platform we outline in what follows can be used in similar ways as weather forecasts: to inform personal decisions by users based on their desire to avoid risk (in the weather forecasting analogy, staying home rather than hiking up a mountain in the face of a likely downpour) and to inform public policy when aggregate risk measures indicate that wider mandates are necessary (analogous to evacuating a city to protect lives and avoid overwhelming public health and social infrastructures when a hurricane is likely to make landfall).

Network data assimilation

Our point of departure is a variant of the widely used susceptible–exposed–infectious–resistant (or recovered) (SEIR) model of epidemiology, extended through inclusion of hospitalized (H) and deceased (D) compartments to an SEIHRD model [25]. Compartmental epidemiological models have traditionally been applied on the level of aggregated individuals (e.g., the population of a city or country) [26]; here we follow more recent work and apply the SEIHRD on an individual level on a time-dependent contact network [23, 27]. Each individual is represented by a node on the network; time-dependent edges between the nodes are established by close contacts between individuals, as recorded by proximity data from mobile devices. Virus transmission can occur during close contacts from infectious or hospitalized nodes to susceptible nodes, which thereupon become exposed. The probability of transmission increases with contact duration, and the transmission rate can vary from node to node and with time, for example, to reflect time-varying transmission rates resulting from virus mutations or a reduced transmission rate when masks are worn. From being exposed, nodes progress to becoming infectious, and later they may progress to requiring hospitalization, recover, or die.
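The node-level SEIHRD dynamics described above can be sketched as a stochastic per-node update. The sketch below is illustrative only: the function name, the simple per-minute transmission probability, and all rate parameters are assumptions for this sketch, not the paper's calibrated model.

```python
import random

# Minimal sketch of one stochastic SEIHRD update for a single node over a
# time step of dt days. States: S, E, I, H, R, D. All parameter values here
# are illustrative placeholders, not the paper's calibrated rates.
def step_node(state, infectious_contact_minutes, dt=1.0,
              beta=0.01, sigma=1/3, gamma=1/7, eta=0.02, mu=0.01):
    """Advance one node by dt days, given the total minutes of close
    contact with infectious or hospitalized neighbors during the step."""
    if state == "S":
        # Transmission probability grows with cumulative contact duration.
        p_inf = 1.0 - (1.0 - beta) ** infectious_contact_minutes
        return "E" if random.random() < p_inf else "S"
    if state == "E":  # exposed -> infectious at rate sigma
        return "I" if random.random() < sigma * dt else "E"
    if state == "I":  # infectious -> hospitalized or recovered
        r = random.random()
        if r < eta * dt:
            return "H"
        if r < (eta + gamma) * dt:
            return "R"
        return "I"
    if state == "H":  # hospitalized -> deceased or recovered
        r = random.random()
        if r < mu * dt:
            return "D"
        if r < (mu + gamma) * dt:
            return "R"
        return "H"
    return state  # R and D are absorbing states
```

In a full network simulation, this update would be applied to every node each step, with contact minutes read off the time-dependent edges of the proximity network.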

At any time t, each node i is in one of the six health and vital states Si(t), Ei(t), etc. of the SEIHRD model (see Methods). Network DA learns about the probabilities 〈Si(t)〉, 〈Ei(t)〉, etc. of finding an individual node i at time t in each of the different states. We adopt a sequential Bayesian learning approach that propagates an ensemble of individual probabilities 〈Si(t)〉, 〈Ei(t)〉, etc. across the network and periodically updates them and the SEIHRD model parameters with new data [10, 28–30]. Data falling within a DA window of length Δ (typically, Δ ≈ 1 day) are incorporated into the model by adjusting the ensemble to minimize the misfit to the data in the window. An interval Δ later, the updating procedure is repeated (see Methods for details). Such DA cycles and the underlying algorithms are used daily in weather forecasting to estimate up to 10⁹ variables characterizing the state of the atmosphere; they easily scale to network epidemiology models with millions of nodes or more. Essentially all types of data and their error characteristics can be assimilated with this approach, even data that are less sensitive to infectiousness, such as readings of heart rates [31] or body temperatures [20, 21] (Fig 1).
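To make the ensemble update concrete, the sketch below shows a scalar Kalman-style adjustment of one node's ensemble of infectiousness probabilities toward a noisy observation. This is a generic ensemble-filtering sketch under simplifying assumptions (a direct observation of the probability with known error variance), not the paper's exact assimilation algorithm; the function name is hypothetical.

```python
# Sketch of one scalar ensemble Kalman-style update for a single node,
# assuming the data directly observe the node's probability of
# infectiousness with error variance obs_var. Illustrative only.
def ensemble_update(ensemble, observation, obs_var):
    """Shift every ensemble member toward the observation, weighted by
    the ratio of prior ensemble variance to total variance."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / max(n - 1, 1)
    gain = var / (var + obs_var)  # Kalman gain for a direct observation
    return [x + gain * (observation - x) for x in ensemble]
```

A perfectly accurate observation (obs_var = 0) collapses the ensemble onto the observed value, whereas a very uncertain observation leaves the prior ensemble nearly unchanged; repeating such updates every DA window Δ yields the periodic cycling described above.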

Fig 1. Schematic of the personalized risk assessment platform.

Proximity-tracking data from mobile devices is used to assemble a contact network, in which nodes represent individuals and edges represent close contacts between individuals. An epidemiological model defined on the contact network is then fused with diverse health data, including diagnostic tests, hospitalization status, and possibly data such as body temperature readings. The model spreads risk of infectiousness from a positive individual (red) to others, taking into account knowledge about disease progression, the time and duration of contacts, and the use of personal protective equipment (PPE), among other factors. The result of the network DA is an assessment of individual risks, for example, of being infectious, which then can be used to target contact interventions.

Synthetic network for proof-of-concept

To illustrate the methods with simulated data in a computational proof-of-concept, we construct a large synthetic contact network with N = 97,942 nodes and about 1 million connections among them. The network has typical characteristics of a human contact network. It has a time-dependent contact rate minimum at night and a maximum midday, and it has a connectivity (degree) distribution similar to human contact networks: there are many individuals with few connections and a few highly connected individuals who are more likely to become superspreaders [32] (S5 and S6 Figs).
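Two ingredients of such a synthetic network can be sketched as follows: a heavy-tailed degree distribution (many low-degree nodes, a few potential superspreaders) and a diurnal contact-rate cycle with a midday maximum and a nighttime minimum. The exponent, cutoffs, and rate values below are illustrative assumptions, not the paper's calibrated parameters.

```python
import math
import random

# Heavy-tailed degree samples via inverse-CDF sampling of a truncated
# power law: many low-degree nodes, a few highly connected ones.
# alpha, kmin, and kmax are illustrative, not the paper's values.
def sample_degrees(n, alpha=2.5, kmin=2, kmax=100, seed=0):
    rng = random.Random(seed)
    degs = []
    for _ in range(n):
        u = rng.random()
        k = kmin * (1.0 - u) ** (-1.0 / (alpha - 1.0))
        degs.append(min(int(k), kmax))
    return degs

# Diurnal contact-rate modulation: maximum at midday (hour = 12),
# minimum at night (hour = 0). Base and amplitude are illustrative.
def diurnal_contact_rate(hour, base=8.0, amplitude=6.0):
    return base + amplitude * math.cos(2 * math.pi * (hour - 12.0) / 24.0)
```

In the full construction, the sampled degrees would fix each node's number of edges, and the diurnal rate would modulate when those edges are active during the day.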

The network also contains a block representing hospitals, where hospitalized patients are connected to healthcare workers, who in turn are connected to the community in the rest of the network. Transmission rates in hospitals are reduced by a factor of 10 to reflect the use of PPE, which has proven effective in making SARS-CoV-2 transmission in hospitals rare (Methods). The purpose of explicitly including hospitals in the network architecture is twofold: first, to illustrate how reliable data such as hospital admittance records can be incorporated in the network DA approach; second, to enable comparison of hospitalization rates in the simulated and real epidemic while mimicking the reduced transmission rate in hospitals. Realistic human contact networks contain other structures, such as households, workplaces, and schools [33]. Such features are not explicitly taken into account in our synthetic network architecture; rather, contact clusters arise randomly in the synthetic network. In the real world, such clusters would arise naturally in the contact network assembled from proximity data, without the need to account for them explicitly.

As surrogates for real-world health data, we use stochastic simulations of the epidemiology model for the state variables Si(t), Ei(t), etc. on the network. We reproduce various scenarios of the early COVID-19 epidemic in New York City (NYC), beginning during March 2020. Because age is an important risk factor for COVID-19 severity, we assign ages to nodes based on the age distribution of NYC and use them to model age-dependent disease progression according to current knowledge about COVID-19 (Methods). While we assign ages to nodes randomly, the realism of the model could be improved with age-stratified contact patterns [34, 35]. With the resulting surrogate worlds of contact patterns and disease progression, we explore how individual risk assessment and epidemic management and control can be achieved in what-if scenarios.


Lockdown and world avoided

As an illustrative example, we simulate the evolution of an epidemic that, when scaled up from our network size to the NYC population of 8.3 million, resembles the early evolution of the COVID-19 epidemic in NYC in 2020 (Fig 2).

Fig 2. Evolution of an outbreak in surrogate-world simulations with a lockdown (blue) and without (orange).

The left column shows infection rates and the right column death rates. Upper row for cumulative counts and lower row for daily counts, smoothed with a 7-day moving average filter. Red bars represent confirmed and probable COVID-19 deaths and confirmed infection rates for New York City [36], with the red axis labels on the right for confirmed infection counts. Solid lines indicate the corresponding counts in the simulations, with the black axis labels on the left for infections on the network. (The axes for infections in the simulations are stretched by a factor 10 relative to the axes for confirmed NYC infections, reflecting the undercount of infections by confirmed cases [37]). The light lines show 20 simulations that only differ by random seeds, with the thicker lines indicating the ensemble mean; thus, they give an indication of sampling variability. The average contact rate for all nodes is reduced by 58% from March 25, 2020 onward to mimic the lockdown effect (blue solid line).

If the contact rate on the network remains unchanged in the simulations, the number of infections and deaths rises from early March into April, with daily deaths reaching a peak of around 21 per 100,000 population in the second half of April.

However, this world was avoided by a lockdown, which became mandatory in NYC from March 22 onward. In its wake, the number of daily new cases and deaths began to decline from mid-April onward (Fig 2). We can reproduce similar behavior in the stochastic simulations by reducing the average contact rate of all nodes by 58% starting March 25 (Fig 2). The infection rates in the stochastic simulations exceed the number of confirmed cases in NYC, presumably because the latter undercount actual infections [37]. However, the death rates in the stochastic simulations are close to the NYC death rate (Fig 2). The hospitalization rates in the simulation also track the actual hospitalization rates closely (S8 Fig).

Thus, the simulated epidemics on the synthetic network reproduce statistics similar to the actual early epidemic in NYC, with realistic parameter choices for transmission rates and disease progression (Methods). Notwithstanding the simplifications of the network structure, this points to the qualitative adequacy of the synthetic network epidemiology model as a testbed for network DA, which makes no a priori assumptions about the structure of the network.

Accuracy of individual risk assessment

To demonstrate the accuracy of individual risk assessments, we assume the network DA platform has users who exchange proximity data with each other, with the user base covering between 25% and 100% of the population. In an idealization, the contact patterns of individuals within the user base are assumed to be known completely; the contact patterns of individuals outside the user base are assumed unknown. We also assume the number of external contacts of individuals in the user base to be known, for example, from proximity-sensing devices that can also detect devices of non-users. (However, we have verified that this assumption can be replaced by only assuming knowledge of the average number of external contacts, without material effects on the results; see Methods and S15 and S16 Figs). For subsets of the users, we assimilate results of simulated rapid diagnostic tests from the corresponding nodes in the surrogate-world simulation. A fraction f = 1%, 5%, or 25% of the user base is assumed to be tested daily, with results available on the day of test administration; that is, every user is tested on average every 100, 20, or 4 days (Methods). Testing the population of a major metropolitan center such as NYC every 100 or 20 days is achievable with current testing capacity. For more limited user bases, test rates of 25% per day within the user base are locally achievable and in fact are routine, for example, on some college campuses.

We first illustrate network DA in the worst-case scenario of the free-running synthetic epidemic, in which contact patterns do not change. DA begins on March 5. We show results for April 9, near the epidemic peak, when about 7% of the population are infectious, and for April 30, when new infections are waning (Fig 2). (In this free-running epidemic, the maximum prevalence of infectiousness is considerably higher than in the lockdown simulations, in which prevalence peaks at 1.5%–2%—more in line with what actually occurred during the lockdown in NYC).

We classify users i as possibly infectious (“positive”) when the estimated probability of infectiousness 〈Ii(t)〉 exceeds a threshold cI. The true positive rate (TPR)—the rate at which users who are infectious in the stochastic surrogate-world simulation are classified as positive—naturally increases as the classification threshold cI decreases; at the same time, the predicted positive fraction (PPF)—the rate at which users overall are classified as positive, whether correctly or incorrectly—also increases because the false positive rate (FPR) increases. Receiver operating characteristic (ROC) curves trace out these competing changes in TPR and PPF (or FPR) as the classification threshold cI is varied (Fig 3). Choosing a classification threshold cI means finding a trade-off between wanting a high TPR while keeping the FPR, and hence the PPF, low.
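The classification step and the two metrics traced out by the ROC curves can be written down directly. The sketch below is a minimal illustration; the function names and data layout (a dictionary of per-user infectiousness probabilities) are assumptions of this sketch.

```python
# A user is flagged "positive" when the assessed probability of
# infectiousness exceeds the classification threshold c_I.
def classify(prob_infectious, c_I):
    """Map {user: P(infectious)} to the set of users flagged positive."""
    return {u for u, p in prob_infectious.items() if p > c_I}

# TPR: fraction of truly infectious users that are flagged.
# PPF: fraction of all users that are flagged, correctly or not.
def tpr_and_ppf(flagged, truly_infectious, n_users):
    tp = len(flagged & truly_infectious)
    tpr = tp / len(truly_infectious) if truly_infectious else 0.0
    ppf = len(flagged) / n_users
    return tpr, ppf
```

Sweeping c_I from 1 down to 0 and recording (PPF, TPR) pairs at each threshold traces out one ROC curve of the kind shown in Fig 3.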

Fig 3. Receiver operating characteristic (ROC) curves for classification as possibly infectious.

ROC curves trace out the true positive rate (TPR) vs. the predicted positive fraction (PPF) as the classification threshold is varied. TPR and PPF are given relative to the size of the user base. Green shades of the ROC curves from lighter to darker correspond to increasing diagnostic test rates f. Left column for April 9; right column for April 30. (a, b) For the ideal user base encompassing the whole population. For comparison, the filled circles are for a test-only scenario when only users with positive diagnostic tests are classified as positive. (The 1%/day case falls outside the plotting region; values for panel (a) are (7×10⁻⁴, 0.008) and for (b) are (2×10⁻⁴, 0.01).) The open circles are for a contact-tracing scenario in which additionally prior close contacts of users with positive diagnostic tests are classified as positive. Also shown is a sensors-only scenario in which 75% of the user base is assumed to provide daily body temperature readings. (c, d) For user bases consisting of neighborhoods in the network covering 25%, 50%, and 75% of the total population (S7 Fig), with the same test rates f in shades of green as in (a, b). The black dashed line represents a random classifier that provides a lower bound on performance.

In the ideal albeit unrealistic scenario when the user base encompasses the whole population, TPRs for April 9 are 12%, 19%, and 47% for a PPF of 8% and test rates of f = 1%, 5%, and 25%, respectively. Later in the epidemic, for April 30, TPRs are 13%, 27%, and 59% (Fig 3A and 3B). That is, the classification results improve as the network model learns about the evolution of the epidemic. The classification results are insensitive to the user base coverage: the accuracy of the classification does not change for user bases consisting of neighborhoods in the network covering between 25% and 100% of the total population, even though the scenarios with more limited user bases only use contact information for the users, not for non-users (Fig 3C and 3D). The results are also insensitive to the user base topology (S7 Fig): classification performance is not substantially affected whether the user base consists of neighborhoods in the total population network (Fig 3) or of randomly selected nodes (S9 Fig).

To put these results in context, compare them with the following two traditional approaches:

  • If only users with positive diagnostic tests are classified as positive, TPRs reach 0.8%, 4%, and 22% for test rates f = 1%, 5%, and 25%, respectively, with PPFs of 0.07%, 0.4%, and 1.8% on April 9, and corresponding TPRs of 1%, 4%, and 23% with PPFs of 0.02%, 0.1%, and 0.7% on April 30 (Fig 3A and 3B, filled circles). This test-only TPR is close to but slightly smaller than f because the test sensitivity is less than 100%. Classification by network DA can achieve much higher TPRs than testing alone, especially at low test rates, at the expense of increased but still modest PPFs.
  • Contact tracing and exposure notification apps classify as positive users with positive diagnostic tests, plus their potentially exposed nearest neighbors on the network. Following standard contact-tracing protocols, individuals are classified as positive if, over the 10 days preceding the diagnosis, they had at least one contact lasting more than 15 minutes with a user who had a positive diagnostic test. The resulting contact-tracing TPRs for April 9 are 2.4%, 11%, and 45% for test rates of f = 1%, 5%, and 25%, with PPFs of 1%, 5%, and 23% (Fig 3A and 3B, open circles). For April 30, the corresponding TPRs are 2%, 6%, and 32% with PPFs of 0.2%, 1.5%, and 7.6%. Network DA exploits the same data as contact tracing and exposure notification apps but achieves substantially higher TPRs at the same PPF. For example, at the same PPF as contact tracing, network DA achieves about a 40% higher TPR than contact tracing for April 9, and about a 100% (factor 2) higher TPR for April 30 (Fig 3A and 3B, vertical lines above open circles). That is, network DA in this synthetic example exploits the exact same data as contact tracing or exposure notification apps, but it does so much more effectively.
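The contact-tracing baseline in the second bullet reduces to a simple rule: flag everyone with a positive test, plus anyone who had at least one contact longer than 15 minutes with a tested individual in the 10 days before that individual's positive test. The sketch below implements this rule; the data layout (test dates keyed by user, contacts as tuples) is a hypothetical convenience of this sketch.

```python
# Sketch of the contact-tracing classifier described above. Assumed data
# layout: positive_tests maps user -> day of positive test; contacts is a
# list of (user_a, user_b, day, duration_minutes) records.
def contact_trace(positive_tests, contacts, window_days=10, min_minutes=15):
    """Return the set of flagged users: positives plus their contacts
    of > min_minutes within window_days before the positive test."""
    flagged = set(positive_tests)
    for a, b, day, minutes in contacts:
        if minutes <= min_minutes:
            continue
        for tested, other in ((a, b), (b, a)):
            t = positive_tests.get(tested)
            if t is not None and t - window_days <= day <= t:
                flagged.add(other)
    return flagged
```

Note that the rule only reaches the nearest neighbors of tested users; network DA instead propagates graded risk beyond this one-hop neighborhood, which is the source of its higher TPR at the same PPF.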

Network DA can also be used to assess quantitatively to what extent lower-fidelity data can improve classification. As an example, we conducted a set of experiments in which 75% of the users were assumed to report body temperatures daily—for example, with wearable sensors [21]—with infectiousness indicated by elevated temperature readings with 20% sensitivity [20]. Such temperature readings improve the classification when no or few (f = 1%) diagnostic tests are available; however, they do not provide a substantial benefit when f = 5% of the user base or more can be tested daily (Fig 3A and 3B). Nonetheless, if widely adopted, temperature sensors can provide a modest benefit when diagnostic testing capacity is low [21].

The results show that network DA allows identification of a large fraction of infectious individuals, provided widespread testing is available. The improved identification of infectious individuals over traditional methods is insensitive to the fraction of the population covered by the user base, to the user base topology, and to stochastic variability of the epidemic. Network DA extends classification beyond the nearest network neighbors on which contact tracing and exposure notification apps focus. This gives it an advantage especially when testing capacity is limited.

The capability of network DA to identify infectious individuals can be used to tailor individualized contact interventions for epidemic management and control. For epidemic management and control to be effective, however, it is important not only that the classification accuracy is high but also that the user base is large enough that a large fraction of infectious individuals can be identified in the population as a whole, rather than just within the user base.

Risk-tailored contact interventions

The individual risk assessments can be used to prompt contact interventions for those who are classified as possibly infectious. As an illustrative example of such individual contact interventions, we assume that users of the app self-isolate by reducing their contact rate with others by 91%, to an average of 4 contacts per day, during the time when they are classified as positive and for 5 days thereafter; all others in the population, whether app users or not, do not change their behavior. As a baseline for comparison, we present TTI scenarios with the same contact rate reduction but continuing over 14 days after diagnosis or identification as possibly exposed through contact with an infectious individual. For this baseline TTI scenario, an individual is classified as exposed if, over the preceding 10 days, they had at least one contact lasting more than 15 minutes with an individual who had a positive diagnostic test; that is, the contact-tracing stage of this baseline TTI emulates techniques used in exposure notification apps, relying on the same data as those available for network DA in our synthetic examples. For a direct and fair comparison with network DA, TTI compliance is assumed to be confined to the user base. We use uniform testing regimes with test rates f = 1%, 5%, and 25% within the user base. As classification threshold, we choose a fixed cI = 1%, resulting in TPR ≳ 40% and PPF ≲ 9% when contact interventions commence. Choosing the classification threshold cI adaptively, in response to the current prevalence of infectiousness in the population, may further improve the results.
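The isolation rule for network DA users can be stated compactly: while a user is classified positive, and for 5 days after the last day they were flagged, their contact rate is cut by 91%. The sketch below encodes this rule; the baseline rate of 44 contacts per day is an assumption chosen so that a 91% reduction yields the average of about 4 contacts per day mentioned above.

```python
# Sketch of the risk-tailored isolation rule: a 91% contact-rate
# reduction while flagged positive and for 5 days thereafter. The
# baseline of 44 contacts/day is an illustrative assumption.
def contact_rate(day, flagged_days, base_rate=44.0,
                 reduction=0.91, tail_days=5):
    """Contact rate on `day`, given the set of days the user was
    classified positive."""
    recently = any(d <= day <= d + tail_days for d in flagged_days)
    return base_rate * (1.0 - reduction) if recently else base_rate
```

The TTI baseline differs only in its trigger (positive test or traced exposure instead of a risk threshold) and in its longer, fixed 14-day isolation window.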

In the idealized but unrealistic case with full user base coverage, the epidemic is more strongly suppressed with the network DA interventions than in the lockdown scenario, with 50–70% fewer cumulative deaths (Fig 4). However, whereas in the lockdown scenario the entire population has reduced contacts, with network DA only a small fraction of the population self-isolates. The self-isolation fraction has an initial peak of 15–17% for about a week and then falls quickly to 5–10%, with damped relaxation oscillations over several weeks in the case with lower test rates (f = 5%); 50% of those who isolate do so for 7 days or less, and 90% for 14 days or less. That is, in this idealized case, risk-tailored self-isolation achieves effective epidemic control while isolating only a small fraction of the population at a time. Network DA does not drive daily infections to zero, because the classification threshold cI was chosen as a compromise between wanting a reliable classification with a high TPR and avoiding isolation of too large a fraction of the population with too high a PPF (Fig 3). For comparison, TTI with 100% compliance does not achieve epidemic control at a test rate f = 1%; at a test rate f = 5%, cumulative deaths are 3 times higher than with network DA because TTI misses more infections than network DA. At the test rate f = 25%, the cumulative death rate with TTI is comparable to or lower than with network DA, but at the expense of a 2–5 times higher isolated fraction of the population. Whereas the performance of TTI is strongly test-rate dependent, that of network DA is less sensitive to test rate, and it is always more efficient than TTI.

Fig 4. Comparison of different contact intervention scenarios for the full user base.

Shown are the lockdown scenario (blue) from Fig 2, the results of network DA and isolation of positive individuals for test rates f = 1%, 5%, and 25% (greens), and the results of TTI with the same test rates as for network DA (purples).

In the somewhat more realizable case with 75% user base coverage, we simulate a demanding scenario in which testing and contact interventions are confined to the user base; no contact information among non-users is harnessed, and non-users maintain their contact patterns without isolation. In this case, risk-tailored self-isolation still achieves epidemic control at all test rates of f = 1%, 5%, and 25% within the user base (Fig 5), and attains a cumulative death rate similar to that with the full user base. The fraction of the population in isolation again peaks at just over 15% initially and then drops to 5–10%. As before, TTI with 75% compliance and the highest test rate (f = 25%) also achieves epidemic control, but with a higher isolated fraction of the population. At the test rate f = 5%, TTI results in an about four times higher cumulative death rate than isolation tailored by network DA, which additionally isolates fewer individuals. TTI fails to achieve epidemic control at a test rate f = 1%.

Fig 5. Comparison of different contact intervention scenarios for a user base covering 75% of the population.

Plotting conventions as in Fig 4. TTI here is confined to the same user base as network DA, implying 75% compliance.

With a further reduced user base coverage of 50%, classification remains accurate, and isolation tailored by network DA can still achieve epidemic control and remain more effective than a lockdown in preventing infections and deaths (S11 Fig). The initial fraction of the population in isolation increases to around 30% and then drops again to 5–10%. However, this means that initially the majority of the user base (50% of the population) is in isolation, which creates perverse incentives: it effectively puts the user base, but not others, in a lockdown. TTI with 50% compliance fails to control the epidemic for test rates below f = 5% but still achieves some control at f = 25%, albeit with a higher isolated population fraction than with network DA.

For the yet smaller user base coverage of 25%, classification remains accurate (Fig 3); however, here the dominance of non-users within the population, who do not isolate, rules out epidemic control (S13 and S14 Figs). As with any epidemic management measure, control cannot be achieved with low compliance rates.

These results for reduced user bases are for sub-networks consisting of neighborhoods in the overall population network. Results for user bases consisting of nodes selected at random from the overall population are qualitatively similar for 75% coverage, albeit with an adjusted classification threshold and a higher fraction of the population in isolation (S10 Fig). For 50% coverage with a random user base, network DA, while still being able to identify a large fraction of infectious individuals in the user base (S9 Fig), ceases to be effective for epidemic control (S12 Fig); similar behavior is observed in the 25% case. That is, while network topology was rather unimportant for the accuracy of classification, it does play a role for the effectiveness of epidemic management and control strategies. The performance of network DA in managing the epidemic may be improved with data-adaptive classification thresholds.

In scenarios in which the user base and/or test rates are too small to achieve epidemic control, there is still a pronounced reduction in the cumulative death rate of users relative to the general non-user population (Fig 6). For test rates f = 1%, 5%, and 25% per day within the user base consisting of neighborhoods in the overall population network, the cumulative death rate is respectively 29%, 48%, and 42% lower than the death rate among non-users. Additionally, although the contact interventions are confined to the user base, the death rate in the non-user population is still reduced by about 50% compared with the no-intervention scenario (Fig 2). For a user base consisting of nodes selected at random, the results are qualitatively similar: Death rates among users relative to non-users are reduced by 47%, 52%, and 56% for respective test rates f = 1%, 5%, and 25% (S17 Fig).

Fig 6. Cumulative death rate of users vs. non-users for the user base consisting of neighborhoods in the overall population network.

Individual contact interventions are applied within the user base from March 15 onward.

That is, risk-tailored isolation on the basis of network DA generally outperforms TTI as an epidemic management and control approach when both are presented with the same contact and test data. Even when it does not achieve epidemic control because of low compliance rates, it still offers advantages to users in terms of reduced death rates.

Discussion
We have demonstrated a platform concept for individual health risk assessment, which exploits the same proximity data from mobile devices that exposure notification apps rely upon but is substantially better at identifying infectious individuals. It achieves these gains by assimilating crowdsourced data from diverse sources into an epidemiological model defined on a contact network. Network DA provides informative and actionable risk assessments for individuals, even when only a modest fraction of the population uses the app through which the proximity data are obtained. The accuracy of the risk assessments is largely independent of the fraction of the population using the platform and of the user base topology; it improves with increasing diagnostic test rates, as should be expected.

When the user base is sufficiently large (covering around 75% of the population), the platform can be used to tailor interventions that are more efficient for epidemic management and control than lockdowns or TTI. For example, with a user base covering 75% of the population and users tested every 20 days, simulations for NYC showed that risk-tailored self-isolation achieves epidemic control with 63% fewer deaths than during NYC’s lockdown, with typically only 5–10% of the population in isolation at any given time. This risk-tailored isolation approach is more effective at preventing infections and deaths than a TTI approach that uses the same contact and diagnostic test data. Our experiments were solely based on self-isolation among app users, without considering other public health interventions. As a result, 75% coverage may be a conservative estimate. In reality, multiple non-pharmaceutical interventions will likely be employed simultaneously at the population level, which may reduce the user coverage required to achieve epidemic control.

We have produced a modular codebase that allows for exploration and benchmarking of tools to manage and control epidemics in a synthetic setting. To validate and further optimize our choices of diagnostic test and intervention strategies, further analyses are required. For example, our results may be improved by the inclusion of additional information from the contact network or more data-adaptive use of the risk assessments provided by network DA. Additionally, it is possible to learn about the parameters that appear in the network epidemiology model; we have only skimmed the surface with respect to what is possible in this regard, so far with limited success (Methods). Further investigation to delineate which model parameters are identifiable from data would be beneficial.

The platform has a relatively low barrier to widespread implementation. It can be realized by expanding the computational backend of existing exposure notification apps. High-precision proximity data are now available through Bluetooth protocols [15], and lower-precision location data from mobile devices have been exploited commercially for some time. Statistical techniques may be required to optimize the reconstruction of contact networks from such proximity data in practical implementations with imperfect knowledge of contact patterns [38]. To be effective, the platform requires that users provide proximity data and other crowdsourced data, such as test results and reports of clinical symptoms. The more detailed the data users make available, the more accurate and detailed the risk profiles that can be produced in return. Uptake rates of exposure notification apps have already reached up to 75% in some urban areas, as in our simulated scenarios (e.g., more than 90% of Singapore’s population over 6 years of age [39] is using an exposure notification app). Uptake rates on a national scale have so far been more modest (e.g., a third of the UK population [18]), in part because of rural-urban digital divides, but probably also because of the limited information provided by current exposure notification apps. However, smartphone usage rates worldwide are around 50% and continue to grow rapidly [40]; thus, widespread use of network DA in future epidemics will become technically possible. And while routine surveillance test rates in much of the world are still low, more widespread surveillance testing on the scale of major cities or regions is at this point feasible; for example, NYC is already testing up to roughly 2.5% of its population daily [41].
Our conclusions provide further evidence of the benefits of widespread testing, especially when that is combined with network DA to spread the test information over dynamic contact networks assembled from proximity data.

Challenges to widespread and successful adoption of a network DA platform center around equity, compliance, and privacy questions. Smartphone use is not equitably distributed within the population, and there are disincentives (e.g., unavailability of sick leave) to comply with individual contact interventions. Conversely, classification of users as “low risk” may encourage risky and counterproductive behavior. It is also unknown, and we did not address, how correlations between smartphone use, compliance, and factors influencing infection risk would affect our results. An additional impediment to widespread adoption of network DA is concern about protecting users’ privacy. The network DA platform requires data to be transferred temporarily to a central computing facility for data assimilation [42]. This makes the platform more difficult to harden against malicious exploitation than exposure notification apps, which only require central data exchange when there is direct evidence of an infection [43]. Nonetheless, the data need not be stored beyond a data assimilation window that is at most a few days long. Additionally, the platform requires only anonymized proximity data but not absolute location data, and it does not rely on humans in the loop, reducing risks of malicious exploitation. There may be ways to harden the platform itself and the data exchange with users against privacy breaches [44].

The network DA platform provides obvious benefits in managing and controlling epidemics, for example, in reducing the need for lockdowns while preventing infections and deaths, and in providing users tools to manage their personal risks. It provides a scalable alternative to manual TTI programs, and a backend that delivers more accurate and actionable information than current digital TTI and exposure notification programs developed by many governments [39, 45]. The effectiveness of such programs has been modelled [79], but their impact in practice is only beginning to be elucidated [18]. Given that many TTI programs are voluntary, and documentation of contacts in manual programs is subjective, it will be important to compare both the control and cost effectiveness of manual and digital tracing programs with the more objective and automated network DA approach presented here.

In addition to its health impacts, the COVID-19 pandemic has exacted an enormous economic toll on countries throughout the world [3, 4]. There is a continuing need to identify approaches that precisely and effectively control epidemics while minimizing economic disruption. With sufficient uptake and testing, the platform described here provides a means for achieving these dual aims.

Methods and models

SEIHRD model on a contact network

We consider a population of N individuals i (with i = 1, …, N). At any time t, an individual i is in exactly one of six possible health and vital states:

  1. Susceptible Si(t) when they can get infected with the virus;
  2. Exposed Ei(t) when infected with the virus but not yet infectious;
  3. Infectious Ii(t) when shedding the virus (with or without clinical symptoms) but not hospitalized;
  4. Hospitalized Hi(t) when hospitalized with active disease, in which case individuals are assumed to be shedding virus;
  5. Resistant Ri(t) when immune to the disease through either vaccination or immunity conferred by a prior infection (we assume lifelong resistance for now but can relax that assumption if evidence becomes available that immunity is temporary);
  6. Deceased Di(t).

We take Si(t), Ei(t), Ii(t), Hi(t), Ri(t), and Di(t) to be Bernoulli random variables that depend on time t and take only the values 0 and 1. That is, Si(t) = 1 when individual i is susceptible at time t, and otherwise Si(t) = 0 (and analogously for the other variables). Because the six SEIHRD states enumerate all health and vital states of individuals in this model, we have

Si(t) + Ei(t) + Ii(t) + Hi(t) + Ri(t) + Di(t) = 1. (1)

Therefore, there are only 5 independent states.

In the network epidemiology model, a close contact between individuals i and j establishes a temporary network edge with weight wji(t) = 1 for the duration τ of the contact; outside the contact period, wji(t) = 0. Transmission along the temporary edges from one node to another and transitions between health and vital states within each node are modeled as independent Poisson processes [22, 23, 27, 46]. Each process is characterized by a rate that may vary from node to node and may depend on external variables such as age, sex, and medical risk factors (see S1 Fig for a schematic).

We make the following assumptions about the transmission rate and the parameters characterizing transition rates between SEIHRD states, including prior distributions used in the network model for DA:

  • Transmission rate: During the contact period between an infectious or hospitalized individual (Ij(t) = 1 or Hj(t) = 1) and a susceptible individual (Si(t) = 1), virus can be transmitted across the edge between nodes j and i. When transmission occurs, the susceptible node i becomes exposed and switches state to Ei(t) = 1. During the contact period in which an edge is active (wji(t) = 1), we assume the transmission rate to a susceptible node with Si(t) = 1 from an infectious node with Ij(t) = 1 is β aji(t), and that from a hospitalized node with Hj(t) = 1 is β a′ji(t). The parameter β is a transmission rate across active edges, which we set to a global constant in the stochastic surrogate-world simulations and learn on a nodal basis in the model used for DA; aji(t) and a′ji(t) are time-dependent transmission modifiers that can be adjusted to incorporate additional information that may be available, for example, user-supplied information that individual i is using PPE at time t. In our proof-of-concept simulations, we use aji(t) = 0.1 within hospitals and aji(t) = 1 otherwise, to reflect the rarity of SARS-CoV-2 transmission in hospitals in which PPE is worn [47]. (In reality, however, depending on the types of PPE and adherence to hygiene protocols, the degree of transmissibility reduction may vary substantially among hospitals [47].) A typical value for the transmission rate of respiratory viruses is around β = 0.5 h−1 = 12 day−1 [48].
    Because we model transmission as a Poisson process, the probability that transmission occurs during contact increases with the duration of the contact period τ, e.g., for an infectious node as 1 − exp(−β aji τ) [49]. (This holds provided the contact period τ is short relative to the duration of infectiousness, so that the infectiousness status of a node does not change during contact.)
    In the model used for DA, we do not assume perfect knowledge of the transmission rate; instead, we learn a partial transmission rate βi for each node i, and compute transmission rates from node j to node i as the averages aji(t)(βi + βj)/2 and a′ji(t)(βi + βj)/2. We assume independent normal priors for βi for each node, with a mean of 12 day−1 and a standard deviation of 3 day−1. We truncate these distributions to [1 day−1, 20 day−1], though in practice these bounds are rarely reached.
  • Latent period: Exposed nodes with Ei(t) = 1 transition to being infectious with Ii(t) = 1 at the rate σi, which is the inverse of the latent period: the time it takes for an exposed individual to become infectious. For COVID-19, the latent period lies between about 2 days and about 12 days [6, 10, 50]. We take the latent period to be fixed for each node i but heterogeneous across nodes. In the model used for DA, we represent it as σi−1 = 1 day + li, where li has a gamma prior distribution with shape parameter k = 1.35 and scale parameter θ = 2 day; hence, the minimum latent period is 1 day, and its prior mean value is 3.7 days (1 day + kθ = 1 day + 2.7 days).
  • Duration of infectiousness in community: Infectious nodes with Ii(t) = 1 transition to resistant, hospitalized, or deceased at the rate γi, which is the inverse of the duration of infectiousness in the community (i.e., outside hospitals). For COVID-19, the median duration of infectiousness is around 3.5 days [10], but its distribution has a long tail, for example, from individuals with serious or critical disease progression [12]. Like σi, we take γi to be fixed for each node i but heterogeneous across nodes. In the model used for DA, we model the duration of infectiousness as γi−1 = 1 day + gi, where gi has a gamma prior distribution with shape parameter k = 1.1 and scale parameter θ = 2 days; hence, the minimum duration of infectiousness is 1 day, and its prior mean value is 3.2 days [10, 12].
  • Duration of hospitalization: Hospitalized nodes with Hi(t) = 1 transition to resistant or deceased at the rate γ′i, which is the inverse of the duration of hospitalization. As before, we take γ′i to be fixed for each node i but heterogeneous across nodes. In the model used for DA, we model the duration of hospitalization as (γ′i)−1 = 1 day + g′i, where g′i has a gamma prior distribution with shape parameter k = 1.0 and scale parameter θ = 4 days; hence, the minimum duration of hospitalization is 1 day, and its prior mean value is 5 days. We assume hospitalized nodes are infectious. (If there is evidence that a hospitalized patient no longer sheds the virus, this can be taken into account by setting the transmission rate modifier aji(t) from the corresponding node to zero; however, we are not considering this situation in our proof-of-concept.)
  • Hospitalization rate: We assume a fraction hi of infectious nodes with Ii(t) = 1 requires hospitalization after becoming infectious. More precisely, we assume that infectious nodes transition to becoming hospitalized at the rate hi γi. This implies that, over a period Δt that is short relative to the duration of infectiousness γi−1, the probability of transitioning from being infectious to hospitalized, relative to the total probability of leaving the infectious state, is hi γi Δt/(γi Δt) = hi. We take hi to be fixed for each node i but heterogeneous across nodes; it generally depends on age and other risk factors [25, 51]. We model the age dependence in the stochastic surrogate-world simulations according to clinical data as described below (Table 1), and we assume the same parameters in the model used for DA.
  • Mortality rate: We assume a fraction di of infectious nodes with Ii(t) = 1 and a fraction d′i of hospitalized nodes with Hi(t) = 1 die. More precisely, we assume infectious nodes die at the rate di γi, and hospitalized nodes die at the rate d′i γ′i. Both di and d′i are fixed for each node but are heterogeneous across nodes, depending on age and other risk factors [25, 51]. Both in the stochastic surrogate-world simulation and in the model used for DA, we assume the same age-dependent mortality rates (Table 1).
  • Resistance: For now, we assume resistance to be lifelong, so that an individual who becomes resistant remains so indefinitely and does not return to being susceptible. This assumption can be relaxed by allowing transitions back to the susceptible state if resistance is not permanent.
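As a toy illustration of two ingredients above, the sketch below evaluates the Poisson contact-transmission probability 1 − exp(−β aji τ) for a 2-minute contact and samples the shifted-gamma latent-period prior; the parameter values follow the text, but the function names are our own.

```python
import math
import random

def transmission_probability(beta, a_ji, tau):
    """Probability that an active edge transmits during a contact of length tau
    (all times in days), for a Poisson process with rate beta * a_ji."""
    return 1.0 - math.exp(-beta * a_ji * tau)

def sample_latent_period(rng, k=1.35, theta=2.0):
    """Latent period (days) as 1 day plus a Gamma(k, theta) variate,
    matching the shifted-gamma prior with mean 1 + k*theta = 3.7 days."""
    return 1.0 + rng.gammavariate(k, theta)

rng = random.Random(0)
samples = [sample_latent_period(rng) for _ in range(200_000)]
mean_latent = sum(samples) / len(samples)  # close to 3.7 days
# beta = 12/day, a_ji = 1, tau = 2 minutes expressed in days
p = transmission_probability(beta=12.0, a_ji=1.0, tau=2.0 / (24 * 60))
```

For the typical 2-minute contact, the transmission probability per contact is small (about 1.7%), which is why the number of contacts, not any single contact, drives the infectious pressure.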

Table 1. Age-dependent mean hospitalization and mortality rates in the surrogate-world simulation.

The share f(a) of the population in each age group a is taken from U.S. Census data [69]. The age-dependent death rate in hospitals d′ is obtained from cumulative hospitalization and death rates in NYC by June 1, 2020 [36], under the assumption that 90% of deaths occurred in hospitals. Age-dependent hospitalization rates h(a) and mortality rates d(a) in the community (outside hospitals) are difficult to obtain directly from NYC data because of an age-dependent undercount of infections [37]. We choose hospitalization rates h(a) that approximate data from France [70], adjusting the rates so that the overall hospitalization rate is ∑a f(a)h(a) ≈ 3.1%, which is NYC’s overall hospitalization rate if one assumes a cumulative COVID-19 incidence rate of 23% [71], together with NYC’s actual hospitalization count (52,333 on June 1, 2020) and population (8.34 million) [36]. Similarly, the mortality rate in the community d(a) is chosen such that the overall infection fatality rate is ∑a f(a)[d(a) + h(a)d′(a)] ≈ 1.1%, which is NYC’s overall infection fatality rate if one considers the same cumulative incidence of 23% and the confirmed and probable cumulative death count from COVID-19 (21,607 by June 1, 2020).

The health and vital states and transition rates define a Markov chain for the individual-level SEIHRD states. The SEIHRD Markov chain on a contact network can be simulated directly with kinetic Monte Carlo methods [52], as in previous studies [46, 48, 53, 54]. We use kinetic Monte Carlo simulations both to benchmark a model for the SEIHRD probabilities and to provide a surrogate for the real world in our proof-of-concept simulations.
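The kinetic Monte Carlo (Gillespie-type) simulation of the SEIHRD chain can be sketched as follows; the static network, rate values, and single-seed initialization are illustrative placeholders rather than the calibrated NYC setup, and the hospital transmission modifier is omitted for brevity.

```python
import random

def gillespie_seihrd(adj, beta, sigma, gamma, gamma_p, h, d, d_p, t_max, seed=0):
    """Direct stochastic simulation of the SEIHRD chain on a static network.

    adj: dict node -> list of neighbors; all rates in 1/day.
    """
    rng = random.Random(seed)
    state = {i: "S" for i in adj}
    state[0] = "I"  # seed a single infectious node
    t = 0.0
    while t < t_max:
        events = []  # (rate, node, new_state)
        for i, s in state.items():
            if s == "E":
                events.append((sigma, i, "I"))
            elif s == "I":
                events.append((gamma * (1 - h - d), i, "R"))
                events.append((gamma * h, i, "H"))
                events.append((gamma * d, i, "D"))
            elif s == "H":
                events.append((gamma_p * (1 - d_p), i, "R"))
                events.append((gamma_p * d_p, i, "D"))
            elif s == "S":
                # infectious and hospitalized neighbors both shed virus
                pressure = beta * sum(1 for j in adj[i] if state[j] in ("I", "H"))
                if pressure > 0:
                    events.append((pressure, i, "E"))
        total = sum(r for r, _, _ in events)
        if total == 0:
            break  # absorbing configuration reached
        t += rng.expovariate(total)      # time to next event
        x = rng.uniform(0.0, total)      # choose event proportionally to rate
        for r, i, new in events:
            x -= r
            if x <= 0.0:
                state[i] = new
                break
    return state
```

Averaging many such runs gives the nodal SEIHRD probabilities, but at far greater cost than solving the reduced master equations below.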

Reduced master equations

We are principally interested in the individual SEIHRD probabilities, which are the expected values 〈Si(t)〉, 〈Ei(t)〉, etc. associated with the Bernoulli random variables for the states. That is, 〈Si(t)〉 is the probability that individual i is susceptible at time t.

These probabilities could be obtained as averages over an ensemble of kinetic Monte Carlo simulations; however, it is more computationally efficient to solve reduced master equations for the probabilities directly. The equations are [23, 55]

d⟨Si(t)⟩/dt = −Pi(t) − ηi⟨Si(t)⟩, (2a)
d⟨Ei(t)⟩/dt = Pi(t) + ηi⟨Si(t)⟩ − σi⟨Ei(t)⟩, (2b)
d⟨Ii(t)⟩/dt = σi⟨Ei(t)⟩ − γi⟨Ii(t)⟩, (2c)
d⟨Hi(t)⟩/dt = hiγi⟨Ii(t)⟩ − γ′i⟨Hi(t)⟩, (2d)
d⟨Ri(t)⟩/dt = (1 − hi − di)γi⟨Ii(t)⟩ + (1 − d′i)γ′i⟨Hi(t)⟩, (2e)
d⟨Di(t)⟩/dt = diγi⟨Ii(t)⟩ + d′iγ′i⟨Hi(t)⟩, (2f)

where

Pi(t) = β Σj wji(t)[aji(t)⟨Si(t)Ij(t)⟩ + a′ji(t)⟨Si(t)Hj(t)⟩] (2g)

is the total infectious pressure on node i from within the network formed by the users. The infectious pressure represents the possibility of transmission to node i from all network nodes that are at least temporarily connected with node i. Additionally, we have included an exogenous infection rate ηi. This allows for infection from outside the network of users when the master equation network represents only a subset of a larger network with N nodes, and so transmission can occur from unaccounted nodes. The exogenous infection rate ηi is scaled by the number of external neighbors of node i that are not part of the user network, by the probability ⟨wi⟩ of an edge of node i being active, and by the time-dependent prevalence of infectiousness P(t), estimated from the network of users as described below in (11). The probability of exogenous infection then increases with the prevalence of infectiousness P(t) within the user base, which is taken as a proxy of prevalence outside the user base. In an idealization that may not be achievable in practice, we take the number of external neighbors as given from the network structure. In practice, the number of external neighbors can be estimated through use of the same proximity technologies (e.g., Bluetooth) on which exposure notification apps rely, which allow the sensing of other nearby mobile devices, even if they do not participate in the proximity sensing and exposure notification protocol. While this is unlikely to yield perfect knowledge about the number of external neighbors, it may be combined with statistical approximations [38].
The net effect of these assumptions and approximations is that a user surrounded by other users will have no exogenous infection rate, while users with many external neighbors will have a larger exogenous infection rate. We have confirmed in simulations that exact knowledge of the number of external neighbors can be replaced by statistical knowledge; for example, replacing the node-dependent number of external neighbors by its user-network average for all nodes (exterior connectivity in Table 2) yields similar results (S15 and S16 Figs).
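A minimal forward integration of the reduced master equations (2) can be sketched as follows, using the mean-field closure ⟨SiIj⟩ ≈ ⟨Si⟩⟨Ij⟩ for simplicity and illustrative parameter values on a static network; by construction the six probabilities of each node sum to 1 at all times.

```python
def integrate_master(w, beta, sigma, gamma, gamma_p, h, d, d_p, eta,
                     t_max, dt=0.01):
    """Forward-Euler sketch of the SEIHRD master equations on a static
    weighted network w (n x n list of lists); all rates in 1/day."""
    n = len(w)
    S = [0.99] * n; E = [0.0] * n; I = [0.01] * n
    H = [0.0] * n; R = [0.0] * n; D = [0.0] * n
    for _ in range(int(t_max / dt)):
        # infectious pressure under the mean-field closure <S_i I_j> ~ <S_i><I_j>
        P = [beta * S[i] * sum(w[i][j] * (I[j] + H[j]) for j in range(n))
             for i in range(n)]
        dS = [-P[i] - eta * S[i] for i in range(n)]
        dE = [P[i] + eta * S[i] - sigma * E[i] for i in range(n)]
        dI = [sigma * E[i] - gamma * I[i] for i in range(n)]
        dH = [h * gamma * I[i] - gamma_p * H[i] for i in range(n)]
        dR = [(1 - h - d) * gamma * I[i] + (1 - d_p) * gamma_p * H[i]
              for i in range(n)]
        dD = [d * gamma * I[i] + d_p * gamma_p * H[i] for i in range(n)]
        for i in range(n):
            S[i] += dt * dS[i]; E[i] += dt * dE[i]; I[i] += dt * dI[i]
            H[i] += dt * dH[i]; R[i] += dt * dR[i]; D[i] += dt * dD[i]
    return S, E, I, H, R, D
```

Because the right-hand sides sum to zero for each node, the Euler step conserves total probability exactly (up to floating-point rounding), which is the property the DA algorithm later has to enforce explicitly after its linear updates.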

Table 2. Details of the different user bases.

The percentage gives the approximate fraction of the total population N belonging to the user base. The interior defines how many users are completely surrounded by other users. The exterior connectivity gives the average number of exterior nodes connected to a node inside the user base.

We integrate these ordinary differential equations with a Runge-Kutta-Fehlberg 4(5) scheme, with an adaptive timestep of maximum length 3 hours. The weights wji(t) vary on shorter timescales. This is taken into account in the numerical integration by averaging wji(t) over the length of a time step, rather than evaluating wji(t) at discrete intervals.
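The time averaging of wji(t) over an integration step can be sketched as follows; representing an edge's activations as a list of (start, end) intervals is a hypothetical simplification of the contact-data bookkeeping.

```python
def average_weight(intervals, t, dt):
    """Fraction of the integration step [t, t + dt] during which an edge is
    active, given its activation periods as (start, end) pairs (days)."""
    active = 0.0
    for a, b in intervals:
        lo, hi = max(a, t), min(b, t + dt)
        if hi > lo:
            active += hi - lo
    return active / dt
```

This time-averaged weight, rather than a point evaluation of wji(t), multiplies the joint probabilities in the infectious pressure during each step.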

Closure of reduced master equations

The master equations (2) for the probabilities are not closed because they depend on the joint probabilities 〈Si(t)Ij(t)〉 and 〈Si(t)Hj(t)〉 in the infectious pressure (Eq 2g). Various approaches to closing this term have been proposed [23, 27, 55]. Our approach is to estimate it from the ensemble used in the DA cycle, as follows.

The joint-event probability 〈Si(t)Ij(t)〉 and the marginal probabilities 〈Si(t)〉 and 〈Ij(t)〉 in the master equations are related through the ratio

CSI,ji(t) = 〈Si(t)Ij(t)〉 / (〈Si(t)〉〈Ij(t)〉), (3)

which is the rescaled joint probability of Si(t) and Ij(t). We estimate the rescaled joint probability by its ensemble analogue

ĈSI,ji(t) = EM[〈Si(t)〉(m)〈Ij(t)〉(m)] / (EM[〈Si(t)〉(m)] EM[〈Ij(t)〉(m)]), (4)

where EM denotes the mean over the ensemble (with m = 1, …, M labeling ensemble members). Thus, we approximate the joint probability in the infectious pressure (Eq 2g) as

〈Si(t)Ij(t)〉(m) ≈ ĈSI,ji(t) 〈Si(t)〉(m)〈Ij(t)〉(m). (5)

With this empirical approximation, we obtain a closed-form expression for the second moment 〈Si(t)Ij(t)〉(m) for each ensemble member m, which we use in the reduced master equations. The second moment 〈Si(t)Hj(t)〉(m) follows analogously. If ĈSI,ji(t) = 1 and ĈSH,ji(t) = 1, this reduces to the mean-field approximation that is commonly made in epidemiological models [23, 55] and that often is accurate on real-world networks [56].

We verified this closure against direct kinetic Monte Carlo simulations of the SEIHRD model on the synthetic network described below, in the free-running NYC simulation without lockdown (Fig 2). The closure performs similarly to the mean-field approximation and adequately, albeit not perfectly, captures the stochastic network dynamics (S2 and S3 Figs). The closure correction coefficients (4) concentrate close to the value of 1 (S4 Fig), which explains the similar performance to the mean-field approximation.
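The closure (3)–(5) amounts to a simple ensemble computation, sketched here for a single node pair; the ensemble values are illustrative, and the function names are our own.

```python
def closure_coefficient(S_i, I_j):
    """Ensemble estimate of the rescaled joint probability
    C = <S_i I_j> / (<S_i><I_j>); S_i, I_j are lists of per-ensemble-member
    probabilities for nodes i and j."""
    M = len(S_i)
    mean_joint = sum(s * x for s, x in zip(S_i, I_j)) / M
    mean_S = sum(S_i) / M
    mean_I = sum(I_j) / M
    return mean_joint / (mean_S * mean_I)

def closed_joint(S_i, I_j):
    """Closed second moment per ensemble member, Eq (5)-style."""
    c = closure_coefficient(S_i, I_j)
    return [c * s * x for s, x in zip(S_i, I_j)]
```

When all ensemble members agree, the coefficient is exactly 1 and the closure collapses to the mean-field approximation; anticorrelated ensembles give coefficients below 1, damping the infectious pressure accordingly.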

Data assimilation algorithm

For DA, we use a version of the ensemble adjustment Kalman filter (EAKF) [28], which has previously been used with epidemiological models [5, 10, 29, 30]. EAKF treats an ensemble of M model parameters and states 〈Si(t)〉(m), 〈Ei(t)〉(m), etc. (m = 1, …, M) from a previous DA cycle as a prior and then linearly updates the ensemble of model parameters and states to obtain an approximate Bayesian posterior given the available data. Unlike other algorithms for computing Bayesian posteriors on networks [57], it makes no assumptions about the network structure, and it scales well to high-dimensional problems [28].

To learn about parameters and the states of nodes on the network, we use a scheme based on iterating forward passes of the master equations over a time window Δ, with EAKF updates between each pass; a similar scheme has been used in epidemiology models before [5, 10, 29, 30]. In this way, we effectively use EAKF as a smoother, harnessing all available data in a DA window (tf − Δ, tf). There are two parts to the DA procedure:

  1. Update stage: An EAKF update step is performed to assimilate all data available for the window (tf − Δ, tf), using the previous ensemble model run as prior. The mismatch between the simulated ensemble trajectories and the data is used to update the combined ensemble of parameters and states at the initial time tf − Δ.
  2. Forecast stage: The updated ensemble of states 〈Si(tf − Δ)〉(m), 〈Ei(tf − Δ)〉(m), etc., with the updated model parameters, is integrated forward up to time tf, to serve as prior for the next update cycle.

EAKF relies on linear updates and assumes Gaussian error statistics. However, the forward equations (2) are nonlinear. As a result, the EAKF update does not always conserve total probability, in the sense that SEIHRD probabilities for each node will not always sum to 1. We therefore augment the state with an additional total probability conservation variable, with observation equal to the target probability sum 1. The Gaussian assumption is also at odds with probabilities in [0, 1]. We experimented with transforming the variables to an unbounded space, but this renders total probability conservation highly nonlinear. We found it more robust to work in the original space, where total probability conservation is a linear constraint. This approach does, however, violate Gaussianity assumptions about the ensemble when we enforce the probability bounds by clipping the states 〈Si〉(m), 〈Ei〉(m), etc. to [0, 1].

We assume data errors to be uncorrelated, so that their error covariance matrix is diagonal (see below for how we specify error variances in the proof-of-concept simulations). Prior information on parameters and states is specified in EAKF through the initial condition of the ensemble. We draw the parameters of the ensemble from the above-specified prior distributions, and we initialize the state by seeding each ensemble member with a fraction of (possibly different) infectious nodes, the rest being susceptible. The initial fraction of infectious nodes is drawn from a beta distribution with shape parameters α = 0.0016 and β = 1 (not to be confused with the transmission rate β). The mean fraction (here, 0.16%) of initially infected nodes agrees with the fraction of initially infected nodes in the stochastic surrogate-world simulations.
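The core EAKF adjustment can be sketched for a single observed scalar: the ensemble mean is moved to the Kalman posterior mean and the anomalies are contracted so that the posterior variance is matched exactly. This is a textbook building block, not the full multi-pass, localized implementation described here.

```python
import math

def eakf_scalar_update(ensemble, obs, obs_var):
    """Deterministic (square-root) ensemble adjustment for one observed scalar.

    The posterior mean and variance follow the standard Kalman formulas; the
    anomalies are rescaled so the sample variance matches the posterior
    variance exactly, with no perturbed observations."""
    M = len(ensemble)
    mean = sum(ensemble) / M
    var = sum((x - mean) ** 2 for x in ensemble) / (M - 1)
    post_var = 1.0 / (1.0 / var + 1.0 / obs_var)
    post_mean = post_var * (mean / var + obs / obs_var)
    shrink = math.sqrt(post_var / var)
    return [post_mean + shrink * (x - mean) for x in ensemble]
```

With localization (step 1 below), updates of this form are applied node by node, which keeps the cost linear in the number of nodes.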

To account for the multi-fidelity nature of the assimilated data, we perform EAKF in multiple passes. This allows for better conditioned data covariance matrices and for different hyperparameter choices for the different types of data. We perform the following passes to assimilate data from the lowest to the highest fidelity:

  • In a first EAKF pass, we update parameters and states at tf − Δ using the poorest fidelity data (e.g., temperature sensor data), followed by a forecast over (tf − Δ, tf);
  • In a second EAKF pass, we update parameters and states at tf − Δ using moderate-accuracy diagnostic test data, followed by another forecast over (tf − Δ, tf);
  • In a final EAKF pass, we update parameters and states at tf − Δ using data about hospitalization and death status with small error variances, followed by a final forecast over (tf − Δ, tf).
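The multi-pass cycle above reduces to a simple loop over datasets ordered by fidelity; `update` and `forecast` below are hypothetical stand-ins for the EAKF update at tf − Δ and the master-equation integration over the window.

```python
def assimilate_window(ensemble, data_by_fidelity, update, forecast):
    """One DA window (t_f - Delta, t_f): for each dataset, from lowest to
    highest fidelity, adjust the ensemble at the window start and then
    re-integrate forward to t_f to serve as the next prior."""
    for data in data_by_fidelity:
        ensemble = update(ensemble, data)   # EAKF pass at t_f - Delta
        ensemble = forecast(ensemble)       # forward pass to t_f
    return ensemble
```

Ordering the passes from lowest to highest fidelity lets the small-error hospitalization and death data have the last word on the posterior.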

There are three well-established challenges that ensemble-based filters must tackle when assimilating a number of parameters/states that is large relative to the ensemble size [58]: overestimation of long-range covariances, underestimation of variances, and ensemble collapse.

  1. To prevent spurious long-range covariances, we localize the effect of observations on states within a single node [58, 59]. That is, direct updates of a nodal state are only due to observations at that node during the DA window. This also provides large computational savings because EAKF updates may be performed sequentially node-by-node, in any order.
  2. To prevent underestimation of variances by the finite-size ensemble, which can lead to discounting of data points [28], and to ensure well-posedness of the matrix inversions, we use regularization of the ensemble covariance matrix Σ. If Λmin and Λmax denote the minimum and maximum eigenvalues of Σ, we replace Σ in the EAKF algorithm with Σ + max(δΛmax − Λmin, δmin)I. We choose δ = 5/M to assimilate diagnostic test data, and δ = 1/M to assimilate hospitalization/death status; δmin is taken to be the mean observational noise standard deviation of the update.
  3. To prevent ensemble collapse, we add a hybrid inflation to an assimilated state x(m) with a map x(m) ↦ x̄ + a(x(m) − x̄) + ξ(m), where x̄ is the ensemble mean state and ξ(m) is Gaussian noise with mean zero and standard deviation b [58]. We take a = 3.0 and b = 0.1.
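The covariance regularization in item 2 can be sketched as follows, under our reading of the formula as Σ ↦ Σ + max(δΛmax − Λmin, δmin) I; the 2×2 example values are illustrative.

```python
import numpy as np

def regularize_covariance(cov, delta, delta_min):
    """Add shift * I to a symmetric ensemble covariance, with
    shift = max(delta * lambda_max - lambda_min, delta_min), so that the
    smallest eigenvalue is bounded away from zero before inversion."""
    eig = np.linalg.eigvalsh(cov)                  # ascending eigenvalues
    shift = max(delta * eig[-1] - eig[0], delta_min)
    return cov + shift * np.eye(cov.shape[0])
```

Because the shift is a multiple of the identity, the eigenvectors of Σ are unchanged; only its spectrum is lifted, which keeps the update directions intact while making the inversions well posed.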

Because of the binary nature of the hospitalization and death data, we do not update these states directly; doing so can lead to shocks in the system dynamics. We only update the SEIR states 〈Si(tf − Δ)〉, 〈Ei(tf − Δ)〉, 〈Ii(tf − Δ)〉, 〈Ri(tf − Δ)〉 at the beginning of a DA window tf − Δ ≤ ttf when assimilating hospitalization and death data that fall within the DA window.

Synthetic network for proof-of-concept

We generate a synthetic time-dependent contact network in two steps:

  1. We generate a static degree-corrected stochastic block model (SBM) [60, 61], consisting of N nodes in three groups. The three groups represent (a) hospitalized patients, (b) healthcare workers with contacts both within hospitals and in the community, and (c) the community of all remaining individuals (e.g., people in an urban environment). Hospital beds in group (a) are filled when infected nodes become hospitalized; we assume an infinite supply of hospital beds. Healthcare workers in group (b) make up 5% of all nodes, and the remaining 95% of nodes constitute group (c).
    We describe connections within groups (a) and (b) with an Erdős–Rényi model and use mean degrees of 5 in group (a) and 10 in group (b), based on social-contact analyses in a hospital setting [62]. Hospitalized patients in group (a) can interact only with each other and with the healthcare workers in group (b). To model the interactions between groups (a) and (b), we set the corresponding mean degrees per node to 5 for edges connecting the groups. We parameterize the contacts among nodes in the community group (c) with a power-law degree correction. As pointed out in [63], when groups are ignored, degree distributions associated with social interactions are well-described by a negative binomial distribution, which, for example, has also been used to describe degree distributions associated with sexual-contact networks [64]. In the presence of groups, however, degree distributions of social-interaction networks have been found to exhibit a power-law tail with an exponent of about 2.5 [63]. In accordance with the results presented in [63, 65], we therefore describe parts of the synthetic contact network by a stochastic block model with power-law degree correction with exponent 2.5, mean degree k̄, and maximum degree 100; S5 Fig shows the degree distribution. The community (c) as a group only interacts with healthcare workers (b), and we set the corresponding mean degree to 5.
  2. To model time-dependence of the network, we make the edges of the static SBM network created in the first step time-dependent by switching them on and off. That is, neighbors of all nodes remain fixed in time, but their connections are activated or deactivated with time. We account for day/night cycles in the edge weights wji(t), but we ignore, e.g., weekly cycles. We generate a diurnal cycle that replicates some properties of observed time-dependent human contact networks [66]: The fraction of active edges is small at night and in the early morning hours, reaches a maximum around noon, and approaches small values again in the evening.
    To model the time-dependence of wji(t), we use a birth-death process commonly used in queuing theory. The birth-death process is a Markov chain in which arrivals (edge activations) are inhomogeneous Poisson processes with a diurnally varying mean rate Aji(t); contact durations are exponentially distributed with a mean contact duration τ (i.e., a mean rate parameter μ = τ−1). We choose a mean duration of τ = 2 min (hence μ = 720 day−1), based on high-resolution human contact data [48]. We model the mean edge activation rate as (6) Here, t = 0 starts at midnight, and is the mean degree of the network in the community group (c), so that , when averaged over edges, is an average contact rate per node. The diurnally averaged edge activation rate then is (7) For the minimum and maximum contact rates per node, λi,min and λi,max, we choose the default values λmin = 4 day−1 and λmax = 84 day−1. If the default contact rates apply for all nodes, this gives for the community group (c) a mean contact rate per node of (8) this is about a factor 3–4 larger than typical human contact rates as assessed by self-reports [67], consistent with the fact that we also take fleeting contacts into account that would likely not be self-reported. The minimum and maximum contact rates λi,min and λi,max for a node i are the principal parameters we vary to explore the effect of contact interventions. Reducing λi,min and λi,max for a node reduces the fraction of time edges connecting to node i are active. The contact rate and total contact duration over the network for five simulated days are displayed in S6 Fig.
    The time dependencies of all edges wji(t) are treated as independent. We update the time-dependence of each edge at midnight every simulated day, running independent birth-death processes with parameters Aji(t) and μ for the next day.
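The edge birth-death process described above can be sketched in a few lines of Python. The raised-cosine diurnal profile below (minimum at midnight, peak at noon) and the normalization of the per-edge rate by the mean degree are illustrative assumptions on our part, not the exact form of Eq (6):

```python
import math
import random

def activation_rate(t_day, lam_min=4.0, lam_max=84.0, mean_degree=10.0):
    """Diurnal mean edge-activation rate A(t) in events/day: small at night,
    peaking around noon. The cosine profile and the division by the mean
    degree are illustrative choices, not the exact form of Eq (6)."""
    diurnal = 0.5 * (1.0 - math.cos(2.0 * math.pi * t_day))  # 0 at midnight, 1 at noon
    return (lam_min + (lam_max - lam_min) * diurnal) / mean_degree

def simulate_edge(duration_days=1.0, mu=720.0, seed=0):
    """One edge's contact intervals over a day: activations arrive as an
    inhomogeneous Poisson process with rate A(t) (simulated by thinning);
    contact durations are Exp(mu), i.e., mean 2 min for mu = 720/day."""
    rng = random.Random(seed)
    a_max = activation_rate(0.5)  # the rate profile peaks at noon
    t, intervals = 0.0, []
    while True:
        t += rng.expovariate(a_max)          # candidate activation (thinning)
        if t >= duration_days:
            break
        if rng.random() < activation_rate(t % 1.0) / a_max:
            end = min(t + rng.expovariate(mu), duration_days)
            intervals.append((t, end))
            t = end                           # the edge is off while active
    return intervals

contacts = simulate_edge()
```

With the default parameters, the expected number of contacts per edge per day is roughly the diurnally averaged activation rate, (4 + 84)/2/10 ≈ 4.4.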

If a node becomes hospitalized, it is deactivated at its previous location in the network and transferred to the hospital group (a). Hospitalized nodes are assumed to be infectious. (This assumption may later be relaxed to model patients who are no longer infectious but remain hospitalized with ongoing disease.)

Different choices of network architecture are, of course, possible and justifiable. The network merely serves to generate simulated data for our proof-of-concept, and the algorithms we demonstrate adapt to any network architecture, which in practice would be provided by proximity data. We do not expect our results to depend sensitively on our choice of network architecture.

Selection of subnetwork for user base

To select a subnetwork for a user base of a given size, we construct subgraphs with two different topologies. First, a neighbor-based subgraph is constructed from a randomly selected seed user, by adding all neighbors of this user to the subgraph, then in a greedy fashion adding all neighbors of each member of this new subgraph, and so on, until the desired user population is reached. Second, a random subgraph is constructed by randomly choosing nodes from the full network. S7 Fig illustrates the different user base topologies, and Table 2 summarizes their characteristics.
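The two subgraph constructions can be sketched as follows (a minimal Python illustration on a toy ring network; the function names and toy network are ours):

```python
import random
from collections import deque

def neighbor_based_subgraph(adj, seed_user, n_users):
    """Grow a user base by repeatedly adding all neighbors of current
    members (breadth-first), until n_users nodes are reached."""
    users, queue = {seed_user}, deque([seed_user])
    while queue and len(users) < n_users:
        for v in adj[queue.popleft()]:
            if v not in users:
                users.add(v)
                queue.append(v)
                if len(users) == n_users:
                    break
    return users

def random_subgraph(nodes, n_users, rng=random.Random(1)):
    """Choose n_users nodes uniformly at random from the full network."""
    return set(rng.sample(list(nodes), n_users))

# Toy network: a ring of 20 nodes.
adj = {i: [(i - 1) % 20, (i + 1) % 20] for i in range(20)}
nb = neighbor_based_subgraph(adj, 0, 7)
rnd = random_subgraph(adj.keys(), 7)
```

On the ring, the neighbor-based user base forms one contiguous arc around the seed, whereas the random user base is scattered, mirroring the topological contrast in S7 Fig.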

From Table 2, we see that the topology of the user base affects the average number of external neighbors. To mitigate this effect, we take into account that users can be infected by neighbors external to the user base, through the additional infectious pressure terms in the equations for and in Eq (2). Such neighbors are still detectable by proximity technologies (e.g., Bluetooth), but because they are not users of the network DA protocol, we do not know their current state.

Surrogate world simulation

To generate surrogate worlds with which to test the DA algorithm and interventions, we simulate epidemics on the synthetic network stochastically with kinetic Monte Carlo methods [52]. For these stochastic simulations (but not for the model used for DA), we choose mean transmission and transition rates between SEIHRD states that are homogeneous across nodes, except for hospitalization and mortality rates that depend on age. The mean rates we use are based on current knowledge about COVID-19 (Table 3). We simulate 20 epidemics that only differ in their random seed. They are initialized on March 5, 2020, with 0.16% of nodes randomly assigned to be infectious.
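As an illustration of the kinetic Monte Carlo approach [52], a minimal Gillespie simulation of a susceptible-infectious-resistant epidemic on a static graph is sketched below; the surrogate worlds themselves use the full SEIHRD model with the time-dependent network and the rates in Table 3:

```python
import random

def gillespie_epidemic(adj, beta, gamma, init_infected, t_max, seed=2):
    """Minimal kinetic Monte Carlo (Gillespie) epidemic on a static graph:
    each susceptible node is infected at rate beta times its number of
    infectious neighbors; each infectious node recovers at rate gamma."""
    rng = random.Random(seed)
    state = {v: "S" for v in adj}
    for v in init_infected:
        state[v] = "I"
    t, events = 0.0, []
    while t < t_max:
        # Rate of every possible event: infection of an S node, or recovery.
        rates = {}
        for v, s in state.items():
            if s == "S":
                k = sum(state[u] == "I" for u in adj[v])
                if k:
                    rates[v] = beta * k
            elif s == "I":
                rates[v] = gamma
        total = sum(rates.values())
        if total == 0.0:
            break                       # no infectious nodes remain
        t += rng.expovariate(total)     # exponential waiting time to next event
        if t >= t_max:
            break
        x = rng.random() * total        # pick an event with prob proportional to rate
        for v, r in rates.items():
            x -= r
            if x <= 0.0:
                state[v] = "I" if state[v] == "S" else "R"
                break
        events.append((t, v, state[v]))
    return state, events

# Toy run on a complete graph of 30 nodes, seeded with one infection.
adj = {i: [j for j in range(30) if j != i] for i in range(30)}
final, events = gillespie_epidemic(adj, beta=0.1, gamma=1.0,
                                   init_infected=[0], t_max=50.0)
```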

Table 3. Mean transmission and transition rates and maximum/minimum contact rates for the surrogate-world simulations with the stochastic SEIHRD model (S1 Fig).

The mean rates are taken to be the same for all nodes; hence, the nodal indices are suppressed. The latent period σ−1 and the duration of infectiousness in the community γ−1 are approximated from those in refs. [10] and [12]; the duration of hospitalization γ′−1 is from ref. [68], and the transmission rate β is fit to be consistent with data for respiratory viruses [48] and to roughly reproduce NYC data.

The dependencies of the hospitalization rate hi = h(a) and the mortality rates di = d(a) and d′i = d′(a) on age a are estimates based on recent data (Table 1). To model age-dependent disease progression, we randomly assign ages to network nodes in the community group (c) according to the age distribution for NYC, as given by U.S. Census data [69]. Additional factors we neglect in our synthetic examples, such as age-dependent contact patterns, are likely small perturbing factors for the risk assessment results we show. We assign ages to nodes in the healthcare worker group (b) according to the age distribution among working-age adults (21–65 years old). Initially, there are no hospitalized nodes (i.e., group (a) is empty).

The network has 97,942 nodes (with the difference from 100,000 arising from stochastic effects in the generative algorithm). We choose the global mean transmission rate β so that our simulations are qualitatively aligned with the evolution of the COVID-19 epidemic in NYC [36]. We find that a global value of β = 12 day−1 can qualitatively reproduce the observed rate of infections and can quantitatively reproduce the rate of hospitalizations and deaths during the initial phases of the epidemic in NYC.

Synthetic data

We sample synthetic data from the stochastic surrogate-world simulation on the network with N nodes and assimilate data for a subset of users (possibly only part of the network) in the reduced master equation model. We consider the following data and error rates:

  • A positive virus test for node i is taken to imply (9) at the time the test sample is taken. The positive predictive value (PPV) is calculated as (10) where we take the sensitivity of the test to be 80% and the specificity to be 99%, which we use as an approximation of the currently imprecisely known actual sensitivities and specificities [72, 73]. As an estimate of the infectiousness prevalence P(t) in the population, we use the average of the infectiousness probabilities both over the network of size and over the ensemble of size M, (11) The cutoff of is included to guard against erroneously assuming prevalence to be zero because of subsampling on the reduced network. For the DA, we assume an error rate of 1 − PPV for a positive test result.
  • A negative virus test for node i is similarly taken to imply (12) at the time the test sample is taken. The false omission rate (FOR) is calculated as with the same sensitivity, specificity, and prevalence as for a positive virus test. For the DA, we assume an error rate equal to FOR for a negative test result.
  • To assimilate low-fidelity data such as those from temperature sensors, we assume 〈Ii(t)〉 = PPV as for a positive virus test when they indicate infectiousness (e.g., when a temperature reading is elevated). However, we use a sensitivity of 20% and specificity of 98% to reflect the lack of sensitivity of temperature sensors in detecting COVID-19 infection [20]. For the DA, we assume an error rate equal to 1 − PPV, analogous to a positive virus test.
  • Data about hospitalization with COVID-19 imply that Hi(t) = 1 for the duration of hospitalization. We assume the hospitalization status of all users to be known with certainty, that is, we assimilate the hospitalization status Hi(t) = 0 or Hi(t) = 1 for all users; however, we only update the SEIR probabilities at the beginning of a DA window tf − Δ ≤ t ≤ tf with hospitalization data.
  • Death implies Di(t) = 1. We assume the vital status of all users to be known with certainty, that is, we assimilate Di(t) = 0 or Di(t) = 1 for all users; as for hospitalization, however, we only update the SEIR probabilities at the beginning of a DA window tf − Δ ≤ t ≤ tf with death data.
  • For completeness, we state that a positive serological test for SARS-CoV-2 for node i would be taken to imply with the positive predictive value calculated from (10). Typical values for sensitivity would be 90% and for specificity 95% [74], and the prevalence of resistance can be estimated from the resistance probabilities on the reduced master equation network. We would again assume an error rate equal to 1 − PPV. However, we did not assimilate simulated serological tests in our proof-of-concept because currently achievable serological test rates are low.
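The Bayes-rule calculations behind these error rates can be sketched as follows; the 1% prevalence below is an arbitrary illustrative value, whereas in the DA the prevalence is estimated from the ensemble as in Eq (11):

```python
def ppv(sens, spec, prev):
    """Positive predictive value, Eq (10): P(infectious | positive test)
    by Bayes' rule. The DA error rate for a positive result is 1 - PPV."""
    return sens * prev / (sens * prev + (1.0 - spec) * (1.0 - prev))

def false_omission_rate(sens, spec, prev):
    """False omission rate: P(infectious | negative test); the DA error
    rate for a negative result."""
    return (1.0 - sens) * prev / ((1.0 - sens) * prev + spec * (1.0 - prev))

# Virus test (sensitivity 80%, specificity 99%) at an assumed 1% prevalence:
p_virus = ppv(0.80, 0.99, 0.01)
f_virus = false_omission_rate(0.80, 0.99, 0.01)
# Temperature sensor (sensitivity 20%, specificity 98%) at the same prevalence:
p_sensor = ppv(0.20, 0.98, 0.01)
```

At low prevalence the PPV of even a specific test is well below 1 (here about 0.45), which is why assimilating a positive test with error rate 1 − PPV, rather than as certain, matters.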

To model data errors, we randomly corrupt the synthetic data sampled from the surrogate world network with the false positive and false negative rates implied by the sensitivity (false negative rate = 1 − sensitivity) and specificity (false positive rate = 1 − specificity).
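A minimal sketch of this corruption step, assuming the virus-test sensitivity and specificity quoted above:

```python
import random

def noisy_result(truly_infectious, sens=0.80, spec=0.99, rng=random):
    """Corrupt one synthetic test result: a true infection is missed with
    probability 1 - sensitivity (false negative); a non-infection triggers
    a false positive with probability 1 - specificity."""
    if truly_infectious:
        return rng.random() < sens
    return rng.random() >= spec

rng = random.Random(3)
pos_rate = sum(noisy_result(True, rng=rng) for _ in range(10000)) / 10000
fp_rate = sum(noisy_result(False, rng=rng) for _ in range(10000)) / 10000
```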

Testing strategy.

We use a simple testing strategy that randomly tests a given budget of nodes once per day. Our framework provides a testbed for different strategies. We found that random testing consistently outperformed three other simple strategies: (i) concentrating the test budget on near-neighbors of positively tested nodes; (ii) continuous testing of a fixed subset of the population; and (iii) testing nodes with high predicted infectiousness values. We attribute this to the low prevalence of disease. However, it is possible that more effective testing strategies can be discovered that exploit estimated nodal states, the network structure, and the intervention strategy. In a real-world scenario, systematic biases in testing (e.g., testing biased toward certain workplaces or educational institutions) may also affect quantitative details of our results.

Parameter learning.

In addition to assimilating probabilities of SEIHRD states, we can in principle learn about parameters in the reduced master equation model (2), for example:

  • Individual partial and time-dependent transmission rates βi;
  • Individual inverse latent periods σi;
  • Individual inverse durations of infectiousness γi and hospitalization ;
  • Individual hospitalization rates hi and mortality rates di and ;
  • Exogenous infection rates ηi.

We have not fully explored the efficacy of learning about the different parameters from data. For now, we include only the partial transmission rates βi, the inverse latent periods σi, and the durations of infectiousness γi and hospitalization in the DA, all with the priors stated above. S18 Fig shows the prior distributions of the parameters at the beginning of the epidemic, as well as the posterior distributions as the epidemic evolves and the network model learns about the parameters. The results show that the DA does not refine the prior estimates of the parameters. When priors were not initially centered on the true values, they remained biased during the simulation. Further investigations focusing, for example, on learning statistical averages of parameters rather than individual node-per-node parameters would be beneficial.

The hospitalization rates hi and mortality rates di and are fixed at the same values as in the stochastic surrogate-world simulation (Table 1). We assume the exogenous infection rates ηi to be equal to the partial transmission rates βi, and we estimate the probability of an edge of node i being active as (13) where is the diurnally averaged edge activation rate, (14) with (15) With our parameters, this is for isolated nodes and otherwise. For a stationary birth-death process, this estimate for 〈wi〉 is the stationary probability of an edge being active; it approximates the diurnally averaged probability in the case of the birth-death process with diurnally varying edge activation rates. Through this probability 〈wi〉, the exogenous infectious pressure depends on the isolation status of a node.
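For a stationary two-state (on/off) process with activation rate A and deactivation rate μ, the probability of being active is A/(A + μ). The sketch below evaluates this estimate under two illustrative assumptions of ours (the exact normalization in Eqs (13)–(15) is not reproduced here): the per-edge activation rate is the node contact rate divided by the mean degree, and the diurnally averaged contact rate is the midpoint (λmin + λmax)/2:

```python
def stationary_active_prob(lam_bar, mean_degree=10.0, mu=720.0):
    """Stationary P(edge active) = A / (A + mu) for a two-state on/off
    Markov chain with per-edge activation rate A = lam_bar / mean_degree
    (an illustrative normalization) and deactivation rate mu = 720/day."""
    a = lam_bar / mean_degree
    return a / (a + mu)

# Default node: diurnal average taken as (4 + 84)/2 = 44 contacts/day.
w_default = stationary_active_prob(44.0)
# Isolated node: lam_max = lam_min = 4 contacts/day.
w_isolated = stationary_active_prob(4.0)
```

With these assumptions an edge of a non-isolated node is active roughly 0.6% of the time (about 4.4 contacts/day of 2 min each), an order of magnitude more than for an isolated node; it is through this ratio that isolation reduces the exogenous infectious pressure.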

Classification in network DA.

Nodes i in the community group (c) are classified as possibly infectious or not according to the threshold criterion (16). Here, cI is a classification threshold, which can be determined from receiver operating characteristic (ROC) curves as an optimal tradeoff between achieving high true positive rates and keeping false positive rates modest. The ROC curves we use are adapted to the setting in which we are primarily interested in the fraction of users that is classified as possibly infectious (and thus may be asked to self-isolate). We plot the true positive rate (TPR: the fraction of truly infectious nodes, with Ii = 1 in the stochastic simulation, that are classified as possibly infectious) against the predicted positive fraction (PPF: the fraction of nodes in the user base classified as possibly infectious). ROC curves are traced out by lowering the classification threshold cI, thereby increasing both TPR and PPF.
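The construction of these ROC curves can be sketched as follows (toy probabilities and true states; in our setup, the probabilities come from the DA ensemble):

```python
def roc_points(probs, truth, thresholds):
    """Trace the ROC variant used here: true positive rate (TPR) vs the
    predicted positive fraction (PPF), obtained by lowering the
    classification threshold c_I on the estimated infectiousness
    probabilities. Returns (threshold, PPF, TPR) triples."""
    n, n_pos = len(probs), sum(truth)
    pts = []
    for c in sorted(thresholds, reverse=True):   # high to low threshold
        flagged = [p > c for p in probs]
        ppf = sum(flagged) / n                    # fraction asked to isolate
        tpr = sum(f and t for f, t in zip(flagged, truth)) / n_pos
        pts.append((c, ppf, tpr))
    return pts

# Toy user base: estimated P(I_i = 1) and true infectiousness states.
probs = [0.40, 0.02, 0.30, 0.001, 0.10, 0.005]
truth = [1, 0, 1, 0, 0, 1]
pts = roc_points(probs, truth, thresholds=[0.25, 0.05, 0.0025])
```

Lowering cI from 25% to 0.25% raises the TPR from 2/3 to 1 in this toy example, but at the cost of flagging 5 of 6 users, which is the tradeoff the ROC curves quantify.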

Classification in TTI.

For the TTI scenarios, we assume the dynamic contact network among users is known, as in the network DA scenarios, and we assume instantaneous tracing. When a node i in the community group (c) tests positive, it is classified as infectious; all nodes that have had at least one 15-minute contact with node i within the preceding 10 days are classified as exposed. All infectious and exposed community nodes are immediately isolated. This TTI scheme mimics the methods of typical exposure notification apps; although it is idealized and overestimates TTI performance achievable in practical settings [8, 75], it provides a fair baseline for comparison with network DA.
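A minimal sketch of this idealized tracing rule, assuming a contact log of (node, node, start time in days, duration in minutes) tuples (a data layout we introduce for illustration):

```python
def tti_classify(contact_log, positive_node, now, window_days=10, min_minutes=15):
    """Idealized instantaneous tracing: flag as exposed every node that had
    at least one contact of >= min_minutes with the positive node during
    the preceding window_days."""
    exposed = set()
    for a, b, t_start, dur_min in contact_log:
        if positive_node not in (a, b):
            continue                       # contact does not involve the index case
        if now - window_days <= t_start <= now and dur_min >= min_minutes:
            exposed.add(b if a == positive_node else a)
    return exposed

log = [
    (1, 2, 9.5, 20.0),   # long contact, inside window -> traced
    (1, 3, 9.7, 5.0),    # too short -> not traced
    (1, 4, 0.5, 30.0),   # outside the 10-day window -> not traced
    (2, 5, 9.8, 25.0),   # not with the index case -> not traced
]
exposed = tti_classify(log, positive_node=1, now=12.0)
```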

Contact interventions.

We implement two types of intervention scenarios in our test cases. In the first, a lockdown scenario (Fig 2), we set λi,max for all nodes in the community group (c) to 33 day−1. This amounts to a reduction of the mean contact rate (8) in group (c) by 58%. In the second, a time-limited isolation intervention, we reduce the contact rates of targeted high-risk nodes by setting λi,max = λi,min = 4 day−1; thus, these high-risk nodes are assumed to self-isolate, with only 4 contacts per day on average, corresponding to a reduction of their average contact rate by 91%.

The duration of contact reduction takes three possible values. In the lockdown scenario (Fig 2), all nodes have contact reduction from the inception of the lockdown until the end of the simulation. In the TTI scenario, self-isolating nodes have contact reduction for 14 days, in accordance with current CDC guidelines [76], after which contact rates are reset to the original values. For the network DA scenario, self-isolating nodes have contact reduction until they are classified as negative () for a period of 5 consecutive days, after which contact rates are reset to their original values. This corresponds to reinstatement of original contact rates for 50%, 90% and >99% of isolated nodes within 7, 14 and 21 days, respectively.
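The network-DA release rule, 5 consecutive days of negative classification, amounts to simple bookkeeping, sketched below (function name and input format are ours):

```python
def release_day(daily_classifications, required_negative_days=5):
    """Network-DA release rule: a self-isolating node regains its original
    contact rates after being classified negative for
    required_negative_days consecutive days. daily_classifications[d] is
    True if the node is classified negative on day d; returns the day
    index of release, or None if the streak is never reached."""
    streak = 0
    for day, negative in enumerate(daily_classifications):
        streak = streak + 1 if negative else 0
        if streak == required_negative_days:
            return day
    return None

# Classified negative from day 3 onward -> 5th consecutive negative on day 7.
d = release_day([False, False, False, True, True, True, True, True, True])
```

A single positive classification resets the streak, so intermittently elevated risk estimates keep a node isolated longer, which produces the staggered reinstatement (50%, 90%, >99% within 7, 14, 21 days) reported above.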

Supporting information

S1 Fig. Schematic of SEIHRD model [25].

Infected and hospitalized nodes infect susceptible nodes at rates κI and κH, respectively. After being infected, susceptible nodes become exposed. Exposed nodes become infectious at rate σ. Infected nodes may get hospitalized at rate hγ, die at rate dγ, or become resistant at rate (1 − h − d)γ. Once hospitalized, nodes either become resistant at rate (1 − d′)γ′ or die at rate d′γ′.


S2 Fig. Overall epidemic dynamics from SEIHRD model using mean-field approximation.


S3 Fig. Overall epidemic dynamics from SEIHRD model using mean-field approximation with ensemble correction.


S4 Fig. Histograms of correction coefficients (top row) and (bottom row) at different times during the simulated epidemic.


S5 Fig. Distribution of degrees k in synthetic contact network with 97,942 nodes.


S6 Fig. Dynamic contact network behavior in the first five simulated days, batched into 3-hour windows (starting at midnight).

Displayed are the ensemble-averaged and node-averaged contact rate and total contact duration.


S7 Fig. Illustration of different user base topologies.

(a) Neighbor-based user base, constructed by iteratively adding neighborhoods. (b) Random subnetwork of users. Red nodes and edges are part of a user base, grey nodes and edges of the overall population. The shown networks have 982 nodes and 5,916 edges. Both user bases contain 5% of all nodes.


S8 Fig. Hospitalization rates in surrogate-world simulation with a lockdown (blue) and without (orange).

The left panel shows cumulative hospitalizations and the right panel daily hospitalizations per 100,000 population for the same simulations as those in Fig 2. Red bars represent COVID-19-related hospitalization rates for New York City [36]. As in Fig 2, the simulation data are smoothed with a 7-day moving average filter.


S9 Fig. Receiver operating characteristic (ROC) curves for classification as possibly infectious.

As in Fig 3, but for subnetworks with randomly selected nodes rather than for subnetworks with a neighborhood topology. For the filled circles, the 1%/day case falls outside the plotting region; values for panel (A) are (7×10−4, 0.009) and for panel (B) are (2×10−4, 0.01).


S10 Fig. Comparison of different contact intervention scenarios for random user base with .

As in Fig 5, but with a subnetwork with randomly selected nodes and with a classification threshold cI = 0.25%.


S11 Fig. Comparison of different contact intervention scenarios for neighborhood user base with and with a classification threshold cI = 0.5%.


S12 Fig. Comparison of different contact intervention scenarios for random user base with and with a lower classification threshold cI = 0.25%.

As in S11 Fig, but with a subnetwork with randomly selected nodes.


S13 Fig. Comparison of different contact intervention scenarios for neighborhood user base with and with a classification threshold cI = 0.25%.


S14 Fig. Comparison of different contact intervention scenarios for random user base with and with a lower classification threshold cI = 0.01%.

As in S13 Fig, but with a subnetwork with randomly selected nodes.


S15 Fig. As in Fig 5, comparison of different contact intervention scenarios for neighborhood user base with and with a classification threshold cI = 1%, but replacing the user-dependent number of external neighbors by the constant exterior connectivity from Table 2.


S16 Fig. As in S10 Fig, comparison of different contact intervention scenarios for random user base with and with a classification threshold cI = 0.25%, but replacing the user-dependent number of external neighbors by the constant exterior connectivity from Table 2.


S17 Fig. Cumulative death rate of users vs. non-users for the user base consisting of nodes selected at random from the overall population network.

Individual contact interventions are applied within the user base from March 15 onward.


S18 Fig. Distribution of ensemble averaged model parameters across nodes as a function of time during the epidemic.

The shaded regions contain 50%, 80% and 90% of the distribution. The dashed line represents the true parameters in the stochastic simulation. During the first 8 days, no DA is performed, and the parameter distributions are the prior distributions.



We thank Tobias Bischoff, Mason Porter, and Andrew Stuart for helpful discussions.


  1. Brauner JM, Mindermann S, Sharma M, Johnston D, Salvatier J, Gavenčiak T, et al. Inferring the effectiveness of government interventions against COVID-19. Science. 2021;371(6531):eabd9338. pmid:33323424
  2. Haug N, Geyrhofer L, Londei A, Dervic E, Desvars-Larrive A, Loreto V, et al. Ranking the effectiveness of worldwide COVID-19 government interventions. Nature Human Behaviour. 2020;4(12):1303–1312. pmid:33199859
  3. König M, Winkler A. COVID-19: Lockdowns, Fatality Rates and GDP Growth. Intereconomics. 2021;56:32–39. pmid:33518787
  4. Chang S, Pierson E, Koh PW, Gerardin J, Redbird B, Grusky D, et al. Mobility network models of COVID-19 explain inequities and inform reopening. Nature. 2021;589:82–87. pmid:33171481
  5. Pei S, Kandula S, Shaman J. Differential Effects of Intervention Timing on COVID-19 Spread in the United States. Science Advances. 2020;6:eabd6370. pmid:33158911
  6. Kissler SM, Tedijanto C, Goldstein E, Grad YH, Lipsitch M. Projecting the transmission dynamics of SARS-CoV-2 through the postpandemic period. Science. 2020;368:860–868. pmid:32291278
  7. Hellewell J, Abbott S, Gimma A, Bosse NI, Jarvis CI, Russell TW, et al. Feasibility of controlling COVID-19 outbreaks by isolation of cases and contacts. Lancet Global Health. 2020;8:E488–E496. pmid:32119825
  8. Ferretti L, Wymant C, Kendall M, Zhao L, Nurtay A, Abeler-Dörner L, et al. Quantifying SARS-CoV-2 transmission suggests epidemic control with digital contact tracing. Science. 2020;368:eabb6936. pmid:32234805
  9. Kucharski AJ, Klepac P, Conlan AJK, Kissler SM, Tang ML, Fry H, et al. Effectiveness of isolation, testing, contact tracing, and physical distancing on reducing transmission of SARS-CoV-2 in different settings: a mathematical modelling study. Lancet Infectious Diseases. 2020;20:1151–1160. pmid:32559451
  10. Li R, Pei S, Chen B, Song Y, Zhang T, Yang W, et al. Substantial undocumented infection facilitates the rapid dissemination of novel coronavirus (SARS-CoV-2). Science. 2020;368:489–493. pmid:32179701
  11. Peak CM, Kahn R, Grad YH, Childs LM, Li R, Lipsitch M, et al. Individual quarantine versus active monitoring of contacts for the mitigation of COVID-19: a modelling study. Lancet Infectious Diseases. 2020;20:1025–1033. pmid:32445710
  12. He X, Lau EH, Wu P, Deng X, Wang J, Hao X, et al. Temporal dynamics in viral shedding and transmissibility of COVID-19. Nature Medicine. 2020;26:672–675. pmid:32296168
  13. Allen D, Block S, Cohen J, Eckersley P, Eifler M, Gostin L, et al. Roadmap to Pandemic Resilience: Massive Scale Testing, Tracing, and Supported Isolation (TTSI) as the Path to Pandemic Resilience for a Free Society. Edmond J. Safra Center for Ethics at Harvard University; 2020.
  14. Watson C, Cicero A, Blumenstock J, Fraser M. A National Plan to Enable Comprehensive COVID-19 Case Finding and Contact Tracing in the US. Baltimore, MD: Johns Hopkins Bloomberg School of Public Health, Center for Health Security; 2020.
  15. Apple/Google. Privacy-Preserving Contact Tracing; 2020.
  16. Cencetti G, Santin G, Longa A, Pigani E, Barrat A, Cattuto C, et al. Digital proximity tracing on empirical contact networks for pandemic control. Nature Communications. 2021;12:1655. pmid:33712583
  17. Lewis D. Contact-tracing apps help reduce COVID infections, data suggest. Nature. 2021;591:18–19. pmid:33623147
  18. Wymant C, Ferretti L, Tsallis D, Charalambides M, Abeler-Dörner L, Bonsall D, et al. The epidemiological impact of the NHS COVID-19 App. Nature. 2021; pmid:33979832
  19. Shental N, Levy S, Skorniakov S, Wuvshet V, Shemer-Avni Y, Porgador A, et al. Efficient high-throughput SARS-CoV-2 testing to detect asymptomatic carriers. Science Advances. 2020;6:eabc5961. pmid:32917716
  20. Bielecki M, Crameri GAG, Schlagenhauf P, Buehrer TW, Deuel JW. Body temperature screening to identify SARS-CoV-2 infected young adult travellers is ineffective. Travel Medicine and Infectious Disease. 2020;37:101832. pmid:32763495
  21. Quer G, Radin JM, Gadaleta M, Baca-Motes K, Ariniello L, Ramos E, et al. Wearable sensor data and self-reported symptoms for COVID-19 detection. Nature Medicine. 2021;27:73–77. pmid:33122860
  22. Meyers LA, Pourbohloul B, Newman ME, Skowronski DM, Brunham RC. Network theory and SARS: predicting outbreak diversity. Journal of Theoretical Biology. 2005;232:71–81. pmid:15498594
  23. Pastor-Satorras R, Castellano C, Mieghem PV, Vespignani A. Epidemic processes in complex networks. Reviews of Modern Physics. 2015;87:925–979.
  24. Bauer P, Thorpe A, Brunet G. The quiet revolution of numerical weather prediction. Nature. 2015;525:47–55. pmid:26333465
  25. Arenas A, Cota W, Gómez-Gardeñes J, Gómez S, Granell C, Matamalas JT, et al. Modeling the Spatiotemporal Epidemic Spreading of COVID-19 and the Impact of Mobility and Social Distancing Interventions. Physical Review X. 2020;10:041055.
  26. Bertozzi AL, Franco E, Mohler G, Short MB, Sledge D. The challenges of modeling and forecasting the spread of COVID-19. Proc Natl Acad Sci. 2020. pmid:32616574
  27. Kiss IZ, Miller JC, Simon PL. Mathematics of Epidemics on Networks. Springer; 2017.
  28. Anderson JL. An Ensemble Adjustment Kalman Filter for Data Assimilation. Mon Wea Rev. 2001;129:2884–2903.
  29. Shaman J, Karspeck A. Forecasting seasonal outbreaks of influenza. Proceedings of the National Academy of Sciences. 2012;109:20425–20430. pmid:23184969
  30. Pei S, Kandula S, Yang W, Shaman J. Forecasting the spatial transmission of influenza in the United States. Proceedings of the National Academy of Sciences. 2018;115:2752–2757. pmid:29483256
  31. Radin JM, Wineinger NE, Topol EJ, Steinhubl SR. Harnessing wearable device data to improve state-level real-time surveillance of influenza-like illness in the USA: a population-based study. Lancet Digital Health. 2020;2:e85–e93. pmid:33334565
  32. Endo A, Leclerc QJ, Knight GM, Medley GF, Atkins KE, Funk S, et al. Implication of backward contact tracing in the presence of overdispersed transmission in COVID-19 outbreaks. Wellcome Open Research. 2021;5:239. pmid:33154980
  33. Aleta A, Martin-Corral D, y Piontti AP, Ajelli M, Litvinova M, Chinazzi M, et al. Modelling the impact of testing, contact tracing and household quarantine on second waves of COVID-19. Nature Human Behaviour. 2020;4(9):964–971. pmid:32759985
  34. Mistry D, Litvinova M, y Piontti AP, Chinazzi M, Fumanelli L, Gomes MF, et al. Inferring high-resolution human mixing patterns for disease modeling. Nature Communications. 2021;12(1):1–12. pmid:33436609
  35. Monod M, Blenkinsop A, Xi X, Hebert D, Bershan S, Tietze S, et al. Age groups that sustain resurging COVID-19 epidemics in the United States. Science. 2021;371(6536):eabe8372. pmid:33531384
  36. NYC Coronavirus Disease 2019 (COVID-19) Data; 2020.
  37. Yang W, Kandula S, Huynh M, Greene SK, Van Wye G, Li W, et al. Estimating the infection-fatality risk of SARS-CoV-2 in New York City during the spring 2020 pandemic wave: a model-based analysis. Lancet Infectious Diseases. 2021;21:203–212. pmid:33091374
  38. Simoski B, Klein MC, de Mello Araújo EF, van Halteren AT, van Woudenberg TJ, Bevelander KE, et al. Understanding the complexities of Bluetooth for representing real-life social networks. Personal and Ubiquitous Computing. 2020; p. 1–20. pmid:32837500
  39. TraceTogether; 2022.
  40. Statista. Penetration rate of smartphones in selected countries 2021; 2022.
  41. Coronavirus in New York City; 2022.
  42. Berman G, Carter K, Herranz MG, Sekara V. Digital contact tracing and surveillance during COVID-19: General and Child-specific ethical issues; 2020.
  43. Hinch R, Probert W, Nurtay A, Kendall M, Wymant C, Hall M, et al. Effective Configurations of a Digital Contact Tracing App: A report to NHSX; 2020.
  44. van der Waal MB, dos S Ribeiro C, Ma M, Haringhuizen GB, Claassen E, van de Burgwal LHM. Blockchain-facilitated sharing to advance outbreak R&D. Science. 2020;368:719–721. pmid:32409465
  45. New York City. Test and Trace Corps; 2021.
  46. Ferguson NM, Cummings DA, Cauchemez S, Fraser C, Riley S, Meeyai A, et al. Strategies for containing an emerging influenza pandemic in Southeast Asia. Nature. 2005;437:209–214. pmid:16079797
  47. Richterman A, Meyerowitz EA, Cevik M. Hospital-acquired SARS-CoV-2 infection: lessons for public health. JAMA. 2020;324(21):2155–2156. pmid:33185657
  48. Salathé M, Kazandjiev M, Lee JW, Levis P, Feldman MW, Jones JH. A high-resolution human contact network for infectious disease transmission. Proceedings of the National Academy of Sciences. 2010;107:22020–22025. pmid:21149721
  49. Newman ME. Spread of epidemic disease on networks. Phys Rev E. 2002;66:016128.
  50. Lauer SA, Grantz KH, Bi Q, Jones FK, Zheng Q, Meredith H, et al. The Incubation Period of Coronavirus Disease 2019 (COVID-19) From Publicly Reported Confirmed Cases: Estimation and Application. Annals of Internal Medicine. 2020;172:577–582. pmid:32150748
  51. Wu Z, McGoogan JM. Characteristics of and Important Lessons From the Coronavirus Disease 2019 (COVID-19) Outbreak in China: Summary of a Report of 72314 Cases From the Chinese Center for Disease Control and Prevention. JAMA. 2020;323:1239–1242. pmid:32091533
  52. Gillespie DT. Exact Stochastic Simulation of Coupled Chemical Reactions. Journal of Physical Chemistry. 1977;81:2340–2361.
  53. Ferguson NM, Cummings DAT, Fraser C, Cajka JC, Cooley PC, Burke DS. Strategies for mitigating an influenza pandemic. Nature. 2006;442:448–452. pmid:16642006
  54. Liu QH, Ajelli M, Aleta A, Merler S, Moreno Y, Vespignani A. Measurability of the epidemic reproduction number in data-driven contact networks. Proceedings of the National Academy of Sciences. 2018;115:12680–12685. pmid:30463945
  55. Sharkey KJ. Deterministic epidemiological models at the individual level. Journal of Mathematical Biology. 2008;57:311–331. pmid:18273619
  56. Gleeson JP, Melnik S, Ward JA, Porter MA, Mucha PJ. Accuracy of mean-field theory for dynamics on real-world networks. Physical Review E. 2012;85:026106.
  57. Altarelli F, Braunstein A, Dall’Asta L, Lage-Castellanos A, Zecchina R. Bayesian Inference of Epidemics on Networks via Belief Propagation. Physical Review Letters. 2014;112. pmid:24702425
  58. Houtekamer PL, Zhang F. Review of the Ensemble Kalman Filter for Atmospheric Data Assimilation. Monthly Weather Review. 2016;144:4489–4532.
  59. Anderson JL. Localization and Sampling Error Correction in Ensemble Kalman Filter Data Assimilation. Monthly Weather Review. 2012;140:2359–2371.
  60. Karrer B, Newman ME. Stochastic block models and community structure in networks. Physical Review E. 2011;83:016107.
  61. Peixoto TP. The graph-tool Python library; 2014.
  62. Duval A, Obadia T, Martinet L, Boëlle PY, Fleury E, Guillemot D, et al. Measuring dynamic social contacts in a rehabilitation hospital: effect of wards, patient and staff characteristics. Scientific Reports. 2018;8:1–11.
  63. Danon L, House TA, Read JM, Keeling MJ. Social encounter networks: collective properties and disease transmission. Journal of The Royal Society Interface. 2012;9(76):2826–2833. pmid:22718990
  64. Handcock MS, Jones JH. Likelihood-based inference for stochastic models of sexual network formation. Theoretical Population Biology. 2004;65(4):413–422. pmid:15136015
  65. Brown C, Noulas A, Mascolo C, Blondel V. A place-focused model for social networks in cities. In: 2013 International Conference on Social Computing. IEEE; 2013. p. 75–80.
  66. Sociopatterns. DATASET: High school dynamic contact networks; 2014.
  67. Mossong J, Hens N, Jit M, Beutels P, Auranen K, Mikolajczyk R, et al. Social contacts and mixing patterns relevant to the spread of infectious diseases. PLoS Medicine. 2008;5:e74. pmid:18366252
  68. Richardson S, Hirsch JS, Narasimhan M, Crawford JM, McGinn T, Davidson KW, et al. Presenting Characteristics, Comorbidities, and Outcomes Among 5700 Patients Hospitalized With COVID-19 in the New York City Area. JAMA. 2020;323:2052–2059. pmid:32320003
  69. U.S. Census. Age demographics, New York, NY; 2018.
  70. Salje H, Kiem CT, Lefrancq N, Courtejoie N, Bosetti P, Paireau J, et al. Estimating the burden of SARS-CoV-2 in France. Science. 2020;368:eabc3517. pmid:32404476
  71. Rosenberg ES, Tesoriero JM, Rosenthal EM, Chung R, Barranco MA, Styer LM, et al. Cumulative incidence and diagnosis of SARS-CoV-2 infection in New York. Annals of Epidemiology. 2020;48:23–29. pmid:32648546
  72. Sethuraman N, Jeremiah SS, Ryo A. Interpreting Diagnostic Tests for SARS-CoV-2. JAMA. 2020;323:2249–2251. pmid:32374370
  73. Wang W, Xu Y, Gao R, Lu R, Han K, Wu G, et al. Detection of SARS-CoV-2 in different types of clinical specimens. JAMA. 2020;323:1843–1844. pmid:32159775
  74. U.S. Food and Drug Administration. EUA Authorized Serology Test Performance; 2020.
  75. Contreras S, Dehning J, Loidolt M, Zierenberg J, Spitzner FP, Urrea-Quintero JH, et al. The challenges of containing SARS-CoV-2 via test-trace-and-isolate. Nature Communications. 2021;12(1):1–13. pmid:33452267
  76. Centers for Disease Control and Prevention. Contact Tracing for COVID-19; 2021.