
Emulation of epidemics via Bluetooth-based virtual safe virus spread: Experimental setup, software, and data

  • Azam Asanjarani ,

    Contributed equally to this work with: Azam Asanjarani, Aminath Shausan

    Roles Data curation, Methodology, Project administration, Writing – original draft, Writing – review & editing

    azam.asanjarani@auckland.ac.nz

    Affiliation Department of Statistics, The University of Auckland, Auckland, New Zealand

  • Aminath Shausan ,

    Contributed equally to this work with: Azam Asanjarani, Aminath Shausan

    Roles Data curation, Formal analysis, Project administration, Visualization, Writing – original draft, Writing – review & editing

    Affiliation School of Mathematics and Physics, The University of Queensland, Brisbane, Queensland, Australia

  • Keng Chew ,

    Roles Investigation, Validation

    ‡These authors also contributed equally to this work.

    Affiliation School of Chemistry and Molecular Biosciences, The University of Queensland, Brisbane, Queensland, Australia

  • Thomas Graham ,

    Roles Data curation, Software, Visualization

    ‡These authors also contributed equally to this work.

    Affiliation School of Mathematics and Physics, The University of Queensland, Brisbane, Queensland, Australia

  • Shane G. Henderson ,

    Roles Conceptualization, Methodology, Writing – review & editing

    ‡These authors also contributed equally to this work.

    Affiliation School of Operations Research and Information Engineering, Cornell University, Ithaca, New York, United States of America

  • Hermanus M. Jansen ,

    Roles Conceptualization, Methodology, Software, Writing – review & editing

    ‡These authors also contributed equally to this work.

    Affiliation Department of Engineering, University College Roosevelt, Middelburg, the Netherlands

  • Kirsty R. Short ,

    Roles Investigation, Supervision, Validation

    ‡These authors also contributed equally to this work.

    Affiliation School of Chemistry and Molecular Biosciences, The University of Queensland, Brisbane, Queensland, Australia

  • Peter G. Taylor ,

    Roles Conceptualization, Funding acquisition, Methodology, Writing – review & editing

    ‡These authors also contributed equally to this work.

    Affiliation School of Mathematics and Statistics, The University of Melbourne, Melbourne, Victoria, Australia

  • Aapeli Vuorinen ,

    Roles Conceptualization, Data curation, Methodology, Software, Writing – review & editing

    ‡These authors also contributed equally to this work.

    Affiliation Department of Industrial Engineering and Operations Research, Columbia University, New York, United States of America

  • Yuvraj Yadav ,

    Roles Visualization

    ‡These authors also contributed equally to this work.

    Affiliation Mechanical Engineering Department, Indian Institute of Technology Delhi, New Delhi, Delhi, India

  • Ilze Ziedins ,

    Roles Methodology, Project administration, Writing – review & editing

    ‡These authors also contributed equally to this work.

    Affiliation Department of Statistics, The University of Auckland, Auckland, New Zealand

  • Yoni Nazarathy

    Roles Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Software, Supervision, Visualization, Writing – review & editing

    ‡These authors also contributed equally to this work.

    Affiliation School of Mathematics and Physics, The University of Queensland, Brisbane, Queensland, Australia

Abstract

We describe an experimental setup and a currently running experiment for evaluating how physical interactions over time and between individuals affect the spread of epidemics. Our experiment involves the voluntary use of the Safe Blues Android app by participants at The University of Auckland (UoA) City Campus in New Zealand. The app spreads multiple virtual safe virus strands via Bluetooth depending on the physical proximity of the subjects. The evolution of the virtual epidemics is recorded as they spread through the population. The data is presented as a real-time (and historical) dashboard. A simulation model is applied to calibrate strand parameters. Participants’ locations are not recorded, but participants are rewarded based on the duration of participation within a geofenced area, and aggregate participation numbers serve as part of the data. The 2021 experimental data is available as an open-source anonymized dataset, and once the experiment is complete, the remaining data will be made available. This paper outlines the experimental setup, software, subject-recruitment practices, ethical considerations, and dataset description. The paper also highlights current experimental results in view of the lockdown that started in New Zealand at 23:59 on August 17, 2021. The experiment was initially planned for the New Zealand environment, which was expected to be free of COVID-19 and lockdowns after 2020. However, a lockdown prompted by the COVID-19 Delta variant upended those plans, and the experiment has been extended into 2022.

Author summary

In this paper, we describe the Safe Blues Android app experimental setup and a currently running experiment at the University of Auckland City Campus. This experiment is designed to evaluate how physical interactions over time and between individuals affect the spread of epidemics. The Safe Blues app spreads multiple virtual safe virus strands via Bluetooth based on the subjects’ unobserved social and physical proximity. The app does not record the participants’ locations, but participants are rewarded based on the duration of participation within a geofenced area, and aggregate participation numbers serve as part of the data. The 2021 experimental data is available, and once the experiment is complete, the remaining data will be made available. The experimental setup, software, subject recruitment practices, ethical considerations, and dataset description are all described in this paper. In addition, we present our current experimental results in view of the lockdown that started in New Zealand at 23:59 on August 17, 2021. The information we provide here may be useful to other teams planning similar experiments in the future.

Introduction

The COVID-19 pandemic is the most significant global event of the 21st century to date. In response to the pandemic, multiple solutions have been and are still being developed and deployed, including vaccines and contact tracing technologies. As part of this effort, various initiatives that integrate digital health and “AI systems” (artificial intelligence for pandemics) are being explored. One key initiative is measuring the spread of pathogens as well as the level of physical human contact. The Safe Blues project is one such idea, where virtual safe virus-like tokens are spread between cellular phones in an attempt to mimic biological virus spread for purposes of measurement and analysis, while respecting the privacy and safety of the population.

Much COVID-19 data is being gathered by contact-tracing apps to aid in identifying infected people or their contacts. However, there can be a time lag of 1 to 2 weeks between being infected and being diagnosed as positive with the result that data obtained in this way is always lagging and biased. Asymptomatic cases who may have already spread the virus to others are frequently missed by such methods. Data delays and bias make it difficult for public health officials and others who want to use the data to implement timely mitigation measures. Also, many contact tracing apps do not save information about the number, distance, and duration of contacts on a centralised server for scientific research. Our approach, on the other hand, is specifically designed to make inferences about characteristics of an epidemic in real-time, allowing governments to implement relevant mitigation measures in a timely fashion.

Safe Blues, introduced in [1, 2], works by spreading virtual ‘virus-like’ tokens, which we call strands. The strands can be of Susceptible-Exposed-Infectious-Removed (SEIR), Susceptible-Infectious-Removed (SIR), Susceptible-Exposed-Infectious (SEI), or Susceptible-Infectious (SI) type. Each strand is artificially seeded into the system at chosen times and can then spread between phones of users. At any given time, a phone can be infected with many strands, and the phone reports its strand infections to the server periodically. Individuals’ identities and social contacts are not recorded in this reporting, ensuring anonymity. A key aim of the Safe Blues idea is to give policymakers another tool that they can use in their effort to track the real-time spread of an epidemic. In contrast to those systems that model population contact and implement agent-based simulations, Safe Blues is an emulation of a group of epidemics based upon a contact process that takes place in the population itself.

We devised a campus-wide experiment at The University of Auckland City Campus. This is the first attempt to implement such a system. An outcome of this experiment is an open-source (virtual) epidemic spread dataset which can be used for further modeling, training, and analysis. Our initial plan was to conclude the experiment during November 2021 and release the data afterwards. However, due to an extensive lockdown in Auckland, the experiment will now run through the second half of 2022. After requesting an ethics amendment, we have already released the data from 2021. In this paper, our primary focus is on the experiment’s methods and the experience gained. We also illustrate general outcomes and results to date. The details we present may be valuable to other teams planning similar experiments in the future. Table 1 describes the phases of the experiment, their timelines, and the periods at the University of Auckland during which these phases run.

Table 1. The timeline of the experiment at The University of Auckland.

https://doi.org/10.1371/journal.pdig.0000142.t001

As an illustration of the experiment and some of the collected results, consider Fig 1, which depicts the timeline July 28 to September 9, 2021. Phones of participants were “infected” with strands on July 29, and the figure presents the trajectories of the ensuing epidemics along with the number of participants who attended the campus during that period. There are multiple Safe Blues strand trajectories: the (artificial) infection on July 29 included multiple repeats of the same type of strand as well as multiple types of strands. In fact, about 600 strands were seeded into the participating population, although not all are displayed in this figure. The black trajectory depicts the daily count of campus participants. The weekly attendance pattern, with lower attendance at weekends, can be seen clearly. The green and red trajectories represent the hourly counts of participants whose phones were in the exposed (infected but not infectious) and infectious states, respectively. As is apparent from the plot, Safe Blues infections continued until the week of August 17, at which point the campus was closed due to a (real) government lockdown. At that point, the number of participants who attended the campus immediately dropped to fewer than 5 per day. As a result, the number of new infections (exposed participants) immediately decreased, and within several weeks the number of infectious participants also decreased to zero.

Fig 1. The effect of campus closure, due to actual lockdown in New Zealand on August 17, on the virtual Safe Blues epidemic.

Red and green trajectories show the daily counts of participants whose phones were infectious (red) and exposed (green), respectively. These strands depict an SEIR type epidemic. For all strands, the initial probability of infection and the maximal infection radius were 0.1 and 50 meters, respectively. Their infection strength was set as 0.08, 0.16, or 0.48. The black trajectory shows the daily count of participants who attended the campus.

https://doi.org/10.1371/journal.pdig.0000142.g001

The Safe Blues experiment was not intended to interact with actual COVID numbers or lockdowns. In fact, we chose New Zealand as a destination because it was essentially COVID free for the second half of 2020 and the first half of 2021 and we believed that a university campus could serve as a good first testbed for Safe Blues. In making this decision, we were aware that the university campus did not directly mimic the population dynamics in all of New Zealand. For instance, during the Auckland lockdown in Phase 2, the campus was completely shut down, while in contrast, people in greater New Zealand still interacted, for example, to go shopping. We did not foresee this lockdown in planning the experiment. Nevertheless, the closure of the campus due to the actual physical lockdown served to illustrate the key point of Safe Blues: safe virtual virus strands that are measured in real-time can give an indication of how actual viruses are spreading, and with enough data, the application of machine learning techniques allows us to carry out prediction and state estimation. The Safe Blues system could thus be applied to predict the spread of viral diseases within a subgroup of the population.

Machine learning based prediction using Safe Blues data was initially developed in [1, 2] where both standard neural networks and scientific machine learning based techniques were employed. The measurements of Safe Blues data together with viral data were artificially simulated using several alternative models, and this synthetic data was used to calibrate and test the machine learning techniques. Specifically, scientific machine learning methods which include universal ODE (ordinary differential equations) estimation using techniques as described in [3] were used. Future research using data collected from the current Safe Blues experiment will be used to further fine-tune and develop machine learning techniques.

The focus of the current paper is not on machine learning, estimation, and prediction per se, but rather on the experimental setup, software, subject recruitment practices, ethical considerations, and dataset description of the experiment. We also present initial experimental results. Our goal in doing so is to showcase the methodologies and experience gained from the experiment. The source code for the project is freely and openly available at [4]. Further, the data collected in the experiment during 2021 (and used for the displays in this paper) is available via [5].

Background

We now present an overview of current practices and specific non-clinical experimental studies that share similar concepts with Safe Blues. Most COVID-19 data are gathered by public health authorities from testing, hospitalizations, and deaths. Various non-government organizations, such as the World Health Organization (WHO) [6], the Center for Systems Science and Engineering at Johns Hopkins University [7], and nCOV2019 [8], collect this data on a global scale and provide daily trend updates. However, such data are prone to lags, biases and inconsistencies, and may not reveal the true characteristics of the disease in real-time. Hence, alternative surveillance methods are needed.

Participatory syndromic-surveillance is one such approach that collects self-reported data on COVID-19 symptoms, test results, and other risk factors for COVID-19 via mobile applications or web-based surveys. Examples of such web surveys include InfluenzaNet [9], FluTracking [10], Outbreaks Near Me [11], CoronaSurveys [12], and the Global COVID Trends and Impact Survey [13]. Among the most recent mobile apps are the COVID Symptom Study [14] and Beat COVID-19 Now [15].

To identify potential COVID-19 hot spots, artificial intelligence is being used in conjunction with information obtained from informal sources, such as Google News, eyewitness reports, social media, and validated official alerts. HealthMap [16, 17], BlueDot [18], and Metabiota [19] are such tools. Early detection of outbreak regions through wastewater examination is also used in some countries [20]. Remote patient monitoring devices, such as continuous wearable sensors (e.g. smartwatches, Fitbit, Oura Ring, WHOOP strap) and smart thermometers, are also being tested as potential tools for tracking COVID-19 [21–26]. These tools measure some of the physiological indicators of an individual’s health, such as temperature, heart rate, blood oxygen level, pulse rate, sleep performance, and step counts, on a daily basis. These devices can identify deviations from an individual’s baseline level, which may indicate the possibility of an illness developing.

Contact tracing is a popular method for identifying infected but asymptomatic individuals. Under this approach, people who have a history of exposure to a positive case are identified and tested as soon as possible. Various mobile applications and web-based surveys are used for contact tracing [12, 13, 27–34]. However, many Bluetooth-based apps were abandoned after their initial release in 2020 [35, 36]. Instead, many jurisdictions have adopted QR code scanning systems to track and manage COVID-19. In general, information from Bluetooth, QR-code based, or other apps tends to become less useful as prevalence increases. One reason for this is that the contact tracing workforce and infrastructure become overloaded when prevalence is high, which makes timely contact tracing infeasible. A consequence is that there may be a decrease in public trust. This in turn results in the population reducing engagement with the associated apps. In some jurisdictions, contact tracing apps initially raised expectations, yet were largely abandoned when they did not lead to the expected containment. For example, the Check In Qld app [37] in the Australian state of Queensland was successfully applied to trace contacts of positive cases from 2020 to mid-December 2021, when the Queensland state border was closed and case numbers were single or double digits at most. However, once the border opened and daily new cases grew to three or four digit figures, the Check In Qld app was generally abandoned.

Moving on to experimental studies, we mention two major non-clinical citizen science experiments conducted prior to the pandemic for disease surveillance purposes. The first is the FluPhone experiment which took place in the United Kingdom between 2009 and 2011 [38]. In this experiment, participants reported their influenza-like illness symptoms using the FluPhone app, which also recorded the proximity of participants’ devices via Bluetooth and their location via GPS. The number of people encountered by each participant was then estimated and published on the study website [39]. The FluPhone app, like the Safe Blues app, modeled the spread of virtual SEIR type diseases, allowing participants to see real-time profiles of disease propagation in their contact network [40]. However, unlike the Safe Blues app, which is designed to simulate hundreds or thousands of strands, the FluPhone software was designed specifically to mimic the spread of SARS, flu, and the common cold. FluPhone was a unique experiment at that time, but in contrast to Safe Blues, it was designed with less of a focus on capturing physical social interactions in a privacy preserving manner, and more of a focus on mimicking real disease. The second study is “Contagion! The BBC Four Pandemic experiment”, which also took place in the UK, but this time in 2018–2019. The BBC Pandemic mobile phone app was used in the experiment to record participants’ locations and self-reported contacts. A subset of this dataset was used to simulate various non-pharmaceutical intervention (NPI) strategies, such as case isolation, tracing, contact quarantining, and social distancing, to investigate their effectiveness in limiting the spread of COVID-19 [41]. An additional related system that uses Bluetooth is Operation Outbreak, see [42] and [43], which was developed with a focus on education. This system records the full connectivity graph of participants and thus does not require “virtual safe viruses”. Importantly, Operation Outbreak includes iOS implementations in addition to Android although, due to the way in which iOS operating systems manage background apps, the iOS apps often need to be “woken up” using user interaction.

Since the middle of 2020, many countries have been investigating the risk factors involved in opening their society through mass-gathering experiments. Two well-known examples are the RESTART-19 [44] experiment, which took place in Germany in August 2020, and a study which took place in Spain in December 2020 [45]. Both assessed the risk of COVID transmission during an indoor live concert, using a variety of seating, standing, and hygiene measures, as well as maintaining optimal air ventilation inside the venue. In both studies, contact tracing devices were used to measure contacts made during the event, and PCR tests were performed a few days later. The RESTART-19 study showed that when moderate physical distancing was applied in conjunction with mask-wearing and the conditions for good ventilation were met, indoor mass-gathering events could be held safely. Also, the trial in Spain demonstrated that with comprehensive safety measures, such as face masks and adequate ventilation, indoor mass events could be held without the need for physical distancing.

Some experiments have included a series of mass-gathering events with a variety of indoor and outdoor settings, seated and standing audience styles, structured and unstructured audience styles, and participant numbers. Two such examples are the Fieldlab Events [46] which took place in the Netherlands in February and March 2021, and the Events Research Program [47], which took place in the United Kingdom from April to July 2021. In both experiments, comprehensive public health measures, such as face mask use, hand sanitizing, social distancing, and adequate ventilation at indoor events were observed. Following the events, contact tracing and PCR testing were carried out. According to the Fieldlab Events, outdoor events with 50–75% of the normal visitor capacity could be held provided that strict non-pharmaceutical intervention measures are followed. A robust result from the Events Research Program is yet to be published.

Other relevant experiments include a health workers protest [48] in South Korea in August 2020 and a martial arts competition [49] in the UAE in July 2020. During both events, participants were required to wear face masks, practice hand hygiene, and maintain physical distance. COVID-19 symptoms were self-reported by protesters in South Korea after the rally. All PCR tests performed on a subset of rally participants returned negative results. PCR tests were conducted twice weekly during the UAE event, and none of the contestants had positive results, indicating that mass-gathering events with restrictive measures could be held safely.

Materials and methods

We now describe the experimental setup, ethics, software, participant management, and data collection aspects of the experiment as well as supporting tools such as a simulation model.

Experimental setup and the Safe Blues system

As stated in the introduction, the overarching purpose of the campus experiment is to test the performance of the Safe Blues system. In doing so, we are interested in assessing the ability to use virtual safe virus-like tokens to predict the spread of pathogens. However, an experiment involving actual biological pathogens, or relying on the actual spread of disease is infeasible and hence our experiment uses measurements from the digital domain. The key question is then to test if the spread of some Safe Blues strands can be detected and predicted by measuring the spread of other strands.

In its most basic form, our purpose is to treat a single strand as a red strand which is assumed not to be measurable in real time. Further, we treat all other Safe Blues strands as real-time measurable virtual viruses, namely blue strands. The statistical goal is then to benchmark predictions of the future evolution of the red strand based on either,

  (I). only past measurements of the red strand (a proxy for estimation in the absence of blue strands), or
  (II). the combination of past measurements of the red strand, past measurements of blue strands, and current measurements of blue strands.

For example, Fig 2, which also appeared in [1], presents a simulation run where blue strands are measured in real time, but the red strand is only measurable with a two-week delay. Here the Safe Blues machine learning framework was used to predict the current unobserved state of the red strand. Similarly, it can be used for near future predictions. However, this figure is taken from Monte Carlo simulations of physical contact processes, and not from actual experimental measurements. The Safe Blues experiment attempts to improve upon this by using the actual physical mobility of individuals. The experiment aims to test whether (II) can yield much better predictions than (I).

Fig 2. Estimation via simulated epidemics from Model III of [1].

At day 115, we only have red strand information up to day 100. Nevertheless, current blue strand measurements allow us to estimate the current state of the epidemic during days 101–115.

https://doi.org/10.1371/journal.pdig.0000142.g002

An additional salient feature of Safe Blues is the interaction with social distancing measures. For this, we would ideally like to ask participants to group together or stay apart similarly to the way that government social distancing measures work. However, this is clearly not feasible with real-life participants and hence the experiment creates virtual social distancing to mimic social-distancing measures. The details of how this is done are described below in the subsection Virtual social distancing.

As a first attempt at such an experiment, we chose the University of Auckland City Campus because the campus was open to students and staff during 2021 (up until the unexpected lockdown of August 17, 2021). The experiment consists of 5 phases. Table 1 provides the timeline of the experiment, including the time period of the year, the study period, and a brief description of each phase. The target population of the experiment is the student body, but participation is open to any regular attendee or visitor of the UoA City Campus who is at least 16 years of age and uses an Android mobile phone. All participation is voluntary, and at any time, participants can opt out of the experiment and uninstall the Safe Blues app. By default, participants are invited to join prize draws, which we carefully designed to maximize participation (see details in the subsection Participant management and ethical considerations below). However, participants are allowed to take part in the experiment without joining the prize draws.

The Safe Blues system consists of four components: (1) the Safe Blues app, (2) the campus simulation dashboard, (3) the campus experiment leader dashboard, and (4) the Safe Blues data dashboard. These components are available online at [5, 50], [51], and [52] respectively. Fig 3 displays a snapshot of these four components.

Fig 3. The Safe Blues system: The Safe Blues app (top left), the simulation dashboard (top right), the campus experiment (campus hours leader board on bottom left), and the data dashboard (bottom right).

https://doi.org/10.1371/journal.pdig.0000142.g003

Strand and device management

Participants run the Safe Blues app [50] (see also Fig 3 (top left) for an illustration of the app) as they go about their normal day to day activities on campus while enabling Bluetooth and location services. Location services are only needed for prize based rewards as described below. A participant’s app then communicates with the apps of other participants via Bluetooth to pass on digital ‘virus-like’ tokens, namely Safe Blues ‘strands’. This simulates an epidemic spreading through the community. There are many types of strands of such virtual safe epidemics, and the live emulation of all of the epidemics happens in parallel, driven by the actual physical contact processes of participants.

The app is not malicious and does not interact with any other app that users may be running on their phone. Open-source code is available on GitHub via the Safe Blues website [53]. Nevertheless, the app, like any other mobile app, consumes the phone’s battery. It is the participants’ responsibility to manage their phone battery usage, and our experience has shown that some participants turn off the app while away from the campus. The app is only available for Android because iOS phones cannot run such an app in the background. This clearly limits the participating population. We discuss the technical details of the app software in S2 Appendix.

With the exception of participant reward information, described in the subsection Participant management and ethical considerations below, information recorded by the Safe Blues system is limited to the aggregated counts of each strand. Every 15 minutes, a phone uploads the status of its infections in terms of ‘exposed’, ‘infectious’, and ‘recovered’ for each strand. The total number of ‘susceptibles’ is then inferred based on the total number of phones participating at any given time. This uploading occurs via a temporary anonymous 256-bit ID, which changes every 24 hours on the phone. Thus the Safe Blues server does not keep track of the individual infections of phones, and it cannot uniquely identify a phone beyond a 24-hour period. The temporary ID is still useful for correct counting of infections on the server side, since messages are sometimes lost or not sent if the phone is without connectivity (see S4 Appendix, where we describe the algorithm for interpolation and imputation of counts to handle this). The individual phone strand information is never cross-referenced with private participant information, further preserving the anonymity of participants.
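As a concrete illustration of the reporting scheme just described, the following minimal Python sketch rotates a 256-bit temporary ID once per day and assembles a 15-minute status payload. All names and payload fields here are illustrative assumptions, not the actual Safe Blues wire format.

```python
# Illustrative sketch only: rotating anonymous ID plus periodic status report.
import os
import time
from datetime import date

_current_id = None
_id_day = None

def temporary_id() -> str:
    """Return a 256-bit anonymous hex ID, regenerated once per calendar day."""
    global _current_id, _id_day
    today = date.today()
    if _id_day != today:
        _current_id = os.urandom(32).hex()  # 32 bytes = 256 bits
        _id_day = today
    return _current_id

def build_report(strand_states: dict) -> dict:
    """Bundle per-strand states ('exposed'/'infectious'/'recovered') into the
    15-minute upload payload. No identity or location data is included."""
    return {
        "temp_id": temporary_id(),
        "timestamp": int(time.time()),
        "strands": strand_states,  # e.g. {101: "infectious", 102: "recovered"}
    }
```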

The injection of new infections into the participating phone population is carried out via an API (Application Program Interface) available to the phones. In each phase of the experiment we inject multiple strands with each batch containing a collection of individual strands. For example, in phase 1, where we focused on testing and tweaking the system, there were 7 batches in total labeled 1.01 to 1.07. Similarly, in phase 3 there were 22 batches in total, labeled 3.01 to 3.22. We discuss the number of strands in each batch, their parameters, and their purposes in the experiment in S5 Appendix. When a batch of strands is ‘injected’, all participating phones become aware of the strands of the batch and each strand has a pre-specified seeding probability which is typically set to 0.05, 0.1, or 0.2. Then at a specified start time, each phone is independently infected by the new strand in accordance with the seeding probability. This ‘seeding’ of new strands thus emulates the arrival of new outbreaks of the epidemic into the population.
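Seeding thus amounts to one independent Bernoulli trial per participating phone at the strand’s start time. A minimal sketch, with hypothetical phone IDs and the typical seeding probability of 0.1:

```python
# Illustrative sketch of strand seeding: each phone is infected independently
# with the strand's pre-specified seeding probability.
import random

def seed_strand(phone_ids: list, seeding_probability: float) -> set:
    """Return the set of phones initially infected with a new strand."""
    return {pid for pid in phone_ids if random.random() < seeding_probability}

phones = [f"phone-{i}" for i in range(200)]
initially_infected = seed_strand(phones, seeding_probability=0.1)
print(len(initially_infected), "phones seeded")  # about 20 in expectation
```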

We applied four types of epidemic models for the phone population: SEIR, SIR, SEI, and SI. In both the SEIR and SEI models, when a susceptible phone receives a strand, it first becomes exposed and remains in this state for a random time period known as the incubation period. During the incubation period, the phone is unable to infect other phones. After the incubation period, the phone becomes infectious and remains in this state for a random time period known as the infection period. During this time, the phone has the ability to infect nearby phones by exchanging a Bluetooth token. The distribution of the incubation and infection periods is explained further below. The infection period in the SEIR type epidemic is finite, and the phone stops infecting other phones at the end of this period. Consequently, its state is labeled as recovered. The infection period in the SEI epidemic is infinite, and the phone cannot recover. There is no incubation period in the SIR and SI epidemics, and a susceptible phone becomes infectious immediately after receiving the strand. The SIR epidemic has a finite infection period, and the phone recovers at the end of it. The infection period of the SI epidemic is infinite, and the phone remains in the infectious state throughout the epidemic.
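The four variants therefore differ only in whether the incubation period is zero and whether the infection period is finite. The sketch below encodes this as a per-phone state machine; the configuration values and names are illustrative assumptions, not deployed strand parameters.

```python
# Illustrative per-phone state machine for the four epidemic variants.
import math
from dataclasses import dataclass

@dataclass
class StrandConfig:
    incubation_mean_h: float  # 0 for SIR/SI (no exposed state)
    infection_mean_h: float   # math.inf for SEI/SI (no recovery)

MODELS = {
    "SEIR": StrandConfig(incubation_mean_h=24.0, infection_mean_h=168.0),
    "SIR":  StrandConfig(incubation_mean_h=0.0,  infection_mean_h=168.0),
    "SEI":  StrandConfig(incubation_mean_h=24.0, infection_mean_h=math.inf),
    "SI":   StrandConfig(incubation_mean_h=0.0,  infection_mean_h=math.inf),
}

def next_state(state: str, cfg: StrandConfig) -> str:
    """Transition on infection (from susceptible) or when a period elapses."""
    if state == "susceptible":
        return "exposed" if cfg.incubation_mean_h > 0 else "infectious"
    if state == "exposed":
        return "infectious"
    if state == "infectious" and math.isfinite(cfg.infection_mean_h):
        return "recovered"
    return state  # SEI/SI phones never recover
```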

During phases 1–3, and including the intervening period between phase 3 and phase 4, we injected 4155 strands in total into the system. Of these, 28% were of the SI or SEI type (not involving removal/recovery) and the remaining 72% were of the SIR or SEIR type (allowing recovery). Fig 4 depicts the cumulative counts of strands released over time during phases 1–3.

Fig 4. The cumulative number of strands over time, broken up into SI, SEI, SIR, and SEIR types and phases of the experiment.

https://doi.org/10.1371/journal.pdig.0000142.g004

Beyond the classification of SI, SEI, SIR, and SEIR, each strand is uniquely identified by a strand_id, which has specific parameters that influence its spread. A full specification of the protocol for these parameters is in Appendix A1 of [2]. However, the protocol there does not deal with specific distributional information and the infection probability mechanism. Hence we now outline these details.

We use gamma distributions for both the incubation and infection times and parameterize them by a mean μ and a shape parameter κ. That is, for each x ∈ (0, ∞), the probability density function is

$$f(x) = \frac{(\kappa/\mu)^{\kappa}}{\Gamma(\kappa)}\, x^{\kappa - 1}\, e^{-\kappa x/\mu}, \tag{1}$$

where Γ(⋅) is the gamma function. In this case, the ratio between the variance and the square of the mean is 1/κ. The other important strand information deals with the probability of infection of nearby phones. Whenever two participating phones are near each other, Bluetooth messages are exchanged during a session, and throughout this session, distance measurements are carried out. In principle, using individual Bluetooth messages may appear preferable to creating sessions. However, the nature of the Bluetooth protocol and the underlying software implies that sessions are the preferable technique; see S2 Appendix.
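Standard libraries parameterize the gamma distribution by shape and scale, so a mean μ with shape κ corresponds to scale μ/κ. A minimal sampling sketch; the values mirror the simulation defaults quoted later (μE = 24 hours, μI = 168 hours, κ = 5) and are illustrative rather than a specification of any deployed strand:

```python
# Illustrative sampling of incubation/infection periods from Eq (1).
import random

def gamma_period_hours(mu: float, kappa: float) -> float:
    """Sample a duration with mean mu and variance mu**2 / kappa."""
    return random.gammavariate(kappa, mu / kappa)  # (shape, scale)

incubation_h = gamma_period_hours(mu=24.0, kappa=5.0)   # mean of one day
infection_h = gamma_period_hours(mu=168.0, kappa=5.0)   # mean of one week
```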

At the end of the session (capped at 30 minutes), the median distance in meters over all messages is computed and denoted by d. The duration of the session in seconds is denoted by t. We expect that infection is more likely at closer distances and longer durations. We chose the probability of infection, parameterized by the strand’s strength σ (with units of inverse seconds) and its maximal infection distance in meters ρ, to be

$$p(d, t) = \begin{cases} 1 - e^{-\sigma t (1 - d/\rho)}, & d < \rho,\\ 0, & d \ge \rho. \end{cases} \tag{2}$$

In general, we expect strands with higher ρ or higher σ to be more infectious. The parameter ρ defines a radius outside of which infection is not possible, thereby providing an operational definition of the standard epidemiological notion of “contact.” The parameter σ can be related to standard models of the contact process. For example, a model described in [54, p. 268] postulates a per-unit-time probability β of infection. Thus, the probability of not being infected over an integer-length interval of length t is $(1 - \beta)^t$, which is consistent with (2) if we equate $e^{-\beta} \approx 1 - \beta$ with $e^{-\sigma(1 - d/\rho)}$. Thus we can interpret σ(1 − d/ρ) as the rate of transmission per unit time, and σ as a parameter controlling the rate of transmission after accounting for the distance between two individuals.
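The following sketch evaluates the session-level infection probability of Eq (2) and draws the corresponding Bernoulli outcome. Function names and the example session values are illustrative assumptions:

```python
# Illustrative implementation of the infection probability in Eq (2).
import math
import random

def infection_probability(d: float, t: float, sigma: float, rho: float) -> float:
    """P(infection | session) = 1 - exp(-sigma * t * (1 - d/rho)) for d < rho;
    d: median session distance (m), t: session duration (s),
    sigma: strand strength (1/s), rho: maximal infection distance (m)."""
    if d >= rho:
        return 0.0  # no infection possible beyond the maximal distance
    return 1.0 - math.exp(-sigma * t * (1.0 - d / rho))

def session_transmits(d: float, t: float, sigma: float, rho: float) -> bool:
    return random.random() < infection_probability(d, t, sigma, rho)

# Example: a 10-minute session at a median distance of 5 m.
p = infection_probability(d=5.0, t=600.0, sigma=0.001, rho=20.0)
```

Note that for strength values of the order deployed in the experiment (e.g. σ = 0.16), the exponent is large for any non-trivial session (σt(1 − d/ρ) = 72 for the example session above), so transmission within the radius ρ is nearly certain; this is consistent with the observation reported in the Results and discussion section that σ had little visible effect.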

In setting the strand parameters σ and ρ, we initially used a simulation model (see the subsection A campus simulation model for details). Subsequently, we adjusted the parameters based on field experience (see the Results and discussion section for details).

Participant management and ethical considerations

Our goal in participant management is to motivate participants to run the Safe Blues app while on campus. A first decision was whether to couple participation data with strand data (number of virtual infections). We chose not to do so. While such coupled data could be useful, our primary goal is the strand-count time series for which coupling is not needed.

A second decision was whether to pay participants a ‘flat rate’ for participation, for example with coffee vouchers proportional to their participation hours or to use prizes. As the total budget was limited, and in accordance with other experimental research [55, 56], we opted for prizes. As this is a digital experiment we chose iPad, Android phone, and Fitbit prizes, with 9 prizes per prize draw. See S1 Appendix for details of the prize draw rules.

Participants were recruited directly via online flyers, posters and videos. To take part in the experiment, participants first needed to install the Safe Blues Android app on their mobile phones. This gave them a random 10 digit ID which identifies them only for purposes of experiment participation and prizes but is not associated with their strand infections. With this ID, participants can then register their email address which is used for communicating experiment messages and prize winners.

As participants enter the city campus, an Android geofencing mechanism spawns an event on the app, and when they leave the campus (leave the geofence), an additional event is spawned. The participation hours are recorded on a server and contribute to the chance of winning a prize. In general, the more hours a participant runs the app on campus, the higher the chance of winning a prize (see S1 Appendix). The app does not track the location of participants, with the exception of indicating whether or not the participant is within the campus geofence area. Fig 5 (left) displays a snapshot map of the UoA City Campus with the geofenced area marked on it.
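Participation hours can thus be derived purely from paired enter/leave geofence events. Below is a minimal sketch of this bookkeeping, applying the 10-hour daily cap described in S1 Appendix; the event format is an illustrative assumption:

```python
# Illustrative computation of capped daily campus hours from geofence events.
from collections import defaultdict
from datetime import datetime

DAILY_CAP_HOURS = 10.0  # per the prize draw rules in S1 Appendix

def daily_campus_hours(events: list) -> dict:
    """events: chronological ('enter' | 'leave', datetime) pairs for one phone.
    Sessions spanning midnight are credited to the leave day, for simplicity."""
    hours = defaultdict(float)
    entered_at = None
    for kind, ts in events:
        if kind == "enter":
            entered_at = ts
        elif kind == "leave" and entered_at is not None:
            hours[ts.date()] += (ts - entered_at).total_seconds() / 3600.0
            entered_at = None
    return {day: min(h, DAILY_CAP_HOURS) for day, h in hours.items()}
```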

Fig 5.

The University of Auckland City campus with the geofenced area supporting the experiment marked as a circle (left). A heatmap representation of buildings used in the simulation prior to the experiment (right).

https://doi.org/10.1371/journal.pdig.0000142.g005

As an additional side-benefit of the experiment, we provide the aggregated visits to campus and duration statistics as part of the dataset. The left side plot in Fig 6 depicts the daily number of participants who were registered, reporting, and attending the campus in the experiment during phases 1–3. The right side plot in this figure displays the distribution of the means for the daily campus hours collected by participants, over weekdays and weekends, during phases 1–3. As per the prize draw rules (S1 Appendix) the maximum daily campus hours that each participant can collect is capped at 10.

Fig 6.

Evolution of the daily number of participants who were on campus (Attending), reporting, and the cumulative number of participants who were registered in the experiment, during phases 1 to 3 (left). Box plot of the means from the daily five-number summaries of campus hours (right).

https://doi.org/10.1371/journal.pdig.0000142.g006

By the end of phase 2, about 20% of the registered participants were not running the app. In an attempt to enroll more participants in the experiment, we upgraded the reward scheme during phase 2. This included an ‘invite-a-friend’ option, which increased a participant’s chance to win a prize if new participants joined the experiment through their invitation. Those who joined the experiment through the invite-a-friend mechanism were also rewarded with bonus eligible hours. See S1 Appendix for further details about rules and the invite-a-friend mechanism. Joining the prize draws was not compulsory for those taking part in the experiment. We realize that the ‘invite-a-friend’ mechanism has the potential to introduce additional bias into the system, since social contacts of already-existing participants are more likely to join the experiment in comparison to general members of the community. However, since our general recruitment is on a voluntary basis, we believe that this additional bias is negligible in comparison to the existing biases occurring in our study, and any study where participants self-select to join.

Although we were not conducting any clinical or health research involving human health data, we required approval from the University of Auckland Human Participants Ethics Committee (UAHPEC) before doing any form of research involving university volunteers. The study was approved by UAHPEC under ethics number 22143 in March 2021. In this application, we addressed ethics considerations, including naming all researchers, a description of the study, the location of the study, methodology, participants and the recruitment process, data management, funding, Māori-focused consultation and engagement, and consistency with the principles of Te Tiriti o Waitangi.

We also provided the ethics committee with a copy of all the Safe Blues website pages, a permission letter from course directors (for big statistics courses where the project was advertised) and head of the Department of Statistics, participant information sheet, poster, the consent form, the data management plan, and the observation schedule.

Data management

The experiment is managed through two distinct databases. A Participant Management System (PMS) is used to store the email addresses and consent agreements, as well as a record of the campus hours. The PMS is hosted at the UoA, and data are used only for the purposes of managing the prize draws and the list of participants. Data in the PMS is completely disconnected from the experiment data and will not be publicized in any way, with the exception of analysis of aggregated participation counts over time (for example, see Fig 6).

The second database, called the Anonymous Data Server (ADS), is managed in the cloud and contains an aggregate, anonymized, time-stamped record of the number of phones with each strand. For each strand, we record this data on an hourly and daily basis, and indicate the aggregate number of phones in each epidemiological state (susceptible, exposed, infectious, recovered) over time. As this database follows the Safe Blues protocol, phone (app) identities are not revealed during communication, and phones (apps) only have temporary IDs that are replaced on a daily basis. We spell out the technical details of how we record data in the PMS and ADS in S3 Appendix.

We record aggregated participation counts from the PMS as daily and hourly measurements, along with the daily and hourly means and five-number summaries (the minimum, first quartile, median, third quartile, and maximum) of campus hours in a CSV file. On days where there are fewer than 5 participants, these numbers are omitted for privacy reasons. Similarly, the aggregate Safe Blues data from the ADS are also stored in several CSV files, one for each strand. See S6 Appendix for specific details of the Safe Blues data repository. The Safe Blues data will be made publicly available in the Safe Blues data repository after the experiment concludes. Plots of the data are currently available as a web-based dashboard at [52]. See Fig 3, bottom right, for a snapshot of the dashboard.
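A minimal sketch of the published daily summary, including the privacy suppression rule for days with fewer than 5 participants; the column names are illustrative, not the repository’s actual CSV schema:

```python
# Illustrative daily participation summary with privacy suppression.
import statistics

MIN_PARTICIPANTS = 5

def summarize_day(hours: list):
    """Return the published summary row, or None if the day is suppressed."""
    if len(hours) < MIN_PARTICIPANTS:
        return None  # omitted for privacy reasons
    q1, median, q3 = statistics.quantiles(hours, n=4)
    return {
        "count": len(hours),
        "mean": statistics.mean(hours),
        "min": min(hours), "q1": q1, "median": median, "q3": q3,
        "max": max(hours),
    }
```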

By agreeing to take part in the experiment, a participant agrees to share the Safe Blues data of their Safe Blues app. At any point in time, a participant may choose to withdraw from the experiment and this will result in deletion of their personal information from the PMS. However, their aggregated anonymized data already recorded on the ADS will remain in the database and will potentially contribute to the scientific findings of the experiment.

A campus simulation model

We used a simple simulation model to approximately capture the expected behavior of the participants, and to aid with our initial choice of strand parameters to be used in the experiment. This discrete-time stochastic spatial compartmental SEIR model was used as an initial guide for ranges of the maximal infection distance and infection strength parameters in Eq (2).

A Safe Blues strand is characterized by its seeding probability, π, infection strength, σ, maximal infection distance, ρ, incubation time distribution, and infection time distribution. In the simulation, the initial infections were determined by Bernoulli random variables, with each simulated participant independently having a chance π of becoming infected when the strand was activated. The remaining participants could only become infected by being a ‘close contact’ of an already infectious person. At each time step, the positions of participants were independently drawn from a heat map designed to resemble likely locations attended by participants in the real-world experiment; see Fig 5 (right). The time step considered here was 1 hour, which corresponds to the duration of a lecture. Each susceptible individual who was d meters away from an infectious individual for a duration t was infected with the probability given by Eq (2).

After a strand was successfully transmitted to a susceptible individual, they became exposed and remained in this compartment for a random amount of time (the incubation time) drawn from a gamma distribution with mean μE and shape κE as in Eq (1). Subsequently, once their exposure time elapsed, they became infectious and were able to infect further individuals with this strand. The duration of their infection (the infection time) was again gamma distributed, with mean μI and shape κI. A strand without incubation (SI or SIR) can be described by taking μE → 0, and a strand without recovery (SI or SEI) can be described by taking μI → ∞. Further details are in the code repository within the Safe Blues GitHub repository [4].
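A compact, self-contained sketch of this discrete-time spatial SEIR simulation follows. The heat map is reduced to a few weighted building centers, and all parameter values are illustrative; the actual model is in the Safe Blues GitHub repository [4].

```python
# Illustrative discrete-time spatial SEIR simulation (hourly steps).
import math
import random

BUILDINGS = [((0.0, 0.0), 0.5), ((80.0, 40.0), 0.3), ((30.0, 90.0), 0.2)]
STEP_SECONDS = 3600.0  # one-hour step, matching a lecture slot

def sample_position():
    """Draw a position near a building center, weighted by the heat map."""
    centers = [c for c, _ in BUILDINGS]
    weights = [w for _, w in BUILDINGS]
    cx, cy = random.choices(centers, weights=weights)[0]
    return cx + random.gauss(0.0, 15.0), cy + random.gauss(0.0, 15.0)

def simulate(n=200, pi=0.1, sigma=0.02, rho=10.0,
             mu_e=24.0, kap_e=5.0, mu_i=168.0, kap_i=5.0, hours=24 * 28):
    # Seeding: each participant starts exposed independently with chance pi.
    state = ["E" if random.random() < pi else "S" for _ in range(n)]
    clock = [random.gammavariate(kap_e, mu_e / kap_e) if s == "E" else 0.0
             for s in state]
    history = []
    for _ in range(hours):
        pos = [sample_position() for _ in range(n)]
        infectious = [i for i, s in enumerate(state) if s == "I"]
        for j in range(n):
            if state[j] != "S":
                continue
            for i in infectious:
                d = math.dist(pos[i], pos[j])
                p = 0.0 if d >= rho else 1.0 - math.exp(
                    -sigma * STEP_SECONDS * (1.0 - d / rho))  # Eq (2)
                if random.random() < p:
                    state[j] = "E"
                    clock[j] = random.gammavariate(kap_e, mu_e / kap_e)
                    break
        for i in range(n):  # advance E -> I and I -> R clocks by one hour
            if state[i] in ("E", "I"):
                clock[i] -= 1.0
                if clock[i] <= 0.0:
                    if state[i] == "E":
                        state[i] = "I"
                        clock[i] = random.gammavariate(kap_i, mu_i / kap_i)
                    else:
                        state[i] = "R"
        history.append({s: state.count(s) for s in "SEIR"})
    return history
```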

We developed a web-based interactive simulation dashboard; see Fig 3 (top right) and [51]. The interactive dashboard has a variety of control features that enable a user to set the model and its parameters. We used the simulation to identify ranges for a strand’s infection strength σ and maximal infection distance ρ that ensure nontrivial probability (neither near 0 nor near 1) of an epidemic amongst our users. We explored values for σ in the range [0, 0.1] and ρ in the range [0, 20], and fixed the remaining parameters at specific values. We chose π = 0.1, μE = 24 hours (or a single day), κE = 5, μI = 168 hours (or a single week), κI = 5.

The simulation indicated that with a population of 50 or more participants, some level of Safe Blues spread is possible. Further, we simulated epidemics on population sizes 100, 200, and 500. Based on 1000 simulation runs of the model, we observed that:

  1. there existed a minimum value for ρ for sustained transmission, which decreased as population size increased, and
  2. there existed a region of transitional parameters, which narrowed as population size increased.

The simulation results allowed us to conclude that, for a population of size 100, values of σ ∈ [0, 0.05] and ρ ∈ [10, 20] gave nontrivial probabilities of an epidemic among Safe Blues users. Likewise, for a population of size 200, reasonable choices are σ ∈ [0, 0.05] and ρ ∈ [5, 15], and for a population of size 500, reasonable choices are σ ∈ [0, 0.04] and ρ ∈ [2, 12].

These observations then guided our initial strand parameter choices.

Virtual social distancing

Apart from using Safe Blues as a tool for collecting data on virtual epidemic spread, we also tested it as a means to explore how ‘virtual social distancing’ affects these epidemics. Our goal in implementing ‘virtual social distancing’ was to provide a rich dataset which could be used by researchers or public health bodies to explore future intervention strategies through social distancing.

In order to implement virtual social distancing, we scaled the measured (observed) distance, d, in Eq (2) by a ‘social distancing factor’. Thus, if the measured distance was 4 m and the social distancing factor was 1.5, then the distance used in the infection computation was 4 × 1.5 = 6 m. See the Results and discussion section below for results of this testing mechanism.
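The mechanism reduces to one line applied before Eq (2) is evaluated, as in this illustrative sketch:

```python
# Illustrative virtual social distancing: scale the observed distance before
# applying the infection probability of Eq (2).
import math

SD_FACTORS = {"none": 1.0, "low": 1.25, "medium": 1.5, "high": 3.0}

def distanced_probability(d, t, sigma, rho, factor):
    d_eff = d * factor  # e.g. 4 m observed * 1.5 (medium) -> 6 m effective
    if d_eff >= rho:
        return 0.0
    return 1.0 - math.exp(-sigma * t * (1.0 - d_eff / rho))
```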

Results and discussion

Calibration of the maximal infection distance parameter

At the start of the experiment, we used the parameter ranges determined from the campus simulation study as an initial guide for choosing the strand parameters σ and ρ in (2). Our initial purpose was to find dynamic ranges of both the infection strength, σ, and the maximal infection distance, ρ, that affect the spread of strands. Initial results from the campus experiment immediately confirmed the effect of the maximal infection distance, while the effect of infection strength was not apparent in the range of values that we used. We continue to investigate why the infection strength did not influence the observed strand behaviors, but we believe that this happens because of differences between our simulation and reality, and how those differences interact with our selected parameter ranges. For example, if the distances d between individuals are smaller and the session lengths t are larger during potential transmission events in reality than in the simulation, then the exponent in (2) could be much more negative in reality than in the simulation, irrespective of our selected values of σ. Then potential transmissions where two individuals are within the distance threshold ρ will almost always yield transmission, making the effect of σ moot. We further explored the range for the maximal infection distance parameter using the strands released in batches 1.05 and 1.06, during the phase 1 trials. In particular, we experimented with the maximal infection distance parameter within the range [7.5, 500] while choosing the strength parameter within the range [0.1, 0.24].

In batch 1.05, we released 30 SI type strands with the strength parameter fixed at 0.16 for all strands and the maximal infection distance parameter chosen from the set {7.5, 15, 30, 60, 120, 500} (in meters). We observed that, in general, there were 3 ranges of the distance parameter that produced distinct epidemics. Specifically, epidemics became established when the maximal infection distance parameter was greater than 30. Increasing the maximal distance parameter beyond 120 did not necessarily produce more severe epidemics. Further, the epidemics did not propagate for most of the strands when the maximal distance parameter was less than 30. With this observation in mind, we fine-tuned the search grid for the maximal distance parameter using the strands released in batch 1.06. We released 90 SI type strands in that batch, with the maximal distance parameter chosen from the set {20, 26, 34, 44, 57, 74, 97, 125, 163, 212} and the strength parameter chosen from the set {0.1, 0.16, 0.24}.

The plots in Fig 7 depict examples of the effect of varying the maximal infection distance parameter on the propagation of epidemics. The left side plot shows the daily infection trends of strands in batch 1.06 over 5 days categorized by 3 maximal infection distance ranges. The right side plot displays the difference in infection count between the 1st day and 5th day for the strands in both batches as the distance parameter varies. In both these plots we ignored the effect of infection strength since its confounding effect was negligible.

Fig 7.

Infection trends over 5 days for strands in batch 1.06, categorized by 3 maximal infection distance ranges (left). Effect of varying the maximal infection distance on the difference in infections (day 5 minus day 1) for strands in batches 1.05 and 1.06 (right). The red curve on the right side plot is a fitted sigmoid function. In both plots, all epidemics are SI type and each strand’s initial seeding probability is set to 0.1. The infection strength of strands in batch 1.05 was 0.16, and that for strands in batch 1.06 was set as 0.1, 0.16, or 0.24.

https://doi.org/10.1371/journal.pdig.0000142.g007

We fitted a two-parameter scaled and shifted sigmoidal curve to the data, plotted in red. The curve clearly indicates an upward infection effect of the maximal infection distance, with saturation at distances above 80 meters and below 30 meters. This is not unexpected based on our design of the infection formula (2), yet we initially found the magnitude of the distances puzzling. One may expect Bluetooth transmission to be effective at significantly shorter distances. In that regard, we believe the observed distance, d in (2), may be skewed in our app measurements, which are based on averaging RSSI Bluetooth signal strengths. Such bias between the actual distance of devices and the observed distance, d, may be further investigated via direct phone-to-phone measurements of the app. We have yet to carry out such measurements to completion, but initial tests indicated a mismatch of the order of 30 meters, meaning that phones that are x meters apart perceive a distance of the order of d = x + 30. In general, these distances agree with Bluetooth communication range, yet the actual effective communication range varies based on building topology. Note that since we do not know the locations of participants within the campus, we have no direct indication of whether spread occurs more within buildings or in open air environments.
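To see why RSSI-derived distances can be so far off, consider the common log-distance path-loss model (an assumption used here for illustration; it is not taken from the Safe Blues source):

```python
# Illustrative RSSI-to-distance conversion via a log-distance path-loss model.
def estimate_distance_m(rssi_dbm: float, tx_power_dbm: float = -59.0,
                        path_loss_exponent: float = 2.0) -> float:
    """Invert RSSI = tx_power - 10 * n * log10(d); tx_power and n are
    device- and environment-dependent calibration guesses."""
    return 10.0 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exponent))
```

Because the measured RSSI enters through an exponent, small calibration errors in the transmit power or the path-loss exponent translate into large multiplicative errors in the estimated distance, consistent with offsets of the order of 30 meters.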

Herd immunity

Herd immunity occurs when a significant proportion of a population becomes immune to an infectious disease, through either vaccination or previous infection, making the disease unlikely to spread within the population. The ‘herd immunity threshold’ is the minimum proportion of the population that must be immune in order to achieve herd immunity. In the simplest SIR model, this quantity can be calculated as 1 − 1/R0, where R0 is the basic reproduction number of the disease [57]. The basic reproduction number is the expected number of secondary infections caused by a single infectious person in an otherwise susceptible population [58, 59]. The basic reproduction number for the Delta variant of COVID-19 is estimated as 5.1 [60], and thus the herd immunity threshold for the Delta variant is approximately 80%.
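For the figures quoted above, the threshold computation is simply

$$1 - \frac{1}{R_0} = 1 - \frac{1}{5.1} \approx 0.804,$$

that is, roughly 80% of the population must be immune.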

In our experiment, we observed the herd immunity phenomenon for some of the strands’ epidemics. For example, Fig 8 displays the epidemics of two SEIR type Safe Blues strands with different mean infection periods. Although it is difficult to draw firm conclusions from the two sample paths shown in the two plots, these plots suggest that about 80% of the participating population recovered and about 20% remained susceptible after the disease died off.

Fig 8.

Two epidemic trajectories of an SEIR-type Safe Blues strand with mean infection period 10 days (left), and 5 days (right). The remaining strand parameters are common to both strands. The ‘Exposed’ curve does not appear to be consistent with the ‘Infected’ curve. This happens because the average duration of exposure is only 12 hours and because the ‘Exposed’ curve plots the number of exposed individuals at a fixed time on each day. Thus, the plotted exposure counts represent only that subset of exposures that overlap the daily time at which they are counted.

https://doi.org/10.1371/journal.pdig.0000142.g008

Testing the virtual social distancing mechanism

Social distancing is intended to increase spatial separation. It could be implemented by putting a lower bound on d or by scaling d up. As mentioned previously, we implemented virtual social distancing by multiplying the observed distance parameter, d, in Eq (2) by a given social distancing factor. We considered 4 social distancing factors: 1 (no social distancing), 1.25 (low), 1.5 (medium), and 3.0 (high). We tested virtual social distancing for the strands released in batch 1.07, starting from the 3rd day after their release. This batch comprised 60 SI type strands.

We compared each strand’s infection counts on the days prior to and after virtual social distancing was imposed. Fig 9 displays an example of the effect of virtual social distancing and the maximal infection distance on the infection counts of three strands. The coordinates of each data point are the number of infections one day before the virtual lockdown and the number of infections one day after the virtual lockdown. We saw three distinct patterns for the counts one day prior to implementing virtual social distancing: when the maximal distance was 40, the counts were less than 2; when it was 60, the counts were between 2 and 6; and when it was above 80, the counts had similar values. This distinction was not visible in the infection counts after imposing virtual social distancing.

Fig 9. Infection counts of three Safe blues strands on the day prior to and after implementing virtual social distancing, categorized by their maximal infection distance.

Each point is the centroid of the triangle formed from the infection counts of the three strands. Social distancing (SD) is categorized as: 1.0 (no SD), 1.25 (low SD), 1.5 (medium SD), and 3.0 (high SD).

https://doi.org/10.1371/journal.pdig.0000142.g009
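
Each plotted centroid in Fig 9 is simply the coordinate-wise mean of the three strands’ (before, after) points. A minimal sketch, assuming a hypothetical per-strand table layout:

```python
# Sketch of constructing the Fig 9 points: group the three strands that share
# a social distancing factor and maximal infection distance, then take the
# centroid (mean) of their (day-before, day-after) infection counts.
# Column names below are assumed for illustration only.
import pandas as pd

def fig9_points(strands: pd.DataFrame) -> pd.DataFrame:
    return (
        strands
        .groupby(["sd_factor", "max_distance"])
        [["infections_day_before", "infections_day_after"]]
        .mean()  # centroid of the three strands' points
        .reset_index()
    )
```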

In terms of the effect of the social distancing factor, we can observe that infections on the day after the virtual lockdown were, in general, lower for the high (3.0 SD) and medium (1.5 SD) factors. This result suggests that our initial trials of virtual social distancing had an impact in reducing the severity of the epidemics. Higher participant numbers and more strands would likely strengthen this conclusion. We intend to explore virtual social distancing further in future trials once the experiment restarts in 2022.

The effect of an actual lockdown on the experiment

In the previous section we highlighted that we were able to observe a reduction in strand infection counts due to artificially imposed social distancing. We were also able to observe the same phenomenon after the actual lockdown that occurred in New Zealand during the phase 2 trial of our experiment.

The actual lockdown in Auckland took place on August 17, 2021, at the time we released the batch 2.01 strands; the lockdown was later extended until the end of 2021. There were 600 strands in total in this batch. These were the first experimental strands released after we had determined the maximal infection distance parameter ranges and tested virtual social distancing. The campus was shut down due to the lockdown, and the number of attendees on campus immediately dropped. Consequently, we saw an immediate reduction in the number of exposed participants, and within weeks the number of infectious participants fell to zero. Thus, our data showcased the effectiveness of the actual lockdown.

In Fig 10 we depict the infection trajectories of all strands in batch 2.01, categorized by model type (SEIR, SIR, SEI, SI). For all model types, we can see a clear effect of the actual lockdown on the strands’ trajectories. As expected, for both the SEIR and SIR models (top two plots in Fig 10), the infection trends gradually decreased to, or remained close to, zero within weeks after the lockdown; for both the SEI and SI models (bottom two plots in Fig 10), the infection trends stabilized, with minor fluctuations due to the varying number of participants. We also see that the effect of the lockdown was immediate, showcasing the real-time nature of Safe Blues information.

Fig 10.

Infection trajectories for all strands in batch 2.01, categorized into SEIR (top left), SIR (top right), SEI (bottom left), and SI (bottom right) types. The solid curves show the five-day moving average of the medians, categorized by three maximal infection distances.

https://doi.org/10.1371/journal.pdig.0000142.g010
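
For reproducibility, the solid summary curves in Fig 10 can be computed as per-day medians across the strands in a category, smoothed with a five-day moving average. A sketch, with an assumed long-format table (columns `day`, `strand_id`, `infected`):

```python
# Sketch of the Fig 10 summary curves: the daily median infection count
# across all strands in a category, smoothed by a five-day moving average.
# The long-format column names are assumed for illustration.
import pandas as pd

def summary_curve(trajectories: pd.DataFrame) -> pd.Series:
    daily_median = trajectories.groupby("day")["infected"].median()
    return daily_median.rolling(window=5, min_periods=1).mean()
```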

Conclusion

We have described the design of a Safe Blues experiment in Auckland, New Zealand. The experiment was interrupted by a lockdown in August 2021, so we have extended it to include two more phases once the second semester begins in June 2022. The partial data we collected while calibrating the experiment in the early phases, especially after the lockdown, suggests that Safe Blues data will be a valuable tool in the fight against pandemics. In particular, we saw the effect of the lockdown immediately in the strand data, whereas in reality the effect of a lockdown is not observed for several days, since reported infection counts lag true infections by the incubation period together with the time it takes to get tested and to have the test result recorded. The value of Safe Blues real-time data is even greater in the presence of under-reporting of cases, which arises from asymptomatic infections, from skepticism about or ignorance of the value of reporting, or where the efficiency of other methods (contact tracing, wastewater monitoring, or even PCR with group testing) is reduced when prevalence is high.

Our focus in this paper was on the design of the experiment. We discussed the databases, strand management, measures taken to ensure participant privacy, and participation incentives that are essential in such an effort. A simulation tool was useful for the initial calibration of strand parameters, but data from the initial phases proved more valuable than simulation for full calibration. In the early stages of the experiment we were able to approximately model the effect of social-distancing mandates, and the results suggest that the full experiment could showcase the effectiveness of such measures, as did the data from the actual lockdown. As part of the experiment we have built a number of visualizations; these, along with all necessary source code, are available in an open repository. We look forward to providing data from Phases 4 and 5 of the experiment once these are complete.

Supporting information

S1 Appendix. Prize draw rules.

Details of the prize draw rules.

https://doi.org/10.1371/journal.pdig.0000142.s001

(PDF)

S2 Appendix. App software.

Details of the Safe Blues app software.

https://doi.org/10.1371/journal.pdig.0000142.s002

(PDF)

S3 Appendix. ADS and PMS servers.

Technical details of the ADS and PMS servers.

https://doi.org/10.1371/journal.pdig.0000142.s003

(PDF)

S4 Appendix. Interpolation and imputation algorithms.

Details of the interpolation and imputation algorithms.

https://doi.org/10.1371/journal.pdig.0000142.s004

(PDF)

S5 Appendix. Strand details.

Details of strands and their purposes.

https://doi.org/10.1371/journal.pdig.0000142.s005

(PDF)

S6 Appendix. Data structure.

Structure of the Safe Blues dataset.

https://doi.org/10.1371/journal.pdig.0000142.s006

(PDF)

References

1. Dandekar R, Henderson SG, Jansen HM, McDonald J, Moka S, Nazarathy Y, et al. Safe Blues: The case for virtual safe virus spread in the long-term fight against epidemics. Patterns. 2021;2(3):100220. pmid:33748797
2. Dandekar RA, Henderson SG, Jansen HM, Moka S, Nazarathy Y, Rackauckas C, et al. Safe Blues: a method for estimation and control in the fight against COVID-19; medRxiv:20090258v1 [preprint]. 2020 [cited 2021 December 20]. Available from: https://www.medrxiv.org/content/10.1101/2020.05.04.20090258v1.
3. Rackauckas C, Ma Y, Martensen J, Warner C, Zubov K, Supekar R, et al. Universal differential equations for scientific machine learning. arXiv:2001.04385 [preprint]. 2020.
4. Safe Blues. Safe Blues GitHub organization; 2021 May [cited 21 December 2021]. Available from: https://github.com/SafeBlues.
5. Safe Blues. Safe Blues Data; 2021 July [cited 21 December 2021]. Available from: https://safeblues.org/data.
6. WHO. WHO Coronavirus (COVID-19) Dashboard; 2020 April [cited 21 December 2021]. Available from: https://covid19.who.int.
7. Dong E, Du H, Gardner L. An interactive web-based dashboard to track COVID-19 in real time. The Lancet Infectious Diseases. 2020;20(5):533–534. pmid:32087114
8. nCOV2019. ncov2019 epidemiological data; 2020 January [cited 21 December 2021]. Available from: https://ncov2019.live.
9. Influenzanet. Influenzanet; 2009 January [cited 21 December 2021]. Available from: http://influenzanet.info.
10. FluTracking. The surveillance system for monitoring influenza; 2006 June [cited 21 December 2021]. Available from: https://info.flutracking.net.
11. Boston Children’s Hospital. Outbreaks Near Me; 2020 January [cited 21 December 2021]. Available from: https://outbreaksnearme.org/us/en-US.
12. CoronaSurveys. Estimating global Covid-19 trend; 2020 January [cited 21 December 2021]. Available from: https://coronasurveys.org.
13. Astley CM, Tuli G, Mc Cord KA, Cohn EL, Rader B, Varrelman TJ, et al. Global monitoring of the impact of the COVID-19 pandemic through online surveys sampled from the Facebook user base. Proceedings of the National Academy of Sciences. 2021;118(51):e2111455118. pmid:34903657
14. Chan AT, Drew DA, Nguyen LH, Joshi AD, Ma W, Guo CG, et al. The COronavirus Pandemic Epidemiology (COPE) consortium: a call to action. Cancer Epidemiology and Prevention Biomarkers. 2020;29(7):1283–1289. pmid:32371551
15. Swinburne University of Technology. Beat COVID-19 Now; 2020 April [cited 21 December 2021]. Available from: https://www.swinburne.edu.au/research/global-health-equity/our-research-projects/beatcovid19now.
16. Freifeld CC, Mandl KD, Reis BY, Brownstein JS. HealthMap: global infectious disease monitoring through automated classification and visualization of Internet media reports. Journal of the American Medical Informatics Association. 2008;15(2):150–157. pmid:18096908
17. HealthMap. Novel Coronavirus 2019-nCoV (interactive map); 2020 January [cited 21 December 2021]. Available from: https://www.healthmap.org/covid-19.
18. BlueDot. BlueDot Products: AI-driven infectious disease surveillance; 2013 January [cited 21 December 2021]. Available from: https://bluedot.global.
19. Metabiota. The Epidemic Tracker; 2009 January [cited 21 December 2021]. Available from: https://www.metabiota.com/epidemic-tracker.
20. Randazzo W, Truchado P, Cuevas-Ferrando E, Simón P, Allende A, Sánchez G. SARS-CoV-2 RNA in wastewater anticipated COVID-19 occurrence in a low prevalence area. Water Research. 2020;181:115942. pmid:32425251
21. Robert Koch Institut. Corona-Datenspende; 2020 April [cited 21 December 2021]. Available from: https://corona-datenspende.de/science/en.
22. Quer G, Radin JM, Gadaleta M, Baca-Motes K, Ariniello L, Ramos E, et al. Wearable sensor data and self-reported symptoms for COVID-19 detection. Nature Medicine. 2021;27(1):73–77. pmid:33122860
23. Shandhi MMH, Cho PJ, Hseih J, Kalodzitsa A, Lu X, Wang WK, et al. CovIdentify: Using commercial wearable devices and smartphones to detect and monitor COVID-19. Presented at the IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI’21). Athens, Greece.
24. Chamberlain SD, Singh I, Ariza CA, Daitch AL, Philips PB, Dalziel BD. Real-time detection of COVID-19 epicenters within the United States using a network of smart thermometers; medRxiv:20039909v1 [preprint]. 2020 [cited 2021 December 21]. Available from: https://www.medrxiv.org/content/10.1101/2020.04.06.20039909v1.
25. Smarr BL, Aschbacher K, Fisher SM, Chowdhary A, Dilchert S, Puldon K, et al. Feasibility of continuous fever monitoring using wearable devices. Scientific Reports. 2020;10(1):1–11. pmid:33318528
26. Miller DJ, Capodilupo JV, Lastella M, Sargent C, Roach GD, Lee VH, et al. Analyzing changes in respiratory rate to predict the risk of COVID-19 infection. PLoS One. 2020;15(12):e0243693. pmid:33301493
27. The National Health Service. The NHS COVID-19 app; 2020 January [cited 21 December 2021]. Available from: https://www.nhs.uk/nhs-app.
28. Singapore Government Agency Website. Trace Together; 2020 April [cited 21 December 2021]. Available from: https://www.tracetogether.gov.sg.
29. Apple Inc, Google Inc. Privacy-preserving contact tracing; 2020 April [cited 21 December 2021]. Available from: https://www.apple.com/covid19/contacttracing.
30. Australian Government. Protect yourself and the community; 2020 April [cited 21 December 2021]. Available from: https://www.covidsafe.gov.au.
31. Boulos MNK, Geraghty EM. Geographical tracking and mapping of coronavirus disease COVID-19/severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) epidemic and associated events around the world: how 21st century GIS technologies are supporting the global fight against outbreaks and epidemics. International Journal of Health Geographics. 2020;19:8.
32. Zhou C, Su F, Pei T, Zhang A, Du Y, Luo B, et al. COVID-19: challenges to GIS with big data. Geography and Sustainability. 2020;1(1):77–87.
33. Jhunjhunwala A. Role of telecom network to manage COVID-19 in India: Aarogya Setu. Transactions of the Indian National Academy of Engineering. 2020;5:157–161.
34. State of Israel Ministry of Health. HaMagen: the Ministry of Health app for fighting the COVID-19 outbreak; 2020 May [cited 21 December 2021]. Available from: https://govextra.gov.il/ministry-of-health/hamagen-app/download-en.
35. White L, Van Basshuysen P. Without a trace: Why did corona apps fail? Journal of Medical Ethics. 2021;47(12):e83. pmid:33419939
36. Garrett PM, White JP, Lewandowsky S, Kashima Y, Perfors A, Little DR, et al. The acceptability and uptake of smartphone tracking for COVID-19 in Australia. PLoS One. 2021;16(1):e0244827. pmid:33481841
37. Queensland Government. Check In Qld; 2021 February [cited 21 December 2021]. Available from: https://www.covid19.qld.gov.au/check-in-qld/about-the-app-and-when-to-use-it.
38. Yoneki E, Crowcroft J. EpiMap: Towards quantifying contact networks for understanding epidemiology in developing countries. Ad Hoc Networks. 2014;13:83–93.
39. University of Cambridge. FluPhone Project: Understanding Spread of Infectious Disease and Behavioural Responses; 2009 January [cited 21 December 2021]. Available from: https://www.cl.cam.ac.uk/research/srg/netos/projects/archive/fluphone2.
40. Yoneki E. FluPhone study: Virtual disease spread using haggle. In: Proceedings of the 6th ACM Workshop on Challenged Networks; 2011. p. 65–66.
41. Firth JA, Hellewell J, Klepac P, Kissler SM, Kucharski AJ, Spurgin LG, et al. Combining fine-scale social contact data with epidemic modelling reveals interactions between contact tracing, quarantine, testing and physical distancing for controlling COVID-19; medRxiv:26.20113720v2 [preprint]. 2020 [cited 2021 December 21]. Available from: https://www.medrxiv.org/content/10.1101/2020.05.26.20113720v2.
42. Specht I, Sani K, Loftness BC, Hoffman C, Gionet G, Bronson A, et al. Analyzing the Impact of a Real-life Outbreak Simulator on Pandemic Mitigation: an Epidemiological Modeling Study; medRxiv:22270198v2 [preprint]. 2022 [cited 2022 July 22]. Available from: https://www.medrxiv.org/content/10.1101/2022.02.04.22270198v2.
43. Operation Outbreak. Operation Outbreak homepage; 21 Jan. Available from: https://operationoutbreak.org/.
44. Moritz S, Gottschick C, Horn J, Popp M, Langer S, Klee B, et al. The risk of indoor sports and culture events for the transmission of COVID-19. Nature Communications. 2021;12(1):1–9. pmid:34413294
45. Revollo B, Blanco I, Soler P, Toro J, Izquierdo-Useros N, Puig J, et al. Same-day SARS-CoV-2 antigen test screening in an indoor mass-gathering live music event: a randomised controlled trial. The Lancet Infectious Diseases. 2021;21(10):1365–1372. pmid:34051886
46. Fieldlab Events program. Fieldlab Events; 2021 February [cited 21 December 2021]. Available from: https://fieldlabevenementen.nl/fieldlab-englis.
47. Department for Digital, Culture, Media & Sport. Events Research Programme; 2021 April [cited 21 December 2021]. Available from: https://www.gov.uk/government/publications/events-research-programme-phases-i-ii-and-iii-data-release.
48. Kim SK, Kim EO, Kim SH, Jung J. Universal screening of severe acute respiratory syndrome coronavirus 2 with polymerase chain reaction testing after rally of trainee doctors. Journal of Korean Medical Science. 2020;35(42):e380. pmid:33140592
49. Hagemann G, Hu C, Al Hassani N, Kahil N. Infographic. Successful hosting of a mass sporting event during the COVID-19 pandemic. British Journal of Sports Medicine. 2021;55(10):570–571.
50. Safe Blues. Safe Blues App; 2020 May [cited 21 December 2021]. Available from: https://play.google.com/store/apps/details?id=org.safeblues.mobile.
51. Safe Blues. Safe Blues Simulation; 2021 November [cited 21 December 2021]. Available from: https://simulation.safeblues.org.
52. Safe Blues. Safe Blues Dashboard; 2021 November [cited 21 December 2021]. Available from: https://analytics.safeblues.org.
53. Safe Blues. Safe Blues homepage; 2020 May [cited 21 December 2021]. Available from: https://safeblues.org.
54. Diekmann O, Heesterbeek H, Britton T. Mathematical tools for understanding infectious disease dynamics. Vol. 7. Princeton University Press; 2013.
55. Petry NM, Martin B, Cooney JL, Kranzler HR. Give them prizes and they will come: Contingency management for treatment of alcohol dependence. Journal of Consulting and Clinical Psychology. 2000;68(2):250. pmid:10780125
56. Petry NM, Peirce JM, Stitzer ML, Blaine J, Roll JM, Cohen A, et al. Effect of prize-based incentives on outcomes in stimulant abusers in outpatient psychosocial treatment programs: a national drug abuse treatment clinical trials network study. Archives of General Psychiatry. 2005;62(10):1148–1156. pmid:16203960
57. Fine P, Eames K, Heymann DL. “Herd immunity”: a rough guide. Clinical Infectious Diseases. 2011;52(7):911–916. pmid:21427399
58. Anderson RM, May RM. Infectious diseases of humans: dynamics and control. Oxford University Press; 1991.
59. Kermack WO, McKendrick AG. A contribution to the mathematical theory of epidemics. Proceedings of the Royal Society of London Series A, Containing Papers of a Mathematical and Physical Character. 1927;115(772):700–721.
60. Liu Y, Rocklöv J. The reproductive number of the Delta variant of SARS-CoV-2 is far higher compared to the ancestral SARS-CoV-2 virus. Journal of Travel Medicine. 2021;28(7):1–3.