Computerized algorithms known as symptom checkers aim to help patients decide what to do should they have a new medical concern. However, despite widespread implementation, most studies on symptom checkers have involved simulated patients. Only limited evidence currently exists about symptom checker safety or accuracy when used by real patients. We developed a new prototype symptom checker and assessed its safety and accuracy in a prospective cohort of patients presenting to primary care and emergency departments with new medical concerns.
A prospective cohort study was done to assess the prototype’s performance. The cohort consisted of adult patients (≥16 years old) who presented to hospital emergency departments and family physician clinics. Primary outcomes were safety and accuracy of triage recommendations to seek hospital care, seek primary care, or manage symptoms at home.
Data from 281 hospital patients and 300 clinic patients were collected and analyzed. Sensitivity to emergencies was 100% (10/10 encounters). Sensitivity to urgencies was 90% (73/81) for hospital patients and 97% (34/35) for primary care patients. The prototype was significantly more accurate than patients at triage (73% versus 58%, p<0.01). Compliance with the triage recommendations of this iteration of the symptom checker would have reduced hospital visits in this cohort by 55%, but would have caused potential harm from delayed care in 2–3% of encounters.
The prototype symptom checker was superior to patients in deciding the most appropriate treatment setting for medical issues. This symptom checker could reduce a significant number of unnecessary hospital visits, with accuracy and safety outcomes comparable to existing data on telephone triage.
Citation: Chan F, Lai S, Pieterman M, Richardson L, Singh A, Peters J, et al. (2021) Performance of a new symptom checker in patient triage: Canadian cohort study. PLoS ONE 16(12): e0260696. https://doi.org/10.1371/journal.pone.0260696
Editor: Andrea Gruneir, University of Alberta, CANADA
Received: December 19, 2020; Accepted: November 15, 2021; Published: December 1, 2021
Copyright: © 2021 Chan et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: The majority of the relevant data are within the paper and its Supporting Information files. The information provided in the Supporting Information should be sufficient for any interested reviewer to decide whether the data were interpreted appropriately. Sensitive patient information (i.e., notes written by treating physicians) will be available, with approval from the Western University Research Ethics Board or Lawson Health Research, to researchers who meet the criteria for access to confidential data. The contacts for Western Research and Lawson are provided below, as they are most responsible for safeguarding patient information in this study. Western Research Room 5150 Support Services Building, 1393 Western Road London, Ontario, Canada, N6G 1G9 Tel: 519-661-2161 | Research Ethics: 519-661-3036 firstname.lastname@example.org Lawson Health Research 750 Base Line Road East, Suite 300 London, Ontario, Canada N6C2R5 Tel: 519-667-6649 Email: email@example.com.
Funding: FC received the Unnur Brown Leadership Award in Health Policy, which was granted by the Dr. Adalsteinn Brown and the Larry and Cookie Rossy Family Foundation and the Schulich School of Medicine and Dentistry (schulich.uwo.ca). FC also received a Resident Research Grant from the PSI Foundation (www.psifoundation.org). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Worldwide, 70% of internet users go online to access health information [1–3]. Most inquiries begin with search engines such as Google, but results are often incomplete, inaccurate, or inaccessible to lay persons [3–5]. Potential consequences of unreliable or inappropriate health information include delay of care, inappropriate hospital visits, and cyberchondria [6, 7]. These online health-seeking behaviors may be driven in part by a knowledge gap with respect to what people should do if they become ill, given that public health education focuses primarily on preventive health [8, 9]. This knowledge is also difficult to teach, given that even trained health professionals may struggle to identify patients who would most benefit from emergency department care [9–13].
During the COVID-19 pandemic, computerized algorithms known as “symptom checkers” were widely implemented to limit unnecessary contact between patients and healthcare providers, alleviate pressures on telehealth systems, and empower patients to decide where best to access care for their symptoms [14–18]. Symptom checkers are not new, and many already existed prior to the pandemic. Unfortunately, few studies have been published about the safety or accuracy of symptom checkers and, of the existing studies, all suffer from at least one of the following limitations: the literature was not peer-reviewed; the results were based solely on simulated data; the selection of patients covered a limited range of conditions; the data were not generated by patients entering their own symptoms; or the studies did not report safety outcomes. Development of a safe and effective symptom checker could lower barriers to accessing care, encourage patients to go to the hospital for potentially life-threatening problems, discourage unnecessary healthcare visits, and reduce wait times by encouraging appropriate healthcare utilization [19, 21–25].
We developed our own prototype symptom checker and conducted this study to address the methodological limitations found in existing studies [27, 28]. In particular, we detail the performance of the symptom checker when used by adults seeking care from family physicians and emergency departments.
Development of the symptom checker
The prototype symptom checker was designed by author FC and coded by computer science students at Western University. The version of the algorithm used for this study comprised a total of 247 questions or computational steps manually compiled into a decision tree with recursive elements. Relevant questions are presented sequentially to the user based upon previous responses.
To create the questions, common medical diagnoses and concerns were first collected from medical textbooks (e.g. Toronto Notes, 2018; Edmonton Manual, 2018; Harrison’s Internal Medicine, 20th edition) and online physician resources (e.g. UpToDate, DynaMed, College of Family Physicians of Canada). A literature review was then performed for each diagnosis to identify questions and factors that would affect escalation of treatment. Diagnostic questions were specifically gathered for conditions considered emergencies and conditions that can be managed by patients at home. Diagnostic questions that did not change triage recommendations were discarded. No patient vignettes were used in the development of this symptom checker.
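The paper does not publish the algorithm itself, so as a rough illustration of the structure described above, the following sketch shows a sequential decision tree in which each response determines the next question presented. All node names, questions, and triage rules here are invented for illustration; they are not part of the actual 247-step algorithm.

```python
# Illustrative sequential triage decision tree. The node names, questions,
# and rules below are invented for this sketch; they are NOT the study's
# actual algorithm.
TREE = {
    "start":     {"question": "Is your concern related to pain?",
                  "yes": "pain_site", "no": "fever"},
    "pain_site": {"question": "Is the pain in your chest?",
                  "yes": "HOSPITAL", "no": "duration"},
    "duration":  {"question": "Has the pain lasted more than a week?",
                  "yes": "PRIMARY_CARE", "no": "HOME"},
    "fever":     {"question": "Do you have a fever above 38 C?",
                  "yes": "PRIMARY_CARE", "no": "HOME"},
}
TRIAGE_LEVELS = {"HOSPITAL", "PRIMARY_CARE", "HOME"}

def triage(answers):
    """Walk the tree using a dict of {node: 'yes'/'no'} responses.

    Only questions on the path taken are consulted, mirroring how relevant
    questions are presented sequentially based on previous responses.
    """
    node = "start"
    while node not in TRIAGE_LEVELS:
        node = TREE[node][answers[node]]
    return node

# Example: chest pain routes directly to hospital-level triage.
print(triage({"start": "yes", "pain_site": "yes"}))  # HOSPITAL
```

Note that in such a structure, discarding diagnostic questions that never change the leaf reached (as the authors describe) prunes the tree without altering any triage recommendation.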
A prospective cohort study was done with patients who self-presented to emergency departments and primary care clinics. Ethics approvals were obtained from the research ethics boards of Western University, the University of British Columbia, and Whitehorse General Hospital. The prospective sample included Canadian patients presenting to 2 emergency departments (Whitehorse General Hospital in Whitehorse, YT; Royal Inland Hospital in Kamloops, BC) and 13 full-time family physician practices based out of 3 centres in London, Ontario (Victoria Family Medical Centre, St. Joseph’s Family Medical Centre, and Byron Family Medical Centre). The research ethics committees approved the lack of parent or guardian consent for minors because all intended participants were 16 years of age or older. The sample was representative of adult Canadian patients who seek professional medical attention.
In the waiting rooms of each location, an unattended kiosk and information poster invited adult patients to use a tablet computer programmed with the prototype symptom checker. Inclusion criteria were adults ≥16 years reporting that they were seeking care for a new medical issue. The information poster contained details typically found in a patient letter of information. Patients were incentivized to participate with opportunities to win $50 gift cards. On the tablets, patients signed consent and entered their name, age, gender, and symptom checker responses. Patient recruitment and data collection occurred from June 2019 to February 2020. Patients were not shown the symptom checker’s triage recommendation after completion, and treating physicians were blinded to patient participation.
Identifying information and responses to the prototype symptom checker were stored in separate files on the tablets to ensure blinding during analysis. Identifying information was used to link medical records. If the corresponding patient records could not be found, the symptom checker responses were excluded from analysis. A list of valid medical records was collected and authors (MP, LR, AS, JP, AT, CP, TR) extracted the impression and plan written by the treating physician.
The diagnoses and treatments provided by the treating physician were reviewed by physician authors FC and SL and, by consensus, assigned an appropriate categorization for comparison with triage recommendations provided by the symptom checker (Table 1). The process of dividing patient encounters into broad categories of immediate/emergent, urgent, primary/routine care, and home/self-care is a standard process in similar studies that examine the performance of telemedicine triage and emergency triage systems [29, 30]. Categorization was performed blinded to the triage recommendations made by the symptom checker. The treating physicians’ diagnoses and management plans were used as the gold standard for comparison with recommendations provided by the symptom checker; patients’ presenting symptoms were not considered when categorizing the true severity of the patient’s medical issue.
Some physicians documented multiple diagnoses or issues for the healthcare visit. In these cases, each diagnosis was separately categorized as emergency, urgency, routine, or home appropriate. For example, in a patient encounter with dual diagnoses of “1) vaginal discharge not yet diagnosed and 2) lower back pain”, the diagnoses were categorized as routine and home appropriate, respectively, because investigations were ordered by the physician for the vaginal discharge and home-treatment solutions were recommended for the lower back pain. Symptom checker responses were then assessed to determine the most applicable physician diagnosis. In this specific encounter, the patient indicated on the symptom checker that the concern was not related to pain; thus the diagnosis “vaginal discharge not yet diagnosed” was used for the purposes of assessing symptom checker performance.
Patients were excluded from final analysis if the treating physician commented that the patient was being seen for follow-up, or if there was a very significant mismatch between patient and physician concerns for visit (e.g. the patient selected “I want to seek help related to violence against women” and the visit diagnosis was “otitis media”).
Based on data gathered in the first month, the recruitment target was calculated to provide 80% power, guided by a predicted 20% prevalence of urgencies and a conservative 75% sensitivity for urgencies. This required 234 participants from each of the hospital and clinic settings. To account for patient records that could not be identified, minimum recruitment was set at 292 participants.
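The paper does not state the exact sample-size formula used. One common precision-based approach for powering a sensitivity estimate (Buderer's method) is sketched below; the 12.5% confidence-interval half-width is our assumed input for illustration and is not taken from the paper.

```python
import math

def sample_size_for_sensitivity(sens, margin, prevalence, z=1.96):
    """Buderer-style sample size for estimating a diagnostic sensitivity.

    sens       : anticipated sensitivity (e.g. a conservative 0.75)
    margin     : acceptable half-width of the 95% CI around sensitivity
                 (0.125 below is an assumed value, not from the paper)
    prevalence : expected proportion of true positives (urgencies)
    """
    # Number of true-positive cases (urgencies) needed for the CI width.
    cases = math.ceil(z**2 * sens * (1 - sens) / margin**2)
    # Scale up by prevalence to get total participants to recruit.
    return math.ceil(cases / prevalence)

print(sample_size_for_sensitivity(0.75, 0.125, 0.20))  # 235
```

With these assumed inputs the formula lands near, but not exactly at, the paper's 234 participants per setting, underscoring that the study's precise parameters are not reported.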
Outcome measures were pre-specified. Triage was considered accurate if the triage categorization, as determined by the treating physician’s management plan, matched the triage recommendation provided by the symptom checker. Proportions were reported with 95% confidence intervals. Sensitivity and positive predictive value (PPV) were calculated with 95% confidence intervals (Wilson score method). A two-tailed McNemar’s test was used to compare the accuracy of patient decision making with that of the prototype. Calculations were done with Microsoft Excel (2016) and SPSS (version 21).
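Both statistics can be computed with a few lines of standard-library Python. The sketch below reproduces the Wilson score interval for the overall accuracy reported later in the paper (427/581 correct); the discordant-pair counts fed to McNemar's test are hypothetical, since the paper reports only the marginal accuracies, not the paired breakdown.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = p + z**2 / (2 * n)
    spread = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - spread) / denom, (centre + spread) / denom

def mcnemar_p(b, c):
    """Two-tailed McNemar's test (normal approximation) on discordant pairs:
    b = checker right / patient wrong, c = the reverse."""
    z = abs(b - c) / math.sqrt(b + c)
    return math.erfc(z / math.sqrt(2))  # two-tailed p-value

lo, hi = wilson_ci(427, 581)        # overall symptom checker accuracy
print(f"{lo:.0%} to {hi:.0%}")      # ~70% to 77%, matching the reported CI

# Hypothetical discordant counts (not reported in the paper), chosen so the
# margins match 427 vs 339 correct decisions: b - c = 88.
print(mcnemar_p(150, 62) < 0.01)    # True
```

The Wilson interval is preferred over the simpler Wald interval here because it behaves well for proportions near 0 or 1, such as a sensitivity of 100%.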
Included in the final analysis were 281 and 300 patients who presented to emergency departments and primary care clinics, respectively (Fig 1). Mean patient age was 38±16 years (range 16 to 91 years) for emergency department patients and 48±18 years (range 16 to 91 years) for patients seen by primary care family physicians. 366 of 581 patients (63%) were female. A diverse range of patient concerns was captured, including trauma, mental health, pregnancy, immunizations, infections, and cardiovascular, respiratory, gastrointestinal, and dermatologic concerns, among others. Among patients with mismatches between patient and physician responses, mean age was 40±15 years (range 19 to 94 years) for emergency patients and 50±20 years (range 19 to 94 years) for patients seen in primary care. The algorithm’s triage “accuracy” for these mismatched cases was 90%, but they were not included in the analysis because the patients used the symptom checker for an entirely different reason than what was discussed with the physician.
For emergency department patients (Table 2; n = 281), the accuracy of the prototype symptom checker (172/281 encounters or 61%; 95% CI of 55% to 67%) was significantly better than that of patients (90/281 encounters or 32%; CI of 27% to 38%; p<0.01). No emergencies were missed by the symptom checker (n = 9); the diagnoses of these emergencies were: ingestion of death camas; incarcerated inguinal hernia; biceps tendon tear or rupture; alcohol intoxication with coffee ground emesis; upper gastrointestinal bleed requiring transfusion; intoxication, chest pain, and possible Brugada syndrome; self-inflicted wrist laceration; hand laceration from a drill through the hand; and antepartum hemorrhage. Sensitivity to urgencies was 90% (73/81 encounters; CI of 82% to 95%). Under-triage for urgencies occurred in 3% (8 encounters, CI of 1% to 6%); diagnoses for these cases were: abscess of thigh, phalanx fracture, boxer’s fracture, post lipoma excision hematoma, hand laceration, metacarpal fracture, toe fracture, and metallic foreign body in shin. Over-triage to hospital occurred in 76 encounters (27%, CI 22% to 33%). Patient compliance with the symptom checker’s triage advice would have reduced total hospital visits by 55% (155/281 encounters, CI of 49% to 61%).
For patients who presented to primary care (Table 3; n = 300), the accuracy of the symptom checker (85% or 255/300 encounters; CI of 81% to 89%) was similar to that of patients (83% or 249/300; CI of 78% to 87%; p = 0.11). One patient with an emergency diagnosis of “possible deep vein thrombosis” presented to primary care, but the symptom checker would have correctly advised the patient to go to the hospital. Sensitivity to urgencies was 97% (34/35, CI 85% to 99%). Under-triage occurred in one urgency encounter, in which the diagnosis was “possible cellulitis surrounding tracheostomy”. Over-triage to hospital occurred in 20 encounters (7%, CI 4% to 10%) in which hospital triage was suggested for issues that could be managed either at home or by primary care.
The overall accuracy of the symptom checker was 73% (427/581, 95% CI 70% to 77%), which was significantly better than the 58% accuracy of patients (339/581, CI 54% to 62%; p<0.01). Under-triage causing potential harm from delay in care occurred in a total of 9 encounters (2%, CI 1% to 3%), in which the symptom checker recommended home management for an urgency.
This is the first study to report a direct comparison between the triage accuracy of a symptom checker and decisions made by patients. The overall accuracy of our prototype symptom checker was 73% (95% CI 70% to 77%), with potential for harm in 2–3% of encounters. These results are promising given that, for context, telephone triage is reported to have a median accuracy of 75% and a rate of harm from under-triage of 1.3–3.2%. The results of this study may be most directly applied to a scenario wherein patients presenting at healthcare facilities have an opportunity to obtain a rapid, computer-generated opinion about their medical concern, with the possibility of redirecting care to a nearby hospital or to a clinic providing primary care for walk-in patients.
The symptom checker was less accurate among patients who presented to hospital compared to those who presented to primary care. Over-triage to hospitals occurred more often for patients who self-presented to the emergency department (76/281, 27%) compared to those who sought primary care (20/300, 7%). This may have occurred because patients who go to the hospital reported having more severe symptoms when using the symptom checker, which resulted in the symptom checker recommending a higher acuity response. Under-triage for urgencies occurred in several instances specifically related to distal extremity injuries and improvements will need to be made for these types of presenting concerns. The physician visit diagnosis and patient’s symptom checker responses were significantly mismatched for 135 patients and excluded from analysis (S1 Table); this may have been because patients wanted to utilize the symptom checker out of personal interest, but did not wish to reveal details about their personal health information.
This is the first study to fulfill three key criteria in the assessment of a symptom checker’s triage performance that were found to be lacking in previous studies: triage recommendations were compared with care provided by a clinician as the reference standard; testing was done in a general population of patients who continued to receive standard care; and an unrestricted range of symptoms was assessed [27, 28]. All peer-reviewed studies involving patients published thus far fail to meet at least one of the above criteria (Table 4) [25, 31–39]. Three non-peer-reviewed reports were reviewed; two had significant risks of bias, and the one government report did not provide sufficient information to interpret outcome measures [40–42].
All studies listed were limited by at least one of three factors (shown in grey).
Across emergency department and primary care settings, the prototype appropriately triaged 10 of 10 patients diagnosed with an emergency, but the study was not powered for emergencies. The sample of emergencies was small because of the relative infrequency of emergencies compared with other medical presentations. Emergencies were also more likely to arrive by ambulance, which bypasses the waiting room, and, given the study design, patients who felt very unwell were less likely to use the symptom checker. Further studies will be needed to confirm sensitivity to emergencies.
Reproducibility of the study’s results in other countries may be challenging given that local guidelines, cultural factors, and access to resources may differ. The results of this study cannot be generalized to other symptom checkers given the diversity of triage approaches.
Some questions remain unanswered by this study. It is unclear whether patients will comply with recommendations made by the symptom checker. It is also uncertain how the symptom checker would perform among patients who have chosen to stay at home instead of seeking care at a hospital or outpatient clinic. Implementation of a symptom checker at the community level would be necessary to determine whether it changes healthcare utilization and population morbidity.
The prototype symptom checker was superior to patients in deciding the most appropriate treatment setting for medical issues. Use of the symptom checker by patients seeking medical care could reduce a significant number of unnecessary hospital visits, with accuracy and safety outcomes comparable to existing data on telephone triage.
S1 Table. Raw data: Comparison of symptom checker recommendations to physician diagnosis and treatment.
De-identified data containing information ported directly from patient charts may be available upon approval from the Western Research Ethics Board for authorized individuals.
This study was done with the cooperation of the staff at London Health Science Centre’s family medical centres (Victoria Family Medical Centre and Byron Family Medical Centre), St. Joseph’s Family Medical and Dental Centre, Whitehorse General Hospital, and Royal Inland Hospital. The prototype symptom checker app was programmed by Sama Rahimian, Brandon Kong, Zenen Treadwell, and Hussein Fahmy. We thank all the physicians, custodial staff, administrative staff, and study patients for help with planning, accommodating the needs of this study, and sampling. In particular, we thank Dr. Ian Mitchell, Dr. Sonny Cejic, Dr. Saadia Hameed, Dr. Evelyn Vingilis, Lindsey Page, and Mike Rickson for their support.
- 1. Andreassen H, Bujnowska-Fedak M, Chronaki C, Dumitru RC, Pudule I, Santana S et al. European citizens’ use of E-health services: A study of seven countries. BMC Public Health. 2007;7. pmid:17233919
- 2. Statistics Canada. Canadian Internet Use Survey. https://www150.statcan.gc.ca/n1/daily-quotidien/100510/dq100510a-eng.htm. Published May 10, 2010. Updated July 5, 2011. Accessed April 1, 2020.
- 3. Fox S, Duggan M. Health Online 2013. Pew Research Center. http://www.pewinternet.org/2013/01/15/health-online-2013. Published January 15, 2013. Accessed April 1, 2020. pmid:25825187
- 4. Tang H, Ng J. Googling for a diagnosis—use of Google as a diagnostic aid: internet based study. BMJ. 2006;333:1143–1145. pmid:17098763
- 5. Berland GK, Elliott MN, Morales LS, Algazy JI, Kravitz RL, Broder MS et al. Health Information on the Internet: Accessibility, Quality, and Readability in English and Spanish. JAMA. 2001;285(20):2612. pmid:11368735
- 6. Vismara M, Caricasole V, Starcevic V, Cinosi E, Dell’Osso B, Martinotti G et al. Is cyberchondria a new transdiagnostic digital compulsive syndrome? A systematic review of the evidence. Compr Psychiatry. 2020;99:152167. pmid:32146315
- 7. Shroff P, Hayes R, Padmanabhan P, Stevenson M. Internet Usage by Parents Prior to Seeking Care at a Pediatric Emergency Department: Observational Study. Interact J Med Res. 2017;6(2):e17. pmid:28958988
- 8. Berkman N, Sheridan S, Donahue K, Halpern D, Crotty K. Low Health Literacy and Health Outcomes: An Updated Systematic Review. Ann Intern Med. 2011;155(2):97. pmid:21768583
- 9. Sempere-Selva T, Peiró S, Sendra-Pina P, Martínez-Espín C, López-Aguilera I. Inappropriate use of an accident and emergency department: Magnitude, associated factors, and reasons—An approach with explicit criteria. Ann Emerg Med. 2001;37(6):568–579. pmid:11385325
- 10. Durand A, Gentile S, Devictor B, Palazzolo S, Vingnally P, Gerbeaux P et al. ED patients: how nonurgent are they? Systematic review of the emergency medicine literature. Am J Emerg Med. 2011;29(3):333–45. pmid:20825838
- 11. Allen A, Spittal M, Nicolas C, Oakley E, Freed G. Accuracy and interrater reliability of paediatric emergency department triage. Emerg Med Australas. 2015;27(5):447–452. pmid:26268051
- 12. Mistry B, Ramirez SSD, Kelen G, Schmitz PSK, Balhara KS, Levin S et al. Accuracy and Reliability of Emergency Department Triage Using the Emergency Severity Index: An International Multicenter Assessment. Ann Emerg Med. 2018;71(5):581–587.e3. pmid:29174836
- 13. Jordi K, Grossmann F, Gaddis G, Cignacco E, Denhaerynck K, Schwendimann R et al. Nurses’ accuracy and self-perceived ability using the Emergency Severity Index triage tool: a cross-sectional study in four Swiss hospitals. Scand J Trauma Resusc Emerg Med. 2015;23(1). pmid:26310569
- 14. Government of Ontario. COVID-19 Self Assessment for Ontarians. https://covid-19.ontario.ca/self-assessment. Accessed April 1, 2020.
- 15. Government of France. Maladie Coronavirus—Testez vos symptômes de Coronavirus COVID-19. https://maladiecoronavirus.fr. Accessed April 1, 2020.
- 16. Healthdirect Australia. Coronavirus (COVID-19) Symptom Checker. https://www.healthdirect.gov.au/symptom-checker/tool?symptom=CORO. Accessed April 1, 2020.
- 17. NHS. NHS 111 Online—About coronavirus (COVID-19). https://111.nhs.uk/covid-19. Accessed April 1, 2020.
- 18. Centre for Disease Control and Prevention. Coronavirus Disease 2019 (COVID-19): Testing for COVID-19. https://www.cdc.gov/coronavirus/2019-ncov/symptoms-testing/testing.html. Accessed April 1, 2020.
- 19. Semigran H, Linder J, Gidengil C, Mehrotra A. Evaluation of symptom checkers for self diagnosis and triage: audit study. BMJ. 2015; 351:h3480. pmid:26157077
- 20. Chambers D, Cantrell A, Johnson M, Preston L, Baxter SK, Booth A, et al. Digital and online symptom checkers and assessment services for urgent care to inform a new digital platform: a systematic review. Health Services and Delivery Research. 2019;No.7.29. pmid:31433612
- 21. Wyatt J. Fifty million people use computerised self triage. BMJ. 2015:h3727. pmid:26156750
- 22. Hibbard J, Greenlick M, Jimison H, Capizzi J, Kunkel L. The Impact of a Community-Wide Self-Care Information Project on Self-Care and Medical Care Utilization. Eval Health Prof. 2001;24(4):404–423. pmid:11817199
- 23. Meyer A, Giardina T, Spitzmueller C, Shahid U, Scott T, Singh H. Patient Perspectives on the Usefulness of an Artificial Intelligence–Assisted Symptom Checker: Cross-Sectional Survey Study. J Med Internet Res. 2020;22(1):e14679. pmid:32012052
- 24. Hitchings S, Barter J. Effect of self-triage on waiting times at a walk-in sexual health clinic. J Fam Plann Reprod Health Care. 2009;35(4):227–231. pmid:19849916
- 25. Sinha M, Khor K, Amresh A, Drachman D, Frechette A. The Use of a Kiosk-Model Bilingual Self-Triage System in the Pediatric Emergency Department. Pediatr Emerg Care. 2014;30(1):63–68. pmid:24378865
- 26. Symptom Check Canada. Symptom Checker. https://www.symptomcheck.ca. Published April 20, 2020. Accessed April 20, 2020.
- 27. Fraser H, Coiera E, David W. Safety of patient-facing digital symptom checkers. Lancet. 2018;392(10161):2263–2264. pmid:30413281
- 28. Fraser H, Clamp S, Wilson C. Limitations of Study on Symptom Checkers. JAMA Intern Med. 2017;177(5):740–741. pmid:28460096
- 29. Blank L, Coster J, O’Cathain A, Knowles E, Tosh J, Turner J et al. The appropriateness of, and compliance with, telephone triage decisions: a systematic review and narrative synthesis. J Adv Nurs. 2012;68(12):2610–2621. pmid:22676805
- 30. Zachariasse JM, van der Hagen V, Seiger N, Mackway-Jones K, van Veen M, Moll HA. Performance of triage systems in emergency care: a systematic review and meta-analysis. BMJ Open. 2019;9(5):e026471. pmid:31142524
- 31. Berry A, Cash B, Mulekar M, Wang B, Melvin A, Berry B. Symptom Checkers vs. Doctors, the Ultimate Test: A Prospective Study of Patients Presenting with Abdominal Pain. Gastroenterol. 2017;152(5):S852–S853.
- 32. Nijland N, Cranen K, Boer H, van Gemert-Pijnen J, Seydel E. Patient use and compliance with medical advice delivered by a web-based triage system in primary care. J Telemed Telecare. 2010;16(1):8–11. pmid:20086260
- 33. Poote A, French D, Dale J, Powell J. A study of automated self-assessment in a primary care student health centre setting. J Telemed Telecare. 2014;20(3):123–127. pmid:24643948
- 34. Verzantvoort N, Teunis T, Verheij T, van der Velden A. Self-triage for acute primary care via a smartphone application. Practical, safe and efficient?. PloS One. 2018;13(6):e0199284. pmid:29944708
- 35. Sole M, Stuart P, Deichen M. Web-Based Triage in a College Health Setting. J Am Coll Health. 2006;54(5):289–294. pmid:16539221
- 36. Price AR, Fagbuyi D, Harris R, Hanfling D, Place F, Taylor T et al. Feasibility of Web-Based Self-Triage by Parents of Children with Influenza-Like Illness. JAMA Pediatr. 2013;167(2):112–118. pmid:23254373
- 37. Winn A, Somai M, Fergestrom N, Crotty B. Association of Use of Online Symptom Checkers With Patients’ Plans for Seeking Care. JAMA Netw Open. 2019;2(12):e1918561. pmid:31880791
- 38. Cowie J, Calveley E, Bowers G, Bowers J. Evaluation of a Digital Consultation and Self-Care Advice Tool in Primary Care: A Multi-Methods Study. Int J Environ Res Public Health. 2018;15(5):896. pmid:29724040
- 39. Miller S, Gilbert S, Virani V, Wicks P. Patients’ utilisation and perception of an AI based symptom assessment and advice technology in a British primary care waiting room: Exploratory pilot study (Preprint). JMIR Human Factors. 2020.
- 40. Middleton K, Butt M, Hammerla N, Hamblin S, Mehta K, Parsa A. Sorting out symptoms: design and evaluation of the ’babylon check’ automated triage system. arXiv. https://arxiv.org/abs/1606.02041. Published June 7, 2016. Accessed April 1, 2020.
- 41. Babylon Health. NHS111 Powered By Babylon: Outcomes Evaluation. https://assets.babylonhealth.com/nhs/NHS-111-Evaluation-of-outcomes.pdf. Published October 2017. Accessed April 1, 2020.
- 42. NHS England. NHS111 Online Evaluation. https://askmygp.uk/wp-content/uploads/111-Online-Evaluation-DRAFT_.pdf. Published December 2017. Accessed April 1, 2020.