
ePOCT+ and the medAL-suite: Development of an electronic clinical decision support algorithm and digital platform for pediatric outpatients in low- and middle-income countries

  • Rainer Tan ,

    Roles Conceptualization, Formal analysis, Investigation, Methodology, Software, Visualization, Writing – original draft, Writing – review & editing

    rainer.tan@unisante.ch

    Affiliations Digital and Global Health Unit, Unisanté, Centre for Primary Care and Public Health, University of Lausanne, Lausanne, Switzerland, Swiss Tropical and Public Health Institute, Basel, Switzerland, Ifakara Health Institute, Dar es Salaam, United Republic of Tanzania, University of Basel, Basel, Switzerland

  • Ludovico Cobuccio,

    Roles Investigation, Methodology, Software, Writing – review & editing

    Affiliations Digital and Global Health Unit, Unisanté, Centre for Primary Care and Public Health, University of Lausanne, Lausanne, Switzerland, Swiss Tropical and Public Health Institute, Basel, Switzerland, University of Basel, Basel, Switzerland

  • Fenella Beynon,

    Roles Investigation, Methodology, Software, Writing – review & editing

    Affiliations Swiss Tropical and Public Health Institute, Basel, Switzerland, University of Basel, Basel, Switzerland

  • Gillian A. Levine,

    Roles Investigation, Methodology, Software, Writing – review & editing

    Affiliations Swiss Tropical and Public Health Institute, Basel, Switzerland, University of Basel, Basel, Switzerland

  • Nina Vaezipour,

    Roles Investigation, Methodology, Writing – review & editing

    Affiliations Swiss Tropical and Public Health Institute, Basel, Switzerland, University of Basel, Basel, Switzerland

  • Lameck Bonaventure Luwanda,

    Roles Investigation, Methodology, Software, Writing – review & editing

    Affiliation Ifakara Health Institute, Dar es Salaam, United Republic of Tanzania

  • Chacha Mangu,

    Roles Investigation, Methodology, Software, Writing – review & editing

    Affiliation National Institute of Medical Research–Mbeya Medical Research Centre, Mbeya, United Republic of Tanzania

  • Alan Vonlanthen,

    Roles Methodology, Project administration, Software, Writing – review & editing

    Affiliation Information Technology & Digital Transformation sector, Unisanté, Center for Primary Care and Public Health, University of Lausanne, Switzerland

  • Olga De Santis,

    Roles Methodology, Software, Writing – review & editing

    Affiliations Digital and Global Health Unit, Unisanté, Centre for Primary Care and Public Health, University of Lausanne, Lausanne, Switzerland, Institute of Global Health, University of Geneva, Geneva, Switzerland

  • Nahya Salim,

    Roles Investigation, Methodology, Software, Writing – review & editing

    Affiliations Ifakara Health Institute, Dar es Salaam, United Republic of Tanzania, Department of Pediatrics and Child Health, Muhimbili University Health and Allied Sciences (MUHAS), Dar es Salaam, United Republic of Tanzania

  • Karim Manji,

    Roles Investigation, Methodology, Software, Writing – review & editing

    Affiliation Department of Pediatrics and Child Health, Muhimbili University Health and Allied Sciences (MUHAS), Dar es Salaam, United Republic of Tanzania

  • Helga Naburi,

    Roles Investigation, Methodology, Software, Writing – review & editing

    Affiliation Department of Pediatrics and Child Health, Muhimbili University Health and Allied Sciences (MUHAS), Dar es Salaam, United Republic of Tanzania

  • Lulu Chirande,

    Roles Investigation, Methodology, Software, Writing – review & editing

    Affiliation Department of Pediatrics and Child Health, Muhimbili University Health and Allied Sciences (MUHAS), Dar es Salaam, United Republic of Tanzania

  • Lena Matata,

    Roles Investigation, Methodology, Software, Writing – review & editing

    Affiliations Swiss Tropical and Public Health Institute, Basel, Switzerland, Ifakara Health Institute, Dar es Salaam, United Republic of Tanzania, University of Basel, Basel, Switzerland

  • Method Bulongeleje,

    Roles Investigation, Methodology, Software, Writing – review & editing

    Affiliation PATH, Dar es Salaam, United Republic of Tanzania

  • Robert Moshiro,

    Roles Investigation, Methodology, Software, Writing – review & editing

    Affiliation Department of Pediatrics and Child Health, Muhimbili University Health and Allied Sciences (MUHAS), Dar es Salaam, United Republic of Tanzania

  • Andolo Miheso,

    Roles Investigation, Methodology, Software, Writing – review & editing

    Affiliation PATH, Nairobi, Kenya

  • Peter Arimi,

    Roles Investigation, Methodology, Software, Writing – review & editing

    Affiliation College of Health Sciences, University of Nairobi, Nairobi, Kenya

  • Ousmane Ndiaye,

    Roles Investigation, Methodology, Software, Writing – review & editing

    Affiliation Department of Pediatrics, Cheikh Anta Diop University, Dakar, Senegal

  • Moctar Faye,

    Roles Investigation, Methodology, Software, Writing – review & editing

    Affiliation Department of Pediatrics, Cheikh Anta Diop University, Dakar, Senegal

  • Aliou Thiongane,

    Roles Investigation, Methodology, Software, Writing – review & editing

    Affiliation Department of Pediatrics, Cheikh Anta Diop University, Dakar, Senegal

  • Shally Awasthi,

    Roles Investigation, Methodology, Software, Writing – review & editing

    Affiliation Department of Pediatrics, King George’s Medical University, Lucknow, India

  • Kovid Sharma,

    Roles Investigation, Methodology, Software, Writing – review & editing

    Affiliation PATH, Lucknow, India

  • Gaurav Kumar,

    Roles Investigation, Methodology, Software, Writing – review & editing

    Affiliations Swiss Tropical and Public Health Institute, Basel, Switzerland, University of Basel, Basel, Switzerland

  • Josephine Van De Maat,

    Roles Formal analysis, Investigation, Methodology, Writing – review & editing

    Affiliation Radboudumc, Department of Internal Medicine and Radboudumc Center for Infectious Diseases, Nijmegen, Netherlands

  • Alexandra Kulinkina,

    Roles Methodology, Project administration, Software, Writing – review & editing

    Affiliations Swiss Tropical and Public Health Institute, Basel, Switzerland, University of Basel, Basel, Switzerland

  • Victor Rwandarwacu,

    Roles Investigation, Methodology, Software, Writing – review & editing

    Affiliations Swiss Tropical and Public Health Institute, Basel, Switzerland, University of Basel, Basel, Switzerland

  • Théophile Dusengumuremyi,

    Roles Investigation, Methodology, Software, Writing – review & editing

    Affiliations Swiss Tropical and Public Health Institute, Basel, Switzerland, University of Basel, Basel, Switzerland

  • John Baptist Nkuranga,

    Roles Investigation, Methodology, Software, Writing – review & editing

    Affiliation Department of Paediatrics, King Faisal Hospital, Kigali, Rwanda

  • Emmanuel Rusingiza,

    Roles Investigation, Methodology, Software, Writing – review & editing

    Affiliations University Teaching Hospital of Kigali, Kigali, Rwanda, School of Medicine and Pharmacy, University of Rwanda, Kigali, Rwanda

  • Lisine Tuyisenge,

    Roles Investigation, Methodology, Software, Writing – review & editing

    Affiliation University Teaching Hospital of Kigali, Kigali, Rwanda

  • Mary-Anne Hartley,

    Roles Formal analysis, Investigation, Methodology, Writing – review & editing

    Affiliation Intelligent Global Health, Machine Learning and Optimization Laboratory, Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland

  • Vincent Faivre,

    Roles Project administration, Software, Writing – review & editing

    Affiliation Information Technology & Digital Transformation sector, Unisanté, Center for Primary Care and Public Health, University of Lausanne, Switzerland

  • Julien Thabard,

    Roles Project administration, Software, Writing – review & editing

    Affiliation Information Technology & Digital Transformation sector, Unisanté, Center for Primary Care and Public Health, University of Lausanne, Switzerland

  • Kristina Keitel ,

    Contributed equally to this work with: Kristina Keitel, Valérie D’Acremont

    Roles Conceptualization, Investigation, Methodology, Software, Supervision, Writing – review & editing

    Affiliations Swiss Tropical and Public Health Institute, Basel, Switzerland, University of Basel, Basel, Switzerland, Paediatric Emergency Department, Department of Pediatrics, University Hospital Berne, Berne, Switzerland

  • Valérie D’Acremont

    Contributed equally to this work with: Kristina Keitel, Valérie D’Acremont

    Roles Conceptualization, Funding acquisition, Investigation, Methodology, Project administration, Software, Supervision, Writing – review & editing

    Affiliations Digital and Global Health Unit, Unisanté, Centre for Primary Care and Public Health, University of Lausanne, Lausanne, Switzerland, Swiss Tropical and Public Health Institute, Basel, Switzerland, University of Basel, Basel, Switzerland


Abstract

Electronic clinical decision support algorithms (CDSAs) have been developed to address high childhood mortality and inappropriate antibiotic prescription by helping clinicians adhere to guidelines. Previously identified challenges of CDSAs include their limited scope, usability, and outdated clinical content. To address these challenges, we developed ePOCT+, a CDSA for the care of pediatric outpatients in low- and middle-income settings, and the medical algorithm suite (medAL-suite), a software for the creation and execution of CDSAs. Following the principles of digital development, we describe the process and lessons learnt from the development of ePOCT+ and the medAL-suite. In particular, this work outlines the systematic, integrative development process required to design and implement tools that meet the needs of clinicians, improve uptake, and improve quality of care. We considered the feasibility, acceptability, and reliability of clinical signs and symptoms, as well as the diagnostic and prognostic performance of predictors. To ensure clinical validity and appropriateness for the countries of implementation, the algorithm underwent numerous reviews by clinical experts and national health authorities. The digitalization process involved the creation of medAL-creator, a digital platform that allows clinicians without IT programming skills to easily create algorithms, and medAL-reader, the mobile health (mHealth) application used by clinicians during consultations. Extensive feasibility tests were conducted, with feedback from end-users in multiple countries, to improve the clinical algorithm and the medAL-reader software. We hope that the development framework used for ePOCT+ will support the development of other CDSAs, and that the open-source medAL-suite will enable others to implement them easily and independently. Further clinical validation studies are underway in Tanzania, Rwanda, Kenya, Senegal, and India.

Author summary

In accordance with the principles of digital development, we describe the process and lessons learnt from the development of ePOCT+, a clinical decision support algorithm (CDSA), and the medAL-suite, a software to program and implement CDSAs. The clinical algorithm was adapted from previous CDSAs to address challenges regarding the limited scope of illnesses and patient populations addressed, the ease of use, and the limited performance of specific algorithms. Clinical algorithms were adapted and improved based on considerations of which symptoms and signs are appropriate for primary care health workers to assess, and how well these clinical elements predict a particular disease or severe outcome. We hope that sharing our multi-stakeholder approach to the development of ePOCT+ can help others develop other CDSAs. The medAL-creator software was developed to allow clinicians without IT programming experience to program the clinical algorithm using a drag-and-drop interface, so that a wider range of health authorities and implementers can develop and adapt their own CDSA. The medAL-reader application deploys the algorithm from medAL-creator to end-users, following the usual healthcare processes within a consultation.

Introduction

Electronic clinical decision support algorithms (CDSAs) have been implemented in low- and middle-income countries (LMICs) to address excessive mortality due to poor quality of health care [1], and antimicrobial resistance due to inappropriate antibiotic prescription [2–5]. Such tools provide guidance through every step of the outpatient consultation and ultimately suggest a diagnosis and management plan based on the entered symptoms, signs, and test results [6]. CDSAs have been shown to help clinicians better adhere to guidelines [7–9], resulting in improved quality of care and, for some, more rational antibiotic prescription [10,11]. This has led the World Health Organization (WHO) and its Member States to prioritize the scale-up of digital health technologies [12,13].

Current CDSAs are not standardized, and concerns have been raised about their limited demographic and clinical scope [14,15], their usability [15,16], and their static and generic logic based on outdated guidelines that are unable to adapt to new evidence, evolving epidemiology, or changing resources. These challenges may contribute to variable uptake of CDSAs [16–18], and suboptimal performance when implemented [9,19].

In order to address these challenges, and build on the experience of previous CDSAs by our group [10,11], and others [6,9], we developed the CDSA ePOCT+, and a supporting digital software to create and execute CDSAs, the medAL-suite. ePOCT+ is currently being implemented in over 200 health facilities within the context of implementation studies in Tanzania, Rwanda, Senegal, Kenya and India. Following the principles of digital development and guidance on CDSAs [20–22], we aim to transparently share the rationale, strategy, and lessons learnt from this development process (Fig 1).

Fig 1. Overall development process of ePOCT+ requiring multiple feedback loops.

The development process of ePOCT+ was an iterative process. We first defined the scope, then developed the algorithm (decision tree logic), followed by expert review with relevant stakeholders, the digitalization, and finally piloting and testing. Each stage resulted in multiple feedback loops to refine the end product.

https://doi.org/10.1371/journal.pdig.0000170.g001

Methods

Scope

Compared to our previous-generation CDSAs [6,10,11], the target level of care (primary health care facilities) and target users (mostly nurses and non-physician clinicians) remain the same. However, the target patient population was expanded beyond the previous range of 2 months to 5 years, to also cover young infants below 2 months and, in some countries, children from 5 up to 15 years.

The expanded age range adds young infants (<2 months), who are at the highest risk of mortality [23], and children aged 5–15 years, who are often neglected in international and national policies, resulting in a slower decrease in mortality in LMICs compared to children under 5 years [24]. This expanded age range may also help address the challenge of uptake by avoiding the need for clinicians to change tools when managing children of different age groups.

The scope of illnesses covered was also expanded in response to the frustration of clinicians using CDSAs that did not allow them to reach specific illnesses [14,16]. Expanding the scope allowed the integration of common illnesses covered by other national clinical guidelines to which clinicians are expected to adhere, and provided more opportunity for antibiotic stewardship through management guidance for specific illnesses.

Three major criteria were considered when expanding the scope of illnesses: 1) Incidence of presenting symptoms and diagnoses; 2) Morbidity, mortality, and outbreak potential; and 3) Capacity to diagnose and manage specific conditions at the primary care level.

Additional conditions were identified through: 1) national guidelines; 2) fever aetiology studies; 3) national health surveys; 4) chief complaints from primary care outpatient studies; 5) clinical expert review teams from the implementation countries; 6) interviews with end-user clinicians; and 7) observation of consultations at primary health care facilities (Table A in S1 Appendix). Examples of notable additions for the Tanzanian algorithm include trauma, urinary tract infection, and abdominal pain, which account for 4.3–21.6% [25], 5.9–19.7% [25–27], and 4.6–23% [11,26] of outpatient consultations, respectively.

Clinical algorithm

The target users (mostly nurses and non-physician clinicians) and setting (primary health care facilities) were important considerations when identifying the guidelines and evidence used to develop the algorithm. Previously validated algorithms [11] and the WHO Integrated Management of Childhood Illness (IMCI) chart booklet formed the backbone of the algorithm [28]. To support the expanded clinical scope, we turned to national guidelines to ensure adaptation to the local epidemiology, resources, and setting. When national guidelines lacked sufficient detail to derive decision logic, a brief literature review was conducted to identify peer-reviewed studies and other international guidelines.

Transforming narrative guidelines into Boolean decision tree logic required considerable interpretation. The guiding principles for this process were derived from the properties to consider in the screening and diagnosis of a disease by Sackett and colleagues [29], the target product profile (TPP) for CDSAs as defined by experts in the field [21], and guidance on appropriate diagnostic and prognostic model development [30]. These include consideration of: a) the feasibility, acceptability, and reliability of clinical elements assessed at the primary care level, b) the diagnostic and prognostic value of individual and combined predictors, c) the sensitivity and specificity in relation to the severity and pre-test probability of the condition in the target population, and d) the overall clinical impression of the patient by the clinician.
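To illustrate what translating a narrative recommendation into explicit Boolean decision tree logic can look like, the following minimal sketch encodes a simplified, hypothetical cough branch in Python. The clinical elements, age-dependent respiratory rate thresholds, and CRP cut-off shown here are illustrative assumptions for the example only and do not reproduce the actual ePOCT+ content.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    """Inputs collected during the consultation (illustrative subset)."""
    age_months: int
    cough: bool
    danger_sign: bool          # any IMCI general danger sign
    severe_respiratory_distress: bool
    respiratory_rate: int      # breaths per minute
    crp_mg_l: float            # point-of-care CRP result

def fast_breathing(a: Assessment) -> bool:
    # Age-dependent threshold, analogous to IMCI fast-breathing cut-offs.
    threshold = 50 if a.age_months < 12 else 40
    return a.respiratory_rate >= threshold

def cough_branch(a: Assessment) -> str:
    """Hypothetical Boolean decision tree for a child presenting with cough."""
    if not a.cough:
        return "branch not applicable"
    # Severity first: identify children needing referral before anything else.
    if a.danger_sign or a.severe_respiratory_distress:
        return "severe respiratory illness: pre-referral treatment and refer"
    # Then distinguish conditions needing specific treatment from self-limiting ones.
    if fast_breathing(a) and a.crp_mg_l >= 40:       # illustrative CRP cut-off
        return "pneumonia: oral antibiotics and follow-up"
    return "likely viral upper respiratory infection: supportive care"

# Example: 18-month-old with cough, RR 52/min, CRP 12 mg/L, no danger signs.
print(cough_branch(Assessment(18, True, False, False, 52, 12.0)))
```

In ePOCT+ itself, such branches are authored graphically in medAL-creator rather than written in code; the sketch only illustrates the underlying Boolean structure.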

a) Feasibility, acceptability, and reliability of predictors

If clinical algorithms are to be adequately utilized, the signs and symptoms used to reach a diagnosis must be feasible, acceptable, and reliable when assessed by end-users. These properties were evaluated based on several sources: primarily an international Delphi study on predictors of sepsis in children [31], a systematic review on triage tools in low-resource settings [32], signs and symptoms included in established guidelines for primary health care workers such as IMCI [28], interviews with clinicians, observation of routine consultations, a Delphi survey among 30 Tanzanian health care workers (S2 Appendix), and subsequent feasibility tests observing clinicians using the CDSA on real and fictional cases. Notable findings from this process led us not to add a pain score, capillary refill time, assessment of cool peripheries, or assessment of a weak and fast pulse, as these were deemed neither feasible nor reliable to assess at the primary care level. Importantly, these symptoms and signs are also not included within IMCI, likely for similar reasons [28].

b) Diagnostic and prognostic value of predictors

In the absence of validated diagnostic models for each diagnosis, we assessed individual diagnostic and prognostic factors to help guide the development of ePOCT+. Diagnostic studies derived from the population and setting of interest were preferred [33,34], as models developed in other settings often perform worse [35]. However, diagnostic predictors, notably those predicting ‘serious bacterial infection’, often have low sensitivity, lack reference tests to confirm bacterial origin, and ignore serious infections caused by viral diseases [36,37]. Prognostic studies are often better suited to developing clinical algorithms, as they identify which children are at risk of developing severe disease regardless of aetiology, with the aim of improving patient outcomes and reducing resource misallocation [38–40]. A systematic review of predictors of severe disease in febrile children presenting from the community helped identify useful clinical features to integrate within ePOCT+ [35]; however, few of the included studies were conducted at the primary care level. To address this gap, we performed an exploratory analysis of clinical elements used in two CDSAs evaluated in Tanzania to predict clinical failure (S3 Appendix). This analysis found IMCI danger signs, severe general appearance, mid-upper arm circumference <12.5 cm, oxygen saturation <90%, respiratory distress, and signs of anaemia and dehydration to be good predictors of clinical failure. Specific subgroup analyses of our previous-generation CDSA provided further support for maintaining or modifying specific algorithm branches, particularly the inclusion of C-reactive protein (CRP) point-of-care tests, which helped safely reduce antibiotic prescription and improve confidence in management [41,42].
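As a concrete illustration of how the diagnostic or prognostic value of an individual predictor can be summarized, the short sketch below computes sensitivity, specificity, and likelihood ratios from a 2×2 table. The counts are invented for illustration and are not taken from the analyses described above.

```python
def predictor_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity, specificity, and likelihood ratios from a 2x2 table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    lr_positive = sensitivity / (1 - specificity)   # how much a positive finding raises the odds
    lr_negative = (1 - sensitivity) / specificity   # how much a negative finding lowers the odds
    return {
        "sensitivity": round(sensitivity, 2),
        "specificity": round(specificity, 2),
        "LR+": round(lr_positive, 2),
        "LR-": round(lr_negative, 2),
    }

# Hypothetical counts for a candidate predictor of clinical failure
# (e.g. oxygen saturation <90%): 30 true positives, 40 false positives,
# 20 false negatives, 910 true negatives.
print(predictor_metrics(tp=30, fp=40, fn=20, tn=910))
```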

c) Sensitivity and specificity of algorithm branches in relation to severity and pre-test probability of condition

When constructing the algorithm, it was important to first identify children presenting with a severe condition, and only then use more specific branches to distinguish conditions requiring specific treatment from self-limiting illnesses requiring only supportive care (Fig 2). Predictors of severe conditions need to be sufficiently sensitive to guide interventions to reduce morbidity and mortality. However, if this high sensitivity comes at the cost of reduced specificity, it can result in over-referral, misallocation of limited health care resources, and excess antibiotic prescription [38]. While this concept was considered during the development of the algorithm, most predictors and models studied lacked sufficient sensitivity and specificity to appropriately meet these requirements at the primary care level, thus emphasizing the need for better predictors and models [35,38].

Fig 2. Considering algorithm performance in regards to pre-test probability (disease prevalence) of the condition.

Health care workers are confronted with two major questions at primary care health facilities: 1) Does the child need to be referred? Here, the sensitivity and specificity of the algorithm must be evaluated in relation to the severity of disease. 2) Does the child require specific treatment (most often an antibiotic)? Here, the prevalence of bacterial illness needs to be considered when evaluating the sensitivity and specificity of the algorithm.

https://doi.org/10.1371/journal.pdig.0000170.g002

Once a severe condition has been excluded, restricting antimicrobial prescription can be integrated more safely given the lower risk of clinical failure. Understanding the pre-test probability (disease prevalence) of a condition guides the level of specificity needed for the corresponding predictors included in the algorithm. In outpatient settings, few non-severe children above 2 months of age have a condition requiring antibiotics [11,27]. As such, following Bayes’ theorem [43], an algorithm for a condition of low prevalence requires a higher likelihood ratio to achieve the same post-test probability as an algorithm for a condition of higher prevalence. Within ePOCT+, the CRP test is integrated in several branches of the algorithm to increase specificity (and thus the likelihood ratio) when the pre-test probability of requiring antibiotics is low. However, the pre-test probability of requiring antibiotics may be higher in a child with comorbidities, and a lower CRP cut-off can therefore be used to increase sensitivity while reaching the same post-test probability.
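A short worked example of this reasoning, using Bayes’ theorem in odds form (post-test odds = pre-test odds × likelihood ratio); the prevalences and likelihood ratios below are illustrative assumptions, not ePOCT+ parameters.

```python
def post_test_probability(pre_test_prob: float, likelihood_ratio: float) -> float:
    """Bayes' theorem in odds form: post-test odds = pre-test odds x LR."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Illustrative only: a low-prevalence condition (5% of non-severe children
# require antibiotics) needs a branch with a high likelihood ratio ...
print(post_test_probability(0.05, 10))   # ~0.34
# ... whereas for a child with comorbidities (pre-test probability 20%),
# a less specific branch (lower LR, e.g. a lower CRP cut-off) reaches a
# similar post-test probability.
print(post_test_probability(0.20, 2.2))  # ~0.35
```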

d) Integrating overall clinical impression

The overall clinical impression of a healthcare worker plays an important part in the diagnostic process [44], and may sometimes identify serious conditions better than isolated symptoms and signs [45,46]. As blindly following CDSA recommendations runs the risk of neglecting nuanced clinical observations or patient-initiated elements, we incorporated clinical impression in the algorithm to better preserve these skills [47]. More generally, this also shows respect and consideration for the clinician’s judgment and makes the tool more participatory, including the clinician in the interpretation of, and responsibility for, the decision. As such, attempts were made to combine multiple clinical elements into one question utilizing clinical impression. This approach was used to help identify children who need referral or antibiotics, such as “severe difficult breathing needing referral”, a criterion similar to that proposed by the British Thoracic Society [48], and “well/unwell appearing child”, often used in children with fever without apparent source [36,49]. Highlighting in the application that such a response will result in a recommendation of referral aims to help clinicians understand the impact of their selection, and thus improve both sensitivity and specificity. Such composite elements reduce the number of questions prompted by the CDSA and speed up the consultation process, an important consideration for uptake. Nevertheless, the diagnostic and prognostic value of the overall clinical impression of primary care clinicians in LMIC settings is not well understood, and further research is needed to understand how helpful these types of elements are when integrated within ePOCT+.

Adapting and validating the medical content

ePOCT+ was first developed for Tanzania, where the prior generation of the algorithm was validated in a randomized controlled trial [11]. Following the expansion and adaptation of the content described above, the algorithm was internally reviewed by 13 clinicians from 6 medical institutions with a good understanding of CDSAs: 5 working in Tanzania, and the other 8 with experience working in LMICs. The ePOCT+ algorithms for Rwanda, Senegal, Kenya, and India were then each drafted, with rounds of internal review, by small development teams composed of clinical algorithm development specialists and national child health experts, based on country-specific objectives, guidelines, and epidemiology, using the first algorithm as a scaffold.

In each country, the ePOCT+ algorithm was reviewed by a technical panel from the Ministry of Health or an independent clinical expert group (usually including Ministry of Health representatives). The panels were asked to assess the algorithm in terms of clinical validity, feasibility in primary care, scope of illnesses, and consistency with national policy and guidelines. The validation process varied slightly in each country according to national decision-making mechanisms, but all included written feedback and individual and group meetings.

Certain algorithm branches were highlighted for group discussion, especially those with novel content, those for which significant interpretation of national guidelines was required, and any branches with queries or comments from panel members. For branches with more novel content, more formal decision processes were used. In Tanzania and Rwanda, a modified nominal group method was used, in which each participant in turn provided their opinion on the presented branch of the algorithm, followed by a group discussion and an absolute majority vote on the final version.

Following the internal and external reviews, further modifications were made during the digitalization process and feasibility tests, including feedback and review from end-users. For each proposed major change, the modification was communicated to the group to allow subsequent feedback and final approval by health authorities.

Digitalization of ePOCT+ and development of the medAL-suite

We performed a landscaping review of existing CDSA software with respect to user interface, open-source availability, data management, ease of programming and interpretation of clinical algorithms, and operability in target health facilities. Since none of the available software packages met our requirements, we developed the medAL-suite following the requirements of the target product profile for CDSAs [21]. medAL-creator allows clinical experts to design the clinical content and logic of the algorithm, while medAL-reader is an Android-based application that executes the algorithm for end-user clinicians (Fig 3). Both software components were developed collaboratively by clinicians, IT programmers, end-users (via feedback from field tests), and health authorities from the implementation countries.

Fig 3. medAL-creator and medAL-reader.

A) medAL-creator and its “drag and drop” user interface to design the clinical algorithm. For each clinical element, a description and/or photo can be included to assist the end-user in medAL-reader. B) medAL-reader, the Android-based application used to collect the medical history, exposures, symptoms, signs, and tests, and then propose the appropriate diagnosis and management.

https://doi.org/10.1371/journal.pdig.0000170.g003

The WHO has recently proposed the SMART guidelines to provide guidance and structure for translating narrative guidelines (Layer 1) into semi-structured, “human readable” decision trees and digital adaptation kits (Layer 2), then into computer/machine-readable structured algorithms (Layer 3), then into the executable form of the software (Layer 4), and finally into dynamic algorithms that are trained and optimised on local data (Layer 5) [50]. Each “translation” between layers is prone to interpretation and error, especially when each layer is developed by different actors and continuously adapted. To reduce errors of interpretation, a major feature of medAL-creator is that the “computer/machine readable” structured algorithms remain “human readable”, thus merging Layers 2 and 3. medAL-creator features a “drag and drop” user interface and automatic terminology/code sets, enabling clinicians with no programming knowledge to create and review the algorithm. medAL-reader is then able to automatically convert the algorithm from medAL-creator for use at the point of care.
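As a rough illustration of how a single clinical element and branch can be represented in a form that is simultaneously machine readable and reviewable by a clinician, the sketch below uses plain Python dictionaries. The field names and values are hypothetical and do not reflect the actual medAL-creator data model or export format.

```python
# Hypothetical representation of one question node and the branch that uses it.
# Field names are invented for illustration; medAL-creator's real schema may differ.
fast_breathing_node = {
    "id": "sign_fast_breathing",
    "label": "Fast breathing",
    "type": "boolean",
    "help_text": "Count breaths for one full minute while the child is calm.",
    "media": "fast_breathing_photo.png",
}

pneumonia_branch = {
    "diagnosis": "Pneumonia (non-severe)",
    "requires_all": ["symptom_cough", "sign_fast_breathing"],
    "excludes_any": ["danger_sign_any", "sign_severe_respiratory_distress"],
    "managements": ["oral_amoxicillin_weight_band", "follow_up_day_3"],
}

def branch_applies(branch: dict, findings: dict) -> bool:
    """Evaluate a branch against the clinician's entered findings (True/False per element)."""
    return (all(findings.get(e, False) for e in branch["requires_all"])
            and not any(findings.get(e, False) for e in branch["excludes_any"]))

findings = {"symptom_cough": True, "sign_fast_breathing": True, "danger_sign_any": False}
print(branch_applies(pneumonia_branch, findings))  # True
```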

medAL-reader was designed based on our previous experience with CDSA interfaces [8,11] and on expert guidance regarding successful strategies, so that the application would be intuitive to use with limited training, align with normal workflows at primary health care facilities, and encourage user autonomy [21,51,52].

Validation tests and user-experience evaluations

Validation tests were performed for each diagnosis to ensure that the data entered in medAL-creator were processed correctly into the expected output in medAL-reader. This included automated unit and integration testing, automated non-regression testing by medAL-creator, and manual verification of medication posology for all drugs according to the weight and age of the patient. All issues were reviewed by a joint clinical and IT team to correct the problems. While such tests are encouraged by the CDSA TPP [21], they are not legally required, since CDSAs are not considered “software as a medical device” by the US Food and Drug Administration (FDA) [53] or the European Medical Device Coordination Group [54].
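To give a sense of what automated verification of weight-based posology can look like, the sketch below checks a hypothetical amoxicillin weight-band table against a target mg/kg range. The dose bands and the target range are illustrative assumptions only and are not the validated ePOCT+ content.

```python
# Hypothetical weight bands: (min_kg, max_kg, dose_mg per administration).
# These values are illustrative only and must not be used clinically.
AMOXICILLIN_BANDS = [
    (4.0, 9.9, 200),
    (10.0, 15.9, 375),
    (16.0, 24.9, 500),
]

TARGET_MG_PER_KG = (20, 55)  # assumed acceptable range per dose, for illustration

def check_bands(bands, target):
    """Return a list of (weight_kg, mg_per_kg) pairs falling outside the target range."""
    low, high = target
    failures = []
    for min_kg, max_kg, dose_mg in bands:
        for weight in (min_kg, max_kg):          # check both edges of the band
            mg_per_kg = dose_mg / weight
            if not (low <= mg_per_kg <= high):
                failures.append((weight, round(mg_per_kg, 1)))
    return failures

failures = check_bands(AMOXICILLIN_BANDS, TARGET_MG_PER_KG)
print("All bands within range" if not failures else f"Out of range: {failures}")
```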

The ePOCT+ tool underwent numerous types and rounds of testing. To start, over 500 desk-based review cases focusing on the user interface and analytical validation were performed by the various team members. Analytical validation tests ensured that the clinical content programmed in medAL-creator produced the correct output in the medAL-reader application. End-user testing using fictional cases and supervised consultations concentrated on user experience, acceptability, and clinical applicability. Finally, integrated testing in real-life conditions was performed, with feedback sought regularly. All user experience feedback was reviewed by a team including both clinical and IT specialists, while all clinical content modifications were approved by both the internal and external review panels.

Ethics

Activities related to the development and piloting of ePOCT+ and the medAL-suite were conducted within the DYNAMIC and TIMCI studies, for which approval was obtained in each country of implementation. The study protocols and related documents were approved by the institutional review boards of the Ifakara Health Institute in Tanzania (IHI/IRB/No: 11–2020 and 49–2020), the National Institute for Medical Research in Tanzania (NIMR/HQ/R.8a/Vol. IX/3486 and NIMR/HQ/R.8a/Vol. IX/3583), the National Ethics Committee of Rwanda (752/RNEC/2020), the Comité National d’Ethique pour la Recherche en Santé of Senegal (SEN20/50), the University of Nairobi Ethics and Research Committee in Kenya (UON/CHS/TIMCI/1/1), the King George’s Medical College Institutional Ethics Committee in India (103rd ECM IC/P2), the Indian Council of Medical Research (2020–9753), the cantonal ethics review board of Vaud, Switzerland (CER-VD 2020–02800 & CER-VD 2020–02799), and the WHO Ethics Review Committee (ERC.0003405 & ERC.0003406). Written informed consent was obtained from all parents or guardians of children involved in the piloting of ePOCT+ and medAL-reader. No informed consent was obtained from health care workers involved in the development and refinement of the tools.

The study protocol and related documents for the exploratory analysis of predictors from the 2014 ePOCT study were approved by the institutional review boards of the Ifakara Health Institute and the National Institute for Medical Research in Tanzania (NIMR/HQ/R.8a/Vol. IX/789), by the Ethikkommission beider Basel in Switzerland (EKNZ UBE 15/03), and by the Boston Children’s Hospital ethical review board. Written informed consent was obtained from all parents or guardians.

Results

The ePOCT+ clinical algorithm and supporting evidence for each country of implementation can be found on the websites of the DYNAMIC and TIMCI studies that are implementing ePOCT+. The major features of medAL-creator and medAL-reader are summarized in the supplementary material (S4 Appendix), including the requirements defined by the CDSA target product profile (S5 Appendix).

The feasibility tests of ePOCT+ were conducted on over 200 patients in 20 health facilities, leading to numerous modifications (Table 1). The improved algorithm was then piloted over more than 2000 consultations, following 2 days of training and with on-site support, before officially starting the clinical validation studies in the five countries of implementation.

Table 1. Example of modifications based on user-experience feedback and observations.

https://doi.org/10.1371/journal.pdig.0000170.t001

Discussion

ePOCT+ was derived from existing evidence and clinical validation field studies of previous-generation CDSAs [8,10,11]. Novel content compared to other CDSAs includes decision logic for young infants less than 2 months old, in some countries decision logic for children 5–15 years old, and expanded clinical content for diagnoses not included in IMCI. It is now being further validated in several large clinical studies. Following established development protocols, attempts were made to ensure a transparent development process, multi-stakeholder collaboration, and end-user feedback [21,22,55,56]. Specifically, aligning the development process of ePOCT+ and the specifications of medAL-reader with the requirements of the Target Product Profile for CDSAs helped to better meet the needs of end-users in terms of quality, safety, performance, and operational functionality [21]. medAL-creator allows non-IT specialists to program clinical algorithms using a no-code, drag-and-drop interface, a novel solution that democratizes the development of CDSAs. This is a major advantage compared to other CDSA tools, which generally require advanced IT knowledge to review and program the code of the CDSA. Nonetheless, there are several limitations and challenges with the development process and the end result of ePOCT+ and the medAL-suite, for which ongoing modifications and improvements will be required.

First, while efforts were made to improve the performance of the algorithm, there was often a reliance on clinical guidelines that may not always be founded on the most recent or highest-quality evidence, or applicable to low-resource primary care settings [57,58]. Furthermore, such guidelines require significant interpretation to be transformed into algorithms. Digital Adaptation Kits (DAKs), which guide implementers in interpreting narrative guidelines for transformation into digital platforms, are currently being developed by the WHO and should help address this challenge in the future [50,59]. Supplementary evidence was often needed to complement national and international guidelines. This evidence should ideally be identified through systematic reviews [60]; however, these are not always feasible. Leveraging existing evidence databases, as done by another CDSA, may be a more feasible method to avoid bias in identifying supporting evidence [61]. Among the supporting evidence identified, there was a paucity of evidence for conditions specific to older children above 5 years, prognostic studies in the primary care setting, and diagnostic studies for conditions other than serious bacterial infection and pneumonia. Evaluating the prognostic and diagnostic value of the predictors and models used in ePOCT+ during the ongoing validation studies will help develop more efficient and better performing algorithms optimised for the target population [50,62].

A number of considerations were taken into account when digitalizing and adapting paper guidelines. Among the most important were the feasibility, acceptability, reliability, and diagnostic and prognostic performance of individual clinical elements, while also considering the overall performance of the algorithms in relation to the pre-test probability of the outcome or disease, and the clinician’s overall impression. Conflicts often arise among these factors, leading to difficult decisions. For example, the Delphi survey among Tanzanian health care workers found that capillary refill time may not be feasible in primary health settings, yet it has been found to have good prognostic value [35]. Such difficult decisions were often taken with input from clinical experts from the country of implementation. Additional training on clinical signs deemed not feasible could potentially allow for future modifications. Another difficult decision concerned the option of estimating results when measurements are not possible (e.g., respiratory rate). Health care workers often do not measure respiratory rate when following paper guidelines or using a CDSA [7,19]. If the CDSA does not offer an alternative when respiratory rate cannot be measured, health care workers may be unable to move forward in the tool, or may enter false data. Allowing health care workers to estimate the value is not ideal, but it ensures that the respiratory rate is at least visually assessed and provides an input for the algorithm to reach a diagnosis; these data can then be used to mentor health care workers who do not measure respiratory rate. Allowing clinicians to simply indicate that the respiratory rate could not be measured, without forcing an estimation, could be an option to consider, but would complicate the decision on which diagnosis to reach when this option is selected.
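A minimal sketch of how such a respiratory rate input could be handled, with an explicit "estimated" flag and a "not measurable" option; the field names and thresholds are assumptions for illustration and not the actual medAL-reader implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RespiratoryRateInput:
    """Respiratory rate as entered by the health care worker (hypothetical model)."""
    value_per_min: Optional[int]   # None if no number was entered
    estimated: bool = False        # True if visually estimated rather than counted

def classify_breathing(rr: RespiratoryRateInput, age_months: int) -> str:
    threshold = 50 if age_months < 12 else 40   # IMCI-style fast-breathing cut-offs
    if rr.value_per_min is None:
        # A design alternative: accept "not measurable" but flag it, rather than
        # blocking the consultation or forcing a possibly false value.
        return "unknown - flag for supervision/mentorship"
    label = "fast breathing" if rr.value_per_min >= threshold else "normal breathing"
    return label + (" (estimated)" if rr.estimated else "")

print(classify_breathing(RespiratoryRateInput(52, estimated=True), age_months=18))
# -> "fast breathing (estimated)"
```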

Many modifications to ePOCT+ and medAL-reader compared to previous-generation CDSAs were implemented to help improve uptake, addressing previously reported concerns such as limited scope and ease of use. medAL-reader was specifically designed to follow normal healthcare workflows and to incorporate more input from health care workers. Compared to other CDSAs, medAL-reader includes new functions such as an emergency button and the ability to accept or refuse a diagnosis or treatment. The introduction of other digital tools, such as electronic medical records, within the same health facilities creates challenges for uptake and may result in duplication of processes. As an example, it is estimated that there are over 160 digital health or health-related systems in Tanzania [63]. While efforts are currently being made to harmonize processes so that different digital systems complement each other rather than creating additional work, this has not yet been achieved. It is important to note that while ePOCT+ and medAL-reader may address some challenges to the uptake of CDSAs, there are many extrinsic and intrinsic factors that they do not address, such as the low perceived value of following guidelines and a lack of motivation partly related to poor remuneration [16,64].

The digitalization process allows for increased complexity in the algorithm compared to paper guidelines. However, this complexity may limit understanding by healthcare workers. Understanding how a diagnosis and treatment plan is reached is fundamental to clinician and patient autonomy, important for continued learning, and necessary for fostering trust in any algorithm [65–67]. Efforts were made to present simple decision tree logic for each diagnosis. Nevertheless, the optimal method of presenting algorithm branches to ensure understanding by primary healthcare workers should be further explored.

Conclusion

ePOCT+ aims to improve the clinical care of sick children in LMICs, notably by reducing unnecessary antibiotic prescription. We hope that the strong stakeholder involvement, the expanded scope of the clinical algorithm, and the novel software of the medAL-suite will result in high uptake, trust, and acceptability. Widespread implementation will provide opportunities for dynamic and targeted refinements of the clinical content to improve the performance of the algorithm. We further hope that the easy-to-use platform of the medAL-suite and the framework used to develop ePOCT+ will allow health authorities and local communities to take ownership of ePOCT+ or of their own clinical algorithms for future adaptations and developments. Future success, however, is contingent on harmonization with national health management information systems and other digital systems.

Supporting information

S1 Appendix. Prevalence of specific symptoms and diagnoses not covered in IMCI from Tanzania.

https://doi.org/10.1371/journal.pdig.0000170.s001

(DOCX)

S2 Appendix. Delphi survey on the reliability and feasibility of measurement of symptoms and signs.

https://doi.org/10.1371/journal.pdig.0000170.s002

(DOCX)

S3 Appendix. Prognostic value of predictors used in the ePOCT and ALMANACH electronic clinical decision support algorithms.

https://doi.org/10.1371/journal.pdig.0000170.s003

(DOCX)

S4 Appendix. Features of the medAL-creator and medAL-reader software as defined by a clinical-IT collaboration with end-user feedback.

https://doi.org/10.1371/journal.pdig.0000170.s004

(DOCX)

S5 Appendix. Evaluation of ePOCT+ based on the characteristics set by the target product profile for electronic clinical decision support algorithm as defined by expert consensus.

https://doi.org/10.1371/journal.pdig.0000170.s005

(DOCX)

Acknowledgments

We thank Emmanuel Barchichat, Alain Fresco, and Quentin Girard from Wavemind for the IT programming of the medAL-creator and medAL-reader software; Martin Norris, Lisa Cleveley, Dr Sabine Renggli, Ibrahim Mtabene, Peter Agrea, and Dr Godfrey Kavishe for the medAL-reader tests and suggestions for improvements to both medAL-reader and medAL-creator; and Cecile Trottet for statistical support. We also thank the many health care workers who provided feedback on the tool, and the patients and caretakers involved in pilot and feasibility testing. Dr Arjun Chandna and Janet Urquhart provided helpful comments on the manuscript.

References

  1. Kruk ME, Gage AD, Joseph NT, Danaei G, García-Saisó S, Salomon JA. Mortality due to low-quality health systems in the universal health coverage era: a systematic analysis of amenable deaths in 137 countries. The Lancet. 2018;392(10160):2203–12. pmid:30195398
  2. Murray CJL, Ikuta KS, Sharara F, Swetschinski L, Robles Aguilar G, Gray A, et al. Global burden of bacterial antimicrobial resistance in 2019: a systematic analysis. The Lancet. 2022. pmid:35065702
  3. Fink G, D’Acremont V, Leslie HH, Cohen J. Antibiotic exposure among children younger than 5 years in low-income and middle-income countries: a cross-sectional study of nationally representative facility-based and household-based surveys. The Lancet Infectious Diseases. 2020;20(2):179–87. pmid:31843383
  4. Levine G, Bielicki J, Fink G. Cumulative Antibiotic Exposure in the First Five Years of Life: Estimates for 45 Low- and Middle-income Countries from Demographic and Health Survey Data. Clinical Infectious Diseases. 2022:ciac225. pmid:35325088
  5. van de Maat J, De Santis O, Luwanda L, Tan R, Keitel K. Primary Care Case Management of Febrile Children: Insights From the ePOCT Routine Care Cohort in Dar es Salaam, Tanzania. Frontiers in pediatrics. 2021;9(465). pmid:34123960
  6. Keitel K, D’Acremont V. Electronic clinical decision algorithms for the integrated primary care management of febrile children in low-resource settings: review of existing tools. Clinical microbiology and infection: the official publication of the European Society of Clinical Microbiology and Infectious Diseases. 2018;24(8):845–55. Epub 2018/04/24. pmid:29684634.
  7. Bernasconi A, Crabbé F, Raab M, Rossi R. Can the use of digital algorithms improve quality care? An example from Afghanistan. PLoS One. 2018;13(11):e0207233. pmid:30475833.
  8. Rambaud-Althaus C, Shao A, Samaka J, Swai N, Perri S, Kahama-Maro J, et al. Performance of Health Workers Using an Electronic Algorithm for the Management of Childhood Illness in Tanzania: A Pilot Implementation Study. The American journal of tropical medicine and hygiene. 2017;96(1):249–57. Epub 2017/01/13. pmid:28077751; PubMed Central PMCID: PMC5239703.
  9. Sarrassat S, Lewis JJ, Some AS, Somda S, Cousens S, Blanchet K. An Integrated eDiagnosis Approach (IeDA) versus standard IMCI for assessing and managing childhood illness in Burkina Faso: a stepped-wedge cluster randomised trial. BMC health services research. 2021;21(1):354. pmid:33863326.
  10. Shao AF, Rambaud-Althaus C, Samaka J, Faustine AF, Perri-Moore S, Swai N, et al. New Algorithm for Managing Childhood Illness Using Mobile Technology (ALMANACH): A Controlled Non-Inferiority Study on Clinical Outcome and Antibiotic Use in Tanzania. PLoS One. 2015;10(7):e0132316. Epub 2015/07/15. pmid:26161535; PubMed Central PMCID: PMC4498627.
  11. Keitel K, Kagoro F, Samaka J, Masimba J, Said Z, Temba H, et al. A novel electronic algorithm using host biomarker point-of-care tests for the management of febrile illnesses in Tanzanian children (e-POCT): A randomized, controlled non-inferiority trial. PLoS medicine. 2017;14(10):e1002411. Epub 2017/10/24. pmid:29059253; PubMed Central PMCID: PMC5653205.
  12. World Health Organization. WHO guideline: recommendations on digital interventions for health system strengthening. Geneva: 2019. Licence: CC BY-NC-SA 3.0 IGO.
  13. United Republic of Tanzania: Ministry of Health, Community Development, Gender, Elderly and Children. Digital Health Strategy. Tanzania: 2019.
  14. Bessat C, Zonon NA, D’Acremont V. Large-scale implementation of electronic Integrated Management of Childhood Illness (eIMCI) at the primary care level in Burkina Faso: a qualitative study on health worker perception of its medical content, usability and impact on antibiotic prescription and resistance. BMC public health. 2019;19(1):449. Epub 2019/05/01. pmid:31035968; PubMed Central PMCID: PMC6489291.
  15. Mitchell M, Getchell M, Nkaka M, Msellemu D, Van Esch J, Hedt-Gauthier B. Perceived Improvement in Integrated Management of Childhood Illness Implementation through Use of Mobile Technology: Qualitative Evidence From a Pilot Study in Tanzania. Journal of Health Communication. 2012;17(sup1):118–27. pmid:22548605
  16. Shao AF, Rambaud-Althaus C, Swai N, Kahama-Maro J, Genton B, D’Acremont V, et al. Can smartphones and tablets improve the management of childhood illness in Tanzania? A qualitative study from a primary health care worker’s perspective. BMC health services research. 2015;15:135. pmid:25890078.
  17. Jensen C, McKerrow NH, Wills G. Acceptability and uptake of an electronic decision-making tool to support the implementation of IMCI in primary healthcare facilities in KwaZulu-Natal, South Africa. Paediatr Int Child Health. 2020;40(4):215–26. Epub 2019/11/30. pmid:31779539.
  18. Jensen C, McKerrow NH. The feasibility and ongoing use of electronic decision support to strengthen the implementation of IMCI in KwaZulu-Natal, South Africa. BMC Pediatrics. 2022;22(1):80. pmid:35130847
  19. Bernasconi A, Crabbé F, Adedeji AM, Bello A, Schmitz T, Landi M, et al. Results from one-year use of an electronic Clinical Decision Support System in a post-conflict context: An implementation research. PLoS One. 2019;14(12):e0225634. Epub 2019/12/04. pmid:31790448; PubMed Central PMCID: PMC6886837.
  20. Waugaman A. From principle to practice: implementing the principles for digital development. Proceedings of the Principles for Digital Development Working Group. 2016;4.
  21. Pellé KG, Rambaud-Althaus C, Acremont V, Moran G, Sampath R, Katz Z, et al. Electronic clinical decision support algorithms incorporating point-of-care diagnostic tests in low-resource settings: a target product profile. BMJ Global Health. 2020;5(2):e002067. pmid:32181003
  22. Ansermino JM, Wiens MO, Kissoon N. Evidence and Transparency are Needed to Develop a Frontline Health Worker mHealth Assessment Platform. 2019;101(4):948. pmid:32519659
  23. Li Z, Karlsson O, Kim R, Subramanian SV. Distribution of under-5 deaths in the neonatal, postneonatal, and childhood periods: a multicountry analysis in 64 low- and middle-income countries. Int J Equity Health. 2021;20(1):109. pmid:33902593.
  24. Masquelier B, Hug L, Sharrow D, You D, Hogan D, Hill K, et al. Global, regional, and national mortality trends in older children and young adolescents (5–14 years) from 1990 to 2016: an analysis of empirical data. The Lancet Global Health. 2018;6(10):e1087–e99. pmid:30223984
  25. McHomvu E, Mbunda G, Simon N, Kitila F, Temba Y, Msumba I, et al. Diagnoses made in an Emergency Department in rural sub-Saharan Africa. Swiss Med Wkly. 2019;149:w20018. Epub 2019/02/05. pmid:30715723.
  26. Hercik C, Cosmas L, Mogeni OD, Wamola N, Kohi W, Omballa V, et al. A diagnostic and epidemiologic investigation of acute febrile illness (AFI) in Kilombero, Tanzania. PLoS One. 2017;12(12):e0189712. pmid:29287070.
  27. D’Acremont V, Kilowoko M, Kyungu E, Philipina S, Sangu W, Kahama-Maro J, et al. Beyond malaria—causes of fever in outpatient Tanzanian children. The New England journal of medicine. 2014;370(9):809–17. Epub 2014/02/28. pmid:24571753.
  28. World Health Organization. IMCI chart booklet. Geneva: 2014.
  29. Sackett DL, Holland WW. Controversy in the detection of disease. Lancet. 1975;2(7930):357–9. Epub 1975/08/23. pmid:51154
  30. Moons KGM, Altman DG, Reitsma JB, Ioannidis JPA, Macaskill P, Steyerberg EW, et al. Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD): Explanation and Elaboration. Annals of internal medicine. 2015;162(1):W1–W73. pmid:25560730
  31. Fung JST, Akech S, Kissoon N, Wiens MO, English M, Ansermino JM. Determining predictors of sepsis at triage among children under 5 years of age in resource-limited settings: A modified Delphi process. PLoS One. 2019;14(1):e0211274. Epub 2019/01/29. pmid:30689660; PubMed Central PMCID: PMC6349330.
  32. Hansoti B, Jenson A, Keefe D, De Ramirez SS, Anest T, Twomey M, et al. Reliability and validity of pediatric triage tools evaluated in low resource settings: a systematic review. BMC Pediatrics. 2017;17(1):37. pmid:28122537
  33. Erdman LK, D’Acremont V, Hayford K, Rajwans N, Kilowoko M, Kyungu E, et al. Biomarkers of Host Response Predict Primary End-Point Radiological Pneumonia in Tanzanian Children with Clinical Pneumonia: A Prospective Cohort Study. PLoS One. 2015;10(9):e0137592. Epub 2015/09/15. pmid:26366571; PubMed Central PMCID: PMC4569067.
  34. De Santis O, Kilowoko M, Kyungu E, Sangu W, Cherpillod P, Kaiser L, et al. Predictive value of clinical and laboratory features for the main febrile diseases in children living in Tanzania: A prospective observational study. PLoS One. 2017;12(5):e0173314. Epub 2017/05/04. pmid:28464021; PubMed Central PMCID: PMC5413055.
  35. Chandna A, Tan R, Carter M, Van Den Bruel A, Verbakel J, Koshiaris C, et al. Predictors of disease severity in children presenting from the community with febrile illnesses: a systematic review of prognostic studies. BMJ Glob Health. 2021;6(1). Epub 2021/01/22. pmid:33472837.
  36. Keitel K, Kilowoko M, Kyungu E, Genton B, D’Acremont V. Performance of prediction rules and guidelines in detecting serious bacterial infections among Tanzanian febrile children. BMC infectious diseases. 2019;19(1):769. pmid:31481123.
  37. Oostenbrink R, Thompson M, Steyerberg EW. Barriers to translating diagnostic research in febrile children to clinical practice: a systematic review. Archives of disease in childhood. 2012;97(7):667–72. Epub 2012/01/06. pmid:22219168.
  38. McDonald CR, Weckman A, Richard-Greenblatt M, Leligdowicz A, Kain KC. Integrated fever management: disease severity markers to triage children with malaria and non-malarial febrile illness. Malaria Journal. 2018;17(1):353. pmid:30305137
  39. Royston P, Moons KGM, Altman DG, Vergouwe Y. Prognosis and prognostic research: Developing a prognostic model. BMJ. 2009;338:b604. pmid:19336487
  40. Chandna A, Osborn J, Bassat Q, Bell D, Burza S, D’Acremont V, et al. Anticipating the future: prognostic tools as a complementary strategy to improve care for patients with febrile illnesses in resource-limited settings. BMJ Glob Health. 2021;6(7). Epub 2021/08/01. pmid:34330761; PubMed Central PMCID: PMC8327814.
  41. Keitel K, Samaka J, Masimba J, Temba H, Said Z, Kagoro F, et al. Safety and Efficacy of C-reactive Protein–guided Antibiotic Use to Treat Acute Respiratory Infections in Tanzanian Children: A Planned Subgroup Analysis of a Randomized Controlled Noninferiority Trial Evaluating a Novel Electronic Clinical Decision Algorithm (ePOCT). Clinical Infectious Diseases. 2019;69(11):1926–34. pmid:30715250
  42. Tan R, Kagoro F, Levine GA, Masimba J, Samaka J, Sangu W, et al. Clinical Outcome of Febrile Tanzanian Children with Severe Malnutrition Using Anthropometry in Comparison to Clinical Signs. American Journal of Tropical Medicine and Hygiene. 2020;102(2):427–35. WOS:000512881500035. pmid:31802732
  43. Bayes T. LII. An essay towards solving a problem in the doctrine of chances. By the late Rev. Mr. Bayes, FRS, communicated by Mr. Price, in a letter to John Canton, AMFRS. Philosophical transactions of the Royal Society of London. 1763;(53):370–418.
  44. Meredith V, Sandra M, Eamon C, Geoff N, Jonathan S, Matthew S, et al. Experienced physician descriptions of intuition in clinical reasoning: a typology. Diagnosis. 2019;6(3):259–68. pmid:30877781
  45. Van den Bruel A, Thompson M, Buntinx F, Mant D. Clinicians’ gut feeling about serious infections in children: observational study. BMJ: British Medical Journal. 2012;345:e6144. pmid:23015034
  46. Dale AP, Marchello C, Ebell MH. Clinical gestalt to diagnose pneumonia, sinusitis, and pharyngitis: a meta-analysis. British Journal of General Practice. 2019;69(684):e444. pmid:31208974
  47. Greenhalgh T, Howick J, Maskrey N. Evidence based medicine: a movement in crisis? BMJ: British Medical Journal. 2014;348:g3725. pmid:24927763
  48. Harris M, Clark J, Coote N, Fletcher P, Harnden A, McKean M, et al. British Thoracic Society guidelines for the management of community acquired pneumonia in children: update 2011. Thorax. 2011;66 Suppl 2:ii1–23. Epub 2011/10/19. pmid:21903691.
  49. Bleeker SE, Derksen-Lubsen G, Grobbee DE, Donders AR, Moons KG, Moll HA. Validating and updating a prediction rule for serious bacterial infection in patients with fever without source. Acta Paediatr. 2007;96(1):100–4. Epub 2006/12/26. pmid:17187613.
  50. Mehl G, Tunçalp Ö, Ratanaprayul N, Tamrat T, Barreix M, Lowrance D, et al. WHO SMART guidelines: optimising country-level use of guideline recommendations in the digital age. The Lancet Digital Health. 2021. pmid:33610488
  51. Sutton RT, Pincock D, Baumgart DC, Sadowski DC, Fedorak RN, Kroeker KI. An overview of clinical decision support systems: benefits, risks, and strategies for success. NPJ Digit Med. 2020;3:17. Epub 2020/02/13. pmid:32047862; PubMed Central PMCID: PMC7005290.
  52. Källander K, Tibenderana JK, Akpogheneta OJ, Strachan DL, Hill Z, ten Asbroek AHA, et al. Mobile health (mHealth) approaches and lessons for increased performance and retention of community health workers in low- and middle-income countries: a review. Journal of medical Internet research. 2013;15(1):e17. pmid:23353680.
  53. US Food and Drug Administration. Software as a Medical Device (SAMD). Clinical Evaluation-Guidance for Industry and Food and Drug Administration Staff. 2017.
  54. European Commission. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on Medical Devices, Amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and Repealing Council Directives 90/385/EEC and 93/42/EEC. Official Journal of the European Union. 2017;L 117/1.
  55. Aranda-Jan CB, Mohutsiwa-Dibe N, Loukanova S. Systematic review on what works, what does not work and why of implementation of mobile health (mHealth) projects in Africa. BMC public health. 2014;14(1):188. pmid:24555733
  56. Lampariello R, Ancellin-Panzani S. Mastering stakeholders’ engagement to reach national scale, sustainability and wide adoption of digital health initiatives: lessons learnt from Burkina Faso. Fam Med Community Health. 2021;9(3). Epub 2021/06/20. pmid:34144970; PubMed Central PMCID: PMC8215243.
  57. Florez ID, Brouwers MC, Kerkvliet K, Spithoff K, Alonso-Coello P, Burgers J, et al. Assessment of the quality of recommendations from 161 clinical practice guidelines using the Appraisal of Guidelines for Research and Evaluation–Recommendations Excellence (AGREE-REX) instrument shows there is room for improvement. Implementation Science. 2020;15(1):79. pmid:32948216
  58. Maaløe N, Ørtved AMR, Sørensen JB, Sequeira Dmello B, van den Akker T, Kujabi ML, et al. The injustice of unfit clinical practice guidelines in low-resource realities. The Lancet Global Health. 2021. pmid:33765437
  59. Tamrat T, Ratanaprayul N, Barreix M, Tunçalp Ö, Lowrance D, Thompson J, et al. Transitioning to Digital Systems: The Role of World Health Organization’s Digital Adaptation Kits in Operationalizing Recommendations and Interoperability Standards. Global Health: Science and Practice. 2022. pmid:35294382
  60. Qaseem A, Forland F, Macbeth F, Ollenschläger G, Phillips S, van der Wees P. Guidelines International Network: toward international standards for clinical practice guidelines. Annals of internal medicine. 2012;156(7):525–31. Epub 2012/04/05. pmid:22473437.
  61. Cornick R, Picken S, Wattrus C, Awotiwon A, Carkeek E, Hannington J, et al. The Practical Approach to Care Kit (PACK) guide: developing a clinical decision support tool to simplify, standardise and strengthen primary healthcare delivery. BMJ Global Health. 2018;3(Suppl 5):e000962. pmid:30364419
  62. Loftus TJ, Tighe PJ, Ozrazgat-Baslanti T, Davis JP, Ruppert MM, Ren Y, et al. Ideal algorithms in healthcare: Explainable, dynamic, precise, autonomous, fair, and reproducible. PLOS Digital Health. 2022;1(1):e0000006.
  63. Watts G. The Tanzanian digital health agenda. The Lancet Digital Health. 2020;2(2):e62–e3.
  64. Lange S, Mwisongo A, Mæstad O. Why don’t clinicians adhere more consistently to guidelines for the Integrated Management of Childhood Illness (IMCI)? Soc Sci Med. 2014;104:56–63. Epub 2014/03/04. pmid:24581062.
  65. World Health Organization. Ethics and governance of artificial intelligence for health: WHO guidance. 2021.
  66. Kundu S. AI in medicine must be explainable. Nature Medicine. 2021;27(8):1328. pmid:34326551
  67. Amann J, Vetter D, Blomberg SN, Christensen HC, Coffee M, Gerke S, et al. To explain or not to explain?—Artificial intelligence explainability in clinical decision support systems. PLOS Digital Health. 2022;1(2):e0000016.