
Improving in-patient neonatal data quality as a pre-requisite for monitoring and improving quality of care at scale: A multisite retrospective cohort study in Kenya

  • Timothy Tuti ,

    Roles Conceptualization, Formal analysis, Investigation, Methodology, Software, Writing – original draft, Writing – review & editing

    TTuti@kemri-wellcome.org

    Affiliation KEMRI-Wellcome Trust Research Programme, Nairobi, Kenya

  • Jalemba Aluvaala,

    Roles Formal analysis, Investigation, Methodology, Writing – original draft, Writing – review & editing

    Affiliations KEMRI-Wellcome Trust Research Programme, Nairobi, Kenya, Department of Paediatrics and Child Health, University of Nairobi, Nairobi, Kenya

  • Daisy Chelangat,

    Roles Formal analysis, Investigation, Writing – review & editing

    Affiliation KEMRI-Wellcome Trust Research Programme, Nairobi, Kenya

  • George Mbevi,

    Roles Data curation, Software, Writing – review & editing

    Affiliation KEMRI-Wellcome Trust Research Programme, Nairobi, Kenya

  • John Wainaina,

    Roles Data curation, Software, Writing – review & editing

    Affiliation KEMRI-Wellcome Trust Research Programme, Nairobi, Kenya

  • Livingstone Mumelo,

    Roles Data curation, Software, Writing – review & editing

    Affiliation KEMRI-Wellcome Trust Research Programme, Nairobi, Kenya

  • Kefa Wairoto,

    Roles Data curation, Software, Writing – review & editing

    Affiliation KEMRI-Wellcome Trust Research Programme, Nairobi, Kenya

  • Dolphine Mochache,

    Roles Data curation, Writing – review & editing

    Affiliation KEMRI-Wellcome Trust Research Programme, Nairobi, Kenya

  • Grace Irimu,

    Roles Project administration, Writing – review & editing

    Affiliations KEMRI-Wellcome Trust Research Programme, Nairobi, Kenya, Department of Paediatrics and Child Health, University of Nairobi, Nairobi, Kenya

  • Michuki Maina,

    Roles Formal analysis, Writing – review & editing

    Affiliation KEMRI-Wellcome Trust Research Programme, Nairobi, Kenya

  • The Clinical Information Network Group ,

    Membership of the Clinical Information Network Group is provided in the S1 Acknowledgments.

  • Mike English

    Roles Conceptualization, Formal analysis, Funding acquisition, Investigation, Methodology, Supervision, Writing – review & editing

    Affiliations KEMRI-Wellcome Trust Research Programme, Nairobi, Kenya, Nuffield Department of Medicine, University of Oxford, Oxford, United Kingdom

Abstract

The objectives of this study were to (1) explore the quality of clinical data generated from hospitals providing in-patient neonatal care while participating in a clinical information network (CIN), and whether data quality improved over time, and, if data were adequate, (2) characterise the accuracy of prescribing for basic treatments provided to neonatal in-patients over time. This was a retrospective cohort study involving neonates aged ≤28 days admitted between January 2018 and December 2021 to 20 government hospitals in Kenya with an interquartile range of annual neonatal inpatient admissions between 550 and 1640. These hospitals participated in routine audit and feedback processes on quality of documentation and care over the study period. The study’s outcomes were the number of patients, as a proportion of all eligible patients over time, with (1) complete domain-specific documentation scores and (2) accurate domain-specific treatment prescription scores at admission, reported as incidence rate ratios. In total, 80,060 neonatal admissions were eligible for inclusion. Upon joining the CIN, documentation scores in the monitoring, other physical examination and bedside testing, discharge information, and maternal history domains demonstrated statistically significant month-to-month relative improvements in the number of patients with complete documentation of 7.6%, 2.9%, 2.4%, and 2.0% respectively. There was also statistically significant month-to-month improvement in prescribing accuracy after joining the CIN of 2.8% and 1.4% for feeds and fluids respectively, but not for antibiotic prescriptions. Findings suggest that much of the variation observed is due to hospital-level factors. It is possible to introduce tools that capture important clinical data at least 80% of the time in routine African hospital settings, but analyses of such data will need to account for missingness using appropriate statistical techniques.
These data allow exploration of trends in performance and could support better impact evaluation, exploration of links between health system inputs and outcomes, and scrutiny of variation in quality and outcomes of hospital care.

Introduction

Neonatal (newborn children aged ≤28 days) deaths account for 47% of all under-five deaths, with 37% of these deaths occurring in Sub-Saharan African (SSA) countries [1]. These deaths are largely attributable to preterm birth, sepsis and intrapartum complications [2], and hospital admissions with these conditions are still associated with high morbidity in Low- and Middle-Income Countries (LMICs) like Kenya [3]. Essential interventions such as newborn resuscitation, Kangaroo Mother Care (KMC), early recognition and treatment of neonatal infections, and Continuous Positive Airway Pressure (CPAP) therapy have been identified as major interventions to reduce neonatal deaths in hospitals [4, 5]. However, available evidence suggests that adherence to recommended care-giving practices in LMICs is poor [6–8], while poorly functioning information systems mean that only limited data of questionable quality on the delivery of such interventions in routine hospital settings in LMICs are available [9, 10]. This poor data quality precludes effective monitoring of the routine quality of care provided (where quality of care is defined as adherence to the recommended clinical guidelines in provision of care) and patient outcomes at scale, and limits the ability to track effective delivery of essential neonatal interventions.

Availability of high-quality, timely, accessible, and easy-to-use data from routine clinical settings could improve monitoring of intervention adoption and quality of hospital care at scale, and ultimately might help improve clinical outcomes [9–12]. An integrated approach providing a mechanism to promote continued improvement of clinical information, implementation of effective practices and technologies, and locally relevant research can comprise a ‘learning health system’, which is posited to be influential in producing the positive change required [13–16].

The objectives of this study were to determine: (1) whether the quality of documentation that is the source of routine data improves over time, so that good-quality data can be generated from the newborn units of hospitals invited to participate in a low-cost learning health system, and (2) whether basic recommended treatments or interventions are being correctly provided to neonatal in-patients (if the quality of clinical data permits this), and so explore the potential for tracking intervention adoption and ultimately its effects in LMICs.

Methods

Ethics and reporting

The reporting of this observational study follows the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement [17]. The Scientific and Ethics Review Unit of the Kenya Medical Research Institute (KEMRI) approved the collection of the de-identified data that provides the basis for this study as part of the Clinical Information Network (CIN) for newborns (CIN-N). Individual consent for access to de-identified patient data was not required.

Ethics approval and consent to participate

Ethical approval was provided by the KEMRI Scientific and Ethical Review Committee (SERU 3459). Individual patient consent for the de-identified clinical data was judged to not be required, but consent from participating hospitals was sought.

Study design and setting

This study is situated within the CIN-N. The CIN-N is a collaborative learning health system network between the KEMRI-Wellcome Trust Research Programme, the Ministry of Health, the Kenya Paediatric Association, and 21 partner hospitals [9, 18]. The hospitals in CIN-N are first referral-level, geographically dispersed hospitals with an interquartile range of annual NBU inpatient admissions between 550 and 1640. A paediatric network was established in 2013/2014 to improve care given to inpatient children [13]. After co-development work with a single large NBU, multiple hospitals’ neonatal units joined to extend the original paediatric network and create the CIN-N in 2017/2018. In these hospitals, most admission care and prescribing is done by medical officer interns who rotate through departments regularly, resulting in almost complete changes in those responsible for NBU admissions every three months [19]. In-depth descriptions of the development of CIN-N and its activities are provided elsewhere [3, 9, 13–16, 18]. For the purposes of this study, NBU data from one hospital is omitted from analyses because that hospital was developing and using information tools with the CIN-N team for four years before any additional hospitals joined the CIN-N; thus, only data from 20 hospitals are analysed.

This was a retrospective cohort study involving NBUs in the CIN-N hospitals. Once they join, the CIN-N hospitals receive three-monthly clinical audit and feedback reports on the quality of care, including, for example, summaries of key issues for documentation and treatment prescription errors [18]. Shorter feedback reports on data quality, and morbidity and mortality reports, are disseminated monthly via email to clinicians, nurses in charge and other hospital administration staff. Neonatal team leaders (neonatologists, paediatricians, and nurses) met face to face once or twice annually until 2020 (before the COVID-19 pandemic) to discuss these reports and how to improve clinical care. Hospitals that received no feedback were neither included in written reports nor discussed in meetings. During the COVID-19 pandemic, only short online network meetings were conducted; these focused mostly on disseminating information of relevance to the pandemic and on efforts to improve local neonatal audit and nursing practices.

Study size and participants

All hospitals have a specific newborn unit (NBU), and neonates aged ≤28 days admitted between January 2018 and December 2021 to the NBUs of the 20 CIN-N hospitals were eligible for inclusion. Neonates excluded were those whose admission or discharge dates were missing or improbable (e.g., discharge date earlier than admission date), and those whose admissions fell within prolonged health worker strikes that resulted in major disruption to health care delivery (i.e., December 2020–January 2021) [20].

Data sources and management

Methods of collection and cleaning of data in the CIN-N are reported in detail elsewhere [21]. Clinical data for neonatal admissions to the hospitals within the CIN-N are captured through Neonatal Admission Record (NAR) forms and other forms and charts that are part of the hospital’s medical record. The NAR and associated patient charts prompt the clinician with a checklist of fields covering nine documentation domains: demographics, admission information, discharge information, maternal history, presenting complaints, cardinal signs on examination, other physical examinations, nursing monitoring and supportive care [18]. Other charts in use include a comprehensive newborn monitoring chart (collecting data on vital signs and on feeds and fluids prescribed), which was developed and introduced between March and June 2019 [22], transfer forms (containing key data when a baby is transferred internally from the maternity unit to the NBU), treatment sheets, discharge summaries and, in case death occurs, death notification reports.

The clinical signs included in the NAR are based on recommendations in guidelines from the national Ministry of Health and the World Health Organisation (WHO) [23]. NAR forms were originally developed as part of the Emergency Treatment and Triage plus admission (ETAT+) approach which includes skill training in essential inpatient newborn care [24]. In earlier work they were associated with improved documentation of key patient characteristics during admission [18]. NAR are not provided to hospitals in CIN-N and so their adoption is at the discretion of hospital teams and supported by hospitals’ own resources, although CIN-N hospitals are encouraged to use them.

Each hospital has a clerk who extracts data from the NAR forms into a Research Electronic Data Capture (REDCap) database [25]. Two sets of data are captured: minimal and full datasets. The minimal dataset, which is unsuitable for this study’s analyses, is collected (1) for admissions during major holidays when the data clerk is on leave, and (2) for a random selection of records in hospitals where the workload is very high. The minimal dataset includes biodata and patient outcomes at discharge and is collected on all neonatal admissions in all CIN-N hospitals for reporting to the national Health Information System. The full dataset contains comprehensive data on admission details, patient history, clinical investigations, treatment and discharge information including diagnoses and outcome [9]. The data collected are subjected to routine quality assurance checks, a process explained in detailed reports published elsewhere [9].

Quantitative variables

Creation of documentation scores.

The outcome for objective 1 of this study was based on use of individual patient documentation scores compiled from the signs, symptoms, treatments, and outcomes data (Table 1). These scores were developed for each of the eight NAR indicator documentation domains then used to determine trends in the completeness of documentation in the hospitals involved. Domains had different numbers of component data items (Table 1) considered key for characterising NBU populations and assessing core aspects of technical quality of care neonates receive [26]. Domain-specific composite scores for each patient were developed by arithmetic aggregation of all items with valid (non-missing) data in that domain (score = 1 if valid data, = 0 for missing data).
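As an illustration, this arithmetic aggregation can be sketched as follows (a minimal sketch; the field names below are hypothetical, not the actual NAR variable names):

```python
def domain_score(record, domain_fields):
    """Domain-specific composite score: each field with valid
    (non-missing) data contributes 1; missing data contributes 0."""
    return sum(1 for field in domain_fields if record.get(field) not in (None, ""))

# Hypothetical maternal history domain with four component items
MATERNAL_HISTORY = ["parity", "gestation_weeks", "mode_of_delivery", "hiv_status"]

patient = {"parity": 2, "gestation_weeks": 38,
           "mode_of_delivery": "SVD", "hiv_status": None}

score = domain_score(patient, MATERNAL_HISTORY)   # 3 out of a maximum of 4
is_complete = score == len(MATERNAL_HISTORY)      # False: one item is missing
```

A patient counts towards "complete documentation" for a domain only when the score equals the number of component items in that domain.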

Table 1. Variables used in the documentation and coverage scores.

https://doi.org/10.1371/journal.pgph.0000673.t001

Creation of treatment correctness scores and intervention tracking.

An additional three indicator domains were created to reflect the accuracy of basic treatment prescriptions (antibiotics, fluids, and feeds) for relevant sub-populations of neonates receiving these treatments, based on the dosage or volume recommendations in the national guidelines [27] (Table 2). Each eligible patient in each of the treatment domains in the analysis could have either a correct or an incorrect prescription: if the treatment was correctly prescribed, it contributed a score of one to the domain-specific score; if treatment information was missing or the treatment was incorrectly prescribed, it contributed a score of zero. Finally, descriptive analyses were done to assess how well the data could support tracking of intervention adoption over time by evaluating whether neonates eligible for weight monitoring, CPAP, and KMC received these essential services. The adoption of weight monitoring, CPAP and KMC was summarised by coverage scores calculated as the percentage of potentially eligible neonates who were recorded as receiving these interventions/monitoring.
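The per-patient treatment scoring and the coverage calculation described above might be sketched as follows (field names are hypothetical; the guideline thresholds themselves are those in Table 2 and are not reproduced here):

```python
def treatment_score(prescription, is_correct):
    """1 if a prescription exists and meets the guideline thresholds;
    0 if it is missing or incorrectly prescribed."""
    return 1 if prescription is not None and is_correct(prescription) else 0

def coverage_score(eligible, received_key):
    """Percentage of potentially eligible neonates recorded as
    receiving an intervention (e.g., KMC, CPAP, weight monitoring)."""
    received = sum(1 for patient in eligible if patient.get(received_key))
    return 100.0 * received / len(eligible)

# Missing prescription information contributes zero by design
assume_correct = lambda rx: True
no_prescription = treatment_score(None, assume_correct)                # 0
valid_prescription = treatment_score({"dose_mg": 10}, assume_correct)  # 1

# Hypothetical KMC coverage among four neonates with birth weight < 2 kg
cohort = [{"kmc": True}, {"kmc": False}, {"kmc": True}, {"kmc": False}]
kmc_coverage = coverage_score(cohort, "kmc")                           # 50.0
```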

Table 2. Threshold for prescribing antibiotics, fluids, and feeds.

https://doi.org/10.1371/journal.pgph.0000673.t002

Statistical methods

Descriptive analyses.

Documentation performance of each hospital over time was summarised monthly using trend plots as a percentage representing (1) the average of individual patients’ domain-specific scores as a proportion of the maximum domain score possible (i.e., the documentation score), and (2) the number of patients with the maximum possible documentation score for each domain out of all patients with full data collection admitted to the NBU. Similarly, domain-specific treatment accuracy and treatment coverage were summarised monthly for each hospital using trend plots as percentages representing the proportion of neonates eligible for essential treatments who received an accurate prescription or the required intervention respectively.

These pooled scores are presented using scatter plots for each hospital each month over the period 2018 to 2021 (i.e., from the month of joining CIN-N). Locally Weighted Scatterplot Smoothing (LOWESS) line plots were used to visually represent the trend over time for each documentation, treatment accuracy, and treatment coverage domain. Descriptively, from the hospital-specific trend plots, hospitals whose performance at the month of joining CIN-N was below 40% were considered to have a low baseline, while those with a performance above 75% were considered to have a high baseline.
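For readers unfamiliar with the technique, LOWESS fits a weighted local regression around each point. A deliberately simplified, single-pass version (real LOWESS implementations add robustness re-weighting iterations) can be sketched as follows; the monthly percentages below are hypothetical:

```python
def lowess_smooth(xs, ys, frac=0.5):
    """Simplified LOWESS: for each x, fit a tricube-weighted least-squares
    line to the nearest `frac` of the points and evaluate it at x."""
    n = len(xs)
    k = max(2, int(frac * n))
    smoothed = []
    for x0 in xs:
        # k nearest neighbours of x0, with tricube distance weights
        nearest = sorted(range(n), key=lambda i: abs(xs[i] - x0))[:k]
        dmax = max(abs(xs[i] - x0) for i in nearest) or 1.0
        w = [(1 - (abs(xs[i] - x0) / dmax) ** 3) ** 3 for i in nearest]
        # Weighted least squares for y = a + b * x over the neighbourhood
        sw = sum(w)
        mx = sum(wi * xs[i] for wi, i in zip(w, nearest)) / sw
        my = sum(wi * ys[i] for wi, i in zip(w, nearest)) / sw
        sxx = sum(wi * (xs[i] - mx) ** 2 for wi, i in zip(w, nearest))
        sxy = sum(wi * (xs[i] - mx) * (ys[i] - my) for wi, i in zip(w, nearest))
        slope = sxy / sxx if sxx else 0.0
        smoothed.append(my + slope * (x0 - mx))
    return smoothed

# Hypothetical monthly completeness percentages for one hospital
months = list(range(1, 13))
pct = [42, 45, 44, 50, 53, 51, 58, 60, 57, 63, 66, 65]
trend = lowess_smooth(months, pct, frac=0.5)  # noisy series -> smooth trend
```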

Inferential statistics.

To characterise clinical documentation completeness and adherence to recommended treatment prescribing guidelines over time while quantifying heterogeneity between CIN-N hospitals, generalised linear mixed effects models were fitted for each documentation and treatment accuracy domain. These mixed effects models (using log link) were fitted on two types of hospital-level count outcome variables computed monthly:

  1. Documentation domain completeness score per hospital (as the number of all patients with all domain-specific variables documented out of all patients admitted to CIN-N hospitals).
  2. Treatment domain accuracy score per hospital (as the number of all patients with accurate treatment prescription out of all those with the treatment prescribed).

For our approach to the mixed effects model fitting, the monthly observations are nested within hospitals. Time elapsed was captured as months since the hospitals joined the CIN-N and was treated as a continuous fixed effect. From previous studies within the CIN, the effect of time on adherence to recommended clinical practice was found to vary across hospitals [28]. Likelihood ratio tests (LRT) were used to determine the most suitable random effects model (hospital random intercepts vs hospital random intercepts with random slopes for time). The outcome variables for documentation and treatment domains were assumed to follow a negative binomial distribution (a generalisation of Poisson regression which loosens the restrictive assumption made by the Poisson model that the variance is equal to the mean, i.e., the equidispersion assumption) [29]. The sensitivity analyses sub-section of the methods section addresses how this assumption was tested. An offset term (i.e., the number of patients eligible per month per hospital) was included in each model to model the count outcome as a rate over time (e.g., change in the number of patients with accurate treatment prescribed), and the model effects are reported as incidence rate ratios (IRR). Intra-cluster correlation coefficients (ICC) were provided to indicate variation between hospitals in recommended documentation practices and adherence to treatment guidelines.
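To aid interpretation of these quantities: with time as a continuous fixed effect on the log scale, a monthly IRR compounds multiplicatively over follow-up, and the ICC partitions outcome variance between and within hospitals. A small illustrative sketch (the 1.076 figure corresponds to the 7.6% monthly improvement reported for the monitoring domain in the results; the variance components are hypothetical):

```python
def cumulative_rate_ratio(monthly_irr, months):
    """Time enters the model on the log scale, so a monthly IRR
    compounds multiplicatively over the follow-up period."""
    return monthly_irr ** months

def icc(var_between, var_within):
    """Variance-partition ICC: share of total outcome variance
    attributable to differences between hospitals."""
    return var_between / (var_between + var_within)

# IRR = 1.076 (a 7.6% month-to-month relative improvement) compounds
# to roughly a 2.4-fold rate of complete documentation over 12 months
twelve_month_ratio = cumulative_rate_ratio(1.076, 12)   # ~2.41

# Hypothetical variance components: most variation sits between hospitals
example_icc = icc(0.8, 0.4)                             # ~0.67
```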

Missing data.

Missing data was considered ‘informative’ as the analysis is based on documentation or no documentation. For the documentation score, missing variables were recorded as zero and therefore contributed a score of zero to the domain score per patient. For treatments, the absence of clear prescription information was logically considered to represent an inadequate prescription; for coverage, no record of use of the intervention was assumed to indicate no use.

Sensitivity analyses.

Overdispersion of the outcome variables (which occurs when the conditional variance exceeds the conditional mean), a key negative binomial model assumption, was evaluated by a likelihood ratio test comparing the model(s) to their Poisson model equivalents, which hold the conditional mean and variance to be equal (i.e., equidispersion). Also, a likelihood ratio test (LRT) was used to examine the most suitable random effects model (random intercepts at the hospital level versus random intercepts for the hospitals with random slopes for time) [30]. To ensure that the correlations between the repeated outcome measurements of each hospital, which decrease with time lag (i.e., autocorrelation), were adequately reflected, an LRT was used to examine whether there was evidence to support including a term for an autocorrelation structure of order one [31] over using a mixed effects model without such a term.
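As a quick, informal complement to the likelihood ratio test used in the study, the variance-to-mean ratio of the monthly counts gives a rough indication of overdispersion (this is a diagnostic sketch only, not the study's formal test; the counts below are hypothetical):

```python
def dispersion_ratio(counts):
    """Sample variance-to-mean ratio: under equidispersion (the Poisson
    assumption) this is close to 1; values well above 1 suggest
    overdispersion favouring a negative binomial model."""
    n = len(counts)
    mean = sum(counts) / n
    variance = sum((c - mean) ** 2 for c in counts) / (n - 1)
    return variance / mean

# Hypothetical monthly counts of fully documented admissions at one hospital
monthly_counts = [12, 30, 9, 45, 7, 52, 11, 38]
ratio = dispersion_ratio(monthly_counts)   # ~12.7, far above 1
```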

Finally, quantile residual quantile-quantile (QQ) plots for each fitted model were used to explore whether there was evidence supporting the assumption that the models’ conditional outcomes approximated a normal distribution, although this assumption is debatable for count data models [32].

Results

Descriptive findings

Fig 1 depicts the study population inclusion process. Out of the 84,960 NBU admissions to CIN-N hospitals, 80,060 (94.23%) were eligible for analysis. Most exclusions were because an admission was randomly sampled for minimum data collection (2934/84960) or fell within the industrial action period (1966/84960). Among the patients admitted to CIN-N hospitals during the study period and selected for this study, 43953/80060 (54.9%) were male. Overall, the mortality rate across the 20 hospitals was 11314/80060 (14.13%). The median birth weight of CIN-N NBU admissions was 3 kg (inter-quartile range (IQR): 2.0–3.395) and the median length of stay was 4 days (IQR: 2–8). NBU admissions had a median of one admission diagnosis (IQR: 1–2). The leading NBU discharge diagnosis over time was low birth weight, followed by birth asphyxia, respiratory distress syndrome, and then neonatal sepsis. Out of the 42998/80060 (53.71%) NBU admissions with a Gentamicin prescription and 43889/80060 (54.82%) with a Penicillin prescription, 1022/42998 (2.38%) and 1964/43889 (4.47%) respectively were classified as incorrect because of incomplete prescribing data (e.g., any of the age, birth weight, dosage, route, or frequency of administration variables missing). Out of the 35295/80060 (44.09%) NBU admissions with a fluids prescription and 13643/80060 (17.04%) with a feeds prescription, 2778/35295 (7.87%) and 3721/13643 (27.27%) respectively were classified as incorrect because of incomplete prescribing data. The proportion of records in which key items are not recorded is illustrated in Table 3.

Fig 1. Flow-chart of the inclusion criteria.

The overall population (n = 80060) is used for documentation score analysis. S1 Table in supporting information provides details of when CIN-N hospitals joined the network and patient records per hospital so far.

https://doi.org/10.1371/journal.pgph.0000673.g001

Table 3. Proportion of records for admission observations included in this study in which key items are not recorded.

https://doi.org/10.1371/journal.pgph.0000673.t003

Objective 1 findings: Quality of documentation of in-patient neonatal care provided over time

Examining trends with data from across hospitals, it can be seen that at the time all (new) hospitals joined the CIN-N, documentation completeness for 5/8 documentation domains was already around 80% or better (Fig 2, Table 4). This is likely attributable to most of these sites already using the NAR, linked to their already being part of CIN-Paediatrics [9, 14, 16]. Specifically, for the admission information, discharge information and demographics documentation domains, performance was consistently >95%, with a median of >80% of patients having full documentation at admission (Table 4, Fig 2). For this reason, further examination of hospital-specific trends for these domains was not done.

Fig 2. Domain-specific documentation trends over time.

Domain completeness score summarised as an average of all individual patient domain-specific scores in each month. Trend line generated using LOWESS technique.

https://doi.org/10.1371/journal.pgph.0000673.g002

Table 4. Domain documentation summary statistics pooled across all time periods (n = 80060 patients).

https://doi.org/10.1371/journal.pgph.0000673.t004

For other domains, performance started lower, with a suggestion from all hospitals’ data of improvement over time, but also considerable between-hospital variability, e.g., in the maternal history, other bedside examination, and monitoring (vital signs) domains (Fig 2). Fig 3 (and S1 Fig in supporting information) illustrates this considerable between-hospital variability (e.g., in the other examination domain). Plots display some examples of broad improvement (H13 and H20), some with static performance over time (e.g., H12) and some with rather erratic performance including occasional substantial declines (e.g., H2).

Fig 3. Illustration of hospital-specific documentation trends using a random selection of half the CIN-N hospitals.

Hospital-specific trends for the remaining subset of hospitals can be found in S1 Fig of supporting information. Trend line generated using LOWESS technique. Fewer observations in some hospitals due to different CIN-N joining dates.

https://doi.org/10.1371/journal.pgph.0000673.g003

All documentation domains demonstrated month-to-month improvements in the number of patients with complete domain documentation, even if modest in size, which were statistically significant (Table 5). In descending order, the monitoring (vital signs), other physical examination and bedside testing (i.e., Other Signs), discharge information, and maternal history domains demonstrated statistically significant month-to-month relative improvements in the number of patients with complete documentation of 7.6%, 2.9%, 2.4%, and 2.0% respectively (Table 5).

Table 5. Incidence rates of complete domain documentation at admission over time.

https://doi.org/10.1371/journal.pgph.0000673.t005

At the time of joining CIN-N, less than 50% of the patients admitted to the hospitals had complete documentation of the Discharge information, Monitoring (Vital signs), Other physical examination and bedside testing, and Maternal history domains, with a substantial amount of variance in the outcome explained by hospital factors, as illustrated by the high ICCs (Table 5, Fig 3). Hospitals with higher baseline performance tended to demonstrate slower rates of improvement than hospitals with lower baseline performance (Table 5; H13 versus H17 in Fig 3).

Objective 2 findings: Accuracy of essential neonatal intervention prescriptions over time

Given the good quality of prescribing data from CIN-N and the reasonable assumptions about the meaning of missing prescribing data, treatment prescribing accuracy was evaluated for the common antibiotics, feeds, and fluids in NBUs. Domain-specific treatment accuracy scores revealed an increasing proportion of patients with accurate fluids and feeds prescriptions, from approximately 40% to 60% and from 15% to 40% respectively, although feeds prescribing accuracy then regressed to 25% (Fig 4).

Fig 4. Overall trend in treatment accuracy and coverage.

KMC: Kangaroo Mother Care; CPAP: Continuous Positive Airway Pressure; RDS: Respiratory Distress Syndrome. Trend line generated using LOWESS technique.

https://doi.org/10.1371/journal.pgph.0000673.g004

Antibiotic prescription showed a modest improvement in accuracy from around 65% to 80% within the first 12 months, after which it fluctuated around 80% over time across all CIN-N admissions. Treatment coverage levels for KMC demonstrated an increase over time from 20% to 40% in neonates with birth weight <2 kg (Fig 4). There was a small increase in CPAP coverage levels over time, from 4% to 10%, in neonates with a clinical diagnosis of respiratory distress syndrome (RDS). Repeated weight monitoring for sick neonates improved from 80% to approximately 95% over time (Fig 4). There was evidence of moderate to high hospital variability in both treatment accuracy and coverage scores in CIN-N hospitals (Fig 4).

While antibiotic treatment accuracy seemed to have a ceiling effect of 80% in pooled hospital data, hospital-specific plots (Fig 5, S2 Fig in supporting information) show that some hospitals consistently attained accuracy levels >80% (H8, H11, H19), an indication that improvement in other sites was possible. Hospital-specific trends suggested that fluids prescribing accuracy improved from a lower baseline for some hospitals (e.g., H5, H12, H20) but there is still a long way to go, with considerable between-hospital variability evident over time (Fig 5, S2 Fig in supporting information). Similarly, feeds prescribing accuracy showed some improvement in some hospitals from a lower baseline (e.g., H5, H8), but in others performance was erratic (H2, H12), with most performing consistently poorly over time (e.g., H13, H19) (Fig 5, S2 Fig in supporting information).

Fig 5. Hospital-specific treatment accuracy trends for half of randomly selected CIN-N hospitals.

Hospital-specific trends for the remaining subset of hospitals can be found in S2 Fig in supporting information. Trend line generated using LOWESS technique. Fewer observations in some hospitals due to different CIN-N joining dates.

https://doi.org/10.1371/journal.pgph.0000673.g005

On average, 73.5%, 10.8% and 22.8% of the patients in the CIN-N received correct antibiotic, feeds, and fluids treatment respectively at the time when hospitals joined the CIN-N (Table 6). There was a modest, statistically significant month-to-month relative increase in correct inpatient treatment after joining the CIN of 2.8% and 1.4% for feeds and fluids prescribing accuracy respectively. Antibiotic prescriptions showed no statistically significant month-to-month improvement after joining CIN-N (Table 6). The high ICCs from the antibiotics and fluids mixed effects models suggest that much of the variation in the accuracy of prescribing practices was associated with hospital-level factors.

Table 6. Incidence rates of correct treatment prescribing at admission over time.

https://doi.org/10.1371/journal.pgph.0000673.t006

Sensitivity analyses findings

It was reasonable to use Poisson models (which assume equidispersion) in all but four domains: the Discharge information, Monitoring (Vital Signs), and Other physical examinations and bedside testing (i.e., Other Signs) documentation domains, and the Feeds treatment accuracy domain, which showed evidence suggestive of overdispersion (S2 Table, S3–S5 Figs in supporting information). Where the equidispersion assumption was violated, negative binomial models were used and informed any inference drawn.

Discussion

Summary of findings

This study aimed to determine whether good-quality routine clinical data might be generated from CIN-N hospitals and whether the quality of data improved over time. As the data quality was reasonable, the data were then used to determine whether essential treatments or interventions are being correctly prescribed to newborns and to track intervention adoption. From the time hospitals joined the CIN-N, around 80% of newborns had complete documentation in 5/8 documentation domains (Table 4). This relatively good performance at baseline may be a consequence of participation in the paediatric CIN by most of these hospitals prior to the formal extension of CIN to NBUs (i.e., CIN-N), with many paediatric practitioners previously exposed to use of the NAR, ETAT+ training and national neonatal guidelines [13]. All documentation domains demonstrated statistically significant, albeit modest, month-to-month improvements (between 0.6% and 7.6% per month) in the number of patients with complete domain documentation (Fig 4, Table 5). On average, 73.5%, 10.8% and 22.8% of the newborns with treatment orders in the CIN-N for first-line antibiotics, feeds, and fluids respectively had correct prescriptions at the time when hospitals joined the CIN-N (Table 6). There was a modest, statistically significant 2.8% and 1.4% month-to-month relative increase in accurate feeds and fluids prescription after joining the CIN-N, resulting in end-line performance of around 40% and 60% respectively. Antibiotic prescribing showed no statistically significant month-to-month change.

Although sometimes modest, the improvements observed were often sustained during the COVID-19 pandemic (with the possible exception of feed prescribing), which restricted network engagement activities to brief online meetings between April 2020 and December 2021. Improvements also occurred over a period of 4 years during which junior medical staff on NBUs changed every 3 months, with frequent changes also in senior staff [19]. Across the entire period, CIN-N sustained the distribution of feedback reports, and the magnitude of improvements observed is in keeping with findings from many audit and feedback interventions [33].

Coverage levels for KMC in neonates with birth weight <2 kg and CPAP in neonates with clinically diagnosed respiratory distress syndrome (RDS) increased over time from 20% to 40% and from 4% to 10%, respectively; repeated weight monitoring for sick neonates with birth weight <2.5 kg and length of stay >6 days improved from 80% to approximately 95% over time (Fig 4). There was evidence of moderate to high hospital variability in documentation, treatment accuracy, and coverage scores in CIN-N hospitals; as shown previously, hospitals with higher baseline performance showed slower rates of improvement than hospitals with lower baseline performance, in some cases perhaps linked to ceiling effects [34].

Comparison to other findings

Previous studies in Kenya depict a health system that is struggling to collect quality data usable for decision making, especially for neonatal care [18, 35]. The poor quality of neonatal clinical data has been widely reported in other African countries [36, 37]; this undermines efforts to track the scale-up of quality care [38, 39]. Implementation of a learning health system across hospitals utilising a common data platform to facilitate routine audit and feedback cycles has been shown to improve the documentation of patient data and its subsequent use in care improvement [9, 38, 40]. Employing findings, tools and practices from previous studies and progressively engaging more hospitals, this study demonstrated that data can be collected using a common data platform as part of a learning health system approach from a network of hospitals' NBUs; it further shows that these data can be useful for identifying potential gaps in care (e.g., treatment accuracy) with an aim of improving the quality of care provided in facilities and tracking outcomes at scale [13–16, 18, 41]. To our knowledge this is the largest reported long-term neonatal learning health system platform in SSA, serving as an exemplar actionable health information system in line with WHO standards [12, 13, 15, 16].

Findings from scoping reviews suggest that having better data can help improve quality of care if coupled with development of local leadership, training, and use of local improvement strategies such as mortality audits or quality improvement cycles; this can help reduce inpatient neonatal mortality in low-income country hospitals [42–45]. However, the complex intervention strategies required to tackle multiple quality and safety concerns in hospitals may make it challenging to demonstrate mortality reductions over the short term [46]. High-quality data platforms may therefore be especially helpful for tracking whether hospital quality of care and mortality rates are improving over the long term. Hospital neonatal outcomes may also be influenced by the successful scaling up of key interventions such as CPAP and KMC, so it is essential to be able to track their adoption at scale. However, outside specific research studies these data are rarely reported, and the effects of programmes supporting such scaling up therefore remain largely unknown. The CIN-N data platform described in this study, by spanning aspects of care rarely included in other LMIC quality assessment approaches, offers one means to track adoption over the long term and provides a hypothesis-generating platform for implementation research linked to observed variations in quality of care and intervention rollout [42, 47, 48].

Implications of findings

For key clinical data domains (i.e., demographic, admission, and discharge information) there was good data documentation at the time hospitals joined the CIN-N. This is likely attributable to most of these sites already being part of CIN-Paediatrics, where, organisationally, the learning health system culture and activities explained in detail elsewhere [13–16] were already being cultivated, allowing these hospitals to take advantage of the roll-out and dissemination of tools like the NAR coupled with ETAT+ training. This also suggests that it is possible to introduce tools that capture essential clinical data with missingness rates of 20% or less in routine SSA hospital settings. Analyses of such data do then need to account for missingness using appropriate statistical techniques to reduce potential biases [49, 50]. The slow but steady month-to-month improvement illustrates how long it takes to change clinical behaviours for some forms of patient documentation.

For several documentation domains (e.g., Other physical examination, Maternal History, Monitoring Vital Signs) and all treatment accuracy and coverage domains except antibiotic prescribing accuracy, considerable variability in performance between hospitals remains a persistent challenge.

Domains that had lower baselines were those where documentation practices were less standardised prior to joining CIN-N. New information tools such as Transfer Forms (for sick newborns transferred from labour wards or theatres to NBU) or feedback on documentation of vital signs on NBU admission may have improved performance for the Maternal history and Monitoring (Vital signs) domains across hospitals over time (Figs 2 and 3). However, challenges with adoption or improvement were illustrated by two patterns: facilities starting very low that showed gradual improvement, and those starting high that either stagnated or deteriorated.

In some cases, this may reflect a “ceiling” effect (e.g., Fluids prescribing accuracy). It is evident, however, that some hospitals could consistently attain higher accuracy levels (Figs 3 and 5), suggesting improvements in other sites would be possible. Similarly, trends in the accuracy of feed and fluid prescribing are quite erratic and vary within and between hospitals (Fig 5). These erratic patterns might have been exacerbated by the limited interaction of hospitals through CIN meetings during the COVID-19 pandemic and other challenges to sustaining quality care such as human resource shortages and labour strikes [19, 20]. Learning from ‘positive deviants’ may be informative. Prior research suggests good performers have adequate and supportive staffing, participate actively in local clinical audit and feedback processes, and have good supervision by unit leads [14–16, 51].

Better theory-driven ways of conducting audit and feedback might be required within CIN-N to improve quality of care and treatment accuracy [33]. For example, more active feedback might be needed for more complex tasks such as promoting accurate prescribing. Further elaborations might explicitly address (1) capacity limitations of CIN-N hospitals and clinical teams to produce the improvements required, (2) lessons learned about the identity and culture of each individual CIN-N hospital and site-specific barriers to change, and (3) specific use of behavioural thinking that directly supports positive clinical behaviours by ensuring feedback is actionable, controllable and timely [40, 52].

Strengths and limitations

This study is among the few in SSA focusing on documentation of newborn data, an implementation of strategic objectives 2 and 5 of the Every Newborn Action Plan [6, 53, 54]. The CIN-N generates data from more than 20 NBUs across Kenya by improving routine data sources in a strategy that is relatively low-cost and scalable, as the central data management and data quality assurance processes involved can benefit from economies of scale [9, 55]. However, the data generated are limited to what is ‘documented’, restricting the range of quality measures available (e.g., to whether prescriptions were correct). Key limitations therefore remain, such as confirming whether treatment was dispensed as prescribed (i.e., treatment adherence); and, for interventions such as CPAP, it can be hard to determine the best denominator population, which would ideally be newborns that might have benefited from its use. This difficulty in identifying suitable denominators that enable evaluation of the appropriate use of interventions means tracking adoption may frequently be based on a cruder measure of the frequency of (documented) use. Furthermore, for some indicators applicable to only relatively small numbers of patients, performance may appear erratic because there are few data points per month. Thus, less frequent monitoring over longer time periods may be required to sensibly track trends.

Conclusions

All neonatal in-patient care documentation domains demonstrated modest improvements, of between 0.6% and 7.6% per month, in the number of patients with complete domain documentation for each month in the CIN-N. The improved clinical data quality made evaluation of essential treatment accuracy possible, with the data showing month-to-month improvements in prescribing accuracy of 2.8% and 1.4% for feeds and fluids, respectively, but not for antibiotic prescriptions, after hospitals joined the CIN-N.

It is possible to introduce tools that capture essential clinical data, often in 80% or more of newborns admitted, in routine SSA hospital settings engaged in a centrally supported peer-to-peer network, but analyses of such data need to account for missingness using appropriate statistical techniques. Improvements in quality indicators are on average modest on a month-to-month basis but valuable, occurring over a prolonged period that included the COVID-19 pandemic. Average effects mask considerable temporal and between-hospital variability, with some hospitals demonstrating high levels of performance for indicators likely to be important to patient safety and outcomes, such as feed or antibiotic prescribing accuracy. Future research is needed to explore how learning from high-performing hospitals in a learning health system in the SSA context can help realise better improvements more widely within the hospital network while continuously deploying better theory-driven feedback approaches. However, considerable system challenges such as rapid staff turnover, general staff shortages and ongoing material resource challenges likely contribute to persistent problems delivering quality care. Such quality clinical data (and associated platforms) can support better impact evaluation, performance benchmarking, exploration of links between health system inputs and outcomes, and critical scrutiny of geographic variation in the quality and outcomes of hospital care [56]. Efforts to improve the quality of clinical data from SSA to support these objectives remain much needed.

Supporting information

S1 Table. Hospitals’ CIN-N membership and patient volumes.

https://doi.org/10.1371/journal.pgph.0000673.s002

(DOCX)

S2 Table. Sensitivity analysis evaluating if there is any evidence that negative binomial model assumptions might have been violated.

https://doi.org/10.1371/journal.pgph.0000673.s003

(DOCX)

S1 Fig. Hospital-specific documentation trends for the second half of randomly selected CIN-N hospitals.

Trend line generated using LOWESS technique. Fewer observations in some hospitals due to different CIN-N joining dates.

https://doi.org/10.1371/journal.pgph.0000673.s004

(TIF)

S2 Fig. Hospital-specific treatment accuracy trends for half of randomly selected CIN-N hospitals.

Trend line generated using LOWESS technique. Fewer observations in some hospitals due to different CIN-N joining dates.

https://doi.org/10.1371/journal.pgph.0000673.s005

(TIF)

S3 Fig. Normality assumption check for Poisson and negative binomial generalised linear mixed effects models for documentation completeness domains.

https://doi.org/10.1371/journal.pgph.0000673.s006

(TIF)

S4 Fig. Linearity assumption check for Poisson and negative binomial generalised linear mixed effects models for the documentation completeness domains.

https://doi.org/10.1371/journal.pgph.0000673.s007

(TIF)

S5 Fig. Linearity and normality assumption check for the treatment appropriateness Poisson and negative binomial generalised linear mixed effects models.

https://doi.org/10.1371/journal.pgph.0000673.s008

(TIF)

References

  1. Tunçalp Ӧ, Were W, MacLennan C, Oladapo O, Gülmezoglu A, Bahl R, et al. Quality of care for pregnant women and newborns—the WHO vision. BJOG: an international journal of obstetrics & gynaecology. 2015;122(8):1045–9. pmid:25929823
  2. Liu L, Oza S, Hogan D, Chu Y, Perin J, Zhu J, et al. Global, regional, and national causes of under-5 mortality in 2000–15: an updated systematic analysis with implications for the Sustainable Development Goals. The Lancet. 2016;388(10063):3027–35.
  3. Irimu G, Aluvaala J, Malla L, Omoke S, Ogero M, Mbevi G, et al. Neonatal mortality in Kenyan hospitals: a multisite, retrospective, cohort study. BMJ Global Health. 2021;6(5):e004475. pmid:34059493
  4. Brotherton H, Gai A, Kebbeh B, Njie Y, Walker G, Muhammad AK, et al. Impact of early kangaroo mother care versus standard care on survival of mild-moderately unstable neonates <2000 grams: A randomised controlled trial. EClinicalMedicine. 2021;39:101050.
  5. World Health Organization. WHO recommendations on interventions to improve preterm birth outcomes. 2015.
  6. Bee M, Shiroor A, Hill Z. Neonatal care practices in sub-Saharan Africa: a systematic review of quantitative and qualitative data. Journal of Health, Population and Nutrition. 2018;37(1):1–12. pmid:29661239
  7. de Graft-Johnson J, Vesel L, Rosen HE, Rawlins B, Abwao S, Mazia G, et al. Cross-sectional observational assessment of quality of newborn care immediately after birth in health facilities across six sub-Saharan African countries. BMJ open. 2017;7(3):e014680. pmid:28348194
  8. Wilunda C, Putoto G, Riva DD, Manenti F, Atzori A, Calia F, et al. Assessing coverage, equity and quality gaps in maternal and neonatal care in sub-saharan Africa: an integrated approach. PloS one. 2015;10(5):e0127827. pmid:26000964
  9. Tuti T, Bitok M, Malla L, Paton C, Muinga N, Gathara D, et al. Improving documentation of clinical care within a clinical information network: an essential initial step in efforts to understand and improve care in Kenyan hospitals. BMJ Global Health. 2016;1(1):e000028. pmid:27398232
  10. Murphy GA, Waters D, Ouma PO, Gathara D, Shepperd S, Snow RW, et al. Estimating the need for inpatient neonatal services: an iterative approach employing evidence and expert consensus to guide local policy in Kenya. BMJ global health. 2017;2(4):e000472. pmid:29177099
  11. Maina M, McKnight J, Tosas-Auguet O, Schultsz C, English M. Using treatment guidelines to improve antibiotic use: insights from an antibiotic point prevalence survey in Kenya. BMJ Global Health. 2021;6(1):e003836.
  12. WHO. Standards for improving the quality of care for small and sick newborns in health facilities. Geneva: WHO; 2020 [cited 2022 26th May]; Available from: https://www.who.int/publications/i/item/9789240010765.
  13. Irimu G, Ogero M, Mbevi G, Agweyu A, Akech S, Julius T, et al. Approaching quality improvement at scale: a learning health system approach in Kenya. Archives of Disease in Childhood. 2018;103(11):1013–9. pmid:29514814
  14. English M, Ayieko P, Nyamai R, Were F, Githanga D, Irimu G. What do we think we are doing? How might a clinical information network be promoting implementation of recommended paediatric care practices in Kenyan hospitals? Health research policy and systems. 2017;15(1):1–12.
  15. English M, Irimu G, Agweyu A, Gathara D, Oliwa J, Ayieko P, et al. Building learning health systems to accelerate research and improve outcomes of clinical care in low-and middle-income countries. PLoS medicine. 2016;13(4):e1001991. pmid:27070913
  16. English M, Irimu G, Akech S, Aluvaala J, Ogero M, Isaaka L, et al. Employing learning health system principles to advance research on severe neonatal and paediatric illness in Kenya. BMJ Global Health. 2021;6(3):e005300. pmid:33758014
  17. Von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Annals of internal medicine. 2007;147(8):573–7. pmid:17938396
  18. Maina M, Aluvaala J, Mwaniki P, Tosas-Auguet O, Mutinda C, Maina B, et al. Using a common data platform to facilitate audit and feedback on the quality of hospital care provided to sick newborns in Kenya. BMJ Global Health. 2018;3(5):e001027. pmid:30258654
  19. Ogero M, Akech S, Malla L, Agweyu A, Irimu G, English M. Examining which clinicians provide admission hospital care in a high mortality setting and their adherence to guidelines: an observational study in 13 hospitals. Archives of disease in childhood. 2020;105(7):648–54. pmid:32169853
  20. Irimu G, Ogero M, Mbevi G, Kariuki C, Gathara D, Akech S, et al. Tackling health professionals’ strikes: an essential part of health system strengthening in Kenya. BMJ global health. 2018;3(6). pmid:30588346
  21. Tuti T, Bitok M, Paton C, Makone B, Malla L, Muinga N, et al. Innovating to enhance clinical data management using non-commercial and open source solutions across a multi-center network supporting inpatient pediatric care and research in Kenya. Journal of the American Medical Informatics Association. 2016;23(1):184–92. pmid:26063746
  22. Muinga N, Paton C, Gicheha E, Omoke S, Abejirinde I-OO, Benova L, et al. Using a human-centred design approach to develop a comprehensive newborn monitoring chart for inpatient care in Kenya. BMC health services research. 2021;21(1):1–14.
  23. World Health Organization. Pocket book of hospital care for children: guidelines for the management of common childhood illnesses: World Health Organization; 2013.
  24. Irimu G, Wamae A, Wasunna A, Were F, Ntoburi S, Opiyo N, et al. Developing and introducing evidence based clinical practice guidelines for serious illness in Kenya. Archives of disease in childhood. 2008;93(9):799–804. pmid:18719161
  25. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)—a metadata-driven methodology and workflow process for providing translational research informatics support. Journal of biomedical informatics. 2009;42(2):377–81. pmid:18929686
  26. English M, Wamae A, Nyamai R, Bevins B, Irimu G. Implementing locally appropriate guidelines and training to improve care of serious illness in Kenyan hospitals: a story of scaling-up (and down and left and right). Archives of disease in childhood. 2011;96(3):285–90. pmid:21220265
  27. Amolo L, Irimu G, Njai D. Knowledge of postnatal mothers on essential newborn care practices at the Kenyatta National Hospital: a cross sectional study. Pan African Medical Journal. 2017;28(1):159. pmid:29255567
  28. Tuti T, Aluvaala J, Akech S, Agweyu A, Irimu G, English M. Pulse oximetry adoption and oxygen orders at paediatric admission over 7 years in Kenya: a multihospital retrospective cohort study. BMJ open. 2021;11(9):e050995. pmid:34493522
  29. Gardner W, Mulvey EP, Shaw EC. Regression analyses of counts and rates: Poisson, overdispersed Poisson, and negative binomial models. Psychological bulletin. 1995;118(3):392. pmid:7501743
  30. Verbeke G, Molenberghs G, Rizopoulos D. Random effects models for longitudinal data. Longitudinal research with latent variables: Springer; 2010. p. 37–96.
  31. Linden A. Conducting interrupted time-series analysis for single-and multiple-group comparisons. The Stata Journal. 2015;15(2):480–500.
  32. Roback P, Legler J. Beyond multiple linear regression: applied generalized linear models and multilevel models in R. Chapman and Hall/CRC; 2021.
  33. Ivers N, Jamtvedt G, Flottorp S, Young JM, Odgaard‐Jensen J, French SD, et al. Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane database of systematic reviews. 2012 (6). pmid:22696318
  34. Gachau S, Ayieko P, Gathara D, Mwaniki P, Ogero M, Akech S, et al. Does audit and feedback improve the adoption of recommended practices? Evidence from a longitudinal observational study of an emerging clinical network in Kenya. BMJ global health. 2017;2(4):e000468. pmid:29104769
  35. Aluvaala J, Nyamai R, Were F, Wasunna A, Kosgei R, Karumbi J, et al. Assessment of neonatal care in clinical training facilities in Kenya. Archives of disease in childhood. 2015;100(1):42–7. pmid:25138104
  36. Bhattacharya AA, Umar N, Audu A, Felix H, Allen E, Schellenberg JR, et al. Quality of routine facility data for monitoring priority maternal and newborn indicators in DHIS2: a case study from Gombe state, Nigeria. PloS one. 2019;14(1):e0211265. pmid:30682130
  37. Dadzie D, Boadu RO, Engmann CM, Twum-Danso NAY. Evaluation of neonatal mortality data completeness and accuracy in Ghana. Plos one. 2021;16(3):e0239049. pmid:33661920
  38. Canavan ME, Brault MA, Tatek D, Burssa D, Teshome A, Linnander E, et al. Maternal and neonatal services in Ethiopia: measuring and improving quality. Bulletin of the World Health Organization. 2017;95(6):473. pmid:28603314
  39. Dickson KE, Kinney MV, Moxon SG, Ashton J, Zaka N, Simen-Kapeu A, et al. Scaling up quality care for mothers and newborns around the time of birth: an overview of methods and analyses of intervention-specific bottlenecks and solutions. BMC pregnancy and childbirth. 2015;15(2):1–19. pmid:26390820
  40. Brown B, Gude WT, Blakeman T, van der Veer SN, Ivers N, Francis JJ, et al. Clinical performance feedback intervention theory (CP-FIT): a new theory for designing, implementing, and evaluating feedback in health care based on a systematic review and meta-synthesis of qualitative research. Implementation Science. 2019;14(1):1–25.
  41. A Model Pediatric form for the Newborn. Hospital Topics. 1965;43(1):98–9.
  42. Ogola M, Njuguna EM, Aluvaala J, English M, Irimu G. Audit identified modifiable factors in Hospital Care of Newborns in low-middle income countries: a scoping review. BMC pediatrics. 2022;22(1):1–17.
  43. Willcox ML, Price J, Scott S, Nicholson BD, Stuart B, Roberts NW, et al. Death audits and reviews for reducing maternal, perinatal and child mortality. Cochrane Database of Systematic Reviews. 2020 (3). pmid:32212268
  44. Singh K, Brodish P, Speizer I, Barker P, Amenga-Etego I, Dasoberi I, et al. Can a quality improvement project impact maternal and child health outcomes at scale in northern Ghana? Health research policy and systems. 2016;14(1):1–13. pmid:27306769
  45. Ayieko P, Ntoburi S, Wagai J, Opondo C, Opiyo N, Migiro S, et al. A multifaceted intervention to implement guidelines and improve admission paediatric care in Kenyan district hospitals: a cluster randomised trial. PLoS medicine. 2011;8(4):e1001018. pmid:21483712
  46. Hategeka C, Lynd LD, Kenyon C, Tuyisenge L, Law MR. Impact of a multifaceted intervention to improve emergency care on newborn and child health outcomes in Rwanda. Health Policy and Planning. 2022;37(1):12–21. pmid:34459893
  47. English M, Nzinga J, Irimu G, Gathara D, Aluvaala J, McKnight J, et al. Programme theory and linked intervention strategy for large-scale change to improve hospital care in a low and middle-income country-A Study Pre-Protocol. Wellcome open research. 2020;5. pmid:33274301
  48. Mekbib T, Leatherman S. Quality improvement in maternal, neonatal and child health services in sub-Saharan Africa: a look at five resource-poor countries. Ethiopian Journal of Health Development. 2020;34(1).
  49. Sterne JA, White IR, Carlin JB, Spratt M, Royston P, Kenward MG, et al. Multiple imputation for missing data in epidemiological and clinical research: potential and pitfalls. Bmj. 2009;338. pmid:19564179
  50. White IR, Royston P, Wood AM. Multiple imputation using chained equations: issues and guidance for practice. Statistics in medicine. 2011;30(4):377–99. pmid:21225900
  51. Lawton R, Taylor N, Clay-Williams R, Braithwaite J. Positive deviance: a different approach to achieving patient safety. BMJ quality & safety. 2014;23(11):880–3. pmid:25049424
  52. Tuti T, Aluvaala J, Malla L, Irimu G, Mbevi G, Wainaina J, et al. Evaluation of an audit and feedback intervention to reduce gentamicin prescription errors in newborn treatment (ReGENT) in neonatal inpatient care in Kenya: a controlled interrupted time series study protocol. Implementation Science. 2022;17(1):1–17.
  53. World Health Organization. Opportunities for Africa’s newborns. 2006.
  54. World Health Organization. Every newborn: an action plan to end preventable deaths. 2014.
  55. Hemkens LG, Contopoulos-Ioannidis DG, Ioannidis JP. Routinely collected data and comparative effectiveness evidence: promises and limitations. Cmaj. 2016;188(8):E158–E64. pmid:26883316
  56. Grimshaw J, Ivers N, Linklater S, Foy R, Francis JJ, Gude WT, et al. Reinvigorating stagnant science: implementation laboratories and a meta-laboratory to efficiently advance the science of audit and feedback. BMJ quality & safety. 2019;28(5):416–23. pmid:30852557
  56. 56. Grimshaw J, Ivers N, Linklater S, Foy R, Francis JJ, Gude WT, et al. Reinvigorating stagnant science: implementation laboratories and a meta-laboratory to efficiently advance the science of audit and feedback. BMJ quality & safety. 2019;28(5):416–23. pmid:30852557