
The impact of continuous quality improvement on coverage of antenatal HIV care tests in rural South Africa: Results of a stepped-wedge cluster-randomised controlled implementation trial

  • H. Manisha Yapa ,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Resources, Software, Validation, Visualization, Writing – original draft, Writing – review & editing

    Affiliations The Kirby Institute, University of New South Wales Sydney, NSW, Australia, Africa Health Research Institute (AHRI), KwaZulu-Natal, South Africa

  • Jan-Walter De Neve,

    Roles Formal analysis, Visualization, Writing – review & editing

    Affiliation Heidelberg Institute of Global Health (HIGH), Medical Faculty and University Hospital, Heidelberg University, Heidelberg, Germany

  • Terusha Chetty,

    Roles Conceptualization, Funding acquisition, Writing – review & editing

    Affiliation Health Systems Research Unit, South African Medical Research Council, Durban, South Africa

  • Carina Herbst,

    Roles Investigation, Methodology, Resources, Software, Writing – review & editing

    Affiliation Africa Health Research Institute (AHRI), KwaZulu-Natal, South Africa

  • Frank A. Post,

    Roles Supervision, Writing – review & editing

    Affiliation King’s College Hospital NHS Foundation Trust, London, United Kingdom

  • Awachana Jiamsakul,

    Roles Formal analysis, Supervision, Writing – review & editing

    Affiliation The Kirby Institute, University of New South Wales Sydney, NSW, Australia

  • Pascal Geldsetzer,

    Roles Formal analysis, Methodology, Writing – review & editing

    Affiliations Heidelberg Institute of Global Health (HIGH), Medical Faculty and University Hospital, Heidelberg University, Heidelberg, Germany, Division of Primary Care and Population Health, Department of Medicine, Stanford University, Stanford, California, United States of America

  • Guy Harling,

    Roles Formal analysis, Writing – review & editing

    Affiliations Africa Health Research Institute (AHRI), KwaZulu-Natal, South Africa, Institute for Global Health, University College London, London, United Kingdom

  • Wendy Dhlomo-Mphatswe,

    Roles Investigation, Methodology, Writing – review & editing

    Affiliation School of Clinical Medicine, Discipline of Obstetrics and Gynaecology, University of KwaZulu-Natal, Durban, South Africa

  • Mosa Moshabela,

    Roles Investigation, Methodology, Writing – review & editing

    Affiliations Africa Health Research Institute (AHRI), KwaZulu-Natal, South Africa, School of Nursing and Public Health, University of KwaZulu-Natal, Durban, South Africa

  • Philippa Matthews,

    Roles Conceptualization, Writing – review & editing

    Affiliations Africa Health Research Institute (AHRI), KwaZulu-Natal, South Africa, Islington GP Federation, London, United Kingdom

  • Osondu Ogbuoji,

    Roles Writing – review & editing

    Affiliation Global Health Institute, Duke University, Durham, North Carolina, United States of America

  • Frank Tanser,

    Roles Formal analysis, Writing – review & editing

    Affiliations Africa Health Research Institute (AHRI), KwaZulu-Natal, South Africa, School of Nursing and Public Health, University of KwaZulu-Natal, Durban, South Africa, Lincoln International Institute for Rural Health, University of Lincoln, Lincoln, United Kingdom, Centre for the AIDS Programme of Research in South Africa (CAPRISA), University of KwaZulu-Natal, Durban, South Africa

  • Dickman Gareta,

    Roles Data curation, Software, Writing – review & editing

    Affiliation Africa Health Research Institute (AHRI), KwaZulu-Natal, South Africa

  • Kobus Herbst,

    Roles Conceptualization, Data curation, Methodology, Software, Writing – review & editing

    Affiliation Africa Health Research Institute (AHRI), KwaZulu-Natal, South Africa

  • Deenan Pillay,

    Roles Conceptualization, Supervision, Writing – review & editing

    Affiliations Africa Health Research Institute (AHRI), KwaZulu-Natal, South Africa, Division of Infection and Immunity, University College London, London, United Kingdom

  • Sally Wyke,

    Roles Conceptualization, Methodology, Writing – review & editing

    Affiliations Africa Health Research Institute (AHRI), KwaZulu-Natal, South Africa, Institute for Health & Wellbeing, University of Glasgow, Glasgow, United Kingdom

  •  [ ... ],
  • Till Bärnighausen

    Roles Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Supervision, Validation, Writing – review & editing

    Affiliations Africa Health Research Institute (AHRI), KwaZulu-Natal, South Africa, Heidelberg Institute of Global Health (HIGH), Medical Faculty and University Hospital, Heidelberg University, Heidelberg, Germany, Institute for Global Health, University College London, London, United Kingdom, MRC/Wits Rural Public Health and Health Transitions Research Unit (Agincourt), Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, South Africa, Department of Global Health and Population, Harvard T.H. Chan School of Public Health, Boston, Massachusetts, United States of America




Background

Evidence for the effectiveness of continuous quality improvement (CQI) in resource-poor settings is very limited. We aimed to establish the effects of CQI on quality of antenatal HIV care in primary care clinics in rural South Africa.

Methods and findings

We conducted a stepped-wedge cluster-randomised controlled trial (RCT) comparing CQI to usual standard of antenatal care (ANC) in 7 nurse-led, public-sector primary care clinics—combined into 6 clusters—over 8 steps and 19 months. Clusters randomly switched from comparator to intervention on pre-specified dates until all had rolled over to the CQI intervention. Investigators and clusters were blinded to randomisation until 2 weeks prior to each step. The intervention was delivered by trained CQI mentors and included standard CQI tools (process maps, fishbone diagrams, run charts, Plan-Do-Study-Act [PDSA] cycles, and action learning sessions). CQI mentors worked with health workers, including nurses and HIV lay counsellors. The mentors used the standard CQI tools flexibly, tailored to local clinic needs. Health workers were the direct recipients of the intervention, whereas the ultimate beneficiaries were pregnant women attending ANC. Our 2 registered primary endpoints were viral load (VL) monitoring (which is critical for elimination of mother-to-child transmission of HIV [eMTCT] and the health of pregnant women living with HIV) and repeat HIV testing (which is necessary to identify and treat women who seroconvert during pregnancy). All pregnant women who attended their first antenatal visit at one of the 7 study clinics and were ≥18 years old at delivery were eligible for endpoint assessment. We performed intention-to-treat (ITT) analyses using modified Poisson generalised linear mixed effects models. We estimated effect sizes with time-step fixed effects and clinic random effects (Model 1). In separate models, we added a nested random clinic–time step interaction term (Model 2) or individual random effects (Model 3). Between 15 July 2015 and 30 January 2017, 2,160 participants with 13,212 ANC visits (intervention n = 6,877, control n = 6,335) were eligible for ITT analysis. No adverse events were reported. 
Median age at first booking was 25 years (interquartile range [IQR] 21 to 30), and median parity was 1 (IQR 0 to 2). HIV prevalence was 47% (95% CI 42% to 53%). In Model 1, CQI significantly increased VL monitoring (relative risk [RR] 1.38, 95% CI 1.21 to 1.57, p < 0.001) but did not improve repeat HIV testing (RR 1.00, 95% CI 0.88 to 1.13, p = 0.958). These results remained essentially the same in both Model 2 and Model 3. Limitations of our study include that we did not establish impact beyond the duration of the relatively short study period of 19 months, and that transition steps may have been too short to achieve the full potential impact of the CQI intervention.


Conclusions

We found that CQI can be effective at improving quality of primary care in rural Africa. Policy makers should consider CQI as a routine intervention to boost quality of primary care in rural African communities. Implementation research should accompany future CQI use to elucidate mechanisms of action and to identify factors supporting long-term success.

Trial registration

This trial is registered at ClinicalTrials.gov under registration number NCT02626351.

Author summary

Why was this study done?

  • Gaps in implementation of evidence-based guidelines can slow progress towards major health systems goals, such as the elimination of mother-to-child transmission of HIV (eMTCT).
  • Continuous quality improvement (CQI) has the potential to improve service delivery with available resources and has been successful in resource-rich settings.
  • There is very limited evidence on the impact of CQI in improving health systems in resource-poor primary care.

What did the researchers do and find?

  • We assigned 7 public-sector primary care clinics in rural South Africa to receive CQI in random sequence.
  • The intervention was delivered by a trained team of local CQI mentors who worked collaboratively with clinic health workers. The mentors and health workers aimed to address root causes of failure to perform HIV viral load (VL) monitoring among pregnant women living with HIV and repeat HIV testing among pregnant women not living with HIV.
  • CQI increased HIV VL monitoring by nearly 40% but did not improve repeat HIV testing.
  • Lay counsellors, who conducted HIV testing and counselling in the primary care clinics, were redeployed during the study period, placing a strain on available human resources for HIV testing.

What do these findings mean?

  • CQI can be effective at improving quality of primary care in resource-poor communities.
  • One explanation for the different effectiveness of CQI on our 2 primary endpoints is that health workers may have perceived VL monitoring to be more important for health outcomes than repeat HIV testing.
  • Resource shortages, particularly staffing shortages, may have prevented CQI from achieving its full potential.
  • Future research should focus on the conditions under which CQI is most likely to be successful.


Introduction

Continuous quality improvement (CQI) is an important approach to improving quality of care and adherence to clinical guidelines in the health sector [1,2]. Originally developed to streamline production processes in the consumer industry [3], CQI was adopted in the 1990s by the healthcare sector to improve organisational systems to create better quality of care and health outcomes [4]. CQI focuses on developing healthcare providers’ capacity to improve quality of care. It consists of a set of adaptable but systematic techniques to diagnose quality problems using real-time data [1,2] and to address these problems with existing resources [5]. These properties make CQI an attractive approach to improving primary care in resource-poor countries and communities, such as in sub-Saharan Africa [6]. Indeed, CQI is increasingly being rolled out at national scale in several countries in sub-Saharan Africa, with the aim of improving quality of care in primary care [7–11]. The focus on quality improvement in primary care is important because a primary care visit is often the first and only contact people have with their local health system when seeking care [12].

To date, only 2 randomised controlled trials (RCTs) have tested whether CQI improves quality of care in primary care systems in sub-Saharan Africa, one in Malawi [13] and the other in Nigeria [14]. Both trials found CQI to be ineffective [13,14]. One possible reason for the lack of effectiveness in these 2 trials is that the investigators evaluated CQI in terms of health or distal healthcare utilisation outcomes—neonatal and perinatal mortality rates in the study in Malawi [13] and 6-month postpartum retention in the study in Nigeria [14]. These endpoints are important, and they were rigorously assessed in both studies. However, these endpoints are quite distal from the CQI activities that took place in the primary care clinics. The variability of these endpoints is therefore likely largely a function of factors outside the sphere of influence of CQI, such as emergency care availability, road infrastructure, and mothers’ educational attainment.

For our RCT of CQI effectiveness, we thus decided to use 2 primary endpoints that are closely and directly linked to CQI activities. Our trial took place in an HIV hyperendemic community in rural South Africa, where antenatal care (ANC) in primary care fulfils 2 important HIV care–related functions: to ensure that all pregnant women living with HIV are (1) identified and (2) successfully treated with antiretroviral therapy (ART). We chose measures that capture the procedural quality of care of these 2 key ANC functions as primary endpoints: viral load (VL) monitoring and repeat HIV testing. Both endpoints are important measures of ANC quality and are necessary conditions for preventing mother-to-child transmission of HIV (MTCT) [15,16]. Ultimately, the purpose of healthcare is to improve health outcomes. Implementation science, however, focuses on the improvement of processes whose intermediate outcomes, such as testing coverage, can be directly influenced by changes to health services and are known to be strong determinants of health outcomes [17].

To capture proximate effects of CQI relevant to successful ART, we chose VL monitoring, because VL is the marker of ART response—and MTCT risk—prescribed in the South African national guidelines for ART and for the elimination of MTCT (eMTCT) [18,19]. Regular VL monitoring enables timely management of virologic failure to achieve eMTCT, including treatment switches and enhanced antiretroviral prophylaxis for HIV-exposed infants [20–22]. Despite the importance of VL monitoring and clear stipulation in the national guidelines [23], most pregnant women on ART are not regularly monitored for VL [24,25].

To capture proximate effects of CQI relevant to the detection of HIV infection, we chose repeat HIV testing, because incident HIV infection in pregnancy increases risk of MTCT as VL peaks shortly after infection [26]. In South Africa, where antenatal HIV incidence is high [27,28], early diagnosis of incident HIV through repeat testing during pregnancy is a critical step towards eMTCT [29]. Yet national guideline recommendations for retesting pregnant women who tested HIV negative during a prior HIV test in ANC are not well adhered to [26].

Our stepped-wedge cluster RCT aimed to establish the effectiveness of CQI on important proximate indicators of the quality of HIV-related ANC in primary care. We hypothesised that a CQI intervention would be effective at improving (i) VL monitoring in pregnant women living with HIV and (ii) repeat HIV testing in pregnant women not living with HIV.


Methods

Details of the Management and Optimisation of Nutrition, Antenatal, Reproductive, Child health & HIV care (MONARCH) implementation project and study have been previously published [30] and are summarised below.

Study setting

The Africa Health Research Institute (AHRI) at Somkhele (previously known as the Africa Centre for Population Health) is located in a rural community in northern KwaZulu-Natal, South Africa. Our CQI intervention was conducted at 7 nurse-led South African National Department of Health (DoH) primary care clinics: 6 were located within the geographic bounds of the AHRI Population Intervention Platform Surveillance Area (PIPSA) South [31], and 1 clinic was located in the market town of Mtubatuba, which is often used by PIPSA residents (Fig 1). Management of the primary care clinics is overseen by Hlabisa Hospital, the local district hospital. HIV prevalence amongst women of reproductive age in this area is approximately 37% [32]. Additional contextual information is described in the Supporting Information: laboratory results workflow (S1 Text), clinic size (S1 Table), and staffing (including lay counsellors; S2 Table). The intervention was delivered by an external CQI team of mentors (including 2 isiZulu-speaking nurses) from the Centre for Rural Health (CRH) at the University of KwaZulu-Natal, who travelled to the study community (hereafter referred to as the CRH team). The mentors were closely supported by an improvement advisor (consultant obstetrician), a scientific advisor, and a data manager.

Fig 1. Participating study clinics located within AHRI PIPSA.

The PIPSA is depicted in white and covers 438 km2. Primary care clinics and the local district hospital, Hlabisa Hospital, are marked with a red cross. Source credit: Sabelo Ntuli, AHRI Research Data Management. AHRI, Africa Health Research Institute; PIPSA, AHRI Population Intervention Platform Surveillance Area.

Trial design

We conducted a stepped-wedge cluster RCT (ClinicalTrials.gov; NCT02626351) from 15 July 2015 to 30 January 2017. Each clinic formed a cluster except for the 2 smallest clinics, which were merged into one cluster. After a 2-month baseline data collection period, the first cluster rolled over to the intervention on 29 September 2015. Each subsequent cluster rolled over from control to intervention in random order every 2 months (Fig 2).

Fig 2. Study design and endpoint observations by actual randomisation sequence.

Primary care clinics provided pre-intervention data until each rolled over to the CQI intervention in random order. All clinics provided data continuously throughout the study period. Baseline data collection across all clinics occurred from 15 July 2015 to 28 September 2015 (Step 0). As ANC data were captured retrospectively at delivery, the total observation period exceeded the data collection period by approximately 6 months. Width of each step is proportional to the number of months under observation. The baseline period (pre-intervention, depicted in light blue) contributed approximately 8 months, and the endline (Step 7) contributed approximately 4.5 months [30]. Intervention steps (intensive CQI phase, 2-month step) are depicted in medium blue. ANC, antenatal care; CQI, continuous quality improvement.

Trial registration occurred after the baseline and the first step of this 8-step stepped-wedge RCT, on 10 December 2015. The reason for this timing was that it became clear during the baseline that a rigorous scientific evaluation of the CQI intervention would be feasible and desirable for both government and implementing partners (S1 Text). The stepped-wedge design was selected for both pragmatic and ethical reasons [30]. The description of our results follows the 2018 Consolidated Standards of Reporting Trials (CONSORT) extension for stepped-wedge RCTs [33] (S1 CONSORT Checklist).


Intervention

We use the Template for Intervention Description and Replication (TIDieR) to describe the intervention in detail (Table 1) [34]. Briefly, the intervention focused on developing the capacity of local ANC health workers in study clinics and aimed to improve implementation of the national eMTCT guidelines. The intervention was based on the Institute for Healthcare Improvement (IHI) breakthrough collaborative CQI model [35]. Clinical processes and resources were first ascertained during situational analyses conducted in the 2-week lead-up to intervention rollover. CQI tools provided a structured approach to process change with clearly defined goals and activities and were implemented flexibly based on need (Table 1). Patient care pathways in the clinic were documented using process maps [36] to identify areas for improvement (e.g., filing of VL results). Barriers and enablers of target endpoints (e.g., VL monitoring) were identified with fishbone diagrams [37], providing the opportunity for comprehensive, clinic-wide improvement. Improvement activities were reviewed using iterative Plan-Do-Study-Act (PDSA) cycles during which “one learns from taking action” in real time (tests of change), rather than awaiting the results of a formal research study [38]. Run charts—outcome time trends plotted during the course of improvement activities—provided visual feedback on whether any changes were likely due to the intervention [39]. As part of the IHI breakthrough collaborative model, the CRH CQI mentors also conducted action learning sessions to consolidate learning, share experiences between healthcare facilities, and motivate collaboration.

Table 1. MONARCH CQI intervention description: TIDieR framework.

The intervention delivery according to the stepped-wedge study design is described in Table 1. The CRH team delivered CQI intensively to each cluster during the 2-month intervention step and then continued with the less intensive intervention during the maintenance phase (Table 1, Fig 2). They delivered a standard “dose” of approximately 19 visits during the intervention phase (2–3 visits per week) and continued with approximately monthly visits during the maintenance phase (Fig 2) for ongoing support and mentorship. The CRH mentors held action learning sessions at the end of each intervention step. In typical CQI delivery, health workers from all clinics in an intervention community would concurrently engage in the intervention and attend all action learning sessions. In our CQI delivery, health workers participated in the intervention and the action learning sessions only during the phase when their clinic was in the intervention arm of our trial. Table 1 further describes materials; procedures; how and where the intervention was delivered (including duration and timing of CQI visits); and how we measured “dose,” “reach,” and fidelity of the intervention [41].


Comparator

During control steps of the study design, health workers continued providing antenatal and postnatal care as usually implemented within routinely available resources.



Clusters were defined as described earlier. ANC health workers in clusters participated in CQI based on availability and ability to commit to CQI, ideally for the entire study period. The CQI mentors tried to recruit health workers in leadership roles (e.g., operational managers, professional nurses) to clinic CQI teams to increase the likelihood that CQI activities would continue after the end of the intervention.


For the primary endpoints, all women aged ≥18 years were eligible for recruitment at delivery if they were resident in the PIPSA area during pregnancy and/or had ever attended any of the 7 study clinics for ANC in pregnancy.

Randomisation and blinding

We have described our randomisation procedure in detail elsewhere [30]. Briefly, the unit of randomisation was the cluster, balanced by patient volume. A senior biostatistician external to the study team performed the randomisation of all clusters during the baseline and before the first intervention step. Investigators and healthcare workers in the clusters were blinded to randomisation until the AHRI Chief Information Officer revealed each randomised cluster to the AHRI study team 2 weeks prior to the scheduled intervention rollover date for each cluster.



Endpoints

The pre-specified registered primary endpoints were indicators of quality of care in HIV-related ANC: (i) VL monitoring among pregnant women living with HIV and (ii) repeat HIV testing among pregnant women not living with HIV. We report the intervention impact on both primary endpoints in this manuscript. We will analyse and report the secondary endpoints elsewhere.

Data sources.

The data on our primary endpoints were sourced entirely from routine patient medical records (maternity case records) [30], which were photographed at delivery. All clusters provided pre-CQI, CQI implementation, and post-CQI data continuously throughout the study. As the maternity case records were first accessed after delivery, ANC data were captured retrospectively—this extended the baseline observation period by an additional 6 months, resulting in a total data collection period of 19 months and a total observation period of 25 months. The period after all clusters had received the CQI intervention was 4.5 months (Fig 2). We used a Research Electronic Data Capture (REDCap) study database for data entry [42]. We collected outcome data continuously over the study at all 7 primary care clinics participating in this study. We also collected outcome data at Hlabisa Hospital maternity ward, because most women living in the study subdistrict deliver at this hospital.

We also conducted a process evaluation to better understand intervention delivery and explain our primary findings. For this, we sourced field notes and reports by the CRH team collated every 2 months. The reports described actual visit dates and type, results of the root-cause analyses, the improvement interventions (including PDSA cycles), successes and challenges, as well as other observations, including impressions of health worker receptivity to CQI (Table 1). We further conducted semi-structured interviews with consenting health workers on their experiences of implementing CQI. We describe in detail the methods, data, and results of the process evaluation in an upcoming scientific publication.

Statistical methods

Power calculation and sample size.

As we describe in our protocol paper [30], we assumed for our baseline power calculation—informed by local routine data—that without CQI 40% of all pregnant women living with HIV would receive a test for VL monitoring and 65% of pregnant women not living with HIV would receive a repeat HIV test. We further assumed that half of all pregnant women would be HIV positive and that pregnant women would make 3 ANC visits. We assumed an intracluster correlation coefficient (ICC) of 0.10, which is a conservative assumption compared to other ICCs measured in similar settings [43]. We assumed missing data from 15% of enrolled women. If we enrolled a total of 1,260 pregnant women (i.e., 630 women living with HIV and 630 women not living with HIV), we estimated 80% power to detect at least a 15-percentage-point increase in our 2 primary endpoints at the 5% significance level [30]. In discussion with local stakeholders, we identified this minimum detectable difference over the course of a pregnancy as relevant for health policy and clinical practice.
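To illustrate how the ICC assumption enters such a calculation, the sketch below applies a conventional design-effect inflation to a standard two-proportion sample size. This is a simplification for illustration only: the protocol's actual stepped-wedge power calculation accounts for the step structure and is described in [30]; the function names and the two-proportion formula are ours.

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_arm(p0, p1, alpha=0.05, power=0.80):
    """Standard two-proportion sample size per arm (normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    pbar = (p0 + p1) / 2
    num = (z_a * sqrt(2 * pbar * (1 - pbar))
           + z_b * sqrt(p0 * (1 - p0) + p1 * (1 - p1))) ** 2
    return ceil(num / (p1 - p0) ** 2)

def clustered_n(p0, p1, visits=3, icc=0.10, missing=0.15):
    """Inflate by the design effect for correlated repeat visits,
    then by the assumed proportion of missing data."""
    deff = 1 + (visits - 1) * icc  # design effect: 1.2 under the trial's assumptions
    return ceil(n_per_arm(p0, p1) * deff / (1 - missing))

# VL monitoring: detect a 15-percentage-point increase from 40%
n_vl = clustered_n(0.40, 0.55)
# Repeat HIV testing: detect a 15-percentage-point increase from 65%
n_test = clustered_n(0.65, 0.80)
```

These simplified per-arm numbers come out well below the protocol's enrolment target of 1,260 women, because the stepped-wedge design and its between-step correlation structure require additional adjustments beyond a simple design effect.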

Statistical analyses.

We performed intention-to-treat (ITT) analyses based on the clinic attended at the first antenatal booking visit—individuals declared their “intention” to attend that same facility for the remainder of pregnancy. Although it is well established that the AHRI surveillance population is mobile [31], ITT assumes exposure to a single clinic for the entire duration of pregnancy regardless of actual attendance elsewhere. All participants were assigned CQI exposure status at each ANC visit (by the actual date of that visit) according to the exposure status of their clinic at that time. Participants whose assigned clinic rolled over to CQI during their ANC thus had 1 or more initial ANC visits that were CQI unexposed and 1 or more later visits that were CQI exposed. The beginning of each step (CQI rollover date) was defined as the date of the first actual CQI intervention visit in the randomised cluster.
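The per-visit assignment rule amounts to a simple date comparison, sketched below. The rollover date shown is the trial's first cluster rollover; the visit dates and function name are hypothetical.

```python
from datetime import date

def cqi_exposed(visit_date, clinic_rollover_date):
    """ITT exposure status for an ANC visit: exposed iff the visit falls on
    or after the first actual CQI intervention visit at the woman's booking
    clinic, regardless of where she actually attended."""
    return visit_date >= clinic_rollover_date

rollover = date(2015, 9, 29)                            # first cluster's rollover
early_visit = cqi_exposed(date(2015, 8, 15), rollover)  # before rollover: unexposed
later_visit = cqi_exposed(date(2015, 11, 2), rollover)  # after rollover: exposed
```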

The binary VL monitoring endpoint was measured in pregnant women living with HIV and defined as a documented VL test performed at a particular visit. Each ANC visit was eligible for an endpoint assessment on or after the first documented HIV-positive status irrespective of whether ART was initiated or continued in pregnancy and irrespective of actual VL results. This definition accounts for real-life imperfections in adherence to guidelines and documentation of ART prescriptions. Women who seroconverted from HIV-negative to HIV-positive status during the study were not included in the analysis of the VL monitoring endpoint.

The binary repeat HIV testing endpoint was measured in pregnant women not living with HIV and defined as a subsequent documented HIV test at a particular visit. Each ANC visit following the first documented negative HIV test was eligible for assessment of this endpoint. ANC visits among women who subsequently tested HIV positive were not eligible for the repeat HIV testing endpoint after the first documented HIV-positive test. We did not restrict our endpoint definitions by visit number or gestation, to allow for real-life imperfections in adherence to guidelines.
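The two visit-level eligibility rules can be sketched as follows. The data representation is ours, and we read the definitions as including the visit at which the first positive result is documented in the repeat-testing denominator (visits only become ineligible after that test); the woman-level exclusion of seroconverters from the VL endpoint is not shown.

```python
def vl_monitoring_eligible(visits):
    """Indices of visits eligible for the VL monitoring endpoint: every
    visit on or after the first documented HIV-positive status.
    visits: chronological HIV result per visit ("neg", "pos", or None)."""
    eligible, seen_pos = [], False
    for i, status in enumerate(visits):
        if status == "pos":
            seen_pos = True
        if seen_pos:
            eligible.append(i)
    return eligible

def repeat_testing_eligible(visits):
    """Indices of visits eligible for the repeat HIV testing endpoint:
    every visit after the first documented negative test, up to and
    including the visit with the first positive test (if any)."""
    eligible, seen_neg, seen_pos = [], False, False
    for i, result in enumerate(visits):
        if seen_neg and not seen_pos:
            eligible.append(i)
        if result == "neg":
            seen_neg = True
        elif result == "pos":
            seen_pos = True
    return eligible
```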

Primary analyses.

The 2018 extension of the CONSORT statement for stepped-wedge cluster RCTs states that “in addition to reporting a relative measure of the effect of the intervention, it can be helpful to report an absolute measure of the effect” [33]. The reason for this recommendation is that “relative measures of the effects are often more stable across different populations” [33]. Relative measures are therefore more useful than absolute measures for policy makers considering transferring an intervention from one context to another. In contrast, “absolute measures of effects are more easily understood” [33]. We follow these recommendations and report and interpret both the relative and absolute effect sizes as our primary analyses.

For the primary analysis, we estimated the relative effect sizes using modified Poisson mixed effects generalised linear regression models. Modified Poisson regression has become a standard for the analysis of RCTs with binary outcomes [44,45] because it has advantages over logistic and log binomial regression models [44,46]. One advantage of modified Poisson regression is that it directly generates risk ratios rather than the odds ratios that logistic regression generates. Risk ratios are easier to interpret than odds ratios and, unlike odds ratios, are collapsible [47–49].

Another advantage of modified Poisson regression is that it does not suffer from the convergence problems that commonly arise with another approach to estimating risk ratios, log-binomial regression [50–52]. In addition, modified Poisson regression models are more robust to model misspecification than log-binomial regression models [53]. A disadvantage of modified Poisson regression is that it can produce predicted probabilities greater than unity. However, several simulation studies have shown that modified Poisson regression provides risk ratio estimates equivalent to regression models that cannot produce such predicted probabilities, such as log-binomial regression [50,54–56]. Modified Poisson regression models are thus a good choice for estimating risk ratios in RCTs [45]. To measure absolute effect sizes, we used mixed effects linear probability regression models.
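As a minimal numerical illustration of the risk ratio such models target: with a single binary exposure, a Poisson model with robust (sandwich) standard errors reproduces the empirical risk ratio and a closed-form confidence interval, sketched below. The counts are hypothetical; the trial's actual models were mixed effects regressions with time-step fixed effects, fitted in Stata.

```python
from math import log, exp, sqrt
from statistics import NormalDist

def risk_ratio(events_exp, n_exp, events_ctl, n_ctl, alpha=0.05):
    """Empirical risk ratio with a sandwich-type CI on the log scale.
    With one binary exposure, modified Poisson regression with robust
    standard errors reduces to exactly this estimate."""
    p1, p0 = events_exp / n_exp, events_ctl / n_ctl
    rr = p1 / p0
    # Robust variance of log(RR): (1-p1)/a + (1-p0)/c
    se = sqrt((1 - p1) / events_exp + (1 - p0) / events_ctl)
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return rr, (exp(log(rr) - z * se), exp(log(rr) + z * se))

# Hypothetical counts: VL test done at 300/600 exposed vs 220/610 control visits
rr, (lo, hi) = risk_ratio(300, 600, 220, 610)
```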

We used 3 models to estimate the effect of the CQI intervention. First, we used the standard Hussey and Hughes model, which includes time-step fixed effects and clinic random effects (Model 1) [57]. Second, we used an extension to this model with a nested random clinic–time step interaction term (Model 2). This extension is recommended by Hemming and colleagues [58], because it allows secular time trends to vary randomly by clinic. Third, we extended the standard Hussey and Hughes model with individual random effects nested within clinic random effects (Model 3). This model accounts not only for clustering of outcomes by clinic but also for clustering of outcomes within individual women across time steps. In all models, we used cluster robust standard errors to further adjust for clustering and model misspecification [44,45].
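In schematic form (notation ours), with $i$ indexing clinics, $j$ time steps, and $k$ individuals, $X_{ij}$ the CQI exposure indicator, and $\theta$ the log risk ratio of interest, the 3 specifications on the log link can be written as:

```latex
% Model 1 (Hussey & Hughes): time-step fixed effects and clinic random effects
\log \mathbb{E}[Y_{ijk}] = \mu + \beta_j + \theta X_{ij} + a_i,
    \quad a_i \sim N(0, \tau^2)

% Model 2: adds a nested clinic-by-time-step random interaction
\log \mathbb{E}[Y_{ijk}] = \mu + \beta_j + \theta X_{ij} + a_i + (a\beta)_{ij},
    \quad (a\beta)_{ij} \sim N(0, \gamma^2)

% Model 3: adds individual random effects nested within clinics
\log \mathbb{E}[Y_{ijk}] = \mu + \beta_j + \theta X_{ij} + a_i + u_{ik},
    \quad u_{ik} \sim N(0, \sigma_u^2)
```

The linear probability models used for absolute effects take the same form with the identity link, i.e., $\mathbb{E}[Y_{ijk}]$ in place of $\log \mathbb{E}[Y_{ijk}]$.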

In addition to the per-visit effect sizes described earlier, we also computed the cumulative absolute probabilities of attaining our endpoints. For this purpose, we used the per-visit absolute effect sizes measured in each model and applied the exponential formula to estimate the cumulative probabilities across the median number of visits during a pregnancy [59].
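One common form of this calculation, assuming for illustration that visit-level probabilities are independent, is the complement rule across visits; the exact formula the authors applied is given in reference [59], and the probability below is hypothetical.

```python
def cumulative_probability(p_per_visit, n_visits):
    """Probability of attaining the endpoint at least once over
    n_visits visits, each with per-visit probability p_per_visit."""
    return 1 - (1 - p_per_visit) ** n_visits

# e.g., a hypothetical per-visit probability of 0.30 over 3 ANC visits
p_cum = cumulative_probability(0.30, 3)   # approximately 0.657
```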

Sensitivity analyses.

For both of our primary analyses—estimating relative and absolute risk—we also ran regressions with additional control variables. We added (i) maternal age and parity; (ii) gestation at each ANC visit and total number of ANC visits attended in pregnancy; and (iii) maternal age, parity, gestation at each ANC visit, and total number of ANC visits.

Analysis of effect heterogeneity.

We measured CQI effect heterogeneity by duration of CQI exposure, using the same mixed effects regression models as in the primary analyses but replacing the fixed effect for overall CQI exposure with fixed effects for CQI exposure for each time step since rollover to CQI.
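In code, this amounts to replacing the single CQI indicator with one indicator per time step since rollover (a hypothetical sketch on simulated data; variable names are illustrative, and each step corresponds to 2 months):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy stepped-wedge data: each clinic crosses over at a different step
rng = np.random.default_rng(2)
df = pd.DataFrame({"clinic": rng.integers(0, 7, 4000),
                   "step": rng.integers(0, 8, 4000)})
df["crossover"] = df["clinic"] + 1           # clinic-specific rollover step
df["y"] = rng.binomial(1, 0.3, len(df))
elapsed = df["step"] - df["crossover"]       # steps since rollover (<0 = pre-CQI)

# One indicator per exposure duration instead of a single CQI indicator
df["lag0"] = (elapsed == 0).astype(int)      # immediately after rollover
df["lag1"] = (elapsed == 1).astype(int)      # 2 months of exposure
df["lag2"] = (elapsed == 2).astype(int)      # 4 months of exposure
df["lag3plus"] = (elapsed >= 3).astype(int)  # 6+ months of exposure

# Same mixed effects structure as before; pre-CQI visits are the reference
m = smf.mixedlm("y ~ lag0 + lag1 + lag2 + lag3plus + C(step)", df,
                groups=df["clinic"]).fit()
```

Each lag coefficient then estimates the CQI effect at that duration of exposure relative to unexposed visits.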

We used Stata version 15.0 (StataCorp LLC, College Station, TX) for all statistical analyses.

Ethics approval for the study was obtained from the University of KwaZulu-Natal Biomedical Research Ethics Committee (reference BE209/14). The ethics approval included a waiver of the requirement for individual consent to access routine clinical data from maternity case records, excluding labour and delivery clinical notes. Engagement meetings were held with subdistrict and district-level DoH partners prior to study commencement to share our study objectives and introduce the intervention. Standard DoH approvals for commencing the study were also obtained as part of a Memorandum of Understanding between AHRI and the DoH. Following the analyses of our results and prior to the publication of this paper, we held engagement workshops with the sub-district and district-level DoH partners, as well as with the primary funders of this study (the Delegation of the European Commission to South Africa). During these workshops, we jointly interpreted our findings and derived policy recommendations.

Although this is a low-risk health systems implementation trial, an independent Data Safety and Monitoring Board (DSMB) annually reviewed study progress. No adverse event data were formally collected, because the intervention targeted health workers in clinical facilities.

Results
All 7 primary care clinics and Hlabisa Hospital maternity ward agreed to participate in the study. Maternity case records from 2,498 women who delivered between 15 July 2015 and 30 January 2017 were accessed. The clinic nurses and local DoH staff were continuous partners in this intervention throughout the study duration. No adverse events, complaints, or other deleterious effects of the CQI intervention were reported. The actual intervention rollover dates of clusters are shown in S1 Table.

The health workers appreciated the CQI intervention. For instance, the CRH team observed that the clinic health workers who participated in the CQI intervention were enthusiastic about their increased capacity to improve quality of services. Several clinic health workers expressed an interest in further training and mentoring on quality improvement tools and approaches. Finally, clinic health workers urged the CRH team to continue to work with them on quality improvement beyond the end of this study. We will report further details on these findings in an upcoming publication on the process evaluation that accompanied this trial. Preliminary results of situational analyses (conducted prior to intervention rollover) identified staffing shortages as well as lack of formal tracking systems for patients—and for test results—as potential impediments to the intervention activities described in Table 1.

Most maternity case records contained ANC clinical information. Among the 2,498 women recruited to this study (Fig 3), 5 had blank maternity case records because their previous records had been lost, and a further 19 had blank records because they never attended ANC. Of all women recorded in our study database, 2,160 participants were eligible for our primary ITT analysis (contributing 13,212 observations) during pregnancy: 338 women were excluded from our primary analysis because their HIV status at delivery was unknown, their first ANC clinic was not a study clinic, or their first ANC clinic was unknown (Fig 3).

Fig 3. Participant flow diagram.

Of the 2,160 participants, women were assigned to the analysis of each endpoint based on their first documented HIV status: 1,011 women whose first documented status was HIV positive were analysed for the VL monitoring endpoint, and 1,149 women with a negative HIV test at first documented status were analysed for the repeat HIV testing endpoint, including 12 women who subsequently seroconverted to HIV-positive status (up to and including the date of seroconversion). AHRI, Africa Health Research Institute; ANC, antenatal care; DoH, South African National Department of Health; ITT, intention-to-treat; PIPSA, AHRI Population Intervention Platform Surveillance Area; VL, viral load.

We estimated the total number of women living with HIV who attended a study clinic at their first ANC visit to be 1,026 (1,011 with documented HIV status plus 15 women with unknown HIV status by delivery), of whom 99% were diagnosed HIV positive—and could therefore start ART.

Median age at first booking was 25 (interquartile range [IQR] 21 to 30), and median gestation at first booking was 19 weeks (IQR 15 to 24). Median parity was 1 (IQR 0 to 2). Median number of antenatal visits was 6 (IQR 4 to 8), and 86.9% of women (95% CI 82.5% to 90.3%) attended at least 4 ANC visits during pregnancy. Around half the women (50.3%, 95% CI 45.1% to 55.4%) attended their first ANC visit before 20 weeks’ gestation. HIV prevalence at first ANC visit was 47.4% (95% CI 42.3% to 52.7%) and varied by age group (S1 Fig). There were no major differences in individual participant baseline variables by arm (Table 2), i.e., the 2 arms were balanced on observed baseline characteristics. Among all eligible participants, 822 of 2,160 (38.1%) were never exposed to CQI during their pregnancy (i.e., during all of their ANC clinic visits the clinic[s] had not yet received the CQI intervention), 870 of 2,160 (40.3%) were fully exposed to CQI (i.e., during all of their ANC clinic visits the clinic[s] had already received the CQI intervention), and the remaining participants were partially exposed (i.e., during initial ANC clinic visits the clinic[s] had not yet received the CQI intervention, while during later ANC clinic visits the clinic[s] had received the intervention).

Table 2. Summary of participant characteristics by CQI intervention exposure status.

Pregnant women living with HIV

We analysed 1,011 women whose first documented HIV status was HIV positive. At any time in pregnancy, 93.8% of women living with HIV had a documented ART prescription, and 56.3% had received a VL test. Among those women who had received a VL test, only 52.4% had a documented result, of which 85.2% were <200 copies/mL (Table 3). ART prescriptions, VL monitoring, and suppression in late pregnancy are presented in Table 3.

Table 3. Descriptive outcomes aggregated across all participants.

Pregnant women not living with HIV

Among the 1,149 women with an initial negative HIV test, 768 of 1,149 (66.8%) had at least one repeat HIV screen during pregnancy. Repeat HIV testing within 3 months prior to delivery is presented in Table 3. Twelve women (1.0%) seroconverted during pregnancy.

Effect of CQI on HIV care tests

Primary analyses—Pregnant women living with HIV.

Women exposed to CQI were significantly more likely to receive a VL test: the relative effect sizes (relative risks [RRs]) in our 3 primary analyses ranged from 1.38 to 1.39 (Fig 4, Table 4) and were all highly significant (all p < 0.001). The absolute effect sizes of receiving a VL test per ANC visit in our 3 primary analyses ranged from 3.9 to 4.1 percentage points higher (Table 5) in the intervention compared to the control arm. With a median of 6 ANC visits per pregnancy in this study, the absolute effect size per visit translates to a cumulative probability of receiving a VL test in pregnancy of 58% in the intervention arm compared to 46% in the control arm. With the recent WHO recommendations for at least 8 ANC visits [60], these values are equivalent to a cumulative probability of receiving a VL test of 69% in the intervention arm compared to 57% in the control arm.

Fig 4. Effects of CQI on VL monitoring and repeat HIV testing.

Model 1 includes time-step fixed effects and clinic random effects. Model 2 includes time-step fixed effects, clinic random effects, and a random clinic–time step interaction term. Model 3 includes time-step fixed effects, clinic random effects, and individual random effects. CQI, continuous quality improvement; VL, HIV viral load.

Table 4. Regression models for HIV VL monitoring and repeat HIV testing: RR.

Table 5. Regression models for HIV VL monitoring and repeat HIV testing estimating absolute risk difference.

Primary analyses—Pregnant women not living with HIV.

CQI did not have a statistically significant effect on repeat HIV testing in any of our 3 primary analyses. The RRs ranged from 1.00 to 1.01, and the corresponding p-values ranged from 0.877 to 0.958 (Table 4, Fig 4). These numbers are equivalent to an absolute change in the probability of receiving a repeat HIV test per ANC visit of 0.0 to 0.3 percentage points (all p ≥ 0.758, Table 5). With a median of 6 ANC visits in this study, this effect size per visit translates to a 59% to 60% cumulative probability of receiving a repeat HIV test during pregnancy in both the intervention and the control arm. With the WHO-recommended 8 ANC visits, these numbers translate to 69% to 70% cumulative probability of receiving a repeat HIV test in both arms.

Sensitivity analyses.

When we adjusted for additional covariates, all effect size estimates remained essentially the same as in the primary analyses. RRs for HIV VL monitoring ranged from 1.36 to 1.37 (all p < 0.001) with adjustments for maternal age and parity, 1.39 to 1.40 (p-values <0.001 to 0.001) with adjustments for gestation at each visit and total number of ANC visits, and 1.35 to 1.36 (p-values 0.005 to 0.009) with adjustments for all 4 covariates. The absolute effect sizes for VL monitoring per ANC visit were 3.7 to 4.0 percentage points higher (all p < 0.001) with adjustments for maternal age and parity, 4.4 to 4.6 percentage points (all p < 0.001) with adjustments for gestation at each visit and total number of ANC visits, and 4.2 to 4.4 percentage points (all p ≤ 0.008) with adjustments for all 4 covariates.

RRs for repeat HIV testing ranged from 1.00 to 1.01 (all p ≥ 0.871) with adjustments for maternal age and parity, and were 1.08 (all p ≥ 0.525) with adjustments for gestation at each visit and total number of ANC visits, and 1.08 (all p ≥ 0.504) with adjustments for all 4 covariates. The absolute effect sizes for repeat HIV testing per ANC visit ranged from 0.0 to 0.3 percentage points (all p ≥ 0.757) with adjustments for maternal age and parity, 1.1 to 1.3 percentage points (all p ≥ 0.457) with adjustments for gestation at each visit and total number of ANC visits, and 1.1 to 1.3 percentage points (all p ≥ 0.445) with adjustments for all 4 covariates.

CQI effect heterogeneity.

There was variability in CQI effect on VL monitoring with time since rollover to the intervention, in all 3 mixed effects models. Immediately after rolling over to the CQI intervention, the relative effect size was 1.17 (p = 0.071) in Model 1. After 2 months of CQI exposure (lag of 1 step) and 4 months (lag of 2 steps), the relative effect sizes were 1.42 (p = 0.021) and 1.45 (p = 0.015), respectively. There was no significant CQI effect beyond 6 months of exposure (Table 6). Similarly, the absolute effect sizes varied with the same durations of exposure to CQI in Model 1: 1.7 percentage points (p = 0.070) with immediate exposure, 4.4 percentage points (p = 0.038) after 2 months of exposure, 4.9 percentage points (p = 0.035) after 4 months of exposure, and no significant change beyond 6 months of exposure (Table 6). The results were essentially the same with Models 2 and 3 for relative and absolute effect sizes. There was no heterogeneity in CQI effect on repeat HIV testing by duration of CQI exposure in either the relative or absolute measures (Table 6).

Table 6. CQI effect heterogeneity by time since rollover to CQI.

Discussion
Our stepped-wedge cluster RCT conducted under real-life conditions in rural South Africa showed that CQI improved antenatal HIV VL monitoring but did not improve repeat HIV testing. To our knowledge, this is the first RCT to show that CQI can improve quality of primary care in sub-Saharan Africa.

The significant improvement in the first of our 2 primary endpoints—VL monitoring—suggests that CQI may indeed be a good approach to improve quality of care in primary care. This finding is important for policy: CQI is an attractive approach for resource-poor settings because it works with local health workers and within existing resources. Moreover, CQI is already commonly used to improve quality of care in many resource-poor countries and communities, including sub-Saharan Africa [711]. Our trial provides evidence for these policies.

Both relative and absolute effect sizes are important for policy. We found a large relative effect of the CQI intervention on VL monitoring: CQI caused an almost 40% increase in VL monitoring. Relative effect sizes are commonly more stable across populations with different baseline attainments of an endpoint [61]. VL monitoring varies widely across ANC and HIV treatment programmes in sub-Saharan Africa [62,63]. Policy makers should therefore consider our relative effect size in the context of their health system’s achievement in measuring this critical indicator of success in interrupting transmission of HIV from mother to child. In absolute terms, our effect size was relatively small on a per-visit basis, about 4 percentage points per visit. However, it is well recognised that multiple ANC visits are required for maternal and neonatal health [60]. For the median number of 6 ANC visits in this community, we estimate a cumulative absolute effect size of about 12 percentage points. This effect size is close to the threshold of 15 percentage points for clinically meaningful effects that we used in our power calculations.

We also found heterogeneous effects of CQI on VL monitoring by duration of the cluster’s exposure to CQI: effects reached their maximum 2 to 4 months after intervention rollover and waned thereafter. The time pattern we see here can be explained by a “natural course” of innovation—initially health workers need to learn to implement innovations to maximum effectiveness; over time, the need for the innovations (e.g., novel processes supporting VL monitoring) declines as more and more patients have benefited. At the same time, the waning effects following a maximum could also indicate health worker fatigue in implementing an innovation that has now become routine. This latter explanation would argue for implementation research on approaches to maintain long-term innovation effects.

However, our findings also emphasise that it is important to better understand the contextual determinants and potential mechanisms of CQI effects, because CQI did not have an effect on our second primary endpoint, repeat HIV testing. There are several plausible reasons for the difference in CQI effects. The first reason is perceptions of clinical impact. For VL monitoring, impact is likely very salient—lower maternal mortality, better maternal health, and reduced MTCT. In contrast, for repeat HIV testing, impact may be perceived as more “distal” to clinical outcomes and thus less important. CQI activities aimed at increasing repeat HIV testing may therefore not have resonated as strongly with health workers as did CQI activities aimed at improving VL monitoring. A related issue is patient perceptions or behaviours. For example, women not living with HIV may have less perceived risk of infection [64] or concerns about negative partner attitudes if they received a positive HIV diagnosis [65]. Presenting late to ANC for the first time may also have been a barrier to receiving a second HIV test for some women, because the guidelines stipulate an interval of 3 months between HIV tests.

The second potential reason is the differential baseline attainment of our 2 primary endpoints—about one-third for VL monitoring and about two-thirds for repeat HIV testing. While our power calculations indicated that our trial was sufficiently powered to detect significant CQI effects on both primary endpoints, it might have been practically easier to identify and address problems in achieving high levels of coverage with VL monitoring compared to repeat HIV testing, because VL monitoring involves more processes and complexity than repeat HIV testing—thus, CQI that addresses process quality had fewer opportunities to change practice in repeat HIV testing. For instance, VL monitoring involved identifying women eligible for the test, phlebotomy, following up and documenting results, and retrospectively capturing routine ART clinic data onto the national monitoring and evaluation database (TIER.Net)—CQI could address flaws in most of these processes. During each visit, women living with HIV may have interacted with different staff cadres in the clinic—nurses for clinical management or lay counsellors for adherence counselling—with additional opportunities to identify eligible women for testing during the clinic appointment. Other staff (e.g., data capturers) may have also identified women eligible for VL monitoring while filing previous VL results or capturing routine data onto TIER.Net after the visit. Conversely, repeat HIV testing involved a simpler workflow: identifying eligible women, conducting the rapid test, and immediately documenting the result, with only one type of staff cadre involved (usually a lay counsellor).

The final potential reason for the difference in CQI effects is that, during the study period, many of the HIV lay counsellors, who were carrying out HIV testing in primary care clinics, lost their employment in the public-sector health system in South Africa because of a lay counsellor re-deployment policy [66]. The human resources for HIV testing in primary care were thus diminished over the study period. In contrast, nurses carried out VL monitoring in the South African health system, and their numbers remained stable over the study period. The differential CQI effectiveness on our 2 primary endpoints may thus have been caused by underlying differences in the availability of the human resources needed to effectively implement endpoint-specific CQI activities.

The 2 other trials of CQI in primary care in sub-Saharan Africa did not find significant effects, likely due to selection of endpoints that were relatively distal to CQI activities. As alluded to previously, the variability of these endpoints—neonatal and perinatal mortality in Malawi [13] as well as a distal healthcare utilisation outcome, postpartum retention, in Nigeria [14]—may be a function of many factors that CQI cannot be expected to influence. Our endpoints were more proximate to CQI, thereby making it easier to demonstrate a direct CQI effect.

In general, however, the 2 prior studies and ours jointly point towards the need to ensure sufficient resources and sufficiently good underlying health systems processes for CQI to be effective. For instance, staff turnover or redeployment during CQI initiatives are likely to impede CQI effectiveness, whereas clinic health worker willingness to engage with CQI, sufficient protected time for clinic health workers to work with CQI mentors, and relevance of CQI initiatives to local needs are likely enablers of CQI effectiveness [13, 14, 67].

CQI implementation and resource investments

We observed that health workers maintained positive attitudes towards CQI and our CQI mentors during the study. In particular, health workers were enthusiastic to learn CQI tools and improve quality of services and were keen to continue working with the CRH team.

CQI is a systematic process for identifying problems and devising, testing, and revising potential solutions to quality shortcomings. The different management techniques used during this process are highly standardised and time-tested, including process mapping, fishbone diagrams, and PDSA cycles. Deviations from the overall process and these individual techniques—by the CQI mentors implementing them or by clinic health workers utilising them—would imply low fidelity. Based on our observations, CQI mentors implemented the CQI intervention with high fidelity. However, clinic health workers’ implementation of the CQI process and techniques was of somewhat lower fidelity. Clinic health workers could not always participate in CQI meetings and failed to complete some of the CQI tasks, despite their general enthusiasm for the intervention. The reasons for these deviations were mostly rooted in structural capacity constraints in this rural South African health system, including staffing shortages, rapid staff turnover, and competing clinical commitments [68].

The CQI process and management techniques are intended to be the same across clinics and communities; the solutions that they identify, test, and revise are intended to be tailored to each clinic and will therefore differ across contexts. The solutions that each clinic identified and implemented did indeed differ, even though all participating clinics were located in the same district in rural KwaZulu-Natal. We will report further details on these results in an upcoming publication on the process evaluation that accompanied this trial.

The key resource needed to implement CQI is CQI mentors’ and clinic health workers’ time. CQI mentors typically spent several hours per week training and supporting clinic health workers during the intensive phase of CQI; during the maintenance phase, the mentors spent substantially less time interacting with the clinic health workers, typically several hours per month. The training sessions at the very beginning of each step and the action learning sessions each required an entire day. CQI activities required time to (i) identify drivers of poor VL monitoring and repeat HIV testing, (ii) conduct group meetings to design clinic-specific solutions, (iii) implement those solutions (e.g., different methods to track women eligible for HIV care tests), and (iv) use clinic-based real-time data sources to test whether particular solutions improved testing rates.

CQI mentors who are adaptable and relate professionally and culturally to health workers at target health facilities may be more successful at engaging health workers in CQI. Mentors must be adept at generating descriptive data and, ideally, be familiar with implementing the desired change in practice. Obtaining basic CQI certification may take from around 18 hours (online) [69] to 3 days [70] of coursework.

In addition to the CQI mentors’ time and the health workers’ time, a so-called improvement advisor—a more highly qualified health professional than the CQI mentors with the ability to coach and lead the CQI mentors—is an integral part of the team to oversee CQI activities. In our experience, the improvement advisor would need to invest around 20% of their time for successful CQI support; in other settings, this would depend on the needs of the CQI mentors and participating clinics. Given these CQI resource needs, the time—and financial—costs of the intervention were overall low. Feasibility and scalability of CQI will, however, not only depend on resource requirements but also on clinic health workers’ motivation and mindset. CQI requires changes in individual behaviours and institutional practices, which in turn are determined by job satisfaction, ambition, experience, and local workplace culture and constraints [40].

In our study, the CQI mentors typically held CQI meetings 2 to 3 times per week during the intensive 2-month step. For routine implementation at larger scale, alternative meeting frequencies may also work. For instance, it may be more practicable to conduct visits less frequently (e.g., once per week) but over a longer period of time (e.g., half a year). The optimal frequency of mentor visits will likely depend on how many clinic health workers are available for CQI activities and whether the clinic activities that the CQI team chose for improvement require additional support.

Strengths and limitations

Our pragmatic cluster-randomised stepped-wedge design rigorously tested the effectiveness of CQI in a real-world setting in rural South Africa while enabling all clinics to eventually receive the intervention. Our registered primary endpoints were not only important process indicators of HIV care quality expected to respond to CQI, but they were also essential clinical practice standards described in the national South African guidelines for HIV treatment and ANC. The intervention was implemented flexibly by adapting to local clinic needs, and individual participant inclusion criteria were broad. Our CQI implementation team (local CQI mentors, an improvement advisor, a data manager, and scientific partners) was from the University of KwaZulu-Natal, had extensive experience implementing CQI elsewhere in South Africa, and spoke the vernacular language, isiZulu. The research was co-led by South African researchers, while most researchers in the wider scientific team have worked in rural South Africa for many years.

Our study had several limitations. First, 2-month intervention steps may have been too short to catalyse the changes required to improve outcomes given that clinic resources in routine care are already overstretched. However, the monthly maintenance visits following the intervention steps may have compensated for the relatively short initial implementation period. Second, we measured our endpoints as documented in the maternity case records. Although most maternity case records contained clinical information, the poor documentation of VL results among women who had a VL test performed raises concerns about data quality overall. It is thus possible that actual repeat HIV testing and VL monitoring rates were higher than measured in our study, albeit not documented. In this case, we would also expect that the effect sizes that we have measured are underestimates of the true effect sizes. Third, we were unable to measure long-term sustainability of CQI over several years, because our study duration was limited by our research budget.

Other policy and future research implications

In addition to the general policy relevance of our findings for health policies aiming to improve the quality of primary care in sub-Saharan Africa, our findings are of specific relevance to the goal of eMTCT. Universal coverage of the activities captured by our 2 primary endpoints—VL monitoring and repeat HIV testing in pregnancy—will be critical to achieve this goal [24]. In particular, VL monitoring close to delivery is key to ensuring maternal virologic suppression in the peripartum period [20], although VL monitoring and repeat HIV testing are crucial throughout pregnancy and breastfeeding to enable timely interventions for mother and baby. According to our findings, CQI could be a good addition to the set of interventions driving eMTCT in HIV hyperendemic settings.

Improving clinical processes to achieve desired health systems and health outcomes requires changes in practice (behavioural changes), which depend on individual motivation, job satisfaction, and clinical knowledge and context [40]. It is also unclear as to how far clinic health workers “normalise” CQI into their practice once the CQI mentors have completed an engagement, particularly given the human resource shortages and high staff turnover in the local health system. Knowledge of time to assimilation of CQI could inform future study designs that account for a suitable time lag before outcomes can be expected to change.

An important question for future research is whether this CQI intervention “spilled over” to ANC activities outside HIV care and to primary care outside ANC. Such spill-over effects could be positive or negative. On the one hand, the clinic health workers trained in CQI may have applied their new skills to other clinical services, resulting in further quality improvements. On the other hand, health workers invested time and energy in this CQI and, as a result, may have neglected clinical services outside the scope of this intervention. In future research, we plan to measure the effects of CQI on a broader set of endpoints, which will capture the quality of a wide range of ANC functions.

Outcome measures that lie on plausible causal pathways from an exposure to an outcome can be used to empirically confirm mechanistic hypotheses. CQI is an intervention that can work its way to outcomes through multiple pathways, because it empowers local health workers to identify and implement those health systems improvements and innovations that are most promising in the particular contexts in which they work. In this case—and other complex interventions—local health workers can identify likely mechanisms connecting an intervention to outcomes. In the case of HIV VL monitoring and repeat HIV testing, proximate endpoints that lie on the causal pathways from CQI could include availability of test kits, presence of systems for patient tracking and tracing, health worker motivation to test, and patient demand for testing.

While our quality of care measures are good choices to test the effectiveness of a CQI intervention, the ultimate purpose of healthcare is to reduce morbidity and mortality. Future mathematical modelling studies should estimate the impact of CQI on health outcomes that could be achieved given the effect sizes we measured in this study and local estimates of VL monitoring in different communities in sub-Saharan Africa.

Our study findings are likely transferable to other resource-poor communities in southern and sub-Saharan Africa, where nurses are the primary providers of ANC and HIV treatment. Given the resource shortages and increasing demand on HIV services with rollout of universal ART for all people living with HIV, patient and laboratory results tracking are likely to become more challenging with paper-based methods. Conversely, access to electronic laboratory results would increase healthcare provider availability for direct clinical care while maximising opportunities for detecting and managing maternal HIV viraemia. It is thus plausible that technological innovations in clinic data and management systems could contribute to CQI effectiveness in improving HIV care.

Conclusions
We showed that CQI in primary care can be effective in rural South Africa. We found a significant CQI effect on one of our 2 primary endpoints, VL monitoring among pregnant women living with HIV. However, the fact that CQI failed to improve our second primary endpoint indicates that CQI success is sensitive to the particular health system processes it is intended to address. CQI should be considered as an intervention to improve quality of primary care in rural African communities. Its use should be closely monitored to ensure that we further improve our understanding of factors influencing its success.

Preliminary results of this trial were presented at the Conference on Retroviruses and Opportunistic Infections (CROI) in March 2018, Boston, MA, USA.

Supporting information

S1 CONSORT Checklist. CONSORT checklist for stepped-wedge trials.

CONSORT, Consolidated Standards of Reporting Trials.


S1 Table. Characteristics of participating primary care clinics.


S2 Table. Summary of clinic staffing and recruitment to clinic CQI team.

CQI, continuous quality improvement.


S1 Fig. HIV prevalence by maternal age group at delivery.


Acknowledgments
The authors wish to thank all study participants and the South African National Department of Health partners for their support and engagement with the study, and the UKZN Centre for Rural Health for implementing the intervention. We thank all colleagues at AHRI for their support with project operations, including Research Nurse Manager Mr Siphephelo Dlamini. We also thank the AHRI Research Data Management team for their support, in particular Mr Sabelo Ntuli for creating the map of the study area. We also extend our thanks to the DSMB for this trial (Professor Landon Myer, Professor Hoosen Coovadia, Professor Anna Coutsoudis, and Ms Kathy Baisley) for their excellent and ongoing support and advice. Finally, we extend our gratitude and thanks to the late Scientia Professor David A. Cooper AC, for his contributions to developing this manuscript and PhD supervision of Dr Yapa.

