Dried Blood Spots for Viral Load Monitoring in Malawi: Feasible and Effective

Objectives
To evaluate the feasibility and effectiveness of dried blood spot (DBS) use for viral load (VL) monitoring, describing patient outcomes and programmatic challenges relevant for DBS implementation in sub-Saharan Africa.

Methods
We recruited adult antiretroviral therapy (ART) patients from five district hospitals in Malawi. Eligibility reflected anticipated Ministry of Health VL monitoring criteria. Testing was conducted at a central laboratory. Virological failure was defined as >5,000 copies/ml. Primary outcomes were program feasibility (timely result availability and patient receipt) and effectiveness (second-line therapy initiation).

Results
We enrolled 1,498 participants; 5.9% were failing at baseline. Median time from enrollment to receipt of results was 42 days; 79.6% of participants received results within 3 months. Among participants with confirmed elevated VL, 92.6% initiated second-line therapy; 90.7% were switched within 365 days of VL testing. Nearly one-third (30.8%) of participants with elevated baseline VL had suppressed (<5,000 copies/ml) on confirmatory testing. Median period between enrollment and specimen testing was 23 days. Adjusting for relevant covariates, participants on ART >4 years were more likely to be failing than participants on therapy 1-4 years (RR 1.7, 95% CI 1.0-2.8); older participants were less likely to be failing (RR 0.95, 95% CI 0.92-0.98). There was no difference in likelihood of failure based on clinical symptoms (RR 1.17, 95% CI 0.65-2.11).

Conclusions
DBS for VL monitoring is feasible and effective in real-world clinical settings. Centralized DBS testing may increase access to VL monitoring in remote settings. Programmatic outcomes are encouraging, especially the proportion of eligible participants switched to second-line therapy.


Introduction
In coordination with the Malawi Ministry of Health (MOH), we conducted a prospective, non-randomized evaluation of DBS for VL monitoring among ART patients managed at district hospitals in Malawi. Our objective was to evaluate the feasibility and effectiveness of DBS use for VL monitoring, describing patient outcomes and programmatic challenges that are relevant for DBS implementation in sub-Saharan Africa.

Study population
We enrolled adult (≥18 years) patients from five ART clinics in central and southern Malawi. Inclusion and exclusion criteria mirrored MOH eligibility criteria for routine VL monitoring and for monitoring based on suspected clinical failure [45]. Patients were eligible for VL testing if they had been on first-line ART for 6 months, 24 months, or any 24-month interval (±3 months) thereafter (routine monitoring). Patients who did not meet routine monitoring criteria were eligible if they had been on first-line therapy ≥6 months and showed signs of clinical failure (World Health Organization [WHO] Stage 3 or 4). Patients were excluded if currently hospitalized, imprisoned, or involuntarily incarcerated in a medical facility.

Site selection and enrollment
ART clinics within district hospitals were selected based on the size of their retained ART patient population and willingness to both train providers and enroll participants. We validated DBS vs. plasma VL at the two sites with adequate capacity for plasma processing. During this validation period, all participants provided a venous and a fingerstick sample, from which a plasma sample, venous DBS (vDBS), and fingerstick DBS (fsDBS) were produced. Interim analyses demonstrated acceptable agreement between plasma, vDBS, and fsDBS [33], so participants enrolled at the final three sites provided fsDBS only.

Sample collection and transport
Sample collection and virological testing methods are presented elsewhere [33]. Briefly, sites were provided with pre-packed kits containing: DBS card, capillary tubes, gloves, sterile lancet, alcohol swab, plastic zip bag, and desiccant. All specimens were collected by ART clinic or hospital laboratory staff. Once dried, cards were transferred to individual zip bags with desiccant sachets and stored at room temperature.
DBS specimens were transported at ambient temperature to the central laboratory in Lilongwe (4-6 hours away) approximately weekly, using existing hospital-based vehicles or a specimen shipment service.

VL testing and result return
Specimens were tested using the Abbott RealTime HIV-1 Assay (Abbott Laboratories, Chicago, IL) (reportable range of 40 to 10,000,000 copies/ml for plasma and lower limit of detection of 550 copies/ml for DBS) at an internationally monitored research laboratory.
Results were returned to clinics using e-mail, short message service (SMS), or phone. Hardcopies of results were delivered via hospital vehicles returning to the clinic or by study coordinators during routine (approximately weekly) site visits. Providers delivered results to participants during scheduled clinic visits.

Data collection
All activities were conducted by non-study ART clinic personnel. ART staff members were trained in identifying eligible participants, obtaining consent, specimen collection, study sensitization, adherence counseling, and case report form (CRF) completion. We collected participant demographics, clinical history, and ART adherence data. ART history, including date of diagnosis, ART initiation, and reason for initiation, was abstracted from patient clinic records.

Study visits
Participants were asked to return for VL results one month after enrollment. Participants with elevated VLs (>5,000 copies/ml) received adherence counseling and were instructed to return after two months for a confirmatory draw. Participants who returned for confirmatory draws were told to return within one month for results. Providers were instructed to refer patients with two elevated VLs for second-line therapy.

Treatment failure definition
Per 2011 MOH guidelines, virological failure was defined as two sequential VLs >5,000 copies/ml [45]. For patients with plasma results available (validation period), plasma results were used to guide treatment decisions. If both vDBS and fsDBS were available, vDBS results were used; fsDBS was used for treatment decisions in all other cases. Laboratory validation of DBS compared to plasma is not discussed further in this paper [33].
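As a minimal sketch, the failure definition and specimen-type precedence above can be expressed in code as follows (the function and variable names are illustrative, not drawn from the study's data systems):

```python
FAILURE_THRESHOLD = 5000  # copies/ml, per 2011 Malawi MOH guidelines

def preferred_result(plasma=None, vdbs=None, fsdbs=None):
    """Return the VL result used for treatment decisions.

    Precedence mirrors the study rules: plasma (validation period),
    then venous DBS, then fingerstick DBS.
    """
    for result in (plasma, vdbs, fsdbs):
        if result is not None:
            return result
    raise ValueError("no VL result available")

def virological_failure(baseline_vl, confirmatory_vl):
    """Two sequential VLs above 5,000 copies/ml define failure."""
    return baseline_vl > FAILURE_THRESHOLD and confirmatory_vl > FAILURE_THRESHOLD
```

Under this rule, a participant with a baseline VL of 12,000 copies/ml and a confirmatory VL of 3,000 copies/ml would be classified as resuppressed rather than failing.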

Programmatic Outcomes
Primary outcomes were feasibility and effectiveness of DBS for VL monitoring. Feasibility was measured by: proportion of participants receiving VL results within 3 months of enrollment; laboratory testing turnaround time; delayed result return due to laboratory delays; delayed result return due to providers failing to deliver available results; proportion of participants with elevated baseline VL receiving confirmatory DBS; time from participant receipt of results to collection of the confirmatory specimen; and time from enrollment to second-line treatment initiation among eligible participants. Participants were terminated from the study if results were not delivered within 6 months of enrollment. Due to staffing constraints, we were not able to assess the proportion of eligible ART patients visiting the clinics who were enrolled.
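The feasibility metric distinguishing laboratory-driven delays from missed provider deliveries can be sketched as a simple date comparison (a hypothetical record layout, not the study CRF):

```python
from datetime import date

def classify_missed_deliveries(visit_dates, result_available, result_delivered):
    """Classify clinic visits at which results were not delivered.

    A visit before `result_available` is a laboratory-based delay; a visit
    on or after `result_available` but before `result_delivered` is a
    missed provider delivery. Dates and field names are illustrative.
    """
    lab_delays, provider_misses = 0, 0
    for visit in visit_dates:
        if result_delivered is not None and visit >= result_delivered:
            continue  # result already in the participant's hands
        if visit < result_available:
            lab_delays += 1
        else:
            provider_misses += 1
    return lab_delays, provider_misses
```

For example, a participant who visited before the result reached the clinic and again afterward, but received the result only at a third visit, would contribute one laboratory delay and one missed provider delivery.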
Effectiveness of DBS for VL monitoring was measured as the proportion of eligible (failing) participants who initiated second-line therapy within 12 months (365 days) of enrollment. As a secondary outcome, we also evaluated the proportion of participants who resuppressed (≤5,000 copies/ml on the confirmatory specimen), suggesting effectiveness of VL monitoring in general, as related to provider-initiated adherence counseling and patient behavior change.

Statistical methods
We used Student's t-tests (continuous variables) and Pearson's χ² or Fisher's exact test (categorical variables) to identify demographic and clinical characteristics associated with VL failure and resuppression (≤5,000 copies/ml) [46]. We used generalized linear models with a log link and binomial distribution to explore the relationship between time on ART and VL failure (>5,000 copies/ml) at enrollment. Factors considered included age, sex, WHO clinical stage at ART initiation, body mass index (BMI), ART regimen, self-reported adherence, and clinical symptoms. We used likelihood ratio (LR) tests to decide which variables to include. We tested interactions between time on ART and symptoms at enrollment to assess whether the effect of ART exposure on likelihood of treatment failure differed for participants who showed signs of clinical failure. We evaluated agreement of time on ART (clinic records versus CRFs) using kappa statistics. We conducted a post-hoc sub-group analysis exploring the relationship between CD4 cell count at ART initiation and treatment failure, as this may be an important predictor of virological failure [24,47].
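The adjusted RRs reported in this paper come from log-binomial GLMs fit in statistical software; as a simpler illustration of the underlying risk-ratio arithmetic, the sketch below computes an unadjusted RR with a 95% Wald confidence interval on the log scale from a 2x2 table (the counts are hypothetical, not study data):

```python
from math import exp, log, sqrt

def risk_ratio(a, b, c, d):
    """Unadjusted risk ratio with a 95% Wald CI on the log scale.

    2x2 layout: exposed failing (a), exposed not failing (b),
    unexposed failing (c), unexposed not failing (d).
    """
    risk_exposed = a / (a + b)
    risk_unexposed = c / (c + d)
    rr = risk_exposed / risk_unexposed
    # Standard error of log(RR) for a 2x2 table
    se_log_rr = sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    lower = exp(log(rr) - 1.96 * se_log_rr)
    upper = exp(log(rr) + 1.96 * se_log_rr)
    return rr, (lower, upper)
```

With hypothetical counts of 10/100 failing among the exposed and 5/100 among the unexposed, this returns an RR of 2.0 with a CI that crosses neither bound symmetrically, illustrating why CIs for ratios are built on the log scale.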

Ethical approval
The National Health Sciences Research Committee of Malawi, the Centers for Disease Control and Prevention Ethics Review, and the Biomedical Institutional Review Board at University of North Carolina, Chapel Hill approved this study. All participants provided written informed consent.

Predictors of baseline failure
After adjusting for time on therapy, clinical symptoms, sex, WHO stage at initiation, and self-reported adherence, increasing age was associated with decreased risk of failure (RR 0.95, 95% confidence interval (CI) 0.92-0.98) (Table 3). Participants on ART >4 years were 1.7 times more likely to be failing than participants on therapy 1-4 years (RR 1.70, 95% CI 1.01-2.84); participants on ART <1 year were less likely to be failing (RR 0.57, 95% CI 0.18-1.83), although the association was not statistically significant.
The effect of time on ART on likelihood of treatment failure did not differ meaningfully between patients with and without documented symptoms of clinical failure (p>0.05). Removing this interaction term did not change model fit (LR test p = 0.26).
Limiting analyses to participants with a CD4 cell count documented at ART initiation, participants with CD4 ≤100 cells/mm³ were 2.2 times more likely to have an elevated VL than participants with a count >100 cells/mm³, after adjusting for time on therapy, age, sex, symptoms, adherence, and BMI (RR 2.22, 95% CI 1.02-4.84). Increasing age remained associated with decreased risk of failure.

Fig 1. Study flow. Patients enrolled and eligible for study participation had viral load tests run at a central laboratory. Results were communicated back to enrolling ART clinics, where providers were instructed to deliver results to participants. Providers proceeded with clinical care according to whether the result was suppressed (≤5,000 copies/ml) or elevated (>5,000 copies/ml). Patients with elevated viral loads received confirmatory testing. Per national guidelines, patients with confirmed elevated viral loads were eligible for second-line ART.

Programmatic Outcomes
Feasibility. Median period between enrollment and a specimen being tested at the central lab was 23 days. Results were communicated back to clinics within 3 days of testing. About 80% (1189/1498) of participants received results within 3 months (mean time to receipt of results 58 days). Nearly 45% (665/1498) of participants had a clinic visit during which VL results were not delivered. Lab-based delays, in which a participant came to the clinic but results were not available, accounted for most delays. However, 21.0% of participants (315/1498) had at least one visit at which results were available, but not delivered.
Among participants with elevated VLs at baseline, 93.2% (82/88) received results (Fig 1). Mean period between enrollment and receipt of results was 60 days (Fig 2). Approximately 90% (78/88) of all eligible participants had a confirmatory VL test. The average number of days between receipt of elevated VL results and collection of the confirmatory specimen was 68. Among participants with confirmed elevated VL, mean time from enrollment to second-line treatment initiation was 181 days (range: 125-381). Over half (54%; 27/50) of participants who were eventually switched initiated second-line therapy on the same day they received confirmation of high VL. Approximately 7% of participants never received results during the study follow-up period: 89 were terminated from the study prior to receiving results (enrolled ≥6 months without being given results), 6 died, 4 defaulted from care, 2 moved, and 1 was referred immediately for second-line therapy. Participants who were terminated without receiving results had been enrolled for an average of 195 days (range: 179-322). Four participants did not have VL results because of failed redraw attempts (n = 3) or ineligibility at enrollment (n = 1).
Effectiveness. Among participants with a confirmed elevated VL, 92.6% (50/54) initiated second-line therapy. Over 90% (49/54) of participants who were confirmed as eligible for second-line therapy were switched within 365 days of their first elevated VL; over half (31/54) were switched within 180 days. If we assume that the four participants who were switched before confirmatory VLs would not have resuppressed, 91.4% (53/58) of participants reached the primary effectiveness endpoint: initiating second-line therapy within 12 months of enrollment.

Table 4. Demographic, ART, and clinical outcomes among patients with baseline viral loads >5,000 copies/ml (resuppressed, n = 24; no resuppression, n = 54).

Discussion
DBS for VL monitoring was feasible and effective when implemented by ART providers in district hospitals in Malawi. Greater than 99% of VL results were available on site, and nearly 80% of participants received their VL results within 3 months of testing. Among participants with confirmed elevated VL, 92.6% initiated second-line therapy, 91% were switched within one year of their first high VL, and >50% switched to second-line therapy on the same day they received confirmatory VL results. The time to second-line initiation was considerably faster than has been seen elsewhere in sub-Saharan Africa, where only 62% of patients meeting guideline-dictated failure definitions were switched and the median time between confirmation of failure and treatment switch approached 5 months [24,30]. There are many potential explanations for the observed differences. Providers involved in our study were aware of study endpoints, whereas retrospective programmatic evaluations in comparator studies are less likely to be subject to that influence.
Nearly one-third (31%) of participants with elevated VLs resuppressed, in line with previously observed resuppression rates [48-50]. Resuppression suggests that, when coupled with appropriate adherence counseling, VL monitoring can help curtail virological failure. Per national guidelines, providers were instructed to emphasize the importance of adherence for patients with elevated VL [45]. However, we observed substantial inter-clinic variation in rates of resuppression, potentially indicating variable quality of adherence counseling. Alternatively, inter-clinic variation in resuppression may be explained by differences across clinics in the time between enrollment and confirmatory testing among participants with elevated VLs. As a component of VL scale-up, Ministry of Health officials should consider monitoring inter-clinic variation in resuppression rates to help identify concordance with guidelines regarding the timing of confirmatory testing, as well as needs for additional training in adherence counseling techniques.
We observed surprisingly low virological failure (5.9%) among this previously unmonitored, mature ART-patient cohort [14,24,26,27,29-31]. The lower-than-expected failure rate may be at least partially explained by variability in failure definitions across cohorts [1,30,33]. The low failure rate is unlikely to be due to inadequate DBS sensitivity, as earlier investigations demonstrated 100% sensitivity of DBS, compared to plasma, for failure thresholds >5,000 copies/ml [33]. Our results represent virological failure rates among persons retained in care and thus may underestimate the true rate of failure: 2% of Malawian ART patients default from care each quarter, >18% of patients initiated on ART have been lost to follow-up since 2004, and an additional 10% are known to have died [42].
Both age and time on therapy were associated with treatment failure in multivariable models. Younger participants were at increased risk of failure, highlighting the importance of targeting adherence interventions to youth, regardless of how long they are retained on therapy [24,47,51]. We only enrolled participants ≥18 years, the majority (>95%) of whom initiated ART in their early or mid-20s and yet were still at increased risk of failure. Expanding the definition of youth to include young adults and tailoring interventions to this group may be an efficient strategy for reducing virological failure. Participants who had been on therapy longer were at increased risk of failing, even after adjusting for clinical signs of failure.

Fig 2. Planned (italics) and observed progression through study activities and follow-up for participants with elevated VL at baseline. According to the study protocol, participants were to return for results 30 days after dried blood spot (DBS) collection (baseline DBS). For participants with an elevated viral load (>5,000 copies/ml), confirmatory specimens were to be collected an additional 60 days later (90 days after enrollment). Again, participants were to return 30 days after DBS collection for receipt of results. The diagram describes the observed periods (mean number of days and range) between each participant study encounter. doi:10.1371/journal.pone.0124748.g002
Clinical symptoms were not associated with increased risk of failure, emphasizing shortcomings of relying on clinical staging for predicting virological failure [1,3,6]. However, participants enrolled outside of routine eligibility criteria due to suspected clinical failure were significantly more likely to fail than participants meeting routine monitoring eligibility criteria. The difference may be explained by the extent of symptoms: patients enrolled for suspicion of clinical failure were more likely to have multiple concurrent symptoms. Our findings reaffirm the efficiency of monitoring based on suspected clinical failure [1,52], but demonstrate that nearly 75% of failing patients would be missed if only patients with provider-identified clinical failure received monitoring.
We attempted to have study conditions mimic real-world circumstances, but elements of our evaluation may not be replicated beyond the study setting. Providers were aware of data collection procedures, leading to an unavoidable observer effect. Laboratory turnaround time likely represents an ideal scenario: study coordinators retrieved specimens during site visits when hospital vehicles were not available for specimen transfer, and technicians at the research laboratory had extensive experience with the testing platform. Nonetheless, we observed substantial delays in return of results. Having a "point person" in each hospital or district to facilitate specimen transfer and result follow-up may expedite monitoring activities.
Numerous barriers remain to widespread implementation of DBS for VL monitoring. More than three-fourths of participants who went >90 days without receiving results had at least one interim clinic visit. Laboratory-driven delays (result not available at the time of visit) accounted for some of these "missed" result delivery opportunities. Improved collection and utilization of patient tracing information may improve result delivery turnaround time in the event of laboratory-based processing delays. Patients for whom results are not available at the time of the initial visit could be contacted and asked to return for results between regularly scheduled visits, which is especially relevant for patients with high VLs. However, laboratory delays accounted for only half of the result delivery "misses"; the remainder were due to providers failing to retrieve available results and deliver them to participants. Improved data management systems at both central laboratories and testing clinics will be critical to improving result turnaround time. Integrating VL results with existing clinical management by linking results directly to patient records and generating visual flags for clinicians may reduce the frequency with which providers fail to deliver available results during clinic visits.
DBS technology also has inherent limitations: specificity is poor at lower levels of viremia (<5,000 copies/ml) because DBS can overestimate VL by detecting cell-associated HIV DNA and RNA not captured by traditional plasma measurements [33,35]. Given variation in nucleic acid amplification methods, different VL measurement platforms may be more or less susceptible to these sources of error. Appropriate thresholds for defining virological failure using DBS remain a topic of debate. Nonetheless, high DBS sensitivity makes this monitoring approach an appealing alternative for VL screening programs, likely reducing the number of missed virological failures compared to clinical or immunological monitoring. We did not directly compare DBS performance to clinical and/or immunological failure criteria. However, our assessment of clinical symptoms suggests that clinical monitoring alone would miss a substantial proportion of patients who, according to DBS, were virological failures. Previous work has demonstrated comparability of DBS to plasma in this patient population [33], and although plasma is a more precise means of identifying virological failure, the logistical and financial barriers associated with plasma make it unappealing for the more remote clinical settings explored here.
We have demonstrated that DBS for VL monitoring is both feasible and effective in resource-limited settings. The centralized laboratory testing was efficient and results were successfully distributed back to clinics. Our results help validate the potential for central laboratory testing of DBS to improve access to VL monitoring in more remote settings. Delays in returning results to participants were largely due to inadequate result tracking and provider notification in the existing paper-based and electronic ART management systems. Modifications to these systems will be essential in advance of widespread DBS implementation. We observed remarkable performance in terms of proportion of eligible participants switched to second-line therapy in a timely manner. We also observed a lower-than-expected virological failure rate. At this failure rate, pooling specimens may be a cost-effective testing alternative, although tradeoffs in sensitivity and specificity are important to consider with the dilution effects of pooling, especially at lower failure thresholds [31,53]. Our findings demonstrate the importance of virological monitoring among patients on ART for extended periods, regardless of clinical symptoms. Important next steps include assessment of resistance among patients who do not resuppress to distinguish between modifiable inadequate adherence and biological failure.