
Implementation of an electronic patient-reported measure of barriers to antiretroviral therapy adherence with the Opal patient portal: Protocol for a mixed method type 3 hybrid pilot study at a large Montreal HIV clinic

  • Kim Engler ,

    Roles Conceptualization, Funding acquisition, Methodology, Writing – original draft, Writing – review & editing

    kimcengler@gmail.com

    Affiliation Center for Outcomes Research and Evaluation, Research Institute of the McGill University Health Centre, Montreal, Quebec, Canada

  • Serge Vicente,

    Roles Conceptualization, Methodology, Writing – original draft, Writing – review & editing

    Affiliation Department of Mathematics and Statistics, University of Montreal, Montreal, Quebec, Canada

  • Yuanchao Ma,

    Roles Methodology, Software, Writing – review & editing

    Affiliation Center for Outcomes Research and Evaluation, Research Institute of the McGill University Health Centre, Montreal, Quebec, Canada

  • Tarek Hijal,

    Roles Software, Writing – review & editing

    Affiliation Department of Radiation Oncology, Cedars Cancer Center, McGill University Health Centre, Montreal, Quebec, Canada

  • Joseph Cox,

    Roles Methodology, Writing – review & editing

    Affiliations Division of Infectious Disease, Department of Medicine, Chronic Viral Illness Service, McGill University Health Centre, Montreal, Quebec, Canada, Department of Epidemiology, Biostatistics, and Occupational Health, McGill University, Montreal, Quebec, Canada

  • Sara Ahmed,

    Roles Funding acquisition, Methodology, Writing – review & editing

    Affiliation School of Physical & Occupational Therapy, McGill University, Montreal, Quebec, Canada

  • Marina Klein,

    Roles Funding acquisition, Methodology, Writing – review & editing

    Affiliations Center for Outcomes Research and Evaluation, Research Institute of the McGill University Health Centre, Montreal, Quebec, Canada, Division of Infectious Disease, Department of Medicine, Chronic Viral Illness Service, McGill University Health Centre, Montreal, Quebec, Canada, Department of Medicine, McGill University, Montreal, Quebec, Canada

  • Sofiane Achiche,

    Roles Software, Writing – review & editing

    Affiliation Department of Mechanical Engineering, École Polytechnique de Montréal, Montreal, Quebec, Canada

  • Nitika Pant Pai,

    Roles Writing – review & editing

    Affiliation Department of Medicine, McGill University, Montreal, Quebec, Canada

  • Alexandra de Pokomandy,

    Roles Writing – review & editing

    Affiliations Center for Outcomes Research and Evaluation, Research Institute of the McGill University Health Centre, Montreal, Quebec, Canada, Division of Infectious Disease, Department of Medicine, Chronic Viral Illness Service, McGill University Health Centre, Montreal, Quebec, Canada, Department of Family Medicine, McGill University, Montreal, Quebec, Canada

  • Karine Lacombe,

    Roles Writing – review & editing

    Affiliation Sorbonne Université, Inserm IPLESP, Hôpital St Antoine, Assistance Publique – Hôpitaux de Paris, Paris, France

  • Bertrand Lebouché

    Roles Conceptualization, Funding acquisition, Methodology, Writing – review & editing

    Affiliations Center for Outcomes Research and Evaluation, Research Institute of the McGill University Health Centre, Montreal, Quebec, Canada, Division of Infectious Disease, Department of Medicine, Chronic Viral Illness Service, McGill University Health Centre, Montreal, Quebec, Canada, Department of Family Medicine, McGill University, Montreal, Quebec, Canada

Abstract

Background

Adherence to antiretroviral therapy (ART) remains problematic. Regular monitoring of its barriers is clinically recommended, however, patient-provider communication around adherence is often inadequate. Our team thus decided to develop a new electronically administered patient-reported outcome measure (PROM) of barriers to ART adherence (the I-Score) to systematically capture this data for physician consideration in routine HIV care. To prepare for a controlled definitive trial to test the I-Score intervention, a pilot study was designed. Its primary objectives are to evaluate patient and physician perceptions of the I-Score intervention and its implementation strategy.

Methods

This one-arm, 6-month study will adopt a mixed method type 3 implementation-effectiveness hybrid design and be conducted at the Chronic Viral Illness Service of the McGill University Health Centre (Montreal, Canada). Four HIV physicians and 32 of their HIV patients with known or suspected adherence problems will participate. The intervention will involve having patients complete the I-Score through a smartphone application (Opal), before meeting with their physician. Both patients and physicians will have access to the I-Score results, for consideration during the clinic visits at Times 1, 2 (3 months), and 3 (6 months). The implementation strategy will focus on stakeholder involvement, education, and training; promoting the intervention’s adaptability; and hiring an Application Manager to facilitate implementation. Implementation, patient, and service outcomes will be collected (Times 1-2-3). The primary outcome is the intervention’s acceptability to patients and physicians. Qualitative data obtained, in part, through physician focus groups (Times 2–3) and patient interviews (Times 2–3) will help evaluate the implementation strategy and inform any methodological adaptations.

Discussion

This study will help plan a definitive trial to test the efficacy of the I-Score intervention. It will generate needed data on electronic PROM interventions in routine HIV care that will help improve understanding of conditions for their successful implementation.

Clinical trial registration

ClinicalTrials.gov identifier: NCT04702412; https://clinicaltrials.gov/.

Introduction

Routinely collecting data on patient-reported outcome measures (PROMs) for individual patient care can benefit both people living with HIV and their providers, yet it is seldom done in HIV clinical practice [1]. For patients, it may help ensure that HIV care is person-centered and in line with their needs [1]. For providers, given the multidimensional and chronic nature of HIV clinical assessment and follow-up, the use of PROMs could facilitate efficient application of clinical guidelines in a context of time and resource constraints [2].

While past syntheses of effectiveness evidence for PROM use across specialties in routine care have typically found mixed results, with inconsistent impacts on patient outcomes [3,4], a more recent systematic review published in 2019 finds the evidence supports PROM use in standard care, particularly to improve patient-provider communication and decision-making in clinical practice [5]. Furthermore, the international momentum building for PROM use [6] may increase with the current COVID-19 pandemic. Indeed, there are calls for a scale up of electronic PROM implementation in this crisis for the remote follow-up of chronic conditions, in part, to better screen patients and promptly manage their needs [7].

The management of antiretroviral therapy (ART) adherence for the treatment of HIV is among the areas that could profit from greater PROM use. Successful ART remains essential to a near-normal life expectancy; however, many on ART have suboptimal adherence [8,9], even on single-tablet regimens [10,11]. In a recent study based on prescription fill dates, only 23% of adults initiating a single-tablet regimen were considered adherent over a six-month period, versus 12% among those who initiated a multiple-tablet regimen [10]. Clinically recommended strategies to foster adherence include ongoing monitoring of barriers to adherence among people living with HIV [12]. Yet, several studies point to inadequate patient-provider communication around ART adherence and its impediments [13–17], and many HIV providers underestimate their patients’ adherence difficulties [18,19]. In addition, individuals with HIV collectively report a multitude of barriers to adherence, including a variety of cognitive, emotional, social, and material issues as well as health service-related barriers [20,21], the proper evaluation of which may be time-consuming for providers [22].

For these reasons, with patient [23] and provider [24] involvement, we are developing a PROM of barriers to ART adherence, the Interference-Score (or I-Score), for electronic administration. I-Score data will be collected from patients and shared with their providers via Opal, a patient portal and smartphone app. This award-winning app [25], which is currently in use at the Cedars Cancer Centre of the McGill University Health Centre (MUHC), will be configured to respond to the needs of patients with HIV. Opal can give patients access to appointment schedules, laboratory test results, educational material, waiting room management tools, and PROMs. Electronic administration of our PROM was crucial as it simplifies score integration within the clinical workflow, allows for longitudinal presentation of scores as well as remote monitoring, and through Opal, it provides access to several other useful and potentially empowering patient-centered functions.

Aim and objectives

With the present mixed method pilot study, drawing on implementation science, the aim is to develop the methods and tools necessary to undertake a more robust evaluation of the implementation and effectiveness of the I-Score PROM-within-Opal innovation (henceforth, the I-Score intervention) in routine HIV care with individuals on ART. This study’s primary objectives are to evaluate stakeholder perceptions of the I-Score innovation (Objective 1) and evaluate the implementation strategy (Objective 2) in terms of recommended implementation science metrics for PROMs in routine care [26]. Its secondary objective (Objective 3) is to determine if the intervention shows promise and the chosen outcomes are useful, by observing collected data on select effectiveness outcomes (patient and service outcomes).

Guiding frameworks

It is important that a credible causal explanation of a digital health innovation’s intended impacts be provided [27]. Indeed, in an electronic PROM-based intervention, conceptual or theoretical frameworks specify the mechanisms through which the intervention is expected to have its effects [28], facilitating appropriate outcome selection and the interpretation of results [29].

This pilot study will be guided, in part, by an intervention logic chain, depicted by the boxes in Fig 1, and adapted from the frameworks of Greenhalgh and colleagues [28,29]. The core of the intervention involves having patients complete the PROM prior to their HIV clinic visit and having both patients and providers receive and review the results. The left arrow in Fig 1 presents the key components of the implementation framework used, which will guide qualitative analysis. Specifically, these are the five broad domains of potential influence on implementation of Damschroder and colleagues’ [30] Consolidated Framework for Implementation Research (CFIR) within which are grouped 39 distinct constructs. Hence, it is assumed that flow through the logic chain can be affected by features of the intervention, settings, individuals, and implementation process involved. The CFIR is a flexible and widely used framework in implementation research, including for PROM-based initiatives [26].

Fig 1. Guiding implementation framework and I-Score intervention logic chain.

Asterisks indicate elements of the logic chain which will be examined as a part of this pilot study.

https://doi.org/10.1371/journal.pone.0261006.g001

Another working framework (Fig 2) presents the broad hypothesized relationships between the implementation strategy used for the I-Score and the categories of study outcomes addressed. Borrowing from the frameworks of Stover and colleagues [26] and Santana and Feeny [31], it conceives, in part, of successful implementation of I-Score use in standard HIV care as potentially generating cascading effects on service and patient outcomes.

Fig 2. Relationship between the I-Score implementation strategy and study outcomes.

Asterisks indicate outcomes for which data will be collected as a part of this pilot study.

https://doi.org/10.1371/journal.pone.0261006.g002

Materials and methods

This study received approval from the McGill University Health Centre Research Ethics Board on January 18, 2021 (Study ID CTNPT039/2021–7190). Specifically, the Cells, Tissues, Genetics & Qualitative research panel approved the study.

Study design

This 6-month pilot study will adopt a one-arm mixed method type 3 implementation-effectiveness hybrid design and be conducted in a single clinical site (Fig 3). Type 3 hybrid designs emphasize testing the implementation strategy of an evidence-based intervention, and to a lesser extent, reporting on intervention effectiveness [32].

Mixed methods were adopted in this study as multiple methods are recommended for studying intervention implementation and related challenges in complex systems, like HIV clinics [33]. The integration of the quantitative and qualitative data collected will occur toward study end within a convergent parallel design [34]. Those directly involved in the analyses will decide upon the specifics of integration. Reporting of this study will seek to satisfy the standards of Good Reporting of a Mixed Methods Study [35] and the Standards for Reporting Implementation Studies [36].

Setting and participants

The study setting is a large hospital-based clinic in Montreal, Quebec, Canada. This clinic, the Chronic Viral Illness Service (CVIS) of the MUHC, offers multidisciplinary care to over 1600 adults living with HIV. The CVIS and several team members have experience with implementation science methods and related pilot studies [e.g., 37].

Among the 16 physicians actively treating individuals with HIV at the CVIS, four will be recruited to participate, as well as 32 of their adult patients. This sample size amply meets rule-of-thumb recommendations for one-arm pilot studies [38]. To participate, patients must be confirmed HIV positive, aged at least 18 years, on combination ART irrespective of duration, and literate in English or French. They must own a smartphone with an appropriate data plan and/or home Wi-Fi connection, since currently, Opal is ideally suited to a smartphone interface. They must also be willing to download the smartphone app. Finally, patients must have had known or suspected adherence problems in the past 12 months, based on a detectable viral load test result per local standards [39] and/or a report by the patient or healthcare team (i.e., the physician, nurse, social worker, or pharmacist). At least ten female patients among the 32 will be recruited, to ensure sufficient representation of women living with HIV [40]. Patients may not participate if they are: concurrently enrolled in a clinical trial; affected by a cognitive impairment or medical instability that prevents them from participating in all aspects of the study; insufficiently able to use the app with the technical support provided; receiving treatment for hepatitis C or having completed such treatment within the past 3 months; or being treated for hepatitis B with a medication other than their combination ART.

Recruitment and consent process.

Physicians and patients are expected to be enrolled from July to August 2021. Physicians treating patients with HIV at the CVIS will be asked individually to participate, by oral (e.g., phone) or email invitation. Patients of participating physicians will be recruited and consented in two ways: 1) when they visit the clinic; or 2) prior to an upcoming clinic visit. When they visit the clinic to meet with a health or social service provider (e.g., physician, nurse, social worker, psychologist), the provider will briefly describe the study to determine interest. Alternatively, suitable patients with upcoming visits will be identified by the physician and then contacted by a neutral clinic staff member to inform them of the study. If the individual is interested, the study coordinator will present the project in greater detail, in person or over the phone, check the eligibility criteria, and obtain consent (see S1 Appendix for the patient consent form). Then, an appointment for a teleconference (e.g., on Zoom) or an in-person meeting at the clinic will be made with the participant, prior to their next regular clinic appointment with their physician, to deliver training on using the app and completing the I-Score. The app’s installation and functionality on the patient’s smartphone will also be verified. Given efforts to limit in-hospital visits and risks for patients during the COVID-19 pandemic, in-clinic appointments with research staff will be avoided where possible.

The I-Score intervention.

During the study, participating patients will visit with their HIV physician three times, at Time 1 (T1), month 3 (T2), and month 6 (T3), prior to which they will complete the I-Score, as instructed. This visit schedule was selected as it concords with guidelines for the clinical follow-up of HIV in Quebec [41], while maximizing the collection of repeated study measurements. Given COVID-19, one visit (T2) will be done remotely (by phone or teleconferencing), while the other visits will be held at the CVIS. The I-Score PROM contains 20 items, covering 6 domains of barriers to ART adherence: cognitive and emotional aspects; lifestyle factors; the social and material context; the health experience and state; characteristics of ART; and the healthcare system and its services. Respondents indicate how often each barrier made adherence difficult in the past 4 weeks, with an 11-point scale, from 0% (never made it difficult) to 100% (always made it difficult). Details on the measure’s development are published elsewhere [20,23,24,42]. Example items include “I was not motivated to take my medication,” “I felt isolated or alone,” “I had another health condition to deal with (for example, depression, diabetes, or heart disease),” and “My medication cost coverage was not sufficient.”
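
For illustration, the sketch below shows one way I-Score responses could be summarized by domain. It is a minimal sketch only: the item-to-domain mapping and the use of simple domain means are assumptions made for this example, as the measure’s actual scoring rules are detailed in its development publications [20,23,24,42].

```python
# A minimal sketch, not the published scoring algorithm: the mapping of
# the 20 items to the 6 barrier domains and the use of simple means are
# illustrative assumptions.
DOMAINS = {
    "cognitive_emotional": [1, 2, 3, 4],      # hypothetical item numbers
    "lifestyle":           [5, 6, 7],
    "social_material":     [8, 9, 10],
    "health_experience":   [11, 12, 13, 14],
    "art_characteristics": [15, 16, 17],
    "healthcare_system":   [18, 19, 20],
}

def domain_scores(responses):
    """Average the 0-100% interference ratings within each domain.

    `responses` maps item number (1-20) to a rating on the 11-point scale
    (0, 10, ..., 100); higher = the barrier more often made adherence
    difficult in the past 4 weeks.
    """
    return {name: sum(responses[i] for i in items) / len(items)
            for name, items in DOMAINS.items()}

# Example: a patient reporting mainly cognitive/emotional barriers.
patient = {i: 0 for i in range(1, 21)}
patient.update({1: 70, 2: 50})
print(domain_scores(patient))
```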

The core intervention of this pilot consists of having individuals with HIV on ART register on the Opal app and complete the I-Score PROM prior to each of three consecutive visits with their HIV physician. The patients will receive a reminder to complete the I-Score one week before their visit and they will have immediate access to their results. The HIV physician will acquire the I-Score results before each visit via the ORMS dashboard, an appointment and questionnaire management tool integrated with Opal and designed specifically for healthcare providers. The option of graphically presenting scores over time will also be available, allowing the comparison of past and present scores. It is expected that patients and physicians will review the I-Score results so they can be considered during the clinic visit (Fig 1).

Opal cybersecurity.

The technical cybersecurity aspects of the Opal app conform to the security and governance recommendations for the development of a patient portal, as identified by the MUHC’s Security and Governance team, to ensure the confidentiality of patient data. For details, see the multimedia S2 Appendix of Kildea et al. [25].

The implementation strategy.

The multilevel implementation strategy, designed for this study, addresses known facilitators and barriers to implementing electronic PROMs in routine clinical practice [43] and draws on recognized implementation strategies [44,45]. S1 Table presents the correspondence between the facilitators and barriers targeted, the chosen implementation strategies to address them, and their relationship to components of the implementation framework used in this study, the CFIR. As a part of our approach, we will conduct an “Educational meeting” by teleconference with providers to formally teach them about the intervention and its rationale and respond to concerns. We will provide “Training and consultation” on the PROM and app by hiring an “Application Manager” (AM). The AM will train patients and providers and be available to them on an ongoing basis, as needed, preferably by phone or teleconference. They will also help monitor the quality of PROM data. Appointing such a coordinator (or Quality Assurance officer) is a recommended strategy to minimize the impact of missing PROM data [46]. The AM will thus oversee the completeness of PROM data collected and manage any system or software problems, which are potential disadvantages of computer or web-based PROM administration [47]. Hence, overall, the AM will participate in the “Facilitation” of the PROM’s implementation. Another strategy aims to meaningfully “Involve patients and providers” in the I-Score’s implementation. Notably, following the I-Score administrations at T2 and T3, physicians will participate in a focus group (by teleconference), while a short semi-structured interview will be conducted with each patient (by telephone or teleconference). Throughout the study, the AM will take field notes on the problems encountered by participants and this feedback will enable “Cyclical small tests of change” to improve implementation, using an evaluation approach guided by the Consolidated Framework for Implementation Research [48]. This way, we will “Promote adaptability” of the I-Score process, to enable adjustments to local considerations while maintaining the intervention’s core components, namely, I-Score completion by the patient via Opal prior to the clinic visit and review of scores by the physician in conjunction with the visit. Many peripheral components will be adaptable, such as the timing and number of reminders to complete the I-Score and how I-Score results are presented to providers on the ORMS dashboard.

Data collection

The data collection period is expected to extend from about September 2021 to February 2022.

Quantitative component.

The quantitative component will have three sources of data: 1) participant self-report; 2) electronic medical records; and 3) passive data (e.g., on app use to assess fidelity). Patient self-report data will be collected via Opal. Physician self-report data will be obtained with paper questionnaires.

At T1, T2 (3 months) and T3 (6 months), a study questionnaire will be administered to participants, especially to assess implementation outcomes. At T1, the study questionnaire will be composed of 31 questions for patients and 24 for physicians, while at T2 and T3, it will have 21 questions for patients and 19 for physicians. Based on the metadata of patients who have used Opal, these questionnaires should take less than 10 minutes to complete, considering that patients take 10–15 seconds per question, at first completion. At T1, the questionnaire will ask about socio-demographics (e.g., year of birth, preferred language, sex, sexual orientation, ethnic group identity, immigration, education, income) and digital technology use as well as pose general health questions for patients (year of diagnosis with HIV, treatment satisfaction) and clinical practice questions for physicians (years practicing in HIV, current number of HIV patients) (For the full content of Time 1 study questionnaires, see S2 Appendix). The measures of digital technology use are as follows: frequency of mobile device use (adapted from Schnall et al. [49]), having a health app on one’s mobile device [50], extent of health app use (adapted from Balapour et al. [51]), confidence in reporting medical information using mobile technology [51], and intention to report personal health data with a mobile device app, if asked by a provider [51]. This information will help contextualize the findings and describe the sample. Clinical data, namely, HIV viral load in copies/mL, to determine viral suppression, will be extracted from patients’ medical health record at the clinic, at T1 and T3.

Qualitative component.

The qualitative component will have three sources of data: 1) 1-hour focus groups with all physicians (T2, T3); 2) 45-minute interviews with patients (T2, T3), until core theme saturation (an intermediate sample of 15 should be sufficient at each time point [52]); and 3) the Application Manager’s field notes, recorded on a standardized form (T1–T3). Focus groups and patient interviews will be conducted by an experienced interviewer and, if possible, through a teleconferencing platform such as Zoom. Participants will have the option of accessing the teleconference by telephone or the Internet. The patient’s name will not be shown. Audio recordings of the focus groups and interviews will be transcribed manually, with nominal (identifying) information removed. Each will be guided by a similar semi-structured interview schedule, in English or French, depending on preferred language. It will ask about the participants’ experience with I-Score use and its implementation, as well as about facilitating and impeding factors. The schedule of study procedures for patients and physicians can be found in Table 1.

Study metrics and instruments.

Details on the constructs assessed; the instruments and metrics used; the chosen thresholds for success, if applicable; the participant group contributing data; and the timing of data collection are presented in Table 2. Many of the chosen metrics are based on those recommended by Stover et al. [26]. Importantly, the authors emphasize the need to standardize evaluation metrics in patient-reported measure implementation and to distinguish between those used to assess perceptions of the innovation and those used to assess the implementation strategy. Not meeting the set thresholds for success, in this study, will signify that modifications are necessary before proceeding to a definitive trial [53].

Table 2. Implementation science metrics and effectiveness outcomes collected for the pilot study.

https://doi.org/10.1371/journal.pone.0261006.t002

Objective 1 – evaluate perceptions of the I-Score intervention.

The primary outcome of this pilot study will be acceptability, as measured by an adapted version of the Acceptability E-scale (AES) for web-based PROMs (alpha coefficient: .76) [54]. Acceptability is related to how agreeable, palatable, or satisfactory an intervention is perceived to be by stakeholders [59]. The AES will be administered to both patient and physician participants at T1, T2, and T3. The scale has 6 items rated on a 5-point Likert scale that varies depending on the item. Example items of the original measure include “How would you rate your overall satisfaction with this computer program?” and “How easy was this computer program […] for you to use?” A summary score is obtained by adding the item scores (range: 6–30). A score of at least 24 (80% of maximum) indicates high acceptability and usability, as suggested by the scale developers.
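
The AES summary scoring and its success threshold, as described above, could be computed as in this small sketch (the item values are hypothetical):

```python
# Acceptability E-scale summary: six items on 5-point scales, summed to
# a 6-30 range; >= 24 (80% of maximum) is taken as high acceptability.

def aes_summary(item_scores):
    assert len(item_scores) == 6 and all(1 <= s <= 5 for s in item_scores)
    total = sum(item_scores)   # range 6-30
    return total, total >= 24  # threshold suggested by the scale developers

score, acceptable = aes_summary([5, 4, 4, 5, 3, 4])
print(score, acceptable)       # 25 True
```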

Acceptability will also be measured at T1, T2, and T3 with a variant of the Net Promoter Score (NPS) used by England’s National Health Service (NHS) and labelled the Friends and Family Test [55]. The NPS is considered a measure of user satisfaction [60]. A single question will be asked (“How likely are you to recommend the I-Score?”) and rated on a 5-point Likert scale (1 = Extremely unlikely; 2 = Unlikely; 3 = Neither likely nor unlikely; 4 = Likely; 5 = Extremely likely). From this measure, the percentage recommending the I-Score will be calculated (score of 4 or 5), with a success threshold of 80% or more. An NPS-type score will also be calculated by creating three groups: promoters (score of 5), passives (score of 4), and detractors (score of 1–3). Subtracting the percentage of detractors from that of promoters provides the NPS. NPS scores range from -100 to 100. A positive score (> 0) will be considered good [61], and a score of ≥ 50, excellent.
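
The two indicators derived from this single item might be computed as follows (the ratings are illustrative only):

```python
# Recommendation rate and NPS-type score from the 1-5 "likelihood to
# recommend" item described above.

def recommend_rate(ratings):
    """Percent rating 4 (Likely) or 5 (Extremely likely); success >= 80%."""
    return 100 * sum(r >= 4 for r in ratings) / len(ratings)

def nps(ratings):
    """% promoters (5) minus % detractors (1-3); range -100 to 100."""
    promoters = sum(r == 5 for r in ratings)
    detractors = sum(r <= 3 for r in ratings)
    return 100 * (promoters - detractors) / len(ratings)

ratings = [5, 5, 4, 4, 3, 5, 2, 4]
print(recommend_rate(ratings))  # 75.0 -> below the 80% success threshold
print(nps(ratings))             # 12.5 -> positive ("good"); >= 50 is "excellent"
```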

Appropriateness concerns the perceived fit or relevance of the intervention for the particular users, setting, or problem at hand [59]. It will be measured, at T1, T2, and T3, with two instruments. One concerns the perceived compatibility of the I-Score with the physicians’ work. The perceived compatibility of an information technology innovation broadly relates to how consistent it is perceived to be with the potential users’ values, needs, and past experiences [56]. It will only be collected from physicians, with a compatibility subscale developed by Moore and Benbasat (alpha coefficient: .86) [56]. It contains four items (e.g., “Using [the IT innovation] is compatible with all aspects of my work”, “Using [the IT innovation] fits into my work style”). These are rated on a 7-point Likert scale, from Extremely disagree to Extremely agree, and averaged to produce the subscale score (range: 1–7). A minimum average score of 5.5 is the threshold set for compatibility.

In addition, a 4-item scale, the Appropriateness of Intervention Measure [57], will be completed by all participants (alpha coefficient: .91). Example items include “This [evidence-based practice] seems fitting” and “This [evidence-based practice] seems like a good match.” Items are scored on a five-point scale of agreement, from 1 = Completely disagree to 5 = Completely agree and averaged for a total score (range: 1–5). An average score of at least 4 will indicate adequate appropriateness with this instrument.
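
Both appropriateness instruments share the same averaged-subscale logic, sketched below against the protocol’s thresholds (the item ratings are hypothetical):

```python
# Sketch of the averaged-subscale scoring used by the two
# appropriateness instruments; item ratings are invented for illustration.

def subscale_mean(items, scale_max):
    """Average Likert ratings (1..scale_max) into a subscale score."""
    assert all(1 <= s <= scale_max for s in items)
    return sum(items) / len(items)

# Compatibility subscale: four 7-point items, success threshold 5.5.
compatibility = subscale_mean([6, 5, 6, 7], scale_max=7)    # 6.00 -> met
# Appropriateness of Intervention Measure: four 5-point items, threshold 4.
appropriateness = subscale_mean([4, 5, 4, 4], scale_max=5)  # 4.25 -> met
print(compatibility >= 5.5, appropriateness >= 4)           # True True
```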

Feasibility relates to the extent to which our I-Score intervention is successfully used or carried out within the study site [59]. To determine feasibility, data will be collected on the consent rate, defined as the proportion of approached eligible patients and physicians who consent to participate. Individuals who choose not to participate will be asked to provide select sociodemographic information (sex, year of birth, preferred language) and their reason(s), with a checklist on a refusal form. If 70% or more agree to participate, the study will be judged feasible on this aspect. We will also examine the retention rate, indicated by the proportion of patients and physicians who complete the study. Eighty percent will be considered the benchmark for success. Rates of missing I-Score data due to network failure, as well as of patient and provider non-completion of self-reported questionnaire data, will also be calculated. The criterion for success on this metric is at least 90% of items completed per participant. Furthermore, participants will complete the Feasibility of Intervention Measure (alpha coefficient: .89) [57] at T1, T2, and T3, a four-item self-report measure that is appropriate for different stakeholder groups (e.g., patients, providers). Example items include “This [evidence-based practice] seems possible” and “This [evidence-based practice] seems doable.” Average scores of at least 4 (range: 1–5), indicative of agreement on the 5-point response scale, will signify the I-Score intervention’s feasibility.

Fidelity is the degree to which the intervention was implemented as specified in the protocol [59]. It will be indicated by patient and provider adherence to core components of the intervention. Thresholds for success, from T1 to T3, are: 90% patient completion of the I-Score prior to meeting with the physician; 90% provider review of the patient’s I-Score results prior to or during the clinic visit.
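
A brief sketch of how these feasibility and fidelity rates could be checked against their success thresholds follows; all counts are invented for illustration:

```python
# Illustrative check of the feasibility and fidelity metrics against the
# protocol's success thresholds; every count below is hypothetical.

def rate(numerator, denominator):
    return 100 * numerator / denominator

checks = {
    "consent rate (>= 70%)":           (rate(30, 40), 70),  # consented / approached
    "retention rate (>= 80%)":         (rate(28, 32), 80),  # completed / enrolled
    "items completed (>= 90%)":        (rate(19, 20), 90),  # per participant
    "I-Score done pre-visit (>= 90%)": (rate(29, 32), 90),  # fidelity, patients
    "MD reviewed results (>= 90%)":    (rate(30, 32), 90),  # fidelity, physicians
}
for name, (observed, threshold) in checks.items():
    status = "met" if observed >= threshold else "not met"
    print(f"{name}: {observed:.1f}% -> {status}")
```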

Objective 2 – evaluate the implementation strategy.

Evaluation of the implementation strategy will be performed in relation to the same constructs as for the first objective. However, the assessment of acceptability, appropriateness, and fidelity will be solely based on analysis of qualitative data (see Table 2). As to feasibility, it will be assessed in terms of the rate of technical issues encountered and recorded in the Application Manager’s notes, and the percentage of providers who participated in the implementation activities (i.e., education meeting, focus groups), with a success threshold set at 80% or more.

Objective 3 – determine preliminary intervention effectiveness.

This pilot study will collect data on one service outcome, patient management. It will be verified with a checklist submitted to participating physicians to allow them to record, per patient encounter, if they received the I-Score results on time, if they reviewed them prior to or during the clinic visit, if they were discussed during the visit, and if the I-Score identified concerning barriers. Then, they will check off any clinical actions that were taken based on the I-Score results or any related patient-provider discussion (e.g., recording issues in medical notes, referring to another health professional, ordering a test, changing a medication or treatment, providing advice or education).

The patient outcomes assessed in this pilot are barriers to adherence, self-reported adherence, and viral load. Barriers to ART adherence will be assessed at T1, T2, and T3 with our previously described PROM. Adherence will be examined with the Self-Rating Scale Item (SRSI) [58] at T1, T2, and T3. It is a one-item measure of treatment adherence (“Rate your ability to take all your medications as prescribed” over the past 4 weeks), rated on a 6-point scale (Very poor, Poor, Fair, Good, Very good, and Excellent). Viral load, a clinical indicator of viral activity (e.g., infectiousness) and treatment response, will be treated as a dichotomous variable based on whether, as indicated in the patient’s medical file, the HIV RNA viral load is detectable (over 50 copies/mL) or not. Undetectability is a goal of HIV treatment. The most recent viral load test result at T1 and T3 will be collected.

Data analysis

The period of qualitative and quantitative data analysis is projected to extend from approximately July 2021 to April 2022.

Quantitative analysis.

Time 1 questionnaires for people living with HIV and for HIV physicians will be summarized with descriptive statistics. For continuous variables, the minimum, the maximum, the mean, and the standard deviation will be reported. For ordinal and nominal qualitative variables, we will report absolute and relative frequencies (proportions).

As specified, for most quantitative metrics relating to Objectives 1 and 2, score targets were set, as recommended for pilot studies, to evaluate the ability to proceed to a definitive trial [53]. For people living with HIV and HIV physicians, continuous outcomes expressed on a Likert scale will be summarized with the minimum, the maximum, the mean, and the standard deviation at T1, T2, and T3. Binary outcomes (yes or no) will be reported with absolute and relative frequencies (proportions) at T1, T2, and T3. The means and proportions at T1, T2, and T3 will be compared with their corresponding thresholds for success, presented in Table 2. To examine the trend in means and proportions for people living with HIV over time, a Linear Mixed Model or a Generalized Linear Mixed Model will be used, for continuous and binary outcomes, respectively. The response variable of each model will be the corresponding outcome and the independent variable will be time (T1, T2, and T3). The null hypothesis of no time effect on the corresponding outcome will be tested with a Student’s t-test on the regression coefficient. If the null hypothesis is rejected, we will perform post-hoc Student’s paired t-tests between all combinations of time points to show between which time points means and proportions differed significantly. Additionally, to verify if each threshold for success is met at the end of the study, we will test the null hypothesis that each mean or proportion at T3 is below its corresponding threshold with a Student’s t-test. For all analyses, a significance level of 5% will be adopted. Finally, where appropriate, Cronbach’s alpha will be calculated to evaluate the internal consistency of subscales.
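
For the continuous outcomes, the linear mixed model described above might be specified as in this minimal sketch, using statsmodels with a random intercept per patient; the column names ("patient_id", "time", "aes_total") and data values are assumptions for illustration, and the binary outcomes would instead require a generalized linear mixed model:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative long-format data; not the study's actual dataset.
data = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "time":       ["T1", "T2", "T3"] * 4,
    "aes_total":  [24, 26, 27, 22, 25, 26, 20, 23, 24, 25, 27, 28],
})

# Random intercept per patient; time entered as a categorical fixed effect.
model = smf.mixedlm("aes_total ~ C(time)", data, groups=data["patient_id"])
result = model.fit()
print(result.summary())  # tests on the T2 and T3 coefficients relative to T1
```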

Regarding the patient outcomes of Objective 3, barriers to adherence and adherence will be summarized with the minimum, the maximum, the mean, and the standard deviation. Viral load will be reported by absolute and relative frequencies, as it is considered a dichotomous variable. For the service outcome obtained from the physician checklist, we will report the proportion of clinic visits when physicians took action based on the I-Score results, among the visits where an adherence barrier of concern was identified by physicians. Proportions will be reported for T1, T2, and T3 and globally, across time periods. To evaluate evidence of a statistically significant difference in our chosen effectiveness outcomes, we will run a Student’s paired t-test for barriers to adherence and adherence and a McNemar test for viral load, between T1 and T3. To complete the analysis of service outcomes, we will use a logistic regression model, considering only the visits where an adherence barrier of concern was identified by physicians. The dependent variable is the binary variable of whether or not an action was taken by physicians and the independent variable is time, considered as a factor with three levels (T1, T2, and T3). We will test the null hypothesis that time has no effect on the probability of taking action, with a t-test on the regression coefficient. We will conclude the analysis by testing the null hypothesis of equality of proportions between all pairwise combinations of time points, with a Student’s t-test between two proportions, performing a Bonferroni correction for multiple tests. For all analyses, a significance level of 5% will be adopted.
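
The T1-versus-T3 comparisons named above could be run as follows; the data are illustrative only:

```python
import numpy as np
from scipy.stats import ttest_rel
from statsmodels.stats.contingency_tables import mcnemar

# Paired t-test for a continuous outcome (e.g., SRSI adherence ratings);
# values are invented for illustration.
adherence_t1 = [3, 4, 2, 3, 4, 3, 5, 4]
adherence_t3 = [4, 4, 3, 4, 5, 3, 5, 5]
print(ttest_rel(adherence_t3, adherence_t1))

# McNemar's test on the paired dichotomous viral-load variable:
# rows = detectable at T1 (yes/no), columns = detectable at T3 (yes/no).
table = np.array([[3, 7],    # 7 patients moved from detectable to undetectable
                  [1, 21]])  # 1 patient moved from undetectable to detectable
print(mcnemar(table, exact=True))
```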

Qualitative analysis.

The study’s qualitative material (i.e., focus groups, interviews, Application Manager notes) will be submitted to content analysis [62], focusing on the manifest content. Deductive content analysis will be favored, allowing identified implementation barriers and facilitators to be categorized with an existing framework, while remaining open to emerging categories. Deductive content analysis allows categories to be compared at different periods, fitting with the study’s longitudinal design [63]. For this purpose, the Consolidated Framework for Implementation Research (CFIR) will be used [30]. Analysis will involve three phases [62]: 1) preparation, when the analyst attempts to get a sense of the entire dataset through immersion in the data; 2) organizing, during which an unconstrained categorization matrix will be devised with the CFIR’s constructs, and the data will be coded accordingly. At this point, the qualitative data management software, Atlas.ti version 8, will be used to code and categorize the material; and 3) reporting, which involves presenting the described contents (meanings) of the categories and addressing trustworthiness [62]. A product of these analyses will be matrices of facilitators, barriers, and potential solutions raised by patients and physicians, at each main qualitative data collection period, using the CFIR. These will allow for the tracking of categories over time [64], to help identify patterns. Two trained coders will be involved in the qualitative analyses, which will be discussed during periodic team meetings, including any discrepancies in coding or interpretation.

For the cyclical small tests of change of the implementation strategy, consistent with the approach by Keith et al. [48], the qualitative data will be coded and categorized with the CFIR. We will further structure and document our cyclical small tests of change by drawing on the iterative Plan-Do-Study-Act (PDSA) approach for quality improvement [65]. During the ‘plan’ stage, the stakeholder feedback collected will help to periodically identify and document factors that are affecting the intervention and associated changes to the implementation and/or peripheral components of the intervention that could lead to improvement. Related predictions will be explicitly articulated [65]. During the ‘do’ stage, changes will be tested. At the ‘study’ stage, the extent of the change’s success will be evaluated against the prediction(s) and documented with subsequent qualitative or quantitative data, per the study’s design, and the Application Manager’s field notes. The ‘act’ stage will see further adaptations, depending on the successfulness of the change, and/or the initiation of another cycle of change. For each PDSA cycle undertaken, all decisions and relevant information will be recorded, following the PDSA theoretical framework developed by Taylor et al. [65].

Mixed methods analysis.

The quantitative and qualitative data will be analyzed separately and subsequently brought together for comparison, for a more complete interpretation of the results. Areas of convergence and divergence will be highlighted.

Data management

The Opal team (TH, YM) will oversee data management for this study. Patient-reported data will be electronically collected directly through Opal. This data will be stored in a local server protected by the MUHC. The Opal team will manage patient information through the Opal app, always respecting data security and confidentiality. For this study’s purpose, relevant deidentified data will be extracted by the team and stored on a password protected USB key or hard drive for subsequent analysis.

Discussion

Electronic PROM use for clinical practice with individuals living with HIV is limited. To our knowledge, this is the first pilot study of an intervention aimed at implementing systematic consideration of patient-identified ART adherence barriers with an electronic PROM in routine HIV care. This pilot study will help build needed knowledge on impediments to and strategies for implementing these tools in HIV care [1]. As such, it acts as a standalone study, providing useful and rich data to others considering similar interventions in similar contexts. It will generate data that will improve understanding of conditions for successful implementation as well as test and solidify the implementation strategy. Furthermore, it may shed light on the mechanisms of similar PROM interventions. Overall, it will produce useful data to design a definitive effectiveness trial of the I-Score intervention.

Anticipated problems

There are many potential barriers to implementing PROMs in care, such as provider reticence (e.g., due to concerns for increased workload). Our multi-pronged implementation strategy directly seeks to mitigate numerous common barriers to implementing PROMs in clinical practice [43]. For details, see S1 Table. Also in our favor is the study site (CVIS), which is a highly active center for HIV research where several of its investigators are experienced in implementation science methods.

With the use of self-reported adherence measures, there are accuracy concerns. These measures are known to be prone to recall bias, if not misremembering, given, for instance, the mundane nature of medication-taking [66,67]. They are also deemed vulnerable to social desirability bias, if not intentional deception, given, for example, patient beliefs about the consequences of admitting adherence problems [66,67]. These processes could influence the types of adherence barriers people are able or willing to report when completing our PROM. Conversely, PROM administration in routine care is recognized to give patients permission to raise health problems with their providers [68]. Furthermore, our understanding is that barriers are multidimensional and interconnected, such that a common barrier like forgetting can be associated with numerous others (e.g., substance use, HIV stigma, life demands, co-morbidity) [20]. Hence, while our PROM is not expected to capture the full details of an individual’s barriers, a fuller portrait may emerge through the conversations the PROM instigates with providers. Our team has also endeavored to involve people living with HIV, with a range of methods, throughout the development of our instrument (via committee meetings, cognitive interviewing, etc.) to ensure its relevance, acceptability, and appropriateness for use in HIV care. This pilot study will allow us to further gauge the utility of the information provided by the PROM and to potentially make adjustments to improve accuracy.

Finally, an added concern is the continued spread of COVID-19. Physicians have been advised to use telemedicine and teleconsultations, whenever possible, to limit the spread of COVID-19 [69,70]. Given the uncertain evolution of the pandemic and associated public health response, methodological adjustments to this study may be required, for instance, to further limit in-person participant visits with physicians and research team members.

Conclusion

The PROM initiative at the center of this study challenges traditional care paradigms with a more patient-centered approach. It aims to shift an HIV treatment paradigm that emphasizes biomedical markers (i.e., viral load) in adherence management. Systematic monitoring of patient-reported adherence barriers could allow for a more preventative approach and help ensure that adherence management addresses patients’ priorities. Indeed, the I-Score PROM includes only the most highly valued barriers in terms of relevance and importance to HIV care, as rated by people living with HIV and providers in our Delphi consultation [42]. As for the app through which the PROM is administered, its features may help redress the patient-provider knowledge imbalance and empower patients in their care [71].

Supporting information

S1 Checklist. SPIRIT 2013 checklist: Recommended items to address in a clinical trial protocol and related documents*.

https://doi.org/10.1371/journal.pone.0261006.s001

(DOC)

S1 Table. Relationship between the identified facilitators/barriers of PROM implementation, the implementation framework (CFIR), and the pilot study’s implementation strategies.

PROM = patient-reported outcome measure; CFIR = Consolidated Framework for Implementation Research; a Reproduced or adapted from Foster et al. [43]; b Based on Damschroder et al. [30]; c Based on the taxonomies of Powell et al. [44,45].

https://doi.org/10.1371/journal.pone.0261006.s002

(DOCX)

S2 Appendix. Patient and physician Time 1 study questionnaires.

https://doi.org/10.1371/journal.pone.0261006.s004

(DOCX)

S1 Dataset. World Health Organization Trial Registration Data Set for study CTNPT039.

https://doi.org/10.1371/journal.pone.0261006.s005

(DOCX)

S1 File. The I-Score/Opal implementation pilot study.

https://doi.org/10.1371/journal.pone.0261006.s006

(PDF)

Acknowledgments

We wish to thank the patients and health and social service providers who participated in the development of the I-Score PROM and in adapting Opal for HIV care. On these projects, we also thank the Quebec SPOR Support Unit – McGill Methodological Developments Platform for sharing their expertise and resources.

References

  1. 1. Kall M, Marcellin F, Harding R, Lazarus JV, Carrieri P. Patient-reported outcomes to enhance person-centred HIV care. The Lancet HIV. 2020;7(1):68. pmid:31776101
  2. 2. Kjær ASHK Rasmussen TA, Hjollund NH Rodkjaer LO, Storgaard M. Patient-reported outcomes in daily clinical practise in HIV outpatient care. Int J Infect Dis. 2018;69:108–114. pmid:29476900
  3. 3. Boyce M, Browne J. Does providing feedback on patient-reported outcomes to healthcare professionals result in better outcomes for patients? A systematic review. Qual Life Res. 2013;22(9):2265–2278. pmid:23504544
  4. 4. Valderas JM, Kotzeva A, Espallargues M, Guyatt G, Ferrans CE, Halyard MY, et al. The impact of measuring patient-reported outcomes in clinical practice: a systematic review of the literature. Qual Life Res. 2008;17(2):179–193. pmid:18175207
  5. 5. Ishaque S, Karnon J, Chen G, Nair R, Salter AB. A systematic review of randomised controlled trials evaluating the use of patient-reported outcome measures (proms). Qual Life Res. 2019;28(3):567–592. pmid:30284183
  6. 6. Rutherford C, Campbell R, King M, Speerin R, Soars L, Butcher A, et al. Implementing patient-reported outcome measures into clinical practice across nsw: mixed methods evaluation of the first year. Applied Research in Quality of Life. 2020.
  7. 7. Marandino L, Necchi A, Aglietta M, Di Maio M. COVID-19 emergency and the need to speed up the adoption of electronic patient-reported outcomes in cancer clinical practice. JCO Oncol Pract. 2020;16(6):295–298. pmid:32364846
  8. 8. Bezabhe WM, Chalmers L, Bereznicki LR, Peterson GM. Adherence to antiretroviral therapy and virologic failure: a meta-analysis. Medicine (Baltimore). 2016;95(15):e3361. pmid:27082595
  9. 9. Ortego C, Huedo-Medina TB, Llorca J, Sevilla L, Santos P, Rodriguez E, et al. Adherence to highly active antiretroviral therapy (HAART): a meta-analysis. AIDS Behav. 2011;15(7):1381–1396. pmid:21468660
  10. 10. Cohen J, Beaubrun A, Bashyal R, Huang A, Li J, Baser O. Real-world adherence and persistence for newly-prescribed HIV treatment: single versus multiple tablet regimen comparison among US medicaid beneficiaries. AIDS Res Ther. 2020;17:12. pmid:32238169
  11. 11. Clay PG, Yuet WC, Moecklinghoff CH, Duchesne I, Tronczyński KL, Shah S, et al. A meta-analysis comparing 48-week treatment outcomes of single and multi-tablet antiretroviral regimens for the treatment of people living with HIV. AIDS Res Ther. 2018 Oct 30;15(1):17. pmid:30373620
  12. 12. Department of Health and Human Services (DHHS). Panel on Antiretroviral Guidelines for Adults and Adolescents. Guidelines for the Use of Antiretroviral Agents in Adults and Adolescents Living with HIV. http://www.aidsinfo.nih.gov/ContentFiles/ AdultandAdolescentGL.pdf. (accessed April 7, 2020).
  13. 13. Barfod TS, Hecht FM, Rubow C, Gerstoft J. Physicians’ communication with patients about adherence to HIV medication in San Francisco and Copenhagen: a qualitative study using Grounded Theory. BMC Health Serv Res. 2006;6:154. pmid:17144910
  14. 14. Beach MC, Roter DL, Saha S, Korthuis PT, Eggly S, Cohn J, et al. Impact of a brief patient and provider intervention to improve the quality of communication about medication adherence among HIV patients. Patient Educ Couns. 2015;98(9):1078–1083. pmid:26021185
  15. 15. Malta M, Petersen ML, Clair S, Freitas F, Bastos FI. Adherence to antiretroviral therapy: a qualitative study with physicians from Rio de Janeiro, Brazil. Cad Saúde Pública. 2005;21(5):1424–1432. pmid:16158148
  16. 16. Laws MB, Beach MC, Lee Y, Rogers WH, Saha S, Korthuis PT, et al. Provider-patient adherence dialogue in HIV care: results of a multisite study. AIDS Behav. 2013;17(1):148–159. pmid:22290609
  17. 17. Wilson IB, Laws MB, Safren SA, Lee Y, Lu M, Coady W, et al. Provider-focused intervention increases adherence-related dialogue but does not improve antiretroviral therapy adherence in persons with HIV. J Acquir Immune Defic Syndr. 2010;53(3):338–347. pmid:20048680
  18. 18. Fredericksen R, Crane PK, Tufano J, Ralston J, Schmidt S, Brown T, et al. Integrating a web-based, patient-administered assessment into primary care for HIV-infected adults. J AIDS HIV Res. 2012;4(2):47–55. pmid:26561537
  19. 19. Miller LG, Liu H, Hays RD, Golin CE, Beck CK, Asch SM, et al. How well do clinicians estimate patients’ adherence to combination antiretroviral therapy?. J Gen Intern Med. 2002;17(1):1–11. pmid:11903770
  20. 20. Engler K, Lènàrt A, Lessard D, Toupin I, Lebouché B. Barriers to antiretroviral therapy adherence in developed countries: a qualitative synthesis to develop a conceptual framework for a new patient-reported outcome measure. Aids Care. 2018;30(Sup1):17–28. pmid:29719990
  21. 21. Shubber Z, Mills EJ, Nachega JB, Vreeman R, Freitas M, Bock P, et al. Patient-reported barriers to adherence to antiretroviral therapy: A systematic review and meta-analysis. PLoS Med. 2016;13(11):e1002183. pmid:27898679
  22. 22. Genberg BL, Lee Y, Rogers WH, Wilson IB. Four types of barriers to adherence of antiretroviral therapy are associated with decreased adherence over time. AIDS Behav. 2015;19(1):85–92. pmid:24748240
  23. 23. Lessard D, Engler K, Toupin I, Routy J-P, Lebouché B. Evaluation of a project to engage patients in the development of a patient-reported measure for HIV care (the I-score study). Health Expectations. 2019;22(2):209–225. pmid:30375111
  24. 24. Toupin I, Engler K, Lessard D, Wong L, Lènàrt A, Spire B, et al. Developing a patient-reported outcome measure for HIV care on perceived barriers to antiretroviral adherence: assessing the needs of HIV clinicians through qualitative analysis. Qual Life Res. 2018;27(2):379–388. pmid:29027607
  25. 25. Kildea J, Battista J, Cabral B, Hendren L, Herrera D, Hijal T, et al. Design and development of a person-centered patient portal using participatory stakeholder co-design. Journal of Medical Internet Research. 2019;21(2):11371. pmid:30741643
  26. 26. Stover AM, Haverman L, van Oers HA, Greenhalgh J, Potter CM; ISOQOL. Using an implementation science approach to implement and evaluate patient-reported outcome measures (PROM) initiatives in routine care settings. Qual Life Res. 2020. Epub 2020 Jul 10. pmid:32651805
  27. 27. Murray E, Hekler EB, Andersson G, Collins LM, Doherty A, Hollis C, et al. Evaluating digital health interventions: key questions and approaches. Am J Prev Med. 2016;51(5):843–851. pmid:27745684
  28. 28. Greenhalgh J, Dalkin S, Gooding K, Gibbons E, Wright J, Meads D, et al. Functionality and feedback: a realist synthesis of the collation, interpretation and utilisation of patient-reported outcome measures data to improve patient care. Health Serv Deliv Res. 2017;5(2). pmid:28121094
  29. 29. Greenhalgh J, Long AF, Flynn R. The use of patient reported outcome measures in routine clinical practice: lack of impact or lack of theory? Social Sci Med. 2005;60:833e43. pmid:15571900
  30. 30. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implementation Science. 2009;4(1). pmid:19664226
  31. 31. Santana MJ, Feeny D. Framework to assess the effects of using patient-reported outcome measures in chronic care management. Qual Life Res. 2014;23(5):1505–13. pmid:24318085
  32. 32. Curran GM, Bauer M, Mittman B, Pyne JM, Stetler C. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care. 2012;50(3):217–226. pmid:22310560
  33. 33. Peters DH, Tran NT, Adam T. Implementation research in health: a practical guide. Alliance for Health Policy and Systems Research. World Health Organization 2013.
  34. 34. Creswell JW, Plano-Clark VL. Designing and Conducting Mixed Methods Research. 2nd Edition. London, UK: Sage Publications 2011.
  35. 35. O’Cathain A, Murphy E, Nicholl J. The quality of mixed methods studies in health services research. J Health Serv Res Policy. 2008;13(2):92–98. pmid:18416914
  36. 36. Pinnock H, Barwick M, Carpenter CR, Eldridge S, Grandes G, Griffiths CJ, et al. Standards for reporting implementation studies (StaRI) statement. BMJ. 2017;356:i6795. pmid:28264797
  37. 37. Cox J, Linthwaite B, Engler K, Lessard D, Lebouché B, Kronfli N. A type ii implementation-effectiveness hybrid quasi-experimental pilot study of a clinical intervention to re-engage people living with hiv into care, ’lost & found’: an implementation science protocol. Pilot and Feasibility Studies. 2020;6:29–29. pmid:32110432
  38. 38. Sim J, Lewis M. The size of a pilot study for a clinical trial should be calculated in relation to considerations of precision and efficiency. Journal of Clinical Epidemiology. 2011;65:301–308. pmid:22169081
  39. 39. Institut national de santé publique du Québec [INSPQ]. Expert Consensus: Viral Load and the Risk of HIV Transmission. Report. Direction des risques biologiques et de la santé au travail, 2014. Retrieved November 25, 2020 from: https://www.inspq.qc.ca/pdf/publications/1987_Viral_Load_HIV_Transmission.pdf.
  40. 40. Public Health Agency of Canada. Summary: Estimates of HIV Incidence, Prevalence and Canada’s Progress on Meeting the 90-90-90 HIV targets, 2016. Public Health Agency of Canada, 2018. Retrieved August 7, 2020 from: https://www.canada.ca/en/public-health/services/publications/diseases-conditions/summary-estimates-hiv-incidence-prevalence-canadas-progress-90-90-90.html.
  41. 41. Ministère de la santé et des services sociaux. La thérapie antirétrovirale pour les adultes infectés par le VIH—Guide pour les professionnels de la santé du Québec. Gouvernement du Québec. Last updated February 2, 2019. Retrieved August 7, 2020 from: https://publications.msss.gouv.qc.ca/msss/document-000733/?&date=DESC&sujet=vih-sida&critere=sujet.
42. Engler K, Ahmed S, Lessard D, Vicente S, Lebouché B. Assessing the content validity of a new patient-reported measure of barriers to antiretroviral therapy adherence for electronic administration in routine HIV care: proposal for a web-based Delphi study. JMIR Res Protoc. 2019;8(8):e12836. pmid:31376275
43. Foster A, Croot L, Brazier J, Harris J, O’Cathain A. The facilitators and barriers to implementing patient reported outcome measures in organisations delivering health related services: a systematic review of reviews. J Patient Rep Outcomes. 2018;2:46. pmid:30363333
44. Powell BJ, McMillen JC, Proctor EK, Carpenter CR, Griffey RT, Bunger AC, et al. A compilation of strategies for implementing clinical innovations in health and mental health. Med Care Res Rev. 2012;69(2):123–157. pmid:22203646
45. Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, et al. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10:21. pmid:25889199
46. Mercieca-Bebber R, Palmer MJ, Brundage M, Calvert M, Stockler MR, King MT. Design, implementation and reporting strategies to reduce the instance and impact of missing patient-reported outcome (PRO) data: a systematic review. BMJ Open. 2016;6:e010938. pmid:27311907
47. International Society for Quality of Life Research [ISOQOL] (prepared by Aaronson N, Choucair A, Elliott T, Greenhalgh J, Halyard M, Hess R, et al.). User’s Guide to Implementing Patient-Reported Outcomes Assessment in Clinical Practice. Version: November 11, 2011.
48. Keith RE, Crosson JC, O’Malley AS, Cromp D, Taylor EF. Using the Consolidated Framework for Implementation Research (CFIR) to produce actionable findings: a rapid-cycle evaluation approach to improving implementation. Implement Sci. 2017;12:15. pmid:28187747
49. Schnall R, Cho H, Liu J. Health Information Technology Usability Evaluation Scale (Health-ITUES) for usability assessment of mobile health technology: validation study. JMIR Mhealth Uhealth. 2018;6(1):e4. pmid:29305343
50. Mahmood A, Kedia S, Wyant DK, Ahn S, Bhuyan SS. Use of mobile health applications for health-promoting behavior among individuals with chronic medical conditions. Digit Health. 2019;5:2055207619882181. pmid:31656632
51. Balapour A, Reychav I, Sabherwal R, Azuri J. Mobile technology identity and self-efficacy: implications for the adoption of clinically supported mobile health apps. International Journal of Information Management. 2019;49:58–68.
52. Hennink MM, Kaiser BN, Marconi VC. Code saturation versus meaning saturation: how many interviews are enough? Qual Health Res. 2017;27(4):591–608. pmid:27670770
53. Thabane L, Ma J, Chu R, Cheng J, Ismaila A, Rios LP, et al. A tutorial on pilot studies: the what, why and how. BMC Med Res Methodol. 2010;10:1. pmid:20053272
54. Tariman JD, Berry DL, Halpenny B, Wolpin S, Schepp K. Validation and testing of the Acceptability E-scale for web-based patient-reported outcomes in cancer care. Appl Nurs Res. 2011;24(1):53–58. pmid:20974066
55. National Health Service England. NHS England Review of the Friends and Family Test. 2014. Retrieved August 13, 2020 from https://www.england.nhs.uk/wp-content/uploads/2014/07/fft-rev1.pdf.
56. Moore G, Benbasat I. Development of an instrument to measure the perceptions of adopting an information technology innovation. Information Systems Research. 1991;2(3):192–222.
57. Weiner BJ, Lewis CC, Stanick C, Powell BJ, Dorsey CN, Clary AS, et al. Psychometric assessment of three newly developed implementation outcome measures. Implement Sci. 2017;12(1):1–12. pmid:28057027
58. Feldman BJ, Fredericksen RJ, Crane PK, Safren SA, Mugavero MJ, Willig JH, et al. Evaluation of the single-item self-rating adherence scale for use in routine clinical care of people living with HIV. AIDS Behav. 2013;17(1):307–318. pmid:23108721
59. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38(2):65–76. pmid:20957426
60. Hamilton DF, Lane JV, Gaston P, Patton JT, Macdonald DJ, Simpson AH, et al. Assessing treatment outcomes using a single question: the net promoter score. Bone Joint J. 2014;96-B(5):622–628. pmid:24788496
61. Stirling P, Jenkins PJ, Clement ND, Duckworth AD, McEachan JE. The Net Promoter Scores with Friends and Family Test after four hand surgery procedures. J Hand Surg Eur. 2019;44(3):290–295. pmid:30567459
62. Elo S, Kyngäs H. The qualitative content analysis process. J Adv Nurs. 2008;62(1):107–115. pmid:18352969
63. Vaismoradi M, Turunen H, Bondas T. Content analysis and thematic analysis: implications for conducting a qualitative descriptive study. Nurs Health Sci. 2013;15(3):398–405.
64. Damschroder L, Hall C, Gillon L, Reardon C, Kelley C, Sparks J, et al. The Consolidated Framework for Implementation Research (CFIR): progress to date, tools and resources, and plans for the future. Implement Sci. 2015;10(Suppl 1):A12.
65. Taylor MJ, McNicholas C, Nicolay C, Darzi A, Bell D, Reed JE. Systematic review of the application of the plan–do–study–act method to improve quality in healthcare. BMJ Qual Saf. 2014;23:290–298. pmid:24025320
66. Williams AB, Amico KR, Bova C, Womack JA. A proposal for quality standards for measuring medication adherence in research. AIDS Behav. 2013;17(1):284–297. pmid:22407465
67. Wilson IB, Carter AE, Berg KM. Improving the self-report of HIV antiretroviral medication adherence: is the glass half full or half empty? Curr HIV/AIDS Rep. 2009;6(4):177–186. pmid:19849960
68. Greenhalgh J, Gooding K, Gibbons E, Dalkin S, Wright J, Valderas J, et al. How do patient reported outcome measures (PROMs) support clinician-patient communication and patient care? A realist synthesis. J Patient Rep Outcomes. 2018;2:42. pmid:30294712
69. American Medical Association. AMA quick guide to telemedicine in practice. Updated June 22, 2020. Accessed July 3, 2020. https://www.ama-assn.org/practice-management/digital/ama-quick-guide-telemedicine-practice.
70. Royal College of Physicians and Surgeons of Canada. Telemedicine and virtual care guidelines (and other clinical resources for COVID-19). Updated May 21, 2020. Accessed July 3, 2020. http://www.royalcollege.ca/rcsite/documents/about/covid-19-resources-telemedicine-virtual-care-e#qc.
71. Rigby M, Georgiou A, Hyppönen H, Ammenwerth E, de Keizer N, Magrabi F, et al. Patient portals as a means of information and communication technology support to patient-centric care coordination–the missing evidence and the challenges of evaluation. Yearb Med Inform. 2015;24(1):148–159.