
Patient motivation as a predictor of digital health intervention effects: A meta-epidemiological study of cancer trials

  • Yuqian Yan,

    Roles Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Validation, Visualization, Writing – original draft, Writing – review & editing

    Affiliation Institute for Complementary and Integrative Medicine, University Hospital Zurich and University of Zurich, Zurich, Switzerland

  • Jesús López-Alcalde,

    Roles Conceptualization, Investigation, Methodology, Project administration, Writing – review & editing

    Affiliations Institute for Complementary and Integrative Medicine, University Hospital Zurich and University of Zurich, Zurich, Switzerland, Faculty of Medicine, Universidad Francisco de Vitoria (UFV), Madrid, Spain, Unidad de Bioestadística Clínica, Instituto Ramón y Cajal de Investigación Sanitaria (IRYCIS), CIBERESP, Hospital Universitario Ramón y Cajal, Madrid, Spain

  • Elena Stallings,

    Roles Investigation, Writing – review & editing

    Affiliation Unidad de Bioestadística Clínica, Instituto Ramón y Cajal de Investigación Sanitaria (IRYCIS), CIBERESP, Hospital Universitario Ramón y Cajal, Madrid, Spain

  • Elena Jimenez Tejero,

    Roles Investigation, Writing – review & editing

    Affiliations Faculty of Medicine, Universidad Francisco de Vitoria (UFV), Madrid, Spain, Unidad de Bioestadística Clínica, Instituto Ramón y Cajal de Investigación Sanitaria (IRYCIS), Hospital Universitario Ramón y Cajal, Madrid, Spain

  • Claudia M. Witt,

    Roles Investigation, Resources, Writing – review & editing

    Affiliations Institute for Complementary and Integrative Medicine, University Hospital Zurich and University of Zurich, Zurich, Switzerland, Charité – Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Institute of Social Medicine, Epidemiology and Health Economics, Berlin, Germany

  • Jürgen Barth

    Roles Conceptualization, Formal analysis, Investigation, Methodology, Project administration, Supervision, Validation, Visualization, Writing – review & editing

    mail@juergen-barth.de

    Affiliation Institute for Complementary and Integrative Medicine, University Hospital Zurich and University of Zurich, Zurich, Switzerland

Abstract

The objective of this meta-epidemiological study was to develop a rating that captures participants’ motivation at the study level in digital health intervention (DHI) randomised controlled trials (RCTs). The rating was used to investigate whether participants’ motivation is associated with the effect estimates in DHI RCTs for cancer patients. The development of the rating was based on a bottom-up approach involving the collection of information that captures participants’ baseline motivation in empirical studies from the Smartphone-RCCT Database. We specified three indicators for rating: indicator 1 captures whether the study team actively selects or enhances the motivation of the potential study participants; indicator 2 captures the study participants’ active engagement before the treatment allocation; and indicator 3 captures the potential bond and trust between the study participants and the person/institution referring them to the study. The rating of each indicator and the overall rating varies between high motivation, moderate motivation, and low motivation. We applied the rating across 27 DHI RCTs with cancer patients. We performed meta-regression analysis to examine the effect of patient motivation on quality of life (QoL), psychological outcomes, and attrition. The intraclass correlation coefficient (ICC) indicated moderate to poor inter-rater reliability. The meta-regression showed that cancer patients’ overall motivation before engaging in the intervention was associated with the treatment effect on QoL. Patient motivation was not found to be associated with psychological outcomes or attrition. Subgroup analyses revealed that the clinical effects of DHIs were more prevalent in the high-motivation subgroups, whereas the low-motivation subgroups were unlikely to show intervention benefits. The likelihood of dropout from DHIs seems to be especially high in the low-bond (indicator 3) subgroup. We suggest using single indicators since they reflect specific content. Better reporting about baseline motivation is required to enable meaningful interpretations not only in primary studies but also in evidence syntheses.

Introduction

Patient motivation plays an important role in disease management, particularly for patients with chronic health conditions who have to actively engage in and maintain health-enhancing behaviours (i.e., self-management) [1]. For chronic diseases, such as cancer, patients with higher motivation have better treatment adherence, improved treatment outcomes, and lower attrition rates [2–4]. Therefore, patient motivation has been considered a key factor for successful therapeutic interventions [5].

Digital health interventions (DHIs) show vast potential in promoting health behaviours and supporting patients and health care systems [6–9], as they enable self-management among patients with chronic conditions. However, many DHIs experience a high level of attrition [10]. For example, a previous large-scale DHI study had substantial dropout rates, with a mean engagement of only 4.1 days [11]. A systematic review of randomised controlled trials (RCTs) examining DHIs reported a pooled attrition rate of 24.1% at short-term follow-up and 35.5% at longer-term follow-up [12]. One of the reasons why participants do not engage with DHIs may be a lack of motivation at the start of the trial [13]. Moreover, it can be assumed that motivated participants are more likely to adhere to and engage with DHIs consistently, thereby enhancing the effectiveness of the intervention. In this regard, it is necessary to take note of patient motivation in self-care DHI trials. However, participants’ baseline motivation is rarely examined in DHI trials since there are few tools for measuring motivation in clinical settings [14]. It is not well understood how baseline motivation, as an effect modifier, influences the effectiveness and attrition rates in DHI trials. To address this gap, study-level information can be utilized as a proxy to develop a rating for participants’ motivation, enabling researchers to draw inferences about baseline motivation and its consequences for treatment outcomes.

The concept of patients’ motivation is ambiguous in most measurement tools; furthermore, the definition of motivation varies across different types of diseases and is influenced by different social and cultural backgrounds [14–17]. In our study, the definition of participants’ motivation closely aligns with Deci and Ryan’s self-determination theory on “intrinsic motivation” [18, 19], that is, internal drives such as core values and interests that influence participants’ own behaviour [20, 21]. We define participants’ motivation as their self-determination and intrinsic drive to carry out tasks of the offered DHI in order to achieve therapeutic goals. We are especially interested in an individual’s intrinsic motivation before participating in an intervention of a specific trial. We specifically look for proxies for participants’ intrinsic motivation in the recruitment and screening process (e.g., interpersonal bond and trust, committed effort), and we assess participants’ characteristics (e.g., expectations, inherent interest, personal values and goals, self-efficacy). We do not consider motivation that is associated with extrinsic rewards (e.g., financial benefits) or individuals’ conflicts (e.g., social pressure, shame).

The first objective of this meta-epidemiological study was to develop a rating that captures participants’ motivation at the study level in DHI RCTs. The second objective was to use this rating across a sample of studies to investigate whether participants’ motivation is associated with the effect estimates in DHI RCTs for cancer patients.

Methods

The study protocol [22] was prospectively registered on the Open Science Framework and is available at https://osf.io/8ns2q. The registration also contains the appendices detailing the rating manual and the rating tree diagram.

The purpose of the rating

The rating was developed to capture the motivation of a study sample enrolled in a DHI RCT (regardless of the control condition). The DHI should involve self-care, where participants play an active role in disease management. This means that DHIs relying solely on passive tracking devices were not eligible. Although the DHI is a self-care intervention, human guidance was allowed if it was provided indirectly through electronic channels.

Development of the rating

We extracted information about recruitment and baseline characteristics from a sample of 20 primary studies in the Smartphone-RCCT Database [23] (https://osf.io/nxerf/), which is hosted by our Institute. These 20 studies were selected based on different recruitment strategies to reflect the variability during the enrolment of patients. The development of the rating of participants’ motivation is based on a bottom-up approach by collecting a variety of study-level information. We identified different aspects of recruitment strategies and baseline characteristics from these 20 studies that might be associated with participants’ intrinsic motivation. Since the reporting of some potentially useful aspects for rating was weak, we ultimately specified three indicators serving as proxies that capture participants’ motivation at baseline and could be rated across all studies.

The first indicator (labelled “expectation”) captures whether the study team actively selects or enhances the motivation of the potential study participants. We assumed that the selection of highly motivated participants or the active communication to increase participants’ motivation or expectations is associated with better treatment outcomes [24].

The second indicator (labelled “effort”) captures study participants’ active engagement before the treatment allocation. We assumed that a higher demand for participants’ investment (e.g., time, effort) before allocation reflects higher motivation among the participants, which is associated with better treatment outcomes [25].

The third indicator (labelled “bond”) captures the potential bond and trust between study participants and the person/institution referring them to the study. We assumed that a stronger bond and a higher level of trust reflect higher motivation among the participants, which is associated with better treatment outcomes [26].

The detailed description of these three indicators and the respective rating guidance are provided in the rating manual (S1 Appendix). The rating of each indicator varies between high motivation, moderate motivation, and low motivation. The first and second indicators have the same weight for the overall rating, and the third indicator has lower weight for the overall rating. The rationale for the lower weight of the third indicator was that the assessment of the bond (indicator 3) might be based on more indirect information, whereas the rating of both other indicators might rely on more direct information. Based on these premises, JB and YY developed a rating decision tree (S1 Fig) that considers all the rating patterns from the three indicators to generate an overall rating of participants’ motivation at the study level. The overall motivation can range from high to moderate to low.
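
To illustrate the logic, the following R sketch shows one hypothetical way a weighted mapping from the three indicator ratings to an overall rating could be encoded, with the bond indicator carrying less weight than the expectation and effort indicators. It is purely illustrative and is not the published decision tree; the actual mapping is given in S1 Fig, and the weights and cut-offs below are invented.

    # Hypothetical illustration only -- NOT the published decision tree (see S1 Fig).
    # Indicators 1 (expectation) and 2 (effort) are weighted more than indicator 3 (bond).
    overall_motivation <- function(expectation, effort, bond) {
      levels3 <- c(low = 1, moderate = 2, high = 3)
      # Invented weights: expectation and effort count double, bond counts once.
      score <- 2 * levels3[[expectation]] + 2 * levels3[[effort]] + levels3[[bond]]
      # Invented cut-offs on the weighted score (possible range: 5 to 15).
      if (score >= 12) "high" else if (score >= 9) "moderate" else "low"
    }

    overall_motivation("high", "moderate", "low")   # returns "moderate" in this sketch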

Study sample for rating

We applied the rating to RCTs that were included in a recently published systematic review and meta-analysis about the effect of mHealth app interventions on quality of life (QoL) and psychological outcomes in cancer patients [27]. The control groups in all studies received usual care, including waitlist control, conventional care, or health education delivered without the use of the mHealth app. We excluded three studies from our study for the following reasons: in one study, the target person for the app was the caregiver and not the patient [28]; one study used a dismantling study design [29] and did not meet our inclusion criteria [30]; and another study used only an educational intervention and did not require an active engagement of the patients [31].

Rating and data extraction

Before applying the rating, JB and YY conducted training sessions with EJ and ES (training record in S2 Appendix) based on a training manual. Afterwards, pilot testing with five RCTs from the study sample was undertaken to ensure consistency between these four raters regarding the judgement of the available study-level information. Once the training was completed and agreement had been achieved, all eligible studies were rated by these four raters independently, and the results were entered into a predesigned rating form. The raters recorded their certainty of the rating for each indicator independently and made notes about the reasoning behind their rating and their certainty. The certainty of the rating has three levels: high certainty, moderate certainty, and low certainty. Disagreements were discussed among the four raters (EJ, ES, JB, YY) and resolved by a final consensus between two raters (JB, YY). The consensus rating was not a standardized overruling procedure, since all concerns of each rater were considered valid. The consensus can be considered a re-rating based on the most comprehensive information, taking the confidence of the rating by each rater into consideration. The final consensus rating was used as the predictor for meta-regression.

One author (YY) extracted information about treatment outcomes from the tables, flow diagrams, and text of the included studies. The following outcomes were extracted: QoL, anxiety, depression, and attrition rate for each study arm. All measurement tools for QoL, anxiety, and depression were allowed. When multiple measurement tools were used for one outcome in the same study, we considered the data that had the lowest attrition. The attrition rate was defined as the proportion of individuals who dropped out during the intervention, which corresponds to those who disengaged from or ceased their involvement in the DHI itself (intervention dropouts). This definition, based on the study flow chart, excludes dropouts due to death and individuals who merely discontinued questionnaire assessments. For all outcomes, we extracted data from the first post-intervention time point. If the intervention period was unclear, we considered the measurement taken at the longest follow-up reported in the included study. A second author (JB) crosschecked the extracted data.

Statistical analysis

In order to get an impression about the performance of the rating, we measured the inter-rater reliability of four raters by calculating the intraclass correlation coefficient (ICC) [32, 33]. Additionally, we calculated the ICC for five raters, treating the consensus rating as the fifth rater. The ICC value ranges between 0 and 1. Values less than 0.5 indicate poor agreement; values between 0.5 and 0.75 indicate moderate agreement; values between 0.75 and 0.9 indicate good agreement; and values equal to or greater than 0.90 indicate excellent agreement [32]. We also explored the correlations between the three indicators and the correlations between each indicator and the overall motivation [34].
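
As a minimal sketch of how such an inter-rater reliability analysis can be run in R: the exact package and ICC form are not reported above, so the irr package, the two-way agreement model, the numeric coding of the three rating levels, and the simulated data below are assumptions made for illustration.

    # Sketch: ICC across four raters, assuming ratings coded 1 = low, 2 = moderate,
    # 3 = high. Uses the 'irr' package; the rating matrix here is simulated.
    library(irr)

    set.seed(1)
    ratings <- matrix(sample(1:3, 27 * 4, replace = TRUE),
                      nrow = 27, ncol = 4,
                      dimnames = list(NULL, paste0("rater", 1:4)))  # 27 studies x 4 raters

    # Two-way model, absolute agreement, single-rater ICC (one possible choice).
    res <- icc(ratings, model = "twoway", type = "agreement", unit = "single")
    res$value   # < 0.5 poor, 0.5-0.75 moderate, 0.75-0.90 good, >= 0.90 excellent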

As the outcome measure in the meta-analysis of health outcomes, we used the standardized mean difference (SMD) [35] with a 95% confidence interval (CI) to estimate intervention effects on QoL, depression, and anxiety. If SDs were not provided, they were calculated from the available data [36]. If there were no available data to obtain SDs, the baseline SDs were used as post-intervention SDs for the meta-analysis. The risk ratio (RR) with a 95% CI was used as the effect estimate for attrition.
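
For illustration, these study-level effect estimates can be computed in R with the metafor package; the package choice, column names, and numbers below are illustrative assumptions rather than our analysis data.

    # Sketch only: study-level effect sizes as described above (hypothetical data).
    library(metafor)

    # Continuous outcome (e.g., QoL): SMD from means, SDs, and group sizes.
    qol <- data.frame(m1i = c(70, 65), sd1i = c(12, 15), n1i = c(50, 60),   # intervention
                      m2i = c(64, 63), sd2i = c(13, 14), n2i = c(48, 58))   # usual care
    qol_es <- escalc(measure = "SMD", m1i = m1i, sd1i = sd1i, n1i = n1i,
                     m2i = m2i, sd2i = sd2i, n2i = n2i, data = qol)

    # Attrition: risk ratio (returned on the log scale in 'yi') from dropout counts.
    att <- data.frame(ai = c(12, 20), n1i = c(50, 60),   # dropouts / randomised, intervention
                      ci = c(6, 14),  n2i = c(48, 58))   # dropouts / randomised, control
    att_es <- escalc(measure = "RR", ai = ai, n1i = n1i, ci = ci, n2i = n2i, data = att)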

For the meta-analytic procedure, we used a random effects model with the Knapp–Hartung adjustment for pooling of studies [37, 38]. We then fitted a mixed effects meta-regression model to the rating variables (i.e., overall motivation and single indicators) to examine the association between different levels of participants’ motivation and the treatment outcomes. A two-sided P < .05 was used to indicate a statistically significant difference in the overall effect. We assessed statistical heterogeneity with Cochran’s Q test and measured its magnitude with Higgins and Thompson’s I2 statistic, where I2 ≥ 50% indicated substantial heterogeneity [39]. All statistical analyses were conducted using R (R Foundation for Statistical Computing, Vienna, Austria, version 4.3.1) [40].
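
As a sketch of this procedure, the following R snippet shows a random-effects pooling with the Knapp–Hartung adjustment and a meta-regression on the motivation rating, under the assumption that the metafor package is used (the specific package is not named above) and with a tiny invented data set.

    # Sketch: random-effects pooling with Knapp-Hartung adjustment and a
    # mixed-effects meta-regression on the motivation rating (hypothetical data).
    library(metafor)

    dat <- data.frame(
      yi = c(0.45, 0.30, 0.10, 0.55, -0.05, 0.25),   # invented SMDs
      vi = c(0.02, 0.03, 0.04, 0.05, 0.03, 0.02),    # invented sampling variances
      motivation = factor(c("high", "high", "low", "high", "low", "moderate"),
                          levels = c("low", "moderate", "high"))
    )

    pooled <- rma(yi, vi, data = dat, method = "REML", test = "knha")
    summary(pooled)     # pooled SMD, 95% CI, Q test, I^2

    metareg <- rma(yi, vi, mods = ~ motivation, data = dat,
                   method = "REML", test = "knha")
    summary(metareg)    # omnibus moderator test at two-sided alpha = .05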

Results

A total of 27 RCTs with 4,986 cancer patients were included in our study (S3 Appendix). The characteristics of these studies can be found in the original systematic review [27]. The ratings of the study participants’ motivation were as follows: for overall motivation, 19 studies (70%) were rated as high motivation, five studies (19%) were rated as moderate motivation, and three studies (11%) were rated as low motivation. For indicator 1 (expectation), 16 studies (59%) were rated as high motivation, 11 studies (41%) were rated as moderate motivation, and no study was rated as low motivation. For indicator 2 (effort), 12 studies (44%) were rated as high motivation, seven studies (26%) were rated as moderate motivation, and eight studies (30%) were rated as low motivation. For indicator 3 (bond), 13 studies (48%) were rated as high motivation, three studies (11%) were rated as moderate motivation, and 11 studies (41%) were rated as low motivation. The certainty levels from the four raters, both on a per-study basis and in summary form, can be found in S4 and S5 Appendices.

Inter-rater reliability

The ICCs based on the ratings of the four raters were 0.49 for the overall rating, 0.55 for indicator 1, 0.42 for indicator 2, and 0.27 for indicator 3. The ICC results based on five ratings, where the consensus rating is considered an independent rater, are provided in S6 Appendix. These results showed higher reliability compared to the initial rating with only four raters.

Meta-regression and subgroup analysis

QoL.

Twenty-two studies (81%) reported about QoL (Table 1 and S2 Fig). The meta-analysis showed that mHealth app interventions improved QoL among cancer patients compared to usual care (SMD = 0.31; 95% CI, 0.17 to 0.46; P < 0.001), with high heterogeneity between the studies (I2 = 64%; chi-square P < 0.001). Meta-regression showed that cancer patients’ overall motivation was associated with the treatment effect of QoL (P < 0.001), and the effect estimate was the largest in the moderate motivation subgroups. Subgroup analysis based on overall motivation suggested that mHealth app interventions improved QoL only in the high- and moderate-motivation subgroups. Subgroup analyses of pooled estimates by single indicators revealed that mHealth app interventions were consistently beneficial in improving QoL in the high-motivation subgroups, while the interventions were consistently inconclusive in the low-motivation subgroups. In addition, subgroup analyses showed low heterogeneity among studies with patients with high and moderate overall motivation (21% and 0%, respectively), high expectation (indicator 1) (34%), high and moderate effort (indicator 2) (18% and 30%, respectively), and moderate bond (indicator 3) (32%).

Anxiety.

Twelve studies (44%) reported about anxiety (Table 2 and S3 Fig). The meta-analysis showed that mHealth app interventions reduced anxiety among cancer patients compared to usual care (SMD = -0.82; 95% CI, -1.55 to -0.10; P = 0.030), with high heterogeneity between the studies (I2 = 95%; chi-square P < 0.001). No association was found between different levels of patients’ motivation and anxiety. Subgroup analyses based on overall motivation and expectation (indicator 1) showed that the mHealth app interventions decreased anxiety only in the high-motivation subgroups. Subgroup analyses based on single indicators revealed that the treatment effect of mHealth app interventions was consistently inconclusive in the low-motivation subgroups. In addition, the heterogeneity remained high in all subgroup analyses.

Table 2. Meta-regression and subgroup analysis on anxiety.

https://doi.org/10.1371/journal.pone.0306772.t002

Depression.

Eleven studies (41%) reported about depression (Table 3 and S4 Fig). For the pooled analysis, no significant difference was found between the mHealth app intervention and usual care (SMD = -0.60; 95% CI, -1.37 to 0.16; P = 0.110). In addition, significant heterogeneity existed between the studies (I2 = 94%; chi-square P < 0.001). No association was found between different levels of patients’ motivation and depression. Subgroup analyses based on overall motivation and expectation (indicator 1) showed that the mHealth app interventions decreased depression only in the high-motivation subgroups. Subgroup analyses based on single indicators revealed that the treatment effect of mHealth app interventions was consistently inconclusive in the low-motivation subgroups. The heterogeneity remained high in all subgroup analyses.

Table 3. Meta-regression and subgroup analysis on depression.

https://doi.org/10.1371/journal.pone.0306772.t003

Attrition.

Twenty-three studies (85%) reported about attrition (Table 4 and S5 Fig). The meta-analysis showed that patients in the mHealth app intervention group had a higher likelihood of dropping out from the study as compared to the control group (RR = 1.66; 95% CI, 1.01 to 2.71; P = 0.045), with significant heterogeneity between the studies (I2 = 73%, chi-square P < 0.001). No association was found between different levels of patient motivation and attrition. Subgroup analyses showed that the risk of attrition was the highest for the mHealth interventions in the low-motivation subgroup when grouped by bond (indicator 3) (RR = 2.87; 95% CI, 1.37 to 6.00). In addition, subgroup analyses showed low heterogeneity for moderate and low overall motivation (35% and 0%, respectively), low effort (indicator 2) (15%), and moderate bond (indicator 3) (0%).

Table 4. Meta-regression and subgroup analysis on attrition.

https://doi.org/10.1371/journal.pone.0306772.t004

Discussion

In this meta-epidemiological study, we successfully developed a rating for participants’ motivation in DHI trials. We applied the rating to a sample of mHealth app intervention studies, rated by four raters, showing moderate to poor reliability. However, when the final consensus rating was considered together with the initial ratings of the four raters, agreement was better. Our findings demonstrated an association between cancer patients’ overall motivation before engaging in the intervention and treatment effects for QoL. We did not find an association between patients’ motivation and psychological outcomes or attrition. However, our subgroup analyses revealed that mHealth app interventions were unlikely to show clinical benefits in the low-motivation subgroups in general. The likelihood of attrition from mHealth app interventions seems to be high in the low-bond (indicator 3) subgroup.

In our study sample, patients’ overall motivation was found to be associated with the treatment effect of QoL. However, it is important to interpret this finding cautiously. The overall motivation was determined based on an algorithm of different rating patterns of the three indicators, as outlined in our prospectively registered protocol [22]. Specifically, indicator 3 has lower weight than indicators 1 and 2, considering that the assessment of indicator 3 might rely more on indirect study-level information. Using this algorithm, we classified 19 studies (70%) as having high motivation, five studies (19%) as having moderate motivation, and three studies (11%) as having low motivation. While we maintain confidence in the rationale behind our initial decision, we also acknowledge its arbitrary nature. As suggested by one reviewer, we re-calculated the overall motivation by assigning equal weight to all three indicators (scale of 1 to 3, with 1 for low motivation and 3 for high motivation). We summed the scores of the three indicators (range 3 to 9) and established meaningful cut-off scores: a score of 3, 4, or 5 indicated low motivation, a score of 6 or 7 indicated moderate motivation, and a score of 8 or 9 indicated high motivation (see the sketch below). This new approach resulted in 11 studies (41%) classified as high motivation, 11 studies (41%) as moderate motivation, and five studies (19%) as low motivation. Subsequently, we conducted a sensitivity analysis for all investigated outcomes. As expected, the results of the meta-regression changed considerably for QoL and attrition, as these two outcomes included the most studies and the distribution of studies across subgroups changed in the sensitivity analysis. The result of the sensitivity analysis is available in S7 Appendix. Nevertheless, both the primary analysis and the sensitivity analysis suggested a similar pattern: patients with low motivation did not benefit from the DHI self-care interventions.
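
The equal-weight re-scoring can be expressed compactly; the following R snippet is a minimal sketch of that classification rule (the function and variable names are illustrative and not taken from the analysis code).

    # Sketch of the equal-weight sensitivity scoring: each indicator scored
    # 1 (low) to 3 (high), summed (range 3-9), and classified with the stated
    # cut-offs (3-5 low, 6-7 moderate, 8-9 high).
    recode <- c(low = 1, moderate = 2, high = 3)

    classify_equal_weight <- function(ind1, ind2, ind3) {
      total <- recode[[ind1]] + recode[[ind2]] + recode[[ind3]]
      cut(total, breaks = c(2, 5, 7, 9), labels = c("low", "moderate", "high"))
    }

    classify_equal_weight("high", "low", "moderate")   # sum = 6 -> "moderate"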

Combining indicators into an overall score has the advantage of allowing a single regression analysis without multiple testing [41] and may avoid contradictory results per indicator. Therefore, we used both the overall score and single indicators in the regression models. Using the overall score might be intuitive since motivational aspects may add up, but, as noted above, the weighting of indicators may be considered arbitrary. Using single indicators might be justified since the indicators had quite low correlations with each other (0.03 < r < 0.28) and may capture different aspects of motivation, but the number of tests considerably increases. From a conceptual point of view, a closer look at the advantages and disadvantages of scoring in meta-analyses must be taken: Jüni et al. [42] showed in a meta-epidemiological study that the association between “study quality” and treatment benefits depends on the scale used to assess study quality. It can be either positive or negative, which makes the use of scores questionable. Therefore, we suggest using single indicators since they reflect specific content (i.e., expectations, effort, bond), and the weighting of indicators can be avoided.

When looking at the single indicators, we found that in studies with patients having a low bond to the treatment referrer, the percentage of dropouts during the study was higher, which is in line with earlier research [43]. This supports the idea that trustworthy physicians or health care professionals can serve as powerful advocates for successful clinical research. According to a survey [44], 84% of patients indicated that they would consider participating in a clinical trial if their physician recommended it.

According to CONSORT-EHEALTH [45], it is highly recommended to specify how participants were briefed during recruitment and in informed consent procedures. This information can influence user self-selection and expectations and may introduce bias into the results (refer to Checklist item 4a: Eligibility criteria for participants). However, our rating refers to available indirect study-level information about participants’ expectations, effort, and bonds to reflect motivational baseline characteristics in DHI trials. Based on our experience in this study, such an indirect approach is necessary since the motivation of participants who are enrolled in DHI trials is rarely reported. Since such aspects could be assessed, DHI trialists should be encouraged to report information about patients’ expectations, beliefs, previous treatment experiences, and preferences. This may also be useful for understanding heterogeneity between trials’ findings, since study authors mention low motivation as a reason for the low effectiveness of some DHIs. The most commonly implemented measures are scales about digital health literacy (DHL) or affinity for technology [46], which reflect competence and knowledge but do not directly relate to motivation. Some populations have low DHL [47], but increasing DHL is possible [48]. Higher DHL may increase the benefits of DHIs, as shown for health literacy [49]. Similar to research on DHL, patients’ motivation should be assessed before the uptake of a DHI in the trial.

Clinical implications

Health care providers have an important role in decision-making to support patients in their choice of the most suitable and promising DHI. If health care providers are aware of low motivation in patients, interventions to increase patients’ motivation and expectations by addressing patients’ concerns would be an option [50]. By engaging patients in such discussions (e.g., support, education, encouragement), clinicians may increase patients’ motivation and treatment benefits.

Research implications

The implementation of measures to assess patients’ motivation in DHI trials would allow the stratification for motivation in RCTs to prevent imbalance between groups. Baseline information about motivation could also be integrated as an interaction factor for effectiveness analyses and, consequently, could also be used as a moderator to explain heterogeneity between trials in systematic reviews.

Furthermore, we would also encourage a better assessment and a more transparent reporting in DHI trials regarding specific indicators (i.e., expectations, effort, bond). Regarding expectations, specific measures are available [51, 52]. Regarding effort, a clearer reporting on participants’ active engagement before the start of the trial is desirable. Concerning bond, information about the recruitment strategy including how and by whom participants are being approached would be of help in order to get information about working alliance [53].

Our study developed and applied a rather complex rating. In order to achieve a common understanding of the concepts, thorough training sessions in advance and extensive consensus meetings are required. Both steps are necessary to foster a common understanding and to mitigate any inherent biases among raters, thereby making the final rating meaningful.

Strengths and limitations

Four raters were involved in the rating of all studies. Despite the fact that some studies did not report much about motivational issues of the study sample, we established procedures to deal with the lack of information, although the resulting ratings showed only moderate to poor reliability. Nevertheless, some limitations should be taken into consideration. First, studies in our sample were not evenly distributed over subgroups; in fact, the majority of studies included patients with overall high motivation (70%). Second, the weighting of indicators in a composite score of overall motivation may be arbitrary. Third, some studies had large effect sizes and could be considered outliers in the meta-regression. Fourth, high heterogeneity remained in some subgroups. Finally, the rating decision tree might not reflect the true hierarchy between the three indicators.

Conclusion

The clinical effects of DHIs were more prevalent in the high-motivation subgroups, whereas the treatment effects were inconclusive in the low-motivation subgroups. The likelihood of dropouts from DHIs seems to be especially high in the low-bond (indicator 3) subgroup. We suggest using single indicators since they reflect specific content. Better reporting about baseline motivation is required to allow for meaningful interpretation not only in primary studies but also in evidence syntheses.

Supporting information

S1 Appendix. Rating manual for patient motivation.

https://doi.org/10.1371/journal.pone.0306772.s001

(DOCX)

S4 Appendix. Certainty levels of 4 raters per studies.

https://doi.org/10.1371/journal.pone.0306772.s004

(DOCX)

S5 Appendix. Summary of certainty levels per raters.

https://doi.org/10.1371/journal.pone.0306772.s005

(DOCX)

S6 Appendix. ICC based on five ratings (four raters and consensus) for the overall rating and each indicator.

https://doi.org/10.1371/journal.pone.0306772.s006

(DOCX)

S7 Appendix. Sensitivity analysis: Equal weighting of indicators for overall motivation.

https://doi.org/10.1371/journal.pone.0306772.s007

(DOCX)

S1 Fig. Rating decision tree.

The rating tree is used to generate an overall rating of patient motivation at the study level based on the rating patterns from the three indicators.

https://doi.org/10.1371/journal.pone.0306772.s008

(PDF)

S2 Fig. Forest plot.

Meta-analysis on quality of life.

https://doi.org/10.1371/journal.pone.0306772.s009

(PNG)

S3 Fig. Forest plot.

Meta-analysis on anxiety.

https://doi.org/10.1371/journal.pone.0306772.s010

(PNG)

S4 Fig. Forest plot.

Meta-analysis on depression.

https://doi.org/10.1371/journal.pone.0306772.s011

(PNG)

S5 Fig. Forest plot.

Meta-analysis on attrition.

https://doi.org/10.1371/journal.pone.0306772.s012

(PNG)

References

  1. Lorig KR, Holman H. Self-management education: history, definition, outcomes, and mechanisms. Ann Behav Med. 2003;26(1):1–7. pmid:12867348
  2. Carl Simonton O, Matthews-Simonton S, Flint Sparks T. Psychological intervention in the treatment of cancer. Psychosomatics. 1980;21(3):226–33. pmid:7367569
  3. Cox LS, Patten CA, Ebbert JO, Drews AA, Croghan GA, Clark MM, et al. Tobacco use outcomes among patients with lung cancer treated for nicotine dependence. Journal of Clinical Oncology. 2002;20(16):3461–9. pmid:12177107
  4. Gönderen Çakmak HS, Kapucu S. The effect of educational follow-up with the motivational interview technique on self-efficacy and drug adherence in cancer patients using oral chemotherapy treatment: A randomized controlled trial. Seminars in Oncology Nursing. 2021;37(2):151140. pmid:33766423
  5. Ryan RM, Deci EL. Self-determination theory: basic psychological needs in motivation, development, and wellness. New York, NY, US: The Guilford Press; 2017. xii, 756 p.
  6. Blok AC, Sadasivam RS, Amante DJ, Kamberi A, Flahive J, Morley J, et al. Gamification to motivate the unmotivated smoker: the “take a break” digital health intervention. Games Health J. 2019;8(4):275–84. pmid:31219347
  7. Liu F, Kong X, Cao J, Chen S, Li C, Huang J, et al. Mobile phone intervention and weight loss among overweight and obese adults: a meta-analysis of randomized controlled trials. Am J Epidemiol. 2015;181(5):337–48. pmid:25673817
  8. Widmer RJ, Collins NM, Collins CS, West CP, Lerman LO, Lerman A. Digital health interventions for the prevention of cardiovascular disease: a systematic review and meta-analysis. Mayo Clinic Proceedings. 2015;90(4):469–80. pmid:25841251
  9. Jones KR, Lekhak N, Kaewluang N. Using mobile phones and short message service to deliver self-management interventions for chronic conditions: a meta-review. Worldviews Evid Based Nurs. 2014;11(2):81–8. pmid:24597522
  10. Eysenbach G. The law of attrition. J Med Internet Res. 2005;7(1):e11. pmid:15829473
  11. Hershman SG, Bot BM, Shcherbina A, Doerr M, Moayedi Y, Pavlovic A, et al. Physical activity, sleep and cardiovascular health data for 50,000 individuals from the MyHeart Counts Study. Scientific Data. 2019;6(1):24. pmid:30975992
  12. Linardon J, Fuller-Tyszkiewicz M. Attrition and adherence in smartphone-delivered interventions for mental health problems: a systematic and meta-analytic review. J Consult Clin Psychol. 2020;88(1):1–13. pmid:31697093
  13. Schroé H, Crombez G, De Bourdeaudhuij I, Van Dyck D. Investigating when, which, and why users stop using a digital health intervention to promote an active lifestyle: secondary analysis with a focus on health action process approach-based psychological determinants. JMIR Mhealth Uhealth. 2022;10(1):e30583. pmid:35099400
  14. Hosseini F, Alavi NM, Mohammadi E, Sadat Z. Scoping review on the concept of patient motivation and practical tools to assess it. Iran J Nurs Midwifery Res. 2021;26(1):1–10. pmid:33954092
  15. Roberts JR, Maxfield M. Examining the relationship between religious and spiritual motivation and worry about Alzheimer’s disease in later life. J Relig Health. 2018;57(6):2500–14. pmid:29730806
  16. Zhang R, Eschler J, Reddy M. Online support groups for depression in China: culturally shaped interactions and motivations. Computer Supported Cooperative Work (CSCW). 2018;27(3):327–54.
  17. Pargament KI. Spirituality as an irreducible human motivation and process. International Journal for the Psychology of Religion. 2013;23(4):271–81.
  18. Deci EL, Ryan RM. Intrinsic motivation and self-determination in human behavior. New York: Plenum Publishing Co; 1985.
  19. Deci EL, Ryan RM. The general causality orientations scale: self-determination in personality. Journal of Research in Personality. 1985;19(2):109–34.
  20. Flannery M. Self-determination theory: intrinsic motivation and behavioral change. Oncol Nurs Forum. 2017;44(2):155–6. pmid:28222078
  21. Ackerman C. Self-determination theory and how it explains motivation (updated February 2024). Available from: https://positivepsychology.com/self-determination-theory/.
  22. Yan Y, Lopez-Alcalde J, Witt C, Barth J. Participants’ motivation as predictor of treatment outcomes in mHealth studies: a meta-epidemiological study protocol.
  23. Barth J, Wang J, Lopez-Alcalde J, Kramm C, Pach D, Álvarez-Díaz N, et al. Smartphone-RCCT: an online repository of randomized controlled clinical trials of smartphone applications for chronic conditions. Trials. 2022;23(1):909. pmid:36303168
  24. Laferton JAC, Oeltjen L, Neubauer K, Ebert DD, Munder T. The effects of patients’ expectations on surgery outcome in total hip and knee arthroplasty: a prognostic factor meta-analysis. Health Psychology Review. 2022;16(1):50–66. pmid:33228474
  25. Salamone JD, Yohn SE, López-Cruz L, San Miguel N, Correa M. Activational and effort-related aspects of motivation: neural mechanisms and implications for psychopathology. Brain. 2016;139(Pt 5):1325–47. pmid:27189581
  26. Shaw BR, Han JY, Hawkins RP, Stewart J, McTavish F, Gustafson DH. Doctor–patient relationship as motivation and outcome: examining uses of an interactive cancer communication system. International Journal of Medical Informatics. 2007;76(4):274–82. pmid:16460995
  27. Qin M, Chen B, Sun S, Liu X. Effect of mobile phone app-based interventions on quality of life and psychological symptoms among adult cancer survivors: systematic review and meta-analysis of randomized controlled trials. J Med Internet Res. 2022;24(12):e39799. pmid:36534460
  28. Peng Z, Li L, Chen Y, Feng Z, Fang X. WeChat app-based reinforced education improves the quality of opioid titration treatment of cancer-related pain in outpatients: a randomized control study. BMC Cancer. 2020;20(1):852. pmid:32887560
  29. Papa A, Follette WC. Dismantling studies of psychotherapy. In: The Encyclopedia of Clinical Psychology. p. 1–6.
  30. Lei L. The application of APP in the transitional care among the post-operative patients with laryngeal cancer. Dissertation, School of Nursing, Zhengzhou University; 2016.
  31. Foley NM, O’Connell EP, Lehane EA, Livingstone V, Maher B, Kaimkhani S, et al. PATI: patient accessed tailored information: a pilot study to evaluate the effect on preoperative breast cancer patients of information delivered via a mobile application. Breast. 2016;30:54–8. pmid:27611236
  32. Koo TK, Li MY. A guideline of selecting and reporting intraclass correlation coefficients for reliability research. J Chiropr Med. 2016;15(2):155–63. pmid:27330520
  33. Shrout PE, Fleiss JL. Intraclass correlations: uses in assessing rater reliability. Psychol Bull. 1979;86(2):420–8. pmid:18839484
  34. Turney S. Pearson correlation coefficient (r) | guide & examples. Scribbr. Retrieved August 30, 2023, from https://www.scribbr.com/statistics/pearson-correlation-coefficient/.
  35. Hedges LV. Distribution theory for Glass’s estimator of effect size and related estimators. Journal of Educational and Behavioral Statistics. 1981;6(2):107–28.
  36. Higgins JPT, Li T, Deeks JJ. Chapter 6: Choosing effect measures and computing estimates of effect. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA (editors). Cochrane Handbook for Systematic Reviews of Interventions version 6.3 (updated February 2022); 2022. Available from www.training.cochrane.org/handbook.
  37. Langan D, Higgins JPT, Jackson D, Bowden J, Veroniki AA, Kontopantelis E, et al. A comparison of heterogeneity variance estimators in simulated random-effects meta-analyses. Res Synth Methods. 2019;10(1):83–98. pmid:30067315
  38. Knapp G, Hartung J. Improved tests for a random effects meta-regression with a single covariate. Stat Med. 2003;22(17):2693–710. pmid:12939780
  39. Deeks JJ, Higgins JPT, Altman DG. Chapter 10: Analysing data and undertaking meta-analyses. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA (editors). Cochrane Handbook for Systematic Reviews of Interventions version 6.3 (updated February 2022); 2022. Available from www.training.cochrane.org/handbook.
  40. R Core Team. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria; 2021. Available from https://www.R-project.org/.
  41. Higgins JP, Thompson SG. Controlling the risk of spurious findings from meta-regression. Stat Med. 2004;23(11):1663–82. pmid:15160401
  42. Jüni P, Witschi A, Bloch R, Egger M. The hazards of scoring the quality of clinical trials for meta-analysis. JAMA. 1999;282(11):1054–60. pmid:10493204
  43. Henkemans OAB, Rogers WA, Dumay AMC. Personal characteristics and the law of attrition in randomized controlled trials of eHealth services for self-care. Gerontechnology. 2011;10(3):157–68.
  44. Getz K. Enabling healthcare providers as facilitators of patient engagement. Applied Clinical Trials. 2017;26(10).
  45. Eysenbach G. CONSORT-EHEALTH: improving and standardizing evaluation reports of Web-based and mobile health interventions. J Med Internet Res. 2011;13(4):e126. pmid:22209829
  46. Franke T, Attig C, Wessel D. A personal resource for technology interaction: development and validation of the affinity for technology interaction (ATI) scale. International Journal of Human–Computer Interaction. 2019;35(6):456–67.
  47. Estrela M, Semedo G, Roque F, Ferreira PL, Herdeiro MT. Sociodemographic determinants of digital health literacy: a systematic review and meta-analysis. Int J Med Inform. 2023;177:105124. pmid:37329766
  48. Dong Q, Liu T, Liu R, Yang H, Liu C. Effectiveness of digital health literacy interventions in older adults: single-arm meta-analysis. J Med Internet Res. 2023;25:e48166. pmid:37379077
  49. Berkman ND, Sheridan SL, Donahue KE, Halpern DJ, Crotty K. Low health literacy and health outcomes: an updated systematic review. Ann Intern Med. 2011;155(2):97–107. pmid:21768583
  50. Miller WR, Rollnick S. Motivational interviewing: preparing people to change addictive behavior. New York: Guilford Press; 1991.
  51. Laferton JA, Kube T, Salzmann S, Auer CJ, Shedden-Mora MC. Patients’ expectations regarding medical treatment: a critical review of concepts and their assessment. Front Psychol. 2017;8:233. pmid:28270786
  52. Barth J, Kern A, Lüthi S, Witt CM. Assessment of patients’ expectations: development and validation of the expectation for treatment scale (ETS). BMJ Open. 2019;9(6):e026712. pmid:31213446
  53. Farin E, Gramm L, Schmidt E. The patient-physician relationship in patients with chronic low back pain as a predictor of outcomes after rehabilitation. J Behav Med. 2013;36(3):246–58. pmid:22476813