
A review of methods for addressing components of interventions in meta-analysis

  • Maria Petropoulou ,

    Roles Conceptualization, Formal analysis, Project administration, Software, Validation, Visualization, Writing – original draft, Writing – review & editing

    petropoulou@imbi.uni-freiburg.de

    Affiliations Faculty of Medicine and Medical Center, Institute of Medical Biometry and Statistics, University of Freiburg, Freiburg, Germany, Department of Primary Education, Evidence Synthesis Methods Team, University of Ioannina, Ioannina, Greece

  • Orestis Efthimiou,

    Roles Conceptualization, Validation, Visualization, Writing – review & editing

    Affiliations Institute of Social and Preventive Medicine (ISPM), University of Bern, Bern, Switzerland, Department of Psychiatry, University of Oxford, Oxford, United Kingdom

  • Gerta Rücker,

    Roles Conceptualization, Validation, Visualization, Writing – review & editing

    Affiliation Faculty of Medicine and Medical Center, Institute of Medical Biometry and Statistics, University of Freiburg, Freiburg, Germany

  • Guido Schwarzer,

    Roles Conceptualization, Software, Validation, Visualization, Writing – review & editing

    Affiliation Faculty of Medicine and Medical Center, Institute of Medical Biometry and Statistics, University of Freiburg, Freiburg, Germany

  • Toshi A. Furukawa,

    Roles Writing – review & editing

    Affiliation Departments of Health Promotion and Human Behavior and Clinical Epidemiology, Kyoto University Graduate School of Medicine/School of Public Health, Kyoto, Japan

  • Alessandro Pompoli,

    Roles Data curation, Writing – review & editing

    Affiliation Psychiatric Rehabilitation Clinic Villa San Pietro, Trento, Italy

  • Huiberdina L. Koek,

    Roles Writing – review & editing

    Affiliation Department of Geriatric Medicine, University Medical Centre Utrecht, Utrecht University, Utrecht, The Netherlands

  • Cinzia Del Giovane,

    Roles Writing – review & editing

    Affiliations Institute of Primary Health Care (BIHAM), University of Bern, Bern, Switzerland, Population Health Laboratory (#PopHealthLab), University of Fribourg, Fribourg, Switzerland

  • Nicolas Rodondi,

    Roles Writing – review & editing

    Affiliations Institute of Primary Health Care (BIHAM), University of Bern, Bern, Switzerland, Department of General Internal Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland

  • Dimitris Mavridis

    Roles Conceptualization, Supervision, Validation, Visualization, Writing – review & editing

    Affiliations Department of Primary Education, Evidence Synthesis Methods Team, University of Ioannina, Ioannina, Greece, Faculté de Médecine, Paris Descartes University, Sorbonne Paris Cité, Paris, France

Abstract

Many healthcare interventions are complex, consisting of multiple, possibly interacting, components. Several methodological articles addressing complex interventions in the meta-analytical context have been published. We provide an overview of methods used to evaluate the effects of complex interventions with meta-analytical models. We summarized the methodology, highlighted new developments, and described the benefits, drawbacks, and potential challenges of each identified method. We expect meta-analytical methods focusing on the components of multicomponent interventions to become increasingly popular, owing to recently developed, easy-to-use software tools for conducting the relevant analyses. The different meta-analytical methods are illustrated through two examples comparing psychotherapies for panic disorder.

Introduction

Complex interventions are increasingly employed in public health. Several definitions of complex interventions are provided in the literature [1–4]. Such interventions are usually multifaceted, i.e. they comprise several potentially active and possibly interacting components (multicomponent interventions).

Several articles discussing methodological challenges at each stage of a systematic review of complex interventions have been published in the Journal of Clinical Epidemiology in 2013 and in the Agency for Healthcare Research and Quality (AHRQ) Series on Complex Intervention Systematic Reviews [1, 5–18]. Furthermore, a special series of seven articles addressing challenges in evidence synthesis of complex interventions in the context of World Health Organization (WHO) guideline development was recently published in BMJ Global Health [19–25].

Evaluating the effects of multicomponent interventions requires tailored statistical synthesis methods. For example, consider a randomized controlled trial (RCT) on interventions for weight loss, in which one group of people is randomized to follow a certain diet combined with physical exercise while another group is randomized to a placebo diet. The intervention group receives two components, diet and physical exercise, and the intervention can be regarded as complex. We could easily estimate the relative effect of this complex intervention versus placebo. However, it might be of interest to disentangle the individual effects of the intervention components (diet and physical exercise). This question cannot be answered by this particular RCT design, but it could be addressed with a factorial RCT in which participants are allocated to receive neither intervention, one or the other, or both [26]. Alternatively, we can estimate the component effects using appropriate meta-analytical methods if other studies exist that compare diet versus placebo and physical exercise versus placebo.

More generally, components may act independently, synergistically (i.e. the effect of their combination is larger than the sum of their individual effects), or even antagonistically. In this article, we provide a review of the methodology of meta-analytical approaches for evaluating the effects of complex interventions. We focus in particular on component network meta-analysis (CNMA), which allows estimation of the component effects of multicomponent interventions [27]. We exemplify the identified meta-analytical methodologies through their implementation in systematic reviews of psychological interventions for panic disorder and discuss their advantages and disadvantages.

Materials and methods

We searched the literature for methodological articles that address the effects of complex interventions with meta-analytical models. We also searched a database of published network meta-analyses for papers involving multicomponent interventions [28]. By inspecting the identified papers that involve complex interventions and the references therein, and drawing on our expertise on the subject, we provide an overview of meta-analytical methods that address the effects of complex interventions.

Meta-analytical models evaluating the effects of complex interventions

We identified twelve articles that present meta-analysis models for evaluating the effects of multicomponent interventions or that provide methodological guidance for their implementation (Table 1). These twelve articles can be assigned to one or more of the following categories according to their methodological content: (1) discussing methodological challenges of an existing meta-analytical method for dealing with complex interventions (1 article); (2) presenting or extending a novel meta-analytical model or framework for complex interventions (5 articles); (3) providing methods, simulations, or prerequisites for assessing model assumptions (3 articles); (4) methodological reviews of meta-analytical models addressing complexity (3 articles); and (5) model selection for component network meta-analysis (1 article) (Table 1). Table 2 presents an overview of the methods, outlining the possible benefits and limitations of each identified meta-analytical model.

Table 1. Categorization of the twelve methodological articles with meta-analysis models that evaluate the effects of complex interventions.

https://doi.org/10.1371/journal.pone.0246631.t001

Table 2. Description, benefits, and limitations of the meta-analytical approaches that evaluate the effects of complex interventions.

https://doi.org/10.1371/journal.pone.0246631.t002

Standard meta-analytical approaches comparing two interventions (meta-analysis, subgroup analysis, meta-regression)

Caldwell and Welton described standard meta-analysis in the context of complex interventions as the ‘lumping approach’ [36]. In a standard pairwise meta-analysis, all active interventions are grouped as a single ‘intervention’ and compared against the different kinds of ‘no intervention’ (e.g. placebo, usual intervention, waiting list, no intervention), which are grouped as a control (reference intervention) (Table 2). This is the simplest model and answers the clinical question ‘Are active interventions (on average) effective?’. It is useful when an overall estimate of the effectiveness of the active interventions is of interest.

This can be illustrated by the study of Furukawa et al., who conducted a synthesis of studies that compared combined psychotherapy plus antidepressants (PT+AD) versus antidepressants (AD) for the rate of response (i.e. substantial improvement) at 2–4 months in patients with panic disorder with or without agoraphobia [39]. They considered the following three clinically meaningful groups: (1) Behaviour therapy (BT), (2) Cognitive-behaviour therapy (CBT), and (3) Psychodynamic psychotherapy and others (PP). Subgroup analysis was employed to explore the component effects (BT, CBT, PP). Fig 1 provides the forest plot for the subgroup analysis of studies that compared PT+AD (experimental group) versus AD (control group) for panic disorder. The combination of PT+AD versus AD in the Cognitive-behaviour therapy group had the largest effect size (risk ratio = 1.45, 95% confidence interval (CI) [1.05, 2.01]), but the pooled effects across subgroups overlapped and there was no strong evidence of a difference (test for subgroup differences: p = 0.32). A limitation of subgroup analysis is that it typically has low power and carries a risk of an increased type I error rate through multiple testing [6, 40] (Table 2).

Fig 1. Forest plot for the subgroup analysis of studies that compared combined PT+AD versus AD for the rate of response (i.e. substantial improvement) at 2–4 months in patients with panic disorder with or without agoraphobia.

Subgroup analysis is conducted for the three components of the combined psychotherapy: (1) Behaviour therapy, (2) Cognitive-behaviour therapy, and (3) Psychodynamic psychotherapy and others. The analysis was conducted with the inverse-variance random-effects model in the R package meta [41]. The risk ratio was used as the effect measure. Heterogeneity was estimated with the DerSimonian-Laird method.

https://doi.org/10.1371/journal.pone.0246631.g001
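To illustrate how such a subgroup ('lumping') analysis can be carried out, the following minimal sketch uses the R package meta [41]. The data frame panic and all column names (response counts and sample sizes per arm, and a psychotherapy label indicating BT, CBT, or PP) are hypothetical placeholders for the Furukawa et al. data [39], and the argument names follow recent versions of meta.

library(meta)

# Minimal sketch of the subgroup analysis shown in Fig 1 ('panic' and its
# column names are hypothetical placeholders)
m <- metabin(event.e = resp.ptad, n.e = n.ptad,   # responders / sample size, PT+AD arm
             event.c = resp.ad,   n.c = n.ad,     # responders / sample size, AD arm
             studlab = study, data = panic,
             sm = "RR",                           # risk ratio as effect measure
             method = "Inverse",                  # inverse-variance pooling
             method.tau = "DL",                   # DerSimonian-Laird heterogeneity estimator
             subgroup = psychotherapy,            # BT / CBT / PP
             random = TRUE)

forest(m)   # forest plot with subgroup estimates and the test for subgroup differences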

Meta-regression addresses the research question ‘Which components are effective?’ and requires a reasonably large number of studies to yield powerful results (Table 2). One application of meta-regression focusing on components of complex interventions was provided by Bower et al., who aimed to identify ‘active ingredients’ of collaborative care interventions for depression in primary care [42]. Table 3 shows the meta-regression results for the synthesis of studies for panic disorder, comparing the aforementioned three component effects (BT, CBT, PP), with only one considered at a time. Here, CBT again had the largest effect size; however, there was once more no strong evidence for a difference (p = 0.49).

Table 3. Meta-regression results for the synthesis of studies that compared combined psychotherapy plus antidepressants (PT+AD) versus antidepressants (AD) for the rate of response (i.e. substantial improvement) at 2–4 months in patients with panic disorder with or without agoraphobia [39].

https://doi.org/10.1371/journal.pone.0246631.t003
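A meta-regression along the lines of Table 3 could be obtained from the same pairwise meta-analysis object; in the sketch below, m is the metabin object from the subgroup example above and psychotherapy is again an assumed covariate name.

library(meta)

# Hedged sketch of the meta-regression in Table 3: the PT+AD versus AD effect
# is regressed on the type of psychotherapy (an assumed study-level covariate)
mr <- metareg(m, ~ psychotherapy)   # mixed-effects meta-regression (fitted via 'metafor')
summary(mr)                         # coefficients and test of the moderator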

Additionally, Bangdiwala et al. proposed a new statistical framework for meta-regression to evaluate the effectiveness of non-randomized, dynamic complex interventions from community-based studies by modeling the observed outcome rather than the observed intervention effect [31].

Component individual participant data meta-analysis

Individual participant data (IPD) are considered the gold standard in evidence synthesis [43–45]. A detailed methodology for the IPD meta-analysis model can be found in Debray et al. [46]. IPD offer several advantages over aggregate data, such as the ability to explain potential sources of heterogeneity, the use of updated data sets, the investigation of interactions between interventions and participant-level characteristics, and a better understanding of the data. On the other hand, IPD are rarely available and can be difficult and time-consuming to obtain [47].

The general benefits and limitations of IPD meta-analysis also apply when evaluating complex interventions (Table 2). Jonkman et al. (Table 1) discussed methodological challenges that arise when evaluating complex interventions with an IPD meta-analysis model [29]. One of these challenges is that complex interventions are typically heterogeneous; a meta-regression model using IPD can therefore be helpful to investigate how the component effects differ according to study-level characteristics (component effect heterogeneity) [29]. Jonkman et al. suggested the use of causal mechanisms in component IPD meta-analysis, as they can help to further determine which characteristics of complex interventions work best in which patient subgroups [29].

The methodological challenges encountered in two applications of component IPD meta-analysis provided by Jonkman et al. [48–51] may help researchers to carefully prepare such resource-intensive IPD meta-analyses.

Standard Network Meta-Analysis (NMA)

Standard NMA is a weighted regression that synthesizes direct evidence (from head-to-head studies) and indirect evidence (obtained via a common comparator) to allow multiple intervention comparisons. Interventions are rarely identical across studies, but using too narrow criteria for defining interventions may result in each study comparing a different set of interventions. In such a case, for example when the data form two or more disconnected networks, the standard NMA model may become unidentifiable.

In the case of interventions consisting of multiple components, standard NMA is also referred to as the full interaction model and considers each combination of components observed in the data as a separate intervention. Standard NMA answers the question “Which combination of components (observed in the data) is most effective?”. Tricco et al. [52] provided an application of the standard NMA model, evaluating and comparing the effectiveness of combinations of intervention components for the prevention of falls.
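As an illustration, a standard (full interaction) NMA of this kind could be fitted in the frequentist framework with the R package netmeta [60]; the contrast-level data frame pw.data and its column names are assumptions, with each observed combination of components entered as its own treatment label.

library(netmeta)

# Illustrative standard NMA: every observed combination of components is
# treated as a separate node ('pw.data' and its columns are assumptions)
nma <- netmeta(TE = TE, seTE = seTE,              # contrast-level log odds ratios and SEs
               treat1 = treat1, treat2 = treat2,
               studlab = studlab, data = pw.data,
               sm = "OR", reference.group = "WL",
               random = TRUE)

netgraph(nma)    # network plot at the intervention level, as in Fig 2(A)
summary(nma)     # relative effect of each combination versus the reference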

The number of published network meta-analyses evaluating multicomponent interventions is increasing [28, 53]. According to a database of 456 published networks of interventions provided by Petropoulou et al. [28], 59 (13%) networks considered multicomponent interventions; while only 5 out of these 59 networks (8%) were published in 2011, the number of networks increased to 26 (44%) in 2014.

A case study provided by Pompoli et al. [54] explored 12 components of 11 psychological interventions for panic disorder; the data are available in S1 Table. Fig 2(A) (left-hand side) provides the graphical representation of the network structure at the intervention level, with each node denoting one of the 11 psychological interventions. At the component level, the available studies compared a total of 51 interventions: 49 combinations of components and 2 single components. If we treat each combination as a distinct intervention, we have 50 different parameters (relative intervention effects versus a reference) to estimate, resulting in low power. Fig 2(B) shows the network plot at the component level, with each node denoting one of the combinations of components that appear in the network. Comparisons between the 11 psychological interventions give a connected NMA structure (Fig 2(A)), but splitting interventions into their components leads to disconnected component comparisons, such as pl+pe+ps+br+mr+ive+ine+cr versus wl+pe+ps (Fig 2(B), red edge).

Fig 2. Network plot for psychological interventions at intervention and component level.

On the left, we show the network plot at the intervention level (Fig 2(A)). Each circle (node) represents an intervention. Solid lines indicate comparisons for which direct information was available. Abbreviations for the 11 interventions: Waiting List (WL); Supportive Psychotherapy (SP); Self Help Physiological Therapy (SH-PT); Self Help Cognitive Behavioral Therapy (SH-CBT); Self Help Behavioral Therapy (SH-BT); Physiological Therapy (PT); No Intervention (NT); Cognitive Therapy (CT); Behavioral Therapy (BT); Cognitive Behavioral Therapy (CBT); Third Wave CBT (3W). On the right, we show the network plot at the component level (Fig 2(B)). Each node corresponds to a particular combination of components. Abbreviations for the 12 components: waiting component (wl); placebo effect (pl); psychological support (ps); psychoeducation (pe); breathing retraining (br); progressive/applied muscle relaxation (mr); cognitive restructuring (cr); interoceptive exposure (ine); in vivo exposure (ive); virtual reality exposure (vre); third-wave components (3w); face-to-face setting (ftf).

https://doi.org/10.1371/journal.pone.0246631.g002

Even if we were able to fit a standard NMA model (i.e. if the network were connected), combinations of component effects may be difficult to interpret in practice. For example, we may find that the relative effect of the third-wave component (3w), when used as a standalone intervention, versus waiting list is estimated with precision; but we may also find that when the 3w component is used in combination with other components, the estimated relative effect versus waiting list is very imprecise. This could happen when few studies combine the 3w component with other components, so that we do not have enough evidence to detect an effect.

NMA models considering components of interventions / CNMA

When all studies compare an intervention to a common control, we may consider a meta-regression or subgroup analysis to explore whether the effect is moderated by the type of intervention. By considering the components of the interventions, CNMA allows the estimation of individual component effects and of the effects of all possible combinations of components [27]. Splitting interventions into different components may lead to a sparse or disconnected network. However, disconnected networks may share common components that can be used to reconnect the network at the component level.

In a seminal article, Welton et al. [27] suggested CNMA models to evaluate the effects of complex interventions. The models they proposed are the additive effects CNMA model and the CNMA model with interactions. The description, benefits, and limitations of the suggested models used to handle complex interventions are shown in Table 2. A recently published article by Freeman et al. [32] reviewed all models presented in Welton et al. [27] and extended the CNMA models to account for covariates. Another recently published article, by Rücker et al. [33], described the additive and interaction CNMA models in a frequentist framework [55]. Madan et al. [30] employed additive and interaction CNMA models for time-to-event data, categorizing electronic and non-electronic smoking cessation interventions into five electronic and five non-electronic components.

Additive effects CNMA model

The additive effects CNMA model assumes that each intervention effect equals the sum of the effects of the components it comprises [23, 27, 32, 33, 36]. This is known as the additivity assumption, and it implies that the relative effect of an intervention A+B, comprising two components A and B, versus an intervention C is dA+B vs C = dA + dB − dC, where dt1 vs t2 is the relative effect of intervention t1 versus intervention t2, and dt1 and dt2 are the effects of interventions t1 and t2, respectively. This model answers not only the question ‘Which of all possible combinations of components is the most effective?’ but also ‘Which components are the most effective?’ [27]. Additive CNMA models allow estimating the relative effects between components and combinations of components and can provide a hierarchy of components.

Under the additivity assumption, the effect of adding a component c to an intervention is independent of that intervention. In other words, under additivity, the relative effect of intervention (c+X) versus intervention X is the same for all X (i.e. for X being any combination of components other than c). In essence, this model assumes that components do not interact with each other. Consider the active drugs A, B, C, D, and E. Under the additivity assumption, a study that compares (A+B+C) versus (A+B+D) estimates C versus D (the components A and B cancel out), as does a study that compares (E+C) versus (E+D) [54]. In the panic disorder example [54], the relative effect in the additive CNMA model of the intervention pl+ftf+pe+ps+ive+ine versus the intervention wl is dpl+ftf+pe+ps+ive+ine vs wl = dpl + dftf + dpe + dps + dive + dine − dwl.
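As a toy numerical illustration of this calculation (the component effects below are hypothetical values on the log odds ratio scale, not estimates from the panic disorder data):

# Hypothetical component effects on the log odds ratio scale (illustration only)
d <- c(pl = 0.2, ftf = 0.1, pe = 0.3, ps = 0.5, ive = 0.4, ine = 0.6, wl = 0)

# Additive CNMA: the relative effect of pl+ftf+pe+ps+ive+ine versus wl is the
# sum of the component effects of the first intervention minus that of the second
d.add <- sum(d[c("pl", "ftf", "pe", "ps", "ive", "ine")]) - d["wl"]
d.add        # 2.1 on the log-OR scale
exp(d.add)   # about 8.2 on the OR scale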

Rücker et al. proposed a likelihood ratio test of the additivity assumption that compares the standard NMA and the additive CNMA model [33]. Mills et al. [35] provided methods and prerequisites for assessing the additivity assumption. Thorlund and Mills conducted a simulation study and found that, if additivity holds, the additive CNMA estimates are more precise than conventional NMA estimates, and that the additive CNMA model remains comparable to standard NMA in terms of bias when additivity is mildly violated [34]. This suggests that if additivity holds at least approximately, using the additive CNMA model can be beneficial [34].

Interaction CNMA model

The interaction CNMA model is an extension of the additive CNMA model with extra interaction terms between components, allowing their combination to lead to larger or smaller effects than the sum of their individual effects [27, 34, 35]. By allowing clinically meaningful interactions between two (or more, e.g. three-way) components, the interaction model allows components to act synergistically or antagonistically. The relative effect of an intervention comprising components A and B versus an intervention C is dA+B vs C = dA + dB + dA*B − dC, where dt1*t2 is the interaction effect between components t1 and t2. The interaction assumption can likewise be assessed using likelihood ratio tests [33]. Interaction terms should be defined a priori [35, 56]. A recently developed method for model selection among CNMA models can be used, on top of this prespecification, to decide which interactions to include in the CNMA model and therefore which CNMA model fits best [38].

In the panic disorder example, several clinically relevant interaction terms can be examined. For presentation purposes, we only show the case of including the interaction between the psychoeducation and interoceptive exposure components in the model. Allowing this interaction, we can estimate the interaction effect psychoeducation (pe) * interoceptive exposure (ine). The relative intervention effect in the interaction CNMA model for the intervention pl+ftf+pe+ps+ive+ine versus wl is then dpl+ftf+pe+ps+ive+ine vs wl = dpl + dftf + dpe + dps + dive + dine + dpe*ine − dwl.
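Continuing the toy illustration given for the additive model, and assuming a hypothetical interaction effect of −0.2 on the log odds ratio scale between pe and ine:

# Hypothetical values on the log odds ratio scale (illustration only)
d <- c(pl = 0.2, ftf = 0.1, pe = 0.3, ps = 0.5, ive = 0.4, ine = 0.6, wl = 0)
d.pe.ine <- -0.2   # assumed pe*ine interaction effect

# Interaction CNMA: the additive sum plus the interaction term
d.int <- sum(d[c("pl", "ftf", "pe", "ps", "ive", "ine")]) + d.pe.ine - d["wl"]
d.int        # 1.9 on the log-OR scale
exp(d.int)   # about 6.7 on the OR scale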

Additive and interaction CNMA model in practice

Splitting interventions into different components poses new methodological challenges, and there are still outstanding issues with implementing additive and interaction CNMA models. Methods for testing consistency in standard NMA need to be extended to CNMA models. Additionally, methods for assessing the plausibility of the assumptions behind CNMA models in disconnected networks are required, and the additive and interaction CNMA models need to be extended to disconnected networks at the component level that do not share common components. Ranking measures and other methods from standard NMA also need to be tailored to CNMA models.

The additive and interaction CNMA models have been applied in practice in a Bayesian setting (Welton et al. [27], Caldwell and Welton [36], Mills et al. [35, 56], Freeman et al. [32], Pompoli et al. [54], Madan et al. [30]). Pompoli et al. [54] also provided an assessment of the additivity assumption in the Bayesian framework. Bayesian CNMA models can be implemented in any Bayesian software (e.g. WinBUGS [57], OpenBUGS [58], rjags [59], etc.). Rücker et al. have recently provided additive and interaction CNMA models in a frequentist framework [33]. These models can now be implemented with the R package netmeta [60, 61] using the commands netcomb() and discomb() for connected and disconnected networks, respectively [33, 60]. Under the additivity assumption, it is possible (but not necessary) to assume that one of the interventions is inactive across the network, i.e. has no intervention effect. The CNMA commands (for example, netcomb()) allow a distinction between this inactive intervention and the reference intervention, which is formally used for presenting the results, for example as the comparator in a forest plot. Readers should note, however, that there is no need to specify a reference and/or an inactive intervention in order to fit the model.

We present the results of the CNMA models for the network of interventions in panic disorder [54]. As shown in Fig 2(B), the network structure at the combination level is disconnected, consisting of three subnetworks [54]. We implemented the analysis in the frequentist framework with the discomb() command from the netmeta package [60, 61]. The waiting component (wl) was used as the reference intervention for presenting the results. Fig 3 presents the results, on the odds ratio scale, of the additive and the interaction CNMA models (the latter allowing an interaction between psychoeducation (pe) and interoceptive exposure (ine)) for the network of interventions for panic disorder [54].

Fig 3. Results from fitting the additive and interaction CNMA model in the panic disorder dataset.

The interaction model assumes only one interaction term, between the components psychoeducation (pe) and interoceptive exposure (ine). Both analyses were conducted in the frequentist framework with the discomb() command in the netmeta package [33, 60]. Estimates for the combinations of components versus the reference waiting list component (wl) are given on the OR scale with their 95% confidence intervals (CI). Red corresponds to the additive CNMA model, blue to the interaction CNMA model.

https://doi.org/10.1371/journal.pone.0246631.g003
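The analyses behind Fig 3 could be reproduced along the following lines. This is a sketch under assumptions: the contrast-level data frame pd, derived from S1 Table, and its column names are placeholders; component labels within a treatment are assumed to be separated by '+'; and the pe*ine interaction is introduced by adding a column to the component (C) matrix, in the spirit of Rücker et al. [33]. Argument names follow recent versions of netmeta.

library(netmeta)

# Additive CNMA for the disconnected component-level network ('pd' and its
# column names are placeholders for the data in S1 Table)
add.cnma <- discomb(TE = TE, seTE = seTE,
                    treat1 = treat1, treat2 = treat2,
                    studlab = studlab, data = pd,
                    sm = "OR", sep.comps = "+",
                    reference.group = "wl")

# Interaction CNMA: extend the component matrix with a 'pe*ine' column that
# equals 1 whenever a treatment contains both pe and ine
C <- add.cnma$C.matrix
C <- cbind(C, "pe*ine" = C[, "pe"] * C[, "ine"])

int.cnma <- discomb(TE = TE, seTE = seTE,
                    treat1 = treat1, treat2 = treat2,
                    studlab = studlab, data = pd,
                    sm = "OR", C.matrix = C,
                    reference.group = "wl")

forest(add.cnma)   # component-combination effects versus wl, as in Fig 3
forest(int.cnma)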

In general, adding the interaction between psychoeducation (pe) and interoceptive exposure (ine) did not alter the results (Fig 3). Pompoli et al. [54] examined several suspected interactions but found no strong evidence for any of them; however, this might be because the panic disorder data are sparse, so that there is a lack of power to detect interactions. The inclusion of different combinations of components can influence the effectiveness of multicomponent therapies and explain some of the statistical heterogeneity estimated in the case of lumping, since the inclusion or exclusion of each component can increase or decrease the estimated effectiveness. For instance, in the additive CNMA model, the inclusion of the ps component in an intervention increases the overall efficacy for remission in panic disorder by an OR of 6.99 [1.96; 24.91], although there is large uncertainty around this estimate. The addition of the pl component combined with the ftf, ps, ine, and cr components (pl+ftf+ps+ine+cr) leads to an OR of 12.07 [5.39; 27.01] and therefore to the most effective combination of components for remission in panic disorder (Fig 3).

Conclusion

We provided an overview of meta-analytical methods used for evaluating the effects of complex interventions. Systematic reviewers should recognize the advantages and limitations of each method and define the method of analysis a priori.

A decisive aspect of the analysis is the node-making process. There is currently a lack of guidance on reporting the process of defining nodes in NMAs. Reporting of the node-making process in published applications seems insufficient and may potentially compromise the external validity of the analyses [62]. Generally, it should be based on clinical arguments defined a priori. A panel of experts needs to identify key features of interventions and provide a relevant taxonomy that pertains to the research question.

There is increasing awareness of the methodological challenges involved in handling complex interventions in the meta-analytical context. We argue that this trend will continue, especially for the CNMA models, given the recent methodological and software advances.

The additive CNMA model seems the most attractive approach when the additivity assumption holds, as it offers the advantage of exploring the comparative effectiveness of all possible combinations of components. The plausibility of the additivity and interaction assumptions behind the CNMA models should be evaluated, in addition to the assumptions of the standard NMA. Methods for testing the consistency assumption, ranking measures, and other methods developed for NMA may need tailoring to be employed in CNMA. Methodological extensions for implementing CNMA models in disconnected networks (e.g. for assessing the additivity or consistency assumptions) need to be provided. New methodological aspects of CNMA models can be further tested in real or simulated data sets.

Supporting information

S1 Table. Synthesis of studies for short term remission in panic disorder in patients with or without agoraphobia.

https://doi.org/10.1371/journal.pone.0246631.s001

(XLSX)

Acknowledgments

We would like to give special thanks to Professor Georgia Salanti for conceptualizing the initial research idea and for her valuable suggestions for this review paper.

References

  1. Petticrew M, Anderson L, Elder R, Grimshaw J, Hopkins D, Hahn R, et al. Complex interventions and their implications for systematic reviews: a pragmatic approach. J. Clin. Epidemiol. 2013;66(11):1209–1214. pmid:23953085
  2. Petticrew M. When are complex interventions ‘complex’? When are simple interventions ‘simple’? Eur. J. Public Health. 2011;21(4):397–398. pmid:21771736
  3. Hawe P, Shiell A, Riley T. Complex interventions: how ‘out of control’ can a randomised controlled trial be? BMJ. 2004;328(7455):1561–1563. pmid:15217878
  4. Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ. 2008;337:a1655. pmid:18824488
  5. Noyes J. A research and development agenda for systematic reviews that ask complex questions about complex interventions. J. Clin. Epidemiol. 2013;66(11):1262–1270. pmid:23953084
  6. Petticrew M, Rehfuess E, Noyes J, Higgins JP, Mayhew A, Pantoja T, et al. Synthesizing evidence on complex interventions: how meta-analytical, qualitative, and mixed-method approaches can contribute. J. Clin. Epidemiol. 2013;66(11):1230–1243. pmid:23953082
  7. Anderson LM, Oliver SR, Michie S, Rehfuess E, Noyes J, Shemilt I. Investigating complexity in systematic reviews of interventions by using a spectrum of methods. J. Clin. Epidemiol. 2013;66(11):1223–1229. pmid:23953087
  8. Tugwell P, Knottnerus JA, Idzerda L. Complex interventions–how should systematic reviews of their impact differ from reviews of simple or complicated interventions? J. Clin. Epidemiol. 2013;66(11):1195–1196. pmid:24079641
  9. Squires JE, Valentine JC, Grimshaw JM. Systematic reviews of complex interventions: framing the review question. J. Clin. Epidemiol. 2013;66(11):1215–1222. pmid:23953086
  10. Pigott T, Shepperd S. Identifying, documenting, and examining heterogeneity in systematic reviews of complex interventions. J. Clin. Epidemiol. 2013;66(11):1244–1250. pmid:23953079
  11. Burford B, Lewin S, Welch V, Rehfuess E, Waters E. Assessing the applicability of findings in systematic reviews of complex interventions can enhance the utility of reviews for decision making. J. Clin. Epidemiol. 2013;66(11):1251–1261. pmid:23953081
  12. Kelly MP, Noyes J, Kane RL, Chang C, Uhl S, Robinson KA, et al. AHRQ series on complex intervention systematic reviews-paper 2: defining complexity, formulating scope, and questions. J. Clin. Epidemiol. 2017;90:11–18. pmid:28720514
  13. Pigott T, Shepperd S. Identifying, documenting, and examining heterogeneity in systematic reviews of complex interventions. J. Clin. Epidemiol. 2013;66(11):1244–1250. pmid:23953079
  14. Guise JM, Chang C, Butler M, Viswanathan M, Tugwell P. AHRQ series on complex intervention systematic reviews-paper 1: an introduction to a series of articles that provide guidance and tools for reviews of complex interventions. J. Clin. Epidemiol. 2017;90:6–10. pmid:28720511
  15. Butler M, Epstein RA, Totten A, Whitlock EP, Ansari MT, Damschroder LJ, et al. AHRQ series on complex intervention systematic reviews-paper 3: adapting frameworks to develop protocols. J. Clin. Epidemiol. 2017;90:19–27. pmid:28720510
  16. Viswanathan M, McPheeters ML, Murad MH, Butler ME, Devine EEB, Dyson MP, et al. AHRQ series on complex intervention systematic reviews-paper 4: selecting analytic approaches. J. Clin. Epidemiol. 2017;90:28–36. pmid:28720515
  17. Guise JM, Butler ME, Chang C, Viswanathan M, Pigott T, Tugwell P; Complex Interventions Workgroup. AHRQ series on complex intervention systematic reviews-paper 6: PRISMA-CI extension statement and checklist. J. Clin. Epidemiol. 2017;90:43–50. pmid:28720516
  18. Guise JM, Butler M, Chang C, Viswanathan M, Pigott T, Tugwell P; Complex Interventions Workgroup. AHRQ series on complex intervention systematic reviews-paper 7: PRISMA-CI elaboration and explanation. J. Clin. Epidemiol. 2017;90:51–58. pmid:28720513
  19. Booth A, Noyes J, Flemming K, Moore G, Tunçalp Ö, Shakibazadeh E. Formulating questions to explore complex interventions within qualitative evidence synthesis. BMJ Glob. Health. 2019;4(Suppl 1):e001107. pmid:30775019
  20. Petticrew M, Knai C, Thomas J, Rehfuess EA, Noyes J, Gerhardus A, et al. Implications of a complexity perspective for systematic reviews and guideline development in health decision making. BMJ Glob. Health. 2019;4(Suppl 1):e000899.
  21. Noyes J, Booth A, Moore G, Flemming K, Tunçalp Ö, Shakibazadeh E. Synthesising quantitative and qualitative evidence to inform guidelines on complex interventions: clarifying the purposes, designs and outlining some methods. BMJ Glob. Health. 2019;4(Suppl 1):e000893. pmid:30775016
  22. Flemming K, Booth A, Garside R, Tunçalp Ö, Noyes J. Qualitative evidence synthesis for complex interventions and guideline development: clarification of the purpose, designs and relevant methods. BMJ Glob. Health. 2019;4(Suppl 1):e000882. pmid:30775015
  23. Higgins JPT, López-López JA, Becker BJ, Davies SR, Dawson S, Grimshaw JM, et al. Synthesising quantitative evidence in systematic reviews of complex health interventions. BMJ Glob. Health. 2019;4(Suppl 1):e000858. pmid:30775014
  24. Montgomery P, Movsisyan A, Grant SP, Macdonald G, Rehfuess EA. Considerations of complexity in rating certainty of evidence in systematic reviews: a primer on using the GRADE approach in global health. BMJ Glob. Health. 2019;4(Suppl 1):e000848. pmid:30775013
  25. Booth A, Moore G, Flemming K, Garside R, Rollins N, Tunçalp Ö, et al. Taking account of context in systematic reviews and guidelines considering a complexity perspective. BMJ Glob. Health. 2019;4(Suppl 1):e000840. pmid:30775011
  26. Montgomery AA, Peters TJ, Little P. Design, analysis and presentation of factorial randomised controlled trials. BMC Med. Res. Methodol. 2003;3(1):26. pmid:14633287
  27. Welton NJ, Caldwell DM, Adamopoulos E, Vedhara K. Mixed Treatment Comparison Meta-Analysis of Complex Interventions: Psychological Interventions in Coronary Heart Disease. Am. J. Epidemiol. 2009;169(9):1158–1165. pmid:19258485
  28. Petropoulou M, Nikolakopoulou A, Veroniki AA, Rios P, Vafaei A, Zarin W, et al. Bibliographic study showed improving statistical methodology of network meta-analyses published between 1999 and 2015. J. Clin. Epidemiol. 2017;82:20–28. pmid:27864068
  29. Jonkman NH, Groenwold RHH, Trappenburg JCA, Hoes AW, Schuurmans MJ. Complex self-management interventions in chronic disease unravelled: a review of lessons learned from an individual patient data meta-analysis. J. Clin. Epidemiol. 2017;83:48–56. pmid:28126599
  30. Madan J, Chen YF, Aveyard P, Wang D, Yahaya I, Munafo M, et al. Synthesis of evidence on heterogeneous interventions with multiple outcomes recorded over multiple follow-up times reported inconsistently: a smoking cessation case-study. J. R. Stat. Soc. Ser. A Stat. Soc. 2014;177(1):295–314.
  31. Bangdiwala SI, Hassem T, Swart LA, van Niekerk A, Pretorius K, Isobell D, et al. Evaluating the Effectiveness of Complex, Multi-component, Dynamic, Community-Based Injury Prevention Interventions: A Statistical Framework. Eval. Health Prof. 2018;41(4):435–455. pmid:30376737
  32. Freeman SC, Scott NW, Powell R, Johnston M, Sutton AJ, Cooper NJ. Component network meta-analysis identifies the most effective components of psychological preparation for adults undergoing surgery under general anaesthesia. J. Clin. Epidemiol. 2018;98:105–116. pmid:29476923
  33. Rücker G, Petropoulou M, Schwarzer G. Network meta-analysis of multicomponent interventions. Biom. J. 2020;62(3):808–821. pmid:31021449
  34. Thorlund K, Mills E. Stability of additive treatment effects in multiple treatment comparison meta-analysis: a simulation study. Clin. Epidemiol. 2012;4:75–85. pmid:22570567
  35. Mills EJ, Thorlund K, Ioannidis JPA. Calculating additive treatment effects from multiple randomized trials provides useful estimates of combination therapies. J. Clin. Epidemiol. 2012;65(12):1282–1288. pmid:22981250
  36. Caldwell DM, Welton NJ. Approaches for synthesising complex mental health interventions in meta-analysis. Evid. Based Ment. Health. 2016;19(1):16–21. pmid:26792834
  37. Tanner-Smith EE, Grant S. Meta-Analysis of Complex Interventions. Annu. Rev. Public Health. 2018;39:135–151. pmid:29328876
  38. Rücker G, Schmitz S, Schwarzer G. Component network meta-analysis compared to a matching method in a disconnected network: A case study. Biom. J. 2020.
  39. Furukawa TA, Watanabe N, Churchill R. Combined psychotherapy plus antidepressants for panic disorder with or without agoraphobia. Cochrane Database Syst. Rev. 2007;(1):CD004364. pmid:17253502
  40. Thompson SG, Higgins JPT. How should meta-regression analyses be undertaken and interpreted? Stat. Med. 2002;21(11):1559–1573. pmid:12111920
  41. Schwarzer G. meta: An R package for meta-analysis. R News. 2007;7(3):40–45.
  42. Bower P, Gilbody S, Richards D, Fletcher J, Sutton A. Collaborative care for depression in primary care: making sense of a complex intervention: systematic review and meta-regression. Br. J. Psychiatry. 2006;189(6):484–493. pmid:17139031
  43. Riley RD, Kauser I, Bland M, Thijs L, Staessen JA, Wang J, et al. Meta-analysis of randomised trials with a continuous outcome according to baseline imbalance and availability of individual participant data. Stat. Med. 2013;32(16):2747–2766. pmid:23303608
  44. Riley RD, Lambert PC, Abo-Zaid G. Meta-analysis of individual participant data: rationale, conduct, and reporting. BMJ. 2010;340:c221. pmid:20139215
  45. Stewart L, Parmar D. Meta-analysis of the literature or of individual patient data: is there a difference? Lancet. 1993;341(8842):418–422. pmid:8094183
  46. Debray TPA, Moons KG, van Valkenhoef G, Efthimiou O, Hummel N, Groenwold RH, et al.; GetReal Methods Review Group. Get real in individual participant data (IPD) meta-analysis: a review of the methodology. Res. Synth. Methods. 2015;6(4):293–309. pmid:26287812
  47. Veroniki AA, Straus SE, Ashoor H, Stewart LA, Clarke M, Tricco AC. Contacting authors to retrieve individual patient data: study protocol for a randomized controlled trial. Trials. 2016;17(1):138. pmid:26975720
  48. Jonkman NH, Westland H, Groenwold RH, Ågren S, Atienza F, Blue L, et al. Do Self-Management Interventions Work in Patients With Heart Failure? An Individual Patient Data Meta-Analysis. Circulation. 2016;133(12):1189–1198. pmid:26873943
  49. Jonkman NH, Westland H, Trappenburg JC, Groenwold RH, Bischoff EW, Bourbeau J, et al. Do self-management interventions in COPD patients work and which patients benefit most? An individual patient data meta-analysis. Int. J. Chron. Obstruct. Pulmon. Dis. 2016;11:2063–2074. pmid:27621612
  50. Jonkman NH, Westland H, Trappenburg JC, Groenwold RH, Bischoff EW, Bourbeau J, et al. Characteristics of effective self-management interventions in patients with COPD: individual patient data meta-analysis. Eur. Respir. J. 2016;48(1):55–68. pmid:27126694
  51. Jonkman NH, Westland H, Groenwold RH, Ågren S, Anguita M, Blue L, et al. What Are Effective Program Characteristics of Self-Management Interventions in Patients With Heart Failure? An Individual Patient Data Meta-analysis. J. Card. Fail. 2016;22(11):861–871. pmid:27374838
  52. Tricco AC, Thomas SM, Veroniki AA, Hamid JS, Cogo E, Strifler L, et al. Comparisons of Interventions for Preventing Falls in Older Adults: A Systematic Review and Meta-analysis. JAMA. 2017;318(17):1687–1699. pmid:29114830
  53. Nikolakopoulou A, Chaimani A, Veroniki AA, Vasiliadis HS, Schmid CH, Salanti G. Characteristics of Networks of Interventions: A Description of a Database of 186 Published Networks. PLoS ONE. 2014;9(1):e86754. pmid:24466222
  54. Pompoli A, Furukawa TA, Efthimiou O, Imai H, Tajika A, Salanti G. Dismantling cognitive-behaviour therapy for panic disorder: a systematic review and component network meta-analysis. Psychol. Med. 2018;48(12):1945–1953. pmid:29368665
  55. Rücker G. Network meta-analysis, electrical networks and graph theory. Res. Synth. Methods. 2012;3(4):312–324. pmid:26053424
  56. Mills EJ, Druyts E, Ghement I, Puhan MA. Pharmacotherapies for chronic obstructive pulmonary disease: a multiple treatment comparison meta-analysis. Clin. Epidemiol. 2011;3:107–129. pmid:21487451
  57. Lunn DJ, Thomas A, Best N, Spiegelhalter D. WinBUGS—a Bayesian modelling framework: concepts, structure, and extensibility. Stat. Comput. 2000;10:325–337.
  58. Thomas A, O'Hara B, Ligges U, Sturtz S. Making BUGS Open. R News. 2006;6(1):12–17.
  59. Plummer M. rjags: Bayesian Graphical Models using MCMC. URL https://cran.r-project.org/web/packages/rjags/rjags.pdf, R package version 4–8; 2018.
  60. Rücker G, Krahn U, König J, Efthimiou O, Schwarzer G. netmeta: Network Meta-Analysis using Frequentist Methods. URL https://CRAN.R-project.org/package=netmeta, R package version 1.2–1; 2020.
  61. R Development Core Team. R: A language and environment for statistical computing. Version 3.5.1, R Foundation for Statistical Computing, Vienna, Austria, 2018. URL http://www.R-project.org
  62. James A, Yavchitz A, Ravaud P, Boutron I. Node-making process in network meta-analysis of nonpharmacological treatment are poorly reported. J. Clin. Epidemiol. 2018;97:95–102. pmid:29196202