Abstract
Background
Network meta-analysis (NMA) is a new tool developed to overcome some limitations of pairwise meta-analyses. NMAs provide evidence on more than two comparators simultaneously. This study aimed to map the characteristics of the published NMAs on drug therapy comparisons.
Methods
A systematic review of NMAs comparing pharmacological interventions was performed. Searches in Medline (PubMed) and Scopus along with manual searches were conducted. The main characteristics of NMAs were systematically collected: publication metadata, criteria for drug inclusion, statistical methods used, and elements reported. A methodological quality score with 25 key elements was created and applied to the included NMAs. To identify potential trends, the median of the publication year distribution was used as a cut-off.
Results
The study identified 365 NMAs published from 2003 to 2016 in more than 30 countries. Randomised controlled trials were the primary source of data, with only 5% including observational studies, and 230 NMAs used a placebo as a comparator. Less than 15% of NMAs were registered in PROSPERO or a similar system. One third of studies followed PRISMA and less than 9% Cochrane recommendations. Around 30% presented full-search strategies of the systematic review, and 146 NMAs stated the selection criteria for drug inclusion. Over 75% of NMAs presented network plots, but only half described their geometry. Statistical parameters (model fit, inconsistency, convergence) were properly reported by one third of NMAs. Although 216 studies exhibited supplemental material, no data set of primary studies was available. The methodological quality score (mean 13.9; SD 3.8) presented a slightly positive trend over the years.
Citation: Tonin FS, Steimbach LM, Mendes AM, Borba HH, Pontarolo R, Fernandez-Llimos F (2018) Mapping the characteristics of network meta-analyses on drug therapy: A systematic review. PLoS ONE 13(4): e0196644. https://doi.org/10.1371/journal.pone.0196644
Editor: Russell J. de Souza, McMaster University, CANADA
Received: March 3, 2017; Accepted: April 17, 2018; Published: April 30, 2018
Copyright: © 2018 Tonin et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All relevant data are within the paper and its Supporting Information files. Additional data are available at Open Science Framework (DOI: 10.17605/OSF.IO/GVQXT).
Funding: Two scholarships were provided by the Brazilian National Counsel of Technological and Scientific Development (CNPq) and the Coordination for the Improvement of Higher Education Personnel (CAPES).
Competing interests: The authors have declared that no competing interests exist.
Introduction
Traditional pairwise meta-analyses represented a step forward in the evidence-based selection between therapeutic alternatives. However, the lack of a complete set of head-to-head clinical trials limits the evidence in many areas [1,2]. This situation is especially relevant in highly innovative therapeutic classes, in which trials comparing two drugs require large sample sizes and financial resources [1,3,4]. In addition, traditional pairwise meta-analyses are restricted to comparing only two treatments at a time [5–8].
The indirect comparison method proposed by Bucher et al [9] provided a potential solution for treatments that had not been directly compared before. However, this model can only be applied to data from two-arm trials sharing a common comparator, allowing an indirect comparison among three treatments (A vs. C, via trials of A vs. B and B vs. C) [10,11]. Thereafter, Lumley [12] and Lu and Ades [13] improved indirect treatment comparison techniques, involving more than one common comparator (the linking treatment) and creating NMA, also called mixed or multiple treatment comparison meta-analysis. NMA allows direct and indirect results from all studies' arms to be combined simultaneously into a single pooled effect, which strengthens the results and provides a broader picture of all treatments in the same model [14–17]. Moreover, NMAs can calculate the probability of each treatment being the best (or worst) for a specific outcome by creating probability rank orders or rankograms (graphical methods), which are useful for the decision-making process [11,18].
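The arithmetic behind the Bucher adjusted indirect comparison can be sketched in a few lines. In this minimal illustration, the log odds ratios and standard errors are invented for demonstration (they do not come from any cited trial): the A vs. C effect is derived from the two direct comparisons against the common comparator B, with the variances adding.

```python
import math

# Bucher adjusted indirect comparison [9]: on the log scale, effects
# subtract and variances add. All numbers below are hypothetical.
d_ab, se_ab = -0.40, 0.15  # log OR, A vs. B (direct evidence)
d_cb, se_cb = -0.10, 0.20  # log OR, C vs. B (direct evidence)

d_ac = d_ab - d_cb                      # indirect log OR, A vs. C
se_ac = math.sqrt(se_ab**2 + se_cb**2)  # SE of the indirect estimate

lo, hi = d_ac - 1.96 * se_ac, d_ac + 1.96 * se_ac
print(f"A vs. C: log OR {d_ac:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

Note that the indirect estimate is necessarily less precise than either direct comparison, which is one reason why combining direct and indirect evidence in an NMA can strengthen the pooled effect.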
Over the last several years, NMA has matured as a technique, with models available for all types of raw data, producing different pooled effect measures, using both frequentist and Bayesian frameworks with different approaches (i.e. contrast-based or arm-based), and with several software packages available [19–26]. However, initial analyses of NMAs reported some gaps in the use of this new technique [19,27–29]. Thus, our aim was to map the characteristics of all published NMAs of drug therapy comparisons.
Material and methods
Search and eligibility criteria
A systematic review was performed according to PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) and Cochrane Collaboration recommendations [30,31]. Two reviewers performed all steps independently, and discrepancies were resolved by a third author.
We searched for articles reporting NMAs comparing drug therapy alternatives in PubMed and Scopus without time or language limits (last updated in March 2016). A manual search in the reference lists of included studies was performed, and grey literature was also searched in Google and Google Scholar. The complete search strategies are presented as supporting information (S1 Table).
We included studies using NMAs—also referred to as multiple or mixed treatment comparisons, mixed treatment meta-analyses, or indirect meta-analyses—to compare any drug therapy intervention (defined as a pharmacological intervention including an active substance), alone or in combination with other pharmacological interventions, regardless of regimen or dosage. We considered any type of network (with open or closed loops) of experimental, quasi-experimental, or observational trials that assessed at least three treatments, compared head to head or against placebo/no control, in patients of any gender, age, or clinical/medical condition. Non-NMAs, study protocols, studies reporting data only on non-pharmacological interventions, and articles written in non-Roman characters were excluded during the screening (title and abstract reading) and full-text eligibility steps.
Data extraction and analyses
We used a standardised data collection form to extract data on: (i) the studies' general characteristics, such as author names, countries of affiliation, journal impact factor (as reported in Journal Citation Reports), publication year, sample size (number of included trials and population), type of included studies, and patients' clinical conditions; (ii) methods used in the systematic review (databases included, description of complete search strategies, reports of manual and grey literature searches, and compliance with recommendations and registration [PRISMA–Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement, Cochrane recommendations, PROSPERO–International Prospective Register of Systematic Reviews]) and the assessment of primary studies' quality using validated methods (i.e. Jadad score or Cochrane Risk of Bias tool); (iii) description of statistical analyses (frequentist, Bayesian, or both), statistical model (random, fixed, or both), statistical approaches (i.e. contrast-based or arm-based), additional analyses (i.e. subgroup, sensitivity, trial-level outlier detection, or meta-regression analyses), inconsistency analyses, model fit and convergence, and computer software used for calculations; (iv) report of results (i.e. supplementary material; data on direct, indirect, or mixed evidence; presence of a network plot; description of network geometry; and presence of rank orders); (v) conflict of interest and funding source declarations.
A methodological quality score with 25 key elements for the performance and reporting of systematic reviews and NMAs was applied. The construction of this preliminary tool was based on the PRISMA-NMA statement and assumed the Bayesian approach to conducting NMAs because of its flexibility and interpretability. The main elements of conducting and reporting the systematic review process and the statistical analyses of NMAs were incorporated into this preliminary tool, considering both internal validity and reporting quality items. The complete quality score description is presented as supporting information (see S2 Table). Potential correlations of the methodological quality score were tested with (i) the year of publication of the NMA, (ii) the impact factor of the journal in which the NMA was published, and (iii) the area of the clinical condition evaluated by the NMA (e.g. cardiovascular diseases, metabolic disorders, respiratory diseases).
Statistical analyses
To evaluate potential time trends, the median of the publication year distribution was used as a cut-off. The normality of the variables was assessed with the Kolmogorov–Smirnov and Shapiro–Wilk tests. Continuous variables with non-normal distributions were reported as median and interquartile range (IQR), and the Wilcoxon–Mann–Whitney test was used for between-group comparisons. Categorical variables were compared using the chi-square test for univariate comparisons and reported as absolute and relative frequencies. The methodological quality score (normally distributed) was correlated with the year of publication and the impact factor using Pearson's test. ANOVA was used to associate the methodological quality score with clinical conditions. All analyses were conducted in IBM SPSS Statistics v. 24.0 (Armonk, NY: IBM Corp.), and probabilities below the 5% level were considered statistically significant.
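The Pearson correlation step can be illustrated from first principles. The (year, score) pairs below are invented for demonstration only and do not come from the study's data set:

```python
import math
import statistics

def pearson_r(x, y):
    """Pearson correlation coefficient, computed from first principles."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical publication years and quality scores (illustration only).
years = [2010, 2011, 2012, 2013, 2014, 2015, 2016]
scores = [10.0, 11.5, 11.0, 13.0, 14.5, 15.0, 16.5]
print(round(pearson_r(years, scores), 3))
```

In practice SPSS reports the same coefficient directly; the sketch only shows what the statistic measures: the strength of the linear trend of the quality score over time.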
Results
After the systematic search, a total of 1,425 articles were retrieved from PubMed and Scopus. During the screening process, 930 articles were considered irrelevant, and another 130 articles were excluded during the full-text appraisal, resulting in 365 NMAs for data extraction (Fig 1). The complete raw data of the included NMAs are available on the OSF platform (DOI: 10.17605/OSF.IO/GVQXT).
These 365 articles were published between 2003 and 2016, with a median in 2014 and an inflection point in 2010 (see Table 1). Most studies (n = 265; 72.6%) were produced by authors from a single country, most frequently the United States (n = 62), China (n = 57), the United Kingdom (n = 33), Canada (n = 27), and Italy (n = 20). International collaboration among authors did not statistically differ before and after 2014, accounting for 24.7% of all published studies (Table 1). Switzerland, the Netherlands, and Germany were the most collaborative countries, with 80%, 78.3%, and 69.6% of their articles published in collaboration, respectively. The final map of NMA publications (Fig 2) shows that the United States published the most NMAs (n = 115), followed by the United Kingdom (n = 86) and China (n = 73). The medical conditions evaluated were cardiovascular diseases (n = 98), oncologic disorders (n = 50), autoimmune disorders (n = 39), mental health disorders (n = 32), infectious diseases (n = 32), respiratory diseases (n = 27), musculoskeletal disorders (n = 10), pain (n = 7), gastrointestinal injuries (n = 6), and other health disorders (n = 64), which included diseases of different systems (skin, eye, endocrine, genitourinary). The 365 NMAs were published in 204 different journals, but a decline in the impact factor of the journals publishing NMAs was observed: before 2014, the mean impact factor was 6.214; after 2014, it was 4.701 (Table 1).
Countries are presented as nodes. Node sizes are proportional to the number of NMA publications by country. Line thicknesses are proportional to the number of NMA publications between countries publishing in collaboration.
A protocol registration for the systematic review (i.e. in PROSPERO) was provided by 53 studies (14.5%), and 116 studies (31.8%) stated compliance with the PRISMA guideline. Both parameters significantly increased after 2014 (p = 0.013 and p < 0.001, respectively). Cochrane recommendations were followed by only 32 studies, of which 20 were published after 2014. Less than half of the articles (n = 146; 40%) reported objective criteria for the selection of the drugs or classes included in the NMA, whereas 87 articles (23.8%) provided non-objective reasons (e.g. 'most commonly used drugs', 'frequent treatments', 'currently employed drugs'). Studies only occasionally provided complete search strategies (29.6%), with no significant difference before and after 2014 (p = 0.561).
The median number of databases used for the electronic searches was three (IQR = 1). The vast majority of the articles (n = 342; 93.7%) detailed the databases used, the most frequent being PubMed/MEDLINE (92.9%), the Cochrane Library (78.4%), Scopus/Embase (77.3%), ClinicalTrials.gov (17.0%), Web of Science (10.4%), CINAHL (6.3%), Health Technology Assessment (5.5%), and International Pharmaceutical Abstracts (1.9%). Manual searches and grey literature searches were conducted by 73.4% and 48.2% of studies, respectively. These two indicators, along with the supply of online supplementary material (provided by 216 articles), improved after 2014 (p values of 0.010, 0.004, and 0.027, respectively). The majority of NMAs (94.2%) included only randomised controlled trials, with the remaining 5.2% also including non-randomised or quasi-experimental trials or observational studies. Only two NMAs (0.5%) were restricted to observational studies. The median number of primary studies included in the networks (n = 21) remained similar before and after 2014 (p = 0.706). However, the median number of patients significantly decreased after 2014 (p = 0.019). Methodological quality assessment of primary studies was performed in 193 articles using the Jadad score or the Cochrane Risk of Bias tool. Over the years, more authors declared having no conflicts of interest or did not mention any in their articles. More than 55% of studies received external financial support (Table 1).
As part of the NMA analyses, a network plot was provided by 287 articles for at least one assessed outcome (Table 2). The median number of nodes in the networks was 7.0 (IQR = 6), ranging from 3 to 71. After 2014, there was a statistically significant increase in articles describing the geometry of the network (e.g. node sizes, line widths, proportion of trials and arms) (p < 0.001), as well as in articles presenting rank order analyses of which intervention could be the best or worst for the clinical condition under evaluation (p < 0.001). A placebo was used as a comparator in 230 NMAs (63%). The statistical framework (frequentist or Bayesian) was described in 315 studies (86.3%), with Bayesian analyses (n = 297) the most prevalent; frequentist methods were used in 15 articles, and three studies applied both. The statistical model was reported by 349 NMAs, with the random-effects model (62.5%) the most common; 33.8% of networks were built with both fixed- and random-effects models, and only 3% used a fixed-effect model alone. As expected, 91.8% of studies (n = 335) presented their main results as mixed treatment evidence, accounting for direct and indirect comparisons in a single effect (e.g. matrix of results, tables of data). Moreover, 52.9% of studies reported results for direct comparisons and 12.1% for indirect comparisons individually. The software used was stated in 345 studies, with WinBUGS (57.5%), Stata (27.9%), R (23.8%), and ADDIS (6.0%) the most frequent. Supplementary analyses such as subgroup, sensitivity, and meta-regression analyses of included primary studies were conducted in about 60% of NMAs, and their prevalence was similar before and after 2014 (Table 2). The statistical approach (i.e. contrast-based or arm-based) was mentioned by 20% of studies (n = 73), and only 5.2% of NMAs (n = 19) reported being multivariate meta-analyses. The detection of outlying trials in the network was performed in only 26 studies (7.1%).
However, there has been an increase of articles reporting network parameters such as inconsistency of direct and indirect evidence (p = 0.002), model fit (p = 0.333), and convergence (p = 0.004) in recent years.
Overall, the 365 NMAs obtained a mean methodological quality score (considering items of internal validity and reporting quality) of 13.9 (SD = 3.8), ranging from 2 to 22. Before 2014, a mean of nine (SD 2.1) parameters was properly reported by at least half of the studies, whereas after 2014 this number increased to 13 (SD 1.2) parameters. Reporting of drug selection criteria and provision of supplemental material have increased since 2012, whereas descriptions of NMAs' geometry and rank orders started increasing in 2013. However, parameters such as PROSPERO registration, adherence to PRISMA/Cochrane recommendations, and some statistical model descriptions are still poorly reported by authors (Table 3). A moderate correlation was found between the year of publication of the NMA and the methodological quality score (r = 0.315) (Fig 3), and only a slight correlation between the impact factor and the quality score (r = 0.172). No association was found between the quality score and the medical conditions of the NMAs (p = 0.437).
Discussion
We identified a rapid increase in the publication of NMAs as a valid method to compare pharmacological treatments during the 2010s. Similar growth was previously reported for pairwise meta-analyses, whose annual publications increased more than 20-fold between 1994 (n = 386) and 2014 (n = 8203) [32–34]. The growing interest in NMAs is evident: more than 50% of the NMAs were published since 2014, by authors from more than 30 countries, in more than 200 journals. Scientific production follows a geographical distribution associated with the number of researchers, available technology, national science funding, and international collaboration [33–35]. A study of the global production of pairwise meta-analyses (n = 736) published by 3,178 authors from 51 countries reported that developed countries such as the USA, the UK, and Canada were the greatest producers [36]. Similar results were found in our study, but with the emergence of new countries such as China and Italy, which may change the future publication patterns of NMAs [37,38]. New countries may enter the field, probably because NMAs are a valid, inexpensive, and quick alternative to support pricing and marketing approval decisions, especially in the absence of direct comparisons [27,39,40]. The increasing rate of NMA publications may also have caused the decrease in the impact factor of the journals publishing NMAs. The very low slope of the correlation between impact factor and NMAs' methodological quality score suggests virtually no association. It seems that when NMAs were an innovative statistical tool, the journals with the highest impact factors were most interested in the technique; as NMA production increased, more journals became interested, including those with lower impact factors.
The quality of reporting of methodological aspects in both systematic reviews and NMAs has also significantly improved over the years. As in systematic reviews [31,41,42], more NMAs have performed manual and grey literature searches. We also found that more NMAs followed the PRISMA statement and provided a PROSPERO registration number. However, although Cochrane guidelines have been available since 1994, few NMAs claimed to follow these recommendations. Though authors searched more than two electronic databases, as recommended [31,43], only one third provided complete search strategies, as similarly reported in a study on the systematic review process of NMAs [44]. As expected, PubMed/MEDLINE was the most commonly used database for electronic searches, perhaps because of its expanded coverage of biomedicine and the health sciences [45] and its free access. On the other hand, Web of Science was used by only one in ten NMAs. The highly restrictive journal indexing process of the Web of Science, which is claimed as a strength for calculating the impact factor [46–48], may also explain why this database was left out of about 90% of NMA searches.
Probably one of the most important weaknesses of many NMAs is the lack of inclusion and exclusion criteria for molecules [49–51]. More than one third of NMAs lacked objective criteria for selecting the substances included in their analyses. Despite the lack of standardised criteria for the inclusion of molecules in meta-analyses, efforts to minimise potential biases are needed [43,52]. Whatever the practice in pairwise meta-analyses, drug selection is particularly important for NMAs because differences in the selection of agents influence the network estimates and rankograms, and results may not reflect the comparative profile of drugs when some treatments are missing [17,53]. The reasons for drug selection should be clearly and explicitly provided in registered protocols, as well as in the methods section of the articles reporting the NMAs.
Almost all NMAs included only randomised controlled trials, with more than 60% using a placebo as the common comparator. Randomised, double-blind, placebo-controlled trials are the gold standard for demonstrating the superior efficacy of a new treatment; however, ethical issues around the use of a placebo as a comparator, and the possibly overestimated effect size of the active drug being compared, have been discussed [54,55]. Head-to-head trials are increasingly used, as are observational studies. When carefully designed, the latter can provide critical information about drugs used in the real world and have been recommended for comparative effectiveness research, given the few differences found between well-designed observational studies and randomised controlled trials [56–58]. In the future, the inclusion of these other types of studies in NMAs will likely increase [59,60]. Although the number of primary studies in NMAs has remained similar over the years, the number of patients included has significantly decreased, probably because of ethical issues and the costs of clinical trials. A study of pairwise meta-analyses showed that only 58.1% (n = 451) reported a priori sample size calculations [61]. NMAs typically include more trials than traditional meta-analyses because of the multiple comparisons, but sample size calculations are still required. For NMAs, the sample size for a particular treatment comparison should be estimated as the number of patients in a pairwise meta-analysis that would provide the same degree and strength of evidence as the indirect comparison or NMA [62].
The graphical representation of a network (plot) and the description of its geometry offer a visual sense of the trials' sample sizes, tendencies, and available direct and indirect evidence [5,6]. In the future, a standardised way of reporting NMA plots and geometry should be considered as an additional parameter of publication reproducibility. The use of rankograms to display each treatment's probability of being the best choice is a helpful tool for policy makers [11,18,28]. However, rankograms alone may not be enough, and the appropriate graphs depend on both the nature of the data used in the NMA and the statistical method employed [11,63]. The Bayesian approach is the most commonly used because it provides a straightforward way to make predictions: it combines the likelihood with a prior probability distribution (which reflects prior belief about plausible parameter values) to obtain a posterior probability distribution of the parameters, which extends the frequentist approach [1,64]. However, as shown in our results, reporting of the Bayesian methods used in NMAs is still lacking. Although widely used, the contrast-based approach, which focuses on modelling relative treatment effects [26,65], was described as poorly as the recently developed arm-based approach [66,67].
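The prior-to-posterior updating just described can be illustrated with the simplest conjugate case: a normal prior and a normal likelihood for a single relative effect. All numbers below are invented; real NMA models involve many such parameters and are fitted by MCMC (e.g. in WinBUGS) rather than in closed form.

```python
# Conjugate normal-normal Bayesian update of one treatment effect.
# All numbers are hypothetical and purely illustrative.
prior_mean, prior_sd = 0.0, 1.0    # vague prior belief about the log OR
obs_effect, obs_se = -0.35, 0.20   # hypothetical observed pooled effect

w_prior = 1.0 / prior_sd**2        # precision = inverse variance
w_data = 1.0 / obs_se**2

post_var = 1.0 / (w_prior + w_data)
post_mean = post_var * (w_prior * prior_mean + w_data * obs_effect)

print(f"posterior: mean {post_mean:.3f}, SD {post_var**0.5:.3f}")
```

The posterior mean sits between the prior mean and the observed effect, weighted by precision; with a vague prior, the data dominate and the posterior approaches the frequentist pooled estimate.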
NMAs share other methodological challenges with traditional pairwise meta-analyses (e.g. issues of bias, heterogeneity, and precision) [11]. The statistical strength of NMAs relies on two basic assumptions: consistency and transitivity. Agreement between the direct and indirect estimates of a comparison ensures consistency, and a balanced distribution of effect modifiers across trials guarantees transitivity [63,68]. However, while heterogeneity and inconsistency are being better addressed in NMAs, trial-level outlier assessment and multivariate meta-analyses are still poorly reported (in less than 10% of studies), probably because little research exists in this field [69,70]. To ensure that these assumptions are verified, information on the model characteristics should always be provided in either the main article or the supplementary material. The use of online supplementary material has significantly increased in NMA publications, because it adds no cost to the publication and can provide important further detail [71,72]. This resource should always include a minimum data set of the systematic review (e.g. complete search strategies for at least one database, characteristics of the included studies, and their methodological quality and risk of bias) and, when possible, the complete data set with raw data for the NMA (e.g. raw data or single effect sizes of the primary studies for at least the main outcome, the software and algorithm/model used, and the evaluated statistical parameters).
Many statistical parameters have been properly reported since the first NMA publications, but key aspects such as inconsistency factors, model fit, statistical approach, detection of trial-level outliers, and convergence are still poorly reported. To improve methodological reporting standards, guidelines and statements—such as the recently published PRISMA-NMA extension of 2015—should help researchers to follow similar reporting patterns, enhancing the evidence quality and reproducibility of NMAs [73,74]. Editors and peer reviewers should ensure that authors carefully follow these recommendations, and periodical analyses could identify reporting weaknesses and recommend guideline clarifications.
Our study has some limitations. We included only NMAs of drug interventions, but NMAs of non-pharmacological interventions are also available in the literature; we cannot guarantee that our results extend to these other NMAs. Although the quality score tool was created by combining internal validity items with reporting quality items to summarise the methodological requirements for performing an NMA, including different items in the score could produce different results. Further studies on methodological quality assessment tools for NMAs should be conducted.
Finally, our map of the characteristics of published NMAs on pharmacological interventions emphasises this tool's potential as a gold standard method for healthcare evidence synthesis. Publication of NMAs is growing rapidly, as a robust tool to inform decisions on the effectiveness and safety of drug classes. Nonetheless, some weaknesses identified in the NMA literature, such as non-objective drug selection criteria, may limit the technique's credibility and reproducibility.
Acknowledgments
We thank the Brazilian National Council of Technological and Scientific Development (CNPq) and Coordination for the Improvement of Higher Education Personnel (CAPES) for the academic support and scholarships.
References
- 1. Kim H, Gurrin L, Ademi Z, Liew D (2014) Overview of methods for comparing the efficacies of drugs in the absence of head-to-head clinical trial data. Br J Clin Pharmacol 77: 116–121. pmid:23617453
- 2. Leucht S, Chaimani A, Cipriani AS, Davis JM, Furukawa TA, Salanti G (2016) Network meta-analyses should be the highest level of evidence in treatment guidelines. Eur Arch Psychiatry Clin Neurosci 266: 477–480. pmid:27435721
- 3. Fisher LD, Gent M, Buller HR (2001) Active-control trials: how would a new agent compare with placebo? A method illustrated with clopidogrel, aspirin, and placebo. Am Heart J 141: 26–32. pmid:11136483
- 4. Pocock SJ, Gersh BJ (2014) Do current clinical trials meet society's needs?: a critical review of recent evidence. J Am Coll Cardiol 64: 1615–1628. pmid:25301467
- 5. Catala-Lopez F, Tobias A, Cameron C, Moher D, Hutton B (2014) Network meta-analysis for comparing treatment effects of multiple interventions: an introduction. Rheumatol Int 34: 1489–1496. pmid:24691560
- 6. Hutton B, Salanti G, Chaimani A, Caldwell DM, Schmid C, Thorlund K, et al. (2014) The quality of reporting methods and results in network meta-analyses: an overview of reviews and suggestions for improvement. PLoS One 9: e92508. pmid:24671099
- 7. Hoaglin DC, Hawkins N, Jansen JP, Scott DA, Itzler R, Cappelleri JC, et al. (2011) Conducting indirect-treatment-comparison and network-meta-analysis studies: report of the ISPOR Task Force on Indirect Treatment Comparisons Good Research Practices: part 2. Value Health 14: 429–437. pmid:21669367
- 8. Carroll K, Hemmings R (2016) On the need for increased rigour and care in the conduct and interpretation of network meta-analyses in drug development. Pharm Stat 15: 135–142. pmid:26732132
- 9. Bucher HC, Guyatt GH, Griffith LE, Walter SD (1997) The results of direct and indirect treatment comparisons in meta-analysis of randomized controlled trials. J Clin Epidemiol 50: 683–691. pmid:9250266
- 10. Hasselblad V (1998) Meta-analysis of multitreatment studies. Med Decis Making 18: 37–43. pmid:9456207
- 11. Hassan S, N R, Nair NS (2015) Methodological considerations in network meta-analysis. Int J Med Sci Public Health 4: 588–594.
- 12. Lumley T (2002) Network meta-analysis for indirect treatment comparisons. Statistics in Medicine 21: 2313–2324. pmid:12210616
- 13. Lu G, Ades AE (2004) Combination of direct and indirect evidence in mixed treatment comparisons. Stat Med 23: 3105–3124. pmid:15449338
- 14. Nikolakopoulou A, Mavridis D, Salanti G (2015) Planning future studies based on the precision of network meta-analysis results. Stat Med 6.
- 15. Jansen JP, Naci H (2013) Is network meta-analysis as valid as standard pairwise meta-analysis? It all depends on the distribution of effect modifiers. BMC Med 11: 159. pmid:23826681
- 16. Jansen JP, Trikalinos T, Cappelleri JC, Daw J, Andes S, Eldessouki R, et al. (2014) Indirect Treatment Comparison/Network Meta-Analysis Study Questionnaire to Assess Relevance and Credibility to Inform Health Care Decision Making: An ISPOR-AMCP-NPC Good Practice Task Force Report. Value in Health: 157–173. pmid:24636374
- 17. Salanti G (2012) Indirect and mixed-treatment comparison, network, or multiple-treatments meta-analysis: many names, many benefits, many concerns for the next generation evidence synthesis tool. Res Synth Methods 3: 80–97. pmid:26062083
- 18. Caldwell DM (2014) An overview of conducting systematic reviews with network meta-analysis. Syst Rev 3: 109. pmid:25267336
- 19. Salanti G, Higgins JP, Ades AE, Ioannidis JP (2008) Evaluation of networks of randomized trials. Stat Methods Med Res 17: 279–301. pmid:17925316
- 20. Dias S, Sutton AJ, Ades AE, Welton NJ (2013) Evidence synthesis for decision making 2: a generalized linear modeling framework for pairwise and network meta-analysis of randomized controlled trials. Med Decis Making 33: 607–617. pmid:23104435
- 21. Veroniki AA, Vasiliadis HS, Higgins JP, Salanti G (2013) Evaluation of inconsistency in networks of interventions. Int J Epidemiol 42: 332–345. pmid:23508418
- 22. Greco T, Edefonti V, Biondi-Zoccai G, Decarli A, Gasparini M, Zangrillo A, et al. (2015) A multilevel approach to network meta-analysis within a frequentist framework. Contemporary Clinical Trials 42: 51–59. pmid:25804722
- 23. Van Valkenhoef G, Dias S, Ades AE, Welton NJ (2015) Automated generation of node-splitting models for assessment of inconsistency in network meta-analysis. Res Synth Methods.
- 24. Efthimiou O, Debray TPA, Van Valkenhoef G, Trelle S, Panayidou K, Moons KG, et al. (2016) GetReal in network meta-analysis: a review of the methodology. Res Synth Methods.
- 25. Chaimani A, Higgins JP, Mavridis D, Spyridonos P, Salanti G (2013) Graphical tools for network meta-analysis in STATA. PLoS One 8: e76654. pmid:24098547
- 26. Lin L, Zhang J, Hodges JS, Chu H (2017) Performing Arm-Based Network Meta-Analysis in R with the pcnetmeta Package. J Stat Softw 80. pmid:28883783
- 27. Caldwell DM, Dias S, Welton NJ (2015) Extending Treatment Networks in Health Technology Assessment: How Far Should We Go? Value Health 18: 673–681. pmid:26297096
- 28. Bafeta A, Trinquart L, Seror R, Ravaud P (2014) Reporting of results from network meta-analyses: methodological systematic review. BMJ 348: 1–9.
- 29. Tonin FS, Rotta I, Pontarolo R (2017) Network meta-analysis: a technique to gather evidence from direct and indirect comparisons. Pharm Pract (Granada) 15: 943.
- 30. Moher D, Liberati A, Tetzlaff J, Altman DG; The PRISMA Group (2009) Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. J Clin Epidemiol 62: 1006–1012. pmid:19631508
- 31. Higgins JPT, Green S (2011) Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0. The Cochrane Collaboration.
- 32. Riaz IB, Khan MS, Riaz H, Goldberg RJ (2016) Disorganized Systematic Reviews and Meta-analyses: Time to Systematize the Conduct and Publication of These Study Overviews? Am J Med 129: 339 e311–338.
- 33. King DA (2004) The scientific impact of nations. Nature 430: 311–316. pmid:15254529
- 34. Tebala GD (2015) What is the future of biomedical research? Med Hypotheses 85: 488–490. pmid:26194725
- 35. Wagner CS, Park HW, Leydesdorff L (2015) The Continuing Growth of Global Cooperation Networks in Research: A Conundrum for National Governments. PLoS One 10: e0131816. pmid:26196296
- 36. Catala-Lopez F, Alonso-Arroyo A, Hutton B, Aleixandre-Benavent R, Moher D (2014) Global collaborative networks on meta-analyses of randomized trials published in high impact factor medical journals: a social network analysis. BMC Med 12: 15. pmid:24476131
- 37. Oliver S, Bangpan M, Stansfield C, Stewart R (2015) Capacity for conducting systematic reviews in low- and middle-income countries: a rapid appraisal. Health Res Policy Syst 13: 23. pmid:25928625
- 38. Bai X, Liu Y (2016) International Collaboration Patterns and Effecting Factors of Emerging Technologies. PLoS One 11: e0167772. pmid:27911926
- 39. Laws A, Kendall R, Hawkins N (2014) A comparison of national guidelines for network meta-analysis. Value Health 17: 642–654. pmid:25128059
- 40. Lee A (2016) Use of network meta-analysis in systematic reviews: a survey of authors. Syst Rev 5.
- 41. Aromataris E, Riitano D (2014) Constructing a search strategy and searching for evidence. A guide to the literature search for a systematic review. Am J Nurs 114: 49–56.
- 42. Moher D, Tetzlaff J, Tricco AC, Sampson M, Altman DG (2007) Epidemiology and reporting characteristics of systematic reviews. PLoS Med 4: e78. pmid:17388659
- 43. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JP, et al. (2009) The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ 339: b2700. pmid:19622552
- 44. Bafeta A, Trinquart L, Seror R, Ravaud P (2013) Analysis of the systematic reviews process in reports of network meta-analyses: methodological systematic review. BMJ 347: f3675. pmid:23818558
- 45. Roberts RJ (2001) PubMed Central: The GenBank of the published literature. Proc Natl Acad Sci U S A 98: 381–382. pmid:11209037
- 46. Golubic R, Rudes M, Kovacic N, Marusic M, Marusic A (2008) Calculating impact factor: how bibliographical classification of journal items affects the impact factor of large and small journals. Sci Eng Ethics 14: 41–49. pmid:18004672
- 47. Weale AR, Bailey M, Lear PA (2004) The level of non-citation of articles within a journal as a measure of quality: a comparison to the impact factor. BMC Med Res Methodol 4: 14. pmid:15169549
- 48. Kanchan T, Krishan K (2016) Journal impact factor—Handle with care. Biomed J 39: 227. pmid:27621128
- 49. Clark GT, Mulligan R (2011) Fifteen common mistakes encountered in clinical research. J Prosthodont Res 55: 1–6. pmid:21095178
- 50. Page MJ, McKenzie JE, Kirkham J, Dwan K, Kramer S, Green S, et al. (2014) Bias due to selective inclusion and reporting of outcomes and analyses in systematic reviews of randomised trials of healthcare interventions. Cochrane Database Syst Rev: MR000035. pmid:25271098
- 51. Page MJ, McKenzie JE, Chau M, Green SE, Forbes A (2015) Methods to select results to include in meta-analyses deserve more consideration in systematic reviews. J Clin Epidemiol 68: 1282–1291. pmid:25841706
- 52. Greco T, Zangrillo A, Biondi-Zoccai G, Landoni G (2013) Meta-analysis: pitfalls and hints. Heart Lung Vessel 5: 219–225. pmid:24364016
- 53. Salanti G, Del Giovane C, Chaimani A, Caldwell DM, Higgins JPT (2014) Evaluating the Quality of Evidence from a Network Meta-Analysis. PLoS One 9.
- 54. Olfson M, Marcus SC (2013) Decline in placebo-controlled trial results suggests new directions for comparative effectiveness research. Health Aff (Millwood) 32: 1116–1125.
- 55. Brophy JM (2015) Improving the evidence base for better comparative effectiveness research. J Comp Eff Res 4: 525–535. pmid:26387479
- 56. Anglemyer A, Horvath HT, Bero L (2014) Healthcare outcomes assessed with observational study designs compared with those assessed in randomized trials. Cochrane Database Syst Rev: MR000034. pmid:24782322
- 57. Chavez-MacGregor M, Giordano SH (2016) Randomized Clinical Trials and Observational Studies: Is There a Battle? J Clin Oncol 34: 772–773. pmid:26786920
- 58. Frakt AB (2015) An observational study goes where randomized clinical trials have not. JAMA 313: 1091–1092. pmid:25781429
- 59. Cameron C, Fireman B, Hutton B, Clifford T, Coyle D, Wells G, et al. (2015) Network meta-analysis incorporating randomized controlled trials and non-randomized comparative cohort studies for assessing the safety and effectiveness of medical treatments: challenges and opportunities. Syst Rev 4: 147. pmid:26537988
- 60. Tudur Smith C, Marcucci M, Nolan SJ, Iorio A, Sudell M, Riley R, et al. (2016) Individual participant data meta-analyses compared with meta-analyses based on aggregate data. Cochrane Database Syst Rev 9: MR000007. pmid:27595791
- 61. Lee PH, Tse AC (2016) The quality of the reported sample size calculations in randomized controlled trials indexed in PubMed. Eur J Intern Med.
- 62. Thorlund K, Mills EJ (2012) Sample size and power considerations in network meta-analysis. Syst Rev 1: 41. pmid:22992327
- 63. Warren FC, Abrams KR, Sutton AJ (2014) Hierarchical network meta-analysis models to address sparsity of events and differing treatment classifications with regard to adverse outcomes. Stat Med 33: 2449–2466. pmid:24623455
- 64. Uhlmann L, Jensen K, Kieser M (2016) Bayesian network meta-analysis for cluster randomized trials with binary outcomes. Res Synth Methods.
- 65. Dias S, Ades AE (2016) Absolute or relative effects? Arm-based synthesis of trial data. Res Synth Methods 7: 23–28. pmid:26461457
- 66. Hong H, Chu H, Zhang J, Carlin BP (2016) Rejoinder to the discussion of "a Bayesian missing data framework for generalized multiple outcome mixed treatment comparisons," by S. Dias and A. E. Ades. Res Synth Methods 7: 29–33. pmid:26461816
- 67. Zhang J, Carlin BP, Neaton JD, Soon GG, Nie L, Kane R, et al. (2014) Network meta-analysis of randomized clinical trials: reporting the proper summaries. Clin Trials 11: 246–262. pmid:24096635
- 68. Sturtz S, Bender R (2012) Unsolved issues of mixed treatment comparison meta-analysis: network size and inconsistency. Res Synth Methods 3: 300–311. pmid:26053423
- 69. Zhang J, Fu H, Carlin BP (2015) Detecting outlying trials in network meta-analysis. Stat Med 34: 2695–2707. pmid:25851533
- 70. Riley RD, Jackson D, Salanti G, Burke DL, Price M, Kirkham J, et al. (2017) Multivariate and network meta-analysis of multiple outcomes and multiple treatments: rationale, concepts, and examples. BMJ 358: j3932. pmid:28903924
- 71. Pop M, Salzberg SL (2015) Use and mis-use of supplementary material in science publications. BMC Bioinformatics 16: 237. pmid:26525146
- 72. Straus S, Moher D (2010) Registering systematic reviews. CMAJ 182: 13–14. pmid:19620270
- 73. Li T, Puhan MA, Vedula SS, Singh S, Dickersin K (2011) Network meta-analysis-highly attractive but more methodological research is needed. BMC Med 9: 79. pmid:21707969
- 74. Hutton B, Salanti G, Caldwell DM, Chaimani A, Schmid CH, Cameron C, et al. (2015) The PRISMA Extension Statement for Reporting of Systematic Reviews Incorporating Network Meta-analyses of Health Care Interventions: Checklist and Explanations. Ann Intern Med 162: 777–784. pmid:26030634