
Reporting of Positive Results in Randomized Controlled Trials of Mindfulness-Based Mental Health Interventions

  • Stephanie Coronado-Montoya,

    Affiliations Lady Davis Institute for Medical Research, Jewish General Hospital, Montréal, Québec, Canada, Department of Psychiatry, McGill University, Montréal, Québec, Canada

  • Alexander W. Levis,

    Affiliation Lady Davis Institute for Medical Research, Jewish General Hospital, Montréal, Québec, Canada

  • Linda Kwakkenbos,

    Affiliations Lady Davis Institute for Medical Research, Jewish General Hospital, Montréal, Québec, Canada, Department of Psychiatry, McGill University, Montréal, Québec, Canada

  • Russell J. Steele,

    Affiliations Lady Davis Institute for Medical Research, Jewish General Hospital, Montréal, Québec, Canada, Department of Mathematics and Statistics, McGill University, Montréal, Québec, Canada

  • Erick H. Turner,

    Affiliations Department of Psychiatry, Oregon Health & Science University, Portland, Oregon, United States of America, Department of Psychiatry, Portland Veterans Affairs Medical Center, Portland, Oregon, United States of America

  • Brett D. Thombs

    brett.thombs@mcgill.ca

    Affiliations Lady Davis Institute for Medical Research, Jewish General Hospital, Montréal, Québec, Canada, Department of Psychiatry, McGill University, Montréal, Québec, Canada, Department of Mathematics and Statistics, McGill University, Montréal, Québec, Canada, Department of Epidemiology, Biostatistics, and Occupational Health, McGill University, Montréal, Québec, Canada, Department of Medicine, McGill University, Montréal, Québec, Canada, Department of Educational and Counselling Psychology, McGill University, Montréal, Québec, Canada, Department of Psychology, McGill University, Montréal, Québec, Canada, School of Nursing, McGill University, Montréal, Québec, Canada

Abstract

Background

A large proportion of mindfulness-based therapy trials report statistically significant results, even in the context of very low statistical power. The objective of the present study was to characterize the reporting of “positive” results in randomized controlled trials of mindfulness-based therapy. We also assessed mindfulness-based therapy trial registrations for indications of possible reporting bias and reviewed recent systematic reviews and meta-analyses to determine whether reporting biases were identified.

Methods

CINAHL, Cochrane CENTRAL, EMBASE, ISI, MEDLINE, PsycInfo, and SCOPUS databases were searched for randomized controlled trials of mindfulness-based therapy. The number of positive trials was described and compared to the number that would be expected if mindfulness-based therapy were as effective as individual therapy for depression. Trial registries were searched for mindfulness-based therapy registrations. CINAHL, Cochrane CENTRAL, EMBASE, ISI, MEDLINE, PsycInfo, and SCOPUS were also searched for mindfulness-based therapy systematic reviews and meta-analyses.

Results

108 (87%) of 124 published trials reported ≥1 positive outcome in the abstract, and 109 (88%) concluded that mindfulness-based therapy was effective, 1.6 times the expected number of positive trials based on an effect size of d = 0.55 (expected number of positive trials = 65.7). Of 21 trial registrations, 13 (62%) remained unpublished 30 months post-trial completion. No trial registrations adequately specified a single primary outcome measure with time of assessment. None of 36 systematic reviews and meta-analyses concluded that effect estimates were overestimated due to reporting biases.

Conclusions

The proportion of mindfulness-based therapy trials with statistically significant results may overstate what would occur in practice.

Introduction

Mindfulness-based therapies (MBT), which include mindfulness-based stress reduction (MBSR) programs and mindfulness-based cognitive therapy (MBCT), have been described as feasibly delivered, low-cost, evidence-based options for managing stress, reducing mental health symptoms, and preventing relapse of depression [1–4]. MBSR is an 8-week group-based program, designed to reduce stress through mindful awareness [5, 6]. The program consists of weekly 2 to 2.5 hour sessions, a whole-day retreat, and independent daily meditation and yoga. MBCT additionally incorporates cognitive therapy into the sessions [5, 7]. MBSR can be led by trained para-professionals or by laypersons [8], whereas MBCT must be led by a licensed health care provider [9].

MBSR and MBCT have been reported to improve mental health outcomes among patients with psychiatric conditions (e.g., depression [1, 10], anxiety [11, 12], posttraumatic stress disorder [13], eating disorders [14], substance use disorders [15]), and other medical conditions (e.g., diabetes [16], hypertension [17], cancer [18], arthritis [19], obesity [20], heart disease [21], stroke [22]). In the United Kingdom, MBCT has been recommended by the National Institute for Health and Care Excellence to prevent depression relapse [23].

A concern, however, is that the overwhelmingly statistically significant results in favor of MBSR and MBCT interventions that can be seen in the published literature, despite very low power in many studies, may be influenced by reporting biases. Reporting biases are said to occur when statistically significant or “positive” outcomes have been preferentially published compared to non-significant or “negative” outcomes [24–26]. Reporting biases include (1) study publication bias, in which positive studies tend to be published, whereas negative studies are not; (2) selective outcome reporting bias, in which outcomes published are chosen based on statistical significance with non-significant outcomes not published; (3) selective analysis reporting bias, in which data are analyzed with multiple methods but are reported only for those that produce positive results; and (4) other biases, such as relegation of non-significant primary outcomes to secondary status when results are published [24–28].

Meta-analyses of MBT have either not assessed reporting biases [29–32] or have attempted to assess the possibility of publication bias and reported that it was not present or not likely to have influenced findings [33–42]. Studies that have attempted to detect publication bias have used statistical or graphical methods, such as visual examination or statistical tests for asymmetry of funnel plots or procedures that aim to identify and correct for funnel plot asymmetry, such as trim and fill [43–46]. These methods assess whether larger effect sizes are associated with smaller trials among published trials, which would suggest that relatively small, non-significant trials may tend to go unpublished. These tests are commonly used, but they are not likely to detect reporting biases, if present, when there are fewer than 10 to 20 included trials and may require very large numbers of trials in some circumstances. They are also not appropriate when most studies have limited sample sizes, or when there is relatively little variance in sample sizes [45, 46], all of which are common in MBT trials.
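To make the methods described above concrete, the following is a minimal R sketch, using the metafor package and simulated data rather than trials from this review, of how funnel-plot inspection, an Egger-type asymmetry regression test, and trim-and-fill are typically run; with only a handful of small trials, as is common in MBT meta-analyses, these procedures have little power to detect reporting bias.

# Minimal sketch with simulated data (not trials from this review)
library(metafor)

set.seed(1)
k  <- 15                                    # hypothetical number of trials
vi <- runif(k, 0.02, 0.30)                  # hypothetical sampling variances
yi <- rnorm(k, mean = 0.3, sd = sqrt(vi))   # hypothetical standardized effects

res <- rma(yi = yi, vi = vi)                # random-effects meta-analysis

funnel(res)      # visual inspection of the funnel plot
regtest(res)     # Egger-type regression test for funnel plot asymmetry
trimfill(res)    # trim-and-fill adjustment for suspected missing studies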

Some studies of MBT [34, 37, 39, 41, 42] have also used the fail-safe N method, which attempts to determine the number of additional trials with zero intervention effect that would be needed to increase the overall P value to above 0.05. This method is generally discouraged, however, due to methodological concerns and because it emphasizes statistical significance or non-significance rather than the magnitude of an estimated intervention effect and associated confidence intervals [47].

The authors of a recent meta-analysis of 47 trials of MBT for psychological stress and well-being [29] concluded that they could not conduct quantitative tests of publication bias due to the relatively small number of trials reporting most of the outcomes they evaluated. Instead, they reviewed trial registries and found 5 trials that were completed at least 3 years before their review, but did not publish all registered outcomes, and 9 completed trials for which an associated publication was not found, suggesting that reporting biases are sometimes present even if not easily detected using standard methods [29].

We have observed anecdotally that there seem to be few examples of published MBT trials without statistically significant results, even though many existing trials appear to have been conducted with very low statistical power. Thus, our objectives were to (1) characterize the degree to which published MBT trials report statistically significant results in favor of MBT interventions; (2) attempt to evaluate the plausibility of the number of statistically significant results; (3) evaluate MBT trial registrations and subsequent publication status to assess the potential influence of study publication bias and selective outcome reporting bias on the number of positive trials; and (4) evaluate systematic reviews and meta-analyses on MBT to determine whether reporting bias has been assessed and, if so, what conclusions have been drawn.

Methods

MBT Trials

Search Strategy and Identification of Eligible RCTs.

The CINAHL, Cochrane CENTRAL, EMBASE, ISI, MEDLINE, PsycInfo, and SCOPUS databases were searched on July 4, 2013. See S1 Appendix for search terms.

Our main objective was to characterize the degree to which published MBT trials have reported statistically significant results, not comparative effectiveness. Thus, RCTs published in any language, including dissertations that appeared in indexed databases, were eligible if they evaluated the effect of MBT versus usual care, placebo, or other inactive controls (e.g., waitlist, sham control) on mental health outcomes in any population. MBT was defined as a group-based intervention in which standard MBT components comprised the core of the intervention [5, 7]. Shortened MBT interventions were included if they provided at least 4 sessions over 4 or more weeks and included core MBT elements (e.g., meditation, yoga). RCTs of interventions that included a substantive component not typically included in MBT and not available to the control group (e.g., exercise, art therapy, weight loss programs) were excluded. Meditation-based interventions not described as mindfulness-based and/or not including key components of MBSR (e.g., yoga) or MBCT (e.g., focus on cognitive distortions) were excluded. Internet-based interventions were also excluded. Because we sought trials of interventions intended to influence mental health, eligible RCTs had to report at least one outcome reflecting mental health status (e.g., symptoms of depression, anxiety) in the abstract.

Search results were downloaded into the citation management database RefWorks (RefWorks, RefWorks-COS, Bethesda, MD, USA) and duplicates were removed using the RefWorks duplication check and manual searching. Two investigators independently reviewed articles for eligibility. If either deemed an article potentially eligible based on title/abstract review, then a full-text review was completed. Disagreements after full-text review were resolved by consensus. Translation was done for non-English articles.

Data Extraction.

Two investigators independently extracted and entered data items from eligible RCTs into a standardized spreadsheet; discrepancies were resolved by consensus. When there was more than one publication on the same RCT, we extracted data for the RCT as a unit, incorporating information from all publications together. Identification of multiple publications from the same RCT was done by cross-referencing authors and co-authors, patient characteristics, and countries. In cases where it was not clear whether publications reported data from the same RCT, we contacted study authors.

Identifying MBT Trials with “Positive” Results.

For most included trials it was not possible to identify a pre-defined, single primary outcome variable, and, in many trials, it was not possible to even identify a single primary outcome, whether or not pre-defined. Most trial reports included multiple outcome variables with no indication of primacy or included multiple “primary” outcome variables with no statistical adjustment. Since we could not identify a single primary outcome variable in most cases, in order to attempt to determine if a trial had been reported as a positive trial, we used a classification method based on a method published by Kyzas et al. [48] as our primary classification method. RCTs were classified as negative if all between-groups mental health outcomes reported in the abstract were statistically non-significant or as positive if at least one was statistically significant. Since this method could over-identify trials as positive, we also classified studies as negative or positive based on published study conclusions. Conclusions of study abstracts were coded as unequivocally supporting the effectiveness of MBT (positive), suggesting that MBT was not effective (negative), or as “mixed or inconclusive.”

Based on our primary classification method, negative RCTs were further coded to indicate whether results were presented with a caveat, defined as a statement made by investigators to mitigate the lack of statistical significance [48, 49]. Caveats included describing non-significant results as representing “trends” for a therapeutic effect; suggestions that other, larger, or different studies would likely find a positive effect; or arguments that MBT was still important to provide for other reasons [48, 49]. For positive RCTs, we coded whether all results reported in the abstract were statistically significant or whether there was at least one non-significant result. If no between-groups results were reported in the abstract, we coded within-group pre-post results from the abstract to determine positive versus negative status.

The effectiveness of mental health therapies in trials may depend on whether they are delivered by highly trained professionals, as in MBCT, versus professionals with less clinical training, as in MBSR; whether the patient sample is a symptomatic clinical sample versus a non-clinical sample; and whether a minimum symptom threshold is required for enrollment [50–54]. Thus, in addition to reporting totals for all MBT trials, we also categorized trials into subgroups of (1) RCTs of MBCT versus other MBT interventions, either MBSR or a similar mindfulness meditation-based program; (2) Clinical versus non-clinical patient sample; and (3) Trials with a mental health symptom threshold requirement for patient eligibility versus no such requirement. Clinical samples were defined as including only patients with a defined mental health (e.g., depression) or medical (e.g., arthritis) condition. Non-clinical samples included general population, employee, or student samples, for instance.

Plausibility of Proportion of RCTs with Positive Results.

We initially intended to evaluate the plausibility of the number of RCTs with positive results using the test for excess significance, which was developed by Ioannidis and Trikalinos [55, 56]. The test is based on the idea that all forms of reporting bias, including publication bias and selective reporting of outcomes or analyses, result in an exaggerated number of statistically significant results in published trial reports. Thus, the test for excess significance [55, 56] evaluates whether the number of observed positive trials exceeds the number expected based on the statistical power of published trials. It does not depend on the strong assumption that sample size is associated with reporting bias, as in graphical and regression-based methods, and it may be more robust than other tests in the context of small numbers of trials and limited variability in trial sample sizes [55, 56]. Pragmatically, the test for excess significance assesses whether the observed number of positive MBT RCTs is significantly larger than the expected number given a particular estimated effect size. The observed number is obtained by summing the number of positive studies, and the expected number is the sum of the power of all included RCTs based on the estimated effect size.
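In symbols, with k published trials and an assumed true standardized effect size d, the quantities being compared can be restated as

O = \sum_{i=1}^{k} \mathbf{1}\{\text{trial } i \text{ is positive}\}, \qquad E(d) = \sum_{i=1}^{k} \bigl(1 - \beta_i(d)\bigr),

where 1 − β_i(d) is the statistical power of trial i to detect d at the chosen significance level; the test asks whether O exceeds E(d) by more than chance alone would be expected to produce.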

We encountered three substantive barriers, however, to applying the test for excess significance in this group of studies, as we had originally intended. First, we were not able to identify a single primary outcome variable in most studies on which to base a study-specific effect size estimate. Second, related to this, we did not believe that we could reasonably estimate an unbiased “true” effect size upon which to base an estimate of statistical power for all MBT trials, which is needed for the test for excess significance. In the context of substantial selective reporting, a meta-analysis-based effect estimate would exaggerate actual effectiveness and underestimate excess statistical significance. Third, given the clinical heterogeneity of the included studies, it was reasonable to assume that there was substantial heterogeneity of effects across studies, and substantive heterogeneity can also lead to a greater number of statistically significant results, beyond reporting biases.

Thus, we did not conduct a statistical test for excess significance. Instead, we presented the overall number of positive trials and the number of positive trials in each subgroup. For comparison purposes, we also presented the number that would have been expected if the true effect size were the same as the effect size reported in a recent meta-analysis of trials of individual psychotherapy for adult depression, d = 0.55 [57]. We used this type of therapy as a reference point because it is another intervention that is intended for mental health symptoms and its therapeutic effects on depression have been well-studied, compared to most other mental health treatment conditions. Furthermore, we believe that d = 0.55 was a conservative estimate for our purpose, in that it is almost certainly an overestimate of the true effect of MBT therapies. This is because individual therapy for depression is administered in an individual format, is provided by a trained mental health practitioner, and is delivered to patients with a defined clinical condition. These characteristics all tend to result in greater effects compared to therapies administered in group formats by non-mental health professionals to treatment recipients who may not have a defined mental health condition or exceed any symptom threshold for eligibility [50–54], as is the case in many MBT intervention trials. Furthermore, this effect estimate is substantially greater than all effect sizes reported in a recent Agency for Healthcare Research and Quality (AHRQ) systematic review, which found that meditation programs, including MBT, reduced symptoms of anxiety, depression, and pain by 0.22 to 0.44 standard deviations [29].

Power for each study that we included was calculated using the pwr package in R [58]. The expected number was calculated with the understanding that any difference between the observed number of positive MBT trials and the expected number based on this effect estimate could have occurred because the effect estimate was not accurate for MBT trials, because of effect heterogeneity, because of differences in study quality between the trials used to generate that effect estimate and the trials in the present study, because of reporting biases in the MBT trials, or because of some combination of these factors. We also calculated the effect size that would have been necessary for the expected number of positive studies to equal the observed number, with the same understanding that any difference could have been due to multiple factors.
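To illustrate the calculations just described, the following is a minimal R sketch using the pwr package; the per-group sample sizes and the observed count of positive trials are hypothetical illustrative values, not data from this review, and equal-sized trial arms are assumed.

# Minimal sketch with hypothetical values (not the trial data from this review)
library(pwr)   # pwr.t.test() for two-sample t-test power

n_per_group       <- c(15, 20, 25, 30, 40, 60)  # hypothetical per-group sample sizes
observed_positive <- 5                          # hypothetical count of positive trials

# Power of each trial to detect a standardized effect d at alpha = 0.05
power_at <- function(d) {
  sapply(n_per_group, function(n)
    pwr.t.test(n = n, d = d, sig.level = 0.05, type = "two.sample")$power)
}

# Expected number of positive trials = sum of per-trial power (as in the text)
expected_at <- function(d) sum(power_at(d))

expected_at(0.55)                           # expected number of positives at d = 0.55
observed_positive / expected_at(0.55)       # observed-to-expected ratio

# Effect size at which the expected number would equal the observed number
d_match <- uniroot(function(d) expected_at(d) - observed_positive,
                   lower = 0.1, upper = 2)$root
d_match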

MBT Trial Registrations

We examined MBT trial registrations to assess the degree to which publication bias and selective outcome reporting may have influenced the number of positive trials we encountered.

Search Strategy.

We searched 3 trial registries: ClinicalTrials.gov, the Standard Randomized Controlled Trial Number Register, and the World Health Organization’s (WHO) International Clinical Trials Registry Platform, which is a central database that provides access to multiple national or region-specific registries (see S1 Appendix for search terms). We included all registrations of RCTs completed as of December 31, 2010 that compared MBT to an inactive comparator and that reported mental health outcomes, consistent with our eligibility criteria for published RCTs of MBT. This completion cutoff allowed at least 30 months between trial completion and our search for published results, consistent with the methods of a recent study on publication patterns following registration [59]. The completion date was defined by ClinicalTrials.gov as “final data collection date for primary outcome measure.” For trial registrations in registries other than ClinicalTrials.gov that did not provide a date for completion of data collection, we contacted investigators directly. Studies with unknown status were considered completed if 2 or more years had lapsed since the last trial registry update and if the expected study completion date was December 31, 2010 or earlier. Trials listed as terminated or withdrawn were considered completed and were included.

Results were downloaded into an Excel database. Duplicate registrations were identified in the WHO registry platform automatically, and any additional duplicates across registries were identified by manual search. Two investigators independently reviewed trial registrations for eligibility with any disagreements resolved by consensus.

Publication Status of Registered MBT Trials.

Two investigators independently reviewed each trial registration for listed publications of trial results in peer-reviewed journals. If none were listed, search results from the electronic database search were reviewed for published RCTs (see above section regarding eligible RCTs) to attempt to identify published results. If none were found, MEDLINE and PsycInfo were additionally searched for results published in peer-reviewed journals using the trial registration number and, if unsuccessful, using the intervention, condition studied, and the name of the listed principal investigator (e.g., Bremner AND mindful*). Each trial registration was classified as having published or not published trial results in a peer-reviewed journal or indexed doctoral dissertation within 30 months of completion. For trials published online ahead of print, the date when the trial was made available electronically was used as the publication date [59].

Risk of Selective Outcome Reporting in Registered MBT Trials.

The risk of selective outcome reporting increases when, prior to data collection, there is no clear declaration of a single primary trial outcome or, in the case of multiple primary trial outcomes, when there is no clear declaration of those outcomes with a plan to adjust for multiple analyses [24, 60–62]. Using a method similar to that described by Mathieu et al. and used subsequently by others [60–62], “adequately registered” MBT trials were defined as trials that registered a single primary outcome (or multiple primary outcomes with appropriate adjustment), specified the primary outcome measure, the time point when it would be assessed, and the metric (e.g., continuous, dichotomous with specified cutoff threshold). As a sensitivity analysis, we reclassified trial registration adequacy without requiring specification of the metric. Registration adequacy was assessed by two investigators independently with any disagreement resolved by consensus.

Assessment of Possible Reporting Biases in Systematic Reviews and Meta-Analyses of MBT

Search Strategy and Identification of Eligible Systematic Reviews and Meta-Analyses.

The CINAHL, Cochrane CENTRAL, EMBASE, ISI, MEDLINE, PsycInfo, and SCOPUS databases were searched on August 26, 2013 for recent systematic reviews and meta-analyses, published since January 1, 2011, on the effectiveness of MBT for improving mental health outcomes (see S1 Appendix for search terms). We restricted the search to this approximately 3-year period to obtain recent systematic reviews and meta-analyses that reflect relatively current practices.

Systematic reviews and meta-analyses published in any language were eligible if they reviewed the effectiveness of MBT on mental health outcomes. The same review procedure was used as for individual published RCTs of MBT, described above.

Data extraction.

Two investigators independently extracted and entered data items into a standardized spreadsheet, with discrepancies resolved by consensus. For each systematic review or meta-analysis, they determined whether authors conducted a statistical test (e.g., asymmetry test, fail-safe N, regression analysis) or a visual inspection of funnel plots to assess possible reporting bias. They also reviewed the abstract and discussion section of each systematic review or meta-analysis to determine whether authors mentioned the possibility that reporting biases could have influenced results.

Results

Statistical Significance in Results of Published MBT RCTs

Search results.

The electronic database search yielded 1,183 unique publications for review, of which 830 were excluded after review of titles and abstracts and 193 after full-text review. Of the remaining publications, 36 were additional reports of RCTs already included, leaving 124 unique MBT RCTs (see Fig 1).

Fig 1. PRISMA Flow Diagram of Selection of Published Randomized Controlled Trials of Mindfulness-based Therapy.

PRISMA flow diagram of selection of published randomized controlled trials of mindfulness-based therapies on mental health outcomes, including reasons for and number of excluded trials.

https://doi.org/10.1371/journal.pone.0153220.g001

Characteristics of Published MBT RCTs.

Of the 124 included RCTs, 62 (50%) were from North America, 42 (34%) from Europe, and 20 (16%) from other regions. There were 4 RCTs (3%) published before 2000, 40 (32%) between 2000 and 2009, and 80 (65%) in 2010 or later. The total number of patients analyzed in the combined intervention and control groups ranged from 10 to 357 participants per RCT. There were 58 trials (47%) with 10–49 total participants analyzed, 40 (32%) with 50–99, and 26 (21%) with 100–357. The mean number of patients analyzed in each trial was 70.3, and the median was 54.5. There were 13 trials that were available only as dissertations retrieved via electronic databases and not published in a peer-reviewed journal.

According to the three key subgroup classifications, there were 28 RCTs (23%) of MBCT versus 96 (77%) of other MBTs; 41 RCTs (33%) that required a minimum symptom threshold for trial eligibility versus 83 (67%) that did not; and 83 RCTs (67%) with clinical samples versus 41 (33%) with non-clinical samples. Of the 83 RCTs with clinical populations, there were 36 with psychiatric patients, 12 with chronic pain patients, 11 with cancer patients, 7 with obese or diabetic patients, and 17 with patients with other conditions. See S2 Appendix for characteristics of all included RCTs, including subgroup classifications.

Of the 124 RCTs, 26 (21%) had a registration record, including 21 (17%) that were registered prior to data collection. Of these, 12 were listed as completed by December 31, 2010 and included in our analysis of trial registrations and publication status (see Evaluation of MBT Trial Registrations below); 1 was registered in the Centre for Clinical Trials registry (http://www.cct.cuhk.edu.hk/cctwebsite/default.aspx), which was not one of the registries that we searched, so it was not included in our trial registry analysis; and 8 were registered prospectively, but the completion date was after 2010, which was an exclusion criterion for the trial registry analysis.

Positive Results in RCTs.

Of the 124 included RCTs, 108 (87%) were classified as positive and 16 (13%) as negative based on reporting at least one positive outcome in the abstract. When classifications were instead based on study conclusions, there were 109 (88%) clearly positive studies that concluded that MBT was effective, 11 (9%) with mixed conclusions, and 4 (3%) negative studies that concluded that MBT had not been effective. There was a 91% rate of agreement between the two methods (113 of 124 trials). Of the 13 RCTs published only as dissertations, 8 (62%) were classified as positive based on both methods.

Of the 108 positive RCTs based on our primary classification method, 94 reported at least one significant between-groups mental health outcome, and 14 did not report any between-groups mental health outcomes, but reported at least one significant within-group mental health outcome. In the abstracts of the 16 negative RCTs, 11 reported positive pre-post changes for the MBT group in addition to negative between-groups results. Additionally, 5 of the 16 negative RCTs included a caveat. Only 3 reported negative between-groups results without highlighting significant within-group findings or providing a caveat.

For an assumed effect size of d = 0.55, the expected number of positive RCTs was 65.7. As shown in Table 1, the overall ratio of observed-to-expected positive studies was 1.6. Within the subset of 15 studies with power <0.25, the observed-to-expected ratio was 3.6; among 45 studies with power between 0.25 and 0.50, the observed-to-expected ratio was 2.5. As shown in Table 2, observed-to-expected ratios were similar and consistent across subgroups. See S3 Appendix for the power values used in calculations for individual studies.

Table 1. Summary of Observed and Expected number of Positive Studies with Power Calculation Based on Effect Size d = 0.55a.

https://doi.org/10.1371/journal.pone.0153220.t001

Table 2. Summary of Observed and Expected number of Positive Studies for Key Subgroups with Power Calculation Based on Effect Size d = 0.55a.

https://doi.org/10.1371/journal.pone.0153220.t002

To obtain an expected number of positive studies of 108, which was the number of observed positive studies, the true effect size would have needed to be d = 1.03.

Evaluation of MBT Trial Registrations

Search results.

The trial registry search yielded 313 unique registrations, of which 292 were excluded, leaving 21 eligible trial registrations of MBT RCTs (see Fig 2).

Fig 2. PRISMA Flow Diagram of Selection of Trial Registrations of Completed Randomized Controlled Trials of Mindfulness-based Therapy.

PRISMA flow diagram of trial registrations of completed randomized controlled trials of mindfulness-based therapy on mental health outcomes, including reasons for and number of excluded trial registrations.

https://doi.org/10.1371/journal.pone.0153220.g002

Characteristics of Trial Registrations of MBT RCTs.

Of the 21 registered trials, 8 (38%) were published within 30 months of trial completion. All 8 reported positive outcomes in the published abstract and were classified as positive studies based on their conclusions.

None of the 21 registered trials adequately specified a single primary outcome, including the outcome measure, the assessment time, and the metric. When metric specification was not required, there were 2 (10%) adequate trial registrations and 19 trials (90%) not adequately registered. These 19 registrations were classified as inadequate because multiple outcomes were listed without specifying a primary outcome or plan to adjust statistically for multiple outcomes (n = 16), because a specific measure was not listed for the primary outcome (n = 2), or because a time point was not specified for the primary outcome (n = 1). See S4 Appendix for details.

Assessment of Possible Reporting Biases in Systematic Reviews and Meta-Analyses of MBT

Search results.

The search for systematic reviews and meta-analyses yielded 93 unique articles for review, of which 29 were excluded after review of titles and abstracts and 28 after full-text review, leaving 36 systematic reviews and meta-analyses eligible for evaluation (see Fig 3).

Fig 3. PRISMA Flow Diagram of Meta-Analysis and Systematic Review Selection Process for Study.

PRISMA flow diagram of recent meta-analyses and systematic reviews of mindfulness-based therapy on mental health outcomes, including reasons for and number of excluded reviews.

https://doi.org/10.1371/journal.pone.0153220.g003

Characteristics of Systematic Reviews and Meta-Analyses of MBT.

As shown in S5 Appendix, only 2 of the 36 systematic reviews and meta-analyses included >20 RCTs. Of the 36, 14 (39%) conducted a statistical or visual test to assess possible reporting bias. Of these, 9 reviews concluded that there was no apparent bias, 4 were inconclusive or stated that publication bias was possible, and 1 concluded that publication bias was present, but in the opposite direction (small effect sizes were more likely to be published). None mentioned possible reporting bias in the review abstract.

Discussion

The main finding of this study was that of the 124 MBT RCTs reviewed, almost 90% were presented as positive studies when published. Furthermore, only 3 trials were presented unequivocally as negative trials, without alternative interpretations or caveats to mitigate the negative results and suggest that the treatment might still be effective.

For a point of reference, we compared the number of positive trials that we found to the number that would have been generated by a group of heterogeneous studies with a true effect size of d = 0.55, which is the effect size obtained from a recent meta-analysis of individual therapy for depression [57]. This effect likely overstates the actual effect size of MBT since MBT is often administered in groups, by people who do not necessarily have professional mental health training, to treatment recipients without defined diagnoses or minimum symptom levels, all of which likely reduce effect sizes [50–54]. Furthermore, this effect estimate may be exaggerated even for depression treatments. A recent study of US National Institutes of Health grants for psychological treatments for patients with depressive disorders found that the effect size of g = 0.52 among published studies was reduced to g = 0.39 when data from unpublished studies were integrated [63]. Additionally, it is of note that d = 0.55 substantially exceeds effect estimates published in a recent AHRQ meta-analysis of meditative therapies [29]. Based on this reference point, there were 1.6 times as many positive MBT RCTs among the 124 RCTs we reviewed as would be expected if the true effect size were d = 0.55 in a relatively homogeneous group of trials. For trials with low power, this ratio was substantially higher. When we examined subgroups of only studies of MBCT (versus other MBTs), only studies with clinical populations (versus general population, employees, or students), and only studies that required mental health symptoms for enrollment, results were consistent.

Although there is reason to believe that the effect estimate we used as a reference point may have been too large and, thus, overestimated the expected number of positive studies, we cannot rule out several different explanations for why we found so many positive trials. One explanation is simply that we cannot be sure that the effect size we used as a reference point was an accurate estimate or that it overstated likely effectiveness, as we believe. Second, heterogeneity in study effects could have contributed to the high number of positive studies. Finally, reporting biases may have played an important role. This idea is supported by the fact that the tendency to generate more positive studies than would be expected was concentrated in smaller studies, although it is also possible that lower quality in smaller studies could have played a role.

Our review of trial registration records also suggests that reporting biases may have been an important factor. Of the 124 RCTs reviewed, only 21 (17%) were registered prior to data collection, even though 80 of the eligible RCTs were published recently (since 2010). When we examined trial registries, we identified 21 registrations of MBT trials listed as completed by 2010 and found that 13 (62%) remained unpublished 30 months after completion; of the published trials, all conveyed a positive conclusion. None of the 21 registrations, however, adequately specified a single primary outcome (or multiple primary outcomes with an appropriate plan for statistical adjustment) along with the outcome measure, the time of assessment, and the metric (e.g., continuous, dichotomous). When we removed the metric requirement, only 2 (10%) registrations were classified as adequate. We evaluated more than 30 published systematic reviews and meta-analyses of MBTs, and none concluded that reporting biases likely exaggerated estimates of effect (see S6 Appendix). The authors of one recent meta-analysis published after our search, on the other hand, raised concerns about possible publication bias and other reporting biases based on trial registry records [29].

Ross et al. [59] recently reviewed publication patterns of all clinical trials funded by the United States National Institutes of Health, including pharmacological and non-pharmacological trials, and found that 46% of trials were published in a peer-reviewed journal within 30 months of the completion date documented in the trial registration. Overall, trials were followed for a median of 51 months post-completion, and 68% of trials were published in a peer-reviewed journal by the end of the study. The rate of publication of registered MBT trials within 30 months of completion in the present study was somewhat lower (38%). It is possible, as reported by Ross et al., that additional trials will eventually be published, but the low publication rate suggests that publication bias likely contributed to the excess of positive results found in the present study.

Thus, selective outcome reporting, as well as “data dredging” [56] and selective reporting of analyses, may play important roles in the proportion of positive studies that we found among MBT RCTs in the present study. If one assumes that there is some effect of MBT on mental health outcomes, albeit a smaller effect than reported in published studies, the ability to selectively publish from multiple outcome options or multiple analyses could easily lead to exaggerated effect estimates and a rate of positive trial reports that exceeds plausibility, as we found in our study. Indeed, others have suggested that exaggerated effect sizes are problematic in trials that work with “soft” outcomes, as is typically the case in psychological or behavioral research [64–66], and that selective reporting of only some outcomes and analytical flexibility may be even larger problems than classic publication bias in psychological studies compared to “harder” sciences [64, 65].

Mathieu et al. [60] investigated trial registrations and outcomes from trials published in high-impact general and specialty medicine journals in 2008. Of the trials they reviewed, 186 had been registered a priori, of which 147 (79%) were adequately registered with a clear description of a single primary outcome measure. On the other hand, a study of RCTs published in four top behavioral health journals between 2008 and 2009 found that only 1 of 63 RCTs was adequately registered, and within that one RCT, registered and published outcomes were discrepant [24]. Two recent studies of published trials in psychology and behavioral health journals reported similar results [61, 62]. The results of the present study suggest that inadequate trial registration and the lack of pre-specified primary trial outcomes may continue to plague research on non-pharmacological mental health interventions. The burden of registering a trial does not add substantively to the overall burden of designing, funding, conducting, and reporting a trial, and there are no real barriers to doing so. Thus, greater focus on training of trialists is needed, as well as increased rigor from journal reviewers and editors, to ensure that only trials registered with enough information to compare pre-specified outcomes to reported outcomes are published.

The very small number of trials that clearly declared negative results in the present study without caveats or “spin” also reminds us that when negative results are reported, they are often “spun” so that they appear to be equivocal or even positive findings [67]. One might reasonably expect that in the text of articles authors may attempt to justify their trials with caveats and discuss why statistically significant results were not found. However, the failure to provide a clear statement of non-significance in the abstract may serve to distort understandings of results, since many readers base their assessment of trial results on what is reported in the abstract.

In the present study, we found that most existing evidence syntheses either did not evaluate reporting biases or concluded that they were not present. The majority of these systematic reviews, which focused on a wide range of applications of MBT, included very small numbers of RCTs, which did not permit a statistical assessment of reporting biases. However, that would not have precluded approaches such as reviewing trial registries, in order to better understand the likelihood that completed trials of MBT may go unreported or that outcomes in published trials may be selectively reported. A meta-analysis, which was published subsequent to our search period and not included in our analysis, for instance, did not assess publication or other forms of reporting bias with statistical methods, but did identify patterns of non-publication and likely selective outcome reporting by reviewing MBT trial registrations [29]. This is an approach that can be utilized, whether or not statistical approaches are feasible.

There are a number of limitations that should be considered in interpreting the results of the present study. First, we were not able to conduct a statistical test to determine whether there was excess significance bias. The results that we presented in comparison to a reference-point effect size cannot determine the relative contributions of an inaccurate reference effect estimate, of heterogeneity across studies, or of reporting biases to the relatively high number of positive results. Generally, risk of reporting bias will be higher in fields with small, underpowered trials [68]; when there is a strong incentive for reporting positive results, which is often the case when professionals who practice a given psychological treatment also test it [69, 70]; and when there is prior documentation of bias in the field [68], as is the case with psychological treatments dealing with “soft” outcomes [64–66]. In the present study, only 30 of 124 trials (24%) had power of 75% or more when an effect size of d = 0.55 was assumed. Furthermore, there was evidence from trial registries that reporting biases may be problematic. Among registered, completed MBT trials that we reviewed, the majority were not published within 30 months of completion. Although it is not necessarily the case that unpublished studies were negative studies, it has long been established that negative studies are more likely to remain unpublished than positive studies [26]. Additionally, virtually no MBT registrations defined outcome variables with sufficient precision to allow comparison with subsequently published trial results, a practice that would reduce the likelihood of selective outcome reporting.

In summary, MBT appears to be a low-cost and easily implemented treatment that may be useful for providing effective mental health care to the large number of patients who are currently under-served [71]. However, the proportion of positive trials that are reported, despite small sample sizes and low statistical power, is concerning. Although we could not determine with certainty the degree to which reporting biases played a role in this, there was evidence that they may be a driving force. Investigators who conduct trials of MBT and other non-pharmaceutical interventions to improve mental health should register their trials with enough information so that readers can verify whether published outcomes match the pre-specified outcomes. In addition, journal editors and reviewers should routinely compare a priori defined and published outcomes as part of the review process. Inadequate registration should be considered a major limitation that introduces a serious risk of bias. Additionally, we encourage increased attention by researchers who conduct trials of MBT to factors in trial design that reduce bias, as well as efforts to conduct trials with adequate sample sizes. Ideally, a smaller number of large, adequately powered trials that provide robust evidence will be conducted going forward, rather than a large number of small underpowered trials that tend to fragment the literature, as often occurs presently.

Supporting Information

S2 Appendix. Characteristics of Mindfulness-Based Therapy Studies in Analysis.

https://doi.org/10.1371/journal.pone.0153220.s002

(DOCX)

S3 Appendix. Results from Mindfulness-Based Therapy Studies Included in Analysis.

https://doi.org/10.1371/journal.pone.0153220.s003

(DOCX)

S4 Appendix. Characteristics of Mindfulness-Based Therapy Trial Registrations Included in Analysis.

https://doi.org/10.1371/journal.pone.0153220.s004

(DOCX)

S5 Appendix. Characteristics of Mindfulness-Based Therapy Systematic Reviews and Meta-Analyses Included in Analysis.

https://doi.org/10.1371/journal.pone.0153220.s005

(DOCX)

S6 Appendix. Assessment of Possible Reporting Biases in Mindfulness-Based Therapy Systematic Reviews and Meta-Analyses Included in Analysis.

https://doi.org/10.1371/journal.pone.0153220.s006

(DOCX)

Acknowledgments

The authors would like to thank Shervin Assassi, MD, MSc of the University of Texas Health Science Center, Houston, Texas, USA and Shadi Gholizadeh, MSc of the San Diego State University/University of California, San Diego Joint Doctoral Program in Clinical Psychology, San Diego, California, USA for assistance with translation. They were not compensated for their contribution.

Author Contributions

Conceived and designed the experiments: SC-M AWL LK RJS EHT BDT. Performed the experiments: SC-M AWL LK BDT. Analyzed the data: SC-M AWL BDT. Wrote the paper: SC-M BDT. Contributed critical revision of manuscript and approved submission: SC-M AWL LK RJS EHT BDT.

References

  1. 1. Teasdale JD, Segal ZV, Williams JM, Ridgeway VA, Soulsby JM, Lau MA. Prevention of relapse/recurrence in major depression by mindfulness-based cognitive therapy. J Consult Clin Psychol. 2000;68: 615–623. pmid:10965637
  2. 2. Kabat-Zinn J. An outpatient program in behavioral medicine for chronic pain patients based on the practice of mindfulness meditation: theoretical considerations and preliminary results. Gen Hosp Psychiatry. 1982;4: 33–47. pmid:7042457
  3. 3. Hofmann SG, Sawyer AT, Witt AA, Oh D. The effect of mindfulness-based therapy on anxiety and depression: a meta-analytic review. J Consult Clin Psychol. 2010;78: 169–183. pmid:20350028
  4. 4. Rodgers M, Asaria M, Walker S, McMillan D, Lucock M, Harden M, et al. The clinical effectiveness and cost-effectiveness of low-intensity psychological interventions for the secondary prevention of relapse after depression: a systematic review. Health Technol Assess. 2012;16: 1–130.
  5. 5. Fjorback LO, Arendt M, Ornbol E, Fink P, Walach H. Mindfulness-based stress reduction and mindfulness-based cognitive therapy—a systematic review of randomized controlled trials. Acta Psychiatr Scand. 2011;124: 102–119. pmid:21534932
  6. 6. Kabat-Zinn J. Full catastrophe living: using the wisdom of the body and the mind to face stress, pain and illness. New York: Dell; 1990.
  7. 7. Williams JMG, Teasdale JD, Segal ZV, Kabat-Zinn J. The mindful way through depression: freeing yourself from chronic unhappiness. New York: Guilford Publications; 2007.
  8. 8. Center for Mindfulness in Medicine, Health Care, and Society [Internet] Massachusetts: The Center for Mindfulness; c2014 [cited 2014 October 22]. Stress reduction program. Available: http://umassmed.edu/cfm/stress/index.aspx.
  9. 9. Centre for Mindfulness Studies [Internet] Toronto: The Centre for Mindfulness Studies; c2015 [updated 2011 March 24; cited 2014 October 22] MBCT clinical practicum. Available: http://www.mindfulnessstudies.com/.
  10. 10. Kaviani H, Hatami N, Javaheri F. The impact of mindfulness-based cognitive therapy (MBCT) on mental health and quality of life in a sub-clinically depressed population. Arch Psychiatry Psychother. 2012;14: 21–28.
  11. 11. Kim YW, Lee SH, Choi TK, Suh SY, Kim B, Kim CM, et al. Effectiveness of mindfulness-based cognitive therapy as an adjuvant to pharmacotherapy in patients with panic disorder or generalized anxiety disorder. Depress Anxiety. 2009;26: 601–606. pmid:19242985
  12. 12. Jazaieri H, Goldin PR, Werner K, Ziv M, Gross JJ. A randomized trial of MBSR versus aerobic exercise for social anxiety disorder. J Clin Psychol. 2012;68: 715–731. pmid:22623316
  13. 13. King AP, Erickson TM, Giardino ND, Favorite T, Rauch SA, Robinson E, et al. A pilot study of group mindfulness-based cognitive therapy (MBCT) for combat veterans with posttraumatic stress disorder (PTSD). Depress Anxiety. 2013;30: 638–645. pmid:23596092
  14. 14. Masuda A, Hill ML. Mindfulness as therapy for disordered eating: a systematic review. Neuropsychiatry. 2013;3: 433–447.
  15. 15. Brewer JA, Sinha R, Chen JA, Michalsen RN, Babuscio TA, Nich C, et al. Mindfulness training and stress reactivity in substance abuse: results from a randomized, controlled stage I pilot study. Subst Abus. 2009;30: 306–317. pmid:19904666
  16. 16. van Son J, Nyklicek I, Pop VJ, Blonk MC, Erdtsieck RJ, Spooren PF, et al. The effects of a mindfulness-based intervention on emotional distress, quality of life, and HbA1c in outpatients with diabetes (DiaMind): a randomized controlled trial. Diabetes Care. 2013;36: 823–830. pmid:23193218
  17. 17. Hughes JW, Fresco DM, Myerscough R, van Dulmen MH, Carlson LE, Josephson R. Randomized controlled trial of mindfulness-based stress reduction for prehypertension. Psychosom Med. 2013;75: 721–728. pmid:24127622
  18. 18. Hoffman CJ, Ersser SJ, Hopkinson JB, Nicholls PG, Harrington JE, Thomas PW. Effectiveness of mindfulness-based stress reduction in mood, breast- and endocrine-related quality of life, and well-being in stage 0 to III breast cancer: a randomized, controlled trial. J Clin Oncol. 2012;30: 1335–1342. pmid:22430268
  19. 19. Pradhan EK, Baumgarten M, Langenberg P, Handwerger B, Gilpin AK, Magyari T, et al. Effect of mindfulness-based stress reduction in rheumatoid arthritis patients. Arthritis Rheum. 2007;57: 1134–1142. pmid:17907231
  20. 20. Lillis J, Hayes SC, Bunting K, Masuda A. Teaching acceptance and mindfulness to improve the lives of the obese: a preliminary test of a theoretical model. Ann Behav Med. 2009;37: 58–69. pmid:19252962
  21. 21. Parswani MJ, Sharma MP, Iyengar S. Mindfulness-based stress reduction program in coronary heart disease: a randomized control trial. Int J Yoga. 2013;6: 111–117. pmid:23930029
  22. 22. Johansson B, Bjuhr H, Ronnback L. Mindfulness-based stress reduction (MBSR) improves long-term mental fatigue after stroke or traumatic brain injury. Brain Inj. 2012;26: 1621–1628. pmid:22794665
  23. 23. National Collaborating Centre for Mental Health. Common mental health disorders: identification and pathways to care. London (UK): National Institute for Health and Clinical Excellence (NICE): 2011.
  24. 24. Milette K, Roseman M, Thombs BD. Transparency of outcome reporting and trial registration of randomized controlled trials in top psychosomatic and behavioral health journals: a systematic review. J Psychosom Res. 2011;70: 205–217. pmid:21334491
  25. 25. Ioannidis JP. Excess significance bias in the literature on brain volume abnormalities. Arch Gen Psychiatry. 2011;68: 773–780. pmid:21464342
  26. 26. Dwan K, Altman DG, Arnaiz JA, Bloom J, Chan AW, Cronin E, et al. Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLOS ONE. 2008;3: e3081. pmid:18769481
  27. 27. Turner EH, Matthews AM, Linardatos E, Tell RA, Rosenthal R. Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med. 2008;358: 252–260. pmid:18199864
  28. 28. Vedula SS, Bero L, Scherer RW, Dickersin K. Outcome reporting in industry-sponsored trials of gabapentin for off-label use. N Engl J Med. 2009;361: 1963–1971. pmid:19907043
  29. 29. Goyal M, Singh S, Sibinga EMS, Gould NF, Rowland-Seymour A, Sharma R, et al. Meditation programs for psychological stress and well-being: a systematic review and meta-analysis. JAMA Intern Med. 2014;174: 357–368. pmid:24395196
  30. 30. Bawa FL, Mercer SW, Atherton RJ, Clague F, Keen A, Scott NW, et al. Does mindfulness improve outcomes in patients with chronic pain? Systematic review and meta-analysis. Br J Gen Pract. 2015;65: e387–400. pmid:26009534
  31. 31. Abbott RA, Whear R, Rodgers LR, Bethel A, Thompson Coon J, Kuyken W, et al. Effectiveness of mindfulness-based stress reduction and mindfulness-based cognitive therapy in vascular disease: A systematic review and meta-analysis of randomised controlled trials. J Psychosom Res. 2014;76: 341–351. pmid:24745774
  32. 32. Bohlmeijer E, Prenger R, Taal E, Cuijpers P. The effects of mindfulness-based stress reduction therapy on mental health of adults with a chronic medical disease: A meta-analysis. J Psychosom Res. 2010;68: 539–544. pmid:20488270
  33. 33. Cramer H, Lauch R, Paul A, Dobos G. Mindfulness-based stress reduction for breast cancer—a systematic review and meta-analysis. Curr Oncol. 2012;19: e342–352.
  34. 34. Piet J, Würtzen H, Zachariae R. The effect of mindfulness-based therapy on symptoms of anxiety and depression in adult cancer patients and survivors: A systematic review and meta-analysis. J Consult Clin Psychol. 2012;80: 1007–1020. pmid:22563637
  35. 35. Chen KW, Berger CC, Manheimer E, Forde D, Magidson J Dachman L, et al. Meditative therapies for reducing anxiety: A systematic review and meta-analysis of randomized controlled trials. Depress Anxiety. 2012;29: 545–562. pmid:22700446
  36. 36. Chiesa A, Serretti A. Mindfulness based cognitive therapy for psychiatric disorders: A systematic review and meta-analysis. Psychiatry Res. 2011;187: 441–453. pmid:20846726
  37. 37. de Vibe M, Bjørndal A, Tipton E, Hammerstrøm KT, Kowalski K. Mindfulness based stress reduction (MBSR) for improving health, quality of life and social functioning in adults. Campbell Systematic Reviews. 2012;3:
  38. 38. Galante J, Iribarren SJ, Pearce PF. Effects of mindfulness-based cognitive therapy on mental disorders: A systematic review and meta-analysis of randomised controlled trials. J Res Nurs. 2012;18: 133–155.
  39. 39. Klainin-Yobas P, Cho MAA, Creedy D. Efficacy of mindfulness-based interventions on depressive symptoms among people with mental disorders: A meta-analysis. Int J Nurs Stud. 2012;49: 109–121. pmid:21963234
  40. 40. Lakhan SE, Schofield KL. Mindfulness-based therapies in the treatment of somatization disorders: A systematic review and meta-analysis. PLOS ONE. 2013;8: e71834. pmid:23990997
  41. 41. Piet J, Hougaard E. The effect of mindfulness-based cognitive therapy for prevention of relapse in recurrent major depressive disorder: A systematic review and meta-analysis. Clin Psychol Rev. 2011;31: 1032–1040. pmid:21802618
  42. 42. Strauss C, Cavanagh K, Oliver A, Pettman D. Mindfulness-based interventions for people diagnosed with a current episode of an anxiety or depressive disorder: A meta-analysis of Randomised controlled trials. PLOS ONE. 2014;9: e96110. pmid:24763812
  43. 43. Rothstein HR, Sutton AJ, Borenstein M. Publication bias in meta-analysis: prevention, assessment, and adjustments. Chichester: Wiley; 2005.
  44. 44. Mavridis D, Salanti G. Exploring and accounting for publication bias in mental health: a brief overview of methods. Evid Based Ment Health. 2014;17: 11–15. pmid:24477532
  45. 45. Lau J, Ioannidis JP, Terrin N, Schmid CH, Olkin I. The case of the misleading funnel plot. BMJ. 2006;333: 597–600. pmid:16974018
  46. 46. Ioannidis JPA, Trikalinos TA. The appropriateness of asymmetry tests for publication bias in meta-analyses: a large survey. CMAJ. 2007;176: 1091–1096. pmid:17420491
  47. 47. Sterne JAC, Egger M, Moher D (editors). Chapter 10: Addressing reporting biases. In: Higgins JPT, Green S (editors). Cochrane Handbook for Systematic Reviews of Intervention. Version 5.1.0 (updated March 2011). The Cochrane Collaboration, 2011. Available: www.cochrane-handbook.org.
  48. 48. Kyzas PA, Denaxa-Kyza D, Ioannidis JP. Almost all articles on cancer prognostic markers report statistically significant results. Eur J Cancer. 2007;43: 2559–2579. pmid:17981458
  49. 49. Tzoulaki I, Siontis KC, Evangelou E, Ioannidis JP. Bias in associations of emerging biomarkers with cardiovascular disease. JAMA Intern Med. 2013;173: 664–671. pmid:23529078
  50. 50. Cuijpers P, van Straten A, Bohlmeijer E, Hollon SD, Andersson G. The effects of psychotherapy for adult depression are overestimated: a meta-analysis of study quality and effect size. Psychol Med. 2010;40: 211–223. pmid:19490745
  51. 51. Cuijpers P, van Straten A, Warmerdam L, Smits N. Characteristics of effective psychological treatments of depression: a metaregression analysis. Psychother Res. 2008;18: 225–236. pmid:18815968
  52. 52. Driessen E, Cuijpers P, Hollon SD, Dekker JJ. Does pretreatment severity moderate the efficacy of psychological treatment of adult outpatient depression? A meta-analysis. J Consult Clin Psychol. 2010;78: 668–680. pmid:20873902
  53. 53. Bower P, Kontopantelis E, Sutton A, Kendrick T, Richards DA, Gilbody S, et al. Influence of initial severity of depression on effectiveness of low intensity interventions: meta-analysis of individual patient data. BMJ. 2013;346: f540. pmid:23444423
  54. 54. Schneider S, Moyer A, Knapp-Oliver S, Sohl S, Cannella D, Targhetta V, et al. Pre-intervention distress moderates the efficacy of psychosocial treatment for cancer patients: a meta-analysis. J Behav Med. 2010;33: 1–14. pmid:19784868
  55. 55. Ioannidis JP, Trikalinos TA. An exploratory test for an excess of significant findings. Clin Trials. 2007;4: 245–253. pmid:17715249
  56. 56. Ioannidis JPA. Clarifications on the application and interpretation of the test for excess significance and its extensions. J Math Psychol. 2013;57: 184–187.
  57. 57. Flint J, Cuijpers P, Horder J, Koole SL, Munafò MR. Is there an excess of significant findings in published studies of psychotherapy for depression? Psychol Med. 2014 Jul; 1–8.
  58. 58. R Development Core Team. R: A Language and Environment for Statistical Computing. Vienna, Austria: the R Foundation for Statistical Computing. Available: http://www.R-project.org/.
  59. 59. Ross JS, Tse T, Zarin DA, Xu H, Zhou L, Krumholz HM, et al. Publication of NIH funded trials registered in ClinicalTrials.gov: cross sectional analysis. BMJ. 2012;344: d7292. pmid:22214755
  60. 60. Mathieu S, Boutron I, Moher D, Altman DG, Ravaud P. Comparison of registered and published primary outcomes in randomized controlled trials. JAMA. 2009;302: 977–984. pmid:19724045
  61. 61. Riehm KE, Azar M, Thombs BD. Transparency of outcome reporting and trial registration of randomized controlled trials in top psychosomatic and behavioral health journals: A 5-year follow-up. J Psychosom Res. 2015;79: 1–12. pmid:25956011
  62. 62. Azar M, Riehm KE, McKay D, Thombs BD. Transparency of outcome reporting and trial registration of randomized controlled trials published in the Journal of Consulting and Clinical Psychology. PLOS ONE. 2015;10: e0142894. pmid:26581079
  63. 63. Driessen E, Hollon SD, Bockting CLH, Cuijpers P, Turner EH. Does publication bias inflate the apparent efficacy of psychological treatment for depressive disorder? A systematic review and meta-analysis of US National Institutes of Health-funded trials. PLOS ONE. 2015;10: e0137864. pmid:26422604
  64. 64. Fanelli D, Ioannidis JP. US studies may overestimate effect sizes in softer research. Proc Natl Acad Sci USA. 2013;110: 15031–15036. pmid:23980165
  65. 65. Fanelli D. "Positive" results increase down the hierarchy of the sciences. PLOS ONE. 2010;5: e10068. pmid:20383332
  66. 66. Ferguson CJ, Brannick MT. Publication bias in psychological science: prevalence, methods for identifying and controlling, and implications for the use of meta-analyses. Psychol Methods. 2012;17: 120–128. pmid:21787082
  67. 67. Boutron I, Dutton S, Ravaud P, Altman DG. Reporting and interpretation of randomized controlled trials with statistically nonsignificant results for primary outcomes. JAMA. 2010;303: 2058–2064. pmid:20501928
  68. 68. Ioannidis JP. Interpretation of tests of heterogeneity and bias in meta-analysis. J Eval Clin Pract. 2008;14: 951–957. pmid:19018930
  69. 69. Leykin Y, DeRubeis RJ. Allegiance in psychotherapy outcome research: separating association from bias. Clin Psychol Sci Prac. 2009;16: 54–65.
  70. 70. Easterbrook PJ, Gopalan R, Berlin JA, Matthews DR. Publication bias in clinical research. Lancet. 1991;337: 867–872. pmid:1672966
  71. 71. Whiteford HA, Degenhardt L, Rehm J, Baxter AJ, Ferrari AJ, Erskine HE, et al. Global burden of disease attributable to mental and substance use disorders: findings from the Global Burden of Disease Study 2010. Lancet. 2013;382: 1575–1586. pmid:23993280