
Meta-analyses of positive psychology interventions: The effects are much smaller than previously reported

  • Carmela A. White ,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Validation, Visualization, Writing – original draft, Writing – review & editing

    whitecarmela@gmail.com

    Affiliation University of British Columbia, Kelowna, Canada

  • Bob Uttl,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

    Affiliation Mount Royal University, Calgary, Canada

  • Mark D. Holder

    Roles Conceptualization, Supervision, Writing – original draft, Writing – review & editing

    Affiliation University of British Columbia, Kelowna, Canada


Abstract

For at least four decades, researchers have studied the effectiveness of interventions designed to increase well-being. These interventions have become known as positive psychology interventions (PPIs). Two highly cited meta-analyses examined the effectiveness of PPIs on well-being and depression: Sin and Lyubomirsky (2009) and Bolier et al. (2013). Sin and Lyubomirsky reported larger effects of PPIs on well-being (r = .29) and depression (r = .31) than Bolier et al. reported for subjective well-being (r = .17), psychological well-being (r = .10), and depression (r = .11). A detailed examination of the two meta-analyses reveals that the authors employed different approaches, used different inclusion and exclusion criteria, analyzed different sets of studies, described their methods with insufficient detail to compare them clearly, and did not report or properly account for significant small sample size bias. The first objective of the current study was to reanalyze the studies selected in each of the published meta-analyses, while taking into account small sample size bias. The second objective was to replicate each meta-analysis by extracting relevant effect sizes directly from the primary studies included in the meta-analyses. The present study revealed three key findings: (1) many of the primary studies used a small sample size; (2) small sample size bias was pronounced in many of the analyses; and (3) when small sample size bias was taken into account, the effect of PPIs on well-being was small but significant (approximately r = .10), whereas the effect of PPIs on depression was variable, dependent on outliers, and generally not statistically significant. Future PPI research needs to focus on increasing sample sizes. Future meta-analyses of this research need to assess cumulative effects from a comprehensive collection of primary studies while being mindful of issues such as small sample size bias.

Introduction

Mental health has often been conceptualized as the absence of negative symptomatology [1]. Traditionally, research and intervention efforts in psychology have reflected this conceptualization by focusing primarily on deficits, disease and dysfunction. Although this focus has been invaluable to psychology, the expanding field of positive psychology offers a complementary approach by focusing on understanding and increasing well-being, defined by Ryan and Deci [2] as “optimal psychological functioning and experience” (p. 1), and the components of well-being including strengths, life satisfaction, happiness, and positive behaviours [3,4]. Together, the traditional approach to psychology along with positive psychology, provide a well-balanced understanding of humanity [4] that is consistent with the World Health Organization’s view that “Health is a state of complete physical, mental, and social well-being and not merely the absence of disease or infirmity.” [5].

Seligman [6] identified five essential factors of well-being: Positive emotions, Engagement, Relationships, Meaning, and Accomplishment (PERMA). More specifically, well-being is made up of two similar, yet distinct components: subjective well-being and psychological well-being. Subjective well-being (SWB), also referred to as the hedonic perspective of well-being, is the emotional and cognitive interpretation of the quality of one's life, and is often assessed by examining one’s happiness, affect, and satisfaction with life [7,8]. Psychological well-being (PWB), also referred to as the eudaimonic perspective of well-being, includes positive relations, personal maturity, growth, and independence [9]. PWB reflects a broader, more multidimensional construct than SWB. Ryff developed a model of PWB with six dimensions: (1) Self-acceptance (viewing oneself positively); (2) Positive relations with others (the ability to be empathetic and connect with others in more than superficial ways); (3) Autonomy (self-motivation and independence); (4) Environmental mastery (the ability and maturity to control and choose environments that are most appropriate); (5) Purpose in life (a sense of belonging, significance, and chosen direction); and (6) Personal growth (continuously seeking growth and optimal functioning). Both components of well-being have led researchers to different hypotheses and interests, continually providing both similar and dissimilar findings [2,10]. In sum, well-being is a broad, multidimensional construct that includes one’s affect, satisfaction with life, happiness, engagement with others, personal growth, and meaning and functioning in life.

Thus, although decreasing or eliminating negative symptomatology is necessary, it is not sufficient to achieve overall well-being. Health-care practitioners and researchers must also focus on prevention and intervention strategies that create, build upon, and foster well-being. Positive psychology interventions (PPIs) should be used to supplement approaches that address poor health. Rather than focusing directly on decreasing negative symptomatology, PPIs aim to increase positive affect, meaning in life, and engagement [1]. For healthy populations, the aim is to bring clients from a ‘languishing’ state of being to a ‘flourishing’ state of being [11]. For subclinical and clinical populations, the goals are to significantly reduce negative symptomatology and increase well-being [12]. PPIs are typically easy to follow, self-administered, and brief.

Fordyce [13] developed the first documented PPI designed to increase happiness. This PPI was comprised of 14 techniques including spending more time with others, enhancing close relationships, thinking positively, admiring and appreciating happiness, and refraining from worrying. More recent and common interventions developed and tested by Seligman, Steen, Park, and Peterson [4] include: (1) Gratitude visits/letters—where participants write and deliver a letter of gratitude to someone who has been particularly kind or helpful in the past, but who was never suitably thanked; (2) Three good things–each night for one week participants write down three good things that went well each day and identify the reasons these things went well; (3) You at your best–participants write a story of when they were at their best, identify their personal strengths that were utilized in the story, and then read this story and review their personal strengths each day for one week; and (4) Using signature strengths–participants complete and receive feedback from the character strengths inventory [14], and then use one of their top five character strengths in a different way each day for one week. There are many other similar interventions, such as loving kindness meditation [15], acts of kindness [16], hope therapy [17], optimism exercises [18], mindfulness-based strength practices [19], well-being therapy [20,21], and positive psychotherapy [1].

Sin and Lyubomirsky [22] published the first meta-analysis of the effectiveness of PPIs. In the ten years since its publication, this meta-analysis has been cited nearly 2,000 times, highlighting the interest in the effectiveness of PPIs. Sin and Lyubomirsky reported that PPIs had a moderate effect on improving well-being and decreasing depression. For well-being, the meta-analysis revealed a significant effect size of r = .29 (equivalent to d = .61) based on 49 studies. For decreasing depressive symptomatology, a significant effect size of r = .31 (equivalent to d = .65) was found based on 25 studies. Four years later, Bolier, Haverman, Westerhof, Riper, Smit, and Bohlmeijer [23] published a second highly cited meta-analysis of the effectiveness of PPIs focusing only on randomized controlled studies. Bolier et al. reported much smaller effects than Sin and Lyubomirsky. Bolier et al.’s meta-analysis revealed a significant effect size of r = .17 (d = .34) for subjective well-being, r = .10 (d = .20) for psychological well-being, and r = .11 (d = .23) for depression. Moreover, after they removed outlier effect sizes, the effect sizes decreased to r = .13 (d = .26) for subjective well-being, r = .08 (d = .17) for psychological well-being, and r = .09 (d = .18) for depression. Notwithstanding the dissimilar findings of the effect sizes of the PPIs, the high citation rates of these two meta-analyses highlight the recent and widespread interest in positive psychology.
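The paired r and d values quoted above follow the standard two-group conversion d = 2r/√(1 − r²) and its inverse r = d/√(d² + 4). The following Python sketch is included only to illustrate that conversion; it reproduces the figures reported by Sin and Lyubomirsky:

```python
import math

def r_to_d(r: float) -> float:
    """Convert a correlation effect size r to Cohen's d (two-group case)."""
    return 2 * r / math.sqrt(1 - r ** 2)

def d_to_r(d: float) -> float:
    """Convert Cohen's d back to r (two-group case)."""
    return d / math.sqrt(d ** 2 + 4)

print(round(r_to_d(0.29), 2))  # 0.61 -- Sin & Lyubomirsky, well-being
print(round(r_to_d(0.31), 2))  # 0.65 -- Sin & Lyubomirsky, depression
```

Bolier et al.'s reported pairings differ from this formula by rounding at most in the second decimal place.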

Schueller, Kashdan, and Parks [24] recently criticized Bolier et al.’s [23] meta-analysis as unreasonably selective, narrow, and non-comprehensive. They cautioned against drawing any conclusions from Bolier et al.’s meta-analysis for at least the following reasons. First, Bolier et al. substantially truncated their search by excluding studies prior to 1998 (“…the start of the positive psychology movement” (p. 2)). This eliminated earlier interventions, including the seminal work of Fordyce [13,25]. Second, Bolier et al. only included studies that referenced “positive psychology”. Because of this inclusion criterion, numerous relevant studies (e.g., studies using the “Best Possible Self” intervention) were omitted. Third, Bolier et al. excluded interventions that utilized meditation, mindfulness, forgiveness, and life-review because reviews and meta-analyses had already been conducted for these types of interventions. However, the elimination of a particular type of intervention or blend of interventions from a meta-analysis is an obstacle to determining how effective PPIs are in general. Moreover, meta-analyses restricted to a specific type of PPI make it impossible to compare the effectiveness of the full range of PPIs. Because of the restrictive inclusion criteria, the estimated effect sizes are relevant to only the particular blend of PPIs retrieved by Bolier et al. [23]. In any case, because the scope of this meta-analysis was restricted, conclusions regarding the effectiveness of PPIs in general, and of many particular types of PPIs, are limited.

In contrast to Bolier et al., Sin and Lyubomirsky [22] did not constrain their selection of primary studies and, because of this, they identified many more relevant studies than Bolier et al. even though they published their meta-analysis four years earlier. However, it is impossible to assess how comprehensive Sin and Lyubomirsky’s [22] meta-analysis was because the search for primary studies was not adequately described and, therefore, not replicable. For example, the search parameters were not sufficiently described and the search strategy included searching whatever was available in Sin and Lyubomirsky's private libraries and gathering studies from their colleagues. The literature search described in Bolier et al. [23] was similarly not replicable. For example, although Bolier et al. [23] listed numerous terms they used in conducting their searches, they did not specify how they combined them when conducting their searches.

A critical review reveals five additional serious methodological issues that were not adequately addressed in either meta-analysis, that undermine their conclusions, and that may help explain the differences in their findings. First, Sin and Lyubomirsky [22] reported only averaged unweighted rs as effect size estimates for well-being and depression (see Table 4, p. 478, in Sin & Lyubomirsky). However, these estimates give the same weight to all studies, regardless of sample size, and are widely considered inappropriate [26].

Second, the previous meta-analyses did not describe in sufficient detail how they calculated effect sizes for each primary study. For example, Sin and Lyubomirsky [22] stated that effect sizes were “computed from Cohen’s d, F, t, p, or descriptive statistics” (p. 469). Bolier et al. [23] stated that they calculated Cohen’s d from the post intervention means and standard deviations and, in some instances, “on the basis of pre- post-change score” without giving any further details. This lack of clarity is especially important because the calculation of effect sizes differs depending on study design (e.g., whether the study is a between-subject or within-subject design; [27]). Thus, effect size calculations can produce different results depending on whether the study used a repeated measures design [28]. In repeated measures designs, when effect sizes are calculated from test statistics such as Fs and ts using the usual formulae, the resulting effect sizes can be substantially inflated [29,30].

Third, Sin and Lyubomirsky’s [22] and Bolier et al.’s [23] meta-analyses included articles that were common to both studies. However, we calculated a relatively low correlation between the effect sizes extracted by Sin and Lyubomirsky [22] and Bolier et al. [23], suggesting that the effect sizes were determined differently in the two meta-analyses.

Fourth, an examination of Sin and Lyubomirsky’s [22] Tables 1 and 2 indicated the presence of small sample size bias. Small sample size bias (also called small study bias) occurs when smaller studies (with less precise findings) report larger effects than larger studies (with more precise findings). Small sample size bias is frequently the result of publication bias. It is well established that journals are much more inclined to publish studies with statistically significant findings than studies reporting null effects [31]. Thus, small studies, which typically report much larger effect sizes than larger studies, are more likely to be published. In turn, small sample size bias has become a significant problem in meta-analyses and numerous methods have been developed for identifying and estimating effect sizes in the presence of small sample size bias [27]. Although Sin and Lyubomirsky [22] noted asymmetry in a funnel plot of their data, they did not include the funnel plots in their article. Relying on the Fail-safe N, they argued that even though publication bias may be present, it is “. . .not large enough to render the overall results nonsignificant” (p. 477) [22]. However, the Fail-safe N method is no longer considered useful in assessing the significance of small sample size bias because it considers only statistical significance rather than substantive or practical significance, and it improperly assumes that effect sizes in the unpublished studies are zero [26].

Table 1. Effect sizes determined by the current study, for each well-being measure and each study included in Sin and Lyubomirsky (2009) well-being meta-analysis.

https://doi.org/10.1371/journal.pone.0216588.t001

Table 2. Effect sizes determined by the current study, for each depression measure and each study included in Sin and Lyubomirsky (2009) depression meta-analysis.

https://doi.org/10.1371/journal.pone.0216588.t002

In contrast to Sin and Lyubomirsky [22], Bolier et al. [23] addressed publication bias by computing Orwin’s fail-safe number and by using the Trim and Fill method [32]. Although Orwin’s fail-safe number and the Trim and Fill method are preferred over the Fail-safe N method, these approaches are limited and have been superseded by more advanced methods designed to estimate an effect size in the presence of small study bias, including cumulative meta-analyses, the top 10%, and limit meta-analyses [33–35]. Thus, it is unclear whether a reanalysis of Sin and Lyubomirsky’s [22] and Bolier et al.’s [23] data, using more appropriate methods for taking into account small sample size effects, would confirm their findings or result in smaller effect size estimates.

Fifth, both Sin and Lyubomirsky and Bolier et al. also reported a number of group moderator analyses. Sin and Lyubomirsky reported six moderator analyses on well-being and six moderator analyses on depression. Similarly, Bolier et al. reported six moderator analyses on subjective well-being, six on psychological well-being, and six on depression. Inspection of these moderator analyses shows that groups consisted of as few as two studies in Sin and Lyubomirsky (10 out of 12 moderator analyses included groups with 10 or fewer studies), and as few as one study in Bolier et al. (15 out of 16 moderator analyses included groups with 10 or fewer studies). Moreover, the number of studies in the moderator groups was widely discrepant for most of their moderator analyses. However, moderator analyses based on such a small number of studies in individual groups are not powerful enough to detect even large moderator effects [36]. Moreover, the power to detect moderator effects decreases still further when the number of studies in moderator groups is unequal [36]. Thus, in addition to the issues detailed above, the moderator analyses lacked the statistical power to make them meaningful.

Accordingly, the current study had two major objectives. The first objective was to reanalyze the reported data provided by the two meta-analyses while taking into account small sample size bias and comparing the findings to the original meta-analyses. The second objective was to replicate the two meta-analyses starting with extracting relevant data to calculate effect sizes directly from the primary studies rather than relying on the data published in the previous meta-analyses. In conducting these meta-analyses, the data were analyzed using weighted random effect models while taking into account small sample size bias using the selected methods discussed above.

Method

Primary studies

The primary studies selected for two major meta-analyses were included in the present study. Sin and Lyubomirsky [22] selected 49 primary studies on well-being [4,13,17,18,20,37–69] and 25 primary studies on depression [4,17,20,39,40,42–44,48,51,55,56,61,65,67,69–72]. Bolier et al. [23] selected 28 primary studies on subjective well-being [41,46,49,51,57,62,64,69,73–90], 20 primary studies on psychological well-being [4,17,20,41,42,46,57,62,64,76,80,82,83,90–99], and 14 primary studies on depression [4,17,20,42,51,62,77,78,82,91,93,96–100].

Relevant data extraction and coding of primary studies

The selected primary studies used a variety of research designs (e.g., pre-post, post only), included one or more relevant interventions within the same study, and included one or more relevant outcome measures. Only interventions designed to improve well-being and/or decrease depression were considered relevant. Similarly, only measures of well-being and/or depression were relevant. Studies that included more than one intervention often employed only one control condition, which was used to determine the effectiveness of each intervention. Some studies included more than one control condition, some of which were designed to decrease well-being and some of which were designed to increase well-being. Accordingly, we coded control conditions according to their presumed effect on well-being (negative, neutral, positive) and chose the most neutral control conditions to calculate PPI effect sizes. Thus, to calculate PPI effect sizes, we extracted the following data for each study, intervention, and relevant outcome measure: research design (e.g., pre-post, post only); intervention; outcome measure; sample size of both control and intervention groups; overall sample size; means and standard deviations of both pre and post assessments; within-condition correlations between pre and post measurements (these were rarely provided); any F, t, p, or effect size (e.g., Cohen’s d) statistics reported for post only comparisons between control and intervention conditions; mean differences between pre and post measurements and associated standard deviations; and any other relevant data that allowed for effect size calculations.

Effect size calculations

The primary studies that examined the effectiveness of interventions on well-being and/or depression symptoms used a variety of research designs, including repeated measures, pre-post designs, and between subjects post only measures designs. Although it is relatively straightforward to calculate effect sizes (i.e., rs or Cohen’s ds) for between subject post only designs using means, standard deviations, Fs, ts, or ps, it is much more challenging to calculate effect sizes for repeated measures pre-post designs [26]. Primary studies using repeated measures pre-post designs rarely report sufficient statistical detail (such as the necessary correlations between pre and post scores), and thus, it is often necessary to impute estimated pre-post correlations using data obtained from other studies. Critically, it is not appropriate to use Fs, ts, or ps to calculate effect sizes using formulae designed for between subject designs (i.e., formulae that do not take into account pre-post correlations). Accordingly, our initial approach was to calculate effect sizes for pre-post repeated measures designs using a formula recommended by Morris [101], specifically, dppc2, using means, standard deviations, and when necessary, imputed pre-post correlations. Additionally, effect sizes were calculated using only post means and standard deviations, effectively treating these repeated measures pre-post designs as between subjects post-only designs. However, because the primary studies did not report pre-post correlations for outcome measures, it was not possible to calculate dppc2 without imputing such correlations from elsewhere for each study.
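Morris's dppc2 standardizes the difference in pre-post change between the intervention and control groups by the pooled pretest standard deviation, with a small-sample correction. A minimal Python sketch of the point estimate is given below (the article's analyses were conducted in R; note that the imputed pre-post correlations discussed above enter the sampling variance of dppc2, which is needed for meta-analytic weighting, rather than the point estimate itself):

```python
import math

def dppc2(m_pre_t, m_post_t, sd_pre_t, n_t,
          m_pre_c, m_post_c, sd_pre_c, n_c):
    """Morris (2008) pre-post-control effect size d_ppc2 (point estimate).

    Standardizes the difference in pre-post change between the treatment
    and control groups by the pooled pretest SD, applying the Hedges-style
    small-sample correction c_p.
    """
    df = n_t + n_c - 2
    sd_pre_pooled = math.sqrt(((n_t - 1) * sd_pre_t ** 2 +
                               (n_c - 1) * sd_pre_c ** 2) / df)
    c_p = 1 - 3 / (4 * df - 1)  # small-sample bias correction
    return c_p * ((m_post_t - m_pre_t) - (m_post_c - m_pre_c)) / sd_pre_pooled

# Illustrative (made-up) numbers: treatment improves by 4 points,
# control by 1 point, pretest SD = 4 in both groups of 20.
d = dppc2(10, 14, 4, 20, 10, 11, 4, 20)
```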

Some primary studies used multiple outcome measures. To ensure that each study contributed only one effect size to each meta-analysis, effect sizes were first calculated for each outcome measure and then aggregated to yield a single effect size. This was done while taking into account the correlations among the within-study outcomes using methods described by Schmidt and Hunter [102] and imputing a recommended default correlation of r = .50 among within-study effects [103]. The aggregation of within-study outcomes was done using the R package MAc [104].
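The composite step can be sketched as follows in Python; this is a simplified stand-in for the R package MAc, applying the assumed default r = .50 to a plain mean of the within-study effects and to the variance of that mean:

```python
import math

def aggregate_effects(effects, variances, r=0.50):
    """Combine several within-study effect sizes into one composite.

    Returns the mean effect and its variance, treating the outcomes as
    correlated at r (default .50, as in the text). The variance formula
    is that of a mean of correlated estimates.
    """
    m = len(effects)
    mean_effect = sum(effects) / m
    var = sum(variances)
    for i in range(m):
        for j in range(m):
            if i != j:  # add covariance terms r * se_i * se_j
                var += r * math.sqrt(variances[i] * variances[j])
    return mean_effect, var / m ** 2
```

With two outcomes of equal variance, the composite variance is v(1 + r)/2, smaller than either outcome alone but larger than under the (wrong) independence assumption v/2.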

Similarly, some primary studies used multiple interventions. Moreover, only some of these interventions were designed within the positive psychology framework to improve well-being and/or decrease depression symptoms. Thus, effect sizes were calculated for each intervention designed to improve well-being and/or decrease depression symptoms within the positive psychology framework, and resulting effect sizes were aggregated to yield a single effect size from each study. For example, Emmons and McCullough [105] employed three experimental conditions: (a) participants listed things they were grateful for in their life, (b) participants listed hassles they encountered that day, and (c) participants listed events that happened during the week that impacted their life. In this case, the first condition (gratitude listing) was classified as the intervention group and the last condition (event listing) as the control group. As another example, Lyubomirsky, Dickerhoof, Boehm, and Sheldon [69] used three experimental conditions: (a) participants expressed optimism, (b) participants expressed gratitude, and (c) participants listed activities from the previous week. In this case, the first two conditions (optimism and gratitude) were classified as the intervention groups and the third condition was classified as the control group. Subsequently, the effect sizes obtained for the two interventions were aggregated into a single effect size for that particular study using methods recommended by Schmidt and Hunter [102] as described above.

Finally, some primary studies–seven in Sin and Lyubomirsky’s (2009) study set and three in Bolier et al.’s (2013) study set–used multiple control or comparison groups, ranging from interventions that may have decreased well-being (e.g., asking participants to reflect on negative experiences), to neutral controls, to interventions that increased well-being. In these cases, the most neutral control was chosen when calculating effect sizes. However, in some cases the control group was not clearly identified. For example, Low et al. [106] included three groups of female patients with breast cancer, who were asked to write about one of three possible options: (a) positive thoughts about their breast cancer experience, (b) deepest thoughts and feelings about their experience with breast cancer, and (c) facts about breast cancer and treatment. The first condition (positive thoughts) was classified as the intervention, which fits within the positive psychology framework, and the last condition (facts about breast cancer and its treatment) was used as the control. Finally, for studies by Cook [38] and Buchanan and Bardi [74], the no intervention controls were chosen over other controls, and for Tkach [107], the condition in which participants described any 3 events, 3 times a day, once a week was selected over other controls.

Effect sizes for primary study outcomes were calculated from available data in the following order of preference: (1) the post intervention means and standard deviations, (2) the post intervention ANOVA F values, (3) the post intervention Cohen's ds, (4) the post intervention p values, and (5) the pre-post difference score means and standard deviations as the difference between intervention and control effect sizes.
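The first two preferences in the order above can be sketched for a two-group comparison (where t = √F for a one-df ANOVA). This is illustrative Python, not the authors' code:

```python
import math

def d_from_means(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d from post-intervention means and SDs (preference 1)."""
    sd_pooled = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) /
                          (n1 + n2 - 2))
    return (m1 - m2) / sd_pooled

def d_from_f(f, n1, n2):
    """Cohen's d from a one-df between-subjects ANOVA F (preference 2).

    For a two-group comparison t = sqrt(F), and d = t * sqrt(1/n1 + 1/n2).
    """
    return math.sqrt(f) * math.sqrt(1 / n1 + 1 / n2)
```

Both routes agree when applied to the same two-group data, which is why a preference order (rather than averaging) is workable.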

Missing data and other irregularities

A number of primary studies included in the previous meta-analyses did not report sufficient data to calculate effect sizes. In the previous meta-analyses, the effect sizes for these studies were imputed to be zero (e.g., [46,57,68]). In the current replication analyses, such studies were excluded unless missing data could be imputed from other relevant sources. For example, if standard deviations for an outcome measure were missing in one study/experiment but were reported elsewhere (e.g., for another study/experiment within the same article), the missing standard deviations were imputed from the available ones to allow the calculation of effect sizes (e.g., Pretorious et al. [108]).

A number of primary studies only reported an overall sample size and did not report the sample size for the control and intervention groups. In such cases, the sample sizes for control and intervention groups were estimated by dividing the overall sample size by the number of control and intervention groups. Lastly, four articles–Shapira and Mongrain [96], Sergeant and Mongrain [99], Mongrain and Anselmo-Matthews [97], and Mongrain, Chin, and Shapira [109]–report on four seemingly different studies but actually report on different conditions/interventions of the same study. Accordingly, these four articles were treated as a single study.

Statistical analyses

After all effect sizes were calculated, they were pooled to obtain a weighted effect size of PPIs using a random effects model. A random effects model was chosen because true PPI effects are unlikely to be the same and are likely to vary across the interventions, participants, and designs [33,110]. A fixed effect model meta-analysis assumes that all primary study effects estimate one common underlying true effect size. In contrast, a random effect model meta-analysis assumes that primary study effects may estimate different underlying true effect sizes (e.g., a true effect size may vary depending on participants’ age and the duration of the interventions).
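The random effects pooling step can be sketched with the DerSimonian-Laird estimator of the between-study variance τ². This is one common choice; the article does not state which τ² estimator its R packages were configured to use:

```python
def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooling.

    Returns the pooled effect and the between-study variance tau^2.
    Each study is weighted by 1 / (within-study variance + tau^2),
    so small (high-variance) studies get proportionally less weight.
    """
    k = len(effects)
    w = [1 / v for v in variances]                      # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                  # between-study variance
    w_star = [1 / (v + tau2) for v in variances]        # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    return pooled, tau2
```

When the studies are homogeneous, τ² collapses to zero and the result equals the fixed-effect estimate.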

Heterogeneity–variation or inconsistency found among effect sizes–is expected to be due to chance and to the array of interventions and samples used. Considerable heterogeneity indicates substantial differences between studies. To assess this, two common heterogeneity statistics were calculated: Cochran’s Q [111] and I2 [112]. The Q statistic follows a chi-square distribution with k − 1 degrees of freedom (where k is the number of studies) and only informs us of whether or not heterogeneity exists; it does not indicate how much heterogeneity exists and it is dependent on sample size. In contrast, the I2 statistic provides the percentage of total between-study variability found among the effect sizes, where a result of I2 = 0 means that the variability found among the estimated effect sizes is due solely to sampling error within studies [113].
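Both statistics follow directly from the fixed-effect weights; a minimal Python sketch:

```python
def heterogeneity(effects, variances):
    """Cochran's Q and Higgins' I^2 (in percent) for a set of studies."""
    w = [1 / v for v in variances]
    pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Q: weighted sum of squared deviations from the fixed-effect estimate
    q = sum(wi * (e - pooled) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    # I^2: share of total variability beyond what chance (df) predicts
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2
```

Identical effect sizes give Q = 0 and I² = 0; widely scattered precise studies drive I² toward 100%.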

Small study effects were assessed by first examining scatter plots, forest plots, and funnel plots. Several methods were used to estimate effect sizes while taking into account small study effects. First, the Trim and Fill procedure was used [32]. Second, a cumulative meta-analysis was used to determine how much the addition of small size studies would change the estimated effect size. Third, the effect sizes were estimated based on the top 10% (TOP10) of the most precise studies [114]. Stanley and Doucouliagos [114] demonstrated that the TOP10, despite its simplicity, performs well in estimating effect sizes in the presence of small sample size bias. Finally, the effect sizes were estimated using limit meta-analysis [115], which is the most sophisticated of the methods developed for estimating effect sizes in the presence of small sample size bias. The limit meta-analysis has been shown to be superior to other available methods, including the trim-and-fill methods and selection model methods [116]. Accordingly, we report only the limit meta-analysis results. All analyses were conducted using R [117], including the packages compute.es [118], MAc [104], meta [119], metafor [120], and metasens [121].
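The funnel-plot asymmetry check underlying these methods can be sketched as an Egger-style regression of the standardized effect on precision, where a nonzero intercept signals small-study effects. This is illustrative Python only; the article's analyses used R's meta and metasens packages:

```python
import math

def egger_regression(effects, variances):
    """Egger-style funnel asymmetry regression.

    Regresses e_i / se_i on 1 / se_i via ordinary least squares.
    A nonzero intercept indicates small-study effects (asymmetry);
    the slope estimates the effect adjusted for precision.
    Returns (intercept, slope); the significance test is omitted here.
    """
    se = [math.sqrt(v) for v in variances]
    y = [e / s for e, s in zip(effects, se)]   # standardized effects
    x = [1 / s for s in se]                    # precisions
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept, slope
```

In a perfectly symmetric funnel (same true effect at every precision), the intercept is zero and the slope recovers the common effect.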

Following the procedure described in Cooper and Hedges [122], outliers were identified as effect sizes that were at least 1.5 times the interquartile range above the upper quartile or below the lower quartile of the distribution of effect sizes. When outliers were identified, a meta-analysis was re-run after removal of the outliers to assess the impact of outliers on the findings. Using the method for identifying outliers described by Viechtbauer and Cheung [123] yielded similar results.
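The 1.5 × IQR rule can be sketched in a few lines of Python (illustrative only; quartile conventions differ slightly across software, so boundary cases may be flagged differently than in R):

```python
import statistics

def iqr_outliers(effects):
    """Flag effect sizes beyond 1.5 * IQR outside the quartiles,
    i.e. the Cooper & Hedges rule described in the text."""
    q1, _, q3 = statistics.quantiles(effects, n=4)  # exclusive method
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [e for e in effects if e < lo or e > hi]
```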

Moderator analyses

For the reasons detailed in the introduction, we have not attempted to reanalyze and replicate the moderator analyses published in Sin and Lyubomirsky (2009) and Bolier et al. (2013). Any such moderator analyses would be uninterpretable and not meaningful due to the small number of studies as well as the discrepant number of studies in the moderator groups [36]. Moreover, other issues reviewed in the introduction–most importantly the prevalent small sample size bias and non-comprehensive search for relevant primary studies–would also render any such analyses uninterpretable.

Results

Sin and Lyubomirsky (2009) meta-analysis

Well-being: Reanalysis of reported data.

The reanalysis used data reported by Sin and Lyubomirsky [22] in their Table 1. Fig 1 shows the forest plot of effect sizes (rs) as reported by Sin and Lyubomirsky, including total sample size for each study in the “Total” column. The forest plot indicates that small studies resulted in larger effect sizes than large studies. A random effect model estimated an effect size of r = .24 [95% CI = (0.18, 0.30)] with substantial heterogeneity as measured by I2 = 71.9%.

Fig 1. Reanalysis of Sin and Lyubomirsky (2009) well-being effect sizes: Forest plot of study effect sizes.

The forest plot indicates substantial scatter among the effect sizes and suggests that small studies resulted in larger effect sizes than large studies.

https://doi.org/10.1371/journal.pone.0216588.g001

Fig 2, top panel, shows a scatter plot of effect sizes and study sizes. The scatter plot indicates the presence of a small study effect. Fig 2, bottom panel, shows the funnel plot with substantial asymmetry. The regression test of the funnel plot symmetry confirmed that the plot was asymmetrical, t(47) = 4.46, p < .001. Accordingly, we estimated the effect size after accounting for the small study size bias. The limit meta-analysis (Fig 2, bottom panel) resulted in an effect size of r = .08 [95% CI = (0.00, 0.15)]. A test of small-study effects showed Q-Q'(1) = 50.83, p < .001. A test of residual heterogeneity indicated Q(47) = 120.24, p < .001. Thus, taking into account small study effects, the reanalysis resulted in a much smaller estimated effect size for well-being than the effect size (r = .29) reported by Sin and Lyubomirsky [22].
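The regression test of funnel plot symmetry used here and below is an Egger-type test. The following Python sketch illustrates the idea on Fisher's z scale; the R meta package's implementation may differ in detail.

```python
import numpy as np

def egger_test(r, n):
    """Egger-type regression test of funnel-plot asymmetry: regress the
    standardized effect (z / SE) on precision (1 / SE) and test whether
    the intercept differs from zero. Illustrative sketch only."""
    z = np.arctanh(np.asarray(r, dtype=float))
    se = 1.0 / np.sqrt(np.asarray(n, dtype=float) - 3.0)
    y = z / se                                # standardized effect
    x = 1.0 / se                              # precision
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.solve(X.T @ X, X.T @ y)  # OLS via normal equations
    resid = y - X @ beta
    k = len(y)
    s2 = np.sum(resid ** 2) / (k - 2)         # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)
    t = beta[0] / np.sqrt(cov[0, 0])          # t statistic for intercept
    return float(t), k - 2                    # compare |t| with t(k - 2)

# Hypothetical data with a pronounced small-study effect
t, df = egger_test([0.50, 0.40, 0.30, 0.15, 0.10, 0.05],
                   [15, 25, 40, 200, 400, 800])
```

For this hypothetical data set, |t| far exceeds the conventional critical value on df = 4, signalling the same kind of asymmetry reported in the reanalyses.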

Fig 2. Reanalysis of Sin and Lyubomirsky (2009) well-being effect sizes: Relationship between study sizes and effect sizes.

The top panel shows the scatter plot of effect sizes by study sizes. The bottom panel shows the funnel plot and the results of the limit meta-analysis including the estimated effect size taking into account small-study effect.

https://doi.org/10.1371/journal.pone.0216588.g002

Well-being: Complete replication of meta-analysis.

Table 1 reports effect sizes for PPIs on well-being determined as described above for each outcome measure and each intervention comparison. These effect sizes were then aggregated to yield a single effect size for each study comparable to those reported in Sin and Lyubomirsky [22] using the aggregation method described in the Method section. The correlation between the effect sizes reported by Sin and Lyubomirsky [22] and the effect sizes calculated through this replication was high, r = .78 [95% CI = (0.62, 0.88)].
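The aggregation of several outcome-level effect sizes into a single study-level effect size can be illustrated by averaging on Fisher's z scale. The sketch below is a deliberate simplification: unlike the MAc package's aggregation, it ignores the intercorrelations among outcome measures.

```python
import numpy as np

def aggregate_study_r(rs):
    """Collapse several outcome-level correlations from one study into a
    single study-level r by averaging on Fisher's z scale. A simplified
    stand-in for MAc-style aggregation, which additionally models the
    intercorrelation among outcome measures."""
    z = np.arctanh(np.asarray(rs, dtype=float))
    return float(np.tanh(np.mean(z)))

# Hypothetical study reporting three well-being outcomes
print(round(aggregate_study_r([0.10, 0.20, 0.30]), 2))  # → 0.2
```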

Fig 3 shows the forest plot of the replication effect sizes and suggests that small studies reported larger effects than large studies. A random effect model estimated an effect size of r = .23 [95% CI = (0.17, 0.30)] with moderate heterogeneity as measured by I2 = 56.5%. Fig 4, top panel, shows a scatter plot of effect sizes and study sizes. The scatter plot indicates the presence of a small study effect. Fig 4, bottom panel, shows the funnel plot with substantial asymmetry. The regression test of the funnel plot symmetry confirmed that the plot was asymmetrical, t(38) = 3.19, p = .003. Accordingly, we estimated the effect size after accounting for the small study size bias. The limit meta-analysis (Fig 4, bottom panel) estimated an effect size of r = .10 [95% CI = (-0.01, 0.20)]. A test of small-study effects showed Q-Q'(1) = 18.89, p < .001 and a test of residual heterogeneity indicated that Q(38) = 70.68, p < .001. Thus, similar to the reanalysis of Sin and Lyubomirsky’s [22] data, the replication resulted in a much smaller effect size estimate than that originally reported by Sin and Lyubomirsky (r = .29).

Fig 3. Complete replication of Sin and Lyubomirsky (2009) well-being effect sizes: Forest plot of study effect sizes.

The forest plot indicates substantial scatter among the effect sizes and suggests that small studies resulted in larger effect sizes compared to larger studies.

https://doi.org/10.1371/journal.pone.0216588.g003

Fig 4. Complete replication of Sin and Lyubomirsky (2009) well-being effect sizes: Relationship between study sizes and effect sizes.

The top panel shows the scatter plot of effect sizes by study sizes. The bottom panel shows the funnel plot and the results of the limit meta-analysis including the estimated effect size taking into account small-study effect.

https://doi.org/10.1371/journal.pone.0216588.g004

Depression: Reanalysis of reported data.

The reanalysis used data reported by Sin and Lyubomirsky [22] in their Table 2. Fig 5 shows the forest plot of effect sizes. Again, the forest plot indicates that small studies reported larger effects than large studies. A random effect model estimated an effect size of r = .25 [95% CI = (0.14, 0.34)] with substantial heterogeneity as measured by I2 = 74%.

Fig 5. Reanalysis of Sin and Lyubomirsky (2009) depression effect sizes: Forest plot of study effect sizes.

The forest plot indicates substantial scatter among the effect sizes and suggests that small studies resulted in larger effects than large studies.

https://doi.org/10.1371/journal.pone.0216588.g005

Fig 6, top panel, shows the scatter plot of effect sizes and study sizes. The scatter plot indicates the presence of small study effects. Fig 6, bottom panel, shows the funnel plot with substantial asymmetry. The regression test of the funnel plot symmetry confirmed that the plot was asymmetrical, t(23) = 3.20, p = .004. Accordingly, we estimated the effect size after accounting for the small study size bias. The limit meta-analysis (Fig 6, bottom panel) resulted in an effect size of r = .04 [95% CI = (-0.05, 0.13)]. A test of small-study effects showed Q-Q'(1) = 28.40, p < .001 and a test of residual heterogeneity indicated Q(23) = 63.79, p < .001. Thus, similar to the reanalysis of well-being effect sizes, taking into account small study effects, the reanalysis of depression effect sizes resulted in a much smaller, and now non-significant, estimated effect size of PPIs on depression compared to the effect size (r = .31) reported by Sin and Lyubomirsky.

Fig 6. Reanalysis of Sin and Lyubomirsky (2009) depression effect sizes: Relationship between study sizes and effect sizes.

Top panel shows the scatter plot of effect sizes by study sizes. The bottom panel shows the funnel plot and the results of the limit meta-analysis including the estimated effect size taking into account small study effects.

https://doi.org/10.1371/journal.pone.0216588.g006

Depression: Complete replication of meta-analysis.

Table 2 reports effect sizes for studies that assessed depression. The effect sizes were determined as described above for each outcome measure and each intervention comparison. These effect sizes were then aggregated to yield a single effect size for each study comparable to those reported in Sin and Lyubomirsky [22] using the aggregation method described in the Method section. The correlation between the effect sizes reported by Sin and Lyubomirsky [22] and the effect sizes calculated through this replication was high, r = .78 [95% CI = (0.52, 0.91)].

Fig 7 shows the forest plot of the replication effect sizes. Again, the forest plot indicates that small studies resulted in larger effects than large studies. A random effect model estimated an effect size of r = .26 [95% CI = (0.14, 0.38)] with substantial heterogeneity as measured by I2 = 70.1%. Fig 8, top panel, shows a scatter plot of effect sizes by study sizes. The scatter plot indicates the presence of small study effects. Fig 8, bottom panel, shows the funnel plot with substantial asymmetry. The regression test of the funnel plot symmetry confirmed that the plot was asymmetrical, t(19) = 5.33, p < .001. Accordingly, we estimated the effect size in the presence of the small study size bias. The limit meta-analysis (Fig 8, bottom panel) estimated an effect size of r = -.03 [95% CI = (-0.17, 0.11)]. A test of small-study effects showed Q-Q'(1) = 40.06, p < .001 and a test of residual heterogeneity showed Q(19) = 26.82, p = .109. Thus, similar to the reanalysis of depression effect sizes, taking into account small study effects, the replication analyses resulted in a much smaller, and now non-significant, estimated effect of PPIs on depression compared to the effect size reported by Sin and Lyubomirsky (r = .31).

Fig 7. Complete replication of Sin and Lyubomirsky (2009) depression effect sizes: Forest plot of study effect sizes.

The forest plot indicates substantial scatter among the effect sizes and suggests that small studies resulted in larger effect sizes than large studies.

https://doi.org/10.1371/journal.pone.0216588.g007

Fig 8. Complete replication of Sin and Lyubomirsky (2009) depression effect sizes: Relationship between study sizes and effect sizes.

The top panel shows the scatter plot of effect sizes by study sizes. The bottom panel shows the funnel plot and the results of the limit meta-analysis including the estimated effect size taking into account small study effects.

https://doi.org/10.1371/journal.pone.0216588.g008

Bolier et al. (2013) meta-analysis

Subjective well-being: Reanalysis of reported data.

The reanalysis used data reported by Bolier et al. [23] in their Table 2 and Fig 2. Fig 9 shows the forest plot of effect sizes reported by Bolier et al. [23]. The forest plot reveals no obvious relationship between effect sizes and study sample sizes. The random effects model estimated an effect size of r = .17 [95% CI = (0.11, 0.22)] with moderate heterogeneity as measured by I2 = 47.1%.

Fig 9. Reanalysis of Bolier et al. (2013) subjective well-being effect sizes: Forest plot of study effect sizes.

The forest plot indicates some scatter among the effect sizes but suggests no consistent relationship between effect sizes and study sizes.

https://doi.org/10.1371/journal.pone.0216588.g009

Fig 10, top panel, shows the scatter plot of effect sizes as a function of study size and indicates no obvious relationship between effect sizes and study sizes. Fig 10, bottom panel, shows the funnel plot, which shows no obvious asymmetry. The regression test of the funnel plot symmetry was not statistically significant, t(26) = 1.06, p = .299. Furthermore, the limit meta-analysis (Fig 10, bottom panel) estimated an effect size of r = .13 [95% CI = (0.02, 0.24)], comparable to the random effect model estimate without any adjustments. A test of small study effects showed Q-Q'(1) = 2.12, p = .145 and a test of residual heterogeneity indicated Q(26) = 48.96, p = .004. The reanalysis of Bolier et al.’s [23] subjective well-being data confirmed their findings.

Fig 10. Reanalysis of Bolier et al. (2013) subjective well-being effect sizes: Relationship between study sizes and effect sizes.

The top panel shows the scatter plot of effect sizes by study sizes. The bottom panel shows the funnel plot and the results of the limit meta-analysis taking into account any small study effects.

https://doi.org/10.1371/journal.pone.0216588.g010

Subjective well-being: Complete replication of meta-analysis.

Table 3 reports effect sizes determined as described above for each outcome measure and intervention comparison. These effect sizes were aggregated to yield a single effect size for each study comparable to those reported in Bolier et al. [23]. The correlation between the effect sizes reported by Bolier et al. [23] and the effect sizes calculated through this replication was high, r = .85 [95% CI = (0.68, 0.94)].

Table 3. Effect sizes determined by the current study, for each subjective well-being measure and each study included in Bolier et al. (2013) subjective well-being meta-analysis.

https://doi.org/10.1371/journal.pone.0216588.t003

Fig 11 shows the forest plot of effect sizes with no obvious signs of small study effects. A random effects model estimated an effect size of r = .19 [95% CI = (0.12, 0.26)] with moderate heterogeneity as measured by I2 = 63.1%. Fig 12, top panel, shows the scatter plot of effect sizes by study size and indicates no obvious relationship between them. Fig 12, bottom panel, shows the funnel plot with no obvious asymmetry. The regression test of funnel plot symmetry was not statistically significant, t(22) = 1.37, p = .184. Furthermore, the limit meta-analysis (Fig 12, bottom panel) estimated an effect size of r = .13 [95% CI = (0.00, 0.26)]. A test of small-study effects showed Q-Q'(1) = 4.91, p = .027 and a test of residual heterogeneity indicated Q(22) = 57.39, p < .001. These results are similar to those reported by Bolier et al. [23] and obtained by the reanalysis of Bolier et al.’s data.

Fig 11. Complete replication of Bolier et al. (2013) subjective well-being effect sizes: Forest plot of study effect sizes.

The forest plot indicates some scatter among the effect sizes but suggests no obvious relationship between effect sizes and study sizes.

https://doi.org/10.1371/journal.pone.0216588.g011

Fig 12. Complete replication of Bolier et al. (2013) subjective well-being effect sizes: Relationship between study sizes and effect sizes.

The top panel shows the scatter plot of effect sizes by study sizes. The bottom panel shows the funnel plot and the results of the limit meta-analysis taking into account any small study effects.

https://doi.org/10.1371/journal.pone.0216588.g012

Psychological well-being: Reanalysis of reported data.

The reanalysis used data reported by Bolier et al. [23] in their Table 2 and Fig 3. Fig 13 shows the forest plot of effect sizes. The plot indicates the presence of a small study effect and the presence of an outlier (Fava.2005.1). A random effect model estimated an effect size of r = .09 [95% CI = (0.04, 0.14)] with moderate heterogeneity as measured by I2 = 35.2%.

Fig 13. Reanalysis of Bolier et al. (2013) psychological well-being effect sizes: Forest plot of study effect sizes.

The forest plot indicates that smaller studies reported larger effect sizes than larger studies and also indicates the presence of a possible outlier (Fava.2005.1).

https://doi.org/10.1371/journal.pone.0216588.g013

Fig 14, top panel, shows the scatter plot of effect sizes by study size and indicates the presence of small study effects. Fig 14, bottom panel, shows the funnel plot with visible asymmetry. The regression test of the funnel plot symmetry confirmed that the plot was asymmetrical, t(18) = 2.68, p = .02. Accordingly, it is necessary to estimate the effect size in the presence of the small study size bias. The limit meta-analysis (Fig 14, bottom panel) estimated an effect size of r = .02 [95% CI = (-0.04, 0.08)]. A test of small-study effects showed Q-Q'(1) = 8.36, p = .004 and a test of residual heterogeneity indicated Q(18) = 20.97, p = .281.

Fig 14. Reanalysis of Bolier et al. (2013) psychological well-being effect sizes: Relationship between effect sizes and study sizes.

The top panel shows the scatter plot of effect sizes by study sizes. The bottom panel shows the funnel plot and the results of the limit meta-analysis taking into account any small study effects.

https://doi.org/10.1371/journal.pone.0216588.g014

The analysis was recalculated after removing one outlier (Fava.2005.1). A random effect model estimated an effect size of r = .06 [95% CI = (0.03, 0.10)] with no heterogeneity as measured by I2 = 0%. The regression test of the funnel plot symmetry revealed significant asymmetry, t(17) = 2.13, p = .048. Accordingly, we estimated the effect size after accounting for the small study bias. The limit meta-analysis estimated an effect size of r = .01 [95% CI = (-0.05, 0.08)]. A test of small-study effects showed Q-Q'(1) = 3.68, p = .055 and a test of residual heterogeneity indicated Q(17) = 13.81, p = .681. Thus, a reanalysis of Bolier et al.’s [23] psychological well-being data revealed smaller effect sizes than the effect size of r = .10 reported by Bolier et al.

Psychological well-being: Complete replication of meta-analysis.

Table 4 reports effect sizes determined as described above for each outcome measure and each intervention comparison. These effect sizes were aggregated to yield a single effect size for each study comparable to those reported in Bolier et al. [23]. The correlation between the effect sizes reported by Bolier et al. and the effect sizes calculated through this replication was high, r = .88 [95% CI = (0.70, 0.96)].

Table 4. Effect sizes determined by the current study, for each psychological well-being measure and each study included in Bolier et al. (2013) psychological well-being meta-analysis.

https://doi.org/10.1371/journal.pone.0216588.t004

Fig 15 shows the forest plot of replication effect sizes. Again, the forest plot indicates that smaller studies reported larger effect sizes than larger studies. A random effect model estimated an effect size of r = .15 [95% CI = (0.08, 0.22)] with moderate heterogeneity as measured by I2 = 41.0%. Fig 16, top panel, shows the scatter plot between effect sizes and sample sizes and indicates the presence of small study size bias. Fig 16, bottom panel, shows the funnel plot with visible asymmetry. The regression test of the funnel plot symmetry confirmed the asymmetry, t(15) = 2.66, p = .018. Accordingly, it is necessary to estimate the effect size in the presence of the small study size bias. The limit meta-analysis (Fig 16, bottom panel) estimated an effect size of r = .02 [95% CI = (-0.09, 0.13)]. A test of small-study effects showed Q-Q'(1) = 8.71, p = .003 and a test of residual heterogeneity indicated Q(15) = 18.41, p = .242. Thus, a replication of Bolier et al.’s [23] psychological well-being meta-analysis revealed smaller effect sizes than the effect size of r = .10 reported by Bolier et al.

Fig 15. Complete replication of Bolier et al. (2013) psychological well-being effect sizes: Forest plot of study effect sizes.

The forest plot indicates that smaller studies reported larger effect sizes than larger studies.

https://doi.org/10.1371/journal.pone.0216588.g015

Fig 16. Complete replication of Bolier et al. (2013) psychological well-being effect sizes: Relationship between effect sizes and study sizes.

The top panel shows the scatter plot of effect sizes by study sizes. The bottom panel shows the funnel plot and the results of the limit meta-analysis taking into account any small study effects.

https://doi.org/10.1371/journal.pone.0216588.g016

Depression: Reanalysis of reported data.

A reanalysis used data reported by Bolier et al. [23] in their Table 2 and Fig 4. Fig 17 shows the forest plot of effect sizes. The forest plot indicates that small studies reported larger effect sizes than larger studies and also suggests the presence of outliers. A random effect model estimated an effect size of r = .10 [95% CI = (0.03, 0.16)] with moderate heterogeneity as measured by I2 = 51.4%.

Fig 17. Reanalysis of Bolier et al. (2013) depression effect sizes: Forest plot of study effect sizes.

The forest plot indicates that small studies resulted in larger effect sizes than large studies and also suggests the presence of outliers.

https://doi.org/10.1371/journal.pone.0216588.g017

Fig 18, top panel, shows the scatter plot of effect sizes by study size. The scatter plot indicates the presence of small study effects. Fig 18, bottom panel, shows the funnel plot with substantial asymmetry. The regression test of the funnel plot symmetry confirmed the plot was asymmetrical, t(12) = 2.71, p = .019. Accordingly, it is necessary to estimate the effect size in the presence of the small study size bias. The limit meta-analysis (Fig 18, bottom panel) estimated an effect size of r = .02 [95% CI = (-0.04, 0.07)]. A test of small-study effects showed Q-Q'(1) = 10.14, p = .002 and a test of residual heterogeneity indicated Q(12) = 16.60, p = .165.

Fig 18. Reanalysis of Bolier et al. (2013) depression effect sizes: Relationship between study sizes and effect sizes.

The top panel shows the scatter plot of effect sizes by study sizes. The bottom panel shows the funnel plot and the results of the limit meta-analysis including the estimated effect size taking into account small study effects.

https://doi.org/10.1371/journal.pone.0216588.g018

The analyses were repeated after removing the outliers (Fava.2005.1, Seligman.2006.1). A random effect model estimated an effect size of r = .07 [95% CI = (0.02, 0.12)] with some heterogeneity as measured by I2 = 27.7%. The regression test of the funnel plot symmetry revealed no significant asymmetry, t(10) = 1.55, p = .152. The limit meta-analysis estimated an effect size of r = .03 [95% CI = (-0.03, 0.10)]. A test of small-study effects showed Q-Q'(1) = 2.95, p = .086 and a test of residual heterogeneity indicated Q(10) = 12.27, p = .268. Thus, the reanalyses of Bolier et al.’s data revealed a smaller, non-significant effect for depression, in contrast to Bolier et al.’s finding of r = .11.

Depression: Complete replication of meta-analysis.

Table 5 reports effect sizes determined as described above for each outcome measure and each intervention comparison. These effect sizes were aggregated to yield a single effect size for each study comparable to those reported in Bolier et al. [23]. The correlation between the effect sizes reported by Bolier et al. [23] and the effect sizes calculated through this replication was high, r = .81 [95% CI = (0.49, 0.94)]. Fig 19 shows the forest plot of effect sizes and displays no apparent small study size effects. A random effect model estimated an effect size of r = .14 [95% CI = (0.08, 0.21)] with moderate heterogeneity as measured by I2 = 23.6%.

Fig 19. Complete replication of Bolier et al. (2013) depression effect sizes: Forest plot of effect sizes.

The forest plot shows some scatter and suggests the presence of an outlier.

https://doi.org/10.1371/journal.pone.0216588.g019

Table 5. Effect sizes determined by the current study, for each depression measure and each study included in Bolier et al. (2013) depression meta-analysis.

https://doi.org/10.1371/journal.pone.0216588.t005

Fig 20, top panel, shows the scatter plot of effect sizes and sample sizes. Smaller studies tend to show larger effects than large studies. Fig 20, bottom panel, shows the funnel plot with some asymmetry. However, a regression test of the funnel plot symmetry indicated no statistically significant asymmetry, t(12) = 0.52, p = .611. The limit meta-analysis (Fig 20, bottom panel) estimated an effect size of r = .10 [95% CI = (0.01, 0.19)]. A test of small-study effects showed Q-Q'(1) = 0.38, p = .539 and a test of residual heterogeneity indicated Q(12) = 16.64, p = .164. However, these results are difficult to interpret due to the small number of studies.

Fig 20. Complete replication of Bolier et al. (2013) depression effect sizes: Relationship between effect sizes and sample sizes.

The top panel shows the scatter plot of effect sizes by study sizes. The bottom panel shows the funnel plot and the results of the limit meta-analysis including the estimated effect size taking into account small study effects.

https://doi.org/10.1371/journal.pone.0216588.g020

The effect size estimates were recalculated after the removal of an outlier (Seligman.2006.2). A random effect model estimated an effect size of r = .14 [95% CI = (0.09, 0.19)] with no heterogeneity as measured by I2 = 0%. A regression test of the funnel plot symmetry indicated no statistically significant asymmetry, t(11) = -0.17, p = .862. The limit meta-analysis estimated an effect size of r = .15 [95% CI = (0.06, 0.24)]. A test of small-study effects showed Q-Q'(1) = 0.03, p = .862 and a test of residual heterogeneity indicated Q(11) = 11.51, p = .402. The replication analyses indicated a somewhat higher effect for depression than that reported by Bolier et al. [23].

Summary

Table 6 summarizes the key findings from our reanalyses of the Sin and Lyubomirsky and Bolier et al. meta-analyses. For comparison, it also includes the effect sizes (rs) originally reported by Sin and Lyubomirsky and by Bolier et al. The table highlights that reanalyses of the data reported in the two previous meta-analyses resulted in much smaller effect sizes than those originally reported. Moreover, of the seven meta-analyses that yielded significant findings in the previous studies, only two remained statistically significant when reanalyzed in the current study, and one more depended on a single outlier.

Table 7 summarizes the key findings from our complete replications of the Sin and Lyubomirsky and Bolier et al. meta-analyses. The table highlights that our replications showed generally small effects of PPIs on well-being and depression, comparable to the effects found by our reanalyses of Sin and Lyubomirsky’s and Bolier et al.’s data.

Discussion

The first meta-analysis examining the effectiveness of PPIs on well-being, by Sin and Lyubomirsky [22], reported moderate effects on improving well-being and decreasing depression. A second meta-analysis by Bolier et al. [23] focused on randomized trials only and found much smaller effects of PPIs than the first meta-analysis. Bolier et al. attributed their smaller effects to their inclusion of only higher-quality studies. However, in addition to the differences in inclusion criteria, our detailed reading of the two meta-analyses suggested an alternative explanation for the discrepancy in the reported effect sizes. The discrepancy may be due to common methodological issues affecting many published meta-analyses, including (a) the failure to weight studies by their sample size, (b) the failure to describe the calculation of effect sizes in sufficient detail, and (c) the failure to consider and adjust for small sample size bias. Therefore, though Schueller et al. [24] correctly criticized the Bolier et al. study for its unreasonably narrow selection criteria and cautioned against drawing any conclusions from the Bolier et al. meta-analysis, there may be additional reasons that warrant caution.

Accordingly, our study had two major objectives. First, we reanalyzed the reported data from the two previous meta-analyses while taking into account study sizes and small sample size bias. Second, we replicated both meta-analyses starting with extracting relevant effect sizes directly from the primary studies rather than relying on the data published in the previous meta-analyses. In conducting these meta-analyses, the data were analyzed using a weighted random effects model while taking into account small sample size bias using the selected methods discussed above.

Our reanalysis of the effect sizes reported by Sin and Lyubomirsky [22] revealed much smaller effect size estimates for both well-being (r = .08) and depression (r = .04) than the previous authors reported (r = .29 and r = .31, respectively). There were two major reasons for the inflated estimates reported by Sin and Lyubomirsky. First, Sin and Lyubomirsky reported effect size estimates as simple unweighted averages of study-level effect sizes (i.e., they averaged rs across the studies included in their meta-analysis). This approach is inappropriate because it gives equal weight to small- and large-size studies [26]. Second, Sin and Lyubomirsky noted that their effect sizes resulted in asymmetric funnel plots, but they used the Fail Safe N to conclude that small-study effects did not significantly inflate their findings. However, the Fail Safe N is no longer considered an appropriate way to assess small-study effects [26]. The present study’s reanalysis confirmed that the funnel plots were asymmetric for both well-being and depression, and the random effects limit meta-analysis estimates are much smaller (and not statistically significant for depression) due to small-study effects. The replication of the Sin and Lyubomirsky [22] meta-analyses revealed relatively high correlations between the effect sizes determined by the current study and those reported in the previous study for both well-being and depression. Consistent with the similar effect sizes extracted from the primary studies, the replication analyses and estimated effect sizes for well-being and for depression were very similar to those obtained by our reanalyses of the effect sizes reported by Sin and Lyubomirsky. The replication analyses resulted in nearly the same findings as those from the reanalyses even though several studies that did not report the data essential to calculate effect sizes were excluded from the replications.

Our reanalysis of the effect sizes reported by Bolier et al. [23] revealed the same estimated effect size for subjective well-being (r = .17) as reported by Bolier et al. However, the estimated effect sizes for psychological well-being (r = .02) and depression (r = .02) were smaller (and no longer statistically significant) than those originally reported by Bolier et al. (r = .10 and r = .11, respectively). When outliers were removed, the estimated effect size for psychological well-being was r = .01 and for depression was r = .07. The latter result is partially attributable to the test of funnel plot asymmetry being no longer statistically significant, in part due to the smaller number of effect sizes included. However, the limit meta-analysis estimated the effect size for depression after the removal of outliers as r = .03. The replication of the Bolier et al. [23] meta-analyses revealed relatively high correlations between the effect sizes determined by the current study and those reported in their meta-analysis for subjective well-being, psychological well-being, and depression. Despite the removal of several original studies (due to insufficient data to calculate effect sizes), the results of the replication analyses of subjective well-being and psychological well-being were very similar to those obtained by the reanalyses. The replication of depression effects resulted in a slightly larger estimated effect size of r = .14. However, these results need to be viewed with caution as they are based on a small number of studies. Moreover, even though the small-study effects were not statistically significant, the number of studies was small and the scatter plots of effect sizes and study sample sizes show that large-size studies resulted in substantially smaller effects than small-size studies.

In summary, the reanalyses and replications of Sin and Lyubomirsky [22] and Bolier et al. [23] indicate that there is a small effect of approximately r = .10 of PPIs on well-being. In contrast, the effect of PPIs on depression was nearly zero when based on the studies included in Sin and Lyubomirsky [22] and highly variable, and sensitive to outliers, when based on studies included in Bolier et al. [23]. Notably, Sin and Lyubomirsky [22] included nearly twice as many studies as Bolier et al. [23] in their meta-analysis of the effects of PPIs on depression.

Our review of the two highly cited meta-analyses of PPIs resulted in a number of secondary findings and implications. First, the major reason for the larger effects reported in previous meta-analyses was that these studies did not appropriately account for prevalent small-study effects. Small-study effects are a frequent problem in meta-analyses across many fields, and a number of methods (e.g., cumulative meta-analysis, TOP10, limit meta-analysis) have been developed to estimate effect sizes in their presence. Unfortunately, these methods were not employed in the previous meta-analyses addressed by the current study. Given the presence of small-study effects, future meta-analyses of PPIs must take small-study effects into account using appropriate estimation methods.

Second, these findings are tentative because the previous meta-analyses did not include all available studies. To illustrate, Bolier et al.’s [23] inclusion criteria are restrictive because they excluded (a) all relevant studies published prior to the coining of the term “Positive Psychology”, (b) all studies of the effects of mindfulness and meditation on well-being, and (c) all studies that did not explicitly mention “positive psychology”. As pointed out by Schueller et al. [24], Bolier et al.’s inclusion criteria are too narrow and exclude numerous studies that use the same interventions and the same outcome measures. If a substantial number of relevant studies were not included, the findings based on only a small sample of relevant studies may not reflect the cumulative findings across the population of previous studies. In turn, not conducting a comprehensive search for primary studies also reduces meta-analysts’ ability to conduct meaningful moderator analyses [24].

Third, the failure to include all available studies in the previous meta-analyses suggests the need for a comprehensive meta-analysis of the effects of PPIs on well-being, starting with a comprehensive search for relevant studies. A preliminary PsycInfo search using only the most obvious strategy (all studies mentioning both “positive psychology” and at least one of the terms “intervention”, “therapy”, or “treatment”) yielded over 200 relevant studies, more than tripling the number of studies included in the previous meta-analyses.

Fourth, our review of the primary studies included in the previous meta-analyses revealed persistent limitations in their method and results sections. In general, no primary studies with pre-post designs reported pre-post correlations for outcome measures, which are necessary to calculate the most appropriate effect sizes [101]. Although the authors of a number of these primary studies were contacted by email, they did not provide these correlations. As a result, the current study relied primarily on post-test data only, following the approach adopted by Bolier et al. [23]. These findings suggest that researchers need to report all necessary statistical information to facilitate future replications and meta-analyses. Although guidelines for reporting study results exist, such as JARS [124], researchers appear slow to adopt them, and the present findings underscore the need to push for the adoption of such guidelines in the PPI field.
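To see why the missing pre-post correlations matter, consider the pretest-posttest-control effect size of Morris [101]: the point estimate can be computed from reported means, pretest SDs, and sample sizes, but its sampling variance cannot be obtained without the pre-post correlation. A sketch of the point estimate (our own illustrative code with a hypothetical function name, following Morris's formulation):

```python
import math

def d_ppc(m_pre_t, m_post_t, m_pre_c, m_post_c,
          sd_pre_t, sd_pre_c, n_t, n_c):
    """Pretest-posttest-control effect size (Morris, 2008): the
    difference in mean gains between treatment and control groups,
    standardized by the pooled pretest SD, with a small-sample
    bias-correction factor. Note: the sampling variance of this
    estimate additionally requires the pre-post correlation, which
    the primary studies reviewed here did not report."""
    sd_pool = math.sqrt(((n_t - 1) * sd_pre_t**2 + (n_c - 1) * sd_pre_c**2)
                        / (n_t + n_c - 2))
    cp = 1 - 3 / (4 * (n_t + n_c - 2) - 1)  # small-sample bias correction
    return cp * ((m_post_t - m_pre_t) - (m_post_c - m_pre_c)) / sd_pool

# Treatment gains 4 points, control gains 1; pretest SD = 2; n = 20 per group.
print(round(d_ppc(10, 14, 10, 11, 2, 2, 20, 20), 2))  # ≈ 1.47
```

Because the variance term cannot be computed without the pre-post correlation, a meta-analyst facing the reporting gaps described above must either obtain the correlations from the authors or, as done here and by Bolier et al. [23], fall back on post-test comparisons.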

Fifth, the diverse inclusion and exclusion criteria of the previous meta-analyses make it evident that there is no consensus on what constitutes a PPI. Bolier et al. [23] excluded interventions that others consider PPIs (e.g., mindfulness and meditation). Bolier et al. even speculated that different inclusion criteria and differences in study designs explained the discrepancies between their findings and those of Sin and Lyubomirsky [22]. However, the current reanalysis casts doubt on this explanation because the findings were comparable once small-study effects were taken into account. The definition of a PPI is critical for determining which studies to include in future meta-analyses. Schueller et al. [24] argued that including only studies that mention “positive psychology” would miss many positive-intervention studies. Similarly, Parks and Biswas-Diener [125] acknowledged that it can be arduous to define interventions aimed at increasing the ‘positives’. Clearly, this needs to be addressed in the near future.

Thus, the “true” effects of PPIs may be substantially different from what the Sin and Lyubomirsky [22] and Bolier et al. [23] meta-analyses indicate. While our reanalyses and replications of these meta-analyses converge in indicating that the effects of PPIs are relatively small when small sample size bias is taken into account, the effect size estimates are not definitive because neither meta-analysis was comprehensive and a large number of relevant studies are likely missing.

Accordingly, a comprehensive and transparent meta-analysis of all relevant studies of PPIs is necessary and is likely to have a major influence on the field. Such a meta-analysis would allow for meaningful moderator analyses answering questions such as: Is group administration more effective than individual administration? Are longer interventions more effective than shorter ones? Are some types of interventions more effective than others? Importantly, a comprehensive meta-analysis is likely to provide a more definitive determination of how effective PPIs are at increasing well-being.

Given that our meta-analyses indicate that the effects of PPIs on well-being and depression may be smaller than previously reported, future research may need to employ strategies likely to increase the effectiveness of PPIs. For example, PPIs are likely to be more effective if they are deployed over longer periods of time [126]. Some researchers have criticized the use of single short-duration PPIs in some areas [127], and others have argued that PPIs ought to be deployed over longer periods of time [128], as was done in only some of the PPI studies [129]. Moreover, the effectiveness of PPIs may depend not only on overall duration but also on the frequency of the interventions. Finally, a combination of two or three PPIs (e.g., best possible self combined with gratitude letters) may be more effective than a single type of PPI of equal duration [130].

Conclusions

The current study reanalyzed the data reported in previous meta-analyses examining the effectiveness of PPIs in increasing well-being and decreasing depression, and also fully replicated those meta-analyses by extracting data from the original sources. The reanalysis of the previously reported data showed that although the correlations between the recalculated effect sizes and the previously reported effect sizes were fairly high (suggesting that the same data were extracted), the effect sizes were lower than previously reported and often nonsignificant. The major factor contributing to this discrepancy was that the present study accounted for the strong presence of small sample size bias. Critically, both reviewed meta-analyses omitted a large number of relevant studies, and thus, effect sizes estimated from their samples of primary studies need to be confirmed by future, more comprehensive, meta-analyses. Accordingly, a comprehensive and transparent meta-analysis of all relevant studies of PPIs is necessary. Such a meta-analysis will allow for meaningful moderator analyses to determine the effects of various PPIs, including whether individual PPIs are more effective than group PPIs, and whether longer and more intense PPIs are more effective than shorter and less intense interventions. Our research underscores that any future meta-analysis of PPI effectiveness ought to take into account frequent methodological issues such as prevalent small sample size bias.

References

1. Seligman MEP, Rashid T, Parks AC. Positive psychotherapy. Am Psychol. 2006;61: 774–788. pmid:17115810
2. Ryan RM, Deci EL. On happiness and human potentials: a review of research on hedonic and eudaimonic well-being. Annu Rev Psychol. 2001;52: 141–166. pmid:11148302
3. Seligman MEP, Csikszentmihalyi M. Positive psychology: An introduction. Am Psychol. 2000;55: 5–14. pmid:11392865
4. Seligman MEP, Steen TA, Park N, Peterson C. Positive psychology progress: Empirical validation of interventions. Am Psychol. 2005;60: 410–421. pmid:16045394
5. WHO | World Health Organization. In: WHO [Internet]. [cited 20 Sep 2015]. Available: http://www.who.int/about/en/
6. Seligman MEP. Flourish: A Visionary New Understanding of Happiness and Well-being. Simon and Schuster; 2011.
7. Diener E. Subjective well-being. Psychol Bull. 1984;95: 542–575. pmid:6399758
8. Diener E. Subjective well-being: The science of happiness and a proposal for a national index. Am Psychol. 2000;55: 34–43. pmid:11392863
9. Ryff CD. Happiness is everything, or is it? Explorations on the meaning of psychological well-being. J Pers Soc Psychol. 1989;57: 1069–1081.
10. Keyes CLM, Shmotkin D, Ryff CD. Optimizing well-being: The empirical encounter of two traditions. J Pers Soc Psychol. 2002;82: 1007–1022. pmid:12051575
11. Keyes CLM. Mental illness and/or mental health? Investigating axioms of the complete state model of health. J Consult Clin Psychol. 2005;73: 539–548. pmid:15982151
12. Csikszentmihalyi M. Flow and the Foundations of Positive Psychology: The Collected Works of Mihaly Csikszentmihalyi. Springer; 2014.
13. Fordyce MW. Development of a program to increase personal happiness. J Couns Psychol. 1977;24: 511–521.
14. Peterson C, Park N, Seligman MEP. Orientations to happiness and life satisfaction: The full life versus the empty life. J Happiness Stud. 2005;6: 25–41.
15. Fredrickson BL, Cohn MA, Coffey K, Pek J, Finkel S. Open hearts build lives: positive emotions, induced through loving-kindness meditation, build consequential personal resources. J Pers Soc Psychol. 2008;95: 1045–1062. pmid:18954193
16. Lyubomirsky S, Sheldon KM, Schkade D. Pursuing happiness: The architecture of sustainable change. Rev Gen Psychol. 2005;9: 111–131.
17. Cheavens JS, Feldman DB, Gum A, Michael ST, Snyder CR. Hope therapy in a community sample: A pilot investigation. Soc Indic Res. 2006;77: 61–78.
18. Sheldon KM, Lyubomirsky S. How to increase and sustain positive emotion: The effects of expressing gratitude and visualizing best possible selves. J Posit Psychol. 2006;1: 73–82.
19. Niemiec R, Rashid T, Spinella M. Strong mindfulness: Integrating mindfulness and character strengths. J Ment Health Couns. 2012;34: 240–253.
20. Fava GA, Rafanelli C, Cazzaro M, Conti S, Grandi S. Well-being therapy. A novel psychotherapeutic approach for residual symptoms of affective disorders. Psychol Med. 1998;28: 475–480. pmid:9572104
21. Fava GA, Ruini C. Development and characteristics of a well-being enhancing psychotherapeutic strategy: Well-being therapy. J Behav Ther Exp Psychiatry. 2003;34: 45–63. pmid:12763392
22. Sin NL, Lyubomirsky S. Enhancing well-being and alleviating depressive symptoms with positive psychology interventions: A practice-friendly meta-analysis. J Clin Psychol. 2009;65: 467–487. pmid:19301241
23. Bolier L, Haverman M, Westerhof GJ, Riper H, Smit F, Bohlmeijer E. Positive psychology interventions: A meta-analysis of randomized controlled studies. BMC Public Health. 2013;13: 119. pmid:23390882
24. Schueller S, Kashdan T, Parks A. Synthesizing positive psychological interventions: Suggestions for conducting and interpreting meta-analyses. Int J Wellbeing. 2014;4. Available: http://www.internationaljournalofwellbeing.org/index.php/ijow/article/view/310
25. Fordyce MW. A program to increase happiness: Further studies. J Couns Psychol. 1983;30: 483–498.
26. Borenstein M, Hedges LV, Higgins JPT, Rothstein HR. Introduction to Meta-Analysis. Wiley; 2009.
27. Morris SB. Estimating effect sizes from the pretest-posttest-control group designs. Organ Res Methods. 2008;
28. Dunlap WP, Cortina JM, Vaslow JB, Burke MJ. Meta-analysis of experiments with matched groups or repeated measures designs. Psychol Methods. 1996;1: 170–177.
29. Dunlap WP, Cortina JM, Vaslow JB, Burke MJ. Meta-analysis of experiments with matched groups or repeated measures designs. Psychol Methods. 1996;1: 170–177.
30. Lakens D. Calculating and reporting effect sizes to facilitate cumulative science: A practical primer for t-tests and ANOVAs. Front Psychol. 2013;4: 863. pmid:24324449
31. Hedges LV. Estimating the normal mean and variance under a publication selection model. In: Gleser LJ, Perlman MD, Press SJ, Sampson AR, editors. Contributions to Probability and Statistics. Springer New York; 1989. pp. 447–458. https://doi.org/10.1007/978-1-4612-3678-8_31
32. Duval S, Tweedie R. Trim and fill: A simple funnel-plot-based method of testing and adjusting for publication bias in meta-analysis. Biometrics. 2000;56: 455–463. pmid:10877304
33. Borenstein M, Hedges LV, Higgins JPT, Rothstein HR. Introduction to Meta-Analysis. Wiley; 2009.
34. Stanley TD, Doucouliagos H. Meta-regression approximations to reduce publication selection bias. Res Synth Methods. 2014;5: 60–78. pmid:26054026
35. Sterne JA, Egger M, Smith GD. Systematic reviews in health care: Investigating and dealing with publication and other biases in meta-analysis. BMJ. 2001;323: 101–105. pmid:11451790
36. Hempel S, Miles JN, Booth MJ, Wang Z, Morton SC, Shekelle PG. Risk of bias: A simulation study of power to detect study-level moderator effects in meta-analysis. Syst Rev. 2013;2: 107. pmid:24286208
37. Burton CM, King LA. The health benefits of writing about intensely positive experiences. J Res Personal. 2004;38: 150–163.
38. Cook EA. Effects of reminiscence on life satisfaction of elderly female nursing home residents. Health Care Women Int. 1998;19: 109–118. pmid:9526331
39. Davis MC. Life review therapy as an intervention to manage depression and enhance life satisfaction in individuals with right hemisphere cerebral vascular accidents. Issues Ment Health Nurs. 2004;25: 503–515. pmid:15204893
40. Della Porta MD, Sin NL, Lyubomirsky S. Searching for the placebo effect in happiness-enhancing interventions: An experimental longitudinal study with depressed participants. Tampa, FL; 2009.
41. Emmons RA, McCullough ME. Counting blessings versus burdens: An experimental investigation of gratitude and subjective well-being in daily life. J Pers Soc Psychol. 2003;84: 377–389. pmid:12585811
42. Fava GA, Ruini C, Rafanelli C, Finos L, Salmaso L, Mangelli L, et al. Well-being therapy of generalized anxiety disorder. Psychother Psychosom. 2005;74: 26–30. pmid:15627853
43. Fordyce MW. A program to increase happiness: Further studies. J Couns Psychol. 1983;30: 483–498.
44. Freedman SR, Enright RD. Forgiveness as an intervention goal with incest survivors. J Consult Clin Psychol. 1996;64: 983–992. pmid:8916627
45. Froh JJ, Sefick WJ, Emmons RA. Counting blessings in early adolescents: An experimental study of gratitude and subjective well-being. J Sch Psychol. 2008;46: 213–233. pmid:19083358
46. Goldstein ED. Sacred moments: Implications on well-being and stress. J Clin Psychol. 2007;63: 1001–1019. pmid:17828765
47. Green LS, Oades LG, Grant AM. Cognitive-behavioral, solution-focused life coaching: Enhancing goal striving, well-being, and hope. J Posit Psychol. 2006;1: 142–149.
48. Grossman P, Tiefenthaler-Gilmer U, Raysz A, Kesper U. Mindfulness training as an intervention for fibromyalgia: Evidence of postintervention and 3-year follow-up benefits in well-being. Psychother Psychosom. 2007;76: 226–233. pmid:17570961
49. King LA. The health benefits of writing about life goals. Pers Soc Psychol Bull. 2001;27: 798–807.
50. Kremers I, Steverink N, Albersnagel F, Slaets J. Improved self-management ability and well-being in older women after a short group intervention. Aging Ment Health. 2006;10: 476–484. pmid:16938683
51. Lichter S, Haye K, Kammann R. Increasing happiness through cognitive retraining. N Z Psychol. 1980;9: 57–64.
52. Low CA, Stanton AL, Danoff-Burg S. Expressive disclosure and benefit finding among breast cancer patients: Mechanisms for positive health effects. Health Psychol. 2006;25: 181–189. pmid:16569109
53. Macleod AK, Coates E, Hetherton J. Increasing well-being through teaching goal-setting and planning skills: results of a brief intervention. J Happiness Stud. 2008;9: 185–196. https://doi.org/10.1007/s10902-007-9057-2
54. Otake K, Shimai S, Tanaka-Matsumi J, Otsui K, Fredrickson BL. Happy people become happier through kindness: A counting kindnesses intervention. J Happiness Stud. 2006;7: 361–375. pmid:17356687
55. Reed GL, Enright RD. The effects of forgiveness therapy on depression, anxiety, and posttraumatic stress for women after spousal emotional abuse. J Consult Clin Psychol. 2006;74: 920–929. pmid:17032096
56. Ruini C, Belaise C, Brombin C, Caffo E, Fava GA. Well-being therapy in school settings: A pilot study. Psychother Psychosom. 2006;75: 331–336. pmid:17053333
57. Sheldon KM, Kasser T, Smith K, Share T. Personal goals and psychological growth: Testing an intervention to enhance goal attainment and personality integration. J Pers. 2002;70: 5–31. pmid:11908535
58. Tkach C, Lyubomirsky S. How do people pursue happiness?: Relating personality, happiness-increasing strategies, and well-being. J Happiness Stud. 2006;7: 183–225.
59. Watkins PC, Woodward K, Stone T, Kolts RL. Gratitude and happiness: Development of a measure of gratitude, and relationships with subjective well-being. Soc Behav Personal Int J. 2003;31: 431–451.
60. Wing JF, Schutte NS, Byrne B. The effect of positive writing on emotional intelligence and life satisfaction. J Clin Psychol. 2006;62: 1291–1302. pmid:16810662
61. Zautra AJ, Davis MC, Reich JW, Nicassario P, Tennen H, Finan P, et al. Comparison of cognitive behavioral and mindfulness meditation interventions on adaptation to rheumatoid arthritis for patients with and without history of recurrent depression. J Consult Clin Psychol. 2008;76: 408–421. pmid:18540734
62. Seligman MEP, Rashid T, Parks AC. Positive psychotherapy. Am Psychol. 2006;61: 774–788. pmid:17115810
63. Bédard M, Felteau M, Mazmanian D, Fedyk K, Klein R, Richardson J, et al. Pilot evaluation of a mindfulness-based intervention to improve quality of life among individuals who sustained traumatic brain injuries. Disabil Rehabil. 2003;25: 722–731. pmid:12791557
64. Spence GB, Grant AM. Professional and peer life coaching and the enhancement of goal striving and well-being: An exploratory study. J Posit Psychol. 2007;2: 185–194.
65. Smith WP, Compton WC, West WB. Meditation as an adjunct to a happiness enhancement program. J Clin Psychol. 1995;51: 269–273. pmid:7797651
66. King LA, Miner KN. Writing about the perceived benefits of traumatic events: Implications for physical health. Pers Soc Psychol Bull. 2000;26: 220–230.
67. Rashid T, Anjum A. Positive psychotherapy for young adults and children. In: Abela JRZ, Hankin BL, editors. Handbook of depression in children and adolescents. New York, NY, US: Guilford Press; 2008. pp. 250–287.
68. Lyubomirsky S, Sheldon KM, Schkade D. Pursuing happiness: The architecture of sustainable change. Rev Gen Psychol. 2005;9: 111–131.
69. Lyubomirsky S, Dickerhoof R, Boehm JK, Sheldon KM. Becoming happier takes both a will and a proper way: An experimental longitudinal intervention to boost well-being. Emotion. 2011;11: 391–402. pmid:21500907
70. Surawy C, Roberts J, Silver A. The effect of mindfulness training on mood and measures of fatigue, activity, and quality of life in patients with chronic fatigue syndrome on a hospital waiting list: A series of exploratory studies. Behav Cogn Psychother. 2005;33: 103–109.
71. Bédard M, Felteau M, Mazmanian D, Fedyk K, Klein R, Richardson J, et al. Pilot evaluation of a mindfulness-based intervention to improve quality of life among individuals who sustained traumatic brain injuries. Disabil Rehabil. 2003;25: 722–731. pmid:12791557
72. Lin W-F, Mack D, Enright RD, Krahn D, Baskin TW. Effects of forgiveness therapy on anger, mood, and vulnerability to substance use among inpatient substance-dependent clients. J Consult Clin Psychol. 2004;72: 1114–1121. pmid:15612857
73. Boehm JK, Lyubomirsky S, Sheldon KM. A longitudinal experimental study comparing the effectiveness of happiness-enhancing strategies in Anglo Americans and Asian Americans. Cogn Emot. 2011;25: 1263–1272. pmid:21432648
74. Buchanan KE, Bardi A. Acts of kindness and acts of novelty affect life satisfaction. J Soc Psychol. 2010;150: 235–237. pmid:20575332
75. Burton CM, King LA. The health benefits of writing about intensely positive experiences. J Res Personal. 2004;38: 150–163.
76. Frieswijk N, Steverink N, Buunk BP, Slaets JPJ. The effectiveness of a bibliotherapy in increasing the self-management ability of slightly to moderately frail older people. Patient Educ Couns. 2006;61: 219–227. pmid:15939567
77. Grant AM, Curtayne L, Burton G. Executive coaching enhances goal attainment, resilience and workplace well-being: A randomised controlled study. J Posit Psychol. 2009;4: 396–407.
78. Hurley DB, Kwon P. Results of a study to increase savoring the moment: Differential impact on positive and negative outcomes. J Happiness Stud. 2012;13: 579–588.
79. Kremers IP, Steverink N, Albersnagel FA, Slaets JPJ. Improved self-management ability and well-being in older women after a short group intervention. Aging Ment Health. 2006;10: 476–484. pmid:16938683
80. Layous K, Nelson SK, Lyubomirsky S. What is the optimal way to deliver a positive activity intervention? The case of writing about one’s best possible selves. J Happiness Stud. 2012;14: 635–654.
81. Lyubomirsky S, Sousa L, Dickerhoof R. The costs and benefits of writing, talking, and thinking about life’s triumphs and defeats. J Pers Soc Psychol. 2006;90: 692–708. pmid:16649864
82. Mitchell J, Stanimirovic R, Klein B, Vella-Brodrick D. A randomised controlled trial of a self-guided internet intervention promoting well-being. Comput Hum Behav. 2009;25: 749–760.
83. Page KM, Vella-Brodrick DA. The working for wellness program: RCT of an employee well-being intervention. J Happiness Stud. 2012;14: 1007–1031.
84. Peters ML, Flink IK, Boersma K, Linton SJ. Manipulating optimism: Can imagining a best possible self be used to increase positive future expectancies? J Posit Psychol. 2010;5: 204–211.
85. Quoidbach J, Wood AM, Hansenne M. Back to the future: The effect of daily practice of mental time travel into the future on happiness and anxiety. J Posit Psychol. 2009;4: 349–355.
86. Sheldon KM, Lyubomirsky S. How to increase and sustain positive emotion: The effects of expressing gratitude and visualizing best possible selves. J Posit Psychol. 2006;1: 73–82.
87. Wing JF, Schutte NS, Byrne B. The effect of positive writing on emotional intelligence and life satisfaction. J Clin Psychol. 2006;62: 1291–1302. pmid:16810662
88. Grant AM. Making positive change: A randomized study comparing solution-focused vs. problem-focused coaching questions. J Syst Ther. 2012;31: 21–35.
89. Martínez-Martí ML, Avia MD, Hernández-Lloreda MJ. The effects of counting blessings on subjective well-being: A gratitude intervention in a Spanish sample. Span J Psychol. 2010;13: 886–896. pmid:20977036
90. Green LS, Oades LG, Grant AM. Cognitive-behavioral, solution-focused life coaching: Enhancing goal striving, well-being, and hope. J Posit Psychol. 2006;1: 142–149.
91. Abbott J-A, Klein B, Hamilton C, Rosenthal AJ. The impact of online resilience training for sales managers on wellbeing and performance. E-J Appl Psychol. 2009;5: 89–95.
92. Feldman DB, Dreher DE. Can hope be changed in 90 minutes? Testing the efficacy of a single-session goal-pursuit intervention for college students. J Happiness Stud. 2011;13: 745–759.
93. Gander F, Proyer RT, Ruch W, Wyss T. Strength-based positive interventions: Further evidence for their potential in enhancing well-being and alleviating depression. J Happiness Stud. 2012;14: 1241–1259.
94. Luthans F, Avey JB, Patera JL. Experimental analysis of a web-based training intervention to develop positive psychological capital. Acad Manag Learn Educ. 2008;7: 209–221.
95. Luthans F, Avey JB, Avolio BJ, Peterson SJ. The development and resulting performance impact of positive psychological capital. Hum Resour Dev Q. 2010;21: 41–67.
96. Shapira LB, Mongrain M. The benefits of self-compassion and optimism exercises for individuals vulnerable to depression. J Posit Psychol. 2010;5: 377–389.
97. Mongrain M, Anselmo-Matthews T. Do positive psychology exercises work? A replication of Seligman et al. (2005). J Clin Psychol. 2012;68: 382–389. pmid:24469930
98. Mongrain M, Chin JM, Shapira LB. Practicing compassion increases happiness and self-esteem. J Happiness Stud. 2011;12: 963–981.
99. Sergeant S, Mongrain M. Are positive psychology exercises helpful for people with depressive personality styles? J Posit Psychol. 2011;6: 260–272.
100. Schueller SM, Parks AC. Disseminating self-help: Positive psychology exercises in an online trial. J Med Internet Res. 2012;14: e63. pmid:22732765
101. Morris SB. Estimating effect sizes from the pretest-posttest-control group designs. Organ Res Methods. 2008;
102. Schmidt FL, Hunter JE. Methods of Meta-Analysis: Correcting Error and Bias in Research Findings. SAGE Publications; 2014.
103. Wampold BE, Mondin GW, Moody M, Stich F, Benson K, Ahn H. A meta-analysis of outcome studies comparing bona fide psychotherapies: Empirically, “all must have prizes.” Psychol Bull. 1997;122: 203–215.
104. Re AD, Hoyt WT. MAc: Meta-Analysis with Correlations. R package version 1.1 [Internet]. 2012. Available: http://CRAN.R-project.org/package=MAc
105. Emmons RA, McCullough ME. Counting blessings versus burdens: An experimental investigation of gratitude and subjective well-being in daily life. J Pers Soc Psychol. 2003;84: 377–389. pmid:12585811
106. Low CA, Stanton AL, Danoff-Burg S. Expressive disclosure and benefit finding among breast cancer patients: Mechanisms for positive health effects. Health Psychol. 2006;25: 181–189. pmid:16569109
107. Tkach CT. Unlocking the treasury of human kindness: Enduring improvements in mood, happiness, and self-evaluations [Internet]. 2005. Available: http://search.proquest.com/docview/305002749?accountid=14521
108. Pretorius C, Venter C, Temane M, Wissing M. The design and evaluation of a hope enhancement program for adults. J Psychol Afr. 2008;18: 301–310.
109. Mongrain M, Chin JM, Shapira LB. Practicing compassion increases happiness and self-esteem. J Happiness Stud. 2011;12: 963–981.
110. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JPA, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: Explanation and elaboration. PLoS Med. 2009;6. pmid:19621070
111. Cochran WG. The combination of estimates from different experiments. Biometrics. 1954;10: 101–129.
112. Higgins JPT, Thompson SG. Quantifying heterogeneity in a meta-analysis. Stat Med. 2002;21: 1539–1558. pmid:12111919
113. Huedo-Medina TB, Sánchez-Meca J, Marín-Martínez F, Botella J. Assessing heterogeneity in meta-analysis: Q statistic or I2 index? Psychol Methods. 2006;11: 193–206. pmid:16784338
114. Stanley TD, Doucouliagos H. Meta-regression approximations to reduce publication selection bias. Res Synth Methods. 2014;5: 60–78. pmid:26054026
115. Rücker G, Schwarzer G, Carpenter JR, Binder H, Schumacher M. Treatment-effect estimates adjusted for small-study effects via a limit meta-analysis. Biostat Oxf Engl. 2011;12: 122–142. pmid:20656692
116. Rücker G, Carpenter JR, Schwarzer G. Detecting and adjusting for small-study effects in meta-analysis. Biom J. 2011;53: 351–368. pmid:21374698
117. R Core Team. R: A language and environment for statistical computing [Internet]. Vienna, Austria; 2015. Available: https://www.R-project.org/
118. Re AD. compute.es: Compute Effect Sizes [Internet]. 2013. Available: https://cran.r-project.org/web/packages/compute.es/index.html
119. Schwarzer G. meta: General Package for Meta-Analysis [Internet]. 2015. Available: https://cran.r-project.org/web/packages/meta/index.html
120. Viechtbauer W. Conducting meta-analyses in R with the metafor package. J Stat Softw. 2010;36. Available: https://www.jstatsoft.org/article/view/v036i03
121. Schwarzer G, Carpenter J, Rücker G. metasens: Advanced Statistical Methods to Model and Adjust for Bias in Meta-Analysis [Internet]. 2014. Available: https://cran.r-project.org/web/packages/metasens/index.html
122. Cooper H, Hedges LV, editors. The Handbook of Research Synthesis. New York: Russell Sage Foundation; 1993.
123. Viechtbauer W, Cheung MW-L. Outlier and influence diagnostics for meta-analysis. Res Synth Methods. 2010;1: 112–125. pmid:26061377
124. APA Publications and Communications Board Working Group on Journal Article Reporting Standards. Reporting standards for research in psychology: Why do we need them? What might they be? Am Psychol. 2008;63: 839. pmid:19086746
125. Parks AC, Biswas-Diener R. Positive interventions: Past, present, and future. In: Bridging acceptance and commitment therapy and positive psychology: A practitioner’s guide to a unifying framework. Oakland, CA: New Harbinger; 2013.
126. Howell AJ, Passmore H-A, Holder MD. Implicit theories of well-being predict well-being and the endorsement of therapeutic lifestyle changes. J Happiness Stud. 2016;17: 2347–2363.
127. McMahan EA, Estes D. The effect of contact with natural environments on positive and negative affect: A meta-analysis. J Posit Psychol. 2015;10: 507–519.
128. Capaldi CA, Passmore H-A, Nisbet EK, Zelenski JM, Dopko RL. Flourishing in nature: A review of the benefits of connecting with nature and its application as a wellbeing intervention. Int J Wellbeing. 2015;5.
129. Passmore H-A, Holder MD. Noticing nature: Individual and social benefits of a two-week intervention. 2016;0: 1–10.
130. Kushlev K, Heintzelman SJ, Lutes LD, Wirtz D, Oishi S, Diener E. ENHANCE: Design and rationale of a randomized controlled trial for promoting enduring happiness & well-being. Contemp Clin Trials. 2017;52: 62–74. pmid:27838475