
Systematic review and meta-analysis of Mental Health First Aid training: Effects on knowledge, stigma, and helping behaviour

  • Amy J. Morgan,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Writing – original draft

    Affiliation Centre for Mental Health, Melbourne School of Population and Global Health, The University of Melbourne, Parkville, Victoria, Australia

  • Anna Ross,

    Roles Conceptualization, Data curation, Investigation, Methodology, Writing – review & editing

    Affiliation Centre for Mental Health, Melbourne School of Population and Global Health, The University of Melbourne, Parkville, Victoria, Australia

  • Nicola J. Reavley

    Roles Conceptualization, Methodology, Visualization, Writing – review & editing

    Affiliation Centre for Mental Health, Melbourne School of Population and Global Health, The University of Melbourne, Parkville, Victoria, Australia



Objective

To provide an up-to-date assessment of the effectiveness of the Mental Health First Aid (MHFA) training program on improving mental health knowledge, stigma and helping behaviour.


Design

Systematic review and meta-analysis.


Methods

A systematic search of electronic databases was conducted in October 2017 to identify randomised controlled trials or controlled trials of the MHFA program. Eligible trials were in adults, used any comparison condition, and assessed one or more of the following outcomes: mental health first aid knowledge; recognition of mental disorders; treatment knowledge; stigma and social distance; confidence in or intentions to provide mental health first aid; provision of mental health first aid; mental health of trainees or recipients of mental health first aid. Risk of bias was assessed and effect sizes (Cohen’s d) were pooled using a random effects model. Separate meta-analyses examined effects at post-training, up to 6 months post-training, and greater than 6 months post-training.


Results

A total of 18 trials (5936 participants) were included. Overall, effects were generally small-to-moderate post-training and up to 6 months later, with effects up to 12 months later unclear. MHFA training led to improved mental health first aid knowledge (ds 0.31–0.72), recognition of mental disorders (ds 0.22–0.52) and beliefs about effective treatments (ds 0.19–0.45). There were also small reductions in stigma (ds 0.08–0.14). Improvements were also observed in confidence in helping a person with a mental health problem (ds 0.21–0.58) and intentions to provide first aid (ds 0.26–0.75). There were small improvements in the amount of help provided to a person with a mental health problem at follow-up (d = 0.23) but changes in the quality of behaviours offered were unclear.


Conclusions

This review supports the effectiveness of MHFA training in improving mental health literacy and appropriate support for those with mental health problems up to 6 months after training.

Trial registration

PROSPERO (CRD42017060596)


Introduction

Up to 1 in 5 adults will develop a mental health problem in any year [1]. Because of this high prevalence, members of the public are likely to have contact with someone who has a mental health problem. The Mental Health First Aid (MHFA) course was developed in 2000 to teach community members first-aid skills to support people with mental health problems [2]. Developed in partnership between Betty Kitchener, an educator and mental health consumer, and Professor Tony Jorm, a mental health researcher, it adapts a familiar model from physical first aid training for injuries and emergencies to mental health problems. Mental health first aid is the help provided to a person who is developing a mental health problem or who is in a mental health crisis, until appropriate professional help is received or the crisis resolves. The course teaches how to recognise the clusters of symptoms of different mental disorders and mental health crises, how to offer and provide initial help, and how to guide a person towards appropriate treatments and other supportive help. This is summarised in the ‘ALGEE’ Action Plan [3]: Approach the person, assess and assist with any crisis; Listen and communicate non-judgmentally; Give support and information; Encourage the person to get appropriate professional help; and Encourage other supports.

From its beginnings in Australia, the MHFA program has spread worldwide, with more than 22 countries adopting the program and over 2 million people attending a course [4]. The Standard adult course is 12 hours long and includes information on depression, anxiety problems, psychosis, substance use problems, and crisis situations (e.g. suicide and self-harm, panic attacks, drug/alcohol overdose). As well as the Standard MHFA course for adults, there is a tailored course for adults assisting adolescents (Youth MHFA), which contains additional teaching about eating disorders. There are also specialised courses for various cultural and professional groups, including Aboriginal and Torres Strait Islander peoples, tertiary students, financial counsellors, and pharmacists. These courses all teach how to give mental health first aid using the ALGEE Action Plan and include appropriate adaptation to cultural or workplace contexts. MHFA courses are typically delivered via face-to-face instruction, but online/CD-ROM versions have also been developed. MHFA Instructors undergo rigorous selection procedures, training, and annual accreditation. Underpinning the MHFA program is a strong commitment to delivering training based on research evidence [5]. All course content is as evidence-based as possible and undergoes regular revision to incorporate new knowledge. Course materials draw on expert consensus studies that systematically combine the views of consumers, carers, and professionals on how to provide MHFA (e.g. [6–8]). This commitment to research evidence is also reflected in the continued focus on course evaluation.

A meta-analysis of 15 studies evaluating MHFA found that the program improves mental health knowledge, reduces stigmatising attitudes, and increases helping behaviours [9]. However, that review included only 6 controlled trials, 3 of which were conducted by the developers of MHFA. It also did not evaluate several outcomes of interest, including confidence and intentions to provide MHFA, MHFA knowledge, and the quality of MHFA behaviours provided to a person with a mental health problem. Since that review was conducted, a considerable number of new trials evaluating the program have been published. A new meta-analysis is warranted to include these new studies, evaluate the full range of training outcomes, examine the persistence of effects, and further explore whether effects vary according to study characteristics. Therefore, this review aimed to provide an updated evaluation of the effectiveness of the Mental Health First Aid training program on improving mental health knowledge, stigmatising attitudes, and helping behaviour.

Materials and methods

This review was conducted in accordance with the PRISMA recommendations for systematic reviews [10], and the review protocol was registered with PROSPERO (CRD42017060596).

Study eligibility criteria

Studies were eligible if they were randomised controlled trials (including cluster-randomised), quasi-randomised controlled trials, or controlled trials, in adults aged 18+. We excluded uncontrolled trials given their greater risk of yielding a biased estimate of effect [11]. We included studies evaluating the MHFA program if it was the primary intervention and not a component of a larger intervention. Both the standard adult course and its variants (e.g. translations or adaptations for specific populations) and the youth mental health first aid course were included. All modes of delivery were eligible for inclusion, including face-to-face, online/CD-ROM, and blended. If two versions of MHFA were evaluated within the same study (e.g. face-to-face and online), we selected the program that was more frequently evaluated in the included studies to avoid double-counting control group participants [11]. Studies could include any comparison condition, including waitlist, no intervention, a generic health education intervention, or other mental health education intervention. If there were multiple comparison conditions, we selected the comparison that most closely matched the intervention condition (e.g. in length or dose). We included studies that assessed the impact of MHFA on at least one of the following outcomes: knowledge, attitudes, perceived confidence or intentions to provide help, mental health first aid behaviours, and mental health symptoms. We planned to include studies in any language if they could be adequately translated into English using Google Translate. We identified only one such study [12], but as this was also reported in English in a separate publication [13], we used the English version. Unpublished or ongoing studies were eligible if the authors were able to provide us with outcome data.

Identification and selection of studies

A systematic search of the literature was conducted for studies published between the year of MHFA inception (2000) and 5 April 2017. This search was re-run on 12 October 2017 to detect any additional studies published since the initial search. We searched PubMed, PsycINFO (OVID interface), EMBASE (OVID interface), and the Cochrane Central Register of Controlled Trials (Wiley interface) for eligible studies. Specific search strategies were developed for each database, using a combination of key words and MeSH/Map terms including the following: ‘controlled trial’, ‘mental health’, ‘mental disorder’, ‘first aid’, ‘mental health training’, ‘MHFA’, ‘mental health first aid’, limited to studies involving humans (see supplementary S3 Table for search terms used in each database). These database searches were supplemented by searching for trial protocols through the International Clinical Trials Registry Platform Search Portal and by examining the list of MHFA course evaluations provided on the MHFA International website. The reference lists of included studies and relevant reviews identified through the search were also checked. One author (AR) screened titles and abstracts for potential inclusion and two authors (AR, AJM) independently screened the full text of retrieved articles, with discrepancies resolved by consensus or arbitration by a third author (NJR). The flow of studies is presented in Fig 1.

Data extraction and assessment of risk of bias

A data extraction template was developed and piloted. Study characteristics and outcome data from each study were extracted independently by two authors, with disagreements resolved by discussion. We attempted to contact authors to resolve any uncertainties or to obtain missing information, and received information from 10 studies [13–22]. Information extracted from each study included participant characteristics (% female, age [mean, SD, range], education level); intervention characteristics (name of program, length of program in hours, number of sessions, delivery mode [online, face-to-face, blended]); and study characteristics (design [controlled, RCT, cluster-RCT], comparison condition [waitlist, no intervention, generic health education intervention, other mental health education intervention], study population [university student, employee, member of the public], study size [total, intervention/control], follow-up occasions, country, publication status [unpublished data, peer-reviewed, not peer-reviewed], involvement of MHFA founder [yes/no]).

Four sets of primary outcomes were evaluated: (1) knowledge, (2) stigmatising attitudes, (3) confidence in and intentions to provide MHFA, and (4) provision of mental health first aid. These outcomes are not measured as part of MHFA training itself but have been evaluated in research trials because they measure the main constructs of the training.


Knowledge.

Three kinds of knowledge outcomes were extracted, where available. Mental health first aid knowledge was measured objectively, typically by true-or-false questions based on course content (e.g. ‘It is not a good idea to ask someone if they are feeling suicidal in case you put the idea in their head’ and ‘It is best not to try to reason with a person having delusions’). These questions have face validity, but no other psychometric properties have been reported. Recognition of mental health problems was based on accuracy of identification of mental health problems described in a short vignette, most commonly depression or schizophrenia. These vignettes were written to satisfy diagnostic criteria, and correct recognition has been validated by an Australian national sample of clinical psychologists, psychiatrists, and general practitioners (GPs) [23]. Beliefs about effective treatments for mental health problems were assessed by the extent to which participants agreed with health professionals about which interventions would be useful for problems described in a vignette. Helpful treatments were based on the consensus of national samples of Australian clinical psychologists, psychiatrists, and GPs [24, 25]. For example, scores for depression ranged from 0 to 6 based on professional consensus that six interventions are helpful (GPs, psychiatrists, clinical psychologists, antidepressants, counselling, and cognitive-behaviour therapy).

Stigmatising attitudes.

Stigmatising attitudes towards people with mental health problems were primarily assessed with Social Distance Scales based on Link et al. [26] and Personal Stigma Scales based on Griffiths et al. [27]. Other validated stigma scales included the Attitudes to Mental Illness Scale [28], Opening Minds Scale for Healthcare Providers [29], and the Personal Attributes scale [30].

Confidence and intentions.

Confidence and intentions to provide MHFA were extracted as proxy measures of behaviour change. Perceived confidence was usually assessed with a single question in relation to helping a person with a mental health problem. This has been shown to be predictive of high quality support provided up to six months later [31]. Intentions to provide mental health first aid were assessed in response to a person with a mental health problem described in a vignette. Studies either used a list of actions consistent with the ALGEE action plan, or invited open-ended responses which were scored for congruence with the ALGEE action plan as described by Yap et al [32]. These scores have high inter-rater reliability [32, 33] and have been shown to predict the quality of support provided up to 2 years later [32].

Provision of mental health first aid.

This was assessed in two ways in included studies: the amount of help or mental health first aid provided to someone with a mental health problem, and the quality of first aid provided. Studies measured amount of help either on frequency scales (e.g. “never” to “many times”) or whether help had been provided at all (yes/no). Quality was assessed by summing the number of helpful actions participants reported taking to support a person with a mental health problem, which were consistent with MHFA training.

Secondary outcomes collected were:

  1. Mental health of MHFA trainees, assessed using a validated measure of depression, anxiety, or psychological distress/internalising symptoms.
  2. Mental health of recipients of mental health first aid (via MHFA trainees), assessed using a validated measure of depression, anxiety, or psychological distress/internalising symptoms.

Where outcomes were reported as both continuous (e.g. means and standard deviations) and dichotomous (e.g. percentage above a cut-off), we extracted the continuous data. Where studies reported both endpoint data and change over time data (e.g. group by time interactions) we extracted the change over time data due to the potential for baseline imbalances. Where intentions, confidence or behavioural outcomes towards multiple people were reported (e.g. students AND colleagues), we extracted data for the person who was the clearly intended recipient of MHFA (i.e. school students for Youth MHFA participants).

Risk of bias was assessed using the revised Cochrane Risk of Bias tool for randomised trials (RoB 2.0) [34]. This has five domains and assesses bias due to the randomisation process, deviations from intended interventions, missing outcome data, measurement of the outcome, and selection of the reported result. An additional domain is included for cluster RCTs, which assesses bias arising from the timing of identification and recruitment of individual participants in relation to the timing of cluster randomisation. On each domain, risk of bias was judged as low risk, some concerns, or high risk. Studies are rated as high risk of bias overall if at least one domain is judged as high risk of bias. Risk of bias judgements were made independently by two review authors, with discrepancies resolved by consensus. The quality of evidence for all outcomes was judged using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) system [35, 36]. Five factors may lead to rating down the quality of evidence (risk of bias, inconsistency, indirectness, imprecision, and publication bias) and three factors may lead to rating up (large effect size, dose–response relationship, and consideration of all plausible confounding variables). The quality of evidence for each outcome across studies falls into one of four categories: high, moderate, low, or very low.

Data synthesis

Outcomes were analysed with Comprehensive Meta-Analysis (CMA) V2 software using a random-effects model and reported as standardized mean differences. Separate meta-analyses were conducted to examine effects at 3 different pre-specified time-points (post-intervention, up to 6 months post-intervention, greater than 6 months post-intervention). Multiple validated outcomes of the same construct (e.g. attitudes assessed with personal stigma scales and social distance scales) within a study were combined into one effect size using CMA, unless these data were already pooled and reported by trial authors. Where studies reported an effect size for more than one independent subgroup in their sample (e.g. depression vignette versus schizophrenia vignette), we included each subgroup as a separate ‘study’. When interpreting mean effect sizes, we followed Cohen’s guidelines whereby d of 0.2 = small, 0.5 = medium, and 0.8 = large [37].
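The article does not include analysis code, but the random-effects pooling described above can be illustrated with a minimal sketch of the DerSimonian-Laird estimator, the standard method behind random-effects models in packages such as CMA. Function and variable names here are hypothetical, and the sketch omits refinements such as the Hedges' g small-sample correction:

```python
import math

def random_effects_pool(effects, variances):
    """Pool standardized mean differences (Cohen's d) with the
    DerSimonian-Laird random-effects estimator (illustrative sketch).

    effects: per-study d values; variances: their sampling variances.
    Returns (pooled d, 95% CI, between-study variance tau^2, Cochran's Q).
    """
    w = [1 / v for v in variances]  # inverse-variance (fixed-effect) weights
    d_fixed = sum(wi * di for wi, di in zip(w, effects)) / sum(w)
    q = sum(wi * (di - d_fixed) ** 2 for wi, di in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)  # between-study variance, truncated at zero
    w_re = [1 / (v + tau2) for v in variances]  # random-effects weights
    d_pooled = sum(wi * di for wi, di in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return d_pooled, (d_pooled - 1.96 * se, d_pooled + 1.96 * se), tau2, q
```

For instance, two studies with d = 0.3 and d = 0.5 and equal sampling variances of 0.01 pool to d = 0.4, with the excess between-study spread absorbed into tau².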

To explore the robustness of the results we performed several pre-specified sensitivity analyses. We tested the impact on pooled estimates of removing studies conducted by the developers of MHFA, studies judged to have high risk of bias, and outlier studies (studies whose 95% CI did not overlap with the 95% CI of the pooled effect size).
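The outlier criterion above (a study whose 95% CI does not overlap the 95% CI of the pooled effect size) reduces to a simple interval comparison; a minimal sketch with hypothetical names:

```python
def is_outlier(study_ci, pooled_ci):
    """True if a study's 95% CI does not overlap the pooled effect's 95% CI."""
    study_lo, study_hi = study_ci
    pooled_lo, pooled_hi = pooled_ci
    # Intervals fail to overlap only when one lies entirely above the other.
    return study_hi < pooled_lo or study_lo > pooled_hi
```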

Statistical heterogeneity for each pooled outcome was examined with the I2 statistic, which expresses the percentage of variation in effect sizes that is due to heterogeneity rather than chance. A value of 25% indicates low heterogeneity, 50% moderate, and 75% high [38]. The 95% CI around I2 was calculated using the ‘heterogi’ module in Stata. Where I2 was not zero and the confidence interval included high levels of heterogeneity, further subgroup and meta-regression analyses were conducted to explore possible causes of heterogeneity. Subgroup analyses used a mixed-effects model (a random-effects model within subgroups and a fixed-effect model across subgroups) and explored whether mean effect sizes differed for face-to-face versus online delivery, standard adult course versus Youth MHFA, and type of comparison condition. When there were at least 3 studies per analysis, meta-regression explored the effects of the percentage of female participants and of program length.
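I2 itself is a simple transformation of Cochran's Q and its degrees of freedom (the number of studies minus one); a sketch with illustrative names:

```python
def i_squared(q, df):
    """I^2 (%): share of variation in effect sizes attributable to heterogeneity.

    q: Cochran's Q statistic; df: number of pooled studies minus 1.
    """
    if q <= df:
        return 0.0  # truncated at zero when Q is no larger than expected by chance
    return 100.0 * (q - df) / q
```

For example, i_squared(2.0, 1) returns 50.0, the ‘moderate’ benchmark, and i_squared(8.0, 2) returns 75.0, the ‘high’ benchmark.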

Small study effects (e.g. publication bias) were assessed when there were 10 or more studies in the meta-analysis by visually examining the funnel plot, supplemented by Egger’s test of funnel plot asymmetry [39, 40]. Duval and Tweedie’s trim and fill procedure was used to estimate the effect size after imputing potentially missing studies [41].
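Egger's test assesses funnel plot asymmetry by regressing the standardized effect (d/SE) on precision (1/SE): for a symmetric funnel the regression intercept is near zero, while a significantly nonzero intercept suggests small-study effects. A minimal sketch of the intercept calculation (the full test also computes a t-statistic and p-value for the intercept, omitted here; names are illustrative):

```python
def egger_intercept(effects, standard_errors):
    """Intercept of the regression of standardized effects (d/SE) on precision (1/SE)."""
    y = [d / se for d, se in zip(effects, standard_errors)]  # standardized effects
    x = [1 / se for se in standard_errors]                   # precisions
    n = len(x)
    x_bar, y_bar = sum(x) / n, sum(y) / n
    # Ordinary least-squares slope and intercept.
    slope = (sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
             / sum((xi - x_bar) ** 2 for xi in x))
    return y_bar - slope * x_bar  # near zero when the funnel plot is symmetric
```

With identical effects across studies of differing precision (a perfectly symmetric funnel), the intercept is exactly zero.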


Results

Searches generated 611 records, of which 34 were assessed for full-text eligibility (see Fig 1). Sixteen studies were excluded at the full-text screening stage: 11 were not an MHFA program, 4 were not an RCT or controlled trial, and 1 was a duplicate publication. A final 18 studies were included in the review, with a total of 5936 participants [13–22, 42–49]. Only 6 of these were included in the prior review of MHFA [13, 17–19, 45, 48], with 12 additional controlled trials identified. Study characteristics are detailed in Table 1.

There were 4 cluster-RCTs, 10 RCTs, and 4 controlled trials. Studies used a variety of comparison conditions: 6 used a waitlist condition, 6 used another health or mental health education program, and 6 used no intervention. Outcomes were assessed post-intervention in 8 studies and at up to 6 months follow-up in 13 studies (14 comparisons); only 2 studies examined effects at more than 6 months post-intervention (up to 1 year follow-up). Three studies were unpublished or ongoing [16, 20, 21].

Most studies evaluated the adult course or a variant (k = 15) rather than youth MHFA (k = 3) and delivered the course in a face-to-face format (k = 15) rather than online or CD-ROM (k = 3). The program was evaluated in several settings, including workplaces (k = 7), with university or health care students (k = 5), members of the public (k = 3), teachers (k = 1), parents (k = 1), and the military (k = 1). Study participants tended to be female (median = 78.1%, range 57.5% - 94.5%). The majority of studies were conducted in Australia (k = 8), with 5 conducted in North America, 4 in Europe/UK, and 1 in Hong Kong. Most studies did not involve the founders of MHFA in their evaluation (k = 12).

Risk of bias in included studies

Risk of bias was rated per outcome according to Cochrane guidelines [34] and is summarised in Figs 2 and 3. The majority of studies were rated as low risk of bias arising from randomisation, with adequate random sequence generation, allocation concealment, and few baseline imbalances. There was generally low risk of bias due to deviations from intended interventions, as most studies reported delivery of MHFA consistent with usual practice and analysed participants in the groups to which they were randomly allocated. Most studies were judged as low risk of bias in the selection of reported results, with 10 studies prospectively registering trial protocols. However, fewer than half the studies were judged as low risk of bias due to missing data, with 7 studies reporting different proportions of missing data between groups and using inadequate statistical methods for handling missing data, such as analysing complete cases or last-observation-carried-forward imputation [34]. As all outcome measures were self-reported, most studies had a high risk of bias in measurement of the outcome. This is because participants were aware of their allocation and may have had expectations of benefit from the MHFA intervention, which may have influenced outcome measurement. Studies that included a mental health education condition as their comparison were rated as low risk of bias on this domain, as expectations of benefit would be similar across groups. The assessment of MHFA knowledge, which was an ‘objective’ test of knowledge via true or false answers, was also less likely to be influenced by knowledge of intervention received; hence this outcome was rated as low risk of bias for all 7 studies that included it. For all other primary outcomes, only one study was rated as low risk of bias overall.

Fig 2. Summary of risk of bias judgements presented as percentages across all included studies.

Fig 3. Summary of risk of bias judgements for each study.

Effects of MHFA


Knowledge.

As shown in Table 2, accurate identification of mental health problems showed small non-significant improvements at post-intervention (d = 0.22), with low-to-moderate heterogeneity, increasing to moderate improvements by 6-month follow-up (d = 0.52) with low heterogeneity. Effect sizes were similar with the removal of 3 studies involving the developers of MHFA at post-intervention, d = 0.39 (95% CI: -0.04, 0.81), and 4 studies at follow-up, d = 0.43 (95% CI: 0.28, 0.58). There was no evidence of publication bias in the 11 studies at follow-up ≤ 6 months (Egger’s test, two-tailed p = .271).

Table 2. Results of meta-analyses of effects of mental health first aid training on outcomes.

Beliefs about effective treatments for mental health problems significantly improved at post-intervention (d = 0.45) and at up to 6-month follow-up (d = 0.19) with moderate-to-high heterogeneity in effect sizes. Effects were robust in sensitivity analyses exploring researcher allegiance post-intervention, d = 0.91 (95% CI: 0.69, 1.13, k = 1) and follow-up, d = 0.17 (95% CI: 0.00, 0.35, k = 7). Removal of one outlier study [22] at follow-up slightly reduced the effect size, d = 0.16 (95% CI: 0.07, 0.24). There was no evidence of publication bias in the 11 studies at follow-up ≤ 6 months (Egger’s test, two-tailed p = .450).

There was a moderate-to-large significant improvement in MHFA knowledge at post-intervention (d = 0.72), which was smaller at 6-month follow-up (d = 0.54) and 12-month follow-up (d = 0.31), with no heterogeneity. Removing studies involving the developers of MHFA increased the effect size at post-intervention to d = 0.91 (95% CI: 0.55, 1.26, k = 1) and d = 0.53 (95% CI: 0.41, 0.65, k = 4) at 6-month follow-up. Mean effect sizes were the same when removing studies at high risk of bias, d = 0.72 (95% CI: 0.44, 1.00, k = 1) at post-intervention and d = 0.54 (95% CI: 0.42, 0.67, k = 3) at follow-up less than 6 months.


Stigmatising attitudes.

There were small significant effects on stigmatizing attitudes at post-intervention (d = 0.14) and up to 6-month follow-up (d = 0.14), reducing to very small non-significant effects at 12-month follow-up (d = 0.08). Heterogeneity was low at post-intervention and moderate at 6-month follow-up. Effects were weaker when removing studies involving the founders of MHFA at post-intervention, d = 0.09 (95% CI: -0.05, 0.23, k = 5) and 6-month follow-up, d = 0.11 (95% CI: 0.00, 0.21, k = 10), but confidence interval limits overlapped. Similarly, removing one outlier study [48] at follow-up reduced the size of the pooled effect size to d = 0.09 (95% CI: 0.03, 0.16). Visual inspection of the funnel plot suggested some asymmetry in effects at 6-month follow-up (Egger’s test, two-tailed p = .113). The Duval & Tweedie trim-and-fill approach suggested that four studies were potentially missing and, if imputed, the mean effect size would drop to d = 0.08 and would no longer be significant (95% CI: -0.02, 0.18).

Supplementary analyses were conducted to explore whether effects varied depending on the type of stigma, as most studies included measures of both social distance and personal stigma. The mean effect size at post-intervention was d = 0.35 (95% CI: -0.08, 0.78) for social distance (k = 4) and d = 0.12 (95% CI: -0.06, 0.29) for personal stigma (k = 8). Similarly, at up to 6 months follow-up, effect sizes tended to be larger for social distance, d = 0.23 (95% CI: 0.11, 0.36, k = 11) than for personal stigma, d = 0.06 (95% CI: -0.02, 0.15, k = 13).

Confidence and intentions.

There were moderate significant improvements in confidence in helping someone with a mental health problem at post-intervention (d = 0.58) and at up to 6-month follow-up (d = 0.46), with high levels of heterogeneity. Effects beyond 6 months were small in size (d = 0.21). There were two outlier studies at 6-month follow-up [44, 45]; removing these only slightly reduced the pooled effect size to d = 0.43 (95% CI: 0.30, 0.57). Results were robust to researcher allegiance, with very similar effects at post (d = 0.59, 95% CI: 0.18, 1.00, k = 4) and ≤ 6-month follow-up (d = 0.48, 95% CI: 0.28, 0.68, k = 8). However, inspection of the funnel plot for the 12 studies at 6-month follow-up suggested evidence of asymmetry (Egger’s test, two-tailed p = .008). The Duval & Tweedie trim-and-fill approach suggested that four studies were potentially missing and, if imputed, the mean effect size would drop to d = 0.32 (95% CI: 0.15, 0.48).

Effects on intentions to provide mental health first aid were moderate-to-large at post-intervention (d = 0.75 with no heterogeneity) and by 6-month follow-up (d = 0.55 with high heterogeneity), with smaller effects observed at longer follow-up (d = 0.26 with moderate heterogeneity). Effects were somewhat larger when removing studies with MHFA developer involvement at post (d = 0.93, 95% CI: 0.63, 1.23, k = 2), up to 6-month follow-up (d = 1.04, 95% CI: 0.58, 1.50, k = 1), and beyond 6-months (d = 0.55, 95% CI: -0.00, 1.10, k = 1).


Provision of mental health first aid.

There were no improvements in the amount of help provided to a person with a mental health problem at post-intervention (d = -0.06, no heterogeneity), but small improvements were evident at up to 6-month follow-up (d = 0.23, moderate heterogeneity). Effects remained similar when removing studies with MHFA founder involvement: post-intervention d = -0.14 (95% CI: -0.44, 0.15) and follow-up d = 0.27 (95% CI: 0.08, 0.46).

The quality of mental health first aid provided to a person with a mental health problem showed a moderate, non-significant improvement at post-intervention (d = 0.73, high heterogeneity), but no effects were evident at less than 6-month follow-up (d = -0.01, high heterogeneity). One study found small, non-significant improvements at 12-month follow-up (d = 0.25).

Mental health.

Improvement in the mental health of MHFA trainees was not evident at post-intervention (d = -0.04) or by 6-month follow-up (d = 0.16), with no evidence of heterogeneity at either time point. One study showed no effects at 12-month follow-up (d = 0.02). At follow-up less than 6 months, effects were slightly lower when removing three studies that involved the founders of MHFA, d = 0.11 (95% CI: -0.40, 0.62).

Three studies evaluated the effect of MHFA training on recipients of MHFA (via MHFA trainees), and found a small non-significant effect by 6-month follow-up (d = 0.14, no heterogeneity), and a non-significant negative effect at 12 month follow-up (d = -0.09). At follow-up less than 6 months, removing the one study that involved the founders of MHFA and was at high risk of bias had little impact on the effect size, d = 0.17 (95% CI: -0.05, 0.39).

Possible moderators

Where I2 was not zero and the confidence interval included high levels of heterogeneity, we investigated possible reasons for variations in effect sizes according to pre-specified study characteristics: type of comparison condition, program delivery format (face-to-face versus online), and program type (adult versus youth). Full results are presented in S1 Table. Subgroup analyses revealed a difference in mean effect sizes between comparison conditions for beliefs about effective treatments at post-intervention (p < .001) and follow-up less than 6 months (p = .005), MHFA confidence at post-intervention (p < .001), and amount of MHFA provided at less than 6-month follow-up (p < .001). Studies that included another mental health education intervention as the comparison condition consistently had smaller mean effect sizes than those using a waitlist, no intervention, health education intervention or other form of comparison condition.

For outcomes with at least 3 studies we investigated whether gender and program length were possible sources of heterogeneity in separate meta-regression analyses. There were no consistent effects across outcomes (see S2 Table). Longer programs were associated with lower scores on beliefs about effective treatments at follow-ups less than 6 months (b = -0.06, 95% CI: -0.12, 0.00) and studies with a higher proportion of females were associated with lower amounts of MHFA provided to a person with a mental health problem at follow-ups less than 6 months (b = -0.01, 95% CI: -0.02, 0.00). Given the lack of consistency in effects, potential for Type II errors from the number of analyses conducted, and the observational nature of meta-regression, these findings should be interpreted with caution.

Discussion
This review identified 18 controlled trials evaluating the effectiveness of MHFA training in a variety of settings. Across primary outcomes, there were generally small to moderate improvements at post-training and up to 6 months later. Effects at up to 12-month follow-up were less clear and require further replication. For knowledge outcomes, there is strong evidence that the training improves knowledge about mental health problems, with effects persisting up to a year after training. It also leads to moderate improvements in beliefs about appropriate treatments. Accurate identification of a person with a mental health problem improved up to 6 months later, but effects were smaller immediately after training and it is unclear why this was the case. MHFA training led to small reductions in stigmatising attitudes, with post-hoc analyses suggesting greater reductions in social distance than on other measures of stigma. Perceived confidence in helping a person with a mental health problem was largest at post-training, but moderate effects persisted up to 6 months later. At post-training there were moderate-to-large improvements in intentions to provide first aid to a person with a mental health problem, which diminished at follow-up. Perhaps unsurprisingly, there were no changes in the amount of MHFA provided to others at post-training, given the short duration of the course and the lack of opportunity to assist others. However, small effects were evident by 6-month follow-up, with heterogeneity explained by the lack of effect in studies comparing MHFA with another mental health education intervention. The impact of MHFA training on the quality of help offered to a person with a mental health problem is less clear, as the confidence intervals for the pooled effect sizes could not rule out no improvement, or less improvement, relative to controls. Effects on secondary outcomes (the mental health of trainees and of MHFA recipients) were less convincing: there were potentially very small effects at up to 6-month follow-up, but estimates were imprecise and may change with further research.

Overall, the effects of MHFA training were robust to researcher allegiance, as effects were similar when studies conducted by the developers of MHFA were excluded. This suggests that outcomes will be similar when the program is evaluated by different research groups in different countries, and supports its spread worldwide. Several outcomes showed significant variation in effect sizes across studies, which was often explained by the choice of comparison condition. Studies that compared MHFA with a different mental health education intervention generally showed no significant difference in effects. Other subgroup analyses investigating the effect of delivery format and program type were inconclusive due to lack of power. With the exception of MHFA knowledge, the quality of the evidence was generally moderate or low, typically due to lack of blinding or potential bias from missing outcome data. Blinding is difficult to achieve for this type of intervention, as it requires a plausible control condition. The combination of self-reported outcomes and participants' awareness of whether they were in the intervention or control group meant that most studies were at potential risk of bias in measurement of outcomes. This limitation could be overcome with an objective measure of MHFA behaviours, such as a simulated encounter with a person with a mental health problem, as has been used to evaluate therapist competence [50]. Nevertheless, positive changes to actual helping behaviour following MHFA training are supported by qualitative research that gathered stories from MHFA trainees approximately 20 months after training [51].

Results were generally consistent with those of an earlier review of MHFA [9], which included fewer controlled trials (6 overlapping with the present review). Estimates are not directly comparable, as the earlier review pooled post-training and follow-up effects together. For knowledge outcomes, Hadlaczky and colleagues found a small-to-moderate improvement (g = 0.38, 6 trials) in recognition of mental health problems and beliefs about effective treatments. Improvements in stigma (social distance) were somewhat higher than those reported here, but are comparable with our supplementary analyses restricted to social distance outcomes specifically. Their effects on helping behaviour (g = 0.24, 5 trials) were very similar to the amount of MHFA provided at follow-up from 9 trials in our analyses. Consistent with prior literature [52], our analyses found greater effects for intentions to provide help than for actual help provided. Nevertheless, evidence indicates that intentions to provide MHFA do predict the help actually provided to a person with a mental health problem later on (r = 0.27) [31].

This review’s strengths were its rigorous methodology and comprehensive inclusion of relevant studies, including 3 unpublished or ongoing studies. We extracted the full range of outcomes from MHFA training and conducted a detailed evaluation of risk of bias of each study outcome. Limitations included the small number of studies in some analyses, which affected the precision of pooled estimates and limited the power to investigate heterogeneity in subgroup and meta-regression analyses.

This review has identified some evidence gaps that could be further investigated. Only two studies have so far examined MHFA training effects beyond 6 months [16, 47], so the persistence of effects over the longer term is unclear. Few studies have examined the quality of mental health first aid behaviours, that is, how well trainees actually supported or assisted a person with a mental health problem. This is arguably the key outcome of interest from the training, but it is more difficult to assess than other outcomes: it requires participants to have had contact with someone with a mental health problem and to describe in sufficient detail what actions they took. Assessing this over a longer follow-up period would allow more opportunities to provide help and greater statistical power to show an impact of the training. As the ultimate aim of MHFA training is to improve mental health outcomes, collecting data on the recipients of mental health first aid is an important goal. Although few studies in this review achieved this, it is acknowledged that collecting these data is challenging because recipients of MHFA support are not usually study participants.

Although effects were generally small to moderate, MHFA training could potentially have a large public health impact. Evidence suggests that many people are not well informed about how to recognise mental health problems in others, how to respond to them, and what services and effective treatments are available [53]. Stigmatising attitudes about mental health problems may also impact on treatment seeking and adherence and increase social exclusion [54, 55]. MHFA training offers more than just facts about mental health problems and myth-busting; it provides concrete steps and advice on how to approach and support a person with a mental health problem. This is important as avoidance and lack of understanding are key experiences of discrimination reported by those with common mental disorders [56]. Given low rates of treatment-seeking [57], and evidence that people are more likely to seek help if someone close to them suggests it [58, 59], the support that people receive from those in their social networks is an important factor in improving mental health outcomes.

In conclusion, this review supports the effectiveness of MHFA training in improving mental health literacy and appropriate support for those with mental health problems. This public health intervention is a noteworthy contribution to improving the lives of people with mental health problems.

Supporting information

S1 Table. Results of sub-group analyses investigating causes of heterogeneity in effect sizes in studies evaluating MHFA.


S2 Table. Meta-regression analyses of percentage of female participants and length of program as predictors of effect sizes in studies examining the effect of MHFA.


Acknowledgments
The authors would like to thank the researchers who provided additional data on their studies, and Professor Julian Higgins, for advice in using the Cochrane Risk of Bias 2.0 tool.

References
1. Steel Z, Marnane C, Iranpour C, Chey T, Jackson JW, Patel V, et al. The global prevalence of common mental disorders: a systematic review and meta-analysis 1980–2013. Int J Epidemiol. 2014;43(2):476–93. pmid:24648481
2. Kitchener BA, Jorm AF. Mental health first aid training for the public: evaluation of effects on knowledge, attitudes and helping behavior. BMC Psychiatry. 2002;2:10. pmid:12359045
3. Kitchener BA, Jorm AF, Kelly CM. Mental Health First Aid Manual. Melbourne: Mental Health First Aid Australia; 2010.
4. Mental Health First Aid Australia. Our impact [accessed 12/9/17]. Available from:
5. Kitchener BA, Jorm AF. Mental Health First Aid: an international programme for early intervention. Early Interv Psychiatry. 2008;2(1):55–61. pmid:21352133
6. Armstrong G, Ironfield N, Kelly CM, Dart K, Arabena K, Bond K, et al. Re-development of mental health first aid guidelines for supporting Aboriginal and Torres Strait Islanders who are engaging in non-suicidal self-injury. BMC Psychiatry. 2017;17(1):300. pmid:28830485
7. Kingston AH, Morgan AJ, Jorm AF, Hall K, Hart LM, Kelly CM, et al. Helping someone with problem drug use: a delphi consensus study of consumers, carers, and clinicians. BMC Psychiatry. 2011;11(1):3. pmid:21208412
8. Langlands RL, Jorm AF, Kelly CM, Kitchener BA. First aid for depression: a Delphi consensus study with consumers, carers and clinicians. J Affect Disord. 2008;105(1–3):157–65. pmid:17574684
9. Hadlaczky G, Hökby S, Mkrtchian A, Carli V, Wasserman D. Mental Health First Aid is an effective public health intervention for improving knowledge, attitudes, and behaviour: A meta-analysis. Int Rev Psychiatry. 2014;26(4):467–75. pmid:25137113
10. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JPA, et al. The PRISMA Statement for Reporting Systematic Reviews and Meta-Analyses of Studies That Evaluate Health Care Interventions: Explanation and Elaboration. PLoS Med. 2009;6(7):e1000100. pmid:19621070
11. Higgins JPT, Green S, editors. Cochrane handbook for systematic reviews of interventions. Version 5.1.0 [updated March 2011]. The Cochrane Collaboration; 2011.
12. Svensson B, Stjernswärd S, Hansson L. Utbildning i första hjälpen vid psykisk ohälsa. En effektstudie i två län [Mental Health First Aid education: an evaluation study in two counties]. Socialstyrelsen; 2013:15–7.
13. Svensson B, Hansson L. Effectiveness of Mental Health First Aid training in Sweden. A randomized controlled trial with a six-month and two-year follow-up. PLoS ONE. 2014;9(6). pmid:24964164
14. Burns S, Crawford G, Hallett J, Hunt K, Chih HJ, Tilley PJ. What's wrong with John? a randomised controlled trial of Mental Health First Aid (MHFA) training with nursing students. BMC Psychiatry. 2017;17(1):111. pmid:28335758
15. Davies B, Beever E, Glazebrook C. The mental health first aid eLearning course for medical students: a pilot evaluation study. European Health Psychologist. 2016;18 Supp.:861.
16. Jorm AF. First aid training for parents of teenagers—a randomized controlled trial. Unpublished.
17. Jorm AF, Kitchener BA, O'Kearney R, Dear KBG. Mental health first aid training of the public in a rural area: a cluster randomized trial [ISRCTN53887541]. BMC Psychiatry. 2004;4(1):33. pmid:15500695
18. Jorm AF, Kitchener BA, Sawyer MG, Scales H, Cvetkovski S. Mental health first aid training for high school teachers: a cluster randomized trial. BMC Psychiatry. 2010b;10(1):51. pmid:20576158
19. Kitchener BA, Jorm AF. Mental health first aid training in a workplace setting: A randomized controlled trial [ISRCTN13249129]. BMC Psychiatry. 2004;4(1):23. pmid:15310395
20. Moll S, Patten SB, Stuart H, Kirsh B, MacDermid JC. Beyond silence: protocol for a randomized parallel-group trial comparing two approaches to workplace mental health education for healthcare employees. BMC Med Educ. 2015;15:78. pmid:25880303
21. Reavley NR. WorkplaceAid—A trial on improving mental health and physical first aid skills in the workplace. Unpublished.
22. Wong DFK, Lau Y, Kwok S, Wong P, Tori C. Evaluating the effectiveness of Mental Health First Aid Program for Chinese People in Hong Kong. Res Soc Work Pract. 2015;27(1):59–67.
23. Morgan AJ, Jorm AF, Reavley NJ. Beliefs of Australian health professionals about the helpfulness of interventions for mental disorders: Differences between professions and change over time. Aust N Z J Psychiatry. 2013;47(9):840–8. pmid:23677848
24. Jorm AF, Korten AE, Jacomb PA, Rodgers R, Pollitt P, Christensen H, et al. Helpfulness of interventions for mental disorders: beliefs of health professionals compared with the general public. Br J Psychiatry. 1997;171:233–7. pmid:9337975
25. Jorm AF, Morgan AJ, Wright A. Interventions that are helpful for depression and anxiety in young people: A comparison of clinicians' beliefs with those of youth and their parents. J Affect Disord. 2008;111:227–34. pmid:18410970
26. Link BG, Phelan JC, Bresnahan M, Stueve A, Pescosolido BA. Public conceptions of mental illness: labels, causes, dangerousness, and social distance. Am J Public Health. 1999;89(9):1328–33. pmid:10474548
27. Griffiths KM, Christensen H, Jorm AF. Predictors of depression stigma. BMC Psychiatry. 2008;8:25. pmid:18423003
28. Luty J, Fekadu D, Umoh O, Gallagher J. Validation of a short instrument to measure stigmatised attitudes towards mental illness. BJPsych Bull. 2006;30(7):257–60.
29. Kassam A, Papish A, Modgill G, Patten S. The development and psychometric properties of a new scale to measure mental illness related stigma by health care providers: the Opening Minds Scale for Health Care Providers (OMS-HC). BMC Psychiatry. 2012;12(1):62.
30. Angermeyer MC, Matschinger H. The stigma of mental illness: effects of labelling on public attitudes towards people with mental disorder. Acta Psychiatr Scand. 2003;108(4):304–9. pmid:12956832
31. Rossetto A, Jorm AF, Reavley NJ. Predictors of adults' helping intentions and behaviours towards a person with a mental illness: A six-month follow-up study. Psychiatry Res. 2016;240:170–6. pmid:27107671
32. Yap MBH, Jorm AF. Young people's mental health first aid intentions and beliefs prospectively predict their actions: Findings from an Australian National Survey of Youth. Psychiatry Res. 2012;196(2–3):315–9. pmid:22377574
33. Rossetto A, Jorm AF, Reavley NJ. Quality of helping behaviours of members of the public towards a person with a mental illness: a descriptive analysis of data from an Australian national survey. Ann Gen Psychiatry. 2014;13:2. pmid:24438434
34. Higgins JPT, Sterne J, Savović J, Page M, Hróbjartsson A, Boutron I, et al. A revised tool for assessing risk of bias in randomized trials. In: Chandler J, McKenzie J, Boutron I, Welch V, editors. Cochrane Database Syst Rev. 2016;Issue 10 (Suppl 1).
35. Guyatt G, Oxman AD, Akl EA, Kunz R, Vist G, Brozek J, et al. GRADE guidelines: 1. Introduction—GRADE evidence profiles and summary of findings tables. J Clin Epidemiol. 2011;64(4):383–94. pmid:21195583
36. Guyatt G, Oxman AD, Sultan S, Brozek J, Glasziou P, Alonso-Coello P, et al. GRADE guidelines: 11. Making an overall rating of confidence in effect estimates for a single outcome and for all outcomes. J Clin Epidemiol. 2013;66(2):151–7. pmid:22542023
37. Cohen J. A power primer. Psychol Bull. 1992;112:155–9. pmid:19565683
38. Higgins JPT, Thompson SG, Deeks JJ, Altman DG. Measuring inconsistency in meta-analyses. BMJ. 2003;327(7414):557–60. pmid:12958120
39. Egger M, Smith GD, Schneider M, Minder C. Bias in meta-analysis detected by a simple, graphical test. BMJ. 1997;315(7109):629–34. pmid:9310563
40. Sterne JAC, Sutton AJ, Ioannidis JPA, Terrin N, Jones DR, Lau J, et al. Recommendations for examining and interpreting funnel plot asymmetry in meta-analyses of randomised controlled trials. BMJ. 2011;343. pmid:21784880
41. Duval S, Tweedie R. Trim and fill: A simple funnel-plot–based method of testing and adjusting for publication bias in meta-analysis. Biometrics. 2000;56(2):455–63. pmid:10877304
42. Jensen KB, Morthorst BR, Vendsborg PB, Hjorthøj C, Nordentoft M. Effectiveness of Mental Health First Aid training in Denmark: a randomized trial in waitlist design. Soc Psychiatry Psychiatr Epidemiol. 2016;51(4):597–606. pmid:26837214
43. Jorm AF, Kitchener BA, Fischer JA, Cvetkovski S. Mental Health First Aid training by e-learning: a randomized controlled trial. Aust N Z J Psychiatry. 2010a;44(12):1072–81. pmid:21070103
44. Lipson SK, Speer N, Brunwasser S, Hahn E, Eisenberg D. Gatekeeper training and access to mental health care at universities and colleges. J Adolesc Health. 2014;55(5):612–9. pmid:25043834
45. Massey J, Brooks M, Burrow J. Evaluating the effectiveness of Mental Health First Aid training among student affairs staff at a Canadian university. J Stud Aff Res Pract. 2014;51(3):323–36.
46. Moffitt J, Bostock J, Cave A. Promoting well-being and reducing stigma about mental health in the fire service. J Public Ment Health. 2014;13(2):103–13.
47. Mohatt NV, Boeckmann R, Winkel N, Mohatt DF, Shore J. Military Mental Health First Aid: Development and preliminary efficacy of a community training for improving knowledge, attitudes, and helping behaviors. Mil Med. 2017;182(1):e1576–e83. pmid:28051976
48. O'Reilly CL, Bell JS, Kelly PJ, Chen TF. Impact of Mental Health First Aid training on pharmacy students' knowledge, attitudes and self-reported behaviour: A controlled trial. Aust N Z J Psychiatry. 2011;45(7):549–57. pmid:21718124
49. Rose T, Leitch J, Collins KS, Frey JJ, Osteen PJ. Effectiveness of Youth Mental Health First Aid USA for social work students. Res Soc Work Pract. 2017.
50. Armstrong G, Blashki G, Joubert L, Bland R, Moulding R, Gunn J, et al. An evaluation of the effect of an educational intervention for Australian social workers on competence in delivering brief cognitive behavioural strategies: A randomised controlled trial. BMC Health Serv Res. 2010;10:304. pmid:21050497
51. Jorm AF, Kitchener BA, Mugford SK. Experiences in applying skills learned in a Mental Health First Aid training course: A qualitative study of participants' stories. BMC Psychiatry. 2005;5. pmid:16280088
52. Webb TL, Sheeran P. Does changing behavioral intentions engender behavior change? A meta-analysis of the experimental evidence. Psychol Bull. 2006;132(2):249–68. pmid:16536643
53. Reavley NJ, Jorm AF. Recognition of mental disorders and beliefs about treatment and outcome: Findings from an Australian National Survey of Mental Health Literacy and Stigma. Aust N Z J Psychiatry. 2011;45(11):947–56. pmid:21995330
54. Barney LJ, Griffiths KM, Jorm AF, Christensen H. Stigma about depression and its impact on help-seeking intentions. Aust N Z J Psychiatry. 2006;40(1):51–4. pmid:16403038
55. Corrigan PW, Watson AC. Understanding the impact of stigma on people with mental illness. World Psychiatry. 2002;1(1):16–20. pmid:16946807
56. Morgan AJ, Reavley NJ, Jorm AF, Beatson R. Discrimination and support from friends and family members experienced by people with mental health problems: Findings from an Australian national survey. Soc Psychiatry Psychiatr Epidemiol. 2017;52(11):1395–403. pmid:28477071
57. Burgess PM, Pirkis JE, Slade TN, Johnston AK, Meadows GN, Gunn JM. Service use for mental health problems: Findings from the 2007 National Survey of Mental Health and Wellbeing. Aust N Z J Psychiatry. 2009;43(7):615–23. pmid:19530018
58. Cusack J, Deane FP, Wilson CJ, Ciarocchi J. Who influences men to go to therapy? Reports from men attending psychological services. Int J Adv Couns. 2004;26(3):271–83.
59. Vogel DL, Wade NG, Wester SR, Larson L, Hackler AH. Seeking help from a mental health professional: the influence of one's social network. J Clin Psychol. 2007;63(3):233–45. pmid:17211874