
Can a social media intervention improve online communication about suicide? A feasibility study examining the acceptability and potential impact of the #chatsafe campaign

  • Louise La Sala ,

    Roles Investigation, Methodology, Project administration, Resources, Writing – original draft, Writing – review & editing

    Affiliations Orygen, Parkville, Victoria, Australia, Centre for Youth Mental Health, The University of Melbourne, Parkville, Victoria, Australia

  • Zoe Teh,

    Roles Investigation, Project administration, Writing – original draft, Writing – review & editing

    Affiliations Orygen, Parkville, Victoria, Australia, Centre for Youth Mental Health, The University of Melbourne, Parkville, Victoria, Australia

  • Michelle Lamblin,

    Roles Methodology, Project administration, Writing – review & editing

    Affiliations Orygen, Parkville, Victoria, Australia, Centre for Youth Mental Health, The University of Melbourne, Parkville, Victoria, Australia

  • Gowri Rajaram,

    Roles Formal analysis, Visualization, Writing – review & editing

    Affiliations Orygen, Parkville, Victoria, Australia, Centre for Youth Mental Health, The University of Melbourne, Parkville, Victoria, Australia

  • Simon Rice,

    Roles Conceptualization, Funding acquisition, Supervision, Writing – review & editing

    Affiliations Orygen, Parkville, Victoria, Australia, Centre for Youth Mental Health, The University of Melbourne, Parkville, Victoria, Australia

  • Nicole T. M. Hill,

    Roles Conceptualization, Investigation, Methodology, Writing – review & editing

    Affiliations Orygen, Parkville, Victoria, Australia, Centre for Youth Mental Health, The University of Melbourne, Parkville, Victoria, Australia, Telethon Kids Institute, Perth, Western Australia, Australia

  • Pinar Thorn,

    Roles Conceptualization, Investigation, Writing – review & editing

    Affiliations Orygen, Parkville, Victoria, Australia, Centre for Youth Mental Health, The University of Melbourne, Parkville, Victoria, Australia

  • Karolina Krysinska,

    Roles Conceptualization, Project administration, Writing – review & editing

    Affiliations Orygen, Parkville, Victoria, Australia, Centre for Youth Mental Health, The University of Melbourne, Parkville, Victoria, Australia, Centre for Mental Health, The Melbourne School of Population and Global Health, University of Melbourne, Parkville, Victoria, Australia

  • Jo Robinson

    Roles Conceptualization, Funding acquisition, Methodology, Project administration, Supervision, Writing – original draft, Writing – review & editing

    Affiliations Orygen, Parkville, Victoria, Australia, Centre for Youth Mental Health, The University of Melbourne, Parkville, Victoria, Australia


There is a need for effective and youth-friendly approaches to suicide prevention, and social media presents a unique opportunity to reach young people. Although there is some evidence to support the delivery of population-wide suicide prevention campaigns, little is known about their capacity to change behaviour, particularly among young people and in the context of social media. Even less is known about the safety and feasibility of using social media for the purpose of suicide prevention. Based on the #chatsafe guidelines, this study examines the acceptability, safety and feasibility of a co-designed social media campaign. It also examines its impact on young people’s willingness to intervene against suicide and their perceived self-efficacy, confidence and safety when communicating on social media platforms about suicide. A sample of 189 young people aged 16–25 years completed three questionnaires across a 20-week period (4 weeks pre-intervention, immediately post-intervention, and at 4-week follow-up). The intervention took the form of a 12-week social media campaign delivered to participants via direct message. Participants reported finding the intervention acceptable, and they also reported improvements in their willingness to intervene against suicide and in their perceived self-efficacy, confidence and safety when communicating on social media about suicide. Findings from this study present a promising picture for the acceptability and potential impact of a universal suicide prevention campaign delivered through social media, and suggest that it can be safe to utilize social media for the purpose of suicide prevention.


Suicide is the leading cause of death among young Australians and the second worldwide, with rates steadily increasing over the past decade [1–3]. Although young people who die by suicide frequently experience mental ill-health, many are reluctant to seek professional help and are not in contact with services at the time of their death [4]. In light of this, more effective, youth-friendly, and community-based suicide prevention initiatives are required.

As social media use increases among young people [5, 6], a growing body of literature points to the potential for online interventions to improve mental health outcomes [7–11], including in suicide prevention [12–16]. These studies have identified that social media can provide an accessible and acceptable forum for young people to communicate about suicide, to seek support for themselves, and to support others [8, 16–18]. These interventions also have the potential to facilitate access to specialist care [19, 20]. There are, however, downsides to using social media to communicate about suicide. For example, concerns exist regarding the potential for certain types of content (e.g., graphic images of suicide methods) to cause distress or harm to others [21, 22], and in some cases the spread of suicide-related information via social media has been thought to contribute to the development of suicide clusters [21]. Despite these concerns, social media remains popular with young people [23], and interventions that better equip them to communicate safely about suicide on these platforms are therefore required [16, 17].

Guidelines to support safe communication about suicide in mainstream media have become a widely accepted suicide prevention strategy in many countries, including Australia [24–27], and when adopted by journalists they appear to be linked to improvements in the quality of media reporting and a reduction in suicide rates [28]. However, because of social media's dynamic and interactive nature, young people communicate about suicide there in fundamentally different ways from how mainstream media operates [16, 29, 30]. For this reason, existing guidelines are unlikely to have much traction with young people; nor are they necessarily transferable to social media platforms.

In response to this, we developed the #chatsafe guidelines, which were specifically designed with both young people and social media platforms in mind. The guidelines were developed using the Delphi expert consensus method and include information on how to safely post about suicidal thoughts or experiences, engage with suicide content, respond to someone affected by or at risk of suicide, and manage memorial pages and closed groups [29]. We then worked in partnership with young people from across Australia to co-design a social media campaign to help disseminate the guidelines and facilitate their uptake [30]. The #chatsafe campaign was rolled out across three states and two territories in Australia between September 2019 and January 2020.

Previous campaigns targeting physical health outcomes in young people have been shown to be effective, for example in reducing sedentary behaviour and smoking and in improving sexual health [31]. Factors believed to facilitate behaviour change include evoking emotional responses that assist learning, depicting relevant and meaningful stories through familiar characters, and involving young people themselves in the creation and delivery of the campaign [32]. With this in mind, the delivery of population-wide campaigns has gained attention as a potentially effective suicide prevention strategy. While some evidence suggests that such campaigns can improve outcomes such as knowledge, awareness, and attitudes toward help-seeking [6, 33–38], there is a lack of evidence to support their capacity to change behaviour, particularly among young people and in the context of social media. There is also no evidence to date regarding the acceptability, safety, or feasibility of conducting, and testing the impact of, suicide prevention campaigns on social media platforms.

Thus, the aims of this study were to examine the acceptability and safety of the #chatsafe campaign, and the feasibility of delivering and testing this intervention entirely via social media. Additional aims were to examine the impact of the intervention on young people’s willingness to intervene against suicide (e.g., feeling confident in their ability to discuss suicide with someone who is suicidal), as well as their perceived self-efficacy, confidence and safety when communicating about suicide on social media platforms.


Study design

This study adheres to the Template for Intervention Description and Replication (TIDieR) checklist [39]. The study employed a single-group pre-test/post-test survey design with a 12-week intervention period. Participants completed self-assessments at three timepoints (T1, baseline; T2, post-intervention; T3, 4 weeks post-intervention). See Fig 1.

Fig 1. Timeline of study and delivery of #chatsafe intervention.

The study was conducted by researchers based in Melbourne, Australia. It received approval from the University of Melbourne Human Research and Ethics Committee (ID:1954623).

Participants and recruitment

Young people were eligible for inclusion in the study if they were aged between 16 and 25 inclusive, in line with the youth participation policy at the organisation where the research was conducted. Participants aged 16 and 17 years were determined by the ethics committee to be mature minors who were able to provide informed consent to participate. Eligibility criteria also included that they lived in Victoria, New South Wales or Tasmania, Australia, had not already read the #chatsafe guidelines or been exposed to the campaign, and endorsed any of the following: 1) had used social media to talk about suicide; 2) managed, or were part of, a suicide discussion group; 3) had viewed suicide-related content on social media; and/or 4) had wanted to talk about suicide on social media but did not feel equipped to do so. Study requirements also asked participants to provide their social media handles for Facebook, Instagram, Snapchat, Twitter or Tumblr.

Participants were recruited over a three-month period (September–December 2019) via social media advertising on Facebook, Instagram, Snapchat, YouTube and Twitter. Individuals who clicked through to the online survey were screened for eligibility, and those eligible for inclusion were asked to provide consent. Participants completed the baseline assessment immediately after providing consent (T1), the second assessment at the end of the 12-week intervention (T2), and the third assessment four weeks after the intervention concluded (T3). All participants were reimbursed AUD$30 per assessment.

As seen in Fig 1, there was a four-week gap between the baseline assessment and commencement of the intervention (i.e., delivery of the first piece of #chatsafe content), and another four weeks between T2 and T3, meaning the study period was 20 weeks.

The #chatsafe intervention

The intervention was delivered to participants once a week for 12 weeks via a direct message sent by a member of the research team (LLS or ZT) to a social media account of the participant’s choice. Each message included a link to one piece of social media content (a short video, animation or static image) hosted on the #chatsafe Instagram page. Each social media post directly mapped onto one of the themes from the guidelines (e.g., safe language to use when talking about suicide). To avoid over-exposure to content relating specifically to suicide, the content in every alternate week had a self-care theme. Self-care content included information about digital literacy and online and offline wellbeing. Content themes and the delivery schedule are described in Table 1, and examples of the campaign content are presented in Fig 2.

Fig 2. Examples of social media content shared during the #chatsafe campaign.

Image 1: A still image of a short video (with no audio) depicting a young person “taking a break”. Image 2: A still image of an animation video that discusses how to support a friend who might be suicidal. Image 3: A photo and quote by a young person.

Table 1. Delivery schedule, content theme and content type for each week of the #chatsafe social media campaign.

As this study ran alongside the national campaign, participants were able to view the wider campaign content posted on social media in addition to the individual pieces of content sent to them via direct message. This also meant that they could engage with the content as much or as little as they wished.

Outcomes and outcome measures

All questionnaires were completed online via Qualtrics. Demographic information was collected at baseline using a 10-item purpose-designed questionnaire assessing age, nationality, Aboriginal or Torres Strait Islander identity, gender identity, and state of residence. Time spent on social media at baseline was measured using the Patterns of Social Media Use Questionnaire [6].

Each standardised measure (i.e., willingness to intervene against suicide online, perceived self-efficacy, and confidence and safety when communicating online about suicide) was measured at baseline (T1), T2 and T3. In addition to this, a short emoji scale measuring acceptability and safety accompanied the #chatsafe campaign content sent to participants each week. Acceptability data was also collected through a series of evaluation questions administered at T2 (see Fig 1).


Acceptability was assessed in two ways. First, a purpose-designed three-item acceptability questionnaire was sent with each piece of weekly content. This short momentary assessment asked participants: 1) What did you think about the campaign content this week? 2) Would you share this week’s campaign content with your contacts on social media? and 3) How did the campaign content you received today make you feel? Each question was presented on a 5-point scale comprising a series of emojis depicting different mood states (see Fig 3). Participants also had the option to ‘snooze’ the delivery of the campaign, which suspended delivery of the content for one week. Participants who selected response options 1 or 2 (see Fig 3) on the question relating to how the content made them feel were assessed as potentially showing signs of distress and asked whether they would like to snooze the content for one week or withdraw from the study. Participants had to confirm that they would like to continue receiving #chatsafe content the following week to remain in the study.

Fig 3. Evaluation emoji rating scale with 1 coded as most negative/distressed and 5 coded as most positive/happy.

Second, a series of evaluation questions were administered at T2. These questions asked participants if they found the content helpful, if it increased their confidence to talk safely online about suicide, if they thought it would be helpful for others, and if they felt that the campaign had any negative effects on them or if they thought it would have a negative effect on others.
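The weekly snooze/flagging rule described above can be expressed as a simple filter over the emoji responses. The sketch below is purely illustrative: the participant IDs, field names and function are hypothetical, though the cut-off of 1 or 2 on the 5-point scale follows the rule stated in the text.

```python
# Responses to "How did the campaign content make you feel?" are coded
# 1 (most negative/distressed) to 5 (most positive/happy); see Fig 3.
DISTRESS_THRESHOLD = 2  # responses of 1 or 2 flag potential distress

def triage(weekly_responses):
    """Return (hypothetical) participant IDs who should be offered the
    snooze/withdraw follow-up, given {participant_id: feel_score}."""
    return [pid for pid, feel in weekly_responses.items()
            if feel <= DISTRESS_THRESHOLD]

# Invented responses for one week of the campaign
week = {"p01": 4, "p02": 2, "p03": 5, "p04": 1}
flagged = triage(week)
print(flagged)  # → ['p02', 'p04']
```

In the study itself, flagged participants were contacted within 24 hours; the filter only identifies who needs that follow-up.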


Detailed safety procedures were developed. This included the establishment of an independent Safety Monitoring Committee to oversee study safety and conduct, comprising a clinical psychologist, an external subject-matter expert and an organizational operations manager, all of whom have extensive experience conducting clinical trials with young people.

Participant safety was assessed daily by monitoring the #chatsafe social media accounts for any messages or comments that indicated distress, and by monitoring the weekly survey responses for participants who snoozed or withdrew from the study (who were contacted within 24 hours). Contact details of relevant support services such as eheadspace and Kids Helpline were included in all study materials.

In addition, any adverse events (AEs) and serious adverse events (SAEs) that were brought to our attention were recorded. AEs were defined as any untoward or adverse effect, whether or not related to the study (e.g., comments that expressed suicidal ideation). SAEs were defined as events that resulted in death, were immediately life-threatening, and/or required hospitalization [40]. All adverse events were monitored and recorded by a member of the study team (KK) with oversight from the study psychologist (SR). They were then reported to the Safety Monitoring Committee, which determined whether the event was attributable to the #chatsafe intervention, whether it could be appropriately managed by the existing safety protocols, and whether the intervention needed to be withdrawn or suspended.


Criteria relating to feasibility were based on participant recruitment, attrition, and the reach of the broader campaign (including the overall number of impressions, and the number of times the post was ‘liked’ or viewed). Social media metrics were recorded and analysed by our digital design partners, Portable. As this was an exploratory study, no a priori social media metrics were set.

Willingness to intervene against suicide online.

Participants’ perceived ability and intention to intervene against suicide were measured using two adapted subscales of the Willingness to Intervene Against Suicide Questionnaire [41]. The Perceived Behavioral Control subscale comprised 20 Likert-type items and assessed the participant’s confidence and belief in their ability to intervene with someone who might be at risk of suicide. The Intent to Intervene subscale comprised 22 items and assessed the participant’s ability to recognize the need for action, encourage help-seeking, and connect the suicidal person with resources or services. Items were scored on a 5-point scale (1 = Strongly disagree to 5 = Strongly agree), and composite scores for the Perceived Behavioral Control and Intent scales ranged from 20 to 100 and 22 to 110, respectively. Both subscales were adapted to remove the emphasis of seeking help in a college campus setting (e.g., locate someone on campus for the suicidal person to talk to), and increase emphasis on seeking information online (e.g., I would feel comfortable seeking information from a credible source online). Excellent reliability was observed for both the Perceived Behavioral Control (Cronbach’s α = 0.92) and the Intent to Intervene (Cronbach’s α = 0.90) subscales.
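The scoring steps described above (reverse-coding negatively worded items, summing to a composite score, and estimating internal consistency with Cronbach's α) can be sketched as follows. The response matrix is invented for illustration and is much smaller than the study's 20- and 22-item subscales; only the arithmetic is faithful to the method described.

```python
import numpy as np

def reverse_code(responses, max_point=5):
    """Reverse-score negatively worded Likert items scored 1..max_point."""
    return (max_point + 1) - np.asarray(responses)

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Invented 5-point responses: 4 respondents x 4 items,
# where the last item is negatively worded.
raw = np.array([[4, 5, 4, 2],
                [2, 2, 3, 4],
                [5, 4, 5, 1],
                [3, 3, 3, 3]])
scored = raw.copy()
scored[:, -1] = reverse_code(raw[:, -1])
composite = scored.sum(axis=1)   # one composite score per respondent
alpha = cronbach_alpha(scored)
```

For the real 20-item Perceived Behavioral Control subscale this yields composites in the 20–100 range quoted above, and α would be compared against the ≈0.90 values the authors report.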

Perceived self-efficacy, confidence and safety when communicating online about suicide.

Perceived self-efficacy was measured using an adapted version of the Internet Self-Efficacy scale [42], which comprised 17 items on a 7-point Likert scale (1 = Totally not confident to 7 = Completely confident), with composite scores ranging from 17 to 119. This assessed participants’ levels of confidence on reactive/generative, differentiation, organization, communication, and search self-efficacy, with higher scores indicating a higher level of internet self-efficacy. Reliability ranged from acceptable to excellent for the five domains (reactive/generative: α = 0.85; differentiation: α = 0.91; organization: α = 0.86; communication: α = 0.73; search: α = 0.82).

These five domains are categorized into three general levels of self-efficacy: high, medium and low. Domains with high levels of self-efficacy include communication (navigating social networking sites) and search (using advanced search engines) self-efficacy. Domains with medium levels of self-efficacy include organization (organizing information that may already be partially structured by the platform in use) and differentiation (participants’ willingness to follow hyperlinks in goal-oriented tasks). The domain with the lowest level of self-efficacy is a combination of reactive problem-solving (participant’s perceived ability to react and solve problems online) and generative self-efficacy (participants’ perceived ability to contribute unique information online).

Perceived safety was measured using an adapted version of the Perceived Safety Questionnaire [43]. This measure has previously been used to assess risk perception, including perceived safety, agency, coping and resolution online, with a sample of young people; however, it had not previously been used with reference to suicide-related content. Adaptations made the measure specific to suicide-related content online (i.e., creating a post about suicide, viewing suicide-related information, or sharing suicide-related information on social media). The measure asked participants about the frequency and type of suicide-related content they saw on social media throughout the study period.

Data analysis

Descriptive statistics were used to assess acceptability, safety and feasibility. Acceptability was measured based on responses to the T2 evaluation questions. Safety was monitored daily and assessed weekly by responses to the Acceptability questionnaire. Ordinal logistic regression was used to ascertain whether participant characteristics were associated with weekly evaluations of campaign content. Participant characteristics considered were age, gender, sexual orientation, Aboriginal and/or Torres Strait Islander status, nationality, language spoken at home and current student status. Feasibility metrics were gathered based on recruitment, retention, and attrition, as well as social media metrics such as reach, impressions, and number of views.

Composite scores for the Willingness to Intervene Against Suicide subscales and the Internet Self-Efficacy Scale were calculated, with negatively worded items reverse-coded. Friedman’s test was used to determine statistically significant changes in median scores between timepoints; median scores and nonparametric tests were used because the data violated assumptions of normality (Shapiro-Wilk test). Where statistical significance was observed, post hoc analysis with the Wilcoxon signed-rank test was conducted to determine whether the change occurred between T1 and T2 or between T2 and T3. A conservative Bonferroni correction was applied by dividing the significance level (0.05) by the number of tests (3), setting significance at p < 0.017; p-values greater than 0.017 were considered non-significant.
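This pipeline (Friedman omnibus test, then Bonferroni-corrected Wilcoxon signed-rank post hoc tests) can be sketched with SciPy on simulated, not study, data; the simulated "improvement" between T1 and T2 is purely illustrative.

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

rng = np.random.default_rng(0)

# Simulated composite scores for one subscale at the three timepoints
# (T1 = baseline, T2 = post-intervention, T3 = 4-week follow-up).
t1 = rng.integers(40, 80, size=30).astype(float)
t2 = t1 + rng.integers(0, 10, size=30)   # simulated improvement
t3 = t2 + rng.integers(-2, 3, size=30)   # simulated maintenance

# Omnibus test for any difference across the three related samples
stat, p = friedmanchisquare(t1, t2, t3)

# Post hoc pairwise Wilcoxon signed-rank tests with a Bonferroni
# correction: 0.05 / 3 tests = 0.017, as in the paper.
alpha = 0.05 / 3
if p < 0.05:
    for a, b, label in [(t1, t2, "T1 vs T2"), (t2, t3, "T2 vs T3")]:
        w, pw = wilcoxon(a, b)
        print(label, "significant" if pw < alpha else "not significant")
```

Note that `wilcoxon` drops zero differences by default (`zero_method='wilcox'`), which matters when many participants' scores do not change between timepoints.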

The Perceived Safety Questionnaire was analysed using McNemar’s test comparing differences in proportions of responses across timepoints where the outcome was dichotomous. For categorical outcomes, Pearson chi-square test for univariate frequency distribution was used to determine change in proportions of responses across timepoints.
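McNemar's test for paired dichotomous responses depends only on the two discordant cells of the 2x2 table. The sketch below uses the exact binomial form and invented counts; the study ran its analyses in SPSS/Stata, so this is only an illustration of the computation.

```python
from scipy.stats import binom

def mcnemar_exact(table):
    """Exact McNemar test for a 2x2 table of paired yes/no responses
    (e.g., 'I monitored my post for unsafe content' at T1 vs T2).
    Only the discordant cells b (yes->no) and c (no->yes) inform it."""
    b, c = table[0][1], table[1][0]
    n = b + c
    if n == 0:
        return 1.0
    # Two-sided exact binomial p-value on the discordant pairs
    return min(1.0, 2 * binom.cdf(min(b, c), n, 0.5))

# Invented paired counts: rows = T1 (yes, no), columns = T2 (yes, no)
table = [[40, 5],
         [25, 30]]
p_mcnemar = mcnemar_exact(table)
print(f"exact p = {p_mcnemar:.4f}")  # far more no->yes than yes->no shifts
```

A small p-value here indicates that the proportion answering "yes" changed between the two timepoints, which is the comparison reported for the Perceived Safety Questionnaire.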

For measures included in the T1, T2 and T3 questionnaires, subgroup analyses by age and gender were also conducted using McNemar’s test, Pearson chi-square test and Mann Whitney U test. All analyses were conducted using SPSS v25 software package and Stata/IC Version 15.1.

Sample size.

As this was an exploratory feasibility study with a novel design, no power calculation was used to determine the sample size [44]. However, one of the main measures used in this study was developed with sample sizes ranging from 172 to 367 [41], and this range guided our target sample size.



A total of 6,840 young people responded to the advertisements and clicked through to the survey. Of these, 514 were eligible and completed the baseline questionnaire. Only participants who commenced the intervention and completed all three assessments were retained for analyses, resulting in a final sample of 189 young people (see Fig 4). Participant demographics and social media usage are reported in Table 2.

Fig 4. Participant flow diagram from enrolment, follow-up, and data analysis for the #chatsafe intervention.

Table 2. Demographic characteristics and baseline social media usage of the sample.

Eligible participants indicated one or more of the following: they had used social media to talk about suicide (n = 96, 50.79%), they had viewed suicide-related content on social media (n = 169, 89.42%), they had wanted to talk about suicide on social media but did not feel equipped to do so (n = 72, 38.10%), and/or they managed, or were part of, a suicide discussion, bereavement, or memorial group online (n = 11, 5.82%).

Participants retained for analyses did not significantly differ from those not retained on any demographic variables.

Key findings


Weekly acceptability data. The content themes for each week are shown in Table 1 and reactions to the content in Table 3. Weekly acceptability response rates decreased over time (from 84.66% in Week 1 to 59.79% in Week 11), followed by an increase at Week 12 (67.72%).

Table 3. Weekly evaluations of the social media content shared within the #chatsafe intervention.

Overall, participants reported that their most preferred piece of content was Week 2’s (an animation encouraging users to practise self-care; n = 157, 96.91%) and that they were most likely to share Week 9’s campaign content (an animation containing practical tips on how to safely memorialize someone who had died by suicide; n = 101, 80.80%). Participants also reported that the Week 7 content made them feel most positive (an animation encouraging users to pause and reflect before posting; n = 116, 88.55%).

Conversely, participants reported that their least preferred piece of content was from Week 6 (a video on self-care; n = 20, 14.93%), that they would be least likely to share the content from Week 8 (a video on self-care; n = 36, 27.91%), and that the Week 5 content (a text tile about safe posting on social media; n = 15, 11.28%) was associated with the most negative feelings.

Few associations were observed between individual-level characteristics and participant evaluations of the #chatsafe campaign content. Increasing age was associated with more negative feelings towards Week 1’s campaign content (OR = 0.822, 95% CI 0.764–0.995, Wald’s χ2(1) = 4.147, p = 0.042). For Week 3’s campaign content, increasing age was associated with both more negative evaluations (OR = 0.861, 95% CI 0.748–0.990, Wald’s χ2(1) = 4.431, p = 0.035) and more negative feelings (OR = 0.850, 95% CI 0.740–0.975, Wald’s χ2(1) = 5.363, p = 0.021). Increasing age was associated with more positive evaluations of Week 7’s campaign content (OR = 1.178, 95% CI 1.004–1.382, Wald’s χ2(1) = 4.058, p = 0.044).

No gender differences were observed.

Post-intervention evaluation data. At the end of the intervention (T2), 80% of participants (n = 150) reported that they found the campaign content helpful for themselves, with 32% (n = 60) rating it ‘moderately’ helpful, 38% (n = 72) ‘very’ helpful, and 10% (n = 18) ‘extremely’ helpful. Participants also thought the content would be either ‘very’ or ‘extremely’ helpful for others (44%, n = 83 and 29%, n = 55, respectively). Forty-four percent (n = 83) reported that their confidence talking online about suicide was ‘moderately’ improved as a result of the intervention, 30% (n = 57) reported it was ‘highly’ improved, and 11% (n = 21) reported that the intervention ‘extremely’ improved their confidence.

Finally, participants were asked if the content had a negative effect on them, or if they believed it would have a negative effect on others. Seventy-eight percent of participants (n = 148) said that the campaign content did ‘not at all’ have a negative impact, 17% (n = 33) said that it ‘somewhat’ impacted them and 4% (n = 7) said it ‘moderately’ impacted them. When asked about others, 40% (n = 76) believed that the content would ‘not at all’ have a negative impact on others and a further 53% (n = 101) believed the content could have a ‘somewhat’ negative impact on others.


Adverse events. During the study period, no SAEs and six AEs were recorded. Of the six AEs, three involved participants opting to withdraw from the study and three involved participants contacting the study team via direct message or a comment on the #chatsafe social media platforms to express their own (or someone else’s) experience of past or current suicidality. These participants were contacted by a member of the study team and provided with helplines and/or the opportunity to speak with the study psychologist.

In addition, out of a maximum potential of 5,160 weekly responses, 2,451 responses were recorded. Of these, only four responses included a withdrawal request (with no distress recorded at follow up) and 31 included a snooze request. At no point during the study was it deemed necessary to remove any content from the #chatsafe social media pages.


Recruitment and adherence. The response rate for study completion (defined as completing T1, T2 and T3) was 189/430 (43.95%). Despite a high attrition rate, this study was able to recruit a sufficient sample size to investigate the feasibility and acceptability of the #chatsafe intervention.

Reach. Throughout the 12-week #chatsafe campaign that ran parallel to this study, 1,430,789 individuals were reached through the #chatsafe social media platforms, both through organic sharing of the content and paid advertising [45]. The #chatsafe content was shown a total of 3,796,978 times on social media between October 2019 and January 2020. Snapchat and Instagram were the two best performing platforms, followed by Facebook, YouTube and Twitter. Each Instagram post received a mean of 67 likes and each animation/video was watched an average of 365 times. Videos on the #chatsafe YouTube page were viewed 151,023 times.

Willingness to intervene against suicide online.

Table 4 presents the median scores for the Willingness to Intervene Against Suicide and Internet Self-Efficacy measures for the whole sample, and Table 5 presents the sub-group analyses by gender and age. Only data for male and female participants could be analyzed, as the sample of participants with other gender identities was too small to allow for robust analyses (see Table 2).

Table 4. Ability and willingness to intervene against suicide online and internet self-efficacy for the entire sample.

Table 5. Sub-group analyses for ability and willingness to intervene against suicide online.

It should be noted that participants who completed all three assessments demonstrated a greater increase in their intent to intervene against suicide compared to participants who only completed the first two assessments (U = 4554.00, p = 0.036). No other differences were observed in any of the outcome variables.

Ability to intervene. There was a change in participants’ perceived ability to respond to someone who may be suicidal from T1 to T3, χ2 (2) = 75.57, p< .001. An increase in ability was observed from T1 to T2 by 9.46% (Z = -6.744, p< .001). No changes were observed from T2 to T3. Increases in ability to intervene from T1 to T2 were observed in males by 7.50% (Z = -3.96, p<0.001) and females by 9.46% (Z = -5.41, p<0.001), and in both the < 20-years and ≥ 20-years age groups by 7.50% and 9.33%, respectively (<20 years: Z = -6.07, p<0.001 and ≥20 years: Z = -2.96, p = 0.003). However, the ≥ 20-years age group reported a greater ability to intervene at all timepoints compared to the < 20-years age group.

Intent to intervene. There was a change in participants’ intent to respond to someone who may be suicidal from T1 to T3, χ2 (2) = 36.36, p < .001. There was an increase in participants’ intent to intervene from T1 to T2 by 4.76% (Z = -6.324, p < .001) and no change from T2 to T3. Increases in intent to intervene were observed in males by 5.88% (Z = -4.37, p<0.001) and females by 5.71% (Z = -4.61, p<0.001), however female participants indicated a greater intent to intervene across all three timepoints. Increases were also observed in both the < 20-years age group by 3.57% (Z = -5.41, p<0.001) and ≥ 20-years age group by 5.95% (Z = -3.31, p = 0.001).

Internet self-efficacy, confidence and safety when communicating about suicide online.

Internet self-efficacy. Changes were observed across the timepoints in three subscales of this measure: reactive/generative (χ2 (2) = 16.40, p < .001), differentiation (χ2 (2) = 17.91, p < .001), and organisation self-efficacy (χ2 (2) = 14.38, p = .001). Increases were observed in females and in the < 20-years age group for reactive/generative, differentiation, and organisation self-efficacy between T1 and T2. There was also an increase for males in reactive/generative self-efficacy from T1 to T2. No changes were observed from T2 to T3. See S1 Table for subgroup analyses for each domain.

Perceived confidence and safety. At all three timepoints, the majority of participants responded that they ‘rarely’ or ‘never’ created, shared or liked posts involving suicide content. A change was observed in the distribution of responses from T1 to T2, χ2 (4) = 15.49, p = .004, and from T2 to T3, χ2 (4) = 17.42, p = .002. The proportion of participants who indicated that in the past month they ‘sometimes’ created, liked or shared a post involving suicide content decreased between timepoints, while the proportion who indicated ‘never’ increased between timepoints. No age or gender differences were observed.

Of those who indicated that they did create a post involving suicide-related content at each of the timepoints, the proportion of participants who monitored their post for unsafe content increased from T1 to T2, χ2 (1) = 58.84, p < .001, with no change from T2 to T3. The majority of participants in both the < 20-years and ≥ 20-years age groups indicated that they monitored their posts across all timepoints. Monitoring also increased in both males and females from T1 to T3. When asked how they responded to unsafe content, the most common actions were to delete or hide the post and/or to contact the person who made it.

Table 6 presents the types of online suicide-related social media content seen by participants on their social media feeds during the course of the study. The most common form of suicide-related content that participants reported seeing at T1 was statements that appeared to ‘deliberately seek to trigger difficult or distressing emotions in other people’. The majority of participants reported viewing at least one form of suicide-related content during the course of the study, most commonly ‘graphic descriptions of suicide’.

Table 6. Forms of suicide-related social media content seen by participants across timepoints.

As shown in Table 7, although participants reported frequently viewing online content related to suicidal behaviour, the majority reported that the content did not make them believe that the creator of the post was at risk of suicide. Those who had seen a post that concerned them were asked how they responded. There was a difference in responses from T1 to T2, χ2 (6) = 18.88, p = .004, and from T2 to T3, χ2 (6) = 20.29, p = .002. Most apparent was the increase over time in the proportion of participants who responded directly to the person. Subgroup analyses indicated that participants in both age groups were most likely to reach out to the person or report the post to the social media platform (see S2 Table). At all three timepoints, female participants most commonly responded by directly contacting the creator of the post. At T1 and T2, male participants were more likely to report the post to the social media platform, but at T3 they were more likely to contact the person directly. Participants were more likely to report that they felt capable of responding to someone at risk immediately post-intervention (n = 107, 71%) compared to baseline (n = 102, 63%).

Table 7. Select questions and responses from the Perceived Safety questionnaire across timepoints.

Discussion
This was the first study to examine a suicide prevention campaign specifically designed for young people and delivered entirely through social media. The study found the #chatsafe campaign to be acceptable, safe and feasible. Following the campaign, participants reported being more willing to intervene against suicide, and reported greater self-efficacy, confidence and perceived safety when communicating about suicide on social media. The #chatsafe intervention also appeared to improve aspects of online behaviour: following the intervention, participants reported being less likely to share suicide-related content, more likely to monitor their posts for harmful content, and more likely to contact someone directly if they believed that person was at risk. These improvements were not only evident immediately following delivery of the #chatsafe intervention but were maintained at the four-week follow-up, suggesting that the impact of the intervention has the potential to be sustained over time.

Findings from this study also support previous reports that young people are viewing online suicide-related content at an increasing rate [22], including graphic depictions of self-harm, which are widely considered to be potentially harmful [29]. Although survey items in this study mostly referred to suicide-related content created by participants’ peers or online networks, suicide-related content can appear on young people’s news feeds without prior warning. A recent example was the live streaming of a suicide on the social media platform TikTok, which was viewable by the platform’s estimated 328 million users under the age of 24 [46, 47]. Prior studies report that exposure to unsafe and poorly moderated suicide-related content was associated with an increase in young people experiencing suicidal ideation and suicide attempts [48]. This speaks to the need for young people to feel equipped to manage the content they encounter, and the findings from this study suggest that the #chatsafe intervention can play a useful part in this process.


Until now, despite significant debate about the relationship between social media and young people’s mental health [49–54], there has been a paucity of research examining the potential effectiveness of social media interventions in youth suicide prevention, and much of the existing evidence pertaining to safe communication about suicide has arisen from studies involving mainstream media [53–55]. However, it can be argued that young people use social media to communicate about suicide in fundamentally different ways than they use mainstream media. Critically, young people tend not to use social media to consume news; rather, their online behaviours are more dynamic: they build a sense of community by sharing their feelings with others who have had similar experiences, they seek help and provide help to others, and they express grief for people who have died by suicide [16, 17, 55]. As a result, the knowledge gained from previous research examining mainstream media may not apply here. Our findings suggest that, rather than being harmful, delivering suicide prevention content via social media can be acceptable, safe and feasible. Moreover, it may be associated with notable benefits.

After receiving the 12-week #chatsafe intervention, participants in this study reported an increase in their willingness to intervene against suicide online, in aspects of their internet self-efficacy, and in their perceived confidence and safety when communicating online about suicide. Although these improvements were reported across the sample, females and participants aged 20 years and over recorded the greatest increases in willingness to intervene, suggesting that slightly different content may resonate better with younger people and males. Moreover, females and younger participants recorded the greatest increases in internet self-efficacy, particularly in their ability to organize information online (e.g., retain control over the information they do or do not want to see), their ability to find and share information, and their perceived ability to create appropriate content to share with others. Again, this suggests that different types of content resonate differently across the population.

Most of the content was well received by participants, and the most preferred pieces of content were animated videos that encouraged users to practice self-care online, provided practical tips on how to talk safely about someone who has died by suicide, and encouraged them to pause and reflect before posting suicide-related content on social media. These positive evaluations support findings from previous public health campaigns suggesting that content that evokes emotional responses and assists learning resonates most strongly with young people. Further, considerable attention has been paid to the potential negative impact of media reporting following the suicide of a public figure or a member of the community, with research suggesting that exposure to sensationalist or graphic content can cause harm and potentially contribute to the development or maintenance of suicide clusters [56, 57]. Thus, access to information on safe ways to communicate about someone who has died by suicide might go some way towards mitigating the risk of future suicide clusters.

In contrast, and somewhat surprisingly, the content that participants evaluated least favorably, and were least likely to share, related to self-care. As described above, self-care content, rather than content specific to suicide, was sent to participants every second week. The reason for this was to reduce the risk of over-exposure to suicide-related content and any associated distress or risk. However, the findings suggest that young people were more likely to share the more ‘active’ content that included practical tips and advice than the more benign self-care content. This reiterates the findings from our earlier study on the development of the #chatsafe social media content [30], in which young people specifically requested that the #chatsafe campaign not be simply “another awareness campaign” but actually provide them with tangible skills to help themselves and each other. It also supports earlier work exploring the effectiveness of digital health interventions, which reported that participants favoured content that taught them something they did not already know [58, 59]. While there have been a number of suicide awareness campaigns previously, they appear to have had limited capacity to shift behaviour [35, 60]; a campaign such as the one reported here may therefore have greater utility for young people.

The emphasis on self-care content was part of the safety strategy associated with this study. Unique safety and ethical challenges exist when including young people in suicide research, and these may be amplified when interventions are delivered online [12]. However, this study found that the #chatsafe intervention was not only well received by young people but also safe, and there are likely a number of reasons for this. First, in addition to self-care and general wellbeing content being shared throughout the intervention, a robust safety protocol was established whereby any time an AE was recorded, the research team met with the safety committee. This ensured that the safety protocol designed for this study was sufficient and established that the AEs were not attributable to the #chatsafe intervention. Second, the campaign was delivered universally, and as such did not set out to target those at elevated risk. That being said, although participants largely felt that the campaign did not have a negative impact on themselves, it is important to recognize that some participants believed the content may have a negative impact on others. This likely reflects young people’s awareness that exposure to suicide-related content online can cause distress and that suicide remains a sensitive topic [30]. Together, this adds to a growing body of literature suggesting that it can be safe to involve young people in suicide prevention research, including research testing online interventions [61]. It also suggests that social media campaigns can be both safe and potentially effective as a suicide prevention strategy in the future.

A key benefit of social media is its capacity to reach large numbers of people quickly. The metrics relating to the campaign that ran in parallel to this study suggest that approximately 1.5 million young people were exposed to the #chatsafe content in a three-month period. Following the current study, we received funding to adapt the guidelines and social media content for an additional 10 regions around the world, which reached a further 1 million individuals over a six-week period [45]. This has clear implications for the widescale delivery of information relating to suicide prevention, both in Australia and worldwide. It also raises the question of whether campaigns such as #chatsafe could be used as a way of directing young people to clinical services, as part of a real-time response following the suicide of a young person, or as a means of providing health-related information, particularly in low-resource areas.

This could be critical, as it is known that many young people at risk of suicide do not seek professional help, and among those who do, many are turned away without receiving adequate care [62]. Whilst the Australian government, at both state and federal levels, is attempting to address this by providing additional resources for mental health services [63, 64], there is still an urgent need for community-based interventions that can reach large numbers of young people quickly and provide them with much-needed skills and information.

Strengths and limitations

A key strength of this study was that the intervention was entirely co-designed with young people [30]. Although the importance of co-design is becoming increasingly recognized [65, 66], it remains rare in youth suicide prevention [16, 67]. In this study, young people were active partners, and this likely contributed to the acceptability, safety and impact of the #chatsafe intervention.

An additional strength relates to feasibility: the study was able to recruit an appropriate sample size, and whilst a high attrition rate was recorded, this is not uncommon in psycho-educational online interventions [68]. Also, while signing up to a study such as this is quite simple, the burden associated with weekly or time-based responses often results in higher attrition rates, particularly among younger participants [69, 70] and in longer studies [71]. Despite this, a large sample was initially recruited and a sufficient sample size was retained across the 20-week study period. It would be beneficial for future studies to examine why participants drop out, in order to establish that attrition relates to burden rather than to safety or acceptability.

There are, however, a number of limitations. The first relates to study design. This was an exploratory, not a controlled, study, and as such it is not certain that the changes observed were the result of the #chatsafe intervention. That said, the findings from this feasibility study will inform a larger, controlled study due to commence in 2021.

Second, the data collected were entirely self-reported, which poses issues relating to participant recall and the subjectivity of the data. As with all studies investigating social media behaviour, more objective measures of social media usage are required, as inaccurate retrospective self-reports of behaviour are common in internet-based research [72]. However, although the T1, T2 and T3 surveys relied on retrospective reporting, the weekly assessments included in this study had methodological strengths. The short momentary assessment delivered each week minimised participant recall bias and allowed the research team to collect reactions to the intervention content in real time. This was particularly useful when attempting to monitor levels of distress, risk, and engagement among participants in a novel intervention. To this end, future studies should attempt to collect more objective measures of social media behaviour and minimise the time frame between survey responses and the behaviour being investigated [69, 73].

Third, the sample recruited to this study was not fully representative of the Australian population, and self-selection may have biased the findings. In particular, there was a higher proportion of females and a higher proportion of non-heterosexual young people than in the general community. Although the national campaign managed to reach similar numbers of males and females, young males were underrepresented in the study sample. This is not unusual in suicide prevention research [16] but does warrant attention in future studies, particularly given the over-representation of males in suicide statistics [1]. Also, despite our having partnered closely with young people from culturally and linguistically diverse, and Aboriginal and Torres Strait Islander, backgrounds in the co-design process [30], both of these groups were under-represented in the study. At eligibility screening, over half of the sample reported having previously used social media to talk about suicide, and just over a third indicated that they had wanted to talk about suicide on social media but did not feel equipped to do so. Recruiting young people who already had experience in using social media to communicate about suicide may have produced a sample who self-selected into the study due to an interest in the subject matter. As a result, these findings may not apply to all young people.

Fourth, although the data indicate an increase in perceived ability, intent, confidence and safety when communicating online about suicide, it is unknown whether this translated into actual behaviour. The next phase of this study will address this limitation by objectively coding social media data collected from participants. This will allow a direct comparison of the nature of suicide-related communication prior to, during, and after exposure to the #chatsafe campaign content.

There are also difficulties in measuring the precise level of engagement by study participants, as social media metrics could not differentiate between study participants and the general public, and there was no way of knowing how much or how little the participants interacted with the #chatsafe content. While it was possible to see whether the direct messages sent to participants’ social media accounts were ‘opened’, ‘read’, or ‘received’, it was not possible to measure the amount of time participants spent viewing the content, or whether they clicked through to the #chatsafe website for more information. Indeed, this reflects the amorphous nature of social media platforms. However, the content did reach a large number of people in a short period of time and there was no indication that it was harmful to anyone who came across it. Finally, the adoption of an emoji scale, while appropriate and familiar to a younger demographic, did make it difficult to interpret specific mood states [74].

Despite its limitations, this study has demonstrated that it is feasible, safe and acceptable to use social media for the purpose of suicide prevention. It has also provided promising evidence for the potential impact of social media campaigns on increasing young people’s digital safety when it comes to suicide prevention. Although there are challenges associated with measuring real-world interventions in real time and across uncontrolled settings [12, 75, 76], this study has provided important data which will inform a larger-scale and more rigorous study.

Conclusion
Overall, findings from this study present a promising picture of the acceptability and impact of a universal suicide prevention campaign delivered through social media. Until now, little was known about the potential benefits of a social media campaign for suicide prevention. This study has demonstrated that it is safe, acceptable and feasible to share youth suicide prevention information via social media, and its findings also indicate that the #chatsafe intervention may have increased young people’s perceived capacity to intervene against suicide online, their internet self-efficacy and their perceived safety. The next step will be to examine the impact of the #chatsafe intervention on actual social media behaviour using a controlled study design. In the meantime, however, the use of social media to educate and equip young people with suicide prevention information appears to be safe and effective.

Supporting information

S1 Table. Internet self-efficacy subgroup analyses by age and gender.


S2 Table. Select questions from the Perceived Safety Questionnaire subgroup analyses by age and gender.


Acknowledgments
The authors would like to thank all of the young people who participated in this study. They would also like to thank their study partners, Portable.

References
  1. 1. Australian Bureau of Statistics. Causes of Death, Australia: Statistics on the number of deaths, by sex, selected age groups, and cause of death classified to the International Classification of Diseases (ICD) Canberra, Australia 2020.
  2. 2. WHO. Suicide Data 2019 2019
  3. 3. Naghavi M. Global burden of disease self-harm collaborators: global, regional, and national burden of suicide mortality 1990 to 2016: systematic analysis for the global burden of disease study 2016. BMJ. 2019;364(l94).
  4. 4. Hill NT, Witt K, Rajaram G, McGorry PD, Robinson J. Suicide by young Australians, 2006–2015: a cross-sectional analysis of national coronial data. Med J Aust. 2021;214(3):133–9. pmid:33236400
  5. 5. Yellow Social Media Report 2020. Part One—Consumers. Yellow.
  6. 6. Anderson M, Jiang J. Teens, social media and technology 2018. Pew Reseach Center; 2018.
  7. 7. Ridout B, Campbell A. The use of social networking sites in mental health interventions for young people: systematic review. Journal of medical Internet research. 2018;20(12):e12244. pmid:30563811
  8. 8. Gibson K, Trnka S. Young people’s priorities for support on social media:“It takes trust to talk about these issues”. Computers in Human Behavior. 2020;102:238–47.
  9. 9. Gibson K, Cartwright C. Young people’s experiences of mobile phone text counselling: Balancing connection and control. Children and youth services review. 2014;43:96–104.
  10. 10. Clarke AM, Kuosmanen T, Barry MM. A systematic review of online youth mental health promotion and prevention interventions. J Youth Adolesc. 2015;44(1):90–113. pmid:25115460
  11. 11. Callahan A, Inckle K. Cybertherapy or psychobabble? A mixed methods study of online emotional support. British Journal of Guidance & Counselling. 2012;40(3):261–78.
  12. 12. Bailey E, Alvarez-Jimenez M, Robinson J, D’Alfonso S, Nedeljkovic M, Davey CG, et al. An Enhanced Social Networking Intervention for Young People with Active Suicidal Ideation: Safety, Feasibility and Acceptability Outcomes. Int J Environ Res Public Health. 2020;17(7):2435. pmid:32260111
  13. 13. Luxton DD, June JD, Fairall JM. Social media and suicide: a public health perspective. Am J Public Health. 2012;102 Suppl 2(S2):S195–200. pmid:22401525
  14. 14. Narang P, Lippmann SB. The Internet: its role in the occurrence and prevention of suicide. Internet and Suicide New York, NY: Nova Science Publishers. 2009:13–20.
  15. 15. Rice S, Robinson J, Bendall S, Hetrick S, Cox G, Bailey E, et al. Online and Social Media Suicide Prevention Interventions for Young People: A Focus on Implementation and Moderation. J Can Acad Child Adolesc Psychiatry. 2016;25(2):80–6. pmid:27274743
  16. 16. Robinson J, Cox G, Bailey E, Hetrick S, Rodrigues M, Fisher S, et al. Social media and suicide prevention: a systematic review. Early Interv Psychiatry. 2016;10(2):103–21. pmid:25702826
  17. 17. Robinson J, Rodrigues M, Fisher S, Bailey E, Herrman H. Social media and suicide prevention: findings from a stakeholder survey. Shanghai Arch Psychiatry. 2015;27(1):27–35. pmid:25852253
  18. 18. Gritton J, Rushing SC, Stephens D, Ghost Dog T, Kerr B, Moreno MA. Responding to concerning posts on social media: Insights and solutions from American Indian and Alaska Native youth. Am Indian Alsk Native Ment Health Res. 2017;24(3):63–87. pmid:29161455
  19. 19. Montague AE, Varcin KJ, Simmons MB, Parker AG. Putting technology into youth mental health practice. SAGE Open. 2015;5(2):2158244015581019-. pmid:26137394
  20. 20. Budenz A, Klassen A, Purtle J, Yom Tov E, Yudell M, Massey P. Mental illness and bipolar disorder on Twitter: implications for stigma and social support. J Ment Health. 2020;29(2):191–9. pmid:31694433
  21. 21. Hawton K, Hill NTM, Gould M, John A, Lascelles K, Robinson J. Clustering of suicides in children and adolescents. Lancet Child Adolesc Health. 2020;4(1):58–67. pmid:31606323
  22. 22. Carlyle KE, Guidry JP, Williams K, Tabaac A, Perrin PB. Suicide conversations on Instagram™: contagion or caring? Journal of Communication in Healthcare. 2018;11(1):12–8.
  23. 23. Australian Psychological Society (APS). Digital me: A survey exploring the effect of social media and digital technology on Australians’ wellbeing
  24. 24. Centre for Policy Alternatives. Suicide sensitive journalism handbook. Centre for Policy Alternatives (Sri Lanka) & PressWise Trust (UK) Sri Lanka & UK; 2013.
  25. 25. Recommendations on suicide reporting and online information dissemination for media professionals. The University of Hong Kong: Centre for Suicide Research and Prevention; 2013.
  26. 26. WHO. Preventing suicide: a resource for media professionals‐update 2017. Geneva World Health Organization 2017.
  27. 27. National Suicide Prevention Alliance. Responding to suicidal content online: Best practice guidelines. UK; 2016.
  28. 28. Bohanna I, Wang X. Media guidelines for the responsible reporting of suicide. Crisis. 2012;33(4):190–98. pmid:22713977
  29. 29. Robinson J, Hill NT, Thorn P, Battersby R, Teh Z, Reavley NJ, et al. The# chatsafe project. Developing guidelines to help young people communicate safely about suicide on social media: A Delphi study. PLoS One. 2018;13(11):e0206584. pmid:30439958
  30. 30. Thorn P, Hill NT, Lamblin M, Teh Z, Battersby-Coulter R, Rice S, et al. Developing a suicide prevention social media campaign with young people (The# Chatsafe project): co-design approach. JMIR mental health. 2020;7(5):e17520. pmid:32391800
  31. 31. Stead M, Angus K, Langley T, Katikireddi SV, Hinds K, Hilton S, et al. Mass media to communicate public health messages in six health topic areas: a systematic review and other reviews of the evidence. Public Health Research. 2019. pmid:31549082
  32. 32. Stanley N, Ellis J, Farrelly N, Hollinghurst S, Bailey S, Downe S. “What matters to someone who matters to me”: using media campaigns with young people to prevent interpersonal violence and abuse. Health Expectations. 2017;20(4):648–54. pmid:27813210
  33. 33. Ftanou M, Cox G, Nicholas A, Spittal MJ, Machlin A, Robinson J, et al. Suicide Prevention Public Service Announcements (PSAs): Examples from Around the World. Health Commun. 2017;32(4):493–501. pmid:27308843
  34. 34. Acosta J, Ramchand R, Becker A. Best practices for suicide prevention messaging and evaluating California’s "Know the signs" media campaign. Crisis. 2017;38(5):287–99. pmid:28228062
  35. 35. Pirkis J, Rossetto A, Nicholas A, Ftanou M. Advancing knowledge about suicide prevention media campaigns. Crisis. 2016;37(5):319–22. pmid:27868447
  36. 36. Klimes-Dougan B, Lee CY. Suicide prevention public service announcements: perceptions of young adults. Crisis. 2010;31(5):247–54. pmid:21134844
  37. 37. Klimes-Dougan B, Yuan C, Lee S, Houri AK. Suicide prevention with adolescents: considering potential benefits and untoward effects of public service announcements. Crisis. 2009;30(3):128–35. pmid:19767268
  38. 38. Jenner E, Jenner LW, Matthews-Sterling M, Butts JK, Williams TE. Awareness effects of a youth suicide prevention media campaign in Louisiana. Suicide Life Threat Behav. 2010;40(4):394–406. pmid:20822366
  39. 39. Hoffmann TC, Glasziou PP, Boutron I, Milne R, Perera R, Moher D, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ. 2014;348. pmid:24609605
  40. 40. Orygen. Data safety monitoring board charter. Melbourne, Australia. 2019.
  41. 41. Aldrich RS, Harrington NG, Cerel J. The Willingness to Intervene Against Suicide Questionnaire. Death Stud. 2014;38(1–5):100–8. pmid:24517708
  42. 42. Kim Y, Glassman M. Beyond search and communication: Development and validation of the Internet Self-efficacy Scale (ISS). Computers in Human Behavior. 2013;29(4):1421–9.
  43. 43. Wisniewski P, Xu H, Rosson MB, Perkins DF, Carroll JM, editors. Dear diary: Teens reflect on their weekly online risk experiences. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems; 2016.
  44. 44. Jones SR, Carley S, Harrison M. An introduction to power and sample size estimation. Emerg Med J. 2003;20(5):453–8. pmid:12954688
  45. 45. Robinson J, Teh Z, Lamblin M, Hill NT, La Sala L, Thorn P. Globalization of the# chatsafe guidelines: Using social media for youth suicide prevention. Early Intervention in Psychiatry. 2020.
  46. 46. Mohsin M. 10 TikTok statistics that you need to know in 2020 [infographic]: Oberlo
  47. 47. Wakefield J. TikTok tries to remove widely shared suicide clip BBC News
  48. 48. Swedo EA, Beauregard JL, de Fijter S, Werhan L, Norris K, Montgomery MP, et al. Associations between social media and suicidal behaviors during a youth suicide cluster in Ohio. Journal of Adolescent Health. 2021;68(2):308–16. pmid:32646827
  49. Przybylski AK, Weinstein N. A large-scale test of the goldilocks hypothesis: quantifying the relations between digital-screen use and the mental well-being of adolescents. Psychological Science. 2017;28(2):204–15. pmid:28085574
  50. Hollis C, Livingstone S, Sonuga-Barke E. The role of digital technology in children and young people’s mental health – a triple-edged sword? Journal of Child Psychology and Psychiatry. 2020;61(8):837–41. pmid:32706126
  51. O’Reilly M, Dogra N, Whiteman N, Hughes J, Eruyar S, Reilly P. Is social media bad for mental health and wellbeing? Exploring the perspectives of adolescents. Clinical Child Psychology and Psychiatry. 2018;23(4):601–13. pmid:29781314
  52. Rideout V, Fox S. Digital health practices, social media use, and mental well-being among teens and young adults in the US. 2018.
  53. Nesi J. The impact of social media on youth mental health: challenges and opportunities. N C Med J. 2020;81(2):116–21. pmid:32132255
  54. Abi-Jaoude E, Naylor KT, Pignatiello A. Smartphones, social media use and youth mental health. CMAJ. 2020;192(6):E136–E41. pmid:32041697
  55. Krysinska K, Andriessen K. Online memorialization and grief after suicide: an analysis of suicide memorials on the Internet. Omega (Westport). 2015;71(1):19–47. pmid:26152025
  56. Robertson L, Skegg K, Poore M, Williams S, Taylor B. An adolescent suicide cluster and the possible role of electronic communication technology. Crisis. 2012;33(4):239–45. pmid:22562859
  57. Marchant A, Brown M, Scourfield J, Hawton K, Cleobury L, Dennis M, et al. A content analysis and comparison of two peaks of newspaper reporting during a suicide cluster to examine implications for imitation, suggestion, and prevention. Crisis. 2020;41(5):398–406. pmid:32141331
  58. Garrido S, Millington C, Cheers D, Boydell K, Schubert E, Meade T, et al. What works and what doesn’t work? A systematic review of digital mental health interventions for depression and anxiety in young people. Frontiers in Psychiatry. 2019;10:759. pmid:31798468
  59. Lederman R, Wadley G, Gleeson J, Bendall S, Álvarez-Jiménez M. Moderated online social therapy: designing and evaluating technology for mental health. ACM Transactions on Computer-Human Interaction (TOCHI). 2014;21(1):1–26.
  60. Torok M, Calear A, Shand F, Christensen H. A systematic review of mass media campaigns for suicide prevention: understanding their efficacy and the mechanisms needed for successful behavioral and literacy change. Suicide and Life‐Threatening Behavior. 2017;47(6):672–87. pmid:28044354
  61. Blades CA, Stritzke WG, Page AC, Brown JD. The benefits and risks of asking research participants about suicide: a meta-analysis of the impact of exposure to suicide-related content. Clinical Psychology Review. 2018;64:1–12. pmid:30014862
  62. Robinson J, Bailey E, Brown V, Cox G, Hooper C. Raising the bar for youth suicide prevention. Melbourne: Orygen, The National Centre of Excellence in Youth Mental Health; 2016.
  63. $1.1 billion to support more mental health, Medicare and domestic violence services [press release]. Commonwealth of Australia; 2020.
  64. Ilanbey S. Victoria’s ‘broken’ mental health system gets $870m lifeline.
  65. Hickie IB, Davenport TA, Burns JM, Milton AC, Ospina-Pinillos L, Whittle L, et al. Project Synergy: co-designing technology-enabled solutions for Australian mental health services reform. Med J Aust. 2019;211 Suppl 7:S3–S39. pmid:31587276
  66. Thabrew H, Fleming T, Hetrick S, Merry S. Co-design of eHealth interventions with children and young people. Front Psychiatry. 2018;9:481. pmid:30405450
  67. Bailey E, Teh Z, Bleeker C, Simmons M, Robinson J. Youth partnerships in suicide prevention research: A failed investigator survey. Early Interv Psychiatry. 2020. pmid:33181863
  68. Muñoz RF, Bunge EL, Chen K, Schueller SM, Bravin JI, Shaughnessy EA, et al. Massive open online interventions: a novel model for delivering behavioral-health services worldwide. Clinical Psychological Science. 2016;4(2):194–205.
  69. Christensen TC, Barrett LF, Bliss-Moreau E, Lebo K, Kaschub C. A practical guide to experience-sampling procedures. Journal of Happiness Studies. 2003;4(1):53–78.
  70. Rintala A, Wampers M, Myin-Germeys I, Viechtbauer W. Momentary predictors of compliance in studies using the experience sampling method. Psychiatry Research. 2020;286:112896. pmid:32146247
  71. Van Berkel N, Ferreira D, Kostakos V. The experience sampling method on mobile devices. ACM Computing Surveys (CSUR). 2017;50(6):1–40.
  72. Scharkow M. The accuracy of self-reported internet use — a validation study using client log data. Communication Methods and Measures. 2016;10(1):13–27.
  73. Kross E, Verduyn P, Demiralp E, Park J, Lee DS, Lin N, et al. Facebook use predicts declines in subjective well-being in young adults. PLoS ONE. 2013;8(8):e69841. pmid:23967061
  74. Bai Q, Dan Q, Mu Z, Yang M. A systematic review of emoji: current research and future perspectives. Front Psychol. 2019;10:2221. pmid:31681068
  75. Robinson J, Cox G, Malone A, Williamson M, Baldwin G, Fletcher K, et al. A systematic review of school-based interventions aimed at preventing, treating, and responding to suicide-related behavior in young people. Crisis: The Journal of Crisis Intervention and Suicide Prevention. 2013;34(3):164.
  76. Robinson J, Hetrick SE, Martin C. Preventing suicide in young people: systematic review. Aust N Z J Psychiatry. 2011;45(1):3–26. pmid:21174502