Abstract
Background
Conceptually, there may be some overlap between measures of voice hearing experiences and measures assessing broader outcome domains. Despite this possibility, it is unknown whether measures of voice hearing and broader outcomes are assessing similar or separate concepts. This study aimed to examine whether measures of voice hearing are distinct from measures of emotional states, well-being and recovery.
Methods
Study 1 examined whether the Hamilton Program for Schizophrenia Voices Questionnaire is distinct from the Depression Anxiety Stress Scale-21 and the Short Warwick-Edinburgh Mental Well-being Scale using secondary data (n = 401). Study 2 examined whether the Psychotic Symptoms Rating Scale for Auditory Hallucinations is distinct from the Hospital Anxiety Depression Scale and the CHoice of Outcome in Cbt for psychoses short form using baseline data from two randomized controlled trials (n = 187).
Results
In Study 1, a six-factor model was found to be reasonable and accounted for 54.04% of the total variance (F1: 13%, F2: 11.26%, F3: 8.55%, F4: 4.04%, F5: 7.30%, F6: 9.9%). In Study 2, a five-factor model was identified and accounted for 39.99% of the total variance (F1: 15.52%, F2: 7.47%, F3: 6.53%, F4: 6.70%, F5: 3.78%). Within both studies, the items from the voice hearing measures loaded uniquely onto factors that contained no items from other measures.
Citation: Loizou S, Field AP, Fowler D, Hayward M (2025) Are measures of voice hearing distinct from measures of emotional states, recovery and well-being? A factor analysis study. PLoS One 20(10): e0333069. https://doi.org/10.1371/journal.pone.0333069
Editor: Hong Wang Fung, The Hong Kong Polytechnic University, HONG KONG
Received: April 25, 2025; Accepted: September 9, 2025; Published: October 7, 2025
Copyright: © 2025 Loizou et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: No new data were created. This research study used existing datasets (Approve: Hayward et al., 2020; VIS: Strauss et al., in prep.; GiVE2: Hayward et al., 2021; M4V: Chadwick et al., 2016).
Funding: This work was supported by the South-east Network for Social Sciences (https://www.senss.ac.uk) Supervisor-led Collaborative Studentship sustained by the Economic and Social Research Council and Sussex Partnership NHS Foundation Trust (ES/P00072X/1). The South-east Network for Social Sciences had no role in the design, analysis, write-up of the manuscript or the decision to submit for publication.
Competing interests: The authors have declared that no competing interests exist.
Introduction
A symptom-specific approach to the treatment of psychotic experiences has been developed in response to the limitations identified in trials of CBTp, such as small-to-medium effect sizes, heterogeneity in presentation and reliance on broad severity outcome measures that may not capture changes generated by CBTp [1,2]. By targeting mechanisms linked to individual symptoms, this approach may reduce heterogeneity and enable the use of outcome measures that are more closely aligned with the aims of CBTp. However, recent studies of psychological interventions for distressing voices have used a range of outcomes, suggesting differing understandings among experts about the changes that may be generated by these interventions [3]. This heterogeneity of outcomes has also impeded the comparability of interventions to determine whether symptom-specific approaches are superior in efficacy to general CBTp [1]. This can create confusion for clinicians and patients as they seek to choose from a range of treatment options.
The confusion for clinicians and patients has been exacerbated by the array of measures that have been used to capture outcomes. In some cases, measures of anxiety and depression (e.g., Depression Anxiety Stress Scale-21 [DASS-21] [4], Hospital Anxiety Depression Scale [HADS] [5]) have been used to index the emotional impact of voices. However, it is unclear whether measures of voice hearing (e.g., Hamilton Program for Schizophrenia Voices Questionnaire [HPSVQ] [6], Psychotic Symptoms Rating Scale – Auditory Hallucinations [PSYRATS-AH] [7]) and emotional distress (i.e., anxiety and depression) capture distinct concepts or whether there is considerable measurement overlap, given they may contain items capturing similar emotional states. This potential measurement overlap may also arise from how items are understood or interpreted, and whether patients and/or clinicians ascribe certain experiences specifically to voices or to broader emotional distress. Consistent with this, previous research has shown strong associations between anxiety, depression, and aspects of voice hearing including severity, content, loudness, intrusiveness, distress and perceived power [8–13]. This raises the possibility that changes observed in emotional distress following psychological interventions may in part reflect changes in voice-related distress (and vice versa). This highlights the importance of clarifying the distinctiveness of these measures to avoid redundancy, reduce the participant burden of completing multiple measures, increase outcome measurement precision and improve the comparability of findings across research and clinical contexts.
Measures assessing broader outcome domains such as recovery (CHoice of Outcome in Cbt for psychoses [CHOICE] [14,15]) and well-being (Short Warwick-Edinburgh Mental Well-being Scale [SWEMWBS] [16,17]) have also been linked to measures of depression, anxiety and psychosis-related distress, supporting the suggestion that there may be an overlap in measurement. While voice hearing, anxiety, depression, recovery, and wellbeing are distinct concepts, the extent to which existing measures substantially overlap or whether they capture distinct dimensions has not been explored. Therefore, it is unclear whether their combined use provides supplemental information, or whether they may, in practice, be capturing overlapping aspects of the same experience due to similarities in item content. Clarifying this relationship is important for ensuring precise and transparent outcome measurement in both research and routine clinical practice.
The purpose of the study was to examine whether voice hearing measures are distinct from measures of emotional states, recovery and well-being by undertaking factor analysis. Two questions were addressed: 1) Is the HPSVQ distinct from the DASS-21 and SWEMWBS? (Study 1) and 2) Is the PSYRATS-AH distinct from the HADS and the CHOICE short form (CHOICE-SF)? (Study 2). If items from both voice-hearing and broader outcome measures load onto the same factor, this would suggest that measures of voice hearing may not be entirely distinct and instead share substantial variance. If items from the voice hearing measures load uniquely onto their own factors, this would suggest that these measures are capturing distinct concepts.
Study 1: HPSVQ, DASS-21 and SWEMWBS
Methods
Design.
Study 1 used cross-sectional data from existing datasets to determine the factor structure of HPSVQ, DASS-21 and SWEMWBS. The dataset from the Approve study [18] was used to identify underlying factors and the dataset from the Voice Impact Scale study (VIS [19]) was used to verify the factor structure. Data were accessed between 2019–2023. Data were anonymised and therefore individual participants could not be identified.
Participants. Participants (n = 401) were recruited from inpatient and mental health NHS services as part of the Approve study and had to meet the following criteria: a) 18 years of age and above, b) experiencing voice hearing for at least six months irrespective of diagnosis and c) have sufficient English language skills. In the VIS study, participants (n = 398) were also recruited from mental health NHS services. Participants were: a) at least 18 years of age, b) met diagnostic criteria for schizophrenia- or psychosis-spectrum disorders, c) had experienced voices for a minimum of one year, and d) understood written English. The two samples were statistically comparable on all demographic variables except for the percentage of participants from black and mixed ethnic backgrounds, and the percentage of participants with a diagnosis of schizophrenia (Table 1). S1 Table in S1 File shows the descriptive statistics.
Materials.
Hamilton Program for Schizophrenia Voice Questionnaire (HPSVQ [6]). The HPSVQ is a 9-item self-report measure developed to assess the characteristics and negative impact of voice hearing. Each item is rated on a 5-point rating scale (0–4), with higher scores reflecting greater severity or impairment. The HPSVQ has been found to have acceptable internal consistency and test-retest reliability [6].
Depression, Anxiety and Stress Scale – 21 items (DASS-21 [4,20]). The DASS-21 is a self-report measure assessing the severity of emotional states of depression (7 items), anxiety (7 items) and stress (7 items). Items are rated on a 4-point rating scale (0 = did not apply to me, 1 = applied to me to some degree, or some of the time, 2 = applied to me to a considerable degree, or a good part of time, 3 = applied to me very much, or most of the time). The DASS-21 has good internal consistency and good concurrent validity [21].
Short Warwick-Edinburgh Mental Well-being Scale (SWEMWBS [16], WEMWBS [22]). The SWEMWBS is a 7-item self-report scale assessing well-being. Items are rated on a 5-point rating scale (1 = none of the time, 2 = rarely, 3 = some of the time, 4 = often, 5 = all of the time). The scale has shown excellent internal consistency in clinical samples [23]. Items have been reverse-coded, so that higher scores reflect lower well-being and are consistent with the HPSVQ and DASS-21 scores.
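The reverse-coding described above follows the standard Likert transform: for a scale running from 1 to 5, the reversed score is 6 minus the original score. A minimal Python sketch (illustrative only — the original analyses were conducted in R):

```python
def reverse_code(score, low=1, high=5):
    """Reverse-code a Likert item: the scale minimum maps to the maximum
    and vice versa, so higher scores reflect lower well-being."""
    return high + low - score

# SWEMWBS items are rated 1-5, so a rating of 5 ("all of the time")
# becomes 1 after reverse-coding, while the midpoint stays unchanged
print(reverse_code(5))  # 1
print(reverse_code(3))  # 3
```

The same transform (with `low=0, high=4`) would apply to a 0–4 scale such as the HPSVQ.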
Procedure
Eligible service users were referred to the Approve study by clinicians. Those interested in taking part met with a researcher to review the participant information sheet and the consent form. Once written consent was received, service users were asked to complete a questionnaire battery. In the VIS study, service users were identified either by clinicians or through an NHS trust research system. Eligible service users were sent a letter inviting them to take part. Written consent was received for all participants. Consented service users were asked to complete a set of questionnaires. The Approve study (recruitment start date: 01/02/2018, recruitment end date: 31/08/2019) received ethical approval by the London-South-East REC and HRA (18/LO/0046) and the VIS study (recruitment start date: 01/06/2016, recruitment end date: 01/01/2017) received ethical approval by the North West – Lancaster REC and HRA (16/NW/0446).
Statistical analyses
All statistical analyses were carried out in R [24]. Full information maximum likelihood (FIML) was used to handle missing data, as the proportion of missing data was low (Approve 0.61%, VIS 2.15%). Parallel analysis [25] and a scree plot were used to determine the number of factors to retain. Parallel analysis compares the observed eigenvalues of the correlation matrix with those from a random data matrix. Exploratory factor analysis (EFA) was performed on the Approve data to identify the underlying factor structure of the HPSVQ, DASS-21 and SWEMWBS. Oblique rotation was used under the assumption that the factors would be correlated [26]. The ‘psych’ package [27] was used for these analyses. Confirmatory factor analysis (CFA) was performed on the VIS data using the ‘lavaan’ package [28] to verify the factor structure derived from the EFA. The following fit indices were used to examine model fit [29,30]: χ2 to degrees of freedom (χ2/df ratio < 2), Comparative Fit Index (CFI) good fit > 0.95, sufficient fit > .90; Tucker Lewis Index (TLI) good fit > 0.95, sufficient fit > 0.90; Root Mean Square Error of Approximation (RMSEA) close fit < 0.06, adequate fit < 0.08; and Standardized Root Mean Residual (SRMR) good fit < 0.05, sufficient fit < 0.08.
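The parallel analysis procedure described above can be sketched as follows. This is an illustrative numpy version, not the ‘psych’ code used in the study; the number of random matrices (here 100) is an assumption:

```python
import numpy as np

def parallel_analysis(data, n_iter=100, seed=0):
    """Horn's parallel analysis: retain the number of factors whose observed
    eigenvalues exceed the mean eigenvalues obtained from random data of the
    same dimensions (n observations x p variables)."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # Eigenvalues of the observed correlation matrix, descending
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    # Average the eigenvalues of n_iter random-data correlation matrices
    rand = np.zeros(p)
    for _ in range(n_iter):
        x = rng.standard_normal((n, p))
        rand += np.linalg.eigvalsh(np.corrcoef(x, rowvar=False))[::-1]
    rand /= n_iter
    # Count how many observed eigenvalues exceed their random counterparts
    return int(np.sum(obs > rand))
```

Implementations such as `fa.parallel` in the ‘psych’ package refine this basic idea (e.g., factor-based rather than component eigenvalues, resampling options), but the retention principle is the same.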
Results
Exploratory factor analysis.
The Kaiser-Meyer-Olkin statistic indicated that the sample was adequate for factor analysis, KMO = .95, and Bartlett’s test confirmed that correlations were significantly different from zero, χ2(666) = 8481.53, p < .001. The correlation matrix (S2 Table in S1 File) suggested correlations above zero, and none were large enough to suggest multicollinearity or singularity (i.e., r < 0.80). Parallel analysis suggested six factors and the scree plot had points of inflexion at 2, 3, 4, 5 and 6 factors. It was concluded that the six-factor model was reasonable (TLI = 0.953, RMSEA = 0.037, RMSR = 0.02, CFI = 0.967) (S3 Table in S1 File).
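The Bartlett test statistic reported here can be computed directly from the determinant of the correlation matrix. A minimal Python sketch of the standard formula (illustrative, not the original R code; the example matrix is hypothetical):

```python
import numpy as np

def bartlett_sphericity(R, n):
    """Bartlett's test of sphericity: chi-square statistic and degrees of
    freedom for the null hypothesis that the p x p correlation matrix R
    is the identity (i.e., the variables are uncorrelated)."""
    p = R.shape[0]
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) // 2
    return chi2, df

# Hypothetical example: three variables, all pairwise correlations 0.5
R = np.array([[1.0, 0.5, 0.5],
              [0.5, 1.0, 0.5],
              [0.5, 0.5, 1.0]])
chi2, df = bartlett_sphericity(R, 100)
# chi2 is roughly 67 with df = 3: correlations clearly differ from zero
```

For an identity correlation matrix the determinant is 1 and the statistic is 0, so a significant result licenses proceeding with factor analysis.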
The six-factor model accounted for 54.04% of the total variance of the original data (Table 2). Factor 1 (depression) accounted for 13% of the variance and consisted of eight items of the DASS-21, of which seven items related to depression (anhedonia – item 3, inertia – item 5, hopelessness – item 10, dysphoria – item 13, lack of interest – item 16, self-deprecation – item 17, devaluation of life – item 21) and one related to anxiety (situational anxiety – item 9). Factor 2 (voice hearing experience) consisted of eight items of the HPSVQ (frequency – item 1, negative content – item 2, loudness – item 3, duration – item 4, interference with daily activities – item 5, distress – item 6, impact on self-appraisal – item 7, clarity – item 8), and accounted for 11.26% of the variance. Factor 3 (well-being) consisted of all items of the SWEMWBS and explained 8.55% of the variance. Factor 4 (voice characteristics) consisted of five items of the HPSVQ (frequency – item 1, loudness – item 3, duration – item 4, clarity – item 8, obey commands – item 9) and accounted for 4.04% of the variance. Factor 5 (stress) consisted of five items of the DASS-21 relating to stress (difficulties relaxing – items 1 and 12, nervous arousal – item 8, agitation – item 11, impatience – item 14) and 1 item of the SWEMWBS (feeling relaxed – item 3) and accounted for 7.30% of the variance. Factor 6 (states of anxiety and stress) explained 9.9% of the variance and consisted of eight items of the DASS-21, of which six items related to anxiety (autonomic arousal – items 2, 4 and 19, skeletal musculature effects – item 7, subjective anxious affect – items 15 and 20) and two items related to stress (nervous energy – item 8, irritable/over-reactive – item 18). The inter-factor correlations showed small-to-medium correlations between voice-specific (2, 4) and broad factors (1, 3, 5, 6) (S4 Table in S1 File).
Confirmatory factor analysis.
A six-factor model was tested, in which items with cross-loadings were allowed to load onto both factors. In other words, items 1, 3, 4 and 8 of the HPSVQ were allowed to load onto Factors 2 and 4, item 3 of the SWEMWBS was allowed to load onto Factors 3 and 5, and item 8 of the DASS-21 was allowed to load onto Factors 5 and 6. The model had an acceptable fit, χ2/df = 2.04, TLI = 0.914, CFI = 0.922, RMSEA = 0.051 [90% CI 0.047–0.055], SRMR = 0.044.
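As a rough consistency check on the reported fit, the RMSEA point estimate can be recovered from the χ2/df ratio using the standard formula, assuming the VIS sample size (n = 398). A hedged sketch — exact software output depends on the estimator and on whether N or N − 1 is used:

```python
import math

def rmsea(chi2_df_ratio, n):
    """RMSEA point estimate from the chi-square/df ratio and sample size:
    sqrt(max(chi2/df - 1, 0) / (n - 1)). Values below ~0.06 indicate close fit."""
    return math.sqrt(max(chi2_df_ratio - 1.0, 0.0) / (n - 1))

# Reported CFA fit: chi2/df = 2.04 on the VIS data (n = 398)
print(round(rmsea(2.04, 398), 3))  # 0.051, consistent with the reported RMSEA
```

A model fitting no worse than expected by chance (χ2/df ≤ 1) yields an RMSEA of exactly 0 under this formula.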
Study 2: PSYRATS-AH, HADS and CHOICE-SF
Methods
Design.
Study 2 used baseline data from two trials, the Mindfulness for Voices trial (M4V [31]) and the Guided Self-help Cognitive Behavioural Intervention for Voices trial (GiVE2, [32]). Datasets were merged to identify the underlying structure of the PSYRATS-AH, HADS and CHOICE-SF. Data were accessed between 2019–2023. Data were anonymised and therefore individual participants could not be identified.
Participants.
A total of 187 participants experiencing distressing voices were recruited from mental health NHS services. All participants met diagnostic criteria for Schizophrenia-spectrum or Psychotic disorders (Table 3 and S5 Table in S1 File).
Materials.
Psychotic Symptom Rating Scales – Auditory Hallucinations (PSYRATS-AH [7]). The PSYRATS-AH is a structured interview assessing the severity of a variety of dimensions of the voice hearing experience. It consists of 11 items and each one is rated on a scale of 0 (absent) to 4 (severe). The PSYRATS-AH has been found to have good to excellent inter-rater reliability [7,33] and good internal consistency [33] in schizophrenia and first-episode psychosis populations.
Hospital Anxiety Depression Scale (HADS [5]). The HADS is a 14-item self-report questionnaire used to assess anxiety (7 items) and depression (7 items) symptoms. Each item is rated on a 4-point rating scale, with higher scores reflecting more severe symptoms. The HADS has shown good internal consistency in a schizophrenia population [34].
CHoice of Outcome in Cbt for psychoses – Short Form (CHOICE [14,15]). The CHOICE-SF is a self-report 11-item questionnaire measuring service-user defined recovery in relation to Cognitive Behavioural Therapy for psychosis (CBTp). The CHOICE has good test-retest reliability and internal consistency [14]. The short form has been validated in a transdiagnostic sample of voice-hearers [15]. All items have been reverse-coded, such that higher scores on the scale indicate lower recovery.
Procedure
M4V participants were recruited from mental health services in Sussex and Hampshire, and GiVE2 participants were recruited from sites within Sussex Partnership NHS Foundation Trust and Pennine Care NHS Foundation Trust. In both studies, eligible service users were referred by clinicians. Following written informed consent, participants were asked to complete a list of measures. M4V and GiVE2 were trials of Group Person-Based Cognitive Therapy (PBCT) and Guided self-help cognitive Behavioural intervention for VoicEs (GiVE), respectively. The M4V study (recruitment start date: 03/10/2011, recruitment end date: 02/10/2013) received ethical approval by the Brighton and Sussex REC (11/L0/1330). The GiVE2 study (recruitment start date: 02/01/2019, recruitment end date: 31/08/2020) received ethical approval by the London – Surrey REC (8/LO/2091).
Statistical analysis
Missing values (0.23%) were handled using FIML. Parallel analysis [25] and a scree plot were used to determine the number of factors to retain. Exploratory factor analysis (EFA) with oblique rotation was performed to investigate the underlying factor structure of the PSYRATS-AH, HADS and CHOICE-SF. Model fit was determined in the same way as for Study 1.
Results
The Kaiser-Meyer-Olkin statistic indicated that the sample was adequate for factor analysis, KMO = 0.86. Bartlett’s test confirmed that correlations were significantly different from zero, χ2(630) = 2564.94, p < .001. Items did not strongly correlate with each other (S6 Table in S1 File). Parallel analysis suggested five factors to be retained, whilst the scree plot indicated points of inflexion at 2, 3, 4 and 5 factors. The five-factor model was the best fit to the data (TLI = 0.915, RMSEA = 0.037, RMSR = 0.04, CFI = 0.939) (S7 Table in S1 File).
The model accounted for 39.99% of the total variance (Table 4). Factor 1 (psychological recovery) accounted for 15.52% of the variance and consisted of seven items of the HADS relating to depression (anhedonia – items 3, 6, 10 and 14, slowed down – item 2, laugh – item 7, cheerful – item 11) and two relating to anxiety (tension – item 1, feeling relaxed – item 13), and seven items of the CHOICE-SF (ability to approach problems – item 1, self-confidence – item 2, positive social relating – item 3, ability to question way of thinking – item 4, dealing with stresses – item 5, peace of mind – item 8, positive ways of thinking – item 11). Factor 2 (voice impact and phenomenology) consisted of six items of the PSYRATS-AH (duration – item 2, loudness – item 4, amount of negative content – item 6, degree of negative content – item 7, amount of distress – item 8, intensity of distress – item 9) and accounted for 7.47% of the variance. Factor 3 (anxiety) consisted of six items of the HADS (feelings of tension – item 1, fright – items 4 and 5, restlessness – item 8, worry – item 9, panic – item 12) and accounted for 6.53% of variance. Factor 4 (cognitive processing) consisted of three items of the CHOICE (facing upsetting thoughts – item 7, understanding self – item 9, understanding experiences – item 10) and accounted for 6.70% of the variance. Factor 5 (voice frequency) accounted for 3.78% of the variance and consisted of four items (frequency – item 1, duration – item 2, disruption – item 10, controllability – item 11). Items 3 (location) and 5 (beliefs re:origin) of the PSYRATS-AH did not significantly load onto any factor, although there was some indication that item 3 loaded onto Factor 5 and item 5 loaded onto Factor 3. The inter-factor correlations demonstrated small-to-medium correlations between voice-specific (2, 5) and broad factors (1, 3, 4) (S8 Table in S1 File).
Discussion
Main findings
The current studies aimed to investigate whether voice hearing measures are distinct from broader outcome measures. In Study 1, the factors generated did not contain items from both the HPSVQ and the DASS-21 or from both the HPSVQ and the SWEMWBS. Similarly, in Study 2, the factors generated did not contain items from both the PSYRATS-AH and the HADS or from both the PSYRATS-AH and CHOICE-SF. The HPSVQ and the PSYRATS-AH each showed a two-factor solution. These findings highlight the distinctiveness of voice hearing measures from measures of emotional states, well-being and recovery. While previous research was suggestive of links between voice hearing and broader measures of outcome [2,11–14,17], the current findings suggest these measures are empirically distinct and do not reflect substantial measurement overlap. However, it is worth noting that across both samples, self-rated measures tended to show stronger intercorrelations with each other in comparison to the clinician-rated PSYRATS-AH. This suggests that the rating method may have influenced the observed relationships between measures.
Within Study 1, most of the items of the HPSVQ loaded onto Factor 2. However, the items measuring frequency, loudness, duration, clarity and obey commands also loaded onto Factor 4, forming a factor that resembles the ‘physical characteristics’ sub-scale from previous psychometric evaluations. These findings suggest that, although the ‘negative impact’ and the ‘physical characteristics’ sub-scales can overlap, they are, to some extent, separate and distinct features of the voice hearing experience. This is in line with previous research, showing that the HPSVQ provides a two-factor solution: emotional and physical characteristics [35,36]. Contrary to previous research demonstrating a three- or four-factor structure for the PSYRATS-AH [7,37–39], our study identified a two-factor solution; Factor 2 (voice impact and phenomenology) consisted of items relating to voice-related distress and the ‘physical’ characteristics of the voices and Factor 5 (voice frequency) consisted of items relating to both the ‘cognitive’ and ‘physical’ characteristics of the voices. The amount and degree of negative content and the amount and intensity of distress loaded onto Factor 2 and the frequency and duration of voices loaded onto Factor 5, consistent with previous studies demonstrating that these items are grouped under the same factors, whilst the remaining items loaded onto various factors across studies [7,37–41].
Differences in findings within the current study could be attributed to the small sample size in Study 2; therefore, results should be interpreted with caution. Several items such as items 3 (location) and 5 (beliefs re:origin) of the PSYRATS-AH did not significantly load onto any factor (the general rule of thumb is 0.30 but this can vary depending on sample size [42]). Differences in findings could also be attributed to participant characteristics and a lack of understanding of the dimensions of the voice hearing experience [37,38]. Items from the HADS and the CHOICE-SF appeared to load onto the same factor (Factor 1, psychological recovery), suggesting that subjective recovery as measured by the CHOICE-SF largely captures emotional distress.
The findings provide empirical support for the view that the ‘physical’ characteristics of voices and the emotional responses they can evoke are related but separable aspects of the voice hearing experience, consistent with cognitive models of voice hearing [43,44]. They also support the continued use of measures of voice hearing and of broader measures of emotional states, wellbeing and recovery in both research and routine clinical practice, as they appear to capture unique and supplemental information that can assist with obtaining an extensive and nuanced understanding of people’s experiences. In line with the single-symptom approach, outcomes should match the target; our findings suggest that measures of anxiety and depression index general emotional distress and should not be used as proxies for voice-related distress. Instead, voice hearing measures should be used to assess associated distress, with measures of anxiety, depression, recovery and wellbeing used alongside to capture emotional states and broader life outcomes.
Previous research suggests that voices may become less emotionally salient over time [45,46]. In both studies, the mean participant age was 40 years, and the mean voice hearing duration was 17 years. Therefore, it is possible that our findings were influenced by voice hearing duration. For instance, some dimensions of voice hearing may show greater overlap with emotional distress earlier on in the course, with this overlap attenuating at longer durations. Our analyses focused on measurement structure using EFA in cross-sectional data, thus within-person change cannot be inferred. Future research (ideally longitudinal and sufficiently powered) should model duration continuously and evaluate changes in shared variance over time, and/or examine whether the factor structures replicate in samples with shorter voice hearing duration. Such evidence would help identify when, and for whom, streamlining outcome measures is appropriate.
Our results also highlight several other implications. Firstly, it may be useful to clearly specify the focus of questions, particularly for self-reported measures, as items can be interpreted differently. For example, instructions could indicate whether distress ratings should reflect the impact of voices specifically or mood more generally. Secondly, voice-hearing and broad outcome domain scores are best interpreted together rather than in isolation, as they can help to contextualise and make sense of people’s experience, including how these may influence each other. For example, previous research has shown cross-sectional associations between emotional distress, voice-related beliefs, and voice-related distress [47]. While causality cannot be established, it is plausible that mood and anxiety symptoms may maintain and/or exacerbate voice-related beliefs and associated distress. By considering the broader context, this can help clinicians and researchers to better understand people’s experiences and tailor interventions accordingly.
Strengths and limitations
Analyses considered whether voice hearing measures were distinct from broad outcome measures using different instruments and datasets. In Study 1, two large samples were used with relatively low missing data to identify and to confirm the underlying factor structure, which reduced sampling error and led to more stable factors. However, despite the Kaiser-Meyer-Olkin statistic indicating that the sample in Study 2 was suitable for factor analysis, the sample size was small and consequently this may have affected the factor structure by increasing the likelihood of false negatives. A large sample size is needed to undertake EFA to ensure that there is adequate statistical power [48–50]. It was also not possible to confirm the factor structure in Study 2 as there were insufficient data to undertake CFA. A further limitation concerns demographic differences between the samples (e.g., the Approve sample was transdiagnostic, whereas participants in the VIS trial met criteria for schizophrenia- or psychotic-spectrum disorders).
Conclusions
This study found that voice hearing measures were distinct from measures of emotional states, well-being and recovery, suggesting that measures of voice hearing and measures of broader outcomes may not be measuring similar concepts. This confirms the psychometric properties of the PSYRATS-AH and HPSVQ by establishing their distinctiveness from broad outcome measures and encourages their future use as measures of voice hearing experiences.
References
- 1. Lincoln TM, Peters E. A systematic review and discussion of symptom specific cognitive behavioural approaches to delusions and hallucinations. Schizophr Res. 2019;203:66–79. pmid:29352708
- 2. Thomas N, Hayward M, Peters E, van der Gaag M, Bentall RP, Jenner J, et al. Psychological therapies for auditory hallucinations (voices): current status and key directions for future research. Schizophr Bull. 2014;40 Suppl 4(Suppl 4):S202-12. pmid:24936081
- 3. Loizou S, Fowler D, Hayward M. Measuring the longitudinal course of voice hearing under psychological interventions: A systematic review. Clin Psychol Rev. 2022;97:102191. pmid:35995024
- 4. Henry JD, Crawford JR. The short-form version of the Depression Anxiety Stress Scales (DASS-21): construct validity and normative data in a large non-clinical sample. Br J Clin Psychol. 2005;44(Pt 2):227–39. pmid:16004657
- 5. Zigmond AS, Snaith RP. The hospital anxiety and depression scale. Acta Psychiatr Scand. 1983;67(6):361–70. pmid:6880820
- 6. Van Lieshout RJ, Goldberg JO. Quantifying self-reports of auditory verbal hallucinations in persons with psychosis. Canadian Journal of Behavioural Science / Revue canadienne des sciences du comportement. 2007;39(1):73–7.
- 7. Haddock G, McCarron J, Tarrier N, Faragher EB. Scales to measure dimensions of hallucinations and delusions: the psychotic symptom rating scales (PSYRATS). Psychol Med. 1999;29(4):879–89. pmid:10473315
- 8. Birchwood M, Gilbert P, Gilbert J, Trower P, Meaden A, Hay J, et al. Interpersonal and role-related schema influence the relationship with the dominant “voice” in schizophrenia: a comparison of three models. Psychol Med. 2004;34(8):1571–80. pmid:15724887
- 9. Delespaul P, deVries M, van Os J. Determinants of occurrence and recovery from hallucinations in daily life. Soc Psychiatry Psychiatr Epidemiol. 2002;37(3):97–104. pmid:11990012
- 10. Hartley S, Barrowclough C, Haddock G. Anxiety and depression in psychosis: a systematic review of associations with positive psychotic symptoms. Acta Psychiatr Scand. 2013;128(5):327–46. pmid:23379898
- 11. Lucas S, Wade T. An Examination of the Power of the Voices in Predicting the Mental State of People Experiencing Psychosis. Behav change. 2001;18(1):51–7.
- 12. Smith B, Fowler DG, Freeman D, Bebbington P, Bashforth H, Garety P, et al. Emotion and psychosis: links between depression, self-esteem, negative schematic beliefs and delusions and hallucinations. Schizophr Res. 2006;86(1–3):181–8. pmid:16857346
- 13. Soppitt W, Max Birchwood R. Depression, beliefs, voice content and topography: A cross-sectional study of schizophrenic patients with auditory verbal hallucinations. Journal of Mental Health. 1997;6(5):525–32.
- 14. Greenwood KE, Sweeney A, Williams S, Garety P, Kuipers E, Scott J, et al. CHoice of Outcome In Cbt for psychosEs (CHOICE): the development of a new service user-led outcome measure of CBT for psychosis. Schizophr Bull. 2010;36(1):126–35. pmid:19880823
- 15. Webb R, Bartl G, James B, Skan R, Peters E, Jones A-M, et al. Exploring the Development, Validity, and Utility of the Short-Form Version of the CHoice of Outcome In Cbt for PsychosEs: A Patient-Reported Outcome Measure of Psychological Recovery. Schizophr Bull. 2021;47(3):653–61. pmid:33215190
- 16. Haver A, Akerjordet K, Caputi P, Furunes T, Magee C. Measuring mental well-being: A validation of the Short Warwick-Edinburgh Mental Well-Being Scale in Norwegian and Swedish. Scand J Public Health. 2015;43(7):721–7. pmid:26041133
- 17. Vaingankar JA, Abdin E, Chong SA, Sambasivam R, Seow E, Jeyagurunathan A, et al. Psychometric properties of the short Warwick Edinburgh mental well-being scale (SWEMWBS) in service users with schizophrenia, depression and anxiety spectrum disorders. Health Qual Life Outcomes. 2017;15(1):153. pmid:28764770
- 18. Hayward M, Schlier B, Strauss C, Rammou A, Lincoln T. Construction and validation of the Approve questionnaires - Measures of relating to voices and other people. Schizophr Res. 2020;220:254–60. pmid:32199714
- 19. Strauss C, Atterbury K, Hugdahl K, Hayward M, Jones N, Longden L, et al. Voice impact scale (VIS): Evaluating the psychometric properties of an expert-developed self-report measure for capturing outcomes from psychological therapy for hearing voices. 2024.
- 20. Lovibond PF, Lovibond SH. The structure of negative emotional states: comparison of the Depression Anxiety Stress Scales (DASS) with the Beck Depression and Anxiety Inventories. Behav Res Ther. 1995;33(3):335–43. pmid:7726811
- 21. Antony MM, Bieling PJ, Cox BJ, Enns MW, Swinson RP. Psychometric properties of the 42-item and 21-item versions of the Depression Anxiety Stress Scales in clinical groups and a community sample. Psychological Assessment. 1998;10(2):176–81.
- 22. Stewart-Brown S, Tennant A, Tennant R, Platt S, Parkinson J, Weich S. Internal construct validity of the Warwick-Edinburgh Mental Well-being Scale (WEMWBS): a Rasch analysis using data from the Scottish Health Education Population Survey. Health Qual Life Outcomes. 2009;7:15. pmid:19228398
- 23. Shah N, Cader M, Andrews WP, Wijesekera D, Stewart-Brown SL. Responsiveness of the Short Warwick Edinburgh Mental Well-Being Scale (SWEMWBS): evaluation of a clinical sample. Health Qual Life Outcomes. 2018;16(1):239. pmid:30577856
- 24. R Core Team. R: A Language and Environment for Statistical Computing. 2022. Available from: https://www.R-project.org
- 25. Horn JL. A rationale and test for the number of factors in factor analysis. Psychometrika. 1965;30:179–85. pmid:14306381
- 26. Child D. The essentials of factor analysis. Cassell Educational. 1990.
- 27. Revelle W. How to use the psych package for mediation/moderation/regression analysis. 2021. Available from: https://personality-project.org/r/tutorials/HowTo/mediation.pdf
- 28. Rosseel Y. lavaan: An R Package for Structural Equation Modeling. J Stat Softw. 2012;48(2):1–36.
- 29. Byrne BM. Structural Equation Modeling with LISREL, PRELIS and SIMPLIS: Basic Concepts, Applications and Programming. Lawrence Erlbaum Associates. 1998.
- 30. Hu L, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal. 1999;6(1):1–55.
- 31. Chadwick P, Strauss C, Jones A-M, Kingdon D, Ellett L, Dannahy L, et al. Group mindfulness-based intervention for distressing voices: A pragmatic randomised controlled trial. Schizophr Res. 2016;175(1–3):168–73. pmid:27146475
- 32. Hayward M, Berry K, Bremner S, Jones A-M, Robertson S, Cavanagh K, et al. Increasing access to cognitive–behavioural therapy for patients with psychosis by evaluating the feasibility of a randomised controlled trial of brief, targeted cognitive–behavioural therapy for distressing voices delivered by assistant psychologists: the GiVE2 trial. BJPsych Open. 2021;7(5).
- 33. Drake R, Haddock G, Tarrier N, Bentall R, Lewis S. The Psychotic Symptom Rating Scales (PSYRATS): their usefulness and properties in first episode psychosis. Schizophr Res. 2007;89(1–3):119–22. pmid:17095193
- 34. Allan R, Martin CR. Are there gender differences in affective disturbance in schizophrenia?. Clinical Effectiveness in Nursing. 2004;8(3–4):140–2.
- 35. Berry C, Newcombe H, Strauss C, Rammou A, Schlier B, Lincoln T, et al. Validation of the Hamilton Program for Schizophrenia Voices Questionnaire: Associations with emotional distress and wellbeing, and invariance across diagnosis and sex. Schizophr Res. 2021;228:336–43. pmid:33540145
- 36. Kim SH, Jung HY, Hwang SS, Chang JS, Kim Y, Ahn YM, et al. The usefulness of a self-report questionnaire measuring auditory verbal hallucinations. Prog Neuropsychopharmacol Biol Psychiatry. 2010;34(6):968–73. pmid:20472012
- 37. Steel C, Garety PA, Freeman D, Craig E, Kuipers E, Bebbington P, et al. The multidimensional measurement of the positive symptoms of psychosis. Int J Methods Psychiatr Res. 2007;16(2):88–96. pmid:17623388
- 38. Wahab S, Zakaria MN, Sidek D, Abdul Rahman AH, Shah SA, Abdul Wahab NA. Evaluation of auditory hallucinations in patients with schizophrenia: A validation study of the Malay version of Psychotic Symptom Rating Scales (PSYRATS). Psychiatry Res. 2015;228(3):462–7. pmid:26142835
- 39. Woodward TS, Jung K, Hwang H, Yin J, Taylor L, Menon M, et al. Symptom dimensions of the psychotic symptom rating scales in psychosis: a multisite study. Schizophr Bull. 2014;40 Suppl 4(Suppl 4):S265-74. pmid:24936086
- 40. Favrod J, Rexhaj S, Ferrari P, Bardy S, Hayoz C, Morandi S, et al. French version validation of the psychotic symptom rating scales (PSYRATS) for outpatients with persistent psychotic symptoms. BMC Psychiatry. 2012;12:161. pmid:23020603
- 41. Kronmüller K-T, von Bock A, Grupe S, Büche L, Gentner NC, Rückl S, et al. Psychometric evaluation of the Psychotic Symptom Rating Scales. Compr Psychiatry. 2011;52(1):102–8. pmid:21220071
- 42. Albano T. Factor Analysis. In: Introduction to Educational and Psychological Measurement Using R. 2010. Available from: https://www.thetaminusb.com/intro-measurement-r
- 43. Chadwick P, Birchwood M. The omnipotence of voices. A cognitive approach to auditory hallucinations. Br J Psychiatry. 1994;164(2):190–201. pmid:8173822
- 44. Birchwood M, Chadwick P. The omnipotence of voices: testing the validity of a cognitive model. Psychol Med. 1997;27(6):1345–53. pmid:9403906
- 45. Cohen CI, Izediuno I, Yadack AM, Ghosh B, Garrett M. Characteristics of auditory hallucinations and associated factors in older adults with schizophrenia. Am J Geriatr Psychiatry. 2014;22(5):442–9. pmid:24021224
- 46. Hartigan N, McCarthy-Jones S, Hayward M. Hear today, not gone tomorrow? An exploratory longitudinal study of auditory verbal hallucinations (hearing voices). Behav Cogn Psychother. 2014;42(1):117–23. pmid:23866079
- 47. Tsang A, Bucci S, Branitsky A, Kaptan S, Rafiq S, Wong S, et al. The relationship between appraisals of voices (auditory verbal hallucinations) and distress in voice-hearers with schizophrenia-spectrum diagnoses: A meta-analytic review. Schizophr Res. 2021;230:38–47. pmid:33667857
- 48. Comrey AL, Lee HB. A First Course in Factor Analysis. 2nd ed. Lawrence Erlbaum. 1992.
- 49. de Winter JCF, Dodou D, Wieringa PA. Exploratory Factor Analysis With Small Sample Sizes. Multivariate Behav Res. 2009;44(2):147–81. pmid:26754265
- 50. Tabachnick BG, Fidell LS. Using multivariate statistics. 5th ed. Allyn & Bacon/Pearson Education. 2007.