
Speaker differences in volitional voice modulation reflected in empathy and functional activation patterns

  • Stella Guldner,

    Roles Conceptualization, Formal analysis, Investigation, Methodology, Software, Visualization, Writing – original draft

    Affiliations Department of Child and Adolescent Psychiatry and Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Germany, Institute of Cognitive and Clinical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany

  • Frauke Nees,

    Roles Conceptualization, Supervision, Writing – review & editing

    Affiliation Institute of Medical Psychology and Medical Sociology, University Medical Center Schleswig- Holstein, Kiel University, Kiel, Germany

  • Herta Flor,

    Roles Conceptualization, Supervision, Writing – review & editing

    Affiliation Institute of Cognitive and Clinical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany

  • Carolyn McGettigan

    Roles Conceptualization, Funding acquisition, Methodology, Project administration, Resources, Supervision, Writing – review & editing

    c.mcgettigan@ucl.ac.uk

    Affiliation Department of Speech, Hearing and Phonetic Sciences, University College London, London, United Kingdom

Abstract

How we use our voice is central to how we express information about ourselves to others. A speaker’s dispositional social reactivity might contribute to how well they can volitionally modulate their voice to manage listener impressions. Here, we investigated individual differences in social vocal control performance in relation to social reactivity indices and underlying neural mechanisms. Twenty-four right-handed speakers of British English (twenty females) modulated their voice to communicate social traits (sounding likeable, hostile, intelligent) while undergoing a rapid-sparse fMRI protocol. Performance in social vocal control was operationalized as the specificity with which speakers evoked trait percepts in an independent group of naïve listeners. Speakers’ empathy levels, as well as psychopathic and Machiavellian traits, were assessed using self-report questionnaires. The ability to express specific social traits in voices was associated with activation in brain regions involved in vocal motor and social processing (left posterior TPJ, bilateral SMG, premotor cortex). While dispositional cognitive empathy predicted general vocal performance, self-reported levels of Machiavellianism were specifically related to better performance in expressing likeability. These findings highlight the psychological and neural mechanisms involved in strategic social voice modulation, suggesting differential processing in a combined network of vocal control and social processing streams.

Introduction

The human voice is a predominantly social signal and the primary channel for social communication. Listeners form impressions about a speaker rapidly [1] and reliably [2], and these impressions guide subsequent behaviour towards the speaker [3–7]. Speakers can influence listener impressions of themselves by modulating their voice, for instance to sound more dominant, intelligent, or attractive [8,9]. Using the voice effectively to achieve favourable trait judgments from listeners might be an important tool in social interactions. Nevertheless, speakers differ considerably in their ability to modulate their voice on demand [10]. So, what speaker characteristics influence volitional voice modulation?

One important route might be a speaker’s ability to empathize with an interlocutor, i.e., to understand their feelings and thoughts in order to adjust their own behaviour accordingly. On the listener’s side, higher levels of empathy seem to support the decoding of nonverbal social information in vocalizations, such as authenticity [11], irony [12,13], or emotions [14]. However, speakers also spontaneously modulate their voice to encode information fitting the listener’s needs [15,16]. As such, empathy might also support the speaker in strategic voice modulation, helping them to modulate the voice in accordance with the social context. Empathy includes both cognitive and affective aspects. Cognitive empathy in particular might be central to volitional voice changes: it is thought to be under voluntary control [17] and closely related to theory of mind (ToM) [18]. In populations with specific impairments in cognitive empathy, vocal behaviour is often rigid or emotionally flat [19–21]. On the other hand, people with deficits in affective social reactivity but often preserved cognitive empathy, such as those high in Machiavellianism or psychopathy [22], show both a strategic use of linguistic vocal behaviour in social interactions [23] and proficiency in the volitional facial expression of affect [24]. This might be specific to volitional behaviour, as other work shows that psychopathic traits are associated with less spontaneous prosodic modulation of affective words [21]. Machiavellianism is characterized by a strategic, self-serving, manipulative interpersonal style, whereas psychopathy is associated with a highly unemotional, callous, and impulsive style [25,26]. Thus, the ability to coolly apprehend others’ feelings and thoughts might be a route through which social opportunists manage others’ impressions.

On the neural level, voluntary voice changes are achieved by a network of regions involved in vocomotor control – the vocomotor network (VMN) – that includes left inferior frontal gyrus (IFG), anterior cingulate cortex (ACC), supplementary motor cortex (SMA), supramarginal gyrus (SMG), superior temporal gyrus (STG), insula, basal ganglia (BG), and cerebellum (reviewed by [27,28]). Together with sensorimotor cortices, activity patterns in these areas are associated with task performance in the imitation or production of novel speech sounds [29–32], pitch control in singers (particularly sensorimotor cortices [33]), and voice training in singers [34]. Very few studies have investigated inter-individual differences in social voice modulation on the neurophysiological level. One study found that the ability to volitionally modulate the voice to express sad or happy emotions was associated with increased functional activation of right IFG (triangular part), right middle frontal gyrus, and left superior frontal gyrus, which might be related to motor planning for prosody production [14]. Together, these studies suggest that differences in functional activation in the VMN might underlie volitional voice modulation efficacy. Beyond the VMN, precuneus, medial prefrontal cortex (mPFC), and superior temporal sulcus (STS)/temporo-parietal junction (TPJ) support the volitional expression of explicit social traits in the voice [9], as well as explicit [35] and covert [36] vocal identity expression. These areas are often referred to as the social brain network, since they are reliably activated together during domain-general social cognitive processing tasks [37]. Interestingly, activation in STS/TPJ and bilateral insulae during prosody perception was negatively associated with psychopathic traits, while activation in left IFG was associated with listeners’ affective empathy levels [14]. However, whether such differences in activation in these regions also apply to vocal production remains to be tested.

Importantly, most previous studies have either used combined measures of empathy [12,13], focussed on affective empathy components [11,14], or studied individual differences in empathy in relation to voice perception rather than production. Moreover, these studies mostly concern emotional, not social, voice changes. It is therefore unclear how individual differences in social reactivity might contribute to the ability to volitionally express social information through the voice, and which neurophysiological mechanisms might support this ability. Here, we addressed this gap and hypothesized, first, that performance in social vocal control would be positively associated with empathy (particularly cognitive empathy) and socially opportunistic traits (Machiavellianism and psychopathic traits). Given that these measures are interrelated, we tested this in the framework of multiple regression models to determine each measure’s unique contribution. Second, we probed the underlying neurophysiological networks associated with individual differences in social vocal control, hypothesizing a positive association between task performance and activation in areas associated with domain-general social processing (mPFC, STS/TPJ, precuneus). Finally, we investigated whether speaker traits – regardless of their relationship to behavioural performance – predicted underlying differences in task-relevant functional activations during social voice modulation.

Methods

Here we present novel individual-differences analyses using functional MRI and speech production data previously described with group-level analyses [9]. Data relating to individual social reactivity and trait scales are reported and analysed here for the first time.

Speakers

Twenty-four right-handed, native British English speakers (Mage = 21.04, SD = 3.26, 3 male) participated in this experiment. Data collection occurred from 31.10.2017 to 13.12.2017. All speakers had normal or corrected-to-normal vision and reported no history of hearing, language, neurological, or psychiatric disorders. They were recruited from the participant pool at the Department of Psychology at Royal Holloway, University of London, and received 30 GBP as reimbursement. All speakers provided their full informed written consent prior to participation, in accordance with the Declaration of Helsinki (1991). This study was approved by the research ethics committee of Royal Holloway, University of London (587-2017-10-24-14-50-UXJT010).

Assessment of empathy, Machiavellianism and psychopathy

To measure individual differences in trait empathy, speakers completed the Questionnaire of Cognitive and Affective Empathy (QCAE) [38]. The QCAE is a 31-item questionnaire measuring cognitive and affective aspects of empathy. Speakers indicate their level of agreement with each item from 1 (strongly disagree) to 4 (strongly agree). Sum scores range from 19 to 76 for cognitive empathy and from 12 to 48 for affective empathy. To assess traits of psychopathy and Machiavellianism, we used the Short Dark Triad (SD3) [39], a 27-item self-report questionnaire. We focused on dispositional psychopathy and Machiavellianism, as these constructs are highly associated with socially manipulative behaviours. Responses indicate agreement with each item on a 5-point Likert scale, with anchors at 1 (strongly disagree) and 5 (strongly agree). Sum scores for each scale range from 9 to 45.

Social vocal control task

The main experimental task consisted of a social vocal control task in which speakers were asked to express social and non-social traits in the voice. Social traits included vocally expressed intelligence, likeability, and hostility. Modulating the voice to convey a large body size, as well as speaking in non-modulated neutral voice, were implemented as control conditions (for detailed instructions on trait expressions see [9]). Exemplars consisted of four two-syllable, five-letter pseudowords with a C-V-C-V-C (C = consonant, V = vowel) phonotactic structure (belam, lagod, minad, and namil; [40]).

Design and procedure

Speakers filled out the self-report questionnaires online prior to the fMRI scanning session. On the scanning day, speakers first received brief training on the vocal control task before completing it in the MRI scanner over the course of 4 runs. Each run presented the 5 vocal modulation conditions (neutral/large/hostile/likeable/intelligent) paired with the 4 exemplars in randomized order; each condition–exemplar combination appeared in 3 Go trials and 3 No-Go trials. Go and No-Go trials were presented in randomized order. In total, each run included 150 trials, of which 30 were rest trials.

Both Go and No-Go trials started with a two-second presentation of the target trait and a fixation cross. During Go trials, the fixation cross was then replaced by an exemplar for 1.5 seconds. During this 1.5-second silent gap, speakers were asked to vocalize the exemplar while expressing the target trait. Recordings were made with an in-scanner MR-compatible microphone (Opto-acoustics, FOMRI-III). During No-Go trials, the fixation cross remained on the screen for the duration of the silent gap and no exemplar was presented. Contrasting Go and No-Go trials therefore allowed filtering out neural activation specifically related to ongoing voice production. Visual cues were projected onto a screen at the back of the scanner bore and viewed via a mirror on the head coil. The total scanning time was approximately 50 minutes.

Speakers were re-invited approximately 1 week later to rate their own in-scanner recordings. In a soundproof booth, they rated the trait intensity of each of their voice recordings, blocked by the expressed target trait, on a 7-point Likert scale ranging from 1 (not at all) to 7 (very much), listening over Sennheiser HD306 headphones. For each modulation condition and each speaker, we selected one recording that had received the speaker’s maximal rating. This was done to ensure that independent evaluations of performance would be based on sounds where the speakers had felt confident in their expression of the target trait.

Naïve ratings

For each speaker, the highest-rated recording for each trait category was intensity-normalized across speakers [9] and then presented to 24 naïve listeners. All raters (Mage = 19.92, SD = 1.47, 4 male) were recruited at the Department of Psychology at Royal Holloway, University of London, gave their informed consent before participation, and received 5 GBP as compensation. To reduce the experimental duration, each listener heard the recordings of a subset of 10 speakers, while ensuring that each speaker was heard by at least 10 different listeners. For each speaker, we included one recording of each vocal modulation condition (trait) and one recording of their neutral voice. Each listener rated every recording of each speaker on all trait scales in separate blocks. Each block consisted of one social trait rating for all recordings in randomized order (for details see [9]). Listeners indicated their responses on 7-point Likert scales, with anchors at 1 (not at all) and 7 (very), to rate how strongly a voice expressed a given trait. Block order was randomized across raters. Raters heard the recordings over Sennheiser headphones (Sennheiser U.K. Ltd, Marlow, UK) in a soundproof booth (see Fig 1). All in-scanner stimuli, as well as stimuli for the rating experiments, were presented using the Psychophysics Toolbox [41] in MATLAB (2014a, The MathWorks, Natick, MA).

Individual differences in social vocal control.

The ratings data were analysed in R (http://www.R-project.org/). To assess how successfully the voice modulations expressed social traits, we calculated the average change in naïve ratings for modulated voices relative to the neutral voice samples, for each speaker and each intended trait (i.e., comparing “intelligent” ratings for the neutral and “intelligent” trials), allowing us to compute a performance index for each trait for each speaker (henceforth ∆-[trait]).
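The original analyses were conducted in R; as a minimal illustration, the ∆-[trait] index can be sketched in Python as follows (the listener ratings below are invented for the example):

```python
import numpy as np

def trait_delta(modulated_ratings, neutral_ratings):
    """Per-speaker, per-trait performance index (Delta-[trait]): the mean
    naive-listener rating of a speaker's modulated recording minus the mean
    rating of that speaker's neutral recording on the same trait scale."""
    return float(np.mean(modulated_ratings) - np.mean(neutral_ratings))

# Hypothetical "intelligent" ratings from ten listeners
modulated = [5, 6, 4, 5, 6, 5, 4, 6, 5, 5]
neutral = [4, 4, 3, 4, 5, 4, 3, 4, 4, 4]
delta_intelligence = trait_delta(modulated, neutral)  # positive = successful modulation
```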

Next, we computed representational similarity matrices (RSMs) based on the pairwise Pearson correlation coefficients of naïve ratings between pairs of the three social traits (likeability, hostility, and intelligence) for each speaker. These matrices allow us to characterise the similarity in perceptual representations between different stimulus categories [42]; see, e.g., [43,44]. In this study, we used this analysis to explore how specifically voice modulations were expressed with respect to the trait percept they induced (reflected in the naïve ratings). Each cell of a speaker’s RSM contained the pairwise correlation of ten listener ratings for the respective two traits, that is, ten ratings (one from each listener) for each of the three trait recordings. From these matrices, we computed a general performance parameter capturing the specificity of each speaker’s social voice modulations: the social voice modulation index. This parameter was estimated by calculating the Euclidean distance (ED) of each speaker’s RSM from a theoretical matrix with maximized discrimination between trait ratings (see Figs 2 and S1). Based on the social space dimensions shown elsewhere [1,9], we assumed that intelligence ratings would be uncorrelated (r = 0) with both likeable and hostile voice modulations, and that ratings for hostile and likeable voices would be anti-correlated (r = −1).

Fig 2. A. Theoretical Representational Similarity Matrix (RSM), showing expected correlations between evoked trait ratings. B. Example RSMs for two speakers, who showed high (left) and low (right) specificity between evoked trait ratings.

https://doi.org/10.1371/journal.pone.0325207.g002
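To make the index concrete, here is a minimal Python sketch of the specificity computation (the original analysis used R; the trait labels and theoretical correlations follow the description above, and the rating vectors in the test case are invented):

```python
import numpy as np

# Theoretical off-diagonal correlations: likeable vs. hostile anti-correlated,
# intelligent uncorrelated with both (as assumed from the social space dimensions).
THEORETICAL = {("likeable", "hostile"): -1.0,
               ("likeable", "intelligent"): 0.0,
               ("hostile", "intelligent"): 0.0}

def modulation_index(ratings):
    """ratings maps each trait to the listener ratings its recording evoked.
    Returns the Euclidean distance between the speaker's pairwise rating
    correlations and the theoretical maximally discriminative matrix;
    smaller values indicate more specific trait expression."""
    diffs = [np.corrcoef(ratings[a], ratings[b])[0, 1] - target
             for (a, b), target in THEORETICAL.items()]
    return float(np.sqrt(np.sum(np.square(diffs))))
```

A speaker whose likeable and hostile recordings evoke perfectly opposed ratings, with intelligence ratings uncorrelated with both, would score 0; a speaker whose three recordings evoke identical rating patterns would score √6 ≈ 2.45.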

Effects of social reactivity on performance in social vocal control.

To investigate the effect of social reactivity indices (cognitive and affective empathy, psychopathic and Machiavellian traits) on vocal modulation performance, we calculated a multiple regression model testing whether social reactivity indices predict social vocal control ability (the social voice modulation index). The model included the scaled social reactivity indices as regressors, and age and sex as covariates of no interest. Two participants exerted substantial influence on the model fit, reflected in multivariate outlier tests on the studentized residuals (uncorrected ps < .05), which did not, however, reach significance at the Bonferroni-corrected level (ps > .08). Because the test for a bias-inducing residual error distribution was significant (p < .001), we computed the OLS linear regression excluding these two data points (please note diverging results on the whole sample using OLS: [45]). For completeness, we also conducted a robust regression analysis on the whole sample (i.e., including these influential data points) using an M-estimator, implemented in the robust package in R (https://CRAN.R-project.org/package=robust). On the trait level, we calculated partial Pearson correlation coefficients pairwise for each spoken trait performance (∆-[trait]; e.g., ∆-intelligence) and each social reactivity index, controlling for age and sex. We corrected for multiple comparisons using the False Discovery Rate (FDR; [46]).
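The trait-level partial correlations can be illustrated via OLS residualization, which is numerically equivalent to a partial Pearson correlation. This is an illustrative Python sketch, not the original R code, and the example data are invented:

```python
import numpy as np

def partial_pearson(x, y, covariates):
    """Partial Pearson correlation between x and y, controlling for the
    given covariates (e.g., age and sex): regress both variables on the
    covariates (plus intercept) via OLS, then correlate the residuals."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    Z = np.column_stack([np.ones(len(x))] +
                        [np.asarray(c, dtype=float) for c in covariates])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])
```

The resulting per-comparison p-values would then be FDR-corrected (e.g., with `p.adjust(..., method = "fdr")` in R).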

Neural correlates of performance in social vocal control

Functional brain images were preprocessed and analysed as described in detail previously ([9]; see S1 Text). To determine brain regions associated with performance in social vocal control, we ran a random-effects multiple regression model on the group level for the contrast “Social Go > Rest” with the social voice modulation index (ED) as a covariate of interest.

Effect of social reactivity on task-based functional activation.

To explore whether differences in behavioural performance related to social reactivity were associated with differences in neural processing during social voice modulation, we computed a whole-brain multiple regression model with social reactivity indices as predictors of functional activation patterns during social voice modulation (Social Go > Rest). In addition, based on the behavioural association, we calculated a multiple regression model on the contrast “[trait] Go > Rest” with the social reactivity indices as covariates of interest. All models included a constant intercept, and age and sex as covariates of no interest.

For all imaging analyses, we used a significance threshold of p < .001 (uncorrected) at the voxel level and corrected for multiple comparisons at the cluster level at p < .05, using an individual cluster extent threshold determined for each contrast via a Monte Carlo simulation with 1000 iterations [47]. Resulting clusters were labelled based on the location of each peak activation using the built-in Neuromorphometrics atlas and the automated anatomical labeling (AAL) atlas in SPM12.
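The logic of a Monte Carlo cluster-extent correction can be shown with a toy Python simulation. This is a simplified sketch only: it thresholds pure white Gaussian noise, whereas the actual procedure [47] models the spatial smoothness of the fMRI residuals, so the extents below underestimate what a real analysis requires (volume shape, iteration count, and seed are arbitrary here):

```python
import numpy as np
from scipy import ndimage, stats

def cluster_extent_threshold(shape=(30, 30, 30), n_iter=200,
                             voxel_p=0.001, alpha=0.05, seed=0):
    """Estimate the minimum cluster size k such that, under the null,
    the chance of any suprathreshold cluster reaching k voxels stays
    below alpha across the whole simulated volume."""
    rng = np.random.default_rng(seed)
    z_crit = stats.norm.isf(voxel_p)  # one-sided voxelwise cutoff (~3.09)
    max_sizes = []
    for _ in range(n_iter):
        noise = rng.standard_normal(shape)   # null volume: white Gaussian noise
        mask = noise > z_crit                # voxelwise threshold
        labels, n = ndimage.label(mask)      # connected suprathreshold clusters
        if n == 0:
            max_sizes.append(0)
        else:
            sizes = ndimage.sum(mask, labels, range(1, n + 1))
            max_sizes.append(int(np.max(sizes)))
    # smallest extent exceeded by the largest null cluster in < alpha of runs
    return int(np.quantile(max_sizes, 1 - alpha)) + 1
```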

Results

Individual differences in social vocal control

Descriptive statistics of the behavioural measures can be found in Table 1. Performance in social vocal control (i.e., the social voice modulation index, ED from the theoretical RSM) ranged from 0.25 to 1.64 (M = 0.74, SD = 0.34), where smaller values denote higher specificity of vocally evoked trait ratings in naïve listeners (see S1 Fig). Cognitive empathy scores ranged from 39 to 76 (M = 57.42, SD = 8.32), affective empathy scores from 22 to 44 (M = 36.67, SD = 5.15), psychopathy trait scores from 10 to 25 (M = 16.92, SD = 3.86), and Machiavellian trait scores from 17 to 34 (M = 25.29, SD = 4.79).

Table 1. Descriptive statistics of social reactivity and performance in social vocal control and partial Pearson correlation coefficients, controlling for speaker sex and age.

https://doi.org/10.1371/journal.pone.0325207.t001

On the trait level, speakers evoked a mean increase of 1.30 points on the 7-point Likert-scale with their likeable voice modulations relative to their neutral voice (∆-Likability: M = 1.30, SD = 0.66, t(23)=5.64, p < .001), an increase of 2.07 for hostile voice (∆-Hostility: M = 2.07, SD = 1.41, t(23)=6.68, p < .001), and an increase of 0.53 for intelligent voice modulations (∆-Intelligence: M = 0.53, SD = 0.87, t(23)=1.87, p < .05; results of multivariate trait rating comparisons have been reported previously; [9]).

Effects of social reactivity on performance in social vocal control

The OLS multiple regression model showed that cognitive empathy significantly predicted performance (β = −.45, t = −3.13, p < .01), above all other social reactivity indices (R2adj = 0.38, F(6,15) = 3.17, p < .03; see Table 2). In the robust regression using an M-estimator, cognitive empathy (β = −.59, t = −4.12, p < .001), psychopathic traits (β = −.70, t = −2.92, p < .01), and Machiavellian traits (β = .56, t = 2.47, p < .05) predicted the social voice modulation index (ED; multiple R2 = 0.26, RSE = 0.47).

Table 2. Results from the OLS regression model of social reactivity indices predicting the social voice modulation index (ED).

https://doi.org/10.1371/journal.pone.0325207.t002

We next explored associations between social reactivity and performance in social vocal control on the trait level and found a significant positive association between ∆-Likeability and Machiavellianism (rp = .66, p < .02), but no other significant associations (all rs < .32, all ps > .08; see Table 1). To explore this result further, we employed an additional multiple regression model with ∆-Likeability as the outcome variable and all social reactivity indices as predictors, controlling for age and sex. The model explained 46% of the variance in ∆-Likeability (R2adj = .46, F(6,17) = 4.21, p < .01) and revealed that levels of Machiavellianism predicted likeable voice performance independently of the other social reactivity indices (β = .54, t(17) = 2.30, p < .05).

Neural correlates of performance in social vocal control

The whole-brain multiple regression model showed a significant association between functional activation and social voice modulation index (ED) in 9 clusters (uncorrected p < .001, k = 61) with peak voxels in left posterior temporo-parietal junction (TPJ), right middle frontal gyrus (MFG), bilateral supramarginal gyrus (SMG), left middle temporal gyrus (MTG), left somatosensory cortex in postcentral gyrus (S1), cuneus, and bilateral caudate (see Table 3 and Fig 3).

Table 3. Functional activations for the multiple regression whole-brain model of performance (social voice modulation index, ED) on Social Go > Rest.

All contrasts are negative correlations (i.e., greater activation with lower social voice modulation index); the positive direction showed no significant voxels.

https://doi.org/10.1371/journal.pone.0325207.t003

Fig 3. Correlates of social vocal control ability: activation maps.

The multiple regression model revealed a significant negative association between the social voice modulation index (ED) and functional activation in response to Social Go trials > Rest in left posterior TPJ, MTG and right MFG, as well as bilateral SMG. Associations are illustrated based on the peak-voxel parameter estimates in left posterior TPJ and SMG. MFG = middle frontal gyrus, MTG = middle temporal gyrus, TPJ = temporo-parietal junction, L = left, R = right.

https://doi.org/10.1371/journal.pone.0325207.g003

Effect of social reactivity on task-based functional activation

Lastly, we explored associations between functional activation during social voice modulation and social reactivity indices. The whole-brain multiple regression model showed a significant negative association between functional activation and cognitive empathy in the orbital part of right inferior frontal gyrus (IFG), and a positive association with functional activation in left supramarginal gyrus (SMG). Affective empathy was positively associated with activation changes in the posterior part of left temporo-parietal junction (TPJ) and right precuneus (PCu), while psychopathy was negatively associated with task-based activation in left temporal pole (TP; uncorrected p < .001, k = 61; see Table 4, Fig 4). Based on the behavioural association, we also conducted an exploratory regression analysis of Machiavellianism on functional activation during likeable Go trials, which showed a positive association in a cluster in left middle frontal gyrus (MFG) and a negative association in bilateral precuneus (uncorrected p < .001, k = 60; see S1 Table and S3 Fig).

Table 4. Functional activations for the multiple regression whole-brain model of social reactivity traits on Social Go > Rest.

https://doi.org/10.1371/journal.pone.0325207.t004

Fig 4. Activation maps: effect of social reactivity on functional activations during social voice modulations.

Negative associations between task-based functional activation and cognitive empathy (green) were found in right IFG, orbital part. A cluster in left TP was negatively associated with psychopathy (red). Affective empathy was positively associated with activation in left pTPJ and PCu. IFG = inferior frontal gyrus, PCu = precuneus, pTPJ = posterior temporo-parietal junction, L = left, R = right.

https://doi.org/10.1371/journal.pone.0325207.g004

Discussion

In this study, we set out to explore individual differences in the ability to volitionally express social information in the voice. We found that general performance in social vocal control was significantly associated with speakers’ self-reported cognitive empathy, whereas performance in expressing favourable social traits (sounding likeable) was positively associated with Machiavellian traits. On the neural level, individual differences in performance in social vocal control were associated with increased functional activation in areas associated with social processing and vocal motor control. Lastly, we found significant associations between social reactivity indices and functional processing during social voice changes that might reflect differential task strategies.

Association between social reactivity and social vocal control ability

Our behavioural data showed that a speaker’s level of cognitive empathy was significantly associated with general performance in social vocal control, suggesting that the capacity to reflect on others’ thoughts and feelings contributes to the ability to express trait-related social information vocally. Previous work on perceptual processing of socio-emotional vocal expression has shown that listeners’ social reactivity is an important contributor to successful nonverbal vocal communication, helping to understand the intended social meaning in spoken conversation [11–14]. We provide first evidence that speakers’ social reactivity might support targeted and fine-tuned vocal behaviour to express social information to others. Specifically, a speaker’s ability to mentalize about others’ thoughts and feelings supports the communication of nuanced social information through the voice. This finding adds to clinical observations that spontaneous vocal behaviour in clinical populations with deficits in social reactivity is often characteristically changed or monotonous [20,21] and can serve as an implicit source of information about a patient’s social functioning [48]. Nevertheless, it remains unclear which specific social cognitive processes support social voice modulation, and whether this also holds in naturalistic live interaction [49]. In live interactions, speakers have to flexibly adjust their vocal behaviour in reference to themselves and the listener; this encompasses perceptual processing, reactive vocal adjustment, and calibration to the interlocutor. Interestingly, previous work measuring linguistic style accommodation in live interactions found no significant effects of empathy on the likelihood of linguistic accommodation [23] – however, that study did not assess nonverbal vocal characteristics and did not distinguish cognitive from affective empathy. We suggest that future studies examine both spontaneous and volitional vocal behaviour in dyads to elucidate the relation between social reactivity (both cognitive and affective empathy), perceptual acuity, and vocal modulation.

Interestingly, the magnitude of the likeable trait percepts evoked by the voice modulations was positively associated with self-reported speaker Machiavellianism (not explained by likeability ratings of the neutral voice, see S2 Text), suggesting that people with a more Machiavellian social style might be particularly efficient in evoking favourable trait impressions on others. This adds to the findings that Machiavellianism is associated with increased vocal convergence to interlocutors, given that it is advantageous for the speaker [23]. One might speculate that speakers who describe their own social interactive style as more Machiavellistic might strategically make use of nonverbal vocal strategies to make favourable impressions on others. More generally, such a charming strategy has previously been associated specifically with people scoring high on Machiavellianism as compared to psychopathy or narcissism [50], and might be a strategy to achieve rapport in others [51]. Interestingly, people scoring high on psychopathic traits on the other hand, have been shown to be very successful in volitionally and convincingly expressing emotions facially to others [24]. Previous work suggests that people scoring high on Machiavellianism also tend to show higher levels of self-monitoring and -control [52,53], and a rational thinking style [54]. These attributes might support the volitional display of emotion in nonverbal behaviours (“posing”; reviewed by [55]), i.e., staying cool while acting out. Of course, more work is needed to disentangle these effects and underlying mechanisms. Exploratory analysis on the neural level showed that a speaker’s level of Machiavelliansim was positively associated with activation in premotor cortex (MFG) during likeable voice modulation, overlapping with an area associated with likeability performance (see S2 and S3 Figs). 
At the same time, cognitive and affective empathy levels were associated with activation in left pTPJ, medial prefrontal cortex and superior temporal sulcus during likeable voice modulation (S2 Table and S4 Fig), suggesting diverging task strategies (see below).

Neural networks supporting performance in social vocal control

A second aim of this study was to investigate functional activation associated with individual differences in social vocal control. We found that better performance (i.e., evoking more specific trait percepts in independent listeners) was associated with increased activation in left posterior TPJ, bilateral SMG, right premotor cortex, left middle temporal cortex, somatosensory cortex and cuneus, suggesting that both vocomotor-related and social processing-related brain regions support specific social trait expression through the voice. Beyond implicated regions of the VMN (caudate, SMG), sensorimotor cortex and premotor areas in MFG and SFG were positively associated with task performance. This is in line with work showing that activity in these areas correlates with performance indices during affective voice modulation [14] and with expertise in pitch control [33] and vocal tract manipulations [34], likely supporting affect-related voice modulation [56]. Performance differences might therefore reflect increased control of vocal operators [57,58] to achieve social voice changes.

SMG in particular supports both auditory and somatosensory feedback integration during speech [59,60], allowing adjustments and monitoring of vocal outputs by matching target and error maps [61]. Acoustic imitation of voices [62] or vocal matching [63] to an internal target engages SMG, particularly when vocal adjustments are voluntary compared to involuntary (e.g., during pitch changes: [64]). Here, we show that activation in SMG is positively associated with more effective social vocal control performance. Consistent with this, engagement of SMG has been shown to vary as a function of experience in singers achieving pitch changes [33]. We suggest that activation in SMG during social vocal control might effectively support the internal matching to a trait-related voice pattern for motor planning and monitoring. Interestingly, we also found a region in left SMG to be positively associated with individual differences in cognitive empathy during social voice modulations, suggesting a role of SMG beyond isolated audio-motor feedback integration. SMG activation, in fact, might support self-other distinction in complex social tasks [65], such as this one. The region in SMG we report here overlaps with regions associated with specific social cognitive tasks, such as trait judgements or false belief tasks [66]. This implies a dual function of SMG, integrating both social cognitive and vocal motor related processing.

We also identified more posterior areas of angular gyrus/intraparietal sulcus associated with performance in social vocal trait expression. This region corresponds to a more posterior part of the temporo-parietal junction (pTPJ). TPJ is a region reliably involved in domain-general cognitive and affective social processing [37,66,67], and left pTPJ is engaged during live conversation [68], volitional vocal impersonations [35], and intention inference from voices [69]. Adding to these findings, our data suggest that the engagement of left pTPJ is predictive of the efficacy with which social traits are expressed in the voice. In our task, most speakers reported imagining a situation in which they had used or would use their voice to express the corresponding traits. Left TPJ activation might reflect this process, serving to imagine interactions with specific persons in their lives [70] from a first-person perspective [71,72], through top-down control of endogenous episodic memory retrieval [73]. Other work has shown that left TPJ is critically involved in monitoring and differentiating self-produced from other-produced speech [74], suggesting a role of TPJ in social feedback monitoring. Indeed, left TPJ has been found to support regulating another person's emotions through vocalizations [75], and in line with this we found that a speaker's affective empathy was positively associated with activation in left TPJ. Although future studies are needed to determine the exact contribution of these regions to social vocal control, left pTPJ might work together with VMN areas to support encoding social information in the voice [35] by constructing an internal social context to guide vocal behaviour.

In summary, the network of regions reported here might reflect speakers' ability to monitor and encode social trait information during ongoing social voice production (pTPJ), through somatosensory and auditory feedback processing (SMG, S1) and vocal motor planning (premotor cortex). We thus offer initial evidence that social processing areas, in particular TPJ and SMG, support speakers' ability to control listeners' perceptions of their voice, and that processing in these regions is associated with speakers' trait empathy.

Differential functional activation associated with social reactivity

We found spatially separable processing regions differentially associated with social reactivity indices, implying that different processing strategies might be used to accomplish social voice changes depending on a speaker's level of these traits. Correspondingly, previous work has suggested that overlapping but separable neural regions might support affective and cognitive social processing [76–79]. The observed results largely overlap with a set of regions that showed increased functional connectivity during live conversation in autistic compared to neurotypical speakers, in the absence of performance differences [68]. Our findings add that this might also be the case for nonverbal social vocal behaviour in relation to empathy traits in neurotypical speakers. Given that behavioural performance was significantly related only to cognitive empathy levels, speakers might engage in varying (possibly compensatory) underlying strategies to express social information in the voice, depending on their levels of psychopathic, or empathic, processing.

Considerations and further studies

A limitation of this study is that our sample included a majority of female speakers and raters. Previous work has shown sex differences in some acoustic parameters associated with expressing social traits in the voice [8], as well as in specific modulation performance [10]. Although these might be more important in mating contexts, which often rely on exaggeration of sexually dimorphic physical characteristics [80,81], we controlled for effects of sex statistically in all reported analyses. Another limitation is the reliance on self-report measures. However, our instruments showed satisfactory internal consistencies and variation comparable to previous work in non-clinical populations [39,82–84], including specific correlations with informant ratings (peers/spouses) for Machiavellian and psychopathic traits [39].

Conclusion

Our findings suggest that the success with which speakers can communicate a variety of implicit information about themselves to others is related to their capacity to empathize with others. Further, we find that the efficacy of social vocal control relies on a network of regions supporting fine-tuned motor planning, auditory and somatosensory feedback control and socio-cognitive processing. Lastly, our data indicate differential engagement of these regions depending on a speaker’s dispositional social reactivity.

Supporting information

S1 Text. Acquisition of Imaging Data.

All functional brain images were recorded on a 3T Siemens TIM Trio scanner with a 32-channel head coil, using a rapid-sparse event-related 3D echo-planar imaging (EPI) sequence (32 axial slices, 25% slice gap, voxel size 3 × 3 × 3 mm, flip angle 78°, matrix 64 × 64, TE 30 ms, TR 3.5 s, TA 2 s). A 3D T1-weighted MP-RAGE scan was acquired for EPI image alignment and spatial normalization (voxel size 1 mm isotropic; flip angle 11°; TE 3.03 ms; TR 1830 ms; image matrix 256 × 256). Analysis was conducted in SPM12 (http://www.fil.ion.ucl.ac.uk/spm/). Preprocessing steps included spatial realignment, segmentation, co-registration, normalization (functional images were resampled to a voxel size of 2 × 2 × 2 mm) and smoothing (FWHM = 8 mm). First-level general linear models included the vocal control task conditions as regressors and subjective ratings as parametric modulators for each condition.
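The structure of such a first-level model (a condition regressor plus a mean-centred parametric modulator built from per-trial ratings) can be sketched with plain numpy. This is an illustration only, not the authors' SPM pipeline: the run length, trial onsets and ratings below are invented, and a canonical double-gamma HRF stands in for SPM's internal basis functions.

```python
import numpy as np
from math import gamma as gamma_fn

def hrf(t):
    # canonical double-gamma haemodynamic response function (SPM-style shape)
    t = np.asarray(t, float)
    peak = t ** 5 * np.exp(-t) / gamma_fn(6)
    undershoot = t ** 15 * np.exp(-t) / gamma_fn(16) / 6.0
    return peak - undershoot

TR = 3.5                    # repetition time from the sequence above (s)
n_scans = 120               # hypothetical run length
frame_times = np.arange(n_scans) * TR

# hypothetical "likeable" trial onsets (s) and per-trial subjective ratings
onsets = np.array([10.5, 45.5, 80.5, 115.5, 150.5])
duration = 2.0
ratings = np.array([3.0, 5.0, 2.0, 6.0, 4.0])

dt = 0.1                                     # fine grid for convolution
hires_t = np.arange(0.0, n_scans * TR, dt)

def make_regressor(amplitudes):
    # boxcar scaled per trial, convolved with the HRF, sampled at scan times
    stick = np.zeros_like(hires_t)
    for onset, amp in zip(onsets, amplitudes):
        stick[(hires_t >= onset) & (hires_t < onset + duration)] = amp
    conv = np.convolve(stick, hrf(np.arange(0.0, 32.0, dt)))[: len(hires_t)] * dt
    return conv[np.round(frame_times / dt).astype(int)]

condition = make_regressor(np.ones_like(ratings))     # task condition regressor
modulator = make_regressor(ratings - ratings.mean())  # mean-centred parametric modulator
X = np.column_stack([condition, modulator, np.ones(n_scans)])  # design matrix + intercept
```

Mean-centring the ratings before convolution keeps the modulator orthogonal in amplitude to the main condition effect, mirroring SPM's default treatment of parametric modulators.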

https://doi.org/10.1371/journal.pone.0325207.s001

(DOCX)

S2 Text. Correlations between naïve ratings of likeability in neutral voices and Machiavellian Traits.

To test for possible baseline effects of Machiavellianism on the perceived likeability of speakers' neutral voices, we conducted an additional post-hoc correlation analysis comparing mean likeability ratings of neutral voices with speakers' Machiavellianism scores. Partial Pearson correlation analysis, controlling for sex and age, revealed no significant association between Machiavellian traits and likeability ratings of neutral voices (rp = −.03, p = .89).
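A partial Pearson correlation of this kind can be computed by residualizing both variables on the covariates and correlating the residuals. The sketch below uses invented scores purely for illustration; it is not the study data or the authors' exact software.

```python
import numpy as np

def partial_corr(x, y, covars):
    """Pearson correlation between x and y after regressing out covars
    (equivalent to a partial correlation controlling for those variables)."""
    Z = np.column_stack([np.ones(len(x)), covars])   # covariates + intercept
    resid = lambda v: v - Z @ np.linalg.lstsq(Z, np.asarray(v, float), rcond=None)[0]
    rx, ry = resid(x), resid(y)
    return float(rx @ ry / np.sqrt((rx @ rx) * (ry @ ry)))

# hypothetical data: Machiavellianism scores, likeability ratings, sex, age
rng = np.random.default_rng(0)
n = 24
sex = rng.integers(0, 2, n).astype(float)
age = rng.normal(25, 4, n)
mach = rng.normal(3, 0.5, n)
neutral_likeability = rng.normal(4, 1, n)

rp = partial_corr(mach, neutral_likeability, np.column_stack([sex, age]))
```

Residualizing on an intercept plus covariates and correlating the residuals gives the same coefficient as the standard recursive partial-correlation formula.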

https://doi.org/10.1371/journal.pone.0325207.s002

(DOCX)

S1 Fig. Boxplot and descriptive statistics of the social voice modulation index (ED).

Higher values indicate worse performance in social vocal control, i.e., lower specificity of the trait percepts evoked in listeners. Exemplary RSMs for two speakers illustrate better and worse specificity of evoked trait percepts, reflected in the pairwise correlation coefficients in each cell. The minimum of the Euclidean distance (ED) measure is 0, reached when a speaker achieves the maximal differentiation between evoked trait ratings predicted by the theoretical matrix (the speaker's RSM and the theoretical RSM are identical), whereas 2.45 is the maximum distance.
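The ED measure can be illustrated as the Euclidean distance between the unique off-diagonal cells of a speaker's RSM and those of the theoretical RSM. Assuming a 4 × 4 matrix (six unique off-diagonal cells) and a maximum per-cell discrepancy of 1, the maximum distance is √6 ≈ 2.45, matching the bound stated above. The similarity values below are invented for illustration.

```python
import numpy as np

# hypothetical 4 x 4 RSM for one speaker: pairwise correlations between the
# trait ratings evoked by four voice conditions (all values are invented)
speaker_rsm = np.array([
    [1.0, 0.2, 0.4, 0.3],
    [0.2, 1.0, 0.1, 0.5],
    [0.4, 0.1, 1.0, 0.2],
    [0.3, 0.5, 0.2, 1.0],
])

# theoretical RSM under perfect trait specificity: each modulation evokes only
# its own trait, so off-diagonal similarity is 0 (an assumption for this sketch)
theoretical_rsm = np.eye(4)

# Euclidean distance over the 6 unique off-diagonal cells
iu = np.triu_indices(4, k=1)
ed = float(np.linalg.norm(speaker_rsm[iu] - theoretical_rsm[iu]))

# with each cell's discrepancy bounded by 1, the maximum distance is sqrt(6)
max_ed = float(np.sqrt(6))
```

A speaker whose evoked-rating correlations approach the theoretical pattern yields an ED near 0; one whose trait percepts blur together drifts towards the √6 bound.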

https://doi.org/10.1371/journal.pone.0325207.s003

(TIF)

S2 Fig. Activation maps and descriptive statistics for exploratory functional multiple regression analysis of ∆-Likeability during Likeable Go trials.

Activation maps are shown for likeability performance on Likeable Go > Rest, together with descriptive statistics (boxplot) of likeable voice performance (∆-Likeability). Performance in likeable voice modulation was positively associated with functional activation in a cluster in middle frontal gyrus; only positive associations survived thresholding. The contrast of likeable performance (∆-Likeability) on Likeable Go > Rest showed one cluster (k = 61) with a peak voxel in left middle frontal gyrus (premotor cortex; MNI coordinates x = −34, y = 10, z = 42, T = 5.48, Z = 4.23) at uncorrected p < .001 with a minimum cluster threshold of k = 60. L = left.

https://doi.org/10.1371/journal.pone.0325207.s004

(TIF)

S3 Fig. Activation Maps for self-report Machiavellianism and Likeability Performance on Likeable Go > Rest.

Machiavellianism was positively associated with functional activation in a cluster in middle frontal gyrus (red), overlapping with the left MFG cluster related to likeability performance (yellow). Machiavellianism was negatively associated with activation in bilateral precuneus (blue). (+/-) denotes positive or negative relationships, respectively. MACH = Machiavellianism, L = left.

https://doi.org/10.1371/journal.pone.0325207.s005

(TIF)

S4 Fig. Activation maps of social reactivity on functional activations while speaking in a likeable voice.

Negative associations between task-based functional activation and cognitive empathy (green) were found in right IFG, orbital part, right STS, left pHC and dmPFC. The cluster in orbital parts of right IFG was also negatively associated with psychopathy (red; overlap shown in yellow). Affective empathy was positively associated with activation in left pTPJ, PrCu, vmPFC. IFG = inferior frontal gyrus, dmPFC = dorsomedial prefrontal cortex, pHC = parahippocampal gyrus, PrCu = precuneus, pTPJ = posterior temporo-parietal junction, STS = superior temporal sulcus, vmPFC = ventromedial prefrontal cortex, L = left, R = right.

https://doi.org/10.1371/journal.pone.0325207.s006

(TIF)

S1 Table. Table showing exploratory functional activations for multiple regression model of Machiavellianism during Likeable Go trials.

The multiple regression model showed a negative association with Machiavellianism in bilateral precuneus (blue) and a positive association with activation in middle frontal gyrus during likeable voice modulations (red; uncorrected p < .001, k = 25). The cluster in MFG overlapped with a cluster associated with likeable voice performance (∆-Likeability; see S2 Fig).

https://doi.org/10.1371/journal.pone.0325207.s007

(DOCX)

S2 Table. Table showing effect of social reactivity on functional activations while speaking in a likeable voice.

We ran a multiple regression analysis on the contrast Likeable Go > Rest with the questionnaire indices as regressors, while controlling for gender and age. We found no significant clusters associated with Machiavellianism. However, given that this analysis was exploratory, we report associations with all social reactivity indices. Higher affective empathy was associated with increased activation during likeable voice production in the left posterior TPJ, ventral medial prefrontal cortex (mPFC) and precuneus (PrCu). Lower cognitive empathy was associated with increased activation in a cluster in the dorsal mPFC/ anterior cingulate cortex (ACC), right STS, left parahippocampal gyrus (pHC), right orbital inferior frontal gyrus (IFG) and PrCu. Higher psychopathic trait scores were associated with decreased activation in an overlapping region of the orbital IFG, and a region in the temporal pole (uncorrected p < .001, k = 60).

https://doi.org/10.1371/journal.pone.0325207.s008

(DOCX)

Acknowledgments

We thank Elise Kanber for her help with data acquisition.

References

1. McAleer P, Todorov A, Belin P. How do you say “hello”? Personality impressions from brief novel voices. PLoS One. 2014;9(3):e90779. pmid:24622283
2. Mahrholz G, Belin P, McAleer P. Judgements of a speaker’s personality are correlated across differing content and stimulus type. PLoS One. 2018;13(10):e0204991. pmid:30286148
3. Torre I, Goslin J, White L, Zanatto D. Trust in artificial voices. Proceedings of the Technology, Mind, and Society Conference (TechMindSociety ’18); 2018. p. 1–6.
4. O’Connor JJM, Barclay P. The influence of voice pitch on perceptions of trustworthiness across social contexts. Evol Hum Behav. 2017;38(4):506–12.
5. Schroeder J, Epley N. The sound of intellect: speech reveals a thoughtful mind, increasing a job candidate’s appeal. Psychol Sci. 2015;26(6):877–91. pmid:25926479
6. Pavela Banai I, Banai B, Bovan K. Vocal characteristics of presidential candidates can predict the outcome of actual elections. Evol Hum Behav. 2017;38(3):309–14.
7. Tigue CC, Borak DJ, O’Connor JJM, Schandl C, Feinberg DR. Voice pitch influences voting behavior. Evol Hum Behav. 2012;33(3):210–6.
8. Hughes SM, Mogilski JK, Harrison MA. The perception and parameters of intentional voice manipulation. J Nonverbal Behav. 2013;38(1):107–27.
9. Guldner S, Nees F, McGettigan C. Vocomotor and social brain networks work together to express social traits in voices. Cereb Cortex. 2020;30(11):6004–20. pmid:32577719
10. Belyk M, Waters S, Kanber E, Miquel ME, McGettigan C. Individual differences in vocal size exaggeration. Sci Rep. 2022;12(1):2611. pmid:35173178
11. Neves L, Cordeiro C, Scott SK, Castro SL, Lima CF. High emotional contagion and empathy are associated with enhanced detection of emotional authenticity in laughter. Q J Exp Psychol (Hove). 2018;71(11):2355–63. pmid:30362411
12. Jacob H, Kreifelts B, Nizielski S, Schütz A, Wildgruber D. Effects of emotional intelligence on the impression of irony created by the mismatch between verbal and nonverbal cues. PLoS One. 2016;11(10):e0163211. pmid:27716831
13. Amenta S, Noël X, Verbanck P, Campanella S. Decoding of emotional components in complex communicative situations (irony) and its relation to empathic abilities in male chronic alcoholics: an issue for treatment. Alcohol Clin Exp Res. 2013;37(2):339–47. pmid:23136931
14. Aziz-Zadeh L, Sheng T, Gheytanchi A. Common premotor regions for the perception and production of prosody and correlations with empathy and prosodic ability. PLoS One. 2010;5(1):e8759. pmid:20098696
15. Kitamura C, Burnham D. Pitch and communicative intent in mother’s speech: adjustments for age and sex in the first year. Infancy. 2003;4(1):85–110.
16. Burnham D, Kitamura C, Vollmer-Conna U. What’s new, pussycat? On talking to babies and animals. Science. 2002;296(5572):1435. pmid:12029126
17. Bird G, Viding E. The self to other model of empathy: providing a new framework for understanding empathy impairments in psychopathy, autism, and alexithymia. Neurosci Biobehav Rev. 2014;47:520–32. pmid:25454356
18. Blair RJR. Responding to the emotions of others: dissociating forms of empathy through the study of typical and psychiatric populations. Conscious Cogn. 2005;14(4):698–718. pmid:16157488
19. Fusaroli R, Lambrechts A, Bang D, Bowler DM, Gaigg SB. Is voice a marker for Autism spectrum disorder? A systematic review and meta-analysis. Autism Res. 2017;10(3):384–407. pmid:27501063
20. Shriberg LD, Paul R, McSweeny JL, Klin AM, Cohen DJ, Volkmar FR. Speech and prosody characteristics of adolescents and adults with high-functioning autism and Asperger syndrome. J Speech Lang Hear Res. 2001;44(5):1097–115. pmid:11708530
21. Louth SM, Williamson S, Alpert M, Pouget ER, Hare RD. Acoustic distinctions in the speech of male psychopaths. J Psycholinguist Res. 1998;27:375–84.
22. Wai M, Tiliopoulos N. The affective and cognitive empathic nature of the dark triad of personality. Pers Individ Differ. 2012;52(7):794–9.
23. Muir K, Joinson A, Cotterill R, Dewdney N. Characterizing the linguistic chameleon: personal and social correlates of linguistic style accommodation. Hum Commun Res. 2016;42(3):462–84.
24. Book A, Methot T, Gauthier N, Hosker-Field A, Forth A, Quinsey V, et al. The mask of sanity revisited: psychopathic traits and affective mimicry. Evol Psychol Sci. 2015;1(2):91–102.
25. Jones D, Paulhus D. Machiavellianism. Guilford Press; 2009. p. 93–108.
26. Fehr B, Samson D, Paulhus DL. The construct of Machiavellianism: twenty years later. Advances in personality assessment, Vol 9. Hillsdale (NJ): Lawrence Erlbaum Associates, Inc; 1992. p. 77–116.
27. Pisanski K, Cartei V, McGettigan C, Raine J, Reby D. Voice modulation: a window into the origins of human vocal control? Trends Cogn Sci. 2016;20(4):304–18. pmid:26857619
28. Simonyan K, Horwitz B. Laryngeal motor cortex and control of speech in humans. Neuroscientist. 2011;17(2):197–208. pmid:21362688
29. Reiterer SM, Hu X, Erb M, Rota G, Nardo D, Grodd W, et al. Individual differences in audio-vocal speech imitation aptitude in late bilinguals: functional neuro-imaging and brain morphology. Front Psychol. 2011;2:271. pmid:22059077
30. Reiterer SM, Hu X, Sumathi TA, Singh NC. Are you a good mimic? Neuro-acoustic signatures for speech imitation ability. Front Psychol. 2013;4:782. pmid:24155739
31. Simmonds AJ, Leech R, Iverson P, Wise RJS. The response of the anterior striatum during adult human vocal learning. J Neurophysiol. 2014;112(4):792–801. pmid:24805076
32. Golestani N, Pallier C. Anatomical correlates of foreign speech sound production. Cereb Cortex. 2006;17(4):929–34.
33. Kleber B, Veit R, Birbaumer N, Gruzelier J, Lotze M. The brain of opera singers: experience-dependent changes in functional activation. Cereb Cortex. 2010;20(5):1144–52. pmid:19692631
34. Waters S, Kanber E, Lavan N, Belyk M, Carey D, Cartei V, et al. Singers show enhanced performance and neural representation of vocal imitation. Philos Trans R Soc Lond B Biol Sci. 2021;376(1840):20200399. pmid:34719245
35. McGettigan C, Eisner F, Agnew ZK, Manly T, Wisbey D, Scott SK. T’ain’t what you say, it’s the way that you say it: left insula and inferior frontal cortex work in interaction with superior temporal regions to control the performance of vocal impersonations. J Cogn Neurosci. 2013;25(11):1875–86. pmid:23691984
36. Brown S, Cockett P, Yuan Y. The neuroscience of Romeo and Juliet: an fMRI study of acting. R Soc Open Sci. 2019;6(3):181908. pmid:31032043
37. Van Overwalle F. Social cognition and the brain: a meta-analysis. Hum Brain Mapp. 2008;30(3):829–58.
38. Reniers RLEP, Corcoran R, Drake R, Shryane NM, Völlm BA. The QCAE: a questionnaire of cognitive and affective empathy. J Pers Assess. 2011;93(1):84–95. pmid:21184334
39. Jones DN, Paulhus DL. Introducing the short Dark Triad (SD3): a brief measure of dark personality traits. Assessment. 2014;21(1):28–41. pmid:24322012
40. Frühholz S, Klaas HS, Patel S, Grandjean D. Talking in fury: the cortico-subcortical network underlying angry vocalizations. Cereb Cortex. 2015;25(9):2752–62. pmid:24735671
41. Kleiner M, Brainard D, Pelli D, Ingling A, Murray R, Broussard C. What’s new in Psychtoolbox-3? Perception. 2007;36:1–16.
42. Kriegeskorte N, Mur M, Bandettini P. Representational similarity analysis: connecting the branches of systems neuroscience. Front Syst Neurosci. 2008;2:4. pmid:19104670
43. Kuhn LK, Wydell T, Lavan N, McGettigan C, Garrido L. Similar representations of emotions across faces and voices. Emotion. 2017;17(6):912–37. pmid:28252978
44. Sauter DA, Eisner F, Calder AJ, Scott SK. Perceptual cues in nonverbal vocal expressions of emotion. Q J Exp Psychol (Hove). 2010;63(11):2251–72. pmid:20437296
45. Guldner S. Psychological and neurophysiological correlates of social vocal control; 2021. Available from: https://madoc.bib.uni-mannheim.de/59371//
46. Benjamini Y, Hochberg Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Stat Soc Ser B: Stat Method. 1995;57(1):289–300.
47. Slotnick SD, Moo LR, Segal JB, Hart J Jr. Distinct prefrontal cortex activity associated with item memory and source memory for visual shapes. Brain Res Cogn Brain Res. 2003;17(1):75–82. pmid:12763194
48. Bone D, Lee C-C, Black MP, Williams ME, Lee S, Levitt P, et al. The psychologist as an interlocutor in autism spectrum disorder assessment: insights from a study of spontaneous prosody. J Speech Lang Hear Res. 2014;57(4):1162–77. pmid:24686340
49. Redcay E, Schilbach L. Using second-person neuroscience to elucidate the mechanisms of social interaction. Nat Rev Neurosci. 2019;20(8):495–505. pmid:31138910
50. Jonason PK, Webster GD. A protean approach to social influence: Dark Triad personalities and social influence tactics. Pers Individ Differ. 2012;52(4):521–6.
51. Paulhus DL, Williams KM. The Dark Triad of personality: narcissism, machiavellianism, and psychopathy. J Res Pers. 2002;36(6):556–63.
52. Corral S, Calvete E. Machiavellianism: dimensionality of the Mach IV and its relation to self-monitoring in a Spanish sample. Span J Psychol. 2000;3(1):3–13. pmid:11761738
53. Nagler UKJ, Reiter KJ, Furtner MR, Rauthmann JF. Is there a “dark intelligence”? Emotional intelligence is used by dark personalities to emotionally manipulate others. Pers Individ Differ. 2014;65:47–52.
54. Bereczkei T, Birkas B. The insightful manipulator: machiavellians’ interpersonal tactics may be linked to their superior information processing skills. IJPS. 2014;6(4):65–70.
55. DePaulo BM. Nonverbal behavior and self-presentation. Psychol Bull. 1992;111:203–43.
56. Belyk M, Brown S. Pitch underlies activation of the vocal system during affective vocalization. Soc Cogn Affect Neurosci. 2016;11(7):1078–88. pmid:26078385
57. Belyk M, Brown S. Somatotopy of the extrinsic laryngeal muscles in the human sensorimotor cortex. Behav Brain Res. 2014;270:364–71. pmid:24886776
58. Pa J, Hickok G. A parietal-temporal sensory-motor integration area for the human vocal tract: evidence from an fMRI study of skilled musicians. Neuropsychologia. 2008;46(1):362–8. pmid:17709121
59. Oberhuber M, Hope TMH, Seghier ML, Parker Jones O, Prejawa S, Green DW, et al. Four functionally distinct regions in the left supramarginal gyrus support word processing. Cereb Cortex. 2016;26(11):4212–26. pmid:27600852
60. Golfinopoulos E, Tourville JA, Bohland JW, Ghosh SS, Nieto-Castanon A, Guenther FH. fMRI investigation of unexpected somatosensory feedback perturbation during speech. Neuroimage. 2011;55(3):1324–38. pmid:21195191
61. Tourville JA, Guenther FH. The DIVA model: a neural theory of speech acquisition and production. Lang Cogn Process. 2011;26(7):952–81. pmid:23667281
62. Garnier M, Lamalle L, Sato M. Neural correlates of phonetic convergence and speech imitation. Front Psychol. 2013;4:600. pmid:24062704
63. Peschke C, Ziegler W, Kappes J, Baumgaertner A. Auditory-motor integration during fast repetition: the neuronal correlates of shadowing. Neuroimage. 2009;47(1):392–402. pmid:19345269
64. Zarate JM, Wood S, Zatorre RJ. Neural networks involved in voluntary and involuntary vocal pitch regulation in experienced singers. Neuropsychologia. 2010;48(2):607–18. pmid:19896958
65. Preckel K, Kanske P, Singer T. On the interaction of social affect and cognition: empathy, compassion and theory of mind. Curr Opin Behav Sci. 2018;19:1–6.
66. Schurz M, Radua J, Aichhorn M, Richlan F, Perner J. Fractionating theory of mind: a meta-analysis of functional brain imaging studies. Neurosci Biobehav Rev. 2014;42:9–34. pmid:24486722
67. Bzdok D, Langner R, Schilbach L, Jakobs O, Roski C, Caspers S, et al. Characterization of the temporo-parietal junction by combining data-driven parcellation, complementary connectivity analyses, and functional decoding. Neuroimage. 2013;81:381–92. pmid:23689016
68. Jasmin K, Gotts SJ, Xu Y, Liu S, Riddell CD, Ingeholm JE, et al. Overt social interaction and resting state in young adult males with autism: core and contextual neural features. Brain. 2019;142(3):808–22. pmid:30698656
69. Hellbernd N, Sammler D. Neural bases of social communicative intentions in speech. Soc Cogn Affect Neurosci. 2018;13(6):604–15. pmid:29771359
70. Anderson AJ, McDermott K, Rooks B, Heffner KL, Dodell-Feder D, Lin FV. Decoding individual identity from brain activity elicited in imagining common experiences. Nat Commun. 2020;11(1):5916. pmid:33219210
71. Bonnici HM, Cheke LG, Green DAE, FitzGerald THMB, Simons JS. Specifying a causal role for angular gyrus in autobiographical memory. J Neurosci. 2018;38(49):10438–43. pmid:30355636
72. Moscovitch M, Cabeza R, Winocur G, Nadel L. Episodic memory and beyond: the hippocampus and neocortex in transformation. Annu Rev Psychol. 2016;67:105–34. pmid:26726963
73. Ciaramelli E, Grady C, Levine B, Ween J, Moscovitch M. Top-down and bottom-up attention to memory are dissociated in posterior parietal cortex: neuroimaging and neuropsychological evidence. J Neurosci. 2010;30(14):4943–56. pmid:20371815
74. Mondino M, Poulet E, Suaud-Chagny M-F, Brunelin J. Anodal tDCS targeting the left temporo-parietal junction disrupts verbal reality-monitoring. Neuropsychologia. 2016;89:478–84. pmid:27452837
75. Hallam GP, Webb TL, Sheeran P, Miles E, Niven K, Wilkinson ID, et al. The neural correlates of regulating another person’s emotions: an exploratory fMRI study. Front Hum Neurosci. 2014;8:376. pmid:24936178
76. Tholen MG, Trautwein F-M, Böckler A, Singer T, Kanske P. Functional magnetic resonance imaging (fMRI) item analysis of empathy and theory of mind. Hum Brain Mapp. 2020;41(10):2611–28. pmid:32115820
77. Kanske P, Böckler A, Trautwein F-M, Parianen Lesemann FH, Singer T. Are strong empathizers better mentalizers? Evidence for independence and interaction between the routes of social cognition. Soc Cogn Affect Neurosci. 2016;11(9):1383–92. pmid:27129794
78. Shamay-Tsoory SG, Aharon-Peretz J, Perry D. Two systems for empathy: a double dissociation between emotional and cognitive empathy in inferior frontal gyrus versus ventromedial prefrontal lesions. Brain. 2009;132(Pt 3):617–27. pmid:18971202
79. Völlm BA, Taylor ANW, Richardson P, Corcoran R, Stirling J, McKie S, et al. Neuronal correlates of theory of mind and empathy: a functional magnetic resonance imaging study in a nonverbal task. Neuroimage. 2006;29(1):90–8. pmid:16122944
80. Fraccaro PJ, O’Connor JJM, Re DE, Jones BC, DeBruine LM, Feinberg DR. Faking it: deliberately altered voice pitch and vocal attractiveness. Anim Behav. 2013;85(1):127–36.
81. Jones BC, Feinberg DR, DeBruine LM, Little AC, Vukovic J. A domain-specific opposite-sex bias in human preferences for manipulated voice pitch. Anim Behav. 2010;79(1):57–62.
82. Maples JL, Lamkin J, Miller JD. A test of two brief measures of the dark triad: the dirty dozen and short dark triad. Psychol Assess. 2014;26(1):326–31. pmid:24274044
83. Seara-Cardoso A, Neumann C, Roiser J, McCrory E, Viding E. Investigating associations between empathy, morality and psychopathic personality traits in the general population. Pers Individ Differ. 2012;52(1):67–71.
84. Persson BN, Kajonius PJ, Garcia D. Revisiting the structure of the short dark triad. Assessment. 2019;26(1):3–16. pmid:28382846